Kernel KVM virtualization development
From: Igor Mammedov <imammedo@redhat.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, babu.moger@amd.com, seanjc@google.com
Subject: [kvm-unit-tests PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
Date: Fri, 15 May 2026 13:19:57 +0200	[thread overview]
Message-ID: <20260515111957.4188366-1-imammedo@redhat.com> (raw)

AMD Family 17h models 0x00-0x0f (Naples/Zen+) and 0x30-0x3f (Rome/Zen2)
have a hardware bug where Virtual VMLOAD/VMSAVE causes spurious VMEXITs
on VMLOAD/VMSAVE even with intercepts disabled, when the VMCB physical
address as seen by L1 falls in the [0x78000000, 0x80000000) range.
Reserve this range on affected CPUs so the page allocator can never
allocate the VMCB there.

Given that these are relatively old CPUs, the issue only shows up in a
nested environment, and the impact is performance only, it's not worth
fixing on the KVM side (which could involve messing with the allocator,
or reallocating the VMCB until it falls outside the affected range).
And since the test acts as L1 here, the issue can't be fixed in the
kernel (L0) anyway. Hence a quirk here, to prevent test failures we
can't do anything about.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
CCed: AMD folks, for awareness and in case they are willing to look
into fixing the issue on the CPU side.

There are no official EOL dates for either part, but judging by the
release dates of the newer generation(s), both have effectively been
discontinued for ~3-5 years.
---
 x86/svm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/x86/svm.c b/x86/svm.c
index a85da905..706ff2ef 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -334,6 +334,21 @@ static void setup_npt(void)
 	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
 }
 
+#define VLS_BUG_START	0x78000000ULL
+#define VLS_BUG_END	0x80000000ULL
+
+static bool has_vls_bug(void)
+{
+	u32 sig = cpuid(1).a;
+	u32 fam = x86_family(sig);
+	u32 model = x86_model(sig);
+
+	if (fam != 0x17)
+		return false;
+
+	return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
@@ -413,6 +428,13 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 		return report_summary();
 	}
 
+	if (has_vls_bug()) {
+		phys_addr_t addr;
+
+		for (addr = VLS_BUG_START; addr < VLS_BUG_END; addr += PAGE_SIZE)
+			reserve_pages(addr, 1);
+	}
+
 	setup_svm();
 
 	vmcb = alloc_page();
-- 
2.43.0


Thread overview: 2+ messages
2026-05-15 11:19 Igor Mammedov [this message]
2026-05-15 12:44 ` [kvm-unit-tests PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome Sean Christopherson
