From: Igor Mammedov <imammedo@redhat.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, babu.moger@amd.com
Subject: Re: [PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
Date: Fri, 15 May 2026 13:11:27 +0200 [thread overview]
Message-ID: <20260515131127.0b13249f@imammedo> (raw)
In-Reply-To: <20260514113424.4136527-1-imammedo@redhat.com>
On Thu, 14 May 2026 13:34:24 +0200
Igor Mammedov <imammedo@redhat.com> wrote:
forgot to tag it properly, pls ignore. I'll repost.
> AMD Family 17h models 0x00-0x0f (Naples/Zen+) and 0x30-0x3f (Rome/Zen2)
> have a hardware bug where Virtual VMLOAD/VMSAVE causes spurious VMEXITs
> on VMLOAD/VMSAVE even with intercepts disabled, when the VMCB physical
> address as seen by L1 falls in [0x78000000, 0x80000000) range.
> Reserve this range on affected CPUs so the page allocator never
> allocates the VMCB there.
>
> Given that these are relatively old CPUs, the bug only affects nested
> environments, and the impact is performance only, it's not worth
> fixing on the KVM side (which could involve messing with the
> allocator or reallocating the VMCB until it's outside the affected
> range). Hence a quirk here, to prevent test failures we can't do
> anything about.
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> There are no official EOL dates for either part, but judging by the
> release dates of the following generation(s), both have effectively
> been discontinued for ~3-5 years.
> ---
> x86/svm.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/x86/svm.c b/x86/svm.c
> index a85da905..706ff2ef 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -334,6 +334,21 @@ static void setup_npt(void)
> __setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
> }
>
> +#define VLS_BUG_START 0x78000000ULL
> +#define VLS_BUG_END 0x80000000ULL
> +
> +static bool has_vls_bug(void)
> +{
> + u32 sig = cpuid(1).a;
> + u32 fam = x86_family(sig);
> + u32 model = x86_model(sig);
> +
> + if (fam != 0x17)
> + return false;
> +
> + return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
> +}
> +
> static void setup_svm(void)
> {
> void *hsave = alloc_page();
> @@ -413,6 +428,13 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
> return report_summary();
> }
>
> + if (has_vls_bug()) {
> + phys_addr_t addr;
> +
> + for (addr = VLS_BUG_START; addr < VLS_BUG_END; addr += PAGE_SIZE)
> + reserve_pages(addr, 1);
> + }
> +
> setup_svm();
>
> vmcb = alloc_page();