* [PATCH v4 1/4] x86: Reject CPU policies with vendors other than the host's
2026-03-11 14:27 [PATCH v4 0/4] x86: Drop cross-vendor support Alejandro Vallejo
@ 2026-03-11 14:27 ` Alejandro Vallejo
2026-03-11 14:52 ` Jan Beulich
2026-03-11 14:27 ` [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 14:27 UTC (permalink / raw)
To: xen-devel
Cc: Alejandro Vallejo, Oleksii Kurochko, Community Manager,
Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
While in principle it's possible to have a vendor virtualising another,
this is fairly tricky in practice and comes with the world's supply of
security issues.
Reject any CPU policy with vendors not matching the host's.
Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
---
v4:
* Adjusted CHANGELOG
---
CHANGELOG.md | 5 +++++
tools/tests/cpu-policy/test-cpu-policy.c | 27 ++++++++++++++++++++++++
xen/arch/x86/lib/cpu-policy/policy.c | 5 ++++-
3 files changed, 36 insertions(+), 1 deletion(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index c191e504aba..90ba5da69e4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -23,6 +23,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
- Xenoprofile support. Oprofile themselves removed support for Xen in 2014
prior to the version 1.0 release, and there has been no development since
before then in Xen.
+ - Domains can no longer run on a system with CPUs of a vendor different from
+ the one they were initially launched on. This affects live migrations and
+ save/restore workflows across mixed-vendor hosts. Cross-vendor emulation
+ has always been unreliable, but since 2017 with the advent of speculation
+ security it became unsustainably so.
- Removed xenpm tool on non-x86 platforms as it doesn't actually provide
anything useful outside of x86.
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 301df2c0028..88a9a26e8f1 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -586,6 +586,19 @@ static void test_is_compatible_success(void)
.platform_info.cpuid_faulting = true,
},
},
+ {
+ .name = "Host CPU vendor == Guest CPU vendor (both unknown)",
+ .host = {
+ .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+ .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+ .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+ },
+ .guest = {
+ .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+ .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+ .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+ },
+ },
};
struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
@@ -629,6 +642,20 @@ static void test_is_compatible_failure(void)
},
.e = { -1, -1, 0xce },
},
+ {
+ .name = "Host CPU vendor != Guest CPU vendor (both unknown)",
+ .host = {
+ .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+ .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+ .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+ },
+ .guest = {
+ .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 2,
+ .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+ .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+ },
+ .e = { 0, -1, -1 },
+ },
};
printf("Testing policy compatibility failure:\n");
diff --git a/xen/arch/x86/lib/cpu-policy/policy.c b/xen/arch/x86/lib/cpu-policy/policy.c
index f033d22785b..f991b1f3a96 100644
--- a/xen/arch/x86/lib/cpu-policy/policy.c
+++ b/xen/arch/x86/lib/cpu-policy/policy.c
@@ -15,7 +15,10 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
#define FAIL_MSR(m) \
do { e.msr = (m); goto out; } while ( 0 )
- if ( guest->basic.max_leaf > host->basic.max_leaf )
+ if ( (guest->basic.vendor_ebx != host->basic.vendor_ebx) ||
+ (guest->basic.vendor_ecx != host->basic.vendor_ecx) ||
+ (guest->basic.vendor_edx != host->basic.vendor_edx) ||
+ (guest->basic.max_leaf > host->basic.max_leaf) )
FAIL_CPUID(0, NA);
if ( guest->feat.max_subleaf > host->feat.max_subleaf )
--
2.43.0
* [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
2026-03-11 14:27 [PATCH v4 0/4] x86: Drop cross-vendor support Alejandro Vallejo
2026-03-11 14:27 ` [PATCH v4 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
@ 2026-03-11 14:27 ` Alejandro Vallejo
2026-03-11 14:59 ` Jan Beulich
2026-03-11 18:48 ` Andrew Cooper
2026-03-11 14:27 ` [PATCH v4 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
2026-03-11 14:27 ` [PATCH v4 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
3 siblings, 2 replies; 11+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 14:27 UTC (permalink / raw)
To: xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
Roger Pau Monné, Jason Andryuk
Remove cross-vendor support now that VMs can no longer have a different
vendor than the host.
While at it, refactor the function to exit early and skip initialising
the emulation context when FEP is not enabled.
No functional change intended.
Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
---
v4:
* Reverted refactor of the `walk` variable assignment
* Added ASSERT_UNREACHABLE() to the !hvm_fep path.
* Moved the `reinject` label to the UNIMPLEMENTED case in the emulator
result handler.
---
xen/arch/x86/hvm/hvm.c | 73 +++++++++++++++-----------------------
xen/arch/x86/hvm/svm/svm.c | 3 +-
xen/arch/x86/hvm/vmx/vmx.c | 3 +-
3 files changed, 30 insertions(+), 49 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4d37a93c57a..4280acfc074 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3832,67 +3832,50 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
return X86EMUL_OKAY;
}
-static bool cf_check is_cross_vendor(
- const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
-{
- switch ( ctxt->opcode )
- {
- case X86EMUL_OPC(0x0f, 0x05): /* syscall */
- case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
- case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
- return true;
- }
-
- return false;
-}
-
void hvm_ud_intercept(struct cpu_user_regs *regs)
{
struct vcpu *cur = current;
- bool should_emulate =
- cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
struct hvm_emulate_ctxt ctxt;
+ const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
+ uint32_t walk;
+ unsigned long addr;
+ char sig[5]; /* ud2; .ascii "xen" */
- hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
-
- if ( opt_hvm_fep )
+ if ( !opt_hvm_fep )
{
- const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
- uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
- ? PFEC_user_mode : 0) | PFEC_insn_fetch;
- unsigned long addr;
- char sig[5]; /* ud2; .ascii "xen" */
-
- if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
- sizeof(sig), hvm_access_insn_fetch,
- cs, &addr) &&
- (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
- walk, NULL) == HVMTRANS_okay) &&
- (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
- {
- regs->rip += sizeof(sig);
- regs->eflags &= ~X86_EFLAGS_RF;
-
- /* Zero the upper 32 bits of %rip if not in 64bit mode. */
- if ( !(hvm_long_mode_active(cur) && cs->l) )
- regs->rip = (uint32_t)regs->rip;
+ ASSERT_UNREACHABLE();
+ goto reinject;
+ }
- add_taint(TAINT_HVM_FEP);
+ hvm_emulate_init_once(&ctxt, NULL, regs);
- should_emulate = true;
- }
- }
+ walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
+ ? PFEC_user_mode : 0) | PFEC_insn_fetch;
- if ( !should_emulate )
+ if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
+ sizeof(sig), hvm_access_insn_fetch,
+ cs, &addr) &&
+ (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
+ walk, NULL) == HVMTRANS_okay) &&
+ (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
{
- hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
- return;
+ regs->rip += sizeof(sig);
+ regs->eflags &= ~X86_EFLAGS_RF;
+
+ /* Zero the upper 32 bits of %rip if not in 64bit mode. */
+ if ( !(hvm_long_mode_active(cur) && cs->l) )
+ regs->rip = (uint32_t)regs->rip;
+
+ add_taint(TAINT_HVM_FEP);
}
+ else
+ goto reinject;
switch ( hvm_emulate_one(&ctxt, VIO_no_completion) )
{
case X86EMUL_UNHANDLEABLE:
case X86EMUL_UNIMPLEMENTED:
+ reinject:
hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
break;
case X86EMUL_EXCEPTION:
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 243c41fb13a..20591c4a44f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -589,8 +589,7 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
const struct cpu_policy *cp = v->domain->arch.cpu_policy;
u32 bitmap = vmcb_get_exception_intercepts(vmcb);
- if ( opt_hvm_fep ||
- (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+ if ( opt_hvm_fep )
bitmap |= (1U << X86_EXC_UD);
else
bitmap &= ~(1U << X86_EXC_UD);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 82c55f49aea..eda99e268d1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -803,8 +803,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
const struct cpu_policy *cp = v->domain->arch.cpu_policy;
int rc = 0;
- if ( opt_hvm_fep ||
- (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+ if ( opt_hvm_fep )
v->arch.hvm.vmx.exception_bitmap |= (1U << X86_EXC_UD);
else
v->arch.hvm.vmx.exception_bitmap &= ~(1U << X86_EXC_UD);
--
2.43.0
* Re: [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
2026-03-11 14:27 ` [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
@ 2026-03-11 14:59 ` Jan Beulich
2026-03-11 18:01 ` Alejandro Vallejo
2026-03-11 18:48 ` Andrew Cooper
1 sibling, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2026-03-11 14:59 UTC (permalink / raw)
To: Alejandro Vallejo
Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel
On 11.03.2026 15:27, Alejandro Vallejo wrote:
> Remove cross-vendor support now that VMs can no longer have a different
> vendor than the host.
>
> While at it, refactor the function to exit early and skip initialising
> the emulation context when FEP is not enabled.
>
> No functional change intended.
>
> Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
> ---
> v4:
> * Reverted refactor of the `walk` variable assignment
"Revert" as in "move it even farther away from the original". As said, you
want re-indentation, so please do just that, nothing else that isn't
explicitly justified (like the moving of hvm_emulate_init_once() is). With
this put back in its original shape (can do while committing, I suppose):
Reviewed-by: Jan Beulich <jbeulich@suse.com>
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3832,67 +3832,50 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
> return X86EMUL_OKAY;
> }
>
> -static bool cf_check is_cross_vendor(
> - const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
> -{
> - switch ( ctxt->opcode )
> - {
> - case X86EMUL_OPC(0x0f, 0x05): /* syscall */
> - case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
> - case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
> - return true;
> - }
> -
> - return false;
> -}
> -
> void hvm_ud_intercept(struct cpu_user_regs *regs)
> {
> struct vcpu *cur = current;
> - bool should_emulate =
> - cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
> struct hvm_emulate_ctxt ctxt;
> + const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
> + uint32_t walk;
> + unsigned long addr;
> + char sig[5]; /* ud2; .ascii "xen" */
>
> - hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
> -
> - if ( opt_hvm_fep )
> + if ( !opt_hvm_fep )
> {
> - const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
> - uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
> - ? PFEC_user_mode : 0) | PFEC_insn_fetch;
> - unsigned long addr;
> - char sig[5]; /* ud2; .ascii "xen" */
> -
> - if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
> - sizeof(sig), hvm_access_insn_fetch,
> - cs, &addr) &&
> - (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
> - walk, NULL) == HVMTRANS_okay) &&
> - (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
> - {
> - regs->rip += sizeof(sig);
> - regs->eflags &= ~X86_EFLAGS_RF;
> -
> - /* Zero the upper 32 bits of %rip if not in 64bit mode. */
> - if ( !(hvm_long_mode_active(cur) && cs->l) )
> - regs->rip = (uint32_t)regs->rip;
> + ASSERT_UNREACHABLE();
> + goto reinject;
> + }
>
> - add_taint(TAINT_HVM_FEP);
> + hvm_emulate_init_once(&ctxt, NULL, regs);
>
> - should_emulate = true;
> - }
> - }
> + walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
> + ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>
> - if ( !should_emulate )
> + if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
> + sizeof(sig), hvm_access_insn_fetch,
> + cs, &addr) &&
> + (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
> + walk, NULL) == HVMTRANS_okay) &&
> + (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
> {
> - hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
> - return;
> + regs->rip += sizeof(sig);
> + regs->eflags &= ~X86_EFLAGS_RF;
> +
> + /* Zero the upper 32 bits of %rip if not in 64bit mode. */
> + if ( !(hvm_long_mode_active(cur) && cs->l) )
> + regs->rip = (uint32_t)regs->rip;
> +
> + add_taint(TAINT_HVM_FEP);
> }
> + else
> + goto reinject;
>
> switch ( hvm_emulate_one(&ctxt, VIO_no_completion) )
> {
> case X86EMUL_UNHANDLEABLE:
> case X86EMUL_UNIMPLEMENTED:
> + reinject:
I'm inclined to suggest to indent this the same as the case labels.
Jan
* Re: [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
2026-03-11 14:59 ` Jan Beulich
@ 2026-03-11 18:01 ` Alejandro Vallejo
2026-03-12 6:57 ` Jan Beulich
0 siblings, 1 reply; 11+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 18:01 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel
On Wed Mar 11, 2026 at 3:59 PM CET, Jan Beulich wrote:
> On 11.03.2026 15:27, Alejandro Vallejo wrote:
>> Remove cross-vendor support now that VMs can no longer have a different
>> vendor than the host.
>>
>> While at it, refactor the function to exit early and skip initialising
>> the emulation context when FEP is not enabled.
>>
>> No functional change intended.
>>
>> Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
>> ---
>> v4:
>> * Reverted refactor of the `walk` variable assignment
>
> "Revert" as in "move it even farther away from the original".
Revert as in not splitting the assignment and restoring the original syntax _of the
assignment_, which was the main focus of the prior discussion.
It's hardly my intention to add unrequested changes, but I can't address that
which isn't explicitly requested.
> As said, you want re-indentation,
This is an ambiguous piece of advice.
Of what? That can mean moving the prior logic back to its original location and
create a minimal diff (1) or simply collapsing the indentation of the block (2).
(1) can't be done with hvm context initialiser moving after the early exit,
which I explicitly mentioned in the commit message I wanted to do.
(2) can't happen because declarations and statements cannot be mixed (though I
really wish we dropped that rule).
There's a third option of keeping a silly { ... } around just for indentation
purposes, but that's worse than either of the other 2 options.
Maybe there's a fourth code arrangement in your head that does all this in a
way you find less intrusive and I just don't see it. If so, feel free to send
a patch I can review. It'll be faster for the both of us. Or tell me precisely
what's at fault here.
If it's the diff, I'll go for option (1) above. I don't care enough about it to
argue.
> so please do just that, nothing else that isn't
> explicitly justified (like the moving of hvm_emulate_init_once() is).
I'm not sure if you're fine with that motion because it's in the commit message
or not because it's a refactor that shouldn't be in the patch. This statement
can be read either way.
> With
> this put back in its original shape (can do while committing, I suppose):
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
I don't think it's very obvious what you mean to do on commit, so it wouldn't be
appropriate to agree to your adjustments, seeing how I just don't know what they
are. I'm happy to send a v4.5 on this particular patch with whatever else needs
modifying. Or a full v5 even. Or review whatever you wish to send as a v4.5 of
this patch.
Your pick.
>> + reinject:
>
> I'm inclined to suggest to indent this the same as the case labels.
I didn't notice the extra statement in CODING_STYLE for labels inside switches.
I tend to do that myself, but thought it wasn't in Xen's style. Sounds good
then.
Cheers,
Alejandro
* Re: [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
2026-03-11 18:01 ` Alejandro Vallejo
@ 2026-03-12 6:57 ` Jan Beulich
0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2026-03-12 6:57 UTC (permalink / raw)
To: Alejandro Vallejo
Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel
On 11.03.2026 19:01, Alejandro Vallejo wrote:
> On Wed Mar 11, 2026 at 3:59 PM CET, Jan Beulich wrote:
>> On 11.03.2026 15:27, Alejandro Vallejo wrote:
>>> Remove cross-vendor support now that VMs can no longer have a different
>>> vendor than the host.
>>>
>>> While at it, refactor the function to exit early and skip initialising
>>> the emulation context when FEP is not enabled.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
>>> ---
>>> v4:
>>> * Reverted refactor of the `walk` variable assignment
>>
>> "Revert" as in "move it even farther away from the original".
>
> Revert as in not split the assignment and restore the orignal syntax _of the
> assignment_, which was the main focus of the prior discussion.
>
> It's hardly my intention to add unrequested changes, but I can't address that
> which isn't explicitly requested.
>
>> As said, you want re-indentation,
>
> This is an ambiguous piece of advice.
>
> Of what? That can mean moving the prior logic back to its original location and
> crate a minimal diff (1) or simply collapsing the indentation of the block (2).
>
> (1) can't be done with hvm context initialiser moving after the early exit,
> which I explicitly mentioned in the commit message I wanted to do.
>
> (2) can't happen because declarations and statements cannot be mixed (though I
> really wish we dropped that rule).
>
> There's a third option of keeping a silly { ... } around just for indentation
> purposes, but that's worse than either of the other 2 options.
>
> Maybe there's a fourth code arrangement in your head that does all this in a
> way you find less intrusive and I just don't see it. If so, feel free to send
> a patch I can review. It'll be faster for the both of us. Or tell me precisely
> what's at fault here.
>
> If it's the diff, I'll go for option (1) above. I don't care enough about it to
> argue.
>
>> so please do just that, nothing else that isn't
>> explicitly justified (like the moving of hvm_emulate_init_once() is).
>
> I'm not sure if you're fine with that motion because it's in the commit message
> or not because it's a refactor that shouldn't be in the patch. This statement
> can be read either way.
You justify that movement in the description, and I agree with that justification.
>> With
>> this put back in its original shape (can do while committing, I suppose):
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> I don't think it's very obvious what you mean to do on commit, so it wouldn't be
> appropriate to agree to your adjustments, seeing how I just don't know what they
> are. I'm happy to send a v4.5 on this particular patch with whatever else needs
> modifying. Or a full v5 even. Or review whatever you wish to send as a v4.5 of
> this patch.
The variable had an initializer, and mere re-indentation wants to keep it so.
(There's no question that declarations may need to move, for the result to still
compile.)
Jan
* Re: [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
2026-03-11 14:27 ` [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
2026-03-11 14:59 ` Jan Beulich
@ 2026-03-11 18:48 ` Andrew Cooper
1 sibling, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2026-03-11 18:48 UTC (permalink / raw)
To: Alejandro Vallejo, xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Jason Andryuk
On 11/03/2026 2:27 pm, Alejandro Vallejo wrote:
> Remove cross-vendor support now that VMs can no longer have a different
> vendor than the host.
>
> While at it, refactor the function to exit early and skip initialising
> the emulation context when FEP is not enabled.
These two things are at odds. Two patches please.
The first which strips out is_cross_vendor() and initialises
should_emulate to false, to be this patch in conjunction with the
changes for the UD intercept.
Then a subsequent patch to rearrange hvm_ud_intercept() to DCE some more
in the !FEP case, which is no-functional-change.
In fact, I've got half a mind to suggest 3 patches, with the middle
patch being a strict un-indent of the current "if ( FEP )" clause.
I think that will make a surprisingly legible patch 3.
The result will be much more coherent for future archaeologists to follow.
~Andrew
* [PATCH v4 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers.
2026-03-11 14:27 [PATCH v4 0/4] x86: Drop cross-vendor support Alejandro Vallejo
2026-03-11 14:27 ` [PATCH v4 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
2026-03-11 14:27 ` [PATCH v4 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
@ 2026-03-11 14:27 ` Alejandro Vallejo
2026-03-11 14:27 ` [PATCH v4 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
3 siblings, 0 replies; 11+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 14:27 UTC (permalink / raw)
To: xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
Roger Pau Monné, Teddy Astie
Not a functional change now that cross-vendor guests are not launchable.
Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
Reviewed-by: Teddy Astie <teddy.astie@vates.tech>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
xen/arch/x86/msr.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 6a97be59d52..d10891dcfc8 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -169,9 +169,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
break;
case MSR_IA32_PLATFORM_ID:
- if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
- !(boot_cpu_data.vendor & X86_VENDOR_INTEL) )
+ if ( boot_cpu_data.vendor != X86_VENDOR_INTEL )
goto gp_fault;
+
rdmsrl(MSR_IA32_PLATFORM_ID, *val);
break;
@@ -189,9 +189,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
* from Xen's last microcode load, which can be forwarded straight to
* the guest.
*/
- if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
- !(boot_cpu_data.vendor &
- (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
+ if ( !(boot_cpu_data.vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
rdmsr_safe(MSR_AMD_PATCHLEVEL, val) )
goto gp_fault;
break;
--
2.43.0
* [PATCH v4 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
2026-03-11 14:27 [PATCH v4 0/4] x86: Drop cross-vendor support Alejandro Vallejo
` (2 preceding siblings ...)
2026-03-11 14:27 ` [PATCH v4 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
@ 2026-03-11 14:27 ` Alejandro Vallejo
2026-03-11 18:54 ` Andrew Cooper
3 siblings, 1 reply; 11+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 14:27 UTC (permalink / raw)
To: xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
Roger Pau Monné, Jason Andryuk, Teddy Astie
With cross-vendor support gone, it's no longer needed.
AMD CPUs ignore the top 32 bits of the SYSENTER/SYSEXIT MSRs, which is
not how this emulation worked due to the need for cross-vendor support.
Any AMD VMs storing state in the top 32bits of the SEP MSRs will lose
it.
It's very unlikely to affect any production VM because having 64bit width
just isn't how real AMD CPUs behave.
Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
Reviewed-by: Teddy Astie <teddy.astie@vates.tech>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v4:
* Sorted assignments to the vmcb struct by dst address.
---
xen/arch/x86/hvm/svm/svm.c | 42 +++++++++++-------------
xen/arch/x86/hvm/svm/vmcb.c | 3 ++
xen/arch/x86/include/asm/hvm/svm-types.h | 10 ------
3 files changed, 22 insertions(+), 33 deletions(-)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 20591c4a44f..076d57e4847 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -401,10 +401,6 @@ static int svm_vmcb_save(struct vcpu *v, struct hvm_hw_cpu *c)
{
struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
- c->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs;
- c->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp;
- c->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip;
-
if ( vmcb->event_inj.v &&
hvm_event_needs_reinjection(vmcb->event_inj.type,
vmcb->event_inj.vector) )
@@ -468,11 +464,6 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
svm_update_guest_cr(v, 0, 0);
svm_update_guest_cr(v, 4, 0);
- /* Load sysenter MSRs into both VMCB save area and VCPU fields. */
- vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = c->sysenter_cs;
- vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = c->sysenter_esp;
- vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = c->sysenter_eip;
-
if ( paging_mode_hap(v->domain) )
{
vmcb_set_np(vmcb, true);
@@ -501,6 +492,9 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
{
struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
+ data->sysenter_cs = vmcb->sysenter_cs;
+ data->sysenter_esp = vmcb->sysenter_esp;
+ data->sysenter_eip = vmcb->sysenter_eip;
data->shadow_gs = vmcb->kerngsbase;
data->msr_lstar = vmcb->lstar;
data->msr_star = vmcb->star;
@@ -512,11 +506,14 @@ static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
{
struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
- vmcb->kerngsbase = data->shadow_gs;
- vmcb->lstar = data->msr_lstar;
- vmcb->star = data->msr_star;
- vmcb->cstar = data->msr_cstar;
- vmcb->sfmask = data->msr_syscall_mask;
+ vmcb->lstar = data->msr_lstar;
+ vmcb->star = data->msr_star;
+ vmcb->cstar = data->msr_cstar;
+ vmcb->sfmask = data->msr_syscall_mask;
+ vmcb->kerngsbase = data->shadow_gs;
+ vmcb->sysenter_cs = data->sysenter_cs;
+ vmcb->sysenter_esp = data->sysenter_esp;
+ vmcb->sysenter_eip = data->sysenter_eip;
v->arch.hvm.guest_efer = data->msr_efer;
svm_update_guest_efer(v);
}
@@ -1734,12 +1731,9 @@ static int cf_check svm_msr_read_intercept(
switch ( msr )
{
- /*
- * Sync not needed while the cross-vendor logic is in unilateral effect.
case MSR_IA32_SYSENTER_CS:
case MSR_IA32_SYSENTER_ESP:
case MSR_IA32_SYSENTER_EIP:
- */
case MSR_STAR:
case MSR_LSTAR:
case MSR_CSTAR:
@@ -1754,13 +1748,15 @@ static int cf_check svm_msr_read_intercept(
switch ( msr )
{
case MSR_IA32_SYSENTER_CS:
- *msr_content = v->arch.hvm.svm.guest_sysenter_cs;
+ *msr_content = vmcb->sysenter_cs;
break;
+
case MSR_IA32_SYSENTER_ESP:
- *msr_content = v->arch.hvm.svm.guest_sysenter_esp;
+ *msr_content = vmcb->sysenter_esp;
break;
+
case MSR_IA32_SYSENTER_EIP:
- *msr_content = v->arch.hvm.svm.guest_sysenter_eip;
+ *msr_content = vmcb->sysenter_eip;
break;
case MSR_STAR:
@@ -1954,11 +1950,11 @@ static int cf_check svm_msr_write_intercept(
switch ( msr )
{
case MSR_IA32_SYSENTER_ESP:
- vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = msr_content;
+ vmcb->sysenter_esp = msr_content;
break;
case MSR_IA32_SYSENTER_EIP:
- vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = msr_content;
+ vmcb->sysenter_eip = msr_content;
break;
case MSR_LSTAR:
@@ -1984,7 +1980,7 @@ static int cf_check svm_msr_write_intercept(
break;
case MSR_IA32_SYSENTER_CS:
- vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = msr_content;
+ vmcb->sysenter_cs = msr_content;
break;
case MSR_STAR:
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index e583ef8548c..76fcaf15c2b 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -97,6 +97,9 @@ static int construct_vmcb(struct vcpu *v)
svm_disable_intercept_for_msr(v, MSR_LSTAR);
svm_disable_intercept_for_msr(v, MSR_STAR);
svm_disable_intercept_for_msr(v, MSR_SYSCALL_MASK);
+ svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_CS);
+ svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP);
+ svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_ESP);
vmcb->_msrpm_base_pa = virt_to_maddr(svm->msrpm);
vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm.io_bitmap);
diff --git a/xen/arch/x86/include/asm/hvm/svm-types.h b/xen/arch/x86/include/asm/hvm/svm-types.h
index 051b235d8f6..aaee91b4b61 100644
--- a/xen/arch/x86/include/asm/hvm/svm-types.h
+++ b/xen/arch/x86/include/asm/hvm/svm-types.h
@@ -27,16 +27,6 @@ struct svm_vcpu {
/* VMCB has a cached instruction from #PF/#NPF Decode Assist? */
uint8_t cached_insn_len; /* Zero if no cached instruction. */
-
- /*
- * Upper four bytes are undefined in the VMCB, therefore we can't use the
- * fields in the VMCB. Write a 64bit value and then read a 64bit value is
- * fine unless there's a VMRUN/VMEXIT in between which clears the upper
- * four bytes.
- */
- uint64_t guest_sysenter_cs;
- uint64_t guest_sysenter_esp;
- uint64_t guest_sysenter_eip;
};
struct nestedsvm {
--
2.43.0