* [PATCH v3 0/4] x86: Drop cross-vendor support
@ 2026-02-13 11:42 Alejandro Vallejo
  2026-02-13 11:42 ` [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Alejandro Vallejo @ 2026-02-13 11:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Alejandro Vallejo, Oleksii Kurochko, Community Manager,
	Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD,
	Jason Andryuk

Hi,

v1: https://lore.kernel.org/xen-devel/20260122164943.20691-1-alejandro.garciavallejo@amd.com/
v2: https://lore.kernel.org/xen-devel/20260205170923.38425-1-alejandro.garciavallejo@amd.com/
pipeline (green): https://gitlab.com/xen-project/people/agvallejo/xen/-/pipelines/2324131649

This time the policy check uses e{b,c,d}x rather than x86_vendor, and there
are two unit tests for it.

Cheers,
Alejandro

Alejandro Vallejo (4):
  x86: Reject CPU policies with vendors other than the host's
  x86/hvm: Disable cross-vendor handling in #UD handler
  x86/hvm: Remove cross-vendor checks from MSR handlers.
  x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems

 CHANGELOG.md                             |  5 ++
 tools/tests/cpu-policy/test-cpu-policy.c | 27 +++++++++
 xen/arch/x86/hvm/hvm.c                   | 77 +++++++++---------------
 xen/arch/x86/hvm/svm/svm.c               | 45 ++++++--------
 xen/arch/x86/hvm/svm/vmcb.c              |  3 +
 xen/arch/x86/hvm/vmx/vmx.c               |  3 +-
 xen/arch/x86/include/asm/hvm/svm-types.h | 10 ---
 xen/arch/x86/msr.c                       |  8 +--
 xen/lib/x86/policy.c                     |  5 +-
 9 files changed, 93 insertions(+), 90 deletions(-)


base-commit: 1f4f85b64d393be1aa8dc8170201f4fbfe9c7222
-- 
2.43.0



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's
  2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
@ 2026-02-13 11:42 ` Alejandro Vallejo
  2026-03-11  8:26   ` Jan Beulich
  2026-02-13 11:42 ` [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-02-13 11:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Alejandro Vallejo, Oleksii Kurochko, Community Manager,
	Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD

While in principle it's possible for one vendor's CPU to virtualise
another's, this is fairly tricky in practice and comes with the world's
supply of security issues.

Reject any CPU policy with vendors not matching the host's.

Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
---
v3:
  * Check vendor_e{b,c,d}x rather than x86_vendor.
  * Added unit tests for pass/fail cases.
    * They cover a success and a failure on a comparison of an unknown
      vendor.
---
 CHANGELOG.md                             |  5 +++++
 tools/tests/cpu-policy/test-cpu-policy.c | 27 ++++++++++++++++++++++++
 xen/lib/x86/policy.c                     |  5 ++++-
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 18f3d10f20..426c0bce67 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,6 +22,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Xenoprofile support.  Oprofile themselves removed support for Xen in 2014
      prior to the version 1.0 release, and there has been no development since
      before then in Xen.
+   - Domains can no longer run on a CPU vendor if they were initially launched
+     on a different CPU vendor. This affects live migrations and save/restore
+     workflows accross mixed-vendor hosts. Cross-vendor emulation has always
+     been unreliable, but since 2017 with the advent of speculation security it
+     became unsustainably so.
 
  - Removed xenpm tool on non-x86 platforms as it doesn't actually provide
    anything useful outside of x86.
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 301df2c002..88a9a26e8f 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -586,6 +586,19 @@ static void test_is_compatible_success(void)
                 .platform_info.cpuid_faulting = true,
             },
         },
+        {
+            .name = "Host CPU vendor == Guest CPU vendor (both unknown)",
+            .host = {
+                .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+                .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+                .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+            },
+            .guest = {
+                .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+                .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+                .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+            },
+        },
     };
     struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
 
@@ -629,6 +642,20 @@ static void test_is_compatible_failure(void)
             },
             .e = { -1, -1, 0xce },
         },
+        {
+            .name = "Host CPU vendor != Guest CPU vendor (both unknown)",
+            .host = {
+                .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 1,
+                .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+                .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+            },
+            .guest = {
+                .basic.vendor_ebx = X86_VENDOR_AMD_EBX + 2,
+                .basic.vendor_ecx = X86_VENDOR_AMD_ECX,
+                .basic.vendor_edx = X86_VENDOR_AMD_EDX,
+            },
+            .e = { 0, -1, -1 },
+        },
     };
 
     printf("Testing policy compatibility failure:\n");
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f033d22785..f991b1f3a9 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -15,7 +15,10 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
 #define FAIL_MSR(m) \
     do { e.msr = (m); goto out; } while ( 0 )
 
-    if ( guest->basic.max_leaf > host->basic.max_leaf )
+    if ( (guest->basic.vendor_ebx != host->basic.vendor_ebx) ||
+         (guest->basic.vendor_ecx != host->basic.vendor_ecx) ||
+         (guest->basic.vendor_edx != host->basic.vendor_edx) ||
+         (guest->basic.max_leaf   >  host->basic.max_leaf) )
         FAIL_CPUID(0, NA);
 
     if ( guest->feat.max_subleaf > host->feat.max_subleaf )
-- 
2.43.0




* [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
  2026-02-13 11:42 ` [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
@ 2026-02-13 11:42 ` Alejandro Vallejo
  2026-03-11  8:35   ` Jan Beulich
  2026-02-13 11:42 ` [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-02-13 11:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
	Roger Pau Monné, Jason Andryuk

Remove cross-vendor support now that VMs can no longer have a different
vendor than the host.

While at it, refactor the function to exit early and skip initialising
the emulation context when FEP is not enabled.

No functional change intended.

Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
---
v3:
 * No changes

v2:
  * Fix bug introduced in v1: Don't emulate instructions when they
    shouldn't be emulated.
  * Refactor the function so it's simpler.
---
 xen/arch/x86/hvm/hvm.c     | 77 +++++++++++++++-----------------------
 xen/arch/x86/hvm/svm/svm.c |  3 +-
 xen/arch/x86/hvm/vmx/vmx.c |  3 +-
 3 files changed, 32 insertions(+), 51 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4d37a93c57..8708af9425 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
     return X86EMUL_OKAY;
 }
 
-static bool cf_check is_cross_vendor(
-    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
-{
-    switch ( ctxt->opcode )
-    {
-    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
-    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
-    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
-        return true;
-    }
-
-    return false;
-}
-
 void hvm_ud_intercept(struct cpu_user_regs *regs)
 {
     struct vcpu *cur = current;
-    bool should_emulate =
-        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
     struct hvm_emulate_ctxt ctxt;
+    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
+    uint32_t walk = PFEC_insn_fetch;
+    unsigned long addr;
+    char sig[5]; /* ud2; .ascii "xen" */
 
-    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
+    if ( !opt_hvm_fep )
+        goto reinject;
 
-    if ( opt_hvm_fep )
-    {
-        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
-        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
-                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
-        unsigned long addr;
-        char sig[5]; /* ud2; .ascii "xen" */
-
-        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
-                                        sizeof(sig), hvm_access_insn_fetch,
-                                        cs, &addr) &&
-             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
-                                         walk, NULL) == HVMTRANS_okay) &&
-             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
-        {
-            regs->rip += sizeof(sig);
-            regs->eflags &= ~X86_EFLAGS_RF;
+    hvm_emulate_init_once(&ctxt, NULL, regs);
 
-            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
-            if ( !(hvm_long_mode_active(cur) && cs->l) )
-                regs->rip = (uint32_t)regs->rip;
+    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
+        walk |= PFEC_user_mode;
 
-            add_taint(TAINT_HVM_FEP);
+    if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
+                                    sizeof(sig), hvm_access_insn_fetch,
+                                    cs, &addr) &&
+         (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
+                                     walk, NULL) == HVMTRANS_okay) &&
+         (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
+    {
+        regs->rip += sizeof(sig);
+        regs->eflags &= ~X86_EFLAGS_RF;
 
-            should_emulate = true;
-        }
-    }
+        /* Zero the upper 32 bits of %rip if not in 64bit mode. */
+        if ( !(hvm_long_mode_active(cur) && cs->l) )
+            regs->rip = (uint32_t)regs->rip;
 
-    if ( !should_emulate )
-    {
-        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
-        return;
+        add_taint(TAINT_HVM_FEP);
     }
+    else
+        goto reinject;
 
     switch ( hvm_emulate_one(&ctxt, VIO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
-        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
-        break;
+        goto reinject;
     case X86EMUL_EXCEPTION:
         hvm_inject_event(&ctxt.ctxt.event);
         /* fall through */
@@ -3902,6 +3880,11 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
         hvm_emulate_writeback(&ctxt);
         break;
     }
+
+    return;
+
+ reinject:
+    hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
 }
 
 enum hvm_intblk hvm_interrupt_blocked(struct vcpu *v, struct hvm_intack intack)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 18ba837738..10d1bf350c 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -589,8 +589,7 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
 
-    if ( opt_hvm_fep ||
-         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+    if ( opt_hvm_fep )
         bitmap |= (1U << X86_EXC_UD);
     else
         bitmap &= ~(1U << X86_EXC_UD);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 82c55f49ae..eda99e268d 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -803,8 +803,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     int rc = 0;
 
-    if ( opt_hvm_fep ||
-         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+    if ( opt_hvm_fep )
         v->arch.hvm.vmx.exception_bitmap |= (1U << X86_EXC_UD);
     else
         v->arch.hvm.vmx.exception_bitmap &= ~(1U << X86_EXC_UD);
-- 
2.43.0




* [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers.
  2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
  2026-02-13 11:42 ` [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
  2026-02-13 11:42 ` [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
@ 2026-02-13 11:42 ` Alejandro Vallejo
  2026-03-11  8:39   ` Jan Beulich
  2026-02-13 11:42 ` [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
  2026-03-11  8:54 ` [PATCH v3 0/4] x86: Drop cross-vendor support Jan Beulich
  4 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-02-13 11:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
	Roger Pau Monné, Teddy Astie

Not a functional change now that cross-vendor guests are not launchable.

Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
Reviewed-by: Teddy Astie <teddy.astie@vates.tech>
---
v3:
 * No changes

v2:
  * Use boot_cpu_data rather than policy vendor.
---
 xen/arch/x86/msr.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index ad75a2e108..d10891dcfc 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -169,9 +169,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         break;
 
     case MSR_IA32_PLATFORM_ID:
-        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
-             !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
+        if ( boot_cpu_data.vendor != X86_VENDOR_INTEL )
             goto gp_fault;
+
         rdmsrl(MSR_IA32_PLATFORM_ID, *val);
         break;
 
@@ -189,9 +189,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
          * from Xen's last microcode load, which can be forwarded straight to
          * the guest.
          */
-        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
-             !(boot_cpu_data.x86_vendor &
-               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
+        if ( !(boot_cpu_data.vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
              rdmsr_safe(MSR_AMD_PATCHLEVEL, val) )
             goto gp_fault;
         break;
-- 
2.43.0




* [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
  2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
                   ` (2 preceding siblings ...)
  2026-02-13 11:42 ` [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
@ 2026-02-13 11:42 ` Alejandro Vallejo
  2026-03-11  8:46   ` Jan Beulich
  2026-03-11  8:54 ` [PATCH v3 0/4] x86: Drop cross-vendor support Jan Beulich
  4 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-02-13 11:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper,
	Roger Pau Monné, Jason Andryuk, Teddy Astie

With cross-vendor support gone, it's no longer needed.

AMD CPUs ignore the top 32 bits of the SYSENTER/SYSEXIT MSRs, which is
not how this emulation worked due to the need for cross-vendor support.
Any AMD VMs storing state in the top 32bits of the SEP MSRs will lose
it.

This is very unlikely to affect any production VM, because real AMD CPUs
simply don't implement a 64-bit width for these MSRs.

Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
Reviewed-by: Teddy Astie <teddy.astie@vates.tech>
---
v3:
  * No changes

v2:
  * Reworded commit message to mention the loss of the top dword for
    any VM stupid enough to use it.
---
 xen/arch/x86/hvm/svm/svm.c               | 42 +++++++++++-------------
 xen/arch/x86/hvm/svm/vmcb.c              |  3 ++
 xen/arch/x86/include/asm/hvm/svm-types.h | 10 ------
 3 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 10d1bf350c..329c4446e9 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -401,10 +401,6 @@ static int svm_vmcb_save(struct vcpu *v, struct hvm_hw_cpu *c)
 {
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
-    c->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs;
-    c->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp;
-    c->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip;
-
     if ( vmcb->event_inj.v &&
          hvm_event_needs_reinjection(vmcb->event_inj.type,
                                      vmcb->event_inj.vector) )
@@ -468,11 +464,6 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     svm_update_guest_cr(v, 0, 0);
     svm_update_guest_cr(v, 4, 0);
 
-    /* Load sysenter MSRs into both VMCB save area and VCPU fields. */
-    vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = c->sysenter_cs;
-    vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = c->sysenter_esp;
-    vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = c->sysenter_eip;
-
     if ( paging_mode_hap(v->domain) )
     {
         vmcb_set_np(vmcb, true);
@@ -501,6 +492,9 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
+    data->sysenter_cs      = vmcb->sysenter_cs;
+    data->sysenter_esp     = vmcb->sysenter_esp;
+    data->sysenter_eip     = vmcb->sysenter_eip;
     data->shadow_gs        = vmcb->kerngsbase;
     data->msr_lstar        = vmcb->lstar;
     data->msr_star         = vmcb->star;
@@ -512,11 +506,14 @@ static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
-    vmcb->kerngsbase = data->shadow_gs;
-    vmcb->lstar      = data->msr_lstar;
-    vmcb->star       = data->msr_star;
-    vmcb->cstar      = data->msr_cstar;
-    vmcb->sfmask     = data->msr_syscall_mask;
+    vmcb->sysenter_cs  = data->sysenter_cs;
+    vmcb->sysenter_esp = data->sysenter_esp;
+    vmcb->sysenter_eip = data->sysenter_eip;
+    vmcb->kerngsbase   = data->shadow_gs;
+    vmcb->lstar        = data->msr_lstar;
+    vmcb->star         = data->msr_star;
+    vmcb->cstar        = data->msr_cstar;
+    vmcb->sfmask       = data->msr_syscall_mask;
     v->arch.hvm.guest_efer = data->msr_efer;
     svm_update_guest_efer(v);
 }
@@ -1720,12 +1717,9 @@ static int cf_check svm_msr_read_intercept(
 
     switch ( msr )
     {
-        /*
-         * Sync not needed while the cross-vendor logic is in unilateral effect.
     case MSR_IA32_SYSENTER_CS:
     case MSR_IA32_SYSENTER_ESP:
     case MSR_IA32_SYSENTER_EIP:
-         */
     case MSR_STAR:
     case MSR_LSTAR:
     case MSR_CSTAR:
@@ -1740,13 +1734,15 @@ static int cf_check svm_msr_read_intercept(
     switch ( msr )
     {
     case MSR_IA32_SYSENTER_CS:
-        *msr_content = v->arch.hvm.svm.guest_sysenter_cs;
+        *msr_content = vmcb->sysenter_cs;
         break;
+
     case MSR_IA32_SYSENTER_ESP:
-        *msr_content = v->arch.hvm.svm.guest_sysenter_esp;
+        *msr_content = vmcb->sysenter_esp;
         break;
+
     case MSR_IA32_SYSENTER_EIP:
-        *msr_content = v->arch.hvm.svm.guest_sysenter_eip;
+        *msr_content = vmcb->sysenter_eip;
         break;
 
     case MSR_STAR:
@@ -1940,11 +1936,11 @@ static int cf_check svm_msr_write_intercept(
         switch ( msr )
         {
         case MSR_IA32_SYSENTER_ESP:
-            vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = msr_content;
+            vmcb->sysenter_esp = msr_content;
             break;
 
         case MSR_IA32_SYSENTER_EIP:
-            vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = msr_content;
+            vmcb->sysenter_eip = msr_content;
             break;
 
         case MSR_LSTAR:
@@ -1970,7 +1966,7 @@ static int cf_check svm_msr_write_intercept(
         break;
 
     case MSR_IA32_SYSENTER_CS:
-        vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = msr_content;
+        vmcb->sysenter_cs = msr_content;
         break;
 
     case MSR_STAR:
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index e583ef8548..76fcaf15c2 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -97,6 +97,9 @@ static int construct_vmcb(struct vcpu *v)
     svm_disable_intercept_for_msr(v, MSR_LSTAR);
     svm_disable_intercept_for_msr(v, MSR_STAR);
     svm_disable_intercept_for_msr(v, MSR_SYSCALL_MASK);
+    svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_CS);
+    svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP);
+    svm_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_ESP);
 
     vmcb->_msrpm_base_pa = virt_to_maddr(svm->msrpm);
     vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm.io_bitmap);
diff --git a/xen/arch/x86/include/asm/hvm/svm-types.h b/xen/arch/x86/include/asm/hvm/svm-types.h
index 051b235d8f..aaee91b4b6 100644
--- a/xen/arch/x86/include/asm/hvm/svm-types.h
+++ b/xen/arch/x86/include/asm/hvm/svm-types.h
@@ -27,16 +27,6 @@ struct svm_vcpu {
 
     /* VMCB has a cached instruction from #PF/#NPF Decode Assist? */
     uint8_t cached_insn_len; /* Zero if no cached instruction. */
-
-    /*
-     * Upper four bytes are undefined in the VMCB, therefore we can't use the
-     * fields in the VMCB. Write a 64bit value and then read a 64bit value is
-     * fine unless there's a VMRUN/VMEXIT in between which clears the upper
-     * four bytes.
-     */
-    uint64_t guest_sysenter_cs;
-    uint64_t guest_sysenter_esp;
-    uint64_t guest_sysenter_eip;
 };
 
 struct nestedsvm {
-- 
2.43.0




* Re: [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's
  2026-02-13 11:42 ` [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
@ 2026-03-11  8:26   ` Jan Beulich
  2026-03-11 12:43     ` Alejandro Vallejo
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  8:26 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Oleksii Kurochko, Community Manager, Andrew Cooper,
	Roger Pau Monné, Anthony PERARD, xen-devel

On 13.02.2026 12:42, Alejandro Vallejo wrote:
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -22,6 +22,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     - Xenoprofile support.  Oprofile themselves removed support for Xen in 2014
>       prior to the version 1.0 release, and there has been no development since
>       before then in Xen.
> +   - Domains can no longer run on a CPU vendor if they were initially launched
> +     on a different CPU vendor. This affects live migrations and save/restore
> +     workflows accross mixed-vendor hosts. Cross-vendor emulation has always
> +     been unreliable, but since 2017 with the advent of speculation security it
> +     became unsustainably so.

While the code adjustment looks okay to me, the wording is a little odd. What is
"run on a CPU vendor"? How about "Domains can no longer run on a system with CPUs
of a vendor different from the one they were initially launched on"?

Also, nit: "across".

Jan



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-02-13 11:42 ` [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
@ 2026-03-11  8:35   ` Jan Beulich
  2026-03-11  9:25     ` Alejandro Vallejo
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  8:35 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On 13.02.2026 12:42, Alejandro Vallejo wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>      return X86EMUL_OKAY;
>  }
>  
> -static bool cf_check is_cross_vendor(
> -    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
> -{
> -    switch ( ctxt->opcode )
> -    {
> -    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
> -    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
> -    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
> -        return true;
> -    }
> -
> -    return false;
> -}
> -
>  void hvm_ud_intercept(struct cpu_user_regs *regs)
>  {
>      struct vcpu *cur = current;
> -    bool should_emulate =
> -        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
>      struct hvm_emulate_ctxt ctxt;
> +    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
> +    uint32_t walk = PFEC_insn_fetch;
> +    unsigned long addr;
> +    char sig[5]; /* ud2; .ascii "xen" */
>  
> -    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
> +    if ( !opt_hvm_fep )
> +        goto reinject;

Is this possible at all, i.e. shouldn't there be ASSERT_UNREACHABLE() in
addition if already the check is kept?

> -    if ( opt_hvm_fep )
> -    {
> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;

Why is this initializer not retained?

> -        unsigned long addr;
> -        char sig[5]; /* ud2; .ascii "xen" */
> -
> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
> -                                        sizeof(sig), hvm_access_insn_fetch,
> -                                        cs, &addr) &&
> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
> -                                         walk, NULL) == HVMTRANS_okay) &&
> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
> -        {
> -            regs->rip += sizeof(sig);
> -            regs->eflags &= ~X86_EFLAGS_RF;
> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>  
> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
> -                regs->rip = (uint32_t)regs->rip;
> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
> +        walk |= PFEC_user_mode;
>  
> -            add_taint(TAINT_HVM_FEP);
> +    if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
> +                                    sizeof(sig), hvm_access_insn_fetch,
> +                                    cs, &addr) &&
> +         (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
> +                                     walk, NULL) == HVMTRANS_okay) &&
> +         (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
> +    {
> +        regs->rip += sizeof(sig);
> +        regs->eflags &= ~X86_EFLAGS_RF;
>  
> -            should_emulate = true;
> -        }
> -    }
> +        /* Zero the upper 32 bits of %rip if not in 64bit mode. */
> +        if ( !(hvm_long_mode_active(cur) && cs->l) )
> +            regs->rip = (uint32_t)regs->rip;
>  
> -    if ( !should_emulate )
> -    {
> -        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
> -        return;
> +        add_taint(TAINT_HVM_FEP);
>      }
> +    else
> +        goto reinject;
>  
>      switch ( hvm_emulate_one(&ctxt, VIO_no_completion) )
>      {
>      case X86EMUL_UNHANDLEABLE:
>      case X86EMUL_UNIMPLEMENTED:
> -        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
> -        break;
> +        goto reinject;

How about placing the reinject label here, along with the two case one?

Jan

>      case X86EMUL_EXCEPTION:
>          hvm_inject_event(&ctxt.ctxt.event);
>          /* fall through */
> @@ -3902,6 +3880,11 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
>          hvm_emulate_writeback(&ctxt);
>          break;
>      }
> +
> +    return;
> +
> + reinject:
> +    hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
>  }
>  
>  enum hvm_intblk hvm_interrupt_blocked(struct vcpu *v, struct hvm_intack intack)



* Re: [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers.
  2026-02-13 11:42 ` [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
@ 2026-03-11  8:39   ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  8:39 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Andrew Cooper, Roger Pau Monné, Teddy Astie, xen-devel

On 13.02.2026 12:42, Alejandro Vallejo wrote:
> Not a functional change now that cross-vendor guests are not launchable.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@amd.com>
> Reviewed-by: Teddy Astie <teddy.astie@vates.tech>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
  2026-02-13 11:42 ` [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
@ 2026-03-11  8:46   ` Jan Beulich
  2026-03-11  9:27     ` Alejandro Vallejo
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  8:46 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, Teddy Astie,
	xen-devel

On 13.02.2026 12:42, Alejandro Vallejo wrote:
> @@ -501,6 +492,9 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
>  {
>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>  
> +    data->sysenter_cs      = vmcb->sysenter_cs;
> +    data->sysenter_esp     = vmcb->sysenter_esp;
> +    data->sysenter_eip     = vmcb->sysenter_eip;
>      data->shadow_gs        = vmcb->kerngsbase;
>      data->msr_lstar        = vmcb->lstar;
>      data->msr_star         = vmcb->star;

May I suggest to do writes by increasing address order? I.e. while this
already looks fine, ...

> @@ -512,11 +506,14 @@ static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
>  {
>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>  
> -    vmcb->kerngsbase = data->shadow_gs;
> -    vmcb->lstar      = data->msr_lstar;
> -    vmcb->star       = data->msr_star;
> -    vmcb->cstar      = data->msr_cstar;
> -    vmcb->sfmask     = data->msr_syscall_mask;
> +    vmcb->sysenter_cs  = data->sysenter_cs;
> +    vmcb->sysenter_esp = data->sysenter_esp;
> +    vmcb->sysenter_eip = data->sysenter_eip;
> +    vmcb->kerngsbase   = data->shadow_gs;
> +    vmcb->lstar        = data->msr_lstar;
> +    vmcb->star         = data->msr_star;
> +    vmcb->cstar        = data->msr_cstar;
> +    vmcb->sfmask       = data->msr_syscall_mask;
>      v->arch.hvm.guest_efer = data->msr_efer;
>      svm_update_guest_efer(v);
>  }

... your additions would want to move down here (and the other writes may
then want re-ordering as well). Preferably with that:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 0/4] x86: Drop cross-vendor support
  2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
                   ` (3 preceding siblings ...)
  2026-02-13 11:42 ` [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
@ 2026-03-11  8:54 ` Jan Beulich
  2026-03-11  9:46   ` Alejandro Vallejo
  4 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  8:54 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Oleksii Kurochko, Community Manager, Andrew Cooper,
	Roger Pau Monné, Anthony PERARD, Jason Andryuk, xen-devel

On 13.02.2026 12:42, Alejandro Vallejo wrote:
> Alejandro Vallejo (4):
>   x86: Reject CPU policies with vendors other than the host's
>   x86/hvm: Disable cross-vendor handling in #UD handler
>   x86/hvm: Remove cross-vendor checks from MSR handlers.
>   x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems

With this, do we actually want to keep emulation of SYS{ENTER,EXIT,CALL,RET}
in the insn emulator? Or at least gate that on e.g. VM_EVENT, to still allow
its use by introspection? Whether to then also permit those with HVM_FEP=y
(but VM_EVENT=n) would be a follow-on question.

Jan



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-03-11  8:35   ` Jan Beulich
@ 2026-03-11  9:25     ` Alejandro Vallejo
  2026-03-11  9:30       ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11  9:25 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>>      return X86EMUL_OKAY;
>>  }
>>  
>> -static bool cf_check is_cross_vendor(
>> -    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
>> -{
>> -    switch ( ctxt->opcode )
>> -    {
>> -    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
>> -    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
>> -    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
>> -        return true;
>> -    }
>> -
>> -    return false;
>> -}
>> -
>>  void hvm_ud_intercept(struct cpu_user_regs *regs)
>>  {
>>      struct vcpu *cur = current;
>> -    bool should_emulate =
>> -        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
>>      struct hvm_emulate_ctxt ctxt;
>> +    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>> +    uint32_t walk = PFEC_insn_fetch;
>> +    unsigned long addr;
>> +    char sig[5]; /* ud2; .ascii "xen" */
>>  
>> -    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
>> +    if ( !opt_hvm_fep )
>> +        goto reinject;
>
> Is this possible at all, i.e. shouldn't there be ASSERT_UNREACHABLE() in
> addition if already the check is kept?

It isn't.

v2 used to BUG_ON() at VMEXIT when !HVM_FEP and compile out this handler
altogether, but Andrew was unhappy with it because he uses it occasionally and
it'd be more of a PITA to undo the removal or force a HVM_FEP-enabled hypervisor
for the #UD handler to be present at all.

I have no strong views on the ASSERT. It's not expected to happen, but I don't
expect the existing conditions to change either, and if they do that will warrant
a change in the handler too. 

If you want it I can add it, but if we're not killing the handler in release I
don't think it's very helpful to assert/bug_on.

>
>> -    if ( opt_hvm_fep )
>> -    {
>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>
> Why is this initializer not retained?

It is, it's just that the diff is terrible. An unfortunate side effect of the
removal of the braces. The scope collapsing forces it on top of the function,
before the emulation context is initialised.

It's set up in steps. walk is unconditionally initialised as insn_fetch, and
later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3. See...

>
>> -        unsigned long addr;
>> -        char sig[5]; /* ud2; .ascii "xen" */
>> -
>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>> -                                        sizeof(sig), hvm_access_insn_fetch,
>> -                                        cs, &addr) &&
>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>> -                                         walk, NULL) == HVMTRANS_okay) &&
>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>> -        {
>> -            regs->rip += sizeof(sig);
>> -            regs->eflags &= ~X86_EFLAGS_RF;
>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>  
>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>> -                regs->rip = (uint32_t)regs->rip;
>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>> +        walk |= PFEC_user_mode;

... here.

>>  
>> -            add_taint(TAINT_HVM_FEP);
>> +    if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>> +                                    sizeof(sig), hvm_access_insn_fetch,
>> +                                    cs, &addr) &&
>> +         (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>> +                                     walk, NULL) == HVMTRANS_okay) &&
>> +         (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>> +    {
>> +        regs->rip += sizeof(sig);
>> +        regs->eflags &= ~X86_EFLAGS_RF;
>>  
>> -            should_emulate = true;
>> -        }
>> -    }
>> +        /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>> +        if ( !(hvm_long_mode_active(cur) && cs->l) )
>> +            regs->rip = (uint32_t)regs->rip;
>>  
>> -    if ( !should_emulate )
>> -    {
>> -        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
>> -        return;
>> +        add_taint(TAINT_HVM_FEP);
>>      }
>> +    else
>> +        goto reinject;
>>  
>>      switch ( hvm_emulate_one(&ctxt, VIO_no_completion) )
>>      {
>>      case X86EMUL_UNHANDLEABLE:
>>      case X86EMUL_UNIMPLEMENTED:
>> -        hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
>> -        break;
>> +        goto reinject;
>
> How about placing the reinject label here, along with the two case one?
>
> Jan

Sure. That works too.

Cheers,
Alejandro



* Re: [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
  2026-03-11  8:46   ` Jan Beulich
@ 2026-03-11  9:27     ` Alejandro Vallejo
  0 siblings, 0 replies; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11  9:27 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, Teddy Astie,
	xen-devel

On Wed Mar 11, 2026 at 9:46 AM CET, Jan Beulich wrote:
> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>> @@ -501,6 +492,9 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
>>  {
>>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>>  
>> +    data->sysenter_cs      = vmcb->sysenter_cs;
>> +    data->sysenter_esp     = vmcb->sysenter_esp;
>> +    data->sysenter_eip     = vmcb->sysenter_eip;
>>      data->shadow_gs        = vmcb->kerngsbase;
>>      data->msr_lstar        = vmcb->lstar;
>>      data->msr_star         = vmcb->star;
>
> May I suggest to do writes by increasing address order? I.e. while this
> already looks fine, ...
>
>> @@ -512,11 +506,14 @@ static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
>>  {
>>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>>  
>> -    vmcb->kerngsbase = data->shadow_gs;
>> -    vmcb->lstar      = data->msr_lstar;
>> -    vmcb->star       = data->msr_star;
>> -    vmcb->cstar      = data->msr_cstar;
>> -    vmcb->sfmask     = data->msr_syscall_mask;
>> +    vmcb->sysenter_cs  = data->sysenter_cs;
>> +    vmcb->sysenter_esp = data->sysenter_esp;
>> +    vmcb->sysenter_eip = data->sysenter_eip;
>> +    vmcb->kerngsbase   = data->shadow_gs;
>> +    vmcb->lstar        = data->msr_lstar;
>> +    vmcb->star         = data->msr_star;
>> +    vmcb->cstar        = data->msr_cstar;
>> +    vmcb->sfmask       = data->msr_syscall_mask;
>>      v->arch.hvm.guest_efer = data->msr_efer;
>>      svm_update_guest_efer(v);
>>  }
>
> ... your additions would want to move down here (and the other writes may
> then want re-ordering as well). Preferably with that:

Sure.

> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Alejandro



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-03-11  9:25     ` Alejandro Vallejo
@ 2026-03-11  9:30       ` Jan Beulich
  2026-03-11 10:21         ` Alejandro Vallejo
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11  9:30 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On 11.03.2026 10:25, Alejandro Vallejo wrote:
> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>>>      return X86EMUL_OKAY;
>>>  }
>>>  
>>> -static bool cf_check is_cross_vendor(
>>> -    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
>>> -{
>>> -    switch ( ctxt->opcode )
>>> -    {
>>> -    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
>>> -    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
>>> -    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
>>> -        return true;
>>> -    }
>>> -
>>> -    return false;
>>> -}
>>> -
>>>  void hvm_ud_intercept(struct cpu_user_regs *regs)
>>>  {
>>>      struct vcpu *cur = current;
>>> -    bool should_emulate =
>>> -        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
>>>      struct hvm_emulate_ctxt ctxt;
>>> +    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>> +    uint32_t walk = PFEC_insn_fetch;
>>> +    unsigned long addr;
>>> +    char sig[5]; /* ud2; .ascii "xen" */
>>>  
>>> -    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
>>> +    if ( !opt_hvm_fep )
>>> +        goto reinject;
>>
>> Is this possible at all, i.e. shouldn't there be ASSERT_UNREACHABLE() in
>> addition if already the check is kept?
> 
> It isn't.
> 
> v2 used to BUG_ON() at VMEXIT when !HVM_FEP and compile out this handler
> altogether, but Andrew was unhappy with it because he uses it occasionally and
> it'd be more of a PITA to undo the removal or force a HVM_FEP-enabled hypervisor
> for the #UD handler to be present at all.
> 
> I have no strong views on the ASSERT. It's not expected to happen, but I don't
> expect the existing conditions to change either, and if they do that will warrant
> a change in the handler too. 
> 
> If you want it I can add it, but if we're not killing the handler in release I
> don't think it's very helpful to assert/bug_on.

I see two options: Drop the if() or add ASSERT_UNREACHABLE() to its body.

>>> -    if ( opt_hvm_fep )
>>> -    {
>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>
>> Why is this initializer not retained?
> 
> It is, it's just that the diff is terrible. An unfortunate side effect of the
> removal of the braces. The scope collapsing forces it on top of the function,
> before the emulation context is initialised.
> 
> It's set up in steps. walk is unconditionally initialised as insn_fetch, and
> later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3. See...
> 
>>
>>> -        unsigned long addr;
>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>> -
>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>> -                                        cs, &addr) &&
>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>> -        {
>>> -            regs->rip += sizeof(sig);
>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>  
>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>> -                regs->rip = (uint32_t)regs->rip;
>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>> +        walk |= PFEC_user_mode;
> 
> ... here.

But that's the point of my question: Why did you split it? All you mean to
do is re-indentation.

Jan



* Re: [PATCH v3 0/4] x86: Drop cross-vendor support
  2026-03-11  8:54 ` [PATCH v3 0/4] x86: Drop cross-vendor support Jan Beulich
@ 2026-03-11  9:46   ` Alejandro Vallejo
  2026-03-11 10:11     ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11  9:46 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksii Kurochko, Community Manager, Andrew Cooper,
	Roger Pau Monné, Anthony PERARD, Jason Andryuk, xen-devel

On Wed Mar 11, 2026 at 9:54 AM CET, Jan Beulich wrote:
> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>> Alejandro Vallejo (4):
>>   x86: Reject CPU policies with vendors other than the host's
>>   x86/hvm: Disable cross-vendor handling in #UD handler
>>   x86/hvm: Remove cross-vendor checks from MSR handlers.
>>   x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
>
> With this, do we actually want to keep emulation of SYS{ENTER,EXIT,CALL,RET}
> in the insn emulator? Or at least gate that on e.g. VM_EVENT, to still allow
> its use by introspection? Whether to then also permit those with HVM_FEP=y
> (but VM_EVENT=n) would be a follow-on question.
>
> Jan

I can force emulation of anything by writing an instruction to an xAPIC register
followed by RET and then CALL-ing that address. If we want a hypervisor
capable of running such ridiculous cases the emulator must be complete. If not,
the question is what to do otherwise. Inject #UD? Crash the domain?

Cheers,
Alejandro



* Re: [PATCH v3 0/4] x86: Drop cross-vendor support
  2026-03-11  9:46   ` Alejandro Vallejo
@ 2026-03-11 10:11     ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2026-03-11 10:11 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Oleksii Kurochko, Community Manager, Andrew Cooper,
	Roger Pau Monné, Anthony PERARD, Jason Andryuk, xen-devel

On 11.03.2026 10:46, Alejandro Vallejo wrote:
> On Wed Mar 11, 2026 at 9:54 AM CET, Jan Beulich wrote:
>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>> Alejandro Vallejo (4):
>>>   x86: Reject CPU policies with vendors other than the host's
>>>   x86/hvm: Disable cross-vendor handling in #UD handler
>>>   x86/hvm: Remove cross-vendor checks from MSR handlers.
>>>   x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems
>>
>> With this, do we actually want to keep emulation of SYS{ENTER,EXIT,CALL,RET}
>> in the insn emulator? Or at least gate that on e.g. VM_EVENT, to still allow
>> its use by introspection? Whether to then also permit those with HVM_FEP=y
>> (but VM_EVENT=n) would be a follow-on question.
> 
> I can force emulation of anything by writing an instruction to an xAPIC register
> followed by RET and then CALL-ing that address. If we want a hypervisor
> capable of running such ridiculous cases the emulator must be complete.

Well, yes, hence the question. Or in other words: Do we consider completeness
important for these insns? (There are others we don't currently support.)

> If not,
> the question is what to do otherwise. Inject #UD? Crash the domain?

#UD is what I think we inject for anything the emulator can't handle.

Jan



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-03-11  9:30       ` Jan Beulich
@ 2026-03-11 10:21         ` Alejandro Vallejo
  2026-03-11 11:06           ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 10:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On Wed Mar 11, 2026 at 10:30 AM CET, Jan Beulich wrote:
> On 11.03.2026 10:25, Alejandro Vallejo wrote:
>> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>>>>      return X86EMUL_OKAY;
>>>>  }
>>>>  
>>>> -static bool cf_check is_cross_vendor(
>>>> -    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
>>>> -{
>>>> -    switch ( ctxt->opcode )
>>>> -    {
>>>> -    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
>>>> -    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
>>>> -    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
>>>> -        return true;
>>>> -    }
>>>> -
>>>> -    return false;
>>>> -}
>>>> -
>>>>  void hvm_ud_intercept(struct cpu_user_regs *regs)
>>>>  {
>>>>      struct vcpu *cur = current;
>>>> -    bool should_emulate =
>>>> -        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
>>>>      struct hvm_emulate_ctxt ctxt;
>>>> +    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>>> +    uint32_t walk = PFEC_insn_fetch;
>>>> +    unsigned long addr;
>>>> +    char sig[5]; /* ud2; .ascii "xen" */
>>>>  
>>>> -    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
>>>> +    if ( !opt_hvm_fep )
>>>> +        goto reinject;
>>>
>>> Is this possible at all, i.e. shouldn't there be ASSERT_UNREACHABLE() in
>>> addition if already the check is kept?
>> 
>> It isn't.
>> 
>> v2 used to BUG_ON() at VMEXIT when !HVM_FEP and compile out this handler
>> altogether, but Andrew was unhappy with it because he uses it occasionally and
>> it'd be more of a PITA to undo the removal or force a HVM_FEP-enabled hypervisor
>> for the #UD handler to be present at all.
>> 
>> I have no strong views on the ASSERT. It's not expected to happen, but I don't
>> expect the existing conditions to change either, and if they do that will warrant
>> a change in the handler too. 
>> 
>> If you want it I can add it, but if we're not killing the handler in release I
>> don't think it's very helpful to assert/bug_on.
>
> I see two options: Drop the if() or add ASSERT_UNREACHABLE() to its body.

I'll go for that second option then.

>
>>>> -    if ( opt_hvm_fep )
>>>> -    {
>>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>>
>>> Why is this initializer not retained?
>> 
>> It is, it's just that the diff is terrible. An unfortunate side effect of the
>> removal of the braces. The scope collapsing forces it on top of the function,
>> before the emulation context is initialised.
>> 
>> It's set up in steps. walk is unconditionally initialised as insn_fetch, and
>> later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3. See...
>> 
>>>
>>>> -        unsigned long addr;
>>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>>> -
>>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>>> -                                        cs, &addr) &&
>>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>>> -        {
>>>> -            regs->rip += sizeof(sig);
>>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>>  
>>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>>> -                regs->rip = (uint32_t)regs->rip;
>>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>>> +        walk |= PFEC_user_mode;
>> 
>> ... here.
>
> But that's the point of my question: Why did you split it? All you mean to
> do is re-indentation.

Because I need to declare "walk" ahead of the statements. Thus this...

    uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
                     ? PFEC_user_mode : 0) | PFEC_insn_fetch;

must (by necessity) have the declaration placed on top before the emulator
context initialisation. The options are...

    uint32_t walk;
    [... lines ...]
    walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
            ? PFEC_user_mode : 0) | PFEC_insn_fetch;

... or...

    uint32_t walk = PFEC_insn_fetch;
    [... lines ...]
    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
        walk |= PFEC_user_mode;

Line count remains at 3 in both cases, but in the former case there's a
comparison, a ternary operator and an OR all adding cognitive load to the
same statement. In the latter case there's an assignment in the 1st statement,
an if+comparison in a separate line, and a separate OR in the final statement.
It's just simpler to mentally parse because the complexity is evenly
distributed.

I can see how the current form was preferred to avoid a third line (and
then a fourth due to the required newline, doubling the total). But with the
rearrangement that's no longer relevant.

If you have a very strong preference for the prior form I could keep it, though
I do have a preference myself for the latter out of improved readability.

Cheers,
Alejandro



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-03-11 10:21         ` Alejandro Vallejo
@ 2026-03-11 11:06           ` Jan Beulich
  2026-03-11 12:40             ` Alejandro Vallejo
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2026-03-11 11:06 UTC (permalink / raw)
  To: Alejandro Vallejo
  Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On 11.03.2026 11:21, Alejandro Vallejo wrote:
> On Wed Mar 11, 2026 at 10:30 AM CET, Jan Beulich wrote:
>> On 11.03.2026 10:25, Alejandro Vallejo wrote:
>>> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>>>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>>>> -    if ( opt_hvm_fep )
>>>>> -    {
>>>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>>>
>>>> Why is this initializer not retained?
>>>
>>> It is, it's just that the diff is terrible. An unfortunate side effect of the
>>> removal of the braces. The scope collapsing forces it on top of the function,
>>> before the emulation context is initialised.
>>>
>>> It's set up in steps. walk is unconditionally initialised as insn_fetch, and
>>> later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3. See...
>>>
>>>>
>>>>> -        unsigned long addr;
>>>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>>>> -
>>>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>>>> -                                        cs, &addr) &&
>>>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>>>> -        {
>>>>> -            regs->rip += sizeof(sig);
>>>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>>>  
>>>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>>>> -                regs->rip = (uint32_t)regs->rip;
>>>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>>>> +        walk |= PFEC_user_mode;
>>>
>>> ... here.
>>
>> But that's the point of my question: Why did you split it? All you mean to
>> do is re-indentation.
> 
> Because I need to declare "walk" ahead of the statements. Thus this...
> 
>     uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>                      ? PFEC_user_mode : 0) | PFEC_insn_fetch;
> 
> must (by necessity) have the declaration placed on top before the emulator
> context initialisation. The options are...
> 
>     uint32_t walk;
>     [... lines ...]
>     walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>             ? PFEC_user_mode : 0) | PFEC_insn_fetch;
> 
> ... or...
> 
>     uint32_t walk = PFEC_insn_fetch;
>     [... lines ...]
>     if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>         walk |= PFEC_user_mode;
> 
> Line count remains at 3 in both cases, but in the former case there's a
> comparison, a ternary operator and an OR all adding cognitive load to the
> same statement. In the latter case there's an assignment in the 1st statement,
> an if+comparison in a separate line, and a separate OR in the final statement.
> It's just simpler to mentally parse because the complexity is evenly
> distributed.
> 
> I can see how the current form was preferred to avoid a third line (and
> then a fourth due to the required newline, doubling the total). But with the
> rearrangement that's no longer relevant.
> 
> If you have a very strong preference for the prior form I could keep it, though
> I do have a preference myself for the latter out of improved readability.

Strong preference or not - readability is subjective. I prefer the present
form, where the variable obtains its final value right away. More generally,
with subjective aspects it may often be better to leave mechanical changes
(here: re-indentation) as purely mechanical. Things are different with
objective aspects, like style violations which of course can (and imo
preferably should) be corrected on such occasions.

Jan



* Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler
  2026-03-11 11:06           ` Jan Beulich
@ 2026-03-11 12:40             ` Alejandro Vallejo
  0 siblings, 0 replies; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 12:40 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Jason Andryuk, xen-devel

On Wed Mar 11, 2026 at 12:06 PM CET, Jan Beulich wrote:
> On 11.03.2026 11:21, Alejandro Vallejo wrote:
>> On Wed Mar 11, 2026 at 10:30 AM CET, Jan Beulich wrote:
>>> On 11.03.2026 10:25, Alejandro Vallejo wrote:
>>>> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>>>>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>>>>> -    if ( opt_hvm_fep )
>>>>>> -    {
>>>>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>>>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>>>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>>>>
>>>>> Why is this initializer not retained?
>>>>
>>>> It is, it's just that the diff is terrible. An unfortunate side effect of the
>>>> removal of the braces. The scope collapsing forces it on top of the function,
>>>> before the emulation context is initialised.
>>>>
>>>> It's set up in steps. walk is unconditionally initialised as insn_fetch, and
>>>> later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3. See...
>>>>
>>>>>
>>>>>> -        unsigned long addr;
>>>>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>>>>> -
>>>>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>>>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>>>>> -                                        cs, &addr) &&
>>>>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>>>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>>>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>>>>> -        {
>>>>>> -            regs->rip += sizeof(sig);
>>>>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>>>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>>>>  
>>>>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>>>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>>>>> -                regs->rip = (uint32_t)regs->rip;
>>>>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>>>>> +        walk |= PFEC_user_mode;
>>>>
>>>> ... here.
>>>
>>> But that's the point of my question: Why did you split it? All you mean to
>>> do is re-indentation.
>> 
>> Because I need to declare "walk" ahead of the statements. Thus this...
>> 
>>     uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>                      ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>> 
>> must (by necessity) have the declaration placed on top before the emulator
>> context initialisation. The options are...
>> 
>>     uint32_t walk;
>>     [... lines ...]
>>     walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>             ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>> 
>> ... or...
>> 
>>     uint32_t walk = PFEC_insn_fetch;
>>     [... lines ...]
>>     if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>         walk |= PFEC_user_mode;
>> 
>> Line count remains at 3 in both cases, but in the former case there's a
>> comparison, a ternary operator and an OR all adding cognitive load to the
>> same statement. In the latter case there's an assignment in the 1st statement,
>> an if+comparison in a separate line, and a separate OR in the final statement.
>> It's just simpler to mentally parse because the complexity is evenly
>> distributed.
>> 
>> I can see how the current form was preferred to avoid a third line (and
>> then a fourth due to the required newline, doubling the total). But with the
>> rearrangement that's no longer relevant.
>> 
>> If you have a very strong preference for the prior form I could keep it, though
>> I do have a preference myself for the latter out of improved readability.
>
> Strong preference or not - readability is subjective. I prefer the present
> form, where the variable obtains its final value right away. More generally,
> with subjective aspects it may often be better to leave mechanical changes
> (here: re-indentation) as purely mechanical. Things are different with
> objective aspects, like style violations which of course can (and imo
> preferably should) be corrected on such occasions.

Ack

Cheers,
Alejandro



* Re: [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's
  2026-03-11  8:26   ` Jan Beulich
@ 2026-03-11 12:43     ` Alejandro Vallejo
  0 siblings, 0 replies; 19+ messages in thread
From: Alejandro Vallejo @ 2026-03-11 12:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksii Kurochko, Community Manager, Andrew Cooper,
	Roger Pau Monné, Anthony PERARD, xen-devel

On Wed Mar 11, 2026 at 9:26 AM CET, Jan Beulich wrote:
> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>> --- a/CHANGELOG.md
>> +++ b/CHANGELOG.md
>> @@ -22,6 +22,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>>     - Xenoprofile support.  Oprofile themselves removed support for Xen in 2014
>>       prior to the version 1.0 release, and there has been no development since
>>       before then in Xen.
>> +   - Domains can no longer run on a CPU vendor if they were initially launched
>> +     on a different CPU vendor. This affects live migrations and save/restore
>> +     workflows accross mixed-vendor hosts. Cross-vendor emulation has always
>> +     been unreliable, but since 2017 with the advent of speculation security it
>> +     became unsustainably so.
>
> While the code adjustment looks okay to me, the wording is a little odd. What is
> "run on a CPU vendor"? How about "Domains can no longer run on a system with CPUs
> of a vendor different from the one they were initially launched on"?
>
> Also, nit: "across".

Sure

Cheers,
Alejandro


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2026-03-11 12:44 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-13 11:42 [PATCH v3 0/4] x86: Drop cross-vendor support Alejandro Vallejo
2026-02-13 11:42 ` [PATCH v3 1/4] x86: Reject CPU policies with vendors other than the host's Alejandro Vallejo
2026-03-11  8:26   ` Jan Beulich
2026-03-11 12:43     ` Alejandro Vallejo
2026-02-13 11:42 ` [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler Alejandro Vallejo
2026-03-11  8:35   ` Jan Beulich
2026-03-11  9:25     ` Alejandro Vallejo
2026-03-11  9:30       ` Jan Beulich
2026-03-11 10:21         ` Alejandro Vallejo
2026-03-11 11:06           ` Jan Beulich
2026-03-11 12:40             ` Alejandro Vallejo
2026-02-13 11:42 ` [PATCH v3 3/4] x86/hvm: Remove cross-vendor checks from MSR handlers Alejandro Vallejo
2026-03-11  8:39   ` Jan Beulich
2026-02-13 11:42 ` [PATCH v3 4/4] x86/svm: Drop emulation of Intel's SYSENTER behaviour on AMD systems Alejandro Vallejo
2026-03-11  8:46   ` Jan Beulich
2026-03-11  9:27     ` Alejandro Vallejo
2026-03-11  8:54 ` [PATCH v3 0/4] x86: Drop cross-vendor support Jan Beulich
2026-03-11  9:46   ` Alejandro Vallejo
2026-03-11 10:11     ` Jan Beulich
