All of lore.kernel.org
* [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo
@ 2026-03-13 16:36 Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 1/7] x86: relax some CPU checks for non-64 bit CPUs Kevin Lampis
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

Remove the x86-prefixed field names from struct cpuinfo_x86 and update
all the places they are used.
This work is part of making Xen safe for Intel family 18/19.

Kevin Lampis (7):
  x86: relax some CPU checks for non-64 bit CPUs
  x86: Remove x86 prefixed names from mcheck code
  x86: Remove x86 prefixed names from acpi code
  x86: Remove Intel 0x65, 0x6e, 0x5d from VMX code
  x86: Remove x86 prefixed names from hvm code
  x86: Remove x86 prefixed names from x86/cpu/ files
  x86: Remove x86 prefixed names from cpuinfo

 xen/arch/x86/acpi/cpu_idle.c           |  21 +-
 xen/arch/x86/acpi/cpufreq/acpi.c       |   2 +-
 xen/arch/x86/acpi/cpufreq/cpufreq.c    |   4 +-
 xen/arch/x86/acpi/cpufreq/powernow.c   |   4 +-
 xen/arch/x86/cpu/centaur.c             |   4 +-
 xen/arch/x86/cpu/hygon.c               |   4 +-
 xen/arch/x86/cpu/intel_cacheinfo.c     |   6 +-
 xen/arch/x86/cpu/mcheck/amd_nonfatal.c |   2 +-
 xen/arch/x86/cpu/mcheck/mcaction.c     |   2 +-
 xen/arch/x86/cpu/mcheck/mce.c          |  36 ++--
 xen/arch/x86/cpu/mcheck/mce.h          |   2 +-
 xen/arch/x86/cpu/mcheck/mce_amd.c      |  16 +-
 xen/arch/x86/cpu/mcheck/mce_intel.c    |   5 +-
 xen/arch/x86/cpu/mcheck/non-fatal.c    |   2 +-
 xen/arch/x86/cpu/mcheck/vmce.c         |   8 +-
 xen/arch/x86/cpu/mtrr/generic.c        |   5 +-
 xen/arch/x86/cpu/mwait-idle.c          |   4 +-
 xen/arch/x86/cpu/vpmu.c                |   4 +-
 xen/arch/x86/cpu/vpmu_amd.c            |   6 +-
 xen/arch/x86/cpu/vpmu_intel.c          |   4 +-
 xen/arch/x86/hvm/hvm.c                 |   2 +-
 xen/arch/x86/hvm/svm/svm.c             |   6 +-
 xen/arch/x86/hvm/vmx/vmcs.c            |   4 +-
 xen/arch/x86/hvm/vmx/vmx.c             | 280 ++++++++++++-------------
 xen/arch/x86/include/asm/cpufeature.h  |  21 +-
 25 files changed, 215 insertions(+), 239 deletions(-)

-- 
2.51.1



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 1/7] x86: relax some CPU checks for non-64 bit CPUs
  2026-03-13 16:36 [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-23  9:54   ` Jan Beulich
  2026-03-13 16:36 ` [PATCH v3 2/7] x86: Remove x86 prefixed names from mcheck code Kevin Lampis
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

These checks guarded against non-64-bit CPU models, which Xen no
longer supports, so they are no longer needed.

The family switch statement in mcheck_init() is removed so that
Intel family 18/19 CPUs are also accepted.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- New patch based on review comments

Changes in v3:
- Moved patch to front of the series
---
 xen/arch/x86/acpi/cpu_idle.c    | 5 ++---
 xen/arch/x86/cpu/mcheck/mce.c   | 8 +-------
 xen/arch/x86/cpu/mtrr/generic.c | 3 +--
 3 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 0b3d0631dd..46749ca337 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -1059,9 +1059,8 @@ static void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flag
      * is not required while entering C3 type state on
      * P4, Core and beyond CPUs
      */
-    if ( c->x86_vendor == X86_VENDOR_INTEL &&
-        (c->x86 > 0x6 || (c->x86 == 6 && c->x86_model >= 14)) )
-            flags->bm_control = 0;
+    if ( c->x86_vendor == X86_VENDOR_INTEL )
+        flags->bm_control = 0;
 }
 
 #define VENDOR_INTEL                   (1)
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 9a91807cfb..c4b3b687a2 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -777,13 +777,7 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
 
 #ifdef CONFIG_INTEL
     case X86_VENDOR_INTEL:
-        switch ( c->x86 )
-        {
-        case 6:
-        case 15:
-            inited = intel_mcheck_init(c, bsp);
-            break;
-        }
+        inited = intel_mcheck_init(c, bsp);
         break;
 #endif
 
diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index c587e9140e..0ca6a2083f 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -218,8 +218,7 @@ static void __init print_mtrr_state(const char *level)
 			printk("%s  %u disabled\n", level, i);
 	}
 
-	if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
-	     boot_cpu_data.x86 >= 0xf) ||
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
 		uint64_t syscfg, tom2;
 
-- 
2.51.1




* [PATCH v3 2/7] x86: Remove x86 prefixed names from mcheck code
  2026-03-13 16:36 [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 1/7] x86: relax some CPU checks for non-64 bit CPUs Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-23  9:59   ` Jan Beulich
  2026-03-13 16:36 ` [PATCH v3 3/7] x86: Remove x86 prefixed names from acpi code Kevin Lampis
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

struct cpuinfo_x86
  .x86        => .family
  .x86_vendor => .vendor
  .x86_model  => .model
  .x86_mask   => .stepping

No functional change.

This work is part of making Xen safe for Intel family 18/19.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- Undo the family != 5 check in mcheck_init()
- Change model range check in mce_firstbank()

Changes in v3:
- Switch to a family != 0xf check in mce_is_broadcast()
---
 xen/arch/x86/cpu/mcheck/amd_nonfatal.c |  2 +-
 xen/arch/x86/cpu/mcheck/mcaction.c     |  2 +-
 xen/arch/x86/cpu/mcheck/mce.c          | 28 +++++++++++++-------------
 xen/arch/x86/cpu/mcheck/mce.h          |  2 +-
 xen/arch/x86/cpu/mcheck/mce_amd.c      | 16 +++++++--------
 xen/arch/x86/cpu/mcheck/mce_intel.c    |  5 +----
 xen/arch/x86/cpu/mcheck/non-fatal.c    |  2 +-
 xen/arch/x86/cpu/mcheck/vmce.c         |  8 ++++----
 8 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_nonfatal.c b/xen/arch/x86/cpu/mcheck/amd_nonfatal.c
index 7d48c9ab5f..fb52639e13 100644
--- a/xen/arch/x86/cpu/mcheck/amd_nonfatal.c
+++ b/xen/arch/x86/cpu/mcheck/amd_nonfatal.c
@@ -191,7 +191,7 @@ static void cf_check mce_amd_work_fn(void *data)
 
 void __init amd_nonfatal_mcheck_init(struct cpuinfo_x86 *c)
 {
-	if (!(c->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)))
+	if (!(c->vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)))
 		return;
 
 	/* Assume we are on K8 or newer AMD or Hygon CPU here */
diff --git a/xen/arch/x86/cpu/mcheck/mcaction.c b/xen/arch/x86/cpu/mcheck/mcaction.c
index bf7a0de965..236424569a 100644
--- a/xen/arch/x86/cpu/mcheck/mcaction.c
+++ b/xen/arch/x86/cpu/mcheck/mcaction.c
@@ -101,7 +101,7 @@ mc_memerr_dhandler(struct mca_binfo *binfo,
                       * not always precise. In that case, fallback to broadcast.
                       */
                      global->mc_domid != bank->mc_domid ||
-                     (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+                     (boot_cpu_data.vendor == X86_VENDOR_INTEL &&
                       (!(global->mc_gstatus & MCG_STATUS_LMCE) ||
                        !(d->vcpu[mc_vcpuid]->arch.vmce.mcg_ext_ctl &
                          MCG_EXT_CTL_LMCE_EN))) )
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index c4b3b687a2..2c70964a82 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -23,6 +23,7 @@
 #include <asm/apic.h>
 #include <asm/msr.h>
 #include <asm/p2m.h>
+#include <asm/intel-family.h>
 
 #include "mce.h"
 #include "barrier.h"
@@ -334,7 +335,7 @@ mcheck_mca_logout(enum mca_source who, struct mca_banks *bankmask,
                 mca_init_global(mc_flags, mig);
                 /* A hook here to get global extended msrs */
                 if ( IS_ENABLED(CONFIG_INTEL) &&
-                     boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+                     boot_cpu_data.vendor == X86_VENDOR_INTEL )
                     intel_get_extended_msrs(mig, mci);
             }
         }
@@ -564,8 +565,7 @@ bool mce_available(const struct cpuinfo_x86 *c)
  */
 unsigned int mce_firstbank(struct cpuinfo_x86 *c)
 {
-    return c->x86 == 6 &&
-           c->x86_vendor == X86_VENDOR_INTEL && c->x86_model < 0x1a;
+    return c->vfm >= INTEL_PENTIUM_PRO && c->vfm < INTEL_NEHALEM_EP;
 }
 
 static int show_mca_info(int inited, struct cpuinfo_x86 *c)
@@ -596,7 +596,7 @@ static int show_mca_info(int inited, struct cpuinfo_x86 *c)
         case mcheck_amd_famXX:
         case mcheck_hygon:
             printk("%s%s Fam%xh machine check reporting enabled\n",
-                   prefix, type_str[inited], c->x86);
+                   prefix, type_str[inited], c->family);
             break;
 
         case mcheck_none:
@@ -766,7 +766,7 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
     else if ( cpu_bank_alloc(cpu) )
         panic("Insufficient memory for MCE bank allocations\n");
 
-    switch ( c->x86_vendor )
+    switch ( c->vendor )
     {
 #ifdef CONFIG_AMD
     case X86_VENDOR_AMD:
@@ -876,7 +876,7 @@ static void x86_mcinfo_apei_save(
     memset(&m, 0, sizeof(struct mce));
 
     m.cpu = mc_global->mc_coreid;
-    m.cpuvendor = xen2linux_vendor(boot_cpu_data.x86_vendor);
+    m.cpuvendor = xen2linux_vendor(boot_cpu_data.vendor);
     m.cpuid = cpuid_eax(1);
     m.socketid = mc_global->mc_socketid;
     m.apicid = mc_global->mc_apicid;
@@ -977,10 +977,10 @@ static void cf_check __maybe_unused do_mc_get_cpu_info(void *v)
                         &xcp->mc_apicid, &xcp->mc_ncores,
                         &xcp->mc_ncores_active, &xcp->mc_nthreads);
     xcp->mc_cpuid_level = c->cpuid_level;
-    xcp->mc_family = c->x86;
-    xcp->mc_vendor = xen2linux_vendor(c->x86_vendor);
-    xcp->mc_model = c->x86_model;
-    xcp->mc_step = c->x86_mask;
+    xcp->mc_family = c->family;
+    xcp->mc_vendor = xen2linux_vendor(c->vendor);
+    xcp->mc_model = c->model;
+    xcp->mc_step = c->stepping;
     xcp->mc_cache_size = c->x86_cache_size;
     xcp->mc_cache_alignment = c->x86_cache_alignment;
     memcpy(xcp->mc_vendorid, c->x86_vendor_id, sizeof xcp->mc_vendorid);
@@ -1136,7 +1136,7 @@ static bool __maybe_unused x86_mc_msrinject_verify(struct xen_mc_msrinject *mci)
 
         if ( IS_MCA_BANKREG(reg, mci->mcinj_cpunr) )
         {
-            if ( c->x86_vendor == X86_VENDOR_AMD )
+            if ( c->vendor == X86_VENDOR_AMD )
             {
                 /*
                  * On AMD we can set MCi_STATUS_WREN in the
@@ -1171,15 +1171,15 @@ static bool __maybe_unused x86_mc_msrinject_verify(struct xen_mc_msrinject *mci)
             case MSR_F10_MC4_MISC1:
             case MSR_F10_MC4_MISC2:
             case MSR_F10_MC4_MISC3:
-                if ( c->x86_vendor != X86_VENDOR_AMD )
+                if ( c->vendor != X86_VENDOR_AMD )
                     reason = "only supported on AMD";
-                else if ( c->x86 < 0x10 )
+                else if ( c->family < 0x10 )
                     reason = "only supported on AMD Fam10h+";
                 break;
 
             /* MSRs that the HV will take care of */
             case MSR_K8_HWCR:
-                if ( c->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
+                if ( c->vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
                     reason = "HV will operate HWCR";
                 else
                     reason = "only supported on AMD or Hygon";
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index 920b075355..3b61b12487 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -137,7 +137,7 @@ void x86_mcinfo_dump(struct mc_info *mi);
 
 static inline int mce_vendor_bank_msr(const struct vcpu *v, uint32_t msr)
 {
-    switch (boot_cpu_data.x86_vendor) {
+    switch (boot_cpu_data.vendor) {
     case X86_VENDOR_INTEL:
         if (msr >= MSR_IA32_MC0_CTL2 &&
             msr < MSR_IA32_MCx_CTL2(v->arch.vmce.mcg_cap & MCG_CAP_COUNT) )
diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.c b/xen/arch/x86/cpu/mcheck/mce_amd.c
index 25c29eb3d2..2d17832d9c 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -160,17 +160,17 @@ mcequirk_lookup_amd_quirkdata(const struct cpuinfo_x86 *c)
 {
     unsigned int i;
 
-    BUG_ON(c->x86_vendor != X86_VENDOR_AMD);
+    BUG_ON(c->vendor != X86_VENDOR_AMD);
 
     for ( i = 0; i < ARRAY_SIZE(mce_amd_quirks); i++ )
     {
-        if ( c->x86 != mce_amd_quirks[i].cpu_family )
+        if ( c->family != mce_amd_quirks[i].cpu_family )
             continue;
         if ( (mce_amd_quirks[i].cpu_model != ANY) &&
-             (mce_amd_quirks[i].cpu_model != c->x86_model) )
+             (mce_amd_quirks[i].cpu_model != c->model) )
             continue;
         if ( (mce_amd_quirks[i].cpu_stepping != ANY) &&
-             (mce_amd_quirks[i].cpu_stepping != c->x86_mask) )
+             (mce_amd_quirks[i].cpu_stepping != c->stepping) )
                 continue;
         return mce_amd_quirks[i].quirk;
     }
@@ -291,13 +291,13 @@ amd_mcheck_init(const struct cpuinfo_x86 *c, bool bsp)
     uint32_t i;
     enum mcequirk_amd_flags quirkflag = 0;
 
-    if ( c->x86_vendor != X86_VENDOR_HYGON )
+    if ( c->vendor != X86_VENDOR_HYGON )
         quirkflag = mcequirk_lookup_amd_quirkdata(c);
 
     /* Assume that machine check support is available.
      * The minimum provided support is at least the K8. */
     if ( bsp )
-        mce_handler_init(c->x86 == 0xf ? &k8_callbacks : &k10_callbacks);
+        mce_handler_init(c->family == 0xf ? &k8_callbacks : &k10_callbacks);
 
     for ( i = 0; i < this_cpu(nr_mce_banks); i++ )
     {
@@ -311,7 +311,7 @@ amd_mcheck_init(const struct cpuinfo_x86 *c, bool bsp)
         }
     }
 
-    if ( c->x86 == 0xf )
+    if ( c->family == 0xf )
         return mcheck_amd_k8;
 
     if ( quirkflag == MCEQUIRK_F10_GART )
@@ -337,6 +337,6 @@ amd_mcheck_init(const struct cpuinfo_x86 *c, bool bsp)
             ppin_msr = MSR_AMD_PPIN;
     }
 
-    return c->x86_vendor == X86_VENDOR_HYGON ?
+    return c->vendor == X86_VENDOR_HYGON ?
             mcheck_hygon : mcheck_amd_famXX;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 839a0e5ba9..d49737f24a 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -711,10 +711,7 @@ static bool mce_is_broadcast(struct cpuinfo_x86 *c)
      * DisplayFamily_DisplayModel encoding of 06H_EH and above,
      * a MCA signal is broadcast to all logical processors in the system
      */
-    if ( c->x86_vendor == X86_VENDOR_INTEL && c->x86 == 6 &&
-         c->x86_model >= 0xe )
-        return true;
-    return false;
+    return c->vendor == X86_VENDOR_INTEL && c->family != 0xf;
 }
 
 static bool intel_enable_lmce(void)
diff --git a/xen/arch/x86/cpu/mcheck/non-fatal.c b/xen/arch/x86/cpu/mcheck/non-fatal.c
index a9ee9bb94f..4e7c64abef 100644
--- a/xen/arch/x86/cpu/mcheck/non-fatal.c
+++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
@@ -23,7 +23,7 @@ static int __init cf_check init_nonfatal_mce_checker(void)
 	/*
 	 * Check for non-fatal errors every MCE_RATE s
 	 */
-	switch (c->x86_vendor) {
+	switch (c->vendor) {
 #ifdef CONFIG_AMD
 	case X86_VENDOR_AMD:
 	case X86_VENDOR_HYGON:
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 1a7e92506a..84776aeec8 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -45,7 +45,7 @@ void vmce_init_vcpu(struct vcpu *v)
     int i;
 
     /* global MCA MSRs init */
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
         v->arch.vmce.mcg_cap = INTEL_GUEST_MCG_CAP;
     else
         v->arch.vmce.mcg_cap = AMD_GUEST_MCG_CAP;
@@ -63,7 +63,7 @@ int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt)
 {
     unsigned long guest_mcg_cap;
 
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
         guest_mcg_cap = INTEL_GUEST_MCG_CAP | MCG_LMCE_P;
     else
         guest_mcg_cap = AMD_GUEST_MCG_CAP;
@@ -136,7 +136,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         break;
 
     default:
-        switch ( boot_cpu_data.x86_vendor )
+        switch ( boot_cpu_data.vendor )
         {
 #ifdef CONFIG_INTEL
         case X86_VENDOR_CENTAUR:
@@ -273,7 +273,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         break;
 
     default:
-        switch ( boot_cpu_data.x86_vendor )
+        switch ( boot_cpu_data.vendor )
         {
 #ifdef CONFIG_INTEL
         case X86_VENDOR_INTEL:
-- 
2.51.1




* [PATCH v3 3/7] x86: Remove x86 prefixed names from acpi code
  2026-03-13 16:36 [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 1/7] x86: relax some CPU checks for non-64 bit CPUs Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 2/7] x86: Remove x86 prefixed names from mcheck code Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-23 10:02   ` Jan Beulich
  2026-03-13 16:36 ` [PATCH v3 4/7] x86: Remove Intel 0x65, 0x6e, 0x5d from VMX code Kevin Lampis
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

struct cpuinfo_x86
  .x86        => .family
  .x86_vendor => .vendor
  .x86_model  => .model
  .x86_mask   => .stepping

No functional change.

This work is part of making Xen safe for Intel family 18/19.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- Remove the XXX comments

Changes in v3:
- No changes
---
 xen/arch/x86/acpi/cpu_idle.c         | 18 +++++++++---------
 xen/arch/x86/acpi/cpufreq/acpi.c     |  2 +-
 xen/arch/x86/acpi/cpufreq/cpufreq.c  |  4 ++--
 xen/arch/x86/acpi/cpufreq/powernow.c |  4 ++--
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 46749ca337..3001e98a6e 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -178,10 +178,10 @@ static void cf_check do_get_hw_residencies(void *arg)
     struct cpuinfo_x86 *c = &current_cpu_data;
     struct hw_residencies *hw_res = arg;
 
-    if ( c->x86_vendor != X86_VENDOR_INTEL || c->x86 != 6 )
+    if ( c->vendor != X86_VENDOR_INTEL || c->family != 6 )
         return;
 
-    switch ( c->x86_model )
+    switch ( c->model )
     {
     /* 4th generation Intel Core (Haswell) */
     case 0x45:
@@ -915,7 +915,7 @@ void cf_check acpi_dead_idle(void)
             mwait(cx->address, 0);
         }
     }
-    else if ( (current_cpu_data.x86_vendor &
+    else if ( (current_cpu_data.vendor &
                (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
               cx->entry_method == ACPI_CSTATE_EM_SYSIO )
     {
@@ -1042,8 +1042,8 @@ static void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flag
     flags->bm_check = 0;
     if ( num_online_cpus() == 1 )
         flags->bm_check = 1;
-    else if ( (c->x86_vendor == X86_VENDOR_INTEL) ||
-              ((c->x86_vendor == X86_VENDOR_AMD) && (c->x86 == 0x15)) )
+    else if ( (c->vendor == X86_VENDOR_INTEL) ||
+              ((c->vendor == X86_VENDOR_AMD) && (c->family == 0x15)) )
     {
         /*
          * Today all MP CPUs that support C3 share cache.
@@ -1059,7 +1059,7 @@ static void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flag
      * is not required while entering C3 type state on
      * P4, Core and beyond CPUs
      */
-    if ( c->x86_vendor == X86_VENDOR_INTEL )
+    if ( c->vendor == X86_VENDOR_INTEL )
         flags->bm_control = 0;
 }
 
@@ -1415,12 +1415,12 @@ static void amd_cpuidle_init(struct acpi_processor_power *power)
     if ( vendor_override < 0 )
         return;
 
-    switch ( c->x86 )
+    switch ( c->family )
     {
     case 0x1a:
     case 0x19:
     case 0x18:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_HYGON )
+        if ( boot_cpu_data.vendor != X86_VENDOR_HYGON )
         {
     default:
             vendor_override = -1;
@@ -1647,7 +1647,7 @@ static int cf_check cpu_callback(
         break;
 
     case CPU_ONLINE:
-        if ( (boot_cpu_data.x86_vendor &
+        if ( (boot_cpu_data.vendor &
               (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
              processor_powers[cpu] )
             amd_cpuidle_init(processor_powers[cpu]);
diff --git a/xen/arch/x86/acpi/cpufreq/acpi.c b/xen/arch/x86/acpi/cpufreq/acpi.c
index d0ca660db1..de67f1aee2 100644
--- a/xen/arch/x86/acpi/cpufreq/acpi.c
+++ b/xen/arch/x86/acpi/cpufreq/acpi.c
@@ -454,7 +454,7 @@ static int cf_check acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 
     /* Check for APERF/MPERF support in hardware
      * also check for boost support */
-    if (c->x86_vendor == X86_VENDOR_INTEL && c->cpuid_level >= 6)
+    if (c->vendor == X86_VENDOR_INTEL && c->cpuid_level >= 6)
         on_selected_cpus(cpumask_of(cpu), feature_detect, policy, 1);
 
     /*
diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 5740c0d438..9ef62b3538 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -133,7 +133,7 @@ static int __init cf_check cpufreq_driver_init(void)
 
         ret = -ENOENT;
 
-        switch ( boot_cpu_data.x86_vendor )
+        switch ( boot_cpu_data.vendor )
         {
         case X86_VENDOR_INTEL:
             for ( i = 0; i < cpufreq_xen_cnt; i++ )
@@ -252,7 +252,7 @@ __initcall(cpufreq_driver_late_init);
 int cpufreq_cpu_init(unsigned int cpu)
 {
     /* Currently we only handle Intel, AMD and Hygon processor */
-    if ( boot_cpu_data.x86_vendor &
+    if ( boot_cpu_data.vendor &
          (X86_VENDOR_INTEL | X86_VENDOR_AMD | X86_VENDOR_HYGON) )
         return cpufreq_add_cpu(cpu);
 
diff --git a/xen/arch/x86/acpi/cpufreq/powernow.c b/xen/arch/x86/acpi/cpufreq/powernow.c
index beab6cac36..55166eac72 100644
--- a/xen/arch/x86/acpi/cpufreq/powernow.c
+++ b/xen/arch/x86/acpi/cpufreq/powernow.c
@@ -143,7 +143,7 @@ static void amd_fixup_frequency(struct xen_processor_px *px)
     int index = px->control & 0x00000007;
     const struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ((c->x86 != 0x10 || c->x86_model >= 10) && c->x86 != 0x11)
+    if ((c->family != 0x10 || c->model >= 10) && c->family != 0x11)
         return;
 
     val = rdmsr(MSR_PSTATE_DEF_BASE + index);
@@ -157,7 +157,7 @@ static void amd_fixup_frequency(struct xen_processor_px *px)
 
     fid = val & 0x3f;
     did = (val >> 6) & 7;
-    if (c->x86 == 0x10)
+    if (c->family == 0x10)
         px->core_frequency = (100 * (fid + 16)) >> did;
     else
         px->core_frequency = (100 * (fid + 8)) >> did;
-- 
2.51.1




* [PATCH v3 4/7] x86: Remove Intel 0x65, 0x6e, 0x5d from VMX code
  2026-03-13 16:36 [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
                   ` (2 preceding siblings ...)
  2026-03-13 16:36 ` [PATCH v3 3/7] x86: Remove x86 prefixed names from acpi code Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 5/7] x86: Remove x86 prefixed names from hvm code Kevin Lampis
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

These Intel models were used in telecoms equipment and are not regarded
as general-purpose processors.
- 0x5d (SoFIA 3G Granite/ES2.1)
- 0x65 (SoFIA LTE AOSP)
- 0x6e (Cougar Mountain)

Model 06_5DH does appear in the Intel Software Developer's Manual, but
Linux has declined to add these models to intel-family.h because they
are not general purpose.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- New patch based on review comments

Changes in v3:
- Expanded the commit message
---
 xen/arch/x86/hvm/vmx/vmx.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 82c55f49ae..e45060d403 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -550,7 +550,7 @@ static const struct lbr_info *__init get_model_specific_lbr(void)
         case 0x1c: case 0x26: case 0x27: case 0x35: case 0x36:
             return at_lbr;
         /* Silvermont */
-        case 0x37: case 0x4a: case 0x4d: case 0x5a: case 0x5d:
+        case 0x37: case 0x4a: case 0x4d: case 0x5a:
         /* Airmont */
         case 0x4c:
             return sm_lbr;
@@ -3126,10 +3126,7 @@ static bool __init has_if_pschange_mc(void)
     case 0x4a: /* Merrifield */
     case 0x5a: /* Moorefield */
     case 0x5c: /* Goldmont */
-    case 0x5d: /* SoFIA 3G Granite/ES2.1 */
-    case 0x65: /* SoFIA LTE AOSP */
     case 0x5f: /* Denverton */
-    case 0x6e: /* Cougar Mountain */
     case 0x75: /* Lightning Mountain */
     case 0x7a: /* Gemini Lake */
     case 0x86: /* Jacobsville */
-- 
2.51.1




* [PATCH v3 5/7] x86: Remove x86 prefixed names from hvm code
  2026-03-13 16:36 [PATCH v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
                   ` (3 preceding siblings ...)
  2026-03-13 16:36 ` [PATCH v3 4/7] x86: Remove Intel 0x65, 0x6e, 0x5d from VMX code Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-23 10:06   ` Jan Beulich
  2026-03-13 16:36 ` [PATCH v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files Kevin Lampis
  2026-03-13 16:36 ` [PATCH v3 7/7] x86: Remove x86 prefixed names from cpuinfo Kevin Lampis
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

struct cpuinfo_x86
  .x86        => .family
  .x86_vendor => .vendor
  .x86_model  => .model
  .x86_mask   => .stepping

No functional change.

This work is part of making Xen safe for Intel family 18/19.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- Group Silvermonts, Airmonts, Goldmonts in the switch statement
- Restore Errata info in lbr_tsx_fixup_check() and ler_to_fixup_check()

Changes in v3:
- No changes
---
 xen/arch/x86/hvm/hvm.c      |   2 +-
 xen/arch/x86/hvm/svm/svm.c  |   6 +-
 xen/arch/x86/hvm/vmx/vmcs.c |   4 +-
 xen/arch/x86/hvm/vmx/vmx.c  | 277 ++++++++++++++++++------------------
 4 files changed, 146 insertions(+), 143 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4d37a93c57..6ad52e1197 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3850,7 +3850,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
 {
     struct vcpu *cur = current;
     bool should_emulate =
-        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
+        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.vendor;
     struct hvm_emulate_ctxt ctxt;
 
     hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 243c41fb13..5e4d8b3c52 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -590,7 +590,7 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
 
     if ( opt_hvm_fep ||
-         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.vendor) )
         bitmap |= (1U << X86_EXC_UD);
     else
         bitmap &= ~(1U << X86_EXC_UD);
@@ -1057,7 +1057,7 @@ static void svm_guest_osvw_init(struct domain *d)
      * be conservative here and therefore we tell the guest that erratum 298
      * is present (because we really don't know).
      */
-    if ( osvw_length == 0 && boot_cpu_data.x86 == 0x10 )
+    if ( osvw_length == 0 && boot_cpu_data.family == 0x10 )
         svm->osvw.status |= 1;
 
     spin_unlock(&osvw_lock);
@@ -1805,7 +1805,7 @@ static int cf_check svm_msr_read_intercept(
         if ( !rdmsr_safe(msr, msr_content) )
             break;
 
-        if ( boot_cpu_data.x86 == 0xf )
+        if ( boot_cpu_data.family == 0xf )
         {
             /*
              * Win2k8 x64 reads this MSR on revF chips, where it wasn't
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index c2e7f9aed3..d3b1730f1d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vvmx.h>
 #include <asm/idt.h>
+#include <asm/intel-family.h>
 #include <asm/monitor.h>
 #include <asm/msr.h>
 #include <asm/processor.h>
@@ -2163,8 +2164,7 @@ int __init vmx_vmcs_init(void)
 
     if ( opt_ept_ad < 0 )
         /* Work around Erratum AVR41 on Avoton processors. */
-        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
-                       boot_cpu_data.x86_model == 0x4d);
+        opt_ept_ad = !(boot_cpu_data.vfm == INTEL_ATOM_SILVERMONT_D);
 
     ret = _vmx_cpu_up(true);
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e45060d403..3d308e149c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -34,6 +34,7 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vpt.h>
+#include <asm/intel-family.h>
 #include <asm/io.h>
 #include <asm/iocap.h>
 #include <asm/mce.h>
@@ -502,72 +503,74 @@ static const struct lbr_info *__ro_after_init model_specific_lbr;
 
 static const struct lbr_info *__init get_model_specific_lbr(void)
 {
-    switch ( boot_cpu_data.x86 )
+    switch ( boot_cpu_data.vfm )
     {
-    case 6:
-        switch ( boot_cpu_data.x86_model )
-        {
-        /* Core2 Duo */
-        case 0x0f:
-        /* Enhanced Core */
-        case 0x17:
-        /* Xeon 7400 */
-        case 0x1d:
-            return c2_lbr;
-        /* Nehalem */
-        case 0x1a: case 0x1e: case 0x1f: case 0x2e:
-        /* Westmere */
-        case 0x25: case 0x2c: case 0x2f:
-        /* Sandy Bridge */
-        case 0x2a: case 0x2d:
-        /* Ivy Bridge */
-        case 0x3a: case 0x3e:
-        /* Haswell */
-        case 0x3c: case 0x3f: case 0x45: case 0x46:
-        /* Broadwell */
-        case 0x3d: case 0x47: case 0x4f: case 0x56:
-            return nh_lbr;
-        /* Skylake */
-        case 0x4e: case 0x5e:
-        /* Xeon Scalable */
-        case 0x55:
-        /* Cannon Lake */
-        case 0x66:
-        /* Goldmont Plus */
-        case 0x7a:
-        /* Ice Lake */
-        case 0x6a: case 0x6c: case 0x7d: case 0x7e:
-        /* Tiger Lake */
-        case 0x8c: case 0x8d:
-        /* Tremont */
-        case 0x86:
-        /* Kaby Lake */
-        case 0x8e: case 0x9e:
-        /* Comet Lake */
-        case 0xa5: case 0xa6:
-            return sk_lbr;
-        /* Atom */
-        case 0x1c: case 0x26: case 0x27: case 0x35: case 0x36:
-            return at_lbr;
-        /* Silvermont */
-        case 0x37: case 0x4a: case 0x4d: case 0x5a:
-        /* Airmont */
-        case 0x4c:
-            return sm_lbr;
-        /* Goldmont */
-        case 0x5c: case 0x5f:
-            return gm_lbr;
-        }
-        break;
-
-    case 15:
-        switch ( boot_cpu_data.x86_model )
-        {
-        /* Pentium4/Xeon with em64t */
-        case 3: case 4: case 6:
-            return p4_lbr;
-        }
-        break;
+    case INTEL_CORE2_DUNNINGTON:
+    case INTEL_CORE2_MEROM:
+    case INTEL_CORE2_PENRYN:
+        return c2_lbr;
+
+    case INTEL_NEHALEM:
+    case INTEL_NEHALEM_EP:
+    case INTEL_NEHALEM_EX:
+    case INTEL_NEHALEM_G:
+    case INTEL_WESTMERE:
+    case INTEL_WESTMERE_EP:
+    case INTEL_WESTMERE_EX:
+    case INTEL_SANDYBRIDGE:
+    case INTEL_SANDYBRIDGE_X:
+    case INTEL_IVYBRIDGE:
+    case INTEL_IVYBRIDGE_X:
+    case INTEL_HASWELL:
+    case INTEL_HASWELL_G:
+    case INTEL_HASWELL_L:
+    case INTEL_HASWELL_X:
+    case INTEL_BROADWELL:
+    case INTEL_BROADWELL_D:
+    case INTEL_BROADWELL_G:
+    case INTEL_BROADWELL_X:
+        return nh_lbr;
+
+    case INTEL_SKYLAKE:
+    case INTEL_SKYLAKE_L:
+    case INTEL_SKYLAKE_X:
+    case INTEL_CANNONLAKE_L:
+    case INTEL_ATOM_GOLDMONT_PLUS:
+    case INTEL_ICELAKE:
+    case INTEL_ICELAKE_D:
+    case INTEL_ICELAKE_L:
+    case INTEL_ICELAKE_X:
+    case INTEL_TIGERLAKE:
+    case INTEL_TIGERLAKE_L:
+    case INTEL_ATOM_TREMONT_D:
+    case INTEL_KABYLAKE:
+    case INTEL_KABYLAKE_L:
+    case INTEL_COMETLAKE:
+    case INTEL_COMETLAKE_L:
+        return sk_lbr;
+
+    case INTEL_ATOM_BONNELL:
+    case INTEL_ATOM_BONNELL_MID:
+    case INTEL_ATOM_SALTWELL:
+    case INTEL_ATOM_SALTWELL_MID:
+    case INTEL_ATOM_SALTWELL_TABLET:
+        return at_lbr;
+
+    case INTEL_ATOM_SILVERMONT:
+    case INTEL_ATOM_SILVERMONT_MID:
+    case INTEL_ATOM_SILVERMONT_D:
+    case INTEL_ATOM_SILVERMONT_MID2:
+    case INTEL_ATOM_AIRMONT:
+        return sm_lbr;
+
+    case INTEL_ATOM_GOLDMONT:
+    case INTEL_ATOM_GOLDMONT_D:
+        return gm_lbr;
+
+    case INTEL_P4_PRESCOTT:
+    case INTEL_P4_PRESCOTT_2M:
+    case INTEL_P4_CEDARMILL:
+        return p4_lbr;
     }
 
     return NULL;
@@ -804,7 +807,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     int rc = 0;
 
     if ( opt_hvm_fep ||
-         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
+         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.vendor) )
         v->arch.hvm.vmx.exception_bitmap |= (1U << X86_EXC_UD);
     else
         v->arch.hvm.vmx.exception_bitmap &= ~(1U << X86_EXC_UD);
@@ -3073,68 +3076,68 @@ static bool __init has_if_pschange_mc(void)
      * IF_PSCHANGE_MC is only known to affect Intel Family 6 processors at
      * this time.
      */
-    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-         boot_cpu_data.x86 != 6 )
+    if ( boot_cpu_data.vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.family != 6 )
         return false;
 
-    switch ( boot_cpu_data.x86_model )
+    switch ( boot_cpu_data.vfm )
     {
         /*
          * Core processors since at least Nehalem are vulnerable.
          */
-    case 0x1f: /* Auburndale / Havendale */
-    case 0x1e: /* Nehalem */
-    case 0x1a: /* Nehalem EP */
-    case 0x2e: /* Nehalem EX */
-    case 0x25: /* Westmere */
-    case 0x2c: /* Westmere EP */
-    case 0x2f: /* Westmere EX */
-    case 0x2a: /* SandyBridge */
-    case 0x2d: /* SandyBridge EP/EX */
-    case 0x3a: /* IvyBridge */
-    case 0x3e: /* IvyBridge EP/EX */
-    case 0x3c: /* Haswell */
-    case 0x3f: /* Haswell EX/EP */
-    case 0x45: /* Haswell D */
-    case 0x46: /* Haswell H */
-    case 0x3d: /* Broadwell */
-    case 0x47: /* Broadwell H */
-    case 0x4f: /* Broadwell EP/EX */
-    case 0x56: /* Broadwell D */
-    case 0x4e: /* Skylake M */
-    case 0x5e: /* Skylake D */
-    case 0x55: /* Skylake-X / Cascade Lake */
-    case 0x7d: /* Ice Lake */
-    case 0x7e: /* Ice Lake */
-    case 0x8e: /* Kaby / Coffee / Whiskey Lake M */
-    case 0x9e: /* Kaby / Coffee / Whiskey Lake D */
-    case 0xa5: /* Comet Lake H/S */
-    case 0xa6: /* Comet Lake U */
+    case INTEL_NEHALEM_G:
+    case INTEL_NEHALEM:
+    case INTEL_NEHALEM_EP:
+    case INTEL_NEHALEM_EX:
+    case INTEL_WESTMERE:
+    case INTEL_WESTMERE_EP:
+    case INTEL_WESTMERE_EX:
+    case INTEL_SANDYBRIDGE:
+    case INTEL_SANDYBRIDGE_X:
+    case INTEL_IVYBRIDGE:
+    case INTEL_IVYBRIDGE_X:
+    case INTEL_HASWELL:
+    case INTEL_HASWELL_X:
+    case INTEL_HASWELL_L:
+    case INTEL_HASWELL_G:
+    case INTEL_BROADWELL:
+    case INTEL_BROADWELL_G:
+    case INTEL_BROADWELL_X:
+    case INTEL_BROADWELL_D:
+    case INTEL_SKYLAKE_L:
+    case INTEL_SKYLAKE:
+    case INTEL_SKYLAKE_X:
+    case INTEL_ICELAKE:
+    case INTEL_ICELAKE_L:
+    case INTEL_KABYLAKE_L:
+    case INTEL_KABYLAKE:
+    case INTEL_COMETLAKE:
+    case INTEL_COMETLAKE_L:
         return true;
 
         /*
          * Atom processors are not vulnerable.
          */
-    case 0x1c: /* Pineview */
-    case 0x26: /* Lincroft */
-    case 0x27: /* Penwell */
-    case 0x35: /* Cloverview */
-    case 0x36: /* Cedarview */
-    case 0x37: /* Baytrail / Valleyview (Silvermont) */
-    case 0x4d: /* Avaton / Rangely (Silvermont) */
-    case 0x4c: /* Cherrytrail / Brasswell */
-    case 0x4a: /* Merrifield */
-    case 0x5a: /* Moorefield */
-    case 0x5c: /* Goldmont */
-    case 0x5f: /* Denverton */
-    case 0x75: /* Lightning Mountain */
-    case 0x7a: /* Gemini Lake */
-    case 0x86: /* Jacobsville */
+    case INTEL_ATOM_BONNELL:
+    case INTEL_ATOM_BONNELL_MID:
+    case INTEL_ATOM_SALTWELL_MID:
+    case INTEL_ATOM_SALTWELL_TABLET:
+    case INTEL_ATOM_SALTWELL:
+    case INTEL_ATOM_SILVERMONT:
+    case INTEL_ATOM_SILVERMONT_D:
+    case INTEL_ATOM_SILVERMONT_MID:
+    case INTEL_ATOM_SILVERMONT_MID2:
+    case INTEL_ATOM_GOLDMONT:
+    case INTEL_ATOM_GOLDMONT_D:
+    case INTEL_ATOM_GOLDMONT_PLUS:
+    case INTEL_ATOM_AIRMONT:
+    case INTEL_ATOM_AIRMONT_NP:
+    case INTEL_ATOM_TREMONT_D:
         return false;
 
     default:
         printk("Unrecognised CPU model %#x - assuming vulnerable to IF_PSCHANGE_MC\n",
-               boot_cpu_data.x86_model);
+               boot_cpu_data.model);
         return true;
     }
 }
@@ -3428,23 +3431,23 @@ static void __init lbr_tsx_fixup_check(void)
      * fixed up as well.
      */
     if ( cpu_has_hle || cpu_has_rtm ||
-         boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-         boot_cpu_data.x86 != 6 )
+         boot_cpu_data.vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.family != 6 )
         return;
 
-    switch ( boot_cpu_data.x86_model )
+    switch ( boot_cpu_data.vfm )
     {
-    case 0x3c: /* HSM182, HSD172 - 4th gen Core */
-    case 0x3f: /* HSE117 - Xeon E5 v3 */
-    case 0x45: /* HSM182 - 4th gen Core */
-    case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
-    case 0x3d: /* BDM127 - 5th gen Core */
-    case 0x47: /* BDD117 - 5th gen Core (GT3)
-                  BDW117 - Xeon E3-1200 v4 */
-    case 0x4f: /* BDF85  - Xeon E5-2600 v4
-                  BDH75  - Core-i7 for LGA2011-v3 Socket
-                  BDX88  - Xeon E7-x800 v4 */
-    case 0x56: /* BDE105 - Xeon D-1500 */
+    case INTEL_HASWELL:     /* HSM182, HSD172 - 4th gen Core */
+    case INTEL_HASWELL_X:   /* HSE117 - Xeon E5 v3 */
+    case INTEL_HASWELL_L:   /* HSM182 - 4th gen Core */
+    case INTEL_HASWELL_G:   /* HSM182, HSD172 - 4th gen Core (GT3) */
+    case INTEL_BROADWELL:   /* BDM127 - 5th gen Core */
+    case INTEL_BROADWELL_G: /* BDD117 - 5th gen Core (GT3)
+                               BDW117 - Xeon E3-1200 v4 */
+    case INTEL_BROADWELL_X: /* BDF85  - Xeon E5-2600 v4
+                               BDH75  - Core-i7 for LGA2011-v3 Socket
+                               BDX88  - Xeon E7-x800 v4 */
+    case INTEL_BROADWELL_D: /* BDE105 - Xeon D-1500 */
         break;
     default:
         return;
@@ -3473,19 +3476,19 @@ static void __init ler_to_fixup_check(void)
      * that are not equal to bit[47].  Attempting to context switch this value
      * may cause a #GP.  Software should sign extend the MSR.
      */
-    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-         boot_cpu_data.x86 != 6 )
+    if ( boot_cpu_data.vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.family != 6 )
         return;
 
-    switch ( boot_cpu_data.x86_model )
+    switch ( boot_cpu_data.vfm )
     {
-    case 0x3d: /* BDM131 - 5th gen Core */
-    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
-                  BDW120 - Xeon E3-1200 v4 */
-    case 0x4f: /* BDF93  - Xeon E5-2600 v4
-                  BDH80  - Core-i7 for LGA2011-v3 Socket
-                  BDX93  - Xeon E7-x800 v4 */
-    case 0x56: /* BDE??? - Xeon D-1500 */
+    case INTEL_BROADWELL:   /* BDM131 - 5th gen Core */
+    case INTEL_BROADWELL_G: /* BDD??? - 5th gen Core (H-Processor line)
+                             * BDW120 - Xeon E3-1200 v4 */
+    case INTEL_BROADWELL_X: /* BDF93  - Xeon E5-2600 v4
+                             * BDH80  - Core-i7 for LGA2011-v3 Socket
+                             * BDX93  - Xeon E7-x800 v4 */
+    case INTEL_BROADWELL_D: /* BDE??? - Xeon D-1500 */
         ler_to_fixup_needed = true;
         break;
     }
-- 
2.51.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files
  2026-03-13 16:36 [PATCH v v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
                   ` (4 preceding siblings ...)
  2026-03-13 16:36 ` [PATCH v v3 5/7] x86: Remove x86 prefixed names from hvm code Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-19  9:24   ` Jan Beulich
  2026-03-13 16:36 ` [PATCH v v3 7/7] x86: Remove x86 prefixed names from cpuinfo Kevin Lampis
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

struct cpuinfo_x86
  .x86        => .family
  .x86_vendor => .vendor
  .x86_model  => .model
  .x86_mask   => .stepping

No functional change.

This work is part of making Xen safe for Intel family 18/19.

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- Switch uint8_t to unsigned int in vpmu_arch_initialise()
- Switch int to unsigned int in vpmu_init()
- Remove XXX comments

Changes in v3:
- No changes
---
 xen/arch/x86/cpu/centaur.c         | 4 ++--
 xen/arch/x86/cpu/hygon.c           | 4 ++--
 xen/arch/x86/cpu/intel_cacheinfo.c | 6 +++---
 xen/arch/x86/cpu/mtrr/generic.c    | 4 ++--
 xen/arch/x86/cpu/mwait-idle.c      | 4 ++--
 xen/arch/x86/cpu/vpmu.c            | 4 ++--
 xen/arch/x86/cpu/vpmu_amd.c        | 6 +++---
 xen/arch/x86/cpu/vpmu_intel.c      | 4 ++--
 8 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu/centaur.c b/xen/arch/x86/cpu/centaur.c
index d2e7c8ec99..9123b05dc1 100644
--- a/xen/arch/x86/cpu/centaur.c
+++ b/xen/arch/x86/cpu/centaur.c
@@ -41,7 +41,7 @@ static void init_c3(struct cpuinfo_x86 *c)
 		}
 	}
 
-	if (c->x86 == 0x6 && c->x86_model >= 0xf) {
+	if (c->family == 0x6 && c->model >= 0xf) {
 		c->x86_cache_alignment = c->x86_clflush_size * 2;
 		__set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
 	}
@@ -52,7 +52,7 @@ static void init_c3(struct cpuinfo_x86 *c)
 
 static void cf_check init_centaur(struct cpuinfo_x86 *c)
 {
-	if (c->x86 == 6)
+	if (c->family == 6)
 		init_c3(c);
 }
 
diff --git a/xen/arch/x86/cpu/hygon.c b/xen/arch/x86/cpu/hygon.c
index b99d83ed4d..7a9fc25d31 100644
--- a/xen/arch/x86/cpu/hygon.c
+++ b/xen/arch/x86/cpu/hygon.c
@@ -41,12 +41,12 @@ static void cf_check init_hygon(struct cpuinfo_x86 *c)
 
 	/* Probe for NSCB on Zen2 CPUs when not virtualised */
 	if (!cpu_has_hypervisor && !cpu_has_nscb && c == &boot_cpu_data &&
-	    c->x86 == 0x18)
+	    c->family == 0x18)
 		detect_zen2_null_seg_behaviour();
 
 	/*
 	 * TODO: Check heuristic safety with Hygon first
-	if (c->x86 == 0x18)
+	if (c->family == 0x18)
 		amd_init_spectral_chicken();
 	 */
 
diff --git a/xen/arch/x86/cpu/intel_cacheinfo.c b/xen/arch/x86/cpu/intel_cacheinfo.c
index e88faa7545..a81d0764fb 100644
--- a/xen/arch/x86/cpu/intel_cacheinfo.c
+++ b/xen/arch/x86/cpu/intel_cacheinfo.c
@@ -168,15 +168,15 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 	 * Don't use cpuid2 if cpuid4 is supported. For P4, we use cpuid2 for
 	 * trace cache
 	 */
-	if ((num_cache_leaves == 0 || c->x86 == 15) && c->cpuid_level > 1 &&
-	    c->x86_vendor != X86_VENDOR_SHANGHAI)
+	if ((num_cache_leaves == 0 || c->family == 15) && c->cpuid_level > 1 &&
+	    c->vendor != X86_VENDOR_SHANGHAI)
 	{
 		/* supports eax=2  call */
 		unsigned int i, j, n, regs[4];
 		unsigned char *dp = (unsigned char *)regs;
 		int only_trace = 0;
 
-		if (num_cache_leaves != 0 && c->x86 == 15)
+		if (num_cache_leaves != 0 && c->family == 15)
 			only_trace = 1;
 
 		/* Number of times to iterate */
diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 0ca6a2083f..23c279eb9a 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -218,8 +218,8 @@ static void __init print_mtrr_state(const char *level)
 			printk("%s  %u disabled\n", level, i);
 	}
 
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+	if (boot_cpu_data.vendor == X86_VENDOR_AMD ||
+	     boot_cpu_data.vendor == X86_VENDOR_HYGON) {
 		uint64_t syscfg, tom2;
 
 		rdmsrl(MSR_K8_SYSCFG, syscfg);
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 5962ec1db9..6776eeb9ac 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -1637,7 +1637,7 @@ static int __init mwait_idle_probe(void)
 		lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE;
 
 	pr_debug(PREFIX "v" MWAIT_IDLE_VERSION " model %#x\n",
-		 boot_cpu_data.x86_model);
+		 boot_cpu_data.model);
 
 	pr_debug(PREFIX "lapic_timer_reliable_states %#x\n",
 		 lapic_timer_reliable_states);
@@ -1816,7 +1816,7 @@ bool __init mwait_pc10_supported(void)
 {
 	unsigned int ecx, edx, dummy;
 
-	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+	if (boot_cpu_data.vendor != X86_VENDOR_INTEL ||
 	    !cpu_has_monitor ||
 	    boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
 		return false;
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index c28192ea26..470f5ec98d 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -398,7 +398,7 @@ int vpmu_load(struct vcpu *v, bool from_guest)
 static int vpmu_arch_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
+    unsigned int vendor = current_cpu_data.vendor;
     int ret;
 
     BUILD_BUG_ON(sizeof(struct xen_pmu_intel_ctxt) > XENPMU_CTXT_PAD_SZ);
@@ -815,7 +815,7 @@ static struct notifier_block cpu_nfb = {
 
 static int __init cf_check vpmu_init(void)
 {
-    int vendor = current_cpu_data.x86_vendor;
+    unsigned int vendor = current_cpu_data.vendor;
     const struct arch_vpmu_ops *ops = NULL;
 
     if ( !opt_vpmu_enabled )
diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index d1f6bd5495..943a0f4ebe 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -532,7 +532,7 @@ static const struct arch_vpmu_ops *__init common_init(void)
     if ( !num_counters )
     {
         printk(XENLOG_WARNING "VPMU: Unsupported CPU family %#x\n",
-               current_cpu_data.x86);
+               current_cpu_data.family);
         return ERR_PTR(-EINVAL);
     }
 
@@ -557,7 +557,7 @@ static const struct arch_vpmu_ops *__init common_init(void)
 
 const struct arch_vpmu_ops *__init amd_vpmu_init(void)
 {
-    switch ( current_cpu_data.x86 )
+    switch ( current_cpu_data.family )
     {
     case 0x15:
     case 0x17:
@@ -585,7 +585,7 @@ const struct arch_vpmu_ops *__init amd_vpmu_init(void)
 
 const struct arch_vpmu_ops *__init hygon_vpmu_init(void)
 {
-    switch ( current_cpu_data.x86 )
+    switch ( current_cpu_data.family )
     {
     case 0x18:
         num_counters = F15H_NUM_COUNTERS;
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 1e3b06ef8e..ed9f62b936 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -917,7 +917,7 @@ const struct arch_vpmu_ops *__init core2_vpmu_init(void)
         return ERR_PTR(-EINVAL);
     }
 
-    if ( current_cpu_data.x86 != 6 )
+    if ( current_cpu_data.family != 6 )
     {
         printk(XENLOG_WARNING "VPMU: only family 6 is supported\n");
         return ERR_PTR(-EINVAL);
@@ -958,7 +958,7 @@ const struct arch_vpmu_ops *__init core2_vpmu_init(void)
               sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt;
 
     /* TODO: It's clearly incorrect for this to quirk all Intel Fam6 CPUs. */
-    pmc_quirk = current_cpu_data.x86 == 6;
+    pmc_quirk = current_cpu_data.family == 6;
 
     if ( sizeof(struct xen_pmu_data) + sizeof(uint64_t) * fixed_pmc_cnt +
          sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt > PAGE_SIZE )
-- 
2.51.1




* [PATCH v v3 7/7] x86: Remove x86 prefixed names from cpuinfo
  2026-03-13 16:36 [PATCH v v3 0/7] Remove x86 prefixed names from cpuinfo Kevin Lampis
                   ` (5 preceding siblings ...)
  2026-03-13 16:36 ` [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files Kevin Lampis
@ 2026-03-13 16:36 ` Kevin Lampis
  2026-03-23 10:03   ` Jan Beulich
  6 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-13 16:36 UTC (permalink / raw)
  To: xen-devel; +Cc: jbeulich, andrew.cooper3, roger.pau, Kevin Lampis

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
---
Changes in v2:
- Remove the unneeded unions

Changes in v3:
- No changes
---
 xen/arch/x86/include/asm/cpufeature.h | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index dcd223d84f..11661a114f 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -43,29 +43,16 @@
 #ifndef __ASSEMBLER__
 
 struct cpuinfo_x86 {
-    /* TODO: Phase out the x86 prefixed names. */
     union {
         struct {
-            union {
-                uint8_t x86_model;
-                uint8_t model;
-            };
-            union {
-                uint8_t x86;
-                uint8_t family;
-            };
-            union {
-                uint8_t x86_vendor;
-                uint8_t vendor;
-            };
+            uint8_t model;
+            uint8_t family;
+            uint8_t vendor;
             uint8_t _rsvd;             /* Use of this needs coordinating with VFM_MAKE() */
         };
         uint32_t vfm;                  /* Vendor Family Model */
     };
-    union {
-        uint8_t x86_mask;
-        uint8_t stepping;
-    };
+    uint8_t stepping;
 
     unsigned int cpuid_level;          /* Maximum supported CPUID level */
     unsigned int extended_cpuid_level; /* Maximum supported CPUID extended level */
-- 
2.51.1




* Re: [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files
  2026-03-13 16:36 ` [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files Kevin Lampis
@ 2026-03-19  9:24   ` Jan Beulich
  2026-03-19 11:34     ` Kevin Lampis
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2026-03-19  9:24 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> struct cpuinfo_x86
>   .x86        => .family
>   .x86_vendor => .vendor
>   .x86_model  => .model
>   .x86_mask   => .stepping
> 
> No functional change.
> 
> This work is part of making Xen safe for Intel family 18/19.
> 
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
> ---
> Changes in v2:
> - Switch uint8_t to unsigned int in vpmu_arch_initialise()
> - Switch int to unsigned int in vpmu_init()
> - Remove XXX comments
> 
> Changes in v3:
> - No changes

With that - where did the ack go?

Jan



* Re: [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files
  2026-03-19  9:24   ` Jan Beulich
@ 2026-03-19 11:34     ` Kevin Lampis
  2026-03-19 13:51       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Kevin Lampis @ 2026-03-19 11:34 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monne, xen-devel@lists.xenproject.org

>With that - where did the ack go?

When I post a new revision, should I add the `Acked-by:` line under my `Signed-off-by:` line in the commit message? Is that the right procedure?


* Re: [PATCH v v3 6/7] x86: Remove x86 prefixed names from x86/cpu/ files
  2026-03-19 11:34     ` Kevin Lampis
@ 2026-03-19 13:51       ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-19 13:51 UTC (permalink / raw)
  To: Kevin Lampis
  Cc: Andrew Cooper, Roger Pau Monne, xen-devel@lists.xenproject.org

On 19.03.2026 12:34, Kevin Lampis wrote:
>> With that - where did the ack go?
> 
> When I post a new revision should I add the `Acked-by: ` line under my `Signed-off-by:` line in the commit message? Is that the right procedure?

Yes - any tags you have collected you would accumulate in subsequent
submissions. Unless of course they have been invalidated by you making
non-obvious changes. (As "obvious" can be subjective, you may want to
err on the side of dropping tags, if in any doubt.)

More generally for formal things like this one: Please simply keep an
eye on how others do respective things.
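For illustration only (assuming the ack was given on v2), a v3 posting of patch 6/7 that carries the tag forward would end its commit message with a trailer like:

```
x86: Remove x86 prefixed names from x86/cpu/ files

...

Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
```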

Jan



* Re: [PATCH v v3 1/7] x86: relax some CPU checks for non-64 bit CPUs
  2026-03-13 16:36 ` [PATCH v v3 1/7] x86: relax some CPU checks for non-64 bit CPUs Kevin Lampis
@ 2026-03-23  9:54   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-23  9:54 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> These checks guarded against non-64-bit CPU models, which Xen no
> longer supports, so the checks are no longer needed.
> 
> The switch statement was removed from mcheck_init()
> to support Intel family 18/19.
> 
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v v3 2/7] x86: Remove x86 prefixed names from mcheck code
  2026-03-13 16:36 ` [PATCH v v3 2/7] x86: Remove x86 prefixed names from mcheck code Kevin Lampis
@ 2026-03-23  9:59   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-23  9:59 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> struct cpuinfo_x86
>   .x86        => .family
>   .x86_vendor => .vendor
>   .x86_model  => .model
>   .x86_mask   => .stepping
> 
> No functional change.
> 
> This work is part of making Xen safe for Intel family 18/19.
> 
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>

This description doesn't quite cover ...

> --- a/xen/arch/x86/cpu/mcheck/mce_intel.c
> +++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
> @@ -711,10 +711,7 @@ static bool mce_is_broadcast(struct cpuinfo_x86 *c)
>       * DisplayFamily_DisplayModel encoding of 06H_EH and above,
>       * a MCA signal is broadcast to all logical processors in the system
>       */
> -    if ( c->x86_vendor == X86_VENDOR_INTEL && c->x86 == 6 &&
> -         c->x86_model >= 0xe )
> -        return true;
> -    return false;
> +    return c->vendor == X86_VENDOR_INTEL && c->family != 0xf;
>  }

... this change. Code changes themselves look alright.

Jan



* Re: [PATCH v v3 3/7] x86: Remove x86 prefixed names from acpi code
  2026-03-13 16:36 ` [PATCH v v3 3/7] x86: Remove x86 prefixed names from acpi code Kevin Lampis
@ 2026-03-23 10:02   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-23 10:02 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> struct cpuinfo_x86
>   .x86        => .family
>   .x86_vendor => .vendor
>   .x86_model  => .model
>   .x86_mask   => .stepping
> 
> No functional change.
> 
> This work is part of making Xen safe for Intel family 18/19.
> 
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v v3 7/7] x86: Remove x86 prefixed names from cpuinfo
  2026-03-13 16:36 ` [PATCH v v3 7/7] x86: Remove x86 prefixed names from cpuinfo Kevin Lampis
@ 2026-03-23 10:03   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-23 10:03 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v v3 5/7] x86: Remove x86 prefixed names from hvm code
  2026-03-13 16:36 ` [PATCH v v3 5/7] x86: Remove x86 prefixed names from hvm code Kevin Lampis
@ 2026-03-23 10:06   ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2026-03-23 10:06 UTC (permalink / raw)
  To: Kevin Lampis; +Cc: andrew.cooper3, roger.pau, xen-devel

On 13.03.2026 17:36, Kevin Lampis wrote:
> struct cpuinfo_x86
>   .x86        => .family
>   .x86_vendor => .vendor
>   .x86_model  => .model
>   .x86_mask   => .stepping
> 
> No functional change.
> 
> This work is part of making Xen safe for Intel family 18/19.
> 
> Signed-off-by: Kevin Lampis <kevin.lampis@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> @@ -2163,8 +2164,7 @@ int __init vmx_vmcs_init(void)
>  
>      if ( opt_ept_ad < 0 )
>          /* Work around Erratum AVR41 on Avoton processors. */
> -        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
> -                       boot_cpu_data.x86_model == 0x4d);
> +        opt_ept_ad = !(boot_cpu_data.vfm == INTEL_ATOM_SILVERMONT_D);

Nit: Why not simply

        opt_ept_ad = (boot_cpu_data.vfm != INTEL_ATOM_SILVERMONT_D);

? Will take the liberty of adjusting while committing.

Jan



