linux-doc.vger.kernel.org archive mirror
* [PATCH v9 00/22] Enable FRED with KVM VMX
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

This patch set enables the Intel flexible return and event delivery
(FRED) architecture with KVM VMX to allow guests to utilize FRED.

The FRED architecture defines simple new transitions that change
privilege level (ring transitions). The FRED architecture was
designed with the following goals:

1) Improve overall performance and response time by replacing event
   delivery through the interrupt descriptor table (IDT event
   delivery) and event return by the IRET instruction with lower
   latency transitions.

2) Improve software robustness by ensuring that event delivery
   establishes the full supervisor context and that event return
   establishes the full user context.

The new transitions defined by the FRED architecture are FRED event
delivery and, for returning from events, two FRED return instructions.
FRED event delivery can effect a transition from ring 3 to ring 0, but
it is also used to deliver events incident to ring 0. One FRED
instruction (ERETU) effects a return from ring 0 to ring 3, while the
other (ERETS) returns while remaining in ring 0. Collectively, FRED
event delivery and the FRED return instructions are called FRED
transitions.


The Intel VMX architecture is extended to run FRED guests.  The major
changes are:

1) New VMCS fields for FRED context management: two new event data
VMCS fields, eight new guest FRED context VMCS fields, and eight new
host FRED context VMCS fields.

2) VMX nested-exception support for proper virtualization of stack
levels introduced with the FRED architecture.
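
For concreteness, the eight guest FRED context fields are 64-bit VMCS
fields, one per VMCS-managed FRED MSR (excerpted from the
arch/x86/include/asm/vmx.h hunk in patch 7, _HIGH variants omitted;
the host-state counterparts mirror them starting at encoding 0x2c08):

	GUEST_IA32_FRED_CONFIG		= 0x0000281a,
	GUEST_IA32_FRED_RSP1		= 0x0000281c,
	GUEST_IA32_FRED_RSP2		= 0x0000281e,
	GUEST_IA32_FRED_RSP3		= 0x00002820,
	GUEST_IA32_FRED_STKLVLS		= 0x00002822,
	GUEST_IA32_FRED_SSP1		= 0x00002824,
	GUEST_IA32_FRED_SSP2		= 0x00002826,
	GUEST_IA32_FRED_SSP3		= 0x00002828,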

Search for the latest FRED spec in most search engines with this search
pattern:

  site:intel.com FRED (flexible return and event delivery) specification


Although FRED and CET supervisor shadow stacks are independent CPU
features, FRED unconditionally includes FRED shadow stack pointer
MSRs IA32_FRED_SSP[0123], and IA32_FRED_SSP0 is just an alias of the
CET MSR IA32_PL0_SSP.  IOW, the state management of MSR IA32_PL0_SSP
becomes an overlap area, and Sean requested that FRED virtualization
land after CET virtualization [1].

With CET virtualization now merged in v6.18, the path is clear to submit
the FRED virtualization patch series :).


Changes in v9:
* Rebased to the latest kvm-x86/next branch, tag kvm-x86-next-2025.10.20-2.
* Guard FRED state save/restore with guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)
  in patch 19 (syzbot & Chao).
* Use array indexing for exception stack access, eliminating the need for
  the ESTACKS_MEMBERS() macro in struct cea_exception_stacks, and then
  export __this_cpu_ist_top_va() in a subsequent patch (Dave Hansen).
* Rewrote some of the change logs.


Following is the link to v8 of this patch set:
https://lore.kernel.org/lkml/20251014010950.1568389-1-xin@zytor.com/


[1]: https://lore.kernel.org/kvm/ZvQaNRhrsSJTYji3@google.com/


Xin Li (18):
  KVM: VMX: Enable support for secondary VM exit controls
  KVM: VMX: Initialize VM entry/exit FRED controls in vmcs_config
  KVM: VMX: Disable FRED if FRED consistency checks fail
  KVM: VMX: Initialize VMCS FRED fields
  KVM: VMX: Set FRED MSR intercepts
  KVM: VMX: Save/restore guest FRED RSP0
  KVM: VMX: Add support for saving and restoring FRED MSRs
  KVM: x86: Add a helper to detect if FRED is enabled for a vCPU
  KVM: VMX: Virtualize FRED event_data
  KVM: VMX: Virtualize FRED nested exception tracking
  KVM: x86: Mark CR4.FRED as not reserved
  KVM: VMX: Dump FRED context in dump_vmcs()
  KVM: x86: Advertise support for FRED
  KVM: nVMX: Enable support for secondary VM exit controls
  KVM: nVMX: Handle FRED VMCS fields in nested VMX context
  KVM: nVMX: Validate FRED-related VMCS fields
  KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks
  KVM: nVMX: Enable VMX FRED controls

Xin Li (Intel) (4):
  x86/cea: Prefix event stack names with ESTACK_
  x86/cea: Use array indexing to simplify exception stack access
  x86/cea: Export __this_cpu_ist_top_va() to KVM
  KVM: x86: Save/restore the nested flag of an exception

 Documentation/virt/kvm/api.rst        |  21 +-
 arch/x86/coco/sev/noinstr.c           |   4 +-
 arch/x86/coco/sev/vc-handle.c         |   2 +-
 arch/x86/include/asm/cpu_entry_area.h |  70 +++---
 arch/x86/include/asm/kvm_host.h       |  13 +-
 arch/x86/include/asm/msr-index.h      |   1 +
 arch/x86/include/asm/vmx.h            |  48 +++-
 arch/x86/include/uapi/asm/kvm.h       |   4 +-
 arch/x86/kernel/cpu/common.c          |  10 +-
 arch/x86/kernel/dumpstack_64.c        |  18 +-
 arch/x86/kernel/fred.c                |   6 +-
 arch/x86/kernel/traps.c               |   2 +-
 arch/x86/kvm/cpuid.c                  |   1 +
 arch/x86/kvm/kvm_cache_regs.h         |  15 ++
 arch/x86/kvm/svm/svm.c                |   2 +-
 arch/x86/kvm/vmx/capabilities.h       |  25 +-
 arch/x86/kvm/vmx/nested.c             | 343 +++++++++++++++++++++++---
 arch/x86/kvm/vmx/nested.h             |  22 ++
 arch/x86/kvm/vmx/vmcs.h               |   1 +
 arch/x86/kvm/vmx/vmcs12.c             |  19 ++
 arch/x86/kvm/vmx/vmcs12.h             |  40 ++-
 arch/x86/kvm/vmx/vmcs_shadow_fields.h |  37 ++-
 arch/x86/kvm/vmx/vmx.c                | 247 +++++++++++++++++--
 arch/x86/kvm/vmx/vmx.h                |  54 +++-
 arch/x86/kvm/x86.c                    | 131 +++++++++-
 arch/x86/kvm/x86.h                    |   8 +-
 arch/x86/mm/cpu_entry_area.c          |  39 ++-
 arch/x86/mm/fault.c                   |   2 +-
 include/uapi/linux/kvm.h              |   1 +
 29 files changed, 1038 insertions(+), 148 deletions(-)


base-commit: 4cc167c50eb19d44ac7e204938724e685e3d8057
-- 
2.51.0



* [PATCH v9 01/22] KVM: VMX: Enable support for secondary VM exit controls
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Introduce infrastructure to support secondary VM exit controls.

Always load the controls when supported by hardware, though all control
bits remain clear in this patch.
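
For reference, the secondary exit controls are read with
adjust_vmx_controls64().  A minimal sketch of that helper, assuming it
matches the existing KVM helper of the same name (64-bit control words
have no required/optional split in the capability MSR, so it is a
simple mask):

	static __init u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
	{
		u64 allowed;

		/* Allowed-1 settings: a bit may be set only if set here. */
		rdmsrl(msr, allowed);

		return ctl_opt & allowed;
	}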

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Changes in v4:
* Fix clearing VM_EXIT_ACTIVATE_SECONDARY_CONTROLS (Chao Gao).
* Check VM exit/entry consistency based on the new macro from Sean
  Christopherson.

Change in v3:
* Do FRED controls consistency checks in the VM exit/entry consistency
  check framework (Sean Christopherson).

Change in v2:
* Always load the secondary VM exit controls (Sean Christopherson).
---
 arch/x86/include/asm/msr-index.h |  1 +
 arch/x86/include/asm/vmx.h       |  3 +++
 arch/x86/kvm/vmx/capabilities.h  |  9 ++++++++-
 arch/x86/kvm/vmx/vmcs.h          |  1 +
 arch/x86/kvm/vmx/vmx.c           | 29 +++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/vmx.h           |  7 ++++++-
 6 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 9e1720d73244..baf5e1648418 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1225,6 +1225,7 @@
 #define MSR_IA32_VMX_TRUE_ENTRY_CTLS     0x00000490
 #define MSR_IA32_VMX_VMFUNC             0x00000491
 #define MSR_IA32_VMX_PROCBASED_CTLS3	0x00000492
+#define MSR_IA32_VMX_EXIT_CTLS2		0x00000493
 
 /* Resctrl MSRs: */
 /* - Intel: */
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index c85c50019523..1f60c04d11fb 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -107,6 +107,7 @@
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
 #define VM_EXIT_LOAD_CET_STATE                  0x10000000
+#define VM_EXIT_ACTIVATE_SECONDARY_CONTROLS	0x80000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -262,6 +263,8 @@ enum vmcs_field {
 	SHARED_EPT_POINTER		= 0x0000203C,
 	PID_POINTER_TABLE		= 0x00002042,
 	PID_POINTER_TABLE_HIGH		= 0x00002043,
+	SECONDARY_VM_EXIT_CONTROLS	= 0x00002044,
+	SECONDARY_VM_EXIT_CONTROLS_HIGH	= 0x00002045,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
 	GUEST_PHYSICAL_ADDRESS_HIGH     = 0x00002401,
 	VMCS_LINK_POINTER               = 0x00002800,
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 02aadb9d730e..6bd67c40ca3b 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -55,8 +55,9 @@ struct vmcs_config {
 	u32 cpu_based_exec_ctrl;
 	u32 cpu_based_2nd_exec_ctrl;
 	u64 cpu_based_3rd_exec_ctrl;
-	u32 vmexit_ctrl;
 	u32 vmentry_ctrl;
+	u32 vmexit_ctrl;
+	u64 vmexit_2nd_ctrl;
 	u64 misc;
 	struct nested_vmx_msrs nested;
 };
@@ -141,6 +142,12 @@ static inline bool cpu_has_tertiary_exec_ctrls(void)
 		CPU_BASED_ACTIVATE_TERTIARY_CONTROLS;
 }
 
+static inline bool cpu_has_secondary_vmexit_ctrls(void)
+{
+	return vmcs_config.vmexit_ctrl &
+		VM_EXIT_ACTIVATE_SECONDARY_CONTROLS;
+}
+
 static inline bool cpu_has_vmx_virtualize_apic_accesses(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
index b25625314658..ae152a9d1963 100644
--- a/arch/x86/kvm/vmx/vmcs.h
+++ b/arch/x86/kvm/vmx/vmcs.h
@@ -47,6 +47,7 @@ struct vmcs_host_state {
 struct vmcs_controls_shadow {
 	u32 vm_entry;
 	u32 vm_exit;
+	u64 secondary_vm_exit;
 	u32 pin;
 	u32 exec;
 	u32 secondary_exec;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1021d3b65ea0..8de841c9c905 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2595,8 +2595,9 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	u32 _cpu_based_exec_control = 0;
 	u32 _cpu_based_2nd_exec_control = 0;
 	u64 _cpu_based_3rd_exec_control = 0;
-	u32 _vmexit_control = 0;
 	u32 _vmentry_control = 0;
+	u32 _vmexit_control = 0;
+	u64 _vmexit2_control = 0;
 	u64 basic_msr;
 	u64 misc_msr;
 
@@ -2617,6 +2618,12 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 		{ VM_ENTRY_LOAD_CET_STATE,		VM_EXIT_LOAD_CET_STATE },
 	};
 
+	struct {
+		u32 entry_control;
+		u64 exit_control;
+	} const vmcs_entry_exit2_pairs[] = {
+	};
+
 	memset(vmcs_conf, 0, sizeof(*vmcs_conf));
 
 	if (adjust_vmx_controls(KVM_REQUIRED_VMX_CPU_BASED_VM_EXEC_CONTROL,
@@ -2703,10 +2710,19 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 				&_vmentry_control))
 		return -EIO;
 
+	if (_vmexit_control & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS)
+		_vmexit2_control =
+			adjust_vmx_controls64(KVM_OPTIONAL_VMX_SECONDARY_VM_EXIT_CONTROLS,
+					      MSR_IA32_VMX_EXIT_CTLS2);
+
 	if (vmx_check_entry_exit_pairs(vmcs_entry_exit_pairs,
 				       _vmentry_control, _vmexit_control))
 		return -EIO;
 
+	if (vmx_check_entry_exit_pairs(vmcs_entry_exit2_pairs,
+				       _vmentry_control, _vmexit2_control))
+		return -EIO;
+
 	/*
 	 * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they
 	 * can't be used due to an errata where VM Exit may incorrectly clear
@@ -2755,8 +2771,9 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
 	vmcs_conf->cpu_based_2nd_exec_ctrl = _cpu_based_2nd_exec_control;
 	vmcs_conf->cpu_based_3rd_exec_ctrl = _cpu_based_3rd_exec_control;
-	vmcs_conf->vmexit_ctrl         = _vmexit_control;
 	vmcs_conf->vmentry_ctrl        = _vmentry_control;
+	vmcs_conf->vmexit_ctrl         = _vmexit_control;
+	vmcs_conf->vmexit_2nd_ctrl     = _vmexit2_control;
 	vmcs_conf->misc	= misc_msr;
 
 #if IS_ENABLED(CONFIG_HYPERV)
@@ -4429,6 +4446,11 @@ static u32 vmx_get_initial_vmexit_ctrl(void)
 		~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
 }
 
+static u64 vmx_secondary_vmexit_ctrl(void)
+{
+	return vmcs_config.vmexit_2nd_ctrl;
+}
+
 void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4771,6 +4793,9 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 
 	vm_exit_controls_set(vmx, vmx_get_initial_vmexit_ctrl());
 
+	if (cpu_has_secondary_vmexit_ctrls())
+		secondary_vm_exit_controls_set(vmx, vmx_secondary_vmexit_ctrl());
+
 	/* 22.2.1, 20.8.1 */
 	vm_entry_controls_set(vmx, vmx_get_initial_vmentry_ctrl());
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 6cb04a6afeef..349d96e68f96 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -510,7 +510,11 @@ static inline u8 vmx_get_rvi(void)
 	       VM_EXIT_CLEAR_BNDCFGS |					\
 	       VM_EXIT_PT_CONCEAL_PIP |					\
 	       VM_EXIT_CLEAR_IA32_RTIT_CTL |				\
-	       VM_EXIT_LOAD_CET_STATE)
+	       VM_EXIT_LOAD_CET_STATE |					\
+	       VM_EXIT_ACTIVATE_SECONDARY_CONTROLS)
+
+#define KVM_REQUIRED_VMX_SECONDARY_VM_EXIT_CONTROLS (0)
+#define KVM_OPTIONAL_VMX_SECONDARY_VM_EXIT_CONTROLS (0)
 
 #define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL			\
 	(PIN_BASED_EXT_INTR_MASK |					\
@@ -623,6 +627,7 @@ static __always_inline void lname##_controls_changebit(struct vcpu_vmx *vmx, u##
 }
 BUILD_CONTROLS_SHADOW(vm_entry, VM_ENTRY_CONTROLS, 32)
 BUILD_CONTROLS_SHADOW(vm_exit, VM_EXIT_CONTROLS, 32)
+BUILD_CONTROLS_SHADOW(secondary_vm_exit, SECONDARY_VM_EXIT_CONTROLS, 64)
 BUILD_CONTROLS_SHADOW(pin, PIN_BASED_VM_EXEC_CONTROL, 32)
 BUILD_CONTROLS_SHADOW(exec, CPU_BASED_VM_EXEC_CONTROL, 32)
 BUILD_CONTROLS_SHADOW(secondary_exec, SECONDARY_VM_EXEC_CONTROL, 32)
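
The BUILD_CONTROLS_SHADOW(secondary_vm_exit, ...) addition above
generates the usual cached-control accessors for the new 64-bit field,
so callers can treat it like any other control word.  A sketch of the
generated API (assuming the existing macro pattern; init_vmcs() in this
patch uses the _set variant):

	secondary_vm_exit_controls_set(vmx, val);     /* write VMCS + cache */
	val = secondary_vm_exit_controls_get(vmx);    /* read cached value  */
	secondary_vm_exit_controls_setbit(vmx, bit);
	secondary_vm_exit_controls_clearbit(vmx, bit);
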
-- 
2.51.0



* [PATCH v9 02/22] KVM: VMX: Initialize VM entry/exit FRED controls in vmcs_config
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Set up VM entry/exit FRED controls in the global vmcs_config for proper
FRED VMCS field management:
  1) load guest FRED state upon VM entry.
  2) save guest FRED state during VM exit.
  3) load host FRED state during VM exit.

Also add FRED control consistency checks to the existing VM entry/exit
consistency check framework.
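
The rule each pair encodes is all-or-nothing.  An illustrative sketch
of the check for the FRED pair (not the actual
vmx_check_entry_exit_pairs() implementation; the name and shape are
assumptions):

	static bool fred_ctrls_consistent(u32 vmentry_ctrl, u64 vmexit2_ctrl)
	{
		const u64 fred_exit = SECONDARY_VM_EXIT_SAVE_IA32_FRED |
				      SECONDARY_VM_EXIT_LOAD_IA32_FRED;

		/* Either FRED controls are all supported, or none are used. */
		return !!(vmentry_ctrl & VM_ENTRY_LOAD_IA32_FRED) ==
		       ((vmexit2_ctrl & fred_exit) == fred_exit);
	}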

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---

Change in v5:
* Remove the pair VM_ENTRY_LOAD_IA32_FRED/VM_EXIT_ACTIVATE_SECONDARY_CONTROLS,
  since the secondary VM exit controls are unconditionally enabled anyway, and
  there are features other than FRED needing it (Chao Gao).
* Add TB from Xuelian Guo.

Change in v4:
* Do VM exit/entry consistency checks using the new macro from Sean
  Christopherson.

Changes in v3:
* Add FRED control consistency checks to the existing VM entry/exit
  consistency check framework (Sean Christopherson).
* Just do the unnecessary FRED state load/store on every VM entry/exit
  (Sean Christopherson).
---
 arch/x86/include/asm/vmx.h | 4 ++++
 arch/x86/kvm/vmx/vmx.c     | 2 ++
 arch/x86/kvm/vmx/vmx.h     | 7 +++++--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 1f60c04d11fb..dd79d027ea70 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -109,6 +109,9 @@
 #define VM_EXIT_LOAD_CET_STATE                  0x10000000
 #define VM_EXIT_ACTIVATE_SECONDARY_CONTROLS	0x80000000
 
+#define SECONDARY_VM_EXIT_SAVE_IA32_FRED	BIT_ULL(0)
+#define SECONDARY_VM_EXIT_LOAD_IA32_FRED	BIT_ULL(1)
+
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
 #define VM_ENTRY_LOAD_DEBUG_CONTROLS            0x00000004
@@ -122,6 +125,7 @@
 #define VM_ENTRY_PT_CONCEAL_PIP			0x00020000
 #define VM_ENTRY_LOAD_IA32_RTIT_CTL		0x00040000
 #define VM_ENTRY_LOAD_CET_STATE                 0x00100000
+#define VM_ENTRY_LOAD_IA32_FRED			0x00800000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8de841c9c905..be48ba2d70e1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2622,6 +2622,8 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 		u32 entry_control;
 		u64 exit_control;
 	} const vmcs_entry_exit2_pairs[] = {
+		{ VM_ENTRY_LOAD_IA32_FRED,
+			SECONDARY_VM_EXIT_SAVE_IA32_FRED | SECONDARY_VM_EXIT_LOAD_IA32_FRED },
 	};
 
 	memset(vmcs_conf, 0, sizeof(*vmcs_conf));
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 349d96e68f96..645b0343e88c 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -487,7 +487,8 @@ static inline u8 vmx_get_rvi(void)
 	 VM_ENTRY_LOAD_BNDCFGS |					\
 	 VM_ENTRY_PT_CONCEAL_PIP |					\
 	 VM_ENTRY_LOAD_IA32_RTIT_CTL |					\
-	 VM_ENTRY_LOAD_CET_STATE)
+	 VM_ENTRY_LOAD_CET_STATE |					\
+	 VM_ENTRY_LOAD_IA32_FRED)
 
 #define __KVM_REQUIRED_VMX_VM_EXIT_CONTROLS				\
 	(VM_EXIT_SAVE_DEBUG_CONTROLS |					\
@@ -514,7 +515,9 @@ static inline u8 vmx_get_rvi(void)
 	       VM_EXIT_ACTIVATE_SECONDARY_CONTROLS)
 
 #define KVM_REQUIRED_VMX_SECONDARY_VM_EXIT_CONTROLS (0)
-#define KVM_OPTIONAL_VMX_SECONDARY_VM_EXIT_CONTROLS (0)
+#define KVM_OPTIONAL_VMX_SECONDARY_VM_EXIT_CONTROLS			\
+	     (SECONDARY_VM_EXIT_SAVE_IA32_FRED |			\
+	      SECONDARY_VM_EXIT_LOAD_IA32_FRED)
 
 #define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL			\
 	(PIN_BASED_EXT_INTR_MASK |					\
-- 
2.51.0



* [PATCH v9 03/22] KVM: VMX: Disable FRED if FRED consistency checks fail
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Do not virtualize FRED if FRED consistency checks fail.

This can happen on broken hardware, or when running KVM on top of
another hypervisor that does not yet implement nested FRED correctly.

Suggested-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v5:
* Drop the cpu_feature_enabled() in cpu_has_vmx_fred() (Sean).
* Add TB from Xuelian Guo.

Change in v4:
* Call out the reason why not check FRED VM-exit controls in
  cpu_has_vmx_fred() (Chao Gao).
---
 arch/x86/kvm/vmx/capabilities.h | 10 ++++++++++
 arch/x86/kvm/vmx/vmx.c          |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 6bd67c40ca3b..651507627ef3 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -405,6 +405,16 @@ static inline bool vmx_pebs_supported(void)
 	return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
 }
 
+static inline bool cpu_has_vmx_fred(void)
+{
+	/*
+	 * setup_vmcs_config() guarantees FRED VM-entry/exit controls
+	 * are either all set or none.  So, no need to check FRED VM-exit
+	 * controls.
+	 */
+	return (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_FRED);
+}
+
 static inline bool cpu_has_notify_vmexit(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index be48ba2d70e1..fcfa99160018 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8020,6 +8020,9 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
 	}
 
+	if (!cpu_has_vmx_fred())
+		kvm_cpu_cap_clear(X86_FEATURE_FRED);
+
 	if (!enable_pmu)
 		kvm_cpu_cap_clear(X86_FEATURE_PDCM);
 	kvm_caps.supported_perf_cap = vmx_get_perf_capabilities();
-- 
2.51.0



* [PATCH v9 04/22] x86/cea: Prefix event stack names with ESTACK_
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

Add the ESTACK_ prefix to event stack names to improve clarity and
readability.  Without the prefix, names like DF, NMI, and DB are too
brief and potentially ambiguous.

This renaming also prepares for converting __this_cpu_ist_top_va from
a macro into a function that accepts an enum exception_stack_ordering
argument, without requiring changes to existing callsites.

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---

Changes in v7:
* Move rename code to this patch (Dave Hansen).
* Fix a vertical alignment (Dave Hansen).
---
 arch/x86/coco/sev/noinstr.c           |  4 ++--
 arch/x86/coco/sev/vc-handle.c         |  2 +-
 arch/x86/include/asm/cpu_entry_area.h | 26 +++++++++++++-------------
 arch/x86/kernel/cpu/common.c          | 10 +++++-----
 arch/x86/kernel/dumpstack_64.c        | 14 +++++++-------
 arch/x86/kernel/fred.c                |  6 +++---
 arch/x86/kernel/traps.c               |  2 +-
 arch/x86/mm/cpu_entry_area.c          | 12 ++++++------
 arch/x86/mm/fault.c                   |  2 +-
 9 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/x86/coco/sev/noinstr.c b/arch/x86/coco/sev/noinstr.c
index b527eafb6312..c3985c9b232c 100644
--- a/arch/x86/coco/sev/noinstr.c
+++ b/arch/x86/coco/sev/noinstr.c
@@ -30,7 +30,7 @@ static __always_inline bool on_vc_stack(struct pt_regs *regs)
 	if (ip_within_syscall_gap(regs))
 		return false;
 
-	return ((sp >= __this_cpu_ist_bottom_va(VC)) && (sp < __this_cpu_ist_top_va(VC)));
+	return ((sp >= __this_cpu_ist_bottom_va(ESTACK_VC)) && (sp < __this_cpu_ist_top_va(ESTACK_VC)));
 }
 
 /*
@@ -82,7 +82,7 @@ void noinstr __sev_es_ist_exit(void)
 	/* Read IST entry */
 	ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
 
-	if (WARN_ON(ist == __this_cpu_ist_top_va(VC)))
+	if (WARN_ON(ist == __this_cpu_ist_top_va(ESTACK_VC)))
 		return;
 
 	/* Read back old IST entry and write it to the TSS */
diff --git a/arch/x86/coco/sev/vc-handle.c b/arch/x86/coco/sev/vc-handle.c
index 7fc136a35334..1d3f086ae4c3 100644
--- a/arch/x86/coco/sev/vc-handle.c
+++ b/arch/x86/coco/sev/vc-handle.c
@@ -871,7 +871,7 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 
 static __always_inline bool is_vc2_stack(unsigned long sp)
 {
-	return (sp >= __this_cpu_ist_bottom_va(VC2) && sp < __this_cpu_ist_top_va(VC2));
+	return (sp >= __this_cpu_ist_bottom_va(ESTACK_VC2) && sp < __this_cpu_ist_top_va(ESTACK_VC2));
 }
 
 static __always_inline bool vc_from_invalid_context(struct pt_regs *regs)
diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 462fc34f1317..d0f884c28178 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -18,19 +18,19 @@
 
 /* Macro to enforce the same ordering and stack sizes */
 #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
-	char	DF_stack_guard[guardsize];			\
-	char	DF_stack[EXCEPTION_STKSZ];			\
-	char	NMI_stack_guard[guardsize];			\
-	char	NMI_stack[EXCEPTION_STKSZ];			\
-	char	DB_stack_guard[guardsize];			\
-	char	DB_stack[EXCEPTION_STKSZ];			\
-	char	MCE_stack_guard[guardsize];			\
-	char	MCE_stack[EXCEPTION_STKSZ];			\
-	char	VC_stack_guard[guardsize];			\
-	char	VC_stack[optional_stack_size];			\
-	char	VC2_stack_guard[guardsize];			\
-	char	VC2_stack[optional_stack_size];			\
-	char	IST_top_guard[guardsize];			\
+	char	ESTACK_DF_stack_guard[guardsize];		\
+	char	ESTACK_DF_stack[EXCEPTION_STKSZ];		\
+	char	ESTACK_NMI_stack_guard[guardsize];		\
+	char	ESTACK_NMI_stack[EXCEPTION_STKSZ];		\
+	char	ESTACK_DB_stack_guard[guardsize];		\
+	char	ESTACK_DB_stack[EXCEPTION_STKSZ];		\
+	char	ESTACK_MCE_stack_guard[guardsize];		\
+	char	ESTACK_MCE_stack[EXCEPTION_STKSZ];		\
+	char	ESTACK_VC_stack_guard[guardsize];		\
+	char	ESTACK_VC_stack[optional_stack_size];		\
+	char	ESTACK_VC2_stack_guard[guardsize];		\
+	char	ESTACK_VC2_stack[optional_stack_size];		\
+	char	ESTACK_IST_top_guard[guardsize];		\
 
 /* The exception stacks' physical storage. No guard pages required */
 struct exception_stacks {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c7d3512914ca..5f78b8f63d8d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2332,12 +2332,12 @@ static inline void setup_getcpu(int cpu)
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
 	/* Set up the per-CPU TSS IST stacks */
-	tss->x86_tss.ist[IST_INDEX_DF] = __this_cpu_ist_top_va(DF);
-	tss->x86_tss.ist[IST_INDEX_NMI] = __this_cpu_ist_top_va(NMI);
-	tss->x86_tss.ist[IST_INDEX_DB] = __this_cpu_ist_top_va(DB);
-	tss->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE);
+	tss->x86_tss.ist[IST_INDEX_DF]	= __this_cpu_ist_top_va(ESTACK_DF);
+	tss->x86_tss.ist[IST_INDEX_NMI]	= __this_cpu_ist_top_va(ESTACK_NMI);
+	tss->x86_tss.ist[IST_INDEX_DB]	= __this_cpu_ist_top_va(ESTACK_DB);
+	tss->x86_tss.ist[IST_INDEX_MCE]	= __this_cpu_ist_top_va(ESTACK_MCE);
 	/* Only mapped when SEV-ES is active */
-	tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC);
+	tss->x86_tss.ist[IST_INDEX_VC]	= __this_cpu_ist_top_va(ESTACK_VC);
 }
 #else /* CONFIG_X86_64 */
 static inline void tss_setup_ist(struct tss_struct *tss) { }
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 6c5defd6569a..40f51e278171 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -73,7 +73,7 @@ struct estack_pages {
 	 PFN_DOWN(CEA_ESTACK_OFFS(st) + CEA_ESTACK_SIZE(st) - 1)] = {	\
 		.offs	= CEA_ESTACK_OFFS(st),				\
 		.size	= CEA_ESTACK_SIZE(st),				\
-		.type	= STACK_TYPE_EXCEPTION + ESTACK_ ##st, }
+		.type	= STACK_TYPE_EXCEPTION + st, }
 
 /*
  * Array of exception stack page descriptors. If the stack is larger than
@@ -83,12 +83,12 @@ struct estack_pages {
  */
 static const
 struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = {
-	EPAGERANGE(DF),
-	EPAGERANGE(NMI),
-	EPAGERANGE(DB),
-	EPAGERANGE(MCE),
-	EPAGERANGE(VC),
-	EPAGERANGE(VC2),
+	EPAGERANGE(ESTACK_DF),
+	EPAGERANGE(ESTACK_NMI),
+	EPAGERANGE(ESTACK_DB),
+	EPAGERANGE(ESTACK_MCE),
+	EPAGERANGE(ESTACK_VC),
+	EPAGERANGE(ESTACK_VC2),
 };
 
 static __always_inline bool in_exception_stack(unsigned long *stack, struct stack_info *info)
diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
index 816187da3a47..06d944a3d051 100644
--- a/arch/x86/kernel/fred.c
+++ b/arch/x86/kernel/fred.c
@@ -87,7 +87,7 @@ void cpu_init_fred_rsps(void)
 	       FRED_STKLVL(X86_TRAP_DF,  FRED_DF_STACK_LEVEL));
 
 	/* The FRED equivalents to IST stacks... */
-	wrmsrq(MSR_IA32_FRED_RSP1, __this_cpu_ist_top_va(DB));
-	wrmsrq(MSR_IA32_FRED_RSP2, __this_cpu_ist_top_va(NMI));
-	wrmsrq(MSR_IA32_FRED_RSP3, __this_cpu_ist_top_va(DF));
+	wrmsrq(MSR_IA32_FRED_RSP1, __this_cpu_ist_top_va(ESTACK_DB));
+	wrmsrq(MSR_IA32_FRED_RSP2, __this_cpu_ist_top_va(ESTACK_NMI));
+	wrmsrq(MSR_IA32_FRED_RSP3, __this_cpu_ist_top_va(ESTACK_DF));
 }
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 6b22611e69cc..47b7b7495114 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -954,7 +954,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
 
 	if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY ||
 	    info.type > STACK_TYPE_EXCEPTION_LAST)
-		sp = __this_cpu_ist_top_va(VC2);
+		sp = __this_cpu_ist_top_va(ESTACK_VC2);
 
 sync:
 	/*
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 575f863f3c75..9fa371af8abc 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -151,15 +151,15 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	 * by guard pages so each stack must be mapped separately. DB2 is
 	 * not mapped; it just exists to catch triple nesting of #DB.
 	 */
-	cea_map_stack(DF);
-	cea_map_stack(NMI);
-	cea_map_stack(DB);
-	cea_map_stack(MCE);
+	cea_map_stack(ESTACK_DF);
+	cea_map_stack(ESTACK_NMI);
+	cea_map_stack(ESTACK_DB);
+	cea_map_stack(ESTACK_MCE);
 
 	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
 		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
-			cea_map_stack(VC);
-			cea_map_stack(VC2);
+			cea_map_stack(ESTACK_VC);
+			cea_map_stack(ESTACK_VC2);
 		}
 	}
 }
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 998bd807fc7b..1804eb86cc14 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -671,7 +671,7 @@ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
 		 * and then double-fault, though, because we're likely to
 		 * break the console driver and lose most of the stack dump.
 		 */
-		call_on_stack(__this_cpu_ist_top_va(DF) - sizeof(void*),
+		call_on_stack(__this_cpu_ist_top_va(ESTACK_DF) - sizeof(void*),
 			      handle_stack_overflow,
 			      ASM_CALL_ARG3,
 			      , [arg1] "r" (regs), [arg2] "r" (address), [arg3] "r" (&info));
-- 
2.51.0



* [PATCH v9 05/22] x86/cea: Use array indexing to simplify exception stack access
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

Refactor struct cea_exception_stacks to leverage array indexing for
exception stack access, improving code clarity and eliminating the
need for the ESTACKS_MEMBERS() macro.

Convert __this_cpu_ist_{bottom,top}_va() from macros to functions,
allowing removal of the now-obsolete CEA_ESTACK_BOT and CEA_ESTACK_TOP
macros.

Also drop CEA_ESTACK_SIZE, which just duplicated EXCEPTION_STKSZ.
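
With the array layout, every guard/stack pair has the same size, so
stack offsets reduce to uniform arithmetic.  A quick sanity sketch
using the macros from this patch (illustrative, not part of the diff):

	/* Each array element is one guard page followed by one stack. */
	BUILD_BUG_ON(CEA_ESTACK_OFFS(ESTACK_NMI) - CEA_ESTACK_OFFS(ESTACK_DF) !=
		     PAGE_SIZE + EXCEPTION_STKSZ);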

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---

Change in v9:
* Refactor first and then export in a separate patch (Dave Hansen).

Change in v7:
* Access cea_exception_stacks using array indexing (Dave Hansen).
* Use BUILD_BUG_ON(ESTACK_DF != 0) to ensure the starting index is 0
  (Dave Hansen).
* Remove Suggested-bys (Dave Hansen).
* Move rename code in a separate patch (Dave Hansen).

Change in v5:
* Export accessor instead of data (Christoph Hellwig).
* Add TB from Xuelian Guo.

Change in v4:
* Rewrite the change log and add comments to the export (Dave Hansen).
---
 arch/x86/include/asm/cpu_entry_area.h | 52 ++++++++++++---------------
 arch/x86/kernel/dumpstack_64.c        |  4 +--
 arch/x86/mm/cpu_entry_area.c          | 21 ++++++++++-
 3 files changed, 44 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index d0f884c28178..509e52fc3a0f 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -16,6 +16,19 @@
 #define VC_EXCEPTION_STKSZ	0
 #endif
 
+/*
+ * The exception stack ordering in [cea_]exception_stacks
+ */
+enum exception_stack_ordering {
+	ESTACK_DF,
+	ESTACK_NMI,
+	ESTACK_DB,
+	ESTACK_MCE,
+	ESTACK_VC,
+	ESTACK_VC2,
+	N_EXCEPTION_STACKS
+};
+
 /* Macro to enforce the same ordering and stack sizes */
 #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
 	char	ESTACK_DF_stack_guard[guardsize];		\
@@ -39,37 +52,22 @@ struct exception_stacks {
 
 /* The effective cpu entry area mapping with guard pages. */
 struct cea_exception_stacks {
-	ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
-};
-
-/*
- * The exception stack ordering in [cea_]exception_stacks
- */
-enum exception_stack_ordering {
-	ESTACK_DF,
-	ESTACK_NMI,
-	ESTACK_DB,
-	ESTACK_MCE,
-	ESTACK_VC,
-	ESTACK_VC2,
-	N_EXCEPTION_STACKS
+	struct {
+		char stack_guard[PAGE_SIZE];
+		char stack[EXCEPTION_STKSZ];
+	} event_stacks[N_EXCEPTION_STACKS];
+	char IST_top_guard[PAGE_SIZE];
 };
 
-#define CEA_ESTACK_SIZE(st)					\
-	sizeof(((struct cea_exception_stacks *)0)->st## _stack)
-
-#define CEA_ESTACK_BOT(ceastp, st)				\
-	((unsigned long)&(ceastp)->st## _stack)
-
-#define CEA_ESTACK_TOP(ceastp, st)				\
-	(CEA_ESTACK_BOT(ceastp, st) + CEA_ESTACK_SIZE(st))
-
 #define CEA_ESTACK_OFFS(st)					\
-	offsetof(struct cea_exception_stacks, st## _stack)
+	offsetof(struct cea_exception_stacks, event_stacks[st].stack)
 
 #define CEA_ESTACK_PAGES					\
 	(sizeof(struct cea_exception_stacks) / PAGE_SIZE)
 
+extern unsigned long __this_cpu_ist_top_va(enum exception_stack_ordering stack);
+extern unsigned long __this_cpu_ist_bottom_va(enum exception_stack_ordering stack);
+
 #endif
 
 #ifdef CONFIG_X86_32
@@ -144,10 +142,4 @@ static __always_inline struct entry_stack *cpu_entry_stack(int cpu)
 	return &get_cpu_entry_area(cpu)->entry_stack_page.stack;
 }
 
-#define __this_cpu_ist_top_va(name)					\
-	CEA_ESTACK_TOP(__this_cpu_read(cea_exception_stacks), name)
-
-#define __this_cpu_ist_bottom_va(name)					\
-	CEA_ESTACK_BOT(__this_cpu_read(cea_exception_stacks), name)
-
 #endif
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 40f51e278171..93b10b264e53 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -70,9 +70,9 @@ struct estack_pages {
 
 #define EPAGERANGE(st)							\
 	[PFN_DOWN(CEA_ESTACK_OFFS(st)) ...				\
-	 PFN_DOWN(CEA_ESTACK_OFFS(st) + CEA_ESTACK_SIZE(st) - 1)] = {	\
+	 PFN_DOWN(CEA_ESTACK_OFFS(st) + EXCEPTION_STKSZ - 1)] = {	\
 		.offs	= CEA_ESTACK_OFFS(st),				\
-		.size	= CEA_ESTACK_SIZE(st),				\
+		.size	= EXCEPTION_STKSZ,				\
 		.type	= STACK_TYPE_EXCEPTION + st, }
 
 /*
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 9fa371af8abc..b3d90f9cfbb1 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -18,6 +18,25 @@ static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage)
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct exception_stacks, exception_stacks);
 DEFINE_PER_CPU(struct cea_exception_stacks*, cea_exception_stacks);
 
+/*
+ * Typically invoked by entry code, so must be noinstr.
+ */
+noinstr unsigned long __this_cpu_ist_bottom_va(enum exception_stack_ordering stack)
+{
+	struct cea_exception_stacks *s;
+
+	BUILD_BUG_ON(ESTACK_DF != 0);
+
+	s = __this_cpu_read(cea_exception_stacks);
+
+	return (unsigned long)&s->event_stacks[stack].stack;
+}
+
+noinstr unsigned long __this_cpu_ist_top_va(enum exception_stack_ordering stack)
+{
+	return __this_cpu_ist_bottom_va(stack) + EXCEPTION_STKSZ;
+}
+
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, _cea_offset);
 
 static __always_inline unsigned int cea_offset(unsigned int cpu)
@@ -132,7 +151,7 @@ static void __init percpu_setup_debug_store(unsigned int cpu)
 
 #define cea_map_stack(name) do {					\
 	npages = sizeof(estacks->name## _stack) / PAGE_SIZE;		\
-	cea_map_percpu_pages(cea->estacks.name## _stack,		\
+	cea_map_percpu_pages(cea->estacks.event_stacks[name].stack,	\
 			estacks->name## _stack, npages, PAGE_KERNEL);	\
 	} while (0)
 
-- 
2.51.0



* [PATCH v9 06/22] x86/cea: Export __this_cpu_ist_top_va() to KVM
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

Export __this_cpu_ist_top_va() to allow KVM to retrieve the per-CPU
exception stack top.

FRED introduced new fields in the host-state area of the VMCS for stack
levels 1->3 (HOST_IA32_FRED_RSP[123]), each respectively corresponding to
per-CPU exception stacks for #DB, NMI and #DF.  KVM must populate these
fields each time a vCPU is loaded onto a CPU.

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---
 arch/x86/mm/cpu_entry_area.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index b3d90f9cfbb1..e507621d5c20 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -19,6 +19,11 @@ static DEFINE_PER_CPU_PAGE_ALIGNED(struct exception_stacks, exception_stacks);
 DEFINE_PER_CPU(struct cea_exception_stacks*, cea_exception_stacks);
 
 /*
+ * FRED introduced new fields in the host-state area of the VMCS for
+ * stack levels 1->3 (HOST_IA32_FRED_RSP[123]), each respectively
+ * corresponding to per CPU stacks for #DB, NMI and #DF.  KVM must
+ * populate these each time a vCPU is loaded onto a CPU.
+ *
  * Typically invoked by entry code, so must be noinstr.
  */
 noinstr unsigned long __this_cpu_ist_bottom_va(enum exception_stack_ordering stack)
@@ -36,6 +41,7 @@ noinstr unsigned long __this_cpu_ist_top_va(enum exception_stack_ordering stack)
 {
 	return __this_cpu_ist_bottom_va(stack) + EXCEPTION_STKSZ;
 }
+EXPORT_SYMBOL_FOR_MODULES(__this_cpu_ist_top_va, "kvm-intel");
 
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, _cea_offset);
 
-- 
2.51.0



* [PATCH v9 07/22] KVM: VMX: Initialize VMCS FRED fields
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Initialize the host VMCS FRED fields with the host FRED MSRs' values
and the guest VMCS FRED fields to 0.

FRED CPU state is managed in 9 new FRED MSRs:
        IA32_FRED_CONFIG,
        IA32_FRED_STKLVLS,
        IA32_FRED_RSP0,
        IA32_FRED_RSP1,
        IA32_FRED_RSP2,
        IA32_FRED_RSP3,
        IA32_FRED_SSP1,
        IA32_FRED_SSP2,
        IA32_FRED_SSP3,
as well as a few existing CPU registers and MSRs:
        CR4.FRED,
        IA32_STAR,
        IA32_KERNEL_GS_BASE,
        IA32_PL0_SSP (also known as IA32_FRED_SSP0).

CR4, IA32_KERNEL_GS_BASE and IA32_STAR are already well managed.
Except for IA32_FRED_RSP0 and IA32_FRED_SSP0, all other FRED CPU
state MSRs have corresponding VMCS fields in both the host-state and
guest-state areas.  So KVM just needs to initialize them; with the
proper VM entry/exit FRED controls set, a FRED CPU will keep tracking
host and guest FRED CPU state in the VMCS automatically.
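
A summary of the resulting state-management split, as a comment sketch
(the RSP0 and SSP0 rows are handled by later patches in this series):

	/*
	 * FRED MSR            guest value lives in        kept current by
	 * IA32_FRED_CONFIG    GUEST_IA32_FRED_CONFIG      VM entry/exit controls
	 * IA32_FRED_STKLVLS   GUEST_IA32_FRED_STKLVLS     VM entry/exit controls
	 * IA32_FRED_RSP[123]  GUEST_IA32_FRED_RSP[123]    VM entry/exit controls
	 * IA32_FRED_SSP[123]  GUEST_IA32_FRED_SSP[123]    VM entry/exit controls
	 * IA32_FRED_RSP0      hardware MSR (passthrough)  vCPU context switch
	 * IA32_FRED_SSP0      hardware MSR (passthrough)  XSAVES/XRSTORS
	 */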

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Change in v4:
* Initialize host SSP[1-3] to 0s in vmx_set_constant_host_state()
  because Linux doesn't support kernel shadow stacks (Chao Gao).

Change in v3:
* Use structure kvm_host_values to keep host fred config & stack levels
  (Sean Christopherson).

Changes in v2:
* Use kvm_cpu_cap_has() instead of cpu_feature_enabled() to decouple
  KVM's capability to virtualize a feature and host's enabling of a
  feature (Chao Gao).
* Move guest FRED state init into __vmx_vcpu_reset() (Chao Gao).
---
 arch/x86/include/asm/vmx.h | 32 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c     | 36 ++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h         |  3 +++
 3 files changed, 71 insertions(+)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index dd79d027ea70..6f8b8947c60c 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -293,12 +293,44 @@ enum vmcs_field {
 	GUEST_BNDCFGS_HIGH              = 0x00002813,
 	GUEST_IA32_RTIT_CTL		= 0x00002814,
 	GUEST_IA32_RTIT_CTL_HIGH	= 0x00002815,
+	GUEST_IA32_FRED_CONFIG		= 0x0000281a,
+	GUEST_IA32_FRED_CONFIG_HIGH	= 0x0000281b,
+	GUEST_IA32_FRED_RSP1		= 0x0000281c,
+	GUEST_IA32_FRED_RSP1_HIGH	= 0x0000281d,
+	GUEST_IA32_FRED_RSP2		= 0x0000281e,
+	GUEST_IA32_FRED_RSP2_HIGH	= 0x0000281f,
+	GUEST_IA32_FRED_RSP3		= 0x00002820,
+	GUEST_IA32_FRED_RSP3_HIGH	= 0x00002821,
+	GUEST_IA32_FRED_STKLVLS		= 0x00002822,
+	GUEST_IA32_FRED_STKLVLS_HIGH	= 0x00002823,
+	GUEST_IA32_FRED_SSP1		= 0x00002824,
+	GUEST_IA32_FRED_SSP1_HIGH	= 0x00002825,
+	GUEST_IA32_FRED_SSP2		= 0x00002826,
+	GUEST_IA32_FRED_SSP2_HIGH	= 0x00002827,
+	GUEST_IA32_FRED_SSP3		= 0x00002828,
+	GUEST_IA32_FRED_SSP3_HIGH	= 0x00002829,
 	HOST_IA32_PAT			= 0x00002c00,
 	HOST_IA32_PAT_HIGH		= 0x00002c01,
 	HOST_IA32_EFER			= 0x00002c02,
 	HOST_IA32_EFER_HIGH		= 0x00002c03,
 	HOST_IA32_PERF_GLOBAL_CTRL	= 0x00002c04,
 	HOST_IA32_PERF_GLOBAL_CTRL_HIGH	= 0x00002c05,
+	HOST_IA32_FRED_CONFIG		= 0x00002c08,
+	HOST_IA32_FRED_CONFIG_HIGH	= 0x00002c09,
+	HOST_IA32_FRED_RSP1		= 0x00002c0a,
+	HOST_IA32_FRED_RSP1_HIGH	= 0x00002c0b,
+	HOST_IA32_FRED_RSP2		= 0x00002c0c,
+	HOST_IA32_FRED_RSP2_HIGH	= 0x00002c0d,
+	HOST_IA32_FRED_RSP3		= 0x00002c0e,
+	HOST_IA32_FRED_RSP3_HIGH	= 0x00002c0f,
+	HOST_IA32_FRED_STKLVLS		= 0x00002c10,
+	HOST_IA32_FRED_STKLVLS_HIGH	= 0x00002c11,
+	HOST_IA32_FRED_SSP1		= 0x00002c12,
+	HOST_IA32_FRED_SSP1_HIGH	= 0x00002c13,
+	HOST_IA32_FRED_SSP2		= 0x00002c14,
+	HOST_IA32_FRED_SSP2_HIGH	= 0x00002c15,
+	HOST_IA32_FRED_SSP3		= 0x00002c16,
+	HOST_IA32_FRED_SSP3_HIGH	= 0x00002c17,
 	PIN_BASED_VM_EXEC_CONTROL       = 0x00004000,
 	CPU_BASED_VM_EXEC_CONTROL       = 0x00004002,
 	EXCEPTION_BITMAP                = 0x00004004,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fcfa99160018..c8b5359123bf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1459,6 +1459,15 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
 				    (unsigned long)(cpu_entry_stack(cpu) + 1));
 		}
 
+		/* Per-CPU FRED MSRs */
+		if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
+#ifdef CONFIG_X86_64
+			vmcs_write64(HOST_IA32_FRED_RSP1, __this_cpu_ist_top_va(ESTACK_DB));
+			vmcs_write64(HOST_IA32_FRED_RSP2, __this_cpu_ist_top_va(ESTACK_NMI));
+			vmcs_write64(HOST_IA32_FRED_RSP3, __this_cpu_ist_top_va(ESTACK_DF));
+#endif
+		}
+
 		vmx->loaded_vmcs->cpu = cpu;
 	}
 }
@@ -4330,6 +4339,17 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 	 */
 	vmcs_write16(HOST_DS_SELECTOR, 0);
 	vmcs_write16(HOST_ES_SELECTOR, 0);
+
+	if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
+		/* FRED CONFIG and STKLVLS are the same on all CPUs */
+		vmcs_write64(HOST_IA32_FRED_CONFIG, kvm_host.fred_config);
+		vmcs_write64(HOST_IA32_FRED_STKLVLS, kvm_host.fred_stklvls);
+
+		/* Linux doesn't support kernel shadow stacks, thus SSPs are 0s */
+		vmcs_write64(HOST_IA32_FRED_SSP1, 0);
+		vmcs_write64(HOST_IA32_FRED_SSP2, 0);
+		vmcs_write64(HOST_IA32_FRED_SSP3, 0);
+	}
 #else
 	vmcs_write16(HOST_DS_SELECTOR, __KERNEL_DS);  /* 22.2.4 */
 	vmcs_write16(HOST_ES_SELECTOR, __KERNEL_DS);  /* 22.2.4 */
@@ -4841,6 +4861,17 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	}
 
 	vmx_setup_uret_msrs(vmx);
+
+	if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
+		vmcs_write64(GUEST_IA32_FRED_CONFIG, 0);
+		vmcs_write64(GUEST_IA32_FRED_RSP1, 0);
+		vmcs_write64(GUEST_IA32_FRED_RSP2, 0);
+		vmcs_write64(GUEST_IA32_FRED_RSP3, 0);
+		vmcs_write64(GUEST_IA32_FRED_STKLVLS, 0);
+		vmcs_write64(GUEST_IA32_FRED_SSP1, 0);
+		vmcs_write64(GUEST_IA32_FRED_SSP2, 0);
+		vmcs_write64(GUEST_IA32_FRED_SSP3, 0);
+	}
 }
 
 static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
@@ -8717,6 +8748,11 @@ __init int vmx_hardware_setup(void)
 
 	kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
 
+	if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
+		rdmsrl(MSR_IA32_FRED_CONFIG, kvm_host.fred_config);
+		rdmsrl(MSR_IA32_FRED_STKLVLS, kvm_host.fred_stklvls);
+	}
+
 	return r;
 }
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index f3dc77f006f9..0c1fbf75442b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -52,6 +52,9 @@ struct kvm_host_values {
 	u64 xss;
 	u64 s_cet;
 	u64 arch_capabilities;
+
+	u64 fred_config;
+	u64 fred_stklvls;
 };
 
 void kvm_spurious_fault(void);
-- 
2.51.0



* [PATCH v9 08/22] KVM: VMX: Set FRED MSR intercepts
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

On a userspace MSR filter change, set FRED MSR intercepts.

The eight FRED MSRs, MSR_IA32_FRED_RSP[123], MSR_IA32_FRED_STKLVLS,
MSR_IA32_FRED_SSP[123] and MSR_IA32_FRED_CONFIG, are all safe to
passthrough, because each has a corresponding host and guest field
in VMCS.

Both MSR_IA32_FRED_RSP0 and MSR_IA32_FRED_SSP0 (aka MSR_IA32_PL0_SSP)
are dedicated to userspace event delivery; IOW, they are NOT used in
any kernel event delivery or in the execution of ERETS.  Thus KVM can
run safely with guest values in the two MSRs.  As a result, saving and
restoring their guest values is deferred until vCPU context switch:
host MSR_IA32_FRED_RSP0 is restored upon returning to userspace, and
host MSR_IA32_PL0_SSP is managed with XRSTORS/XSAVES.

Note, FRED SSP MSRs, including MSR_IA32_PL0_SSP, are available on
any processor that enumerates FRED.  On processors that support FRED
but not CET, FRED transitions do not use these MSRs, but they remain
accessible via MSR instructions such as RDMSR and WRMSR.

Intercept MSR_IA32_PL0_SSP when CET shadow stack is not supported,
regardless of FRED support.  This ensures the guest value remains
fully virtual and does not modify the hardware FRED SSP0 MSR.

This behavior is consistent with the current setup in
vmx_recalc_msr_intercepts(), so no change is needed to the interception
logic for MSR_IA32_PL0_SSP.

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v7:
* Rewrite the changelog and comment, majorly for MSR_IA32_PL0_SSP.

Changes in v5:
* Skip execution of vmx_set_intercept_for_fred_msr() if FRED is
  not available or enabled (Sean).
* Use 'intercept' as the variable name to indicate whether MSR
  interception should be enabled (Sean).
* Add TB from Xuelian Guo.
---
 arch/x86/kvm/vmx/vmx.c | 47 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c8b5359123bf..ef9765779884 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4146,6 +4146,51 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void vmx_set_intercept_for_fred_msr(struct kvm_vcpu *vcpu)
+{
+	bool intercept = !guest_cpu_cap_has(vcpu, X86_FEATURE_FRED);
+
+	if (!kvm_cpu_cap_has(X86_FEATURE_FRED))
+		return;
+
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP1, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP2, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP3, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_STKLVLS, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP1, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP2, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP3, MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_CONFIG, MSR_TYPE_RW, intercept);
+
+	/*
+	 * MSR_IA32_FRED_RSP0 and MSR_IA32_PL0_SSP (aka MSR_IA32_FRED_SSP0) are
+	 * designed for event delivery while executing in userspace.  Since KVM
+	 * operates entirely in kernel mode (CPL is always 0 after any VM exit),
+	 * it can safely retain and operate with guest-defined values for these
+	 * MSRs.
+	 *
+	 * As a result, interception of MSR_IA32_FRED_RSP0 and MSR_IA32_PL0_SSP
+	 * is unnecessary.
+	 *
+	 * Note: Saving and restoring MSR_IA32_PL0_SSP is part of CET supervisor
+	 * context management.  However, FRED SSP MSRs, including MSR_IA32_PL0_SSP,
+	 * are available on any processor that enumerates FRED.
+	 *
+	 * On processors that support FRED but not CET, FRED transitions do not
+	 * use these MSRs, but they remain accessible via MSR instructions such
+	 * as RDMSR and WRMSR.
+	 *
+	 * Intercept MSR_IA32_PL0_SSP when CET shadow stack is not supported,
+	 * regardless of FRED support.  This ensures the guest value remains
+	 * fully virtual and does not modify the hardware FRED SSP0 MSR.
+	 *
+	 * This behavior is consistent with the current setup in
+	 * vmx_recalc_msr_intercepts(), so no change is needed to the interception
+	 * logic for MSR_IA32_PL0_SSP.
+	 */
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP0, MSR_TYPE_RW, intercept);
+}
+
 static void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
 {
 	bool intercept;
@@ -4212,6 +4257,8 @@ static void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
 		vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET, MSR_TYPE_RW, intercept);
 	}
 
+	vmx_set_intercept_for_fred_msr(vcpu);
+
 	/*
 	 * x2APIC and LBR MSR intercepts are modified on-demand and cannot be
 	 * filtered by userspace.
-- 
2.51.0



* [PATCH v9 09/22] KVM: VMX: Save/restore guest FRED RSP0
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Save guest FRED RSP0 in vmx_prepare_switch_to_host() and restore it
in vmx_prepare_switch_to_guest(), because MSR_IA32_FRED_RSP0 is passed
through to the guest and its value is therefore volatile/unknown.

Note, host FRED RSP0 is restored in arch_exit_to_user_mode_prepare(),
regardless of whether it is modified in KVM.
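
The resulting MSR_IA32_FRED_RSP0 lifecycle, sketched as a comment
(mirroring the code below, not new behavior):

	/*
	 * vmx_prepare_switch_to_guest(): wrmsrns(guest RSP0)
	 *   ... vCPU runs; the guest may rewrite RSP0 (passed through) ...
	 * vmx_prepare_switch_to_host():  read and cache guest RSP0, then
	 *                                fred_sync_rsp0()
	 * exit to userspace:             the kernel restores the per-task
	 *                                host RSP0
	 */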

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v5:
* Remove the cpu_feature_enabled() check when set/get guest
  MSR_IA32_FRED_RSP0, as guest_cpu_cap_has() should suffice (Sean).
* Add a comment when synchronizing current MSR_IA32_FRED_RSP0 MSR to
  the kernel's local cache, because its handling is different from
  the MSR_KERNEL_GS_BASE handling (Sean).
* Add TB from Xuelian Guo.

Changes in v3:
* KVM only needs to save/restore guest FRED RSP0 now as host FRED RSP0
  is restored in arch_exit_to_user_mode_prepare() (Sean Christopherson).

Changes in v2:
* Don't use guest_cpuid_has() in vmx_prepare_switch_to_{host,guest}(),
  which are called from IRQ-disabled context (Chao Gao).
* Reset msr_guest_fred_rsp0 in __vmx_vcpu_reset() (Chao Gao).
---
 arch/x86/kvm/vmx/vmx.c | 13 +++++++++++++
 arch/x86/kvm/vmx/vmx.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ef9765779884..c1fb3745247c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1292,6 +1292,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	}
 
 	wrmsrq(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+		wrmsrns(MSR_IA32_FRED_RSP0, vmx->msr_guest_fred_rsp0);
 #else
 	savesegment(fs, fs_sel);
 	savesegment(gs, gs_sel);
@@ -1336,6 +1339,16 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
 	invalidate_tss_limit();
 #ifdef CONFIG_X86_64
 	wrmsrq(MSR_KERNEL_GS_BASE, vmx->vt.msr_host_kernel_gs_base);
+
+	if (guest_cpu_cap_has(&vmx->vcpu, X86_FEATURE_FRED)) {
+		vmx->msr_guest_fred_rsp0 = read_msr(MSR_IA32_FRED_RSP0);
+		/*
+		 * Synchronize the current value in hardware to the kernel's
+		 * local cache.  The desired host RSP0 will be set when the
+		 * CPU exits to userspace (RSP0 is a per-task value).
+		 */
+		fred_sync_rsp0(vmx->msr_guest_fred_rsp0);
+	}
 #endif
 	load_fixmap_gdt(raw_smp_processor_id());
 	vmx->vt.guest_state_loaded = false;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 645b0343e88c..48a5ab12cccf 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -227,6 +227,7 @@ struct vcpu_vmx {
 	bool                  guest_uret_msrs_loaded;
 #ifdef CONFIG_X86_64
 	u64		      msr_guest_kernel_gs_base;
+	u64		      msr_guest_fred_rsp0;
 #endif
 
 	u64		      spec_ctrl;
-- 
2.51.0



* [PATCH v9 10/22] KVM: VMX: Add support for saving and restoring FRED MSRs
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (8 preceding siblings ...)
  2025-10-26 20:18 ` [PATCH v9 09/22] KVM: VMX: Save/restore guest FRED RSP0 Xin Li (Intel)
@ 2025-10-26 20:18 ` Xin Li (Intel)
  2025-11-12  6:16   ` Chao Gao
  2025-10-26 20:18 ` [PATCH v9 11/22] KVM: x86: Add a helper to detect if FRED is enabled for a vCPU Xin Li (Intel)
                   ` (13 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Introduce support for handling FRED MSR access requests, enabling both
host and guest to read and write FRED MSRs, which is essential for VM
save/restore and live migration, and allows userspace tools such as QEMU
to access the relevant MSRs.

Specifically, intercept accesses to the FRED SSP0 MSR (IA32_PL0_SSP), which
remains accessible when FRED is enumerated even if CET is not.  This
ensures the guest value is fully virtual and does not alter the hardware
FRED SSP0 MSR.
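
For illustration only (not part of this patch): userspace can then read a
FRED MSR with the standard KVM_GET_MSRS vCPU ioctl.  The MSR index below
is taken from arch/x86/include/asm/msr-index.h; a minimal sketch:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    #define MSR_IA32_FRED_CONFIG 0x000001d4

    static int read_fred_config(int vcpu_fd, __u64 *val)
    {
        struct {
            struct kvm_msrs hdr;
            struct kvm_msr_entry entry;
        } msrs = {
            .hdr.nmsrs   = 1,
            .entry.index = MSR_IA32_FRED_CONFIG,
        };

        /* KVM_GET_MSRS returns the number of MSRs read; 1 on success. */
        if (ioctl(vcpu_fd, KVM_GET_MSRS, &msrs) != 1)
            return -1;

        *val = msrs.entry.data;
        return 0;
    }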

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v7:
* Intercept accesses to FRED SSP0, i.e., IA32_PL0_SSP, which remains
  accessible when FRED but !CET (Sean).

Change in v6:
* Return KVM_MSR_RET_UNSUPPORTED instead of 1 when FRED is not available
  (Chao Gao)
* Handle MSR_IA32_PL0_SSP when FRED is enumerated but CET is not.

Change in v5:
* Use the newly added guest MSR read/write helpers (Sean).
* Check the size of fred_msr_vmcs_fields[] using static_assert() (Sean).
* Rewrite setting FRED MSRs to make it much easier to read (Sean).
* Add TB from Xuelian Guo.

Changes since v2:
* Add a helper to convert FRED MSR index to VMCS field encoding to
  make the code more compact (Chao Gao).
* Get rid of the "host_initiated" check because userspace has to set
  CPUID before MSRs (Chao Gao & Sean Christopherson).
* Address a few cleanup comments (Sean Christopherson).

Changes since v1:
* Use kvm_cpu_cap_has() instead of cpu_feature_enabled() (Chao Gao).
* Fail host requested FRED MSRs access if KVM cannot virtualize FRED
  (Chao Gao).
* Handle the case FRED MSRs are valid but KVM cannot virtualize FRED
  (Chao Gao).
* Add sanity checks when writing to FRED MSRs.
---
 arch/x86/include/asm/kvm_host.h |  5 ++
 arch/x86/kvm/vmx/vmx.c          | 45 +++++++++++++++++
 arch/x86/kvm/x86.c              | 85 +++++++++++++++++++++++++++++++--
 3 files changed, 132 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 48598d017d6f..43a18e265289 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1092,6 +1092,11 @@ struct kvm_vcpu_arch {
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t hv_root_tdp;
 #endif
+	/*
+	 * Stores the FRED SSP0 MSR when CET is not supported, prompting KVM
+	 * to intercept its accesses.
+	 */
+	u64 fred_ssp0_fallback;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c1fb3745247c..4a74c9f64f90 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1386,6 +1386,18 @@ static void vmx_write_guest_kernel_gs_base(struct vcpu_vmx *vmx, u64 data)
 	vmx_write_guest_host_msr(vmx, MSR_KERNEL_GS_BASE, data,
 				 &vmx->msr_guest_kernel_gs_base);
 }
+
+static u64 vmx_read_guest_fred_rsp0(struct vcpu_vmx *vmx)
+{
+	return vmx_read_guest_host_msr(vmx, MSR_IA32_FRED_RSP0,
+				       &vmx->msr_guest_fred_rsp0);
+}
+
+static void vmx_write_guest_fred_rsp0(struct vcpu_vmx *vmx, u64 data)
+{
+	vmx_write_guest_host_msr(vmx, MSR_IA32_FRED_RSP0, data,
+				 &vmx->msr_guest_fred_rsp0);
+}
 #endif
 
 static void grow_ple_window(struct kvm_vcpu *vcpu)
@@ -1987,6 +1999,27 @@ int vmx_get_feature_msr(u32 msr, u64 *data)
 	}
 }
 
+#ifdef CONFIG_X86_64
+static const u32 fred_msr_vmcs_fields[] = {
+	GUEST_IA32_FRED_RSP1,
+	GUEST_IA32_FRED_RSP2,
+	GUEST_IA32_FRED_RSP3,
+	GUEST_IA32_FRED_STKLVLS,
+	GUEST_IA32_FRED_SSP1,
+	GUEST_IA32_FRED_SSP2,
+	GUEST_IA32_FRED_SSP3,
+	GUEST_IA32_FRED_CONFIG,
+};
+
+static_assert(MSR_IA32_FRED_CONFIG - MSR_IA32_FRED_RSP1 ==
+	      ARRAY_SIZE(fred_msr_vmcs_fields) - 1);
+
+static u32 fred_msr_to_vmcs(u32 msr)
+{
+	return fred_msr_vmcs_fields[msr - MSR_IA32_FRED_RSP1];
+}
+#endif
+
 /*
  * Reads an msr value (of 'msr_info->index') into 'msr_info->data'.
  * Returns 0 on success, non-0 otherwise.
@@ -2009,6 +2042,12 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_KERNEL_GS_BASE:
 		msr_info->data = vmx_read_guest_kernel_gs_base(vmx);
 		break;
+	case MSR_IA32_FRED_RSP0:
+		msr_info->data = vmx_read_guest_fred_rsp0(vmx);
+		break;
+	case MSR_IA32_FRED_RSP1 ... MSR_IA32_FRED_CONFIG:
+		msr_info->data = vmcs_read64(fred_msr_to_vmcs(msr_info->index));
+		break;
 #endif
 	case MSR_EFER:
 		return kvm_get_msr_common(vcpu, msr_info);
@@ -2241,6 +2280,12 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			vmx_update_exception_bitmap(vcpu);
 		}
 		break;
+	case MSR_IA32_FRED_RSP0:
+		vmx_write_guest_fred_rsp0(vmx, data);
+		break;
+	case MSR_IA32_FRED_RSP1 ... MSR_IA32_FRED_CONFIG:
+		vmcs_write64(fred_msr_to_vmcs(msr_index), data);
+		break;
 #endif
 	case MSR_IA32_SYSENTER_CS:
 		if (is_guest_mode(vcpu))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4b5d2d09634..3d612803f5f2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -331,6 +331,9 @@ static const u32 msrs_to_save_base[] = {
 	MSR_STAR,
 #ifdef CONFIG_X86_64
 	MSR_CSTAR, MSR_KERNEL_GS_BASE, MSR_SYSCALL_MASK, MSR_LSTAR,
+	MSR_IA32_FRED_RSP0, MSR_IA32_FRED_RSP1, MSR_IA32_FRED_RSP2,
+	MSR_IA32_FRED_RSP3, MSR_IA32_FRED_STKLVLS, MSR_IA32_FRED_SSP1,
+	MSR_IA32_FRED_SSP2, MSR_IA32_FRED_SSP3, MSR_IA32_FRED_CONFIG,
 #endif
 	MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
 	MSR_IA32_FEAT_CTL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
@@ -1919,7 +1922,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 		 * architecture. Intercepting XRSTORS/XSAVES for this
 		 * special case isn't deemed worthwhile.
 		 */
-	case MSR_IA32_PL0_SSP ... MSR_IA32_INT_SSP_TAB:
+	case MSR_IA32_PL1_SSP ... MSR_IA32_INT_SSP_TAB:
 		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK))
 			return KVM_MSR_RET_UNSUPPORTED;
 		/*
@@ -1934,6 +1937,52 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 		if (index != MSR_IA32_INT_SSP_TAB && !IS_ALIGNED(data, 4))
 			return 1;
 		break;
+	case MSR_IA32_FRED_STKLVLS:
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+			return KVM_MSR_RET_UNSUPPORTED;
+		break;
+	case MSR_IA32_FRED_RSP0 ... MSR_IA32_FRED_RSP3:
+	case MSR_IA32_FRED_SSP1 ... MSR_IA32_FRED_CONFIG: {
+		u64 reserved_bits = 0;
+
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+			return KVM_MSR_RET_UNSUPPORTED;
+
+		if (is_noncanonical_msr_address(data, vcpu))
+			return 1;
+
+		switch (index) {
+		case MSR_IA32_FRED_CONFIG:
+			reserved_bits = BIT_ULL(11) | GENMASK_ULL(5, 4) | BIT_ULL(2);
+			break;
+		case MSR_IA32_FRED_RSP0 ... MSR_IA32_FRED_RSP3:
+			reserved_bits = GENMASK_ULL(5, 0);
+			break;
+		case MSR_IA32_FRED_SSP1 ... MSR_IA32_FRED_SSP3:
+			reserved_bits = GENMASK_ULL(2, 0);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			return 1;
+		}
+
+		if (data & reserved_bits)
+			return 1;
+
+		break;
+	}
+	case MSR_IA32_PL0_SSP: /* I.e., MSR_IA32_FRED_SSP0 */
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
+		    !guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+			return KVM_MSR_RET_UNSUPPORTED;
+
+		if (is_noncanonical_msr_address(data, vcpu))
+			return 1;
+
+		if (!IS_ALIGNED(data, 4))
+			return 1;
+
+		break;
 	}
 
 	msr.data = data;
@@ -1988,10 +2037,19 @@ static int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 		if (!host_initiated)
 			return 1;
 		fallthrough;
-	case MSR_IA32_PL0_SSP ... MSR_IA32_INT_SSP_TAB:
+	case MSR_IA32_PL1_SSP ... MSR_IA32_INT_SSP_TAB:
 		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK))
 			return KVM_MSR_RET_UNSUPPORTED;
 		break;
+	case MSR_IA32_FRED_RSP0 ... MSR_IA32_FRED_CONFIG:
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+			return KVM_MSR_RET_UNSUPPORTED;
+		break;
+	case MSR_IA32_PL0_SSP: /* I.e., MSR_IA32_FRED_SSP0 */
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
+		    !guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
+			return KVM_MSR_RET_UNSUPPORTED;
+		break;
 	}
 
 	msr.index = index;
@@ -4316,6 +4374,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 #endif
 	case MSR_IA32_U_CET:
 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
+			WARN_ON_ONCE(msr != MSR_IA32_FRED_SSP0);
+			vcpu->arch.fred_ssp0_fallback = data;
+			break;
+		}
+
 		kvm_set_xstate_msr(vcpu, msr_info);
 		break;
 	default:
@@ -4669,6 +4733,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 #endif
 	case MSR_IA32_U_CET:
 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
+			WARN_ON_ONCE(msr_info->index != MSR_IA32_FRED_SSP0);
+			msr_info->data = vcpu->arch.fred_ssp0_fallback;
+			break;
+		}
+
 		kvm_get_xstate_msr(vcpu, msr_info);
 		break;
 	default:
@@ -7712,10 +7782,19 @@ static void kvm_probe_msr_to_save(u32 msr_index)
 		if (!kvm_cpu_cap_has(X86_FEATURE_LM))
 			return;
 		fallthrough;
-	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+	case MSR_IA32_PL1_SSP ... MSR_IA32_PL3_SSP:
 		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
 			return;
 		break;
+	case MSR_IA32_FRED_RSP0 ... MSR_IA32_FRED_CONFIG:
+		if (!kvm_cpu_cap_has(X86_FEATURE_FRED))
+			return;
+		break;
+	case MSR_IA32_PL0_SSP: /* I.e., MSR_IA32_FRED_SSP0 */
+		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+		    !kvm_cpu_cap_has(X86_FEATURE_FRED))
+			return;
+		break;
 	default:
 		break;
 	}
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 11/22] KVM: x86: Add a helper to detect if FRED is enabled for a vCPU
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (9 preceding siblings ...)
  2025-10-26 20:18 ` [PATCH v9 10/22] KVM: VMX: Add support for saving and restoring FRED MSRs Xin Li (Intel)
@ 2025-10-26 20:18 ` Xin Li (Intel)
  2025-11-12  6:19   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 12/22] KVM: VMX: Virtualize FRED event_data Xin Li (Intel)
                   ` (12 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:18 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Signed-off-by: Xin Li <xin3.li@intel.com>
[ Sean: removed the "kvm_" prefix from the function name ]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.
---
 arch/x86/kvm/kvm_cache_regs.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..3c8dbb77d7d4 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -205,6 +205,21 @@ static __always_inline bool kvm_is_cr4_bit_set(struct kvm_vcpu *vcpu,
 	return !!kvm_read_cr4_bits(vcpu, cr4_bit);
 }
 
+/*
+ * It's enough to check just CR4.FRED (X86_CR4_FRED) to tell if
+ * a vCPU is running with FRED enabled, because:
+ * 1) CR4.FRED can be set to 1 only _after_ IA32_EFER.LMA = 1.
+ * 2) To leave IA-32e mode, CR4.FRED must be cleared first.
+ */
+static inline bool is_fred_enabled(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_X86_64
+	return kvm_is_cr4_bit_set(vcpu, X86_CR4_FRED);
+#else
+	return false;
+#endif
+}
+
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
 	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 12/22] KVM: VMX: Virtualize FRED event_data
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (10 preceding siblings ...)
  2025-10-26 20:18 ` [PATCH v9 11/22] KVM: x86: Add a helper to detect if FRED is enabled for a vCPU Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-19  3:24   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 13/22] KVM: VMX: Virtualize FRED nested exception tracking Xin Li (Intel)
                   ` (11 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Set injected-event data when injecting a #PF, #DB, or #NM caused
by extended feature disable using FRED event delivery, and save
original-event data for later reuse as injected-event data.

Unlike IDT event delivery, which leaves part of the event context in
extra CPU registers (e.g., %cr2 for #PF), FRED saves the complete
event context in its stack frame; e.g., FRED saves the faulting
linear address of a #PF into the event data field defined in its
stack frame.

Thus a new VMX control field called injected-event data is added
to provide the event data that will be pushed into a FRED stack
frame for VM entries that inject an event using FRED event delivery.
In addition, a new VM exit information field called original-event
data is added to store the event data that would have been saved
into a FRED stack frame for VM exits that occur during FRED event
delivery.  After such a VM exit is handled to allow the original
event to be delivered, the data in the original-event data VMCS
field needs to be copied into the injected-event data VMCS field
for the injection of the original event.
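
A condensed sketch (not code from this patch) of the per-vector mapping
implemented by the kvm_deliver_exception_payload() hunk below:

    static u64 fred_event_data_for(struct kvm_queued_exception *ex)
    {
        switch (ex->vector) {
        case PF_VECTOR: /* faulting linear address, also mirrored to CR2 */
        case NM_VECTOR: /* IA32_XFD_ERR value; 0 if not XFD-induced */
            return ex->payload;
        case DB_VECTOR: /* DR6 image; bit 12 follows the polarity of VMX
                           pending debug exceptions, not DR6 */
            return ex->payload & ~BIT(12);
        default:
            return 0;
        }
    }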

Signed-off-by: Xin Li <xin3.li@intel.com>
[ Sean: reworked event data injection for nested ]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Change in v3:
* Rework event data injection for nested (Chao Gao & Sean Christopherson).

Changes in v2:
* Document event data should be equal to CR2/DR6/IA32_XFD_ERR instead
  of using WARN_ON() (Chao Gao).
* Zero event data if a #NM was not caused by extended feature disable
  (Chao Gao).
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/include/asm/vmx.h      |  4 ++++
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++++++++++----
 arch/x86/kvm/x86.c              | 16 +++++++++++++++-
 5 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 43a18e265289..550a8716a227 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -760,6 +760,7 @@ struct kvm_queued_exception {
 	u32 error_code;
 	unsigned long payload;
 	bool has_payload;
+	u64 event_data;
 };
 
 /*
@@ -2230,7 +2231,7 @@ void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr);
 void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
 void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, unsigned long payload);
 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
-			   bool has_error_code, u32 error_code);
+			   bool has_error_code, u32 error_code, u64 event_data);
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault);
 void kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 				    struct x86_exception *fault);
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 6f8b8947c60c..539af190ad3e 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -269,8 +269,12 @@ enum vmcs_field {
 	PID_POINTER_TABLE_HIGH		= 0x00002043,
 	SECONDARY_VM_EXIT_CONTROLS	= 0x00002044,
 	SECONDARY_VM_EXIT_CONTROLS_HIGH	= 0x00002045,
+	INJECTED_EVENT_DATA		= 0x00002052,
+	INJECTED_EVENT_DATA_HIGH	= 0x00002053,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
 	GUEST_PHYSICAL_ADDRESS_HIGH     = 0x00002401,
+	ORIGINAL_EVENT_DATA		= 0x00002404,
+	ORIGINAL_EVENT_DATA_HIGH	= 0x00002405,
 	VMCS_LINK_POINTER               = 0x00002800,
 	VMCS_LINK_POINTER_HIGH          = 0x00002801,
 	GUEST_IA32_DEBUGCTL             = 0x00002802,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f14709a511aa..2f20c68fcfb3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4104,7 +4104,7 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
 
 		kvm_requeue_exception(vcpu, vector,
 				      exitintinfo & SVM_EXITINTINFO_VALID_ERR,
-				      error_code);
+				      error_code, 0);
 		break;
 	}
 	case SVM_EXITINTINFO_TYPE_INTR:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4a74c9f64f90..0b5d04c863a8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1860,6 +1860,9 @@ void vmx_inject_exception(struct kvm_vcpu *vcpu)
 
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, intr_info);
 
+	if (is_fred_enabled(vcpu))
+		vmcs_write64(INJECTED_EVENT_DATA, ex->event_data);
+
 	vmx_clear_hlt(vcpu);
 }
 
@@ -7299,7 +7302,8 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 static void __vmx_complete_interrupts(struct kvm_vcpu *vcpu,
 				      u32 idt_vectoring_info,
 				      int instr_len_field,
-				      int error_code_field)
+				      int error_code_field,
+				      int event_data_field)
 {
 	u8 vector;
 	int type;
@@ -7334,13 +7338,17 @@ static void __vmx_complete_interrupts(struct kvm_vcpu *vcpu,
 		fallthrough;
 	case INTR_TYPE_HARD_EXCEPTION: {
 		u32 error_code = 0;
+		u64 event_data = 0;
 
 		if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK)
 			error_code = vmcs_read32(error_code_field);
+		if (is_fred_enabled(vcpu))
+			event_data = vmcs_read64(event_data_field);
 
 		kvm_requeue_exception(vcpu, vector,
 				      idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK,
-				      error_code);
+				      error_code,
+				      event_data);
 		break;
 	}
 	case INTR_TYPE_SOFT_INTR:
@@ -7358,7 +7366,8 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 {
 	__vmx_complete_interrupts(&vmx->vcpu, vmx->idt_vectoring_info,
 				  VM_EXIT_INSTRUCTION_LEN,
-				  IDT_VECTORING_ERROR_CODE);
+				  IDT_VECTORING_ERROR_CODE,
+				  ORIGINAL_EVENT_DATA);
 }
 
 void vmx_cancel_injection(struct kvm_vcpu *vcpu)
@@ -7366,7 +7375,8 @@ void vmx_cancel_injection(struct kvm_vcpu *vcpu)
 	__vmx_complete_interrupts(vcpu,
 				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
 				  VM_ENTRY_INSTRUCTION_LEN,
-				  VM_ENTRY_EXCEPTION_ERROR_CODE);
+				  VM_ENTRY_EXCEPTION_ERROR_CODE,
+				  INJECTED_EVENT_DATA);
 
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
 }
@@ -7520,6 +7530,10 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	vmx_disable_fb_clear(vmx);
 
+	/*
+	 * Note, even though FRED delivers the faulting linear address via the
+	 * event data field on the stack, CR2 is still updated.
+	 */
 	if (vcpu->arch.cr2 != native_read_cr2())
 		native_write_cr2(vcpu->arch.cr2);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3d612803f5f2..10f1663d51d7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -815,9 +815,22 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu,
 		 * breakpoint), it is reserved and must be zero in DR6.
 		 */
 		vcpu->arch.dr6 &= ~BIT(12);
+
+		/*
+		 * FRED #DB event data matches DR6, but follows the polarity of
+		 * VMX's pending debug exceptions, not DR6.
+		 */
+		ex->event_data = ex->payload & ~BIT(12);
+		break;
+	case NM_VECTOR:
+		ex->event_data = ex->payload;
 		break;
 	case PF_VECTOR:
 		vcpu->arch.cr2 = ex->payload;
+		ex->event_data = ex->payload;
+		break;
+	default:
+		ex->event_data = 0;
 		break;
 	}
 
@@ -925,7 +938,7 @@ static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
 }
 
 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
-			   bool has_error_code, u32 error_code)
+			   bool has_error_code, u32 error_code, u64 event_data)
 {
 
 	/*
@@ -950,6 +963,7 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
 	vcpu->arch.exception.error_code = error_code;
 	vcpu->arch.exception.has_payload = false;
 	vcpu->arch.exception.payload = 0;
+	vcpu->arch.exception.event_data = event_data;
 }
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_requeue_exception);
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 13/22] KVM: VMX: Virtualize FRED nested exception tracking
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (11 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 12/22] KVM: VMX: Virtualize FRED event_data Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-19  6:54   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 14/22] KVM: x86: Save/restore the nested flag of an exception Xin Li (Intel)
                   ` (10 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Set the VMX nested-exception bit in the VM-entry interruption information
field when injecting a nested exception using FRED event delivery to
ensure:
  1) A nested exception is injected at the correct stack level.
  2) The nested bit defined in the FRED stack frame is set.

The event stack level used by FRED event delivery depends on whether
the event is a nested exception encountered during delivery of an
earlier event, because a nested exception is regarded as occurring
in ring 0.  E.g., when #PF is configured to use stack level 1 in the
IA32_FRED_STKLVLS MSR:
  - a nested #PF is delivered on the stack pointed to by the
    IA32_FRED_RSP1 MSR, whether it occurs in ring 3 or ring 0.
  - a normal #PF encountered in ring 3 is delivered on the stack
    pointed to by the IA32_FRED_RSP0 MSR.

The VMX nested-exception support ensures a correct event stack level is
chosen when a VM entry injects a nested exception.
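
For illustration, IA32_FRED_STKLVLS assigns a 2-bit stack level to each of
the 32 exception vectors.  A sketch of the example above (the FRED_STKLVL()
helper is an assumption here, not something this patch adds):

    #define FRED_STKLVL(vector, lvl)    ((u64)(lvl) << (2 * (vector)))

    /* Configure #PF (vector 14) to use stack level 1; all other
     * vectors stay at stack level 0. */
    u64 stklvls = FRED_STKLVL(PF_VECTOR, 1);

    /* The stack level chosen for a given vector: */
    u8 pf_lvl = (stklvls >> (2 * PF_VECTOR)) & 3;   /* == 1 */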

Signed-off-by: Xin Li <xin3.li@intel.com>
[ Sean: reworked kvm_requeue_exception() to simplify the code changes ]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Change in v4:
* Move the is_fred_enabled() check from kvm_multiple_exception() to
  vmx_inject_exception(), thus avoiding bleeding FRED details into
  kvm_multiple_exception() (Chao Gao).

Change in v3:
* Rework kvm_requeue_exception() to simplify the code changes (Sean
  Christopherson).

Change in v2:
* Set the nested flag when there is an original interrupt (Chao Gao).
---
 arch/x86/include/asm/kvm_host.h |  4 +++-
 arch/x86/include/asm/vmx.h      |  5 ++++-
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/vmx/vmx.c          |  6 +++++-
 arch/x86/kvm/x86.c              | 13 ++++++++++++-
 arch/x86/kvm/x86.h              |  1 +
 6 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 550a8716a227..3b6dadf368eb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -760,6 +760,7 @@ struct kvm_queued_exception {
 	u32 error_code;
 	unsigned long payload;
 	bool has_payload;
+	bool nested;
 	u64 event_data;
 };
 
@@ -2231,7 +2232,8 @@ void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr);
 void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
 void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, unsigned long payload);
 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
-			   bool has_error_code, u32 error_code, u64 event_data);
+			   bool has_error_code, u32 error_code, bool nested,
+			   u64 event_data);
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault);
 void kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 				    struct x86_exception *fault);
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 539af190ad3e..7b34a9357b28 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -140,6 +140,7 @@
 #define VMX_BASIC_INOUT				BIT_ULL(54)
 #define VMX_BASIC_TRUE_CTLS			BIT_ULL(55)
 #define VMX_BASIC_NO_HW_ERROR_CODE_CC		BIT_ULL(56)
+#define VMX_BASIC_NESTED_EXCEPTION		BIT_ULL(58)
 
 static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic)
 {
@@ -442,13 +443,15 @@ enum vmcs_field {
 #define INTR_INFO_INTR_TYPE_MASK        0x700           /* 10:8 */
 #define INTR_INFO_DELIVER_CODE_MASK     0x800           /* 11 */
 #define INTR_INFO_UNBLOCK_NMI		0x1000		/* 12 */
+#define INTR_INFO_NESTED_EXCEPTION_MASK	0x2000		/* 13 */
 #define INTR_INFO_VALID_MASK            0x80000000      /* 31 */
-#define INTR_INFO_RESVD_BITS_MASK       0x7ffff000
+#define INTR_INFO_RESVD_BITS_MASK       0x7fffd000
 
 #define VECTORING_INFO_VECTOR_MASK           	INTR_INFO_VECTOR_MASK
 #define VECTORING_INFO_TYPE_MASK        	INTR_INFO_INTR_TYPE_MASK
 #define VECTORING_INFO_DELIVER_CODE_MASK    	INTR_INFO_DELIVER_CODE_MASK
 #define VECTORING_INFO_VALID_MASK       	INTR_INFO_VALID_MASK
+#define VECTORING_INFO_NESTED_EXCEPTION_MASK	INTR_INFO_NESTED_EXCEPTION_MASK
 
 #define INTR_TYPE_EXT_INTR		(EVENT_TYPE_EXTINT << 8)	/* external interrupt */
 #define INTR_TYPE_RESERVED		(EVENT_TYPE_RESERVED << 8)	/* reserved */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2f20c68fcfb3..e3702ca2f633 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4104,7 +4104,7 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
 
 		kvm_requeue_exception(vcpu, vector,
 				      exitintinfo & SVM_EXITINTINFO_VALID_ERR,
-				      error_code, 0);
+				      error_code, false, 0);
 		break;
 	}
 	case SVM_EXITINTINFO_TYPE_INTR:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0b5d04c863a8..34e057f65513 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1855,8 +1855,11 @@ void vmx_inject_exception(struct kvm_vcpu *vcpu)
 		vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
 			     vmx->vcpu.arch.event_exit_inst_len);
 		intr_info |= INTR_TYPE_SOFT_EXCEPTION;
-	} else
+	} else {
 		intr_info |= INTR_TYPE_HARD_EXCEPTION;
+		if (ex->nested && is_fred_enabled(vcpu))
+			intr_info |= INTR_INFO_NESTED_EXCEPTION_MASK;
+	}
 
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, intr_info);
 
@@ -7348,6 +7351,7 @@ static void __vmx_complete_interrupts(struct kvm_vcpu *vcpu,
 		kvm_requeue_exception(vcpu, vector,
 				      idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK,
 				      error_code,
+				      idt_vectoring_info & VECTORING_INFO_NESTED_EXCEPTION_MASK,
 				      event_data);
 		break;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 10f1663d51d7..554442c07f27 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -879,6 +879,10 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, unsigned int nr,
 		vcpu->arch.exception.pending = true;
 		vcpu->arch.exception.injected = false;
 
+		vcpu->arch.exception.nested = vcpu->arch.exception.nested ||
+					      vcpu->arch.nmi_injected ||
+					      vcpu->arch.interrupt.injected;
+
 		vcpu->arch.exception.has_error_code = has_error;
 		vcpu->arch.exception.vector = nr;
 		vcpu->arch.exception.error_code = error_code;
@@ -908,8 +912,13 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, unsigned int nr,
 		vcpu->arch.exception.injected = false;
 		vcpu->arch.exception.pending = false;
 
+		/* #DF is NOT a nested event, per its definition. */
+		vcpu->arch.exception.nested = false;
+
 		kvm_queue_exception_e(vcpu, DF_VECTOR, 0);
 	} else {
+		vcpu->arch.exception.nested = true;
+
 		/* replace previous exception with a new one in a hope
 		   that instruction re-execution will regenerate lost
 		   exception */
@@ -938,7 +947,8 @@ static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
 }
 
 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
-			   bool has_error_code, u32 error_code, u64 event_data)
+			   bool has_error_code, u32 error_code, bool nested,
+			   u64 event_data)
 {
 
 	/*
@@ -963,6 +973,7 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
 	vcpu->arch.exception.error_code = error_code;
 	vcpu->arch.exception.has_payload = false;
 	vcpu->arch.exception.payload = 0;
+	vcpu->arch.exception.nested = nested;
 	vcpu->arch.exception.event_data = event_data;
 }
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_requeue_exception);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 0c1fbf75442b..4f5d12d7136e 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -198,6 +198,7 @@ static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exception.pending = false;
 	vcpu->arch.exception.injected = false;
+	vcpu->arch.exception.nested = false;
 	vcpu->arch.exception_vmexit.pending = false;
 }
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 14/22] KVM: x86: Save/restore the nested flag of an exception
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (12 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 13/22] KVM: VMX: Virtualize FRED nested exception tracking Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-19  6:13   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 15/22] KVM: x86: Mark CR4.FRED as not reserved Xin Li (Intel)
                   ` (9 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

Save/restore the nested flag of an exception during VM save/restore
and live migration to ensure a correct event stack level is chosen
when a nested exception is injected through FRED event delivery.
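
A hypothetical userspace snippet opting in to the new capability (the
capability number is added by this patch, so updated uapi headers are
assumed):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int enable_nested_exception_flag(int vm_fd)
    {
        struct kvm_enable_cap cap = {
            .cap  = KVM_CAP_EXCEPTION_NESTED_FLAG,
            .args = { 1 },  /* args[0] = 1: enable */
        };

        /* VM-scoped capability, so this is a VM (not vCPU) ioctl. */
        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }

Once enabled, KVM_GET_VCPU_EVENTS marks exception_is_nested as valid via
KVM_VCPUEVENT_VALID_NESTED_FLAG, and KVM_SET_VCPU_EVENTS accepts it back.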

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v8:
* Update KVM_CAP_EXCEPTION_NESTED_FLAG, as the number in v7 is used
  by another new cap.

Change in v5:
* Add TB from Xuelian Guo.

Change in v4:
* Add live migration support for exception nested flag (Chao Gao).
---
 Documentation/virt/kvm/api.rst  | 21 ++++++++++++++++++++-
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/include/uapi/asm/kvm.h |  4 +++-
 arch/x86/kvm/x86.c              | 19 ++++++++++++++++++-
 include/uapi/linux/kvm.h        |  1 +
 5 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 06d79e2cf7bf..4e9678adf661 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1184,6 +1184,10 @@ The following bits are defined in the flags field:
   fields contain a valid state. This bit will be set whenever
   KVM_CAP_EXCEPTION_PAYLOAD is enabled.
 
+- KVM_VCPUEVENT_VALID_NESTED_FLAG may be set to indicate that the
+  exception is a nested exception. This bit will be set whenever
+  KVM_CAP_EXCEPTION_NESTED_FLAG is enabled.
+
 - KVM_VCPUEVENT_VALID_TRIPLE_FAULT may be set to signal that the
   triple_fault_pending field contains a valid state. This bit will
   be set whenever KVM_CAP_X86_TRIPLE_FAULT_EVENT is enabled.
@@ -1286,6 +1290,10 @@ can be set in the flags field to signal that the
 exception_has_payload, exception_payload, and exception.pending fields
 contain a valid state and shall be written into the VCPU.
 
+If KVM_CAP_EXCEPTION_NESTED_FLAG is enabled, KVM_VCPUEVENT_VALID_NESTED_FLAG
+can be set in the flags field to indicate that the exception is a nested
+exception and exception_is_nested shall be written into the VCPU.
+
 If KVM_CAP_X86_TRIPLE_FAULT_EVENT is enabled, KVM_VCPUEVENT_VALID_TRIPLE_FAULT
 can be set in flags field to signal that the triple_fault field contains
 a valid state and shall be written into the VCPU.
@@ -8692,7 +8700,7 @@ given VM.
 When this capability is enabled, KVM resets the VCPU when setting
 MP_STATE_INIT_RECEIVED through IOCTL.  The original MP_STATE is preserved.
 
-7.43 KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED
+7.44 KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED
 -------------------------------------------
 
 :Architectures: arm64
@@ -8703,6 +8711,17 @@ This capability indicate to the userspace whether a PFNMAP memory region
 can be safely mapped as cacheable. This relies on the presence of
 force write back (FWB) feature support on the hardware.
 
+7.45 KVM_CAP_EXCEPTION_NESTED_FLAG
+----------------------------------
+
+:Architectures: x86
+:Parameters: args[0] whether feature should be enabled or not
+
+With this capability enabled, an exception is saved/restored with the
+additional information of whether it was nested or not. FRED event
+delivery uses this information to ensure a correct event stack level
+is chosen when a VM entry injects a nested exception.
+
 8. Other capabilities.
 ======================
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3b6dadf368eb..5fff22d837aa 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1491,6 +1491,7 @@ struct kvm_arch {
 	bool has_mapped_host_mmio;
 	bool guest_can_read_msr_platform_info;
 	bool exception_payload_enabled;
+	bool exception_nested_flag_enabled;
 
 	bool triple_fault_event;
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d420c9c066d4..fbeeea236fc2 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -331,6 +331,7 @@ struct kvm_reinject_control {
 #define KVM_VCPUEVENT_VALID_SMM		0x00000008
 #define KVM_VCPUEVENT_VALID_PAYLOAD	0x00000010
 #define KVM_VCPUEVENT_VALID_TRIPLE_FAULT	0x00000020
+#define KVM_VCPUEVENT_VALID_NESTED_FLAG	0x00000040
 
 /* Interrupt shadow states */
 #define KVM_X86_SHADOW_INT_MOV_SS	0x01
@@ -368,7 +369,8 @@ struct kvm_vcpu_events {
 	struct {
 		__u8 pending;
 	} triple_fault;
-	__u8 reserved[26];
+	__u8 reserved[25];
+	__u8 exception_is_nested;
 	__u8 exception_has_payload;
 	__u64 exception_payload;
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 554442c07f27..6762f5564341 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4968,6 +4968,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_GET_MSR_FEATURES:
 	case KVM_CAP_MSR_PLATFORM_INFO:
 	case KVM_CAP_EXCEPTION_PAYLOAD:
+	case KVM_CAP_EXCEPTION_NESTED_FLAG:
 	case KVM_CAP_X86_TRIPLE_FAULT_EVENT:
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_LAST_CPU:
@@ -5713,6 +5714,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	events->exception.error_code = ex->error_code;
 	events->exception_has_payload = ex->has_payload;
 	events->exception_payload = ex->payload;
+	events->exception_is_nested = ex->nested;
 
 	events->interrupt.injected =
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
@@ -5738,6 +5740,8 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 			 | KVM_VCPUEVENT_VALID_SMM);
 	if (vcpu->kvm->arch.exception_payload_enabled)
 		events->flags |= KVM_VCPUEVENT_VALID_PAYLOAD;
+	if (vcpu->kvm->arch.exception_nested_flag_enabled)
+		events->flags |= KVM_VCPUEVENT_VALID_NESTED_FLAG;
 	if (vcpu->kvm->arch.triple_fault_event) {
 		events->triple_fault.pending = kvm_test_request(KVM_REQ_TRIPLE_FAULT, vcpu);
 		events->flags |= KVM_VCPUEVENT_VALID_TRIPLE_FAULT;
@@ -5752,7 +5756,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 			      | KVM_VCPUEVENT_VALID_SHADOW
 			      | KVM_VCPUEVENT_VALID_SMM
 			      | KVM_VCPUEVENT_VALID_PAYLOAD
-			      | KVM_VCPUEVENT_VALID_TRIPLE_FAULT))
+			      | KVM_VCPUEVENT_VALID_TRIPLE_FAULT
+			      | KVM_VCPUEVENT_VALID_NESTED_FLAG))
 		return -EINVAL;
 
 	if (events->flags & KVM_VCPUEVENT_VALID_PAYLOAD) {
@@ -5767,6 +5772,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 		events->exception_has_payload = 0;
 	}
 
+	if (events->flags & KVM_VCPUEVENT_VALID_NESTED_FLAG) {
+		if (!vcpu->kvm->arch.exception_nested_flag_enabled)
+			return -EINVAL;
+	} else {
+		events->exception_is_nested = 0;
+	}
+
 	if ((events->exception.injected || events->exception.pending) &&
 	    (events->exception.nr > 31 || events->exception.nr == NMI_VECTOR))
 		return -EINVAL;
@@ -5792,6 +5804,7 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	vcpu->arch.exception.error_code = events->exception.error_code;
 	vcpu->arch.exception.has_payload = events->exception_has_payload;
 	vcpu->arch.exception.payload = events->exception_payload;
+	vcpu->arch.exception.nested = events->exception_is_nested;
 
 	vcpu->arch.interrupt.injected = events->interrupt.injected;
 	vcpu->arch.interrupt.nr = events->interrupt.nr;
@@ -6912,6 +6925,10 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		kvm->arch.exception_payload_enabled = cap->args[0];
 		r = 0;
 		break;
+	case KVM_CAP_EXCEPTION_NESTED_FLAG:
+		kvm->arch.exception_nested_flag_enabled = cap->args[0];
+		r = 0;
+		break;
 	case KVM_CAP_X86_TRIPLE_FAULT_EVENT:
 		kvm->arch.triple_fault_event = cap->args[0];
 		r = 0;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 52f6000ab020..ec3cc37b9373 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -963,6 +963,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
 #define KVM_CAP_GUEST_MEMFD_FLAGS 244
+#define KVM_CAP_EXCEPTION_NESTED_FLAG 245
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 15/22] KVM: x86: Mark CR4.FRED as not reserved
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (13 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 14/22] KVM: x86: Save/restore the nested flag of an exception Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-19  7:26   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs() Xin Li (Intel)
                   ` (8 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

The CR4.FRED bit, i.e., CR4[32], is no longer a reserved bit when the
guest CPU capabilities include FRED, i.e., when:
  1) All of the KVM support for FRED is in place.
  2) The guest enumerates FRED.

Otherwise it remains a reserved bit.
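
A sketch of the observable effect, in terms of the reserved-bit check
this patch extends (names per arch/x86/kvm/x86.h):

    /* Recomputed when guest CPUID changes: X86_CR4_FRED (bit 32) stays
     * in the mask unless the guest enumerates FRED. */
    vcpu->arch.cr4_guest_rsvd_bits =
        __cr4_reserved_bits(guest_cpu_cap_has, vcpu);

    /* Any CR4 write (guest MOV-to-CR4 or host-initiated KVM_SET_SREGS)
     * with a reserved bit set is then rejected. */
    if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
        return false;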

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Change in v4:
* Rebase on top of "guest_cpu_cap".

Change in v3:
* Don't allow CR4.FRED=1 before all of FRED KVM support is in place
  (Sean Christopherson).
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/x86.h              | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5fff22d837aa..558f260a1afd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -142,7 +142,7 @@
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
 			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
-			  | X86_CR4_LAM_SUP | X86_CR4_CET))
+			  | X86_CR4_LAM_SUP | X86_CR4_CET | X86_CR4_FRED))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 4f5d12d7136e..e9c6f304b02e 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -687,6 +687,8 @@ static inline bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (!__cpu_has(__c, X86_FEATURE_SHSTK) &&       \
 	    !__cpu_has(__c, X86_FEATURE_IBT))           \
 		__reserved_bits |= X86_CR4_CET;         \
+	if (!__cpu_has(__c, X86_FEATURE_FRED))          \
+		__reserved_bits |= X86_CR4_FRED;        \
 	__reserved_bits;                                \
 })
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs()
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (14 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 15/22] KVM: x86: Mark CR4.FRED as not reserved Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-19  7:40   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 17/22] KVM: x86: Advertise support for FRED Xin Li (Intel)
                   ` (7 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Add FRED related VMCS fields to dump_vmcs() to dump FRED context.

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v5:
* Read guest FRED RSP0 with vmx_read_guest_fred_rsp0() (Sean).
* Add TB from Xuelian Guo.

Change in v3:
* Use (vmentry_ctrl & VM_ENTRY_LOAD_IA32_FRED) instead of is_fred_enabled()
  (Chao Gao).

Changes in v2:
* Use kvm_cpu_cap_has() instead of cpu_feature_enabled() (Chao Gao).
* Dump guest FRED states only if guest has FRED enabled (Nikolay Borisov).
---
 arch/x86/kvm/vmx/vmx.c | 43 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 34e057f65513..04442f869abb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1398,6 +1398,9 @@ static void vmx_write_guest_fred_rsp0(struct vcpu_vmx *vmx, u64 data)
 	vmx_write_guest_host_msr(vmx, MSR_IA32_FRED_RSP0, data,
 				 &vmx->msr_guest_fred_rsp0);
 }
+#else
+/* Make sure it builds on 32-bit */
+static u64 vmx_read_guest_fred_rsp0(struct vcpu_vmx *vmx) { return 0; }
 #endif
 
 static void grow_ple_window(struct kvm_vcpu *vcpu)
@@ -6460,7 +6463,7 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u32 vmentry_ctl, vmexit_ctl;
 	u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control;
-	u64 tertiary_exec_control;
+	u64 tertiary_exec_control, secondary_vmexit_ctl;
 	unsigned long cr4;
 	int efer_slot;
 
@@ -6471,6 +6474,8 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 
 	vmentry_ctl = vmcs_read32(VM_ENTRY_CONTROLS);
 	vmexit_ctl = vmcs_read32(VM_EXIT_CONTROLS);
+	secondary_vmexit_ctl = cpu_has_secondary_vmexit_ctrls() ?
+			       vmcs_read64(SECONDARY_VM_EXIT_CONTROLS) : 0;
 	cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
 	pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
 	cr4 = vmcs_readl(GUEST_CR4);
@@ -6517,6 +6522,16 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	vmx_dump_sel("LDTR:", GUEST_LDTR_SELECTOR);
 	vmx_dump_dtsel("IDTR:", GUEST_IDTR_LIMIT);
 	vmx_dump_sel("TR:  ", GUEST_TR_SELECTOR);
+	if (vmentry_ctl & VM_ENTRY_LOAD_IA32_FRED)
+		pr_err("FRED guest: config=0x%016llx, stack_levels=0x%016llx\n"
+		       "RSP0=0x%016llx, RSP1=0x%016llx\n"
+		       "RSP2=0x%016llx, RSP3=0x%016llx\n",
+		       vmcs_read64(GUEST_IA32_FRED_CONFIG),
+		       vmcs_read64(GUEST_IA32_FRED_STKLVLS),
+		       vmx_read_guest_fred_rsp0(vmx),
+		       vmcs_read64(GUEST_IA32_FRED_RSP1),
+		       vmcs_read64(GUEST_IA32_FRED_RSP2),
+		       vmcs_read64(GUEST_IA32_FRED_RSP3));
 	efer_slot = vmx_find_loadstore_msr_slot(&vmx->msr_autoload.guest, MSR_EFER);
 	if (vmentry_ctl & VM_ENTRY_LOAD_IA32_EFER)
 		pr_err("EFER= 0x%016llx\n", vmcs_read64(GUEST_IA32_EFER));
@@ -6568,6 +6583,16 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	       vmcs_readl(HOST_TR_BASE));
 	pr_err("GDTBase=%016lx IDTBase=%016lx\n",
 	       vmcs_readl(HOST_GDTR_BASE), vmcs_readl(HOST_IDTR_BASE));
+	if (vmexit_ctl & SECONDARY_VM_EXIT_LOAD_IA32_FRED)
+		pr_err("FRED host: config=0x%016llx, stack_levels=0x%016llx\n"
+		       "RSP0=0x%016lx, RSP1=0x%016llx\n"
+		       "RSP2=0x%016llx, RSP3=0x%016llx\n",
+		       vmcs_read64(HOST_IA32_FRED_CONFIG),
+		       vmcs_read64(HOST_IA32_FRED_STKLVLS),
+		       (unsigned long)task_stack_page(current) + THREAD_SIZE,
+		       vmcs_read64(HOST_IA32_FRED_RSP1),
+		       vmcs_read64(HOST_IA32_FRED_RSP2),
+		       vmcs_read64(HOST_IA32_FRED_RSP3));
 	pr_err("CR0=%016lx CR3=%016lx CR4=%016lx\n",
 	       vmcs_readl(HOST_CR0), vmcs_readl(HOST_CR3),
 	       vmcs_readl(HOST_CR4));
@@ -6593,25 +6618,29 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	pr_err("*** Control State ***\n");
 	pr_err("CPUBased=0x%08x SecondaryExec=0x%08x TertiaryExec=0x%016llx\n",
 	       cpu_based_exec_ctrl, secondary_exec_control, tertiary_exec_control);
-	pr_err("PinBased=0x%08x EntryControls=%08x ExitControls=%08x\n",
-	       pin_based_exec_ctrl, vmentry_ctl, vmexit_ctl);
+	pr_err("PinBased=0x%08x EntryControls=0x%08x\n",
+	       pin_based_exec_ctrl, vmentry_ctl);
+	pr_err("ExitControls=0x%08x SecondaryExitControls=0x%016llx\n",
+	       vmexit_ctl, secondary_vmexit_ctl);
 	pr_err("ExceptionBitmap=%08x PFECmask=%08x PFECmatch=%08x\n",
 	       vmcs_read32(EXCEPTION_BITMAP),
 	       vmcs_read32(PAGE_FAULT_ERROR_CODE_MASK),
 	       vmcs_read32(PAGE_FAULT_ERROR_CODE_MATCH));
-	pr_err("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n",
+	pr_err("VMEntry: intr_info=%08x errcode=%08x ilen=%08x event_data=%016llx\n",
 	       vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
 	       vmcs_read32(VM_ENTRY_EXCEPTION_ERROR_CODE),
-	       vmcs_read32(VM_ENTRY_INSTRUCTION_LEN));
+	       vmcs_read32(VM_ENTRY_INSTRUCTION_LEN),
+	       kvm_cpu_cap_has(X86_FEATURE_FRED) ? vmcs_read64(INJECTED_EVENT_DATA) : 0);
 	pr_err("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n",
 	       vmcs_read32(VM_EXIT_INTR_INFO),
 	       vmcs_read32(VM_EXIT_INTR_ERROR_CODE),
 	       vmcs_read32(VM_EXIT_INSTRUCTION_LEN));
 	pr_err("        reason=%08x qualification=%016lx\n",
 	       vmcs_read32(VM_EXIT_REASON), vmcs_readl(EXIT_QUALIFICATION));
-	pr_err("IDTVectoring: info=%08x errcode=%08x\n",
+	pr_err("IDTVectoring: info=%08x errcode=%08x event_data=%016llx\n",
 	       vmcs_read32(IDT_VECTORING_INFO_FIELD),
-	       vmcs_read32(IDT_VECTORING_ERROR_CODE));
+	       vmcs_read32(IDT_VECTORING_ERROR_CODE),
+	       kvm_cpu_cap_has(X86_FEATURE_FRED) ? vmcs_read64(ORIGINAL_EVENT_DATA) : 0);
 	pr_err("TSC Offset = 0x%016llx\n", vmcs_read64(TSC_OFFSET));
 	if (secondary_exec_control & SECONDARY_EXEC_TSC_SCALING)
 		pr_err("TSC Multiplier = 0x%016llx\n",
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 17/22] KVM: x86: Advertise support for FRED
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (15 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs() Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-12  7:30   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 18/22] KVM: nVMX: Enable support for secondary VM exit controls Xin Li (Intel)
                   ` (6 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Advertise support for FRED to userspace now that all changes required to
enable FRED in a KVM guest are in place.

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Don't advertise FRED/LKGS together, LKGS can be advertised as an
  independent feature (Sean).
* Add TB from Xuelian Guo.
---
 arch/x86/kvm/cpuid.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d563a948318b..0bf97b8a3216 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -1014,6 +1014,7 @@ void kvm_set_cpu_caps(void)
 		F(FSRS),
 		F(FSRC),
 		F(WRMSRNS),
+		X86_64_F(FRED),
 		X86_64_F(LKGS),
 		F(AMX_FP16),
 		F(AVX_IFMA),
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 18/22] KVM: nVMX: Enable support for secondary VM exit controls
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (16 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 17/22] KVM: x86: Advertise support for FRED Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-12 13:42   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context Xin Li (Intel)
                   ` (5 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Add support for secondary VM exit controls in nested VMX to facilitate
future FRED integration.

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v8:
* Relocate secondary_vm_exit_controls to the last u64 padding field.
* Remove the change to Documentation/virt/kvm/x86/nested-vmx.rst.

Changes in v5:
* Allow writing MSR_IA32_VMX_EXIT_CTLS2 (Sean).
* Add TB from Xuelian Guo.

Change in v3:
* Read secondary VM exit controls from vmcs_conf instead of the hardware
  MSR MSR_IA32_VMX_EXIT_CTLS2 to avoid advertising features to L1 that KVM
  itself doesn't support, e.g. because the expected entry+exit pairs aren't
  supported. (Sean Christopherson)
---
 arch/x86/kvm/vmx/capabilities.h |  1 +
 arch/x86/kvm/vmx/nested.c       | 26 +++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmcs12.c       |  1 +
 arch/x86/kvm/vmx/vmcs12.h       |  3 ++-
 arch/x86/kvm/x86.h              |  2 +-
 5 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 651507627ef3..f390f9f883c3 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -34,6 +34,7 @@ struct nested_vmx_msrs {
 	u32 pinbased_ctls_high;
 	u32 exit_ctls_low;
 	u32 exit_ctls_high;
+	u64 secondary_exit_ctls;
 	u32 entry_ctls_low;
 	u32 entry_ctls_high;
 	u32 misc_low;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b0cd745518b4..cbb682424a5b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1534,6 +1534,11 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 			return -EINVAL;
 		vmx->nested.msrs.vmfunc_controls = data;
 		return 0;
+	case MSR_IA32_VMX_EXIT_CTLS2:
+		if (data & ~vmcs_config.nested.secondary_exit_ctls)
+			return -EINVAL;
+		vmx->nested.msrs.secondary_exit_ctls = data;
+		return 0;
 	default:
 		/*
 		 * The rest of the VMX capability MSRs do not support restore.
@@ -1573,6 +1578,9 @@ int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata)
 		if (msr_index == MSR_IA32_VMX_EXIT_CTLS)
 			*pdata |= VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR;
 		break;
+	case MSR_IA32_VMX_EXIT_CTLS2:
+		*pdata = msrs->secondary_exit_ctls;
+		break;
 	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
 	case MSR_IA32_VMX_ENTRY_CTLS:
 		*pdata = vmx_control_msr(
@@ -2514,6 +2522,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		exec_control &= ~VM_EXIT_LOAD_IA32_EFER;
 	vm_exit_controls_set(vmx, exec_control);
 
+	if (exec_control & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS) {
+		exec_control = __secondary_vm_exit_controls_get(vmcs01);
+		secondary_vm_exit_controls_set(vmx, exec_control);
+	}
+
 	/*
 	 * Interrupt/Exception Fields
 	 */
@@ -7128,7 +7141,8 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
 		VM_EXIT_HOST_ADDR_SPACE_SIZE |
 #endif
 		VM_EXIT_LOAD_IA32_PAT | VM_EXIT_SAVE_IA32_PAT |
-		VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_CET_STATE;
+		VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_CET_STATE |
+		VM_EXIT_ACTIVATE_SECONDARY_CONTROLS;
 	msrs->exit_ctls_high |=
 		VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR |
 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
@@ -7141,6 +7155,16 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
 
 	/* We support free control of debug control saving. */
 	msrs->exit_ctls_low &= ~VM_EXIT_SAVE_DEBUG_CONTROLS;
+
+	if (msrs->exit_ctls_high & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS) {
+		msrs->secondary_exit_ctls = vmcs_conf->vmexit_2nd_ctrl;
+		/*
+		 * As the secondary VM exit control is always loaded, do not
+		 * advertise any feature in it to nVMX until its nVMX support
+		 * is ready.
+		 */
+		msrs->secondary_exit_ctls &= 0;
+	}
 }
 
 static void nested_vmx_setup_entry_ctls(struct vmcs_config *vmcs_conf,
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index 4233b5ca9461..3b01175f392a 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -66,6 +66,7 @@ const unsigned short vmcs12_field_offsets[] = {
 	FIELD64(HOST_IA32_PAT, host_ia32_pat),
 	FIELD64(HOST_IA32_EFER, host_ia32_efer),
 	FIELD64(HOST_IA32_PERF_GLOBAL_CTRL, host_ia32_perf_global_ctrl),
+	FIELD64(SECONDARY_VM_EXIT_CONTROLS, secondary_vm_exit_controls),
 	FIELD(PIN_BASED_VM_EXEC_CONTROL, pin_based_vm_exec_control),
 	FIELD(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control),
 	FIELD(EXCEPTION_BITMAP, exception_bitmap),
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 4ad6b16525b9..fa5306dc0311 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -71,7 +71,7 @@ struct __packed vmcs12 {
 	u64 pml_address;
 	u64 encls_exiting_bitmap;
 	u64 tsc_multiplier;
-	u64 padding64[1]; /* room for future expansion */
+	u64 secondary_vm_exit_controls;
 	/*
 	 * To allow migration of L1 (complete with its L2 guests) between
 	 * machines of different natural widths (32 or 64 bit), we cannot have
@@ -261,6 +261,7 @@ static inline void vmx_check_vmcs12_offsets(void)
 	CHECK_OFFSET(pml_address, 312);
 	CHECK_OFFSET(encls_exiting_bitmap, 320);
 	CHECK_OFFSET(tsc_multiplier, 328);
+	CHECK_OFFSET(secondary_vm_exit_controls, 336);
 	CHECK_OFFSET(cr0_guest_host_mask, 344);
 	CHECK_OFFSET(cr4_guest_host_mask, 352);
 	CHECK_OFFSET(cr0_read_shadow, 360);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index e9c6f304b02e..1576f192a647 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -95,7 +95,7 @@ do {											\
  * associated feature that KVM supports for nested virtualization.
  */
 #define KVM_FIRST_EMULATED_VMX_MSR	MSR_IA32_VMX_BASIC
-#define KVM_LAST_EMULATED_VMX_MSR	MSR_IA32_VMX_VMFUNC
+#define KVM_LAST_EMULATED_VMX_MSR	MSR_IA32_VMX_EXIT_CTLS2
 
 #define KVM_DEFAULT_PLE_GAP		128
 #define KVM_VMX_DEFAULT_PLE_WINDOW	4096
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (17 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 18/22] KVM: nVMX: Enable support for secondary VM exit controls Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-12-02  6:32   ` Chao Gao
  2025-12-08 22:37   ` Sean Christopherson
  2025-10-26 20:19 ` [PATCH v9 20/22] KVM: nVMX: Validate FRED-related VMCS fields Xin Li (Intel)
                   ` (4 subsequent siblings)
  23 siblings, 2 replies; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Extend nested VMX context management to include FRED-related VMCS fields,
enabling proper handling of FRED state during nested virtualization.

Because KVM always sets SECONDARY_VM_EXIT_SAVE_IA32_FRED, FRED MSRs are
always saved to vmcs02.  However, an L1 VMM may choose to clear this bit,
i.e., not to save FRED MSRs to vmcs12.  This is not a problem when the L1
VMM sets SECONDARY_VM_EXIT_LOAD_IA32_FRED, as KVM then immediately loads
the host FRED MSRs of vmcs12 into the guest FRED MSRs of vmcs01.  However,
if the L1 VMM clears SECONDARY_VM_EXIT_LOAD_IA32_FRED, KVM should retain
the current FRED MSR values to run the L1 VMM.

To propagate guest FRED MSRs from vmcs02 to vmcs01, save them in
sync_vmcs02_to_vmcs12() regardless of whether
SECONDARY_VM_EXIT_SAVE_IA32_FRED is set in vmcs12.  Then, use the saved
values to set guest FRED MSRs in vmcs01 within load_vmcs12_host_state()
when !nested_cpu_load_host_fred_state().
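
In sketch form, the VM-exit handling added below boils down to the
following (only IA32_FRED_CONFIG is shown; the other seven FRED MSRs are
handled identically, and the guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)
guard is omitted for brevity):

	/* On L2 => L1 VM-exit, vmcs02 always holds L2's FRED state. */
	vmx->nested.fred_msr_at_vmexit.fred_config =
		vmcs_read64(GUEST_IA32_FRED_CONFIG);

	/* Propagate to vmcs12 only when L1 asked to save guest state. */
	if (nested_cpu_save_guest_fred_state(vmcs12))
		vmcs12->guest_ia32_fred_config =
			vmx->nested.fred_msr_at_vmexit.fred_config;

	/* Load L1's host state if requested, otherwise retain L2's. */
	if (nested_cpu_load_host_fred_state(vmcs12))
		vmcs_write64(GUEST_IA32_FRED_CONFIG,
			     vmcs12->host_ia32_fred_config);
	else
		vmcs_write64(GUEST_IA32_FRED_CONFIG,
			     vmx->nested.fred_msr_at_vmexit.fred_config);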

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Changes in v9:
* Rebase to kvm-x86/next.
* Guard FRED state save/restore with guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)
  (syzbot & Chao).

Changes in v8:
* Make the newly added FRED fields 64-bit aligned in vmcs12 (Isaku).
* Remove the change to Documentation/virt/kvm/x86/nested-vmx.rst.

Change in v6:
* Handle FRED MSR pre-vmenter save/restore (Chao Gao).
* Save FRED MSRs of vmcs02 at VM-Exit even if an L1 VMM clears
  SECONDARY_VM_EXIT_SAVE_IA32_FRED.
* Save FRED MSRs in sync_vmcs02_to_vmcs12() instead of its rare variant,
  sync_vmcs02_to_vmcs12_rare().

Change in v5:
* Add TB from Xuelian Guo.

Changes in v4:
* Advertise VMX nested exception as if the CPU supports it (Chao Gao).
* Split FRED state management controls (Chao Gao).

Changes in v3:
* Add and use nested_cpu_has_fred(vmcs12) because vmcs02 should be set
  from vmcs12 if and only if the field is enabled in L1's VMX config
  (Sean Christopherson).
* Fix coding style issues (Sean Christopherson).

Changes in v2:
* Remove hyperv TLFS related changes (Jeremi Piotrowski).
* Use kvm_cpu_cap_has() instead of cpu_feature_enabled() (Chao Gao).
---
 arch/x86/kvm/vmx/capabilities.h       |   5 ++
 arch/x86/kvm/vmx/nested.c             | 118 +++++++++++++++++++++++++-
 arch/x86/kvm/vmx/nested.h             |  22 +++++
 arch/x86/kvm/vmx/vmcs12.c             |  18 ++++
 arch/x86/kvm/vmx/vmcs12.h             |  37 ++++++++
 arch/x86/kvm/vmx/vmcs_shadow_fields.h |   4 +
 arch/x86/kvm/vmx/vmx.h                |  41 +++++++++
 7 files changed, 243 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index f390f9f883c3..5eba2530ffb4 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -80,6 +80,11 @@ static inline bool cpu_has_vmx_basic_no_hw_errcode_cc(void)
 	return	vmcs_config.basic & VMX_BASIC_NO_HW_ERROR_CODE_CC;
 }
 
+static inline bool cpu_has_vmx_nested_exception(void)
+{
+	return vmcs_config.basic & VMX_BASIC_NESTED_EXCEPTION;
+}
+
 static inline bool cpu_has_virtual_nmis(void)
 {
 	return vmcs_config.pin_based_exec_ctrl & PIN_BASED_VIRTUAL_NMIS &&
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index cbb682424a5b..63cdfffba58b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -708,6 +708,9 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_IA32_FRED_RSP0, MSR_TYPE_RW);
 #endif
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
@@ -1294,9 +1297,11 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
 	const u64 feature_bits = VMX_BASIC_DUAL_MONITOR_TREATMENT |
 				 VMX_BASIC_INOUT |
 				 VMX_BASIC_TRUE_CTLS |
-				 VMX_BASIC_NO_HW_ERROR_CODE_CC;
+				 VMX_BASIC_NO_HW_ERROR_CODE_CC |
+				 VMX_BASIC_NESTED_EXCEPTION;
 
-	const u64 reserved_bits = GENMASK_ULL(63, 57) |
+	const u64 reserved_bits = GENMASK_ULL(63, 59) |
+				  BIT_ULL(57) |
 				  GENMASK_ULL(47, 45) |
 				  BIT_ULL(31);
 
@@ -2539,6 +2544,8 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 			     vmcs12->vm_entry_instruction_len);
 		vmcs_write32(GUEST_INTERRUPTIBILITY_INFO,
 			     vmcs12->guest_interruptibility_info);
+		if (cpu_has_vmx_fred())
+			vmcs_write64(INJECTED_EVENT_DATA, vmcs12->injected_event_data);
 		vmx->loaded_vmcs->nmi_known_unmasked =
 			!(vmcs12->guest_interruptibility_info & GUEST_INTR_STATE_NMI);
 	} else {
@@ -2693,6 +2700,18 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 				     vmcs12->guest_ssp, vmcs12->guest_ssp_tbl);
 
 	set_cr4_guest_host_mask(vmx);
+
+	if (guest_cpu_cap_has(&vmx->vcpu, X86_FEATURE_FRED) &&
+	    nested_cpu_load_guest_fred_state(vmcs12)) {
+		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->guest_ia32_fred_config);
+		vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->guest_ia32_fred_rsp1);
+		vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->guest_ia32_fred_rsp2);
+		vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->guest_ia32_fred_rsp3);
+		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->guest_ia32_fred_stklvls);
+		vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->guest_ia32_fred_ssp1);
+		vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->guest_ia32_fred_ssp2);
+		vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->guest_ia32_fred_ssp3);
+	}
 }
 
 /*
@@ -2759,6 +2778,18 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmcs_write64(GUEST_IA32_PAT, vcpu->arch.pat);
 	}
 
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED) &&
+	    (!vmx->nested.nested_run_pending || !nested_cpu_load_guest_fred_state(vmcs12))) {
+		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmx->nested.pre_vmenter_fred_config);
+		vmcs_write64(GUEST_IA32_FRED_RSP1, vmx->nested.pre_vmenter_fred_rsp1);
+		vmcs_write64(GUEST_IA32_FRED_RSP2, vmx->nested.pre_vmenter_fred_rsp2);
+		vmcs_write64(GUEST_IA32_FRED_RSP3, vmx->nested.pre_vmenter_fred_rsp3);
+		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.pre_vmenter_fred_stklvls);
+		vmcs_write64(GUEST_IA32_FRED_SSP1, vmx->nested.pre_vmenter_fred_ssp1);
+		vmcs_write64(GUEST_IA32_FRED_SSP2, vmx->nested.pre_vmenter_fred_ssp2);
+		vmcs_write64(GUEST_IA32_FRED_SSP3, vmx->nested.pre_vmenter_fred_ssp3);
+	}
+
 	vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
 			vcpu->arch.l1_tsc_offset,
 			vmx_get_l2_tsc_offset(vcpu),
@@ -3631,6 +3662,18 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 				    &vmx->nested.pre_vmenter_ssp,
 				    &vmx->nested.pre_vmenter_ssp_tbl);
 
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED) &&
+	    (!vmx->nested.nested_run_pending || !nested_cpu_load_guest_fred_state(vmcs12))) {
+		vmx->nested.pre_vmenter_fred_config = vmcs_read64(GUEST_IA32_FRED_CONFIG);
+		vmx->nested.pre_vmenter_fred_rsp1 = vmcs_read64(GUEST_IA32_FRED_RSP1);
+		vmx->nested.pre_vmenter_fred_rsp2 = vmcs_read64(GUEST_IA32_FRED_RSP2);
+		vmx->nested.pre_vmenter_fred_rsp3 = vmcs_read64(GUEST_IA32_FRED_RSP3);
+		vmx->nested.pre_vmenter_fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
+		vmx->nested.pre_vmenter_fred_ssp1 = vmcs_read64(GUEST_IA32_FRED_SSP1);
+		vmx->nested.pre_vmenter_fred_ssp2 = vmcs_read64(GUEST_IA32_FRED_SSP2);
+		vmx->nested.pre_vmenter_fred_ssp3 = vmcs_read64(GUEST_IA32_FRED_SSP3);
+	}
+
 	/*
 	 * Overwrite vmcs01.GUEST_CR3 with L1's CR3 if EPT is disabled.  In the
 	 * event of a "late" VM-Fail, i.e. a VM-Fail detected by hardware but
@@ -3934,6 +3977,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 	u32 idt_vectoring;
 	unsigned int nr;
 
+	vmcs12->original_event_data = 0;
+
 	/*
 	 * Per the SDM, VM-Exits due to double and triple faults are never
 	 * considered to occur during event delivery, even if the double/triple
@@ -3972,6 +4017,13 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
 				vcpu->arch.exception.error_code;
 		}
 
+		if ((vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) &&
+		    (vmcs12->guest_cr4 & X86_CR4_FRED) &&
+		    (vcpu->arch.exception.nested))
+			idt_vectoring |= VECTORING_INFO_NESTED_EXCEPTION_MASK;
+
+		vmcs12->original_event_data = vcpu->arch.exception.event_data;
+
 		vmcs12->idt_vectoring_info_field = idt_vectoring;
 	} else if (vcpu->arch.nmi_injected) {
 		vmcs12->idt_vectoring_info_field =
@@ -4714,6 +4766,28 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	vmcs_read_cet_state(&vmx->vcpu, &vmcs12->guest_s_cet,
 			    &vmcs12->guest_ssp,
 			    &vmcs12->guest_ssp_tbl);
+
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)) {
+		vmx->nested.fred_msr_at_vmexit.fred_config = vmcs_read64(GUEST_IA32_FRED_CONFIG);
+		vmx->nested.fred_msr_at_vmexit.fred_rsp1 = vmcs_read64(GUEST_IA32_FRED_RSP1);
+		vmx->nested.fred_msr_at_vmexit.fred_rsp2 = vmcs_read64(GUEST_IA32_FRED_RSP2);
+		vmx->nested.fred_msr_at_vmexit.fred_rsp3 = vmcs_read64(GUEST_IA32_FRED_RSP3);
+		vmx->nested.fred_msr_at_vmexit.fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
+		vmx->nested.fred_msr_at_vmexit.fred_ssp1 = vmcs_read64(GUEST_IA32_FRED_SSP1);
+		vmx->nested.fred_msr_at_vmexit.fred_ssp2 = vmcs_read64(GUEST_IA32_FRED_SSP2);
+		vmx->nested.fred_msr_at_vmexit.fred_ssp3 = vmcs_read64(GUEST_IA32_FRED_SSP3);
+
+		if (nested_cpu_save_guest_fred_state(vmcs12)) {
+			vmcs12->guest_ia32_fred_config = vmx->nested.fred_msr_at_vmexit.fred_config;
+			vmcs12->guest_ia32_fred_rsp1 = vmx->nested.fred_msr_at_vmexit.fred_rsp1;
+			vmcs12->guest_ia32_fred_rsp2 = vmx->nested.fred_msr_at_vmexit.fred_rsp2;
+			vmcs12->guest_ia32_fred_rsp3 = vmx->nested.fred_msr_at_vmexit.fred_rsp3;
+			vmcs12->guest_ia32_fred_stklvls = vmx->nested.fred_msr_at_vmexit.fred_stklvls;
+			vmcs12->guest_ia32_fred_ssp1 = vmx->nested.fred_msr_at_vmexit.fred_ssp1;
+			vmcs12->guest_ia32_fred_ssp2 = vmx->nested.fred_msr_at_vmexit.fred_ssp2;
+			vmcs12->guest_ia32_fred_ssp3 = vmx->nested.fred_msr_at_vmexit.fred_ssp3;
+		}
+	}
 }
 
 /*
@@ -4758,6 +4832,21 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 
 		vmcs12->vm_exit_intr_info = exit_intr_info;
 		vmcs12->vm_exit_instruction_len = exit_insn_len;
+
+		/*
+		 * When there is a valid original event, the exiting event is a nested
+		 * event during delivery of the earlier original event.
+		 *
+		 * FRED event delivery reflects this relationship by setting the value
+		 * of the nested exception bit of VM-exit interruption information
+		 * (aka exiting-event identification) to that of the valid bit of the
+		 * IDT-vectoring information (aka original-event identification).
+		 */
+		if ((vmcs12->idt_vectoring_info_field & VECTORING_INFO_VALID_MASK) &&
+		    (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) &&
+		    (vmcs12->guest_cr4 & X86_CR4_FRED))
+			vmcs12->vm_exit_intr_info |= INTR_INFO_NESTED_EXCEPTION_MASK;
+
 		vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
 
 		/*
@@ -4786,6 +4875,7 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 				   struct vmcs12 *vmcs12)
 {
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	enum vm_entry_failure_code ignored;
 	struct kvm_segment seg;
 
@@ -4860,6 +4950,28 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		WARN_ON_ONCE(__kvm_emulate_msr_write(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
 						     vmcs12->host_ia32_perf_global_ctrl));
 
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)) {
+		if (nested_cpu_load_host_fred_state(vmcs12)) {
+			vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->host_ia32_fred_config);
+			vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->host_ia32_fred_rsp1);
+			vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->host_ia32_fred_rsp2);
+			vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->host_ia32_fred_rsp3);
+			vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->host_ia32_fred_stklvls);
+			vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->host_ia32_fred_ssp1);
+			vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->host_ia32_fred_ssp2);
+			vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->host_ia32_fred_ssp3);
+		} else {
+			vmcs_write64(GUEST_IA32_FRED_CONFIG, vmx->nested.fred_msr_at_vmexit.fred_config);
+			vmcs_write64(GUEST_IA32_FRED_RSP1, vmx->nested.fred_msr_at_vmexit.fred_rsp1);
+			vmcs_write64(GUEST_IA32_FRED_RSP2, vmx->nested.fred_msr_at_vmexit.fred_rsp2);
+			vmcs_write64(GUEST_IA32_FRED_RSP3, vmx->nested.fred_msr_at_vmexit.fred_rsp3);
+			vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.fred_msr_at_vmexit.fred_stklvls);
+			vmcs_write64(GUEST_IA32_FRED_SSP1, vmx->nested.fred_msr_at_vmexit.fred_ssp1);
+			vmcs_write64(GUEST_IA32_FRED_SSP2, vmx->nested.fred_msr_at_vmexit.fred_ssp2);
+			vmcs_write64(GUEST_IA32_FRED_SSP3, vmx->nested.fred_msr_at_vmexit.fred_ssp3);
+		}
+	}
+
 	/* Set L1 segment info according to Intel SDM
 	    27.5.2 Loading Host Segment and Descriptor-Table Registers */
 	seg = (struct kvm_segment) {
@@ -7339,6 +7451,8 @@ static void nested_vmx_setup_basic(struct nested_vmx_msrs *msrs)
 		msrs->basic |= VMX_BASIC_INOUT;
 	if (cpu_has_vmx_basic_no_hw_errcode_cc())
 		msrs->basic |= VMX_BASIC_NO_HW_ERROR_CODE_CC;
+	if (cpu_has_vmx_nested_exception())
+		msrs->basic |= VMX_BASIC_NESTED_EXCEPTION;
 }
 
 static void nested_vmx_setup_cr_fixed(struct nested_vmx_msrs *msrs)
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 983484d42ebf..a99d3d83d58e 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -249,6 +249,11 @@ static inline bool nested_cpu_has_save_preemption_timer(struct vmcs12 *vmcs12)
 	    VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
 }
 
+static inline bool nested_cpu_has_secondary_vm_exit_controls(struct vmcs12 *vmcs12)
+{
+	return vmcs12->vm_exit_controls & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS;
+}
+
 static inline bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
 {
 	return nested_cpu_has_nmi_exiting(get_vmcs12(vcpu));
@@ -269,6 +274,23 @@ static inline bool nested_cpu_has_encls_exit(struct vmcs12 *vmcs12)
 	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING);
 }
 
+static inline bool nested_cpu_load_guest_fred_state(struct vmcs12 *vmcs12)
+{
+	return vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_FRED;
+}
+
+static inline bool nested_cpu_save_guest_fred_state(struct vmcs12 *vmcs12)
+{
+	return nested_cpu_has_secondary_vm_exit_controls(vmcs12) &&
+	       vmcs12->secondary_vm_exit_controls & SECONDARY_VM_EXIT_SAVE_IA32_FRED;
+}
+
+static inline bool nested_cpu_load_host_fred_state(struct vmcs12 *vmcs12)
+{
+	return nested_cpu_has_secondary_vm_exit_controls(vmcs12) &&
+	       vmcs12->secondary_vm_exit_controls & SECONDARY_VM_EXIT_LOAD_IA32_FRED;
+}
+
 /*
  * if fixed0[i] == 1: val[i] must be 1
  * if fixed1[i] == 0: val[i] must be 0
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index 3b01175f392a..9691e709061f 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -67,6 +67,24 @@ const unsigned short vmcs12_field_offsets[] = {
 	FIELD64(HOST_IA32_EFER, host_ia32_efer),
 	FIELD64(HOST_IA32_PERF_GLOBAL_CTRL, host_ia32_perf_global_ctrl),
 	FIELD64(SECONDARY_VM_EXIT_CONTROLS, secondary_vm_exit_controls),
+	FIELD64(INJECTED_EVENT_DATA, injected_event_data),
+	FIELD64(ORIGINAL_EVENT_DATA, original_event_data),
+	FIELD64(GUEST_IA32_FRED_CONFIG, guest_ia32_fred_config),
+	FIELD64(GUEST_IA32_FRED_RSP1, guest_ia32_fred_rsp1),
+	FIELD64(GUEST_IA32_FRED_RSP2, guest_ia32_fred_rsp2),
+	FIELD64(GUEST_IA32_FRED_RSP3, guest_ia32_fred_rsp3),
+	FIELD64(GUEST_IA32_FRED_STKLVLS, guest_ia32_fred_stklvls),
+	FIELD64(GUEST_IA32_FRED_SSP1, guest_ia32_fred_ssp1),
+	FIELD64(GUEST_IA32_FRED_SSP2, guest_ia32_fred_ssp2),
+	FIELD64(GUEST_IA32_FRED_SSP3, guest_ia32_fred_ssp3),
+	FIELD64(HOST_IA32_FRED_CONFIG, host_ia32_fred_config),
+	FIELD64(HOST_IA32_FRED_RSP1, host_ia32_fred_rsp1),
+	FIELD64(HOST_IA32_FRED_RSP2, host_ia32_fred_rsp2),
+	FIELD64(HOST_IA32_FRED_RSP3, host_ia32_fred_rsp3),
+	FIELD64(HOST_IA32_FRED_STKLVLS, host_ia32_fred_stklvls),
+	FIELD64(HOST_IA32_FRED_SSP1, host_ia32_fred_ssp1),
+	FIELD64(HOST_IA32_FRED_SSP2, host_ia32_fred_ssp2),
+	FIELD64(HOST_IA32_FRED_SSP3, host_ia32_fred_ssp3),
 	FIELD(PIN_BASED_VM_EXEC_CONTROL, pin_based_vm_exec_control),
 	FIELD(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control),
 	FIELD(EXCEPTION_BITMAP, exception_bitmap),
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index fa5306dc0311..051016a3afba 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -191,6 +191,25 @@ struct __packed vmcs12 {
 	u16 host_gs_selector;
 	u16 host_tr_selector;
 	u16 guest_pml_index;
+	u16 padding16[1]; /* align to 64-bit boundary */
+	u64 guest_ia32_fred_config;
+	u64 guest_ia32_fred_rsp1;
+	u64 guest_ia32_fred_rsp2;
+	u64 guest_ia32_fred_rsp3;
+	u64 guest_ia32_fred_stklvls;
+	u64 guest_ia32_fred_ssp1;
+	u64 guest_ia32_fred_ssp2;
+	u64 guest_ia32_fred_ssp3;
+	u64 host_ia32_fred_config;
+	u64 host_ia32_fred_rsp1;
+	u64 host_ia32_fred_rsp2;
+	u64 host_ia32_fred_rsp3;
+	u64 host_ia32_fred_stklvls;
+	u64 host_ia32_fred_ssp1;
+	u64 host_ia32_fred_ssp2;
+	u64 host_ia32_fred_ssp3;
+	u64 injected_event_data;
+	u64 original_event_data;
 };
 
 /*
@@ -373,6 +392,24 @@ static inline void vmx_check_vmcs12_offsets(void)
 	CHECK_OFFSET(host_gs_selector, 992);
 	CHECK_OFFSET(host_tr_selector, 994);
 	CHECK_OFFSET(guest_pml_index, 996);
+	CHECK_OFFSET(guest_ia32_fred_config, 1000);
+	CHECK_OFFSET(guest_ia32_fred_rsp1, 1008);
+	CHECK_OFFSET(guest_ia32_fred_rsp2, 1016);
+	CHECK_OFFSET(guest_ia32_fred_rsp3, 1024);
+	CHECK_OFFSET(guest_ia32_fred_stklvls, 1032);
+	CHECK_OFFSET(guest_ia32_fred_ssp1, 1040);
+	CHECK_OFFSET(guest_ia32_fred_ssp2, 1048);
+	CHECK_OFFSET(guest_ia32_fred_ssp3, 1056);
+	CHECK_OFFSET(host_ia32_fred_config, 1064);
+	CHECK_OFFSET(host_ia32_fred_rsp1, 1072);
+	CHECK_OFFSET(host_ia32_fred_rsp2, 1080);
+	CHECK_OFFSET(host_ia32_fred_rsp3, 1088);
+	CHECK_OFFSET(host_ia32_fred_stklvls, 1096);
+	CHECK_OFFSET(host_ia32_fred_ssp1, 1104);
+	CHECK_OFFSET(host_ia32_fred_ssp2, 1112);
+	CHECK_OFFSET(host_ia32_fred_ssp3, 1120);
+	CHECK_OFFSET(injected_event_data, 1128);
+	CHECK_OFFSET(original_event_data, 1136);
 }
 
 extern const unsigned short vmcs12_field_offsets[];
diff --git a/arch/x86/kvm/vmx/vmcs_shadow_fields.h b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
index cad128d1657b..da338327c2b3 100644
--- a/arch/x86/kvm/vmx/vmcs_shadow_fields.h
+++ b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
@@ -74,6 +74,10 @@ SHADOW_FIELD_RW(HOST_GS_BASE, host_gs_base)
 /* 64-bit */
 SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS, guest_physical_address)
 SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS_HIGH, guest_physical_address)
+SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA, original_event_data)
+SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA_HIGH, original_event_data)
+SHADOW_FIELD_RW(INJECTED_EVENT_DATA, injected_event_data)
+SHADOW_FIELD_RW(INJECTED_EVENT_DATA_HIGH, injected_event_data)
 
 #undef SHADOW_FIELD_RO
 #undef SHADOW_FIELD_RW
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 48a5ab12cccf..cedb714bc082 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -67,6 +67,37 @@ struct pt_desc {
 	struct pt_ctx guest;
 };
 
+/*
+ * Used to snapshot FRED MSRs that may NOT be saved to vmcs12, depending on
+ * the VM-Exit controls of vmcs12 as configured by the L1 VMM.
+ *
+ * FRED MSRs are *always* saved into vmcs02 because KVM always sets
+ * SECONDARY_VM_EXIT_SAVE_IA32_FRED.  However, an L1 VMM may choose to clear
+ * this bit, resulting in FRED MSRs not being propagated to vmcs12 from
+ * vmcs02.  When the L1 VMM sets SECONDARY_VM_EXIT_LOAD_IA32_FRED, this is
+ * not a problem, since KVM then immediately loads the host FRED MSRs of
+ * vmcs12 to the guest FRED MSRs of vmcs01.
+ *
+ * But if the L1 VMM clears SECONDARY_VM_EXIT_LOAD_IA32_FRED, KVM should
+ * retain the FRED MSRs, i.e., propagate the guest FRED MSRs of vmcs02 to
+ * the guest FRED MSRs of vmcs01.
+ *
+ * This structure stores guest FRED MSRs that an L1 VMM opts not to save
+ * during VM-Exits from L2 to L1.  These MSRs may still be retained for
+ * running the L1 VMM if SECONDARY_VM_EXIT_LOAD_IA32_FRED is cleared in
+ * vmcs12.
+ */
+struct fred_msr_at_vmexit {
+	u64 fred_config;
+	u64 fred_rsp1;
+	u64 fred_rsp2;
+	u64 fred_rsp3;
+	u64 fred_stklvls;
+	u64 fred_ssp1;
+	u64 fred_ssp2;
+	u64 fred_ssp3;
+};
+
 /*
  * The nested_vmx structure is part of vcpu_vmx, and holds information we need
  * for correct emulation of VMX (i.e., nested VMX) on this vcpu.
@@ -184,6 +215,16 @@ struct nested_vmx {
 	u64 pre_vmenter_s_cet;
 	u64 pre_vmenter_ssp;
 	u64 pre_vmenter_ssp_tbl;
+	u64 pre_vmenter_fred_config;
+	u64 pre_vmenter_fred_rsp1;
+	u64 pre_vmenter_fred_rsp2;
+	u64 pre_vmenter_fred_rsp3;
+	u64 pre_vmenter_fred_stklvls;
+	u64 pre_vmenter_fred_ssp1;
+	u64 pre_vmenter_fred_ssp2;
+	u64 pre_vmenter_fred_ssp3;
+
+	struct fred_msr_at_vmexit fred_msr_at_vmexit;
 
 	/* to migrate it to L1 if L2 writes to L1's CR8 directly */
 	int l1_tpr_threshold;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 20/22] KVM: nVMX: Validate FRED-related VMCS fields
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (18 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-13  3:00   ` Chao Gao
  2025-10-26 20:19 ` [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks Xin Li (Intel)
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Extend nested VMX field validation to include FRED-specific VMCS fields,
mirroring hardware behavior.

This enables support for nested FRED by ensuring control and guest/host
state fields are properly checked.
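
For example, each FRED RSP MSR is validated roughly as follows (a
condensed sketch of the checks in the diff; the fred_rsp_valid() helper
name is made up here, the real checks are open-coded):

	static bool fred_rsp_valid(struct kvm_vcpu *vcpu, u64 rsp)
	{
		/* Bits 5:0 are reserved: FRED stack tops are 64-byte aligned. */
		if (rsp & GENMASK_ULL(5, 0))
			return false;

		/* The stack address must also be canonical. */
		return !is_noncanonical_msr_address(rsp, vcpu);
	}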

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.
---
 arch/x86/kvm/vmx/nested.c | 117 +++++++++++++++++++++++++++++++++-----
 1 file changed, 104 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 63cdfffba58b..8682709d8759 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3030,6 +3030,8 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
 					  struct vmcs12 *vmcs12)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	bool fred_enabled = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) &&
+			    (vmcs12->guest_cr4 & X86_CR4_FRED);
 
 	if (CC(!vmx_control_verify(vmcs12->vm_entry_controls,
 				    vmx->nested.msrs.entry_ctls_low,
@@ -3047,22 +3049,11 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
 		u8 vector = intr_info & INTR_INFO_VECTOR_MASK;
 		u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK;
 		bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK;
+		bool has_nested_exception = vmx->nested.msrs.basic & VMX_BASIC_NESTED_EXCEPTION;
 		bool urg = nested_cpu_has2(vmcs12,
 					   SECONDARY_EXEC_UNRESTRICTED_GUEST);
 		bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE;
 
-		/* VM-entry interruption-info field: interruption type */
-		if (CC(intr_type == INTR_TYPE_RESERVED) ||
-		    CC(intr_type == INTR_TYPE_OTHER_EVENT &&
-		       !nested_cpu_supports_monitor_trap_flag(vcpu)))
-			return -EINVAL;
-
-		/* VM-entry interruption-info field: vector */
-		if (CC(intr_type == INTR_TYPE_NMI_INTR && vector != NMI_VECTOR) ||
-		    CC(intr_type == INTR_TYPE_HARD_EXCEPTION && vector > 31) ||
-		    CC(intr_type == INTR_TYPE_OTHER_EVENT && vector != 0))
-			return -EINVAL;
-
 		/*
 		 * Cannot deliver error code in real mode or if the interrupt
 		 * type is not hardware exception. For other cases, do the
@@ -3086,8 +3077,28 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
 		if (CC(intr_info & INTR_INFO_RESVD_BITS_MASK))
 			return -EINVAL;
 
-		/* VM-entry instruction length */
+		/*
+		 * When the CPU enumerates VMX nested-exception support, bit 13
+		 * (set to indicate a nested exception) of the intr info field
+		 * may have value 1.  Otherwise bit 13 is reserved.
+		 */
+		if (CC(!(has_nested_exception && intr_type == INTR_TYPE_HARD_EXCEPTION) &&
+		       intr_info & INTR_INFO_NESTED_EXCEPTION_MASK))
+			return -EINVAL;
+
 		switch (intr_type) {
+		case INTR_TYPE_EXT_INTR:
+			break;
+		case INTR_TYPE_RESERVED:
+			return -EINVAL;
+		case INTR_TYPE_NMI_INTR:
+			if (CC(vector != NMI_VECTOR))
+				return -EINVAL;
+			break;
+		case INTR_TYPE_HARD_EXCEPTION:
+			if (CC(vector > 31))
+				return -EINVAL;
+			break;
 		case INTR_TYPE_SOFT_EXCEPTION:
 		case INTR_TYPE_SOFT_INTR:
 		case INTR_TYPE_PRIV_SW_EXCEPTION:
@@ -3095,6 +3106,24 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
 			    CC(vmcs12->vm_entry_instruction_len == 0 &&
 			    CC(!nested_cpu_has_zero_length_injection(vcpu))))
 				return -EINVAL;
+			break;
+		case INTR_TYPE_OTHER_EVENT:
+			switch (vector) {
+			case 0:
+				if (CC(!nested_cpu_supports_monitor_trap_flag(vcpu)))
+					return -EINVAL;
+				break;
+			case 1:
+			case 2:
+				if (CC(!fred_enabled))
+					return -EINVAL;
+				if (CC(vmcs12->vm_entry_instruction_len > X86_MAX_INSTRUCTION_LENGTH))
+					return -EINVAL;
+				break;
+			default:
+				return -EINVAL;
+			}
+			break;
 		}
 	}
 
@@ -3213,9 +3242,29 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
 	if (ia32e) {
 		if (CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
 			return -EINVAL;
+		if (vmcs12->vm_exit_controls & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS &&
+		    vmcs12->secondary_vm_exit_controls & SECONDARY_VM_EXIT_LOAD_IA32_FRED) {
+			if (CC(vmcs12->host_ia32_fred_config &
+			       (BIT_ULL(11) | GENMASK_ULL(5, 4) | BIT_ULL(2))) ||
+			    CC(vmcs12->host_ia32_fred_rsp1 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->host_ia32_fred_rsp2 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->host_ia32_fred_rsp3 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->host_ia32_fred_ssp1 & GENMASK_ULL(2, 0)) ||
+			    CC(vmcs12->host_ia32_fred_ssp2 & GENMASK_ULL(2, 0)) ||
+			    CC(vmcs12->host_ia32_fred_ssp3 & GENMASK_ULL(2, 0)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_config & PAGE_MASK, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp1, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp2, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp3, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp1, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp2, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp3, vcpu)))
+				return -EINVAL;
+		}
 	} else {
 		if (CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
 		    CC(vmcs12->host_cr4 & X86_CR4_PCIDE) ||
+		    CC(vmcs12->host_cr4 & X86_CR4_FRED) ||
 		    CC((vmcs12->host_rip) >> 32))
 			return -EINVAL;
 	}
@@ -3384,6 +3433,48 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 	     CC((vmcs12->guest_bndcfgs & MSR_IA32_BNDCFGS_RSVD))))
 		return -EINVAL;
 
+	if (ia32e) {
+		if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_FRED) {
+			if (CC(vmcs12->guest_ia32_fred_config &
+			       (BIT_ULL(11) | GENMASK_ULL(5, 4) | BIT_ULL(2))) ||
+			    CC(vmcs12->guest_ia32_fred_rsp1 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->guest_ia32_fred_rsp2 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->guest_ia32_fred_rsp3 & GENMASK_ULL(5, 0)) ||
+			    CC(vmcs12->guest_ia32_fred_ssp1 & GENMASK_ULL(2, 0)) ||
+			    CC(vmcs12->guest_ia32_fred_ssp2 & GENMASK_ULL(2, 0)) ||
+			    CC(vmcs12->guest_ia32_fred_ssp3 & GENMASK_ULL(2, 0)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_config & PAGE_MASK, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp1, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp2, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp3, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp1, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp2, vcpu)) ||
+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp3, vcpu)))
+				return -EINVAL;
+		}
+		if (vmcs12->guest_cr4 & X86_CR4_FRED) {
+			unsigned int ss_dpl = VMX_AR_DPL(vmcs12->guest_ss_ar_bytes);
+			switch (ss_dpl) {
+			case 0:
+				if (CC(!(vmcs12->guest_cs_ar_bytes & VMX_AR_L_MASK)))
+					return -EINVAL;
+				break;
+			case 1:
+			case 2:
+				return -EINVAL;
+			case 3:
+				if (CC(vmcs12->guest_rflags & X86_EFLAGS_IOPL))
+					return -EINVAL;
+				if (CC(vmcs12->guest_interruptibility_info & GUEST_INTR_STATE_STI))
+					return -EINVAL;
+				break;
+			}
+		}
+	} else {
+		if (CC(vmcs12->guest_cr4 & X86_CR4_FRED))
+			return -EINVAL;
+	}
+
 	if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE) {
 		if (nested_vmx_check_cet_state_common(vcpu, vmcs12->guest_s_cet,
 						      vmcs12->guest_ssp,
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (19 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 20/22] KVM: nVMX: Validate FRED-related VMCS fields Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-12-02  6:35   ` Chao Gao
  2025-12-08 22:49   ` Sean Christopherson
  2025-10-26 20:19 ` [PATCH v9 22/22] KVM: nVMX: Enable VMX FRED controls Xin Li (Intel)
                   ` (2 subsequent siblings)
  23 siblings, 2 replies; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Add VMX feature checks to the SHADOW_FIELD_R[OW] macros to prevent access
to VMCS fields that may be unsupported on some CPUs.

Functions like copy_shadow_to_vmcs12() and copy_vmcs12_to_shadow() access
VMCS fields that may not exist on certain hardware, such as
INJECTED_EVENT_DATA.  To avoid VMREAD/VMWRITE warnings, skip syncing fields
tied to unsupported VMX features.
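
Concretely, each inclusion of vmcs_shadow_fields.h re-defines
__SHADOW_FIELD_R[OW] so that the field list expands into guarded switch
cases.  For one field, the compiler roughly sees:

	switch (field) {
	/* from __SHADOW_FIELD_RW(GUEST_PML_INDEX, guest_pml_index,
	 *                        cpu_has_vmx_pml()) */
	case GUEST_PML_INDEX:
		if (!cpu_has_vmx_pml())
			continue;	/* field doesn't exist on bare metal */
		break;
	default:
		break;
	}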

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.

Change since v2:
* Add __SHADOW_FIELD_R[OW] for better readability and maintainability (Sean).
---
 arch/x86/kvm/vmx/nested.c             | 79 +++++++++++++++++++--------
 arch/x86/kvm/vmx/vmcs_shadow_fields.h | 41 +++++++++-----
 2 files changed, 83 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 8682709d8759..37ab8250dd31 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -55,14 +55,14 @@ struct shadow_vmcs_field {
 	u16	offset;
 };
 static struct shadow_vmcs_field shadow_read_only_fields[] = {
-#define SHADOW_FIELD_RO(x, y) { x, offsetof(struct vmcs12, y) },
+#define __SHADOW_FIELD_RO(x, y, c) { x, offsetof(struct vmcs12, y) },
 #include "vmcs_shadow_fields.h"
 };
 static int max_shadow_read_only_fields =
 	ARRAY_SIZE(shadow_read_only_fields);
 
 static struct shadow_vmcs_field shadow_read_write_fields[] = {
-#define SHADOW_FIELD_RW(x, y) { x, offsetof(struct vmcs12, y) },
+#define __SHADOW_FIELD_RW(x, y, c) { x, offsetof(struct vmcs12, y) },
 #include "vmcs_shadow_fields.h"
 };
 static int max_shadow_read_write_fields =
@@ -85,6 +85,17 @@ static void init_vmcs_shadow_fields(void)
 			pr_err("Missing field from shadow_read_only_field %x\n",
 			       field + 1);
 
+		switch (field) {
+#define __SHADOW_FIELD_RO(x, y, c)		\
+		case x:				\
+			if (!(c))		\
+				continue;	\
+			break;
+#include "vmcs_shadow_fields.h"
+		default:
+			break;
+		}
+
 		clear_bit(field, vmx_vmread_bitmap);
 		if (field & 1)
 #ifdef CONFIG_X86_64
@@ -110,24 +121,13 @@ static void init_vmcs_shadow_fields(void)
 			  field <= GUEST_TR_AR_BYTES,
 			  "Update vmcs12_write_any() to drop reserved bits from AR_BYTES");
 
-		/*
-		 * PML and the preemption timer can be emulated, but the
-		 * processor cannot vmwrite to fields that don't exist
-		 * on bare metal.
-		 */
 		switch (field) {
-		case GUEST_PML_INDEX:
-			if (!cpu_has_vmx_pml())
-				continue;
-			break;
-		case VMX_PREEMPTION_TIMER_VALUE:
-			if (!cpu_has_vmx_preemption_timer())
-				continue;
-			break;
-		case GUEST_INTR_STATUS:
-			if (!cpu_has_vmx_apicv())
-				continue;
+#define __SHADOW_FIELD_RW(x, y, c)		\
+		case x:				\
+			if (!(c))		\
+				continue;	\
 			break;
+#include "vmcs_shadow_fields.h"
 		default:
 			break;
 		}
@@ -1636,8 +1636,8 @@ int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata)
 /*
  * Copy the writable VMCS shadow fields back to the VMCS12, in case they have
  * been modified by the L1 guest.  Note, "writable" in this context means
- * "writable by the guest", i.e. tagged SHADOW_FIELD_RW; the set of
- * fields tagged SHADOW_FIELD_RO may or may not align with the "read-only"
+ * "writable by the guest", i.e. tagged __SHADOW_FIELD_RW; the set of
+ * fields tagged __SHADOW_FIELD_RO may or may not align with the "read-only"
  * VM-exit information fields (which are actually writable if the vCPU is
  * configured to support "VMWRITE to any supported field in the VMCS").
  */
@@ -1658,6 +1658,18 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
 
 	for (i = 0; i < max_shadow_read_write_fields; i++) {
 		field = shadow_read_write_fields[i];
+
+		switch (field.encoding) {
+#define __SHADOW_FIELD_RW(x, y, c)		\
+		case x:				\
+			if (!(c))		\
+				continue;	\
+			break;
+#include "vmcs_shadow_fields.h"
+		default:
+			break;
+		}
+
 		val = __vmcs_readl(field.encoding);
 		vmcs12_write_any(vmcs12, field.encoding, field.offset, val);
 	}
@@ -1692,6 +1704,23 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
 	for (q = 0; q < ARRAY_SIZE(fields); q++) {
 		for (i = 0; i < max_fields[q]; i++) {
 			field = fields[q][i];
+
+			switch (field.encoding) {
+#define __SHADOW_FIELD_RO(x, y, c)			\
+			case x:				\
+				if (!(c))		\
+					continue;	\
+				break;
+#define __SHADOW_FIELD_RW(x, y, c)			\
+			case x:				\
+				if (!(c))		\
+					continue;	\
+				break;
+#include "vmcs_shadow_fields.h"
+			default:
+				break;
+			}
+
 			val = vmcs12_read_any(vmcs12, field.encoding,
 					      field.offset);
 			__vmcs_writel(field.encoding, val);
@@ -5951,9 +5980,10 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 static bool is_shadow_field_rw(unsigned long field)
 {
 	switch (field) {
-#define SHADOW_FIELD_RW(x, y) case x:
+#define __SHADOW_FIELD_RW(x, y, c)	\
+	case x:				\
+		return c;
 #include "vmcs_shadow_fields.h"
-		return true;
 	default:
 		break;
 	}
@@ -5963,9 +5993,10 @@ static bool is_shadow_field_rw(unsigned long field)
 static bool is_shadow_field_ro(unsigned long field)
 {
 	switch (field) {
-#define SHADOW_FIELD_RO(x, y) case x:
+#define __SHADOW_FIELD_RO(x, y, c)	\
+	case x:				\
+		return c;
 #include "vmcs_shadow_fields.h"
-		return true;
 	default:
 		break;
 	}
diff --git a/arch/x86/kvm/vmx/vmcs_shadow_fields.h b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
index da338327c2b3..607945ada35f 100644
--- a/arch/x86/kvm/vmx/vmcs_shadow_fields.h
+++ b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
@@ -1,14 +1,17 @@
-#if !defined(SHADOW_FIELD_RO) && !defined(SHADOW_FIELD_RW)
+#if !defined(__SHADOW_FIELD_RO) && !defined(__SHADOW_FIELD_RW)
 BUILD_BUG_ON(1)
 #endif
 
-#ifndef SHADOW_FIELD_RO
-#define SHADOW_FIELD_RO(x, y)
+#ifndef __SHADOW_FIELD_RO
+#define __SHADOW_FIELD_RO(x, y, c)
 #endif
-#ifndef SHADOW_FIELD_RW
-#define SHADOW_FIELD_RW(x, y)
+#ifndef __SHADOW_FIELD_RW
+#define __SHADOW_FIELD_RW(x, y, c)
 #endif
 
+#define SHADOW_FIELD_RO(x, y) __SHADOW_FIELD_RO(x, y, true)
+#define SHADOW_FIELD_RW(x, y) __SHADOW_FIELD_RW(x, y, true)
+
 /*
  * We do NOT shadow fields that are modified when L0
  * traps and emulates any vmx instruction (e.g. VMPTRLD,
@@ -32,8 +35,12 @@ BUILD_BUG_ON(1)
  */
 
 /* 16-bits */
-SHADOW_FIELD_RW(GUEST_INTR_STATUS, guest_intr_status)
-SHADOW_FIELD_RW(GUEST_PML_INDEX, guest_pml_index)
+__SHADOW_FIELD_RW(GUEST_INTR_STATUS, guest_intr_status, cpu_has_vmx_apicv())
+/*
+ * PML can be emulated, but the processor cannot vmwrite to the VMCS field
+ * GUEST_PML_INDEX that doesn't exist on bare metal.
+ */
+__SHADOW_FIELD_RW(GUEST_PML_INDEX, guest_pml_index, cpu_has_vmx_pml())
 SHADOW_FIELD_RW(HOST_FS_SELECTOR, host_fs_selector)
 SHADOW_FIELD_RW(HOST_GS_SELECTOR, host_gs_selector)
 
@@ -41,9 +48,9 @@ SHADOW_FIELD_RW(HOST_GS_SELECTOR, host_gs_selector)
 SHADOW_FIELD_RO(VM_EXIT_REASON, vm_exit_reason)
 SHADOW_FIELD_RO(VM_EXIT_INTR_INFO, vm_exit_intr_info)
 SHADOW_FIELD_RO(VM_EXIT_INSTRUCTION_LEN, vm_exit_instruction_len)
+SHADOW_FIELD_RO(VM_EXIT_INTR_ERROR_CODE, vm_exit_intr_error_code)
 SHADOW_FIELD_RO(IDT_VECTORING_INFO_FIELD, idt_vectoring_info_field)
 SHADOW_FIELD_RO(IDT_VECTORING_ERROR_CODE, idt_vectoring_error_code)
-SHADOW_FIELD_RO(VM_EXIT_INTR_ERROR_CODE, vm_exit_intr_error_code)
 SHADOW_FIELD_RO(GUEST_CS_AR_BYTES, guest_cs_ar_bytes)
 SHADOW_FIELD_RO(GUEST_SS_AR_BYTES, guest_ss_ar_bytes)
 SHADOW_FIELD_RW(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control)
@@ -54,7 +61,12 @@ SHADOW_FIELD_RW(VM_ENTRY_INTR_INFO_FIELD, vm_entry_intr_info_field)
 SHADOW_FIELD_RW(VM_ENTRY_INSTRUCTION_LEN, vm_entry_instruction_len)
 SHADOW_FIELD_RW(TPR_THRESHOLD, tpr_threshold)
 SHADOW_FIELD_RW(GUEST_INTERRUPTIBILITY_INFO, guest_interruptibility_info)
-SHADOW_FIELD_RW(VMX_PREEMPTION_TIMER_VALUE, vmx_preemption_timer_value)
+/*
+ * The preemption timer can be emulated, but the processor cannot vmwrite to
+ * the VMCS field VMX_PREEMPTION_TIMER_VALUE that doesn't exist on bare metal.
+ */
+__SHADOW_FIELD_RW(VMX_PREEMPTION_TIMER_VALUE, vmx_preemption_timer_value,
+		  cpu_has_vmx_preemption_timer())
 
 /* Natural width */
 SHADOW_FIELD_RO(EXIT_QUALIFICATION, exit_qualification)
@@ -74,10 +86,13 @@ SHADOW_FIELD_RW(HOST_GS_BASE, host_gs_base)
 /* 64-bit */
 SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS, guest_physical_address)
 SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS_HIGH, guest_physical_address)
-SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA, original_event_data)
-SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA_HIGH, original_event_data)
-SHADOW_FIELD_RW(INJECTED_EVENT_DATA, injected_event_data)
-SHADOW_FIELD_RW(INJECTED_EVENT_DATA_HIGH, injected_event_data)
+__SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA, original_event_data, cpu_has_vmx_fred())
+__SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA_HIGH, original_event_data, cpu_has_vmx_fred())
+__SHADOW_FIELD_RW(INJECTED_EVENT_DATA, injected_event_data, cpu_has_vmx_fred())
+__SHADOW_FIELD_RW(INJECTED_EVENT_DATA_HIGH, injected_event_data, cpu_has_vmx_fred())
 
 #undef SHADOW_FIELD_RO
 #undef SHADOW_FIELD_RW
+
+#undef __SHADOW_FIELD_RO
+#undef __SHADOW_FIELD_RW
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v9 22/22] KVM: nVMX: Enable VMX FRED controls
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (20 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks Xin Li (Intel)
@ 2025-10-26 20:19 ` Xin Li (Intel)
  2025-11-13  3:20   ` Chao Gao
  2025-11-06 17:35 ` [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li
  2025-12-08 22:51 ` Sean Christopherson
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li (Intel) @ 2025-10-26 20:19 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	xin, luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

From: Xin Li <xin3.li@intel.com>

Permit use of VMX FRED controls in nested VMX now that support for nested
FRED is implemented.

Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---

Change in v5:
* Add TB from Xuelian Guo.
---
 arch/x86/kvm/vmx/nested.c | 5 +++--
 arch/x86/kvm/vmx/vmx.c    | 1 +
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 37ab8250dd31..655257b34d15 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -7397,7 +7397,8 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
 		 * advertise any feature in it to nVMX until its nVMX support
 		 * is ready.
 		 */
-		msrs->secondary_exit_ctls &= 0;
+		msrs->secondary_exit_ctls &= SECONDARY_VM_EXIT_SAVE_IA32_FRED |
+					     SECONDARY_VM_EXIT_LOAD_IA32_FRED;
 	}
 }
 
@@ -7413,7 +7414,7 @@ static void nested_vmx_setup_entry_ctls(struct vmcs_config *vmcs_conf,
 		VM_ENTRY_IA32E_MODE |
 #endif
 		VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS |
-		VM_ENTRY_LOAD_CET_STATE;
+		VM_ENTRY_LOAD_CET_STATE | VM_ENTRY_LOAD_IA32_FRED;
 	msrs->entry_ctls_high |=
 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER |
 		 VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 04442f869abb..8f3805a71a97 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7994,6 +7994,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 
 	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
 	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
+	cr4_fixed1_update(X86_CR4_FRED,       eax, feature_bit(FRED));
 
 #undef cr4_fixed1_update
 }
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 05/22] x86/cea: Use array indexing to simplify exception stack access
  2025-10-26 20:18 ` [PATCH v9 05/22] x86/cea: Use array indexing to simplify exception stack access Xin Li (Intel)
@ 2025-10-27 15:49   ` Dave Hansen
  2025-10-28  2:31     ` Xin Li
  0 siblings, 1 reply; 50+ messages in thread
From: Dave Hansen @ 2025-10-27 15:49 UTC (permalink / raw)
  To: Xin Li (Intel), linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

On 10/26/25 13:18, Xin Li (Intel) wrote:
> Refactor struct cea_exception_stacks to leverage array indexing for
> exception stack access, improving code clarity and eliminating the
> need for the ESTACKS_MEMBERS() macro.
> 
> Convert __this_cpu_ist_{bottom,top}_va() from macros to functions,
> allowing removal of the now-obsolete CEA_ESTACK_BOT and CEA_ESTACK_TOP
> macros.
> 
> Also drop CEA_ESTACK_SIZE, which just duplicated EXCEPTION_STKSZ.
> 
> Signed-off-by: Xin Li (Intel) <xin@zytor.com>
> ---
> 
> Change in v9:
> * Refactor first and then export in a separate patch (Dave Hansen).

Thanks for the changes. This also removes the extra union{} that was in
the last version for padding.

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 06/22] x86/cea: Export __this_cpu_ist_top_va() to KVM
  2025-10-26 20:18 ` [PATCH v9 06/22] x86/cea: Export __this_cpu_ist_top_va() to KVM Xin Li (Intel)
@ 2025-10-27 15:50   ` Dave Hansen
  0 siblings, 0 replies; 50+ messages in thread
From: Dave Hansen @ 2025-10-27 15:50 UTC (permalink / raw)
  To: Xin Li (Intel), linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta

On 10/26/25 13:18, Xin Li (Intel) wrote:
> Export __this_cpu_ist_top_va() to allow KVM to retrieve the per-CPU
> exception stack top.
> 
> FRED introduced new fields in the host-state area of the VMCS for stack
> levels 1->3 (HOST_IA32_FRED_RSP[123]), each respectively corresponding to
> per-CPU exception stacks for #DB, NMI and #DF.  KVM must populate these
> fields each time a vCPU is loaded onto a CPU.

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 05/22] x86/cea: Use array indexing to simplify exception stack access
  2025-10-27 15:49   ` Dave Hansen
@ 2025-10-28  2:31     ` Xin Li
  0 siblings, 0 replies; 50+ messages in thread
From: Xin Li @ 2025-10-28  2:31 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	chao.gao, hch, sohil.mehta



> On Oct 27, 2025, at 8:49 AM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 10/26/25 13:18, Xin Li (Intel) wrote:
>> Refactor struct cea_exception_stacks to leverage array indexing for
>> exception stack access, improving code clarity and eliminating the
>> need for the ESTACKS_MEMBERS() macro.
>> 
>> Convert __this_cpu_ist_{bottom,top}_va() from macros to functions,
>> allowing removal of the now-obsolete CEA_ESTACK_BOT and CEA_ESTACK_TOP
>> macros.
>> 
>> Also drop CEA_ESTACK_SIZE, which just duplicated EXCEPTION_STKSZ.
>> 
>> Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>> ---
>> 
>> Change in v9:
>> * Refactor first and then export in a separate patch (Dave Hansen).
> 
> Thanks for the changes. This also removes the extra union{} that was in
> the last version for padding.

I would say you foresaw it, because you suggested using array indexing:

https://lore.kernel.org/lkml/720bc7ac-7e81-4ad9-8cc5-29ac540be283@intel.com/

Thanks a lot for making it much cleaner.
Xin

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 00/22] Enable FRED with KVM VMX
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (21 preceding siblings ...)
  2025-10-26 20:19 ` [PATCH v9 22/22] KVM: nVMX: Enable VMX FRED controls Xin Li (Intel)
@ 2025-11-06 17:35 ` Xin Li
  2025-11-13 22:20   ` Sean Christopherson
  2025-12-08 22:51 ` Sean Christopherson
  23 siblings, 1 reply; 50+ messages in thread
From: Xin Li @ 2025-11-06 17:35 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-doc
  Cc: pbonzini, seanjc, corbet, tglx, mingo, bp, dave.hansen, x86, hpa,
	luto, peterz, andrew.cooper3, chao.gao, hch, sohil.mehta


> On Oct 26, 2025, at 1:18 PM, Xin Li (Intel) <xin@zytor.com> wrote:
> 
> This patch set enables the Intel flexible return and event delivery
> (FRED) architecture with KVM VMX to allow guests to utilize FRED.
> 
> The FRED architecture defines simple new transitions that change
> privilege level (ring transitions). The FRED architecture was
> designed with the following goals:
> 
> 1) Improve overall performance and response time by replacing event
>   delivery through the interrupt descriptor table (IDT event
>   delivery) and event return by the IRET instruction with lower
>   latency transitions.
> 
> 2) Improve software robustness by ensuring that event delivery
>   establishes the full supervisor context and that event return
>   establishes the full user context.
> 
> The new transitions defined by the FRED architecture are FRED event
> delivery and, for returning from events, two FRED return instructions.
> FRED event delivery can effect a transition from ring 3 to ring 0, but
> it is used also to deliver events incident to ring 0. One FRED
> instruction (ERETU) effects a return from ring 0 to ring 3, while the
> other (ERETS) returns while remaining in ring 0. Collectively, FRED
> event delivery and the FRED return instructions are FRED transitions.
> 
> 
> Intel VMX architecture is extended to run FRED guests, and the major
> changes are:
> 
> 1) New VMCS fields for FRED context management, which includes two new
> event data VMCS fields, eight new guest FRED context VMCS fields and
> eight new host FRED context VMCS fields.
> 
> 2) VMX nested-exception support for proper virtualization of stack
> levels introduced with FRED architecture.
> 
> Search for the latest FRED spec in most search engines with this search
> pattern:
> 
>  site:intel.com FRED (flexible return and event delivery) specification
> 
> 
> Although FRED and CET supervisor shadow stacks are independent CPU
> features, FRED unconditionally includes FRED shadow stack pointer
> MSRs IA32_FRED_SSP[0123], and IA32_FRED_SSP0 is just an alias of the
> CET MSR IA32_PL0_SSP.  IOW, the state management of MSR IA32_PL0_SSP
> becomes an overlap area, and Sean requested that FRED virtualization
> to land after CET virtualization [1].
> 
> With CET virtualization now merged in v6.18, the path is clear to submit
> the FRED virtualization patch series :).

Sean, what is the plan for the FRED patch series?

The good news is that we have acks on all 3 common x86 patches.

Thanks!
Xin

> 
> Changes in v9:
> * Rebased to the latest kvm-x86/next branch, tag kvm-x86-next-2025.10.20-2.
> * Guard FRED state save/restore with guest_cpu_cap_has(vcpu, X86_FEATURE_FRED)
>  in patch 19 (syzbot & Chao).
> * Use array indexing for exception stack access, eliminating the need for
>  the ESTACKS_MEMBERS() macro in struct cea_exception_stacks, and then
>  exported __this_cpu_ist_top_va() in a subsequent patch (Dave Hansen).
> * Rewrote some of the change logs.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 08/22] KVM: VMX: Set FRED MSR intercepts
  2025-10-26 20:18 ` [PATCH v9 08/22] KVM: VMX: Set FRED MSR intercepts Xin Li (Intel)
@ 2025-11-12  5:49   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-12  5:49 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:18:56PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>On a userspace MSR filter change, set FRED MSR intercepts.
>
>The eight FRED MSRs, MSR_IA32_FRED_RSP[123], MSR_IA32_FRED_STKLVLS,
>MSR_IA32_FRED_SSP[123] and MSR_IA32_FRED_CONFIG, are all safe to
>passthrough, because each has a corresponding host and guest field
>in VMCS.

Sean prefers to pass through MSRs only when there is a reason to do so,
rather than just because it is free.  My thinking is that RSPs and SSPs are
per-task and context-switched frequently, so we need to pass them through.
But I am not sure there is a similar reason for STKLVLS and CONFIG.

[*] https://lore.kernel.org/all/aKTGVvOb8PZ7mzVr@google.com/

>
>Both MSR_IA32_FRED_RSP0 and MSR_IA32_FRED_SSP0 (aka MSR_IA32_PL0_SSP)
>are dedicated for userspace event delivery, IOW they are NOT used in
>any kernel event delivery and the execution of ERETS.  Thus KVM can
>run safely with guest values in the two MSRs.  As a result, save and
>restore of their guest values are deferred until vCPU context switch,
>Host MSR_IA32_FRED_RSP0 is restored upon returning to userspace, and
>Host MSR_IA32_PL0_SSP is managed with XRSTORS/XSAVES.
>
>Note, FRED SSP MSRs, including MSR_IA32_PL0_SSP, are available on
>any processor that enumerates FRED.  On processors that support FRED
>but not CET, FRED transitions do not use these MSRs, but they remain
>accessible via MSR instructions such as RDMSR and WRMSR.
>
>Intercept MSR_IA32_PL0_SSP when CET shadow stack is not supported,
>regardless of FRED support.  This ensures the guest value remains
>fully virtual and does not modify the hardware FRED SSP0 MSR.
>
>This behavior is consistent with the current setup in
>vmx_recalc_msr_intercepts(), so no change is needed to the interception
>logic for MSR_IA32_PL0_SSP.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>
>---
>
>Changes in v7:
>* Rewrite the changelog and comment, majorly for MSR_IA32_PL0_SSP.
>
>Changes in v5:
>* Skip execution of vmx_set_intercept_for_fred_msr() if FRED is
>  not available or enabled (Sean).
>* Use 'intercept' as the variable name to indicate whether MSR
>  interception should be enabled (Sean).
>* Add TB from Xuelian Guo.
>---
> arch/x86/kvm/vmx/vmx.c | 47 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 47 insertions(+)
>
>diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>index c8b5359123bf..ef9765779884 100644
>--- a/arch/x86/kvm/vmx/vmx.c
>+++ b/arch/x86/kvm/vmx/vmx.c
>@@ -4146,6 +4146,51 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
> 	}
> }
> 
>+static void vmx_set_intercept_for_fred_msr(struct kvm_vcpu *vcpu)
>+{
>+	bool intercept = !guest_cpu_cap_has(vcpu, X86_FEATURE_FRED);
>+
>+	if (!kvm_cpu_cap_has(X86_FEATURE_FRED))
>+		return;
>+
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP1, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP2, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP3, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_STKLVLS, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP1, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP2, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP3, MSR_TYPE_RW, intercept);
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_CONFIG, MSR_TYPE_RW, intercept);
>+
>+	/*
>+	 * MSR_IA32_FRED_RSP0 and MSR_IA32_PL0_SSP (aka MSR_IA32_FRED_SSP0) are
>+	 * designed for event delivery while executing in userspace.  Since KVM
>+	 * operates entirely in kernel mode (CPL is always 0 after any VM exit),
>+	 * it can safely retain and operate with guest-defined values for these
>+	 * MSRs.
>+	 *
>+	 * As a result, interception of MSR_IA32_FRED_RSP0 and MSR_IA32_PL0_SSP
>+	 * is unnecessary.

I think it would be slightly better to document why these MSRs need to be
passed through, rather than just why passing them through is safe.

>+	 *
>+	 * Note: Saving and restoring MSR_IA32_PL0_SSP is part of CET supervisor
>+	 * context management.  However, FRED SSP MSRs, including MSR_IA32_PL0_SSP,
>+	 * are available on any processor that enumerates FRED.
>+	 *
>+	 * On processors that support FRED but not CET, FRED transitions do not
>+	 * use these MSRs, but they remain accessible via MSR instructions such
>+	 * as RDMSR and WRMSR.
>+	 *
>+	 * Intercept MSR_IA32_PL0_SSP when CET shadow stack is not supported,
>+	 * regardless of FRED support.  This ensures the guest value remains
>+	 * fully virtual and does not modify the hardware FRED SSP0 MSR.

Modifying the hardware MSR itself isn't a problem. The problem is that the
MSR isn't supposed to be accessed frequently in the guest if CET isn't
supported and will never be accessed via XSAVES. So, there is no good reason
to pass through it. And passing through the MSR means KVM needs to
context-switch it along with vcpu load/put, i.e., more code and complexity.
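
For illustration, passing it through would mean adding something like the
following on every vCPU load (a hypothetical sketch; the msr_*_fred_ssp0
field names are made up):

	static void fred_ssp0_switch_to_guest(struct vcpu_vmx *vmx)
	{
		/* Stash the host value and install the guest's. */
		rdmsrl(MSR_IA32_PL0_SSP, vmx->msr_host_fred_ssp0);
		wrmsrl(MSR_IA32_PL0_SSP, vmx->msr_guest_fred_ssp0);
	}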

>+	 *
>+	 * This behavior is consistent with the current setup in
>+	 * vmx_recalc_msr_intercepts(), so no change is needed to the interception
>+	 * logic for MSR_IA32_PL0_SSP.
>+	 */
>+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP0, MSR_TYPE_RW, intercept);
>+}
>+

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v9 09/22] KVM: VMX: Save/restore guest FRED RSP0
  2025-10-26 20:18 ` [PATCH v9 09/22] KVM: VMX: Save/restore guest FRED RSP0 Xin Li (Intel)
@ 2025-11-12  5:59   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-12  5:59 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:18:57PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Save guest FRED RSP0 in vmx_prepare_switch_to_host() and restore it
>in vmx_prepare_switch_to_guest() because MSR_IA32_FRED_RSP0 is passed
>through to the guest and is thus volatile/unknown.
>
>Note, host FRED RSP0 is restored in arch_exit_to_user_mode_prepare(),
>regardless of whether it is modified in KVM.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 10/22] KVM: VMX: Add support for saving and restoring FRED MSRs
  2025-10-26 20:18 ` [PATCH v9 10/22] KVM: VMX: Add support for saving and restoring FRED MSRs Xin Li (Intel)
@ 2025-11-12  6:16   ` Chao Gao
  2025-12-01  6:20     ` Xin Li
  0 siblings, 1 reply; 50+ messages in thread
From: Chao Gao @ 2025-11-12  6:16 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

>@@ -4316,6 +4374,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> #endif
> 	case MSR_IA32_U_CET:
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
>+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
>+			WARN_ON_ONCE(msr != MSR_IA32_FRED_SSP0);

This will be triggered if the guest only supports IBT and tries to write U_CET here.

>+			vcpu->arch.fred_ssp0_fallback = data;
>+			break;
>+		}
>+
> 		kvm_set_xstate_msr(vcpu, msr_info);
> 		break;
> 	default:
>@@ -4669,6 +4733,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> #endif
> 	case MSR_IA32_U_CET:
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
>+		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
>+			WARN_ON_ONCE(msr_info->index != MSR_IA32_FRED_SSP0);

ditto.
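
One way to avoid both WARNs (untested sketch, shown for the set path; the
idea is to special-case only the FRED SSP0 alias so that U_CET accesses
from IBT-only guests keep taking the normal XSTATE path):

	case MSR_IA32_U_CET:
	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
		if (msr == MSR_IA32_FRED_SSP0 &&
		    !guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
			vcpu->arch.fred_ssp0_fallback = data;
			break;
		}

		kvm_set_xstate_msr(vcpu, msr_info);
		break;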

With this fixed,

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 11/22] KVM: x86: Add a helper to detect if FRED is enabled for a vCPU
  2025-10-26 20:18 ` [PATCH v9 11/22] KVM: x86: Add a helper to detect if FRED is enabled for a vCPU Xin Li (Intel)
@ 2025-11-12  6:19   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-12  6:19 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:18:59PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>[ Sean: removed the "kvm_" prefix from the function name ]
>Signed-off-by: Sean Christopherson <seanjc@google.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 17/22] KVM: x86: Advertise support for FRED
  2025-10-26 20:19 ` [PATCH v9 17/22] KVM: x86: Advertise support for FRED Xin Li (Intel)
@ 2025-11-12  7:30   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-12  7:30 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:05PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Advertise support for FRED to userspace after changes required to enable
>FRED in a KVM guest are in place.

I'm not sure if AMD CPUs support FRED, but just in case, we can clear FRED
i.e., kvm_cpu_cap_clear(X86_FEATURE_FRED) in svm_set_cpu_caps().
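
I.e., something like (exact placement within svm_set_cpu_caps() assumed):

	/* KVM doesn't yet support virtualizing FRED on SVM. */
	kvm_cpu_cap_clear(X86_FEATURE_FRED);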

With this fixed:

Reviewed-by: Chao Gao <chao.gao@intel.com>

>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>
>---
>
>Change in v5:
>* Don't advertise FRED/LKGS together, LKGS can be advertised as an
>  independent feature (Sean).
>* Add TB from Xuelian Guo.
>---
> arch/x86/kvm/cpuid.c | 1 +
> 1 file changed, 1 insertion(+)
>
>diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>index d563a948318b..0bf97b8a3216 100644
>--- a/arch/x86/kvm/cpuid.c
>+++ b/arch/x86/kvm/cpuid.c
>@@ -1014,6 +1014,7 @@ void kvm_set_cpu_caps(void)
> 		F(FSRS),
> 		F(FSRC),
> 		F(WRMSRNS),
>+		X86_64_F(FRED),
> 		X86_64_F(LKGS),
> 		F(AMX_FP16),
> 		F(AVX_IFMA),
>-- 
>2.51.0
>

* Re: [PATCH v9 18/22] KVM: nVMX: Enable support for secondary VM exit controls
  2025-10-26 20:19 ` [PATCH v9 18/22] KVM: nVMX: Enable support for secondary VM exit controls Xin Li (Intel)
@ 2025-11-12 13:42   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-12 13:42 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:06PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Add support for secondary VM exit controls in nested VMX to facilitate
>future FRED integration.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>
>---
>
>Changes in v8:
>* Relocate secondary_vm_exit_controls to the last u64 padding field.
>* Remove the change to Documentation/virt/kvm/x86/nested-vmx.rst.
>
>Changes in v5:
>* Allow writing MSR_IA32_VMX_EXIT_CTLS2 (Sean).
>* Add TB from Xuelian Guo.
>
>Change in v3:
>* Read secondary VM exit controls from vmcs_conf instead of the hardware
>  MSR MSR_IA32_VMX_EXIT_CTLS2 to avoid advertising features to L1 that KVM
>  itself doesn't support, e.g. because the expected entry+exit pairs aren't
>  supported. (Sean Christopherson)
>---
> arch/x86/kvm/vmx/capabilities.h |  1 +
> arch/x86/kvm/vmx/nested.c       | 26 +++++++++++++++++++++++++-
> arch/x86/kvm/vmx/vmcs12.c       |  1 +
> arch/x86/kvm/vmx/vmcs12.h       |  3 ++-
> arch/x86/kvm/x86.h              |  2 +-
> 5 files changed, 30 insertions(+), 3 deletions(-)
>
>diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
>index 651507627ef3..f390f9f883c3 100644
>--- a/arch/x86/kvm/vmx/capabilities.h
>+++ b/arch/x86/kvm/vmx/capabilities.h
>@@ -34,6 +34,7 @@ struct nested_vmx_msrs {
> 	u32 pinbased_ctls_high;
> 	u32 exit_ctls_low;
> 	u32 exit_ctls_high;
>+	u64 secondary_exit_ctls;
> 	u32 entry_ctls_low;
> 	u32 entry_ctls_high;
> 	u32 misc_low;
>diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>index b0cd745518b4..cbb682424a5b 100644
>--- a/arch/x86/kvm/vmx/nested.c
>+++ b/arch/x86/kvm/vmx/nested.c
>@@ -1534,6 +1534,11 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
> 			return -EINVAL;
> 		vmx->nested.msrs.vmfunc_controls = data;
> 		return 0;
>+	case MSR_IA32_VMX_EXIT_CTLS2:
>+		if (data & ~vmcs_config.nested.secondary_exit_ctls)
>+			return -EINVAL;
>+		vmx->nested.msrs.secondary_exit_ctls = data;
>+		return 0;
> 	default:
> 		/*
> 		 * The rest of the VMX capability MSRs do not support restore.
>@@ -1573,6 +1578,9 @@ int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata)
> 		if (msr_index == MSR_IA32_VMX_EXIT_CTLS)
> 			*pdata |= VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR;
> 		break;
>+	case MSR_IA32_VMX_EXIT_CTLS2:
>+		*pdata = msrs->secondary_exit_ctls;

MSR_IA32_VMX_EXIT_CTLS2 should be added to emulated_msrs_all[] for live migration.
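
E.g. (sketch; the exact position in the array is illustrative):

@@ static const u32 emulated_msrs_all[] = {
 	MSR_IA32_VMX_EXIT_CTLS,
+	MSR_IA32_VMX_EXIT_CTLS2,
 	MSR_IA32_VMX_ENTRY_CTLS,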

* Re: [PATCH v9 20/22] KVM: nVMX: Validate FRED-related VMCS fields
  2025-10-26 20:19 ` [PATCH v9 20/22] KVM: nVMX: Validate FRED-related VMCS fields Xin Li (Intel)
@ 2025-11-13  3:00   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-13  3:00 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:08PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Extend nested VMX field validation to include FRED-specific VMCS fields,
>mirroring hardware behavior.
>
>This enables support for nested FRED by ensuring control and guest/host
>state fields are properly checked.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

There are some minor issues below that may need to be fixed.

>---
>
>Change in v5:
>* Add TB from Xuelian Guo.
>---
> arch/x86/kvm/vmx/nested.c | 117 +++++++++++++++++++++++++++++++++-----
> 1 file changed, 104 insertions(+), 13 deletions(-)
>
>diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>index 63cdfffba58b..8682709d8759 100644
>--- a/arch/x86/kvm/vmx/nested.c
>+++ b/arch/x86/kvm/vmx/nested.c
>@@ -3030,6 +3030,8 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
> 					  struct vmcs12 *vmcs12)
> {
> 	struct vcpu_vmx *vmx = to_vmx(vcpu);
>+	bool fred_enabled = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) &&
>+			    (vmcs12->guest_cr4 & X86_CR4_FRED);
> 
> 	if (CC(!vmx_control_verify(vmcs12->vm_entry_controls,
> 				    vmx->nested.msrs.entry_ctls_low,
>@@ -3047,22 +3049,11 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
> 		u8 vector = intr_info & INTR_INFO_VECTOR_MASK;
> 		u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK;
> 		bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK;
>+		bool has_nested_exception = vmx->nested.msrs.basic & VMX_BASIC_NESTED_EXCEPTION;

has_error_code reflects whether the to-be-injected event has an error code.
Using has_nested_exception for CPU capabilities here is a bit confusing.
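
E.g., naming it after what it actually tests:

		bool cpu_has_nested_exception = vmx->nested.msrs.basic &
						VMX_BASIC_NESTED_EXCEPTION;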

> 		bool urg = nested_cpu_has2(vmcs12,
> 					   SECONDARY_EXEC_UNRESTRICTED_GUEST);
> 		bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE;
> 
>-		/* VM-entry interruption-info field: interruption type */
>-		if (CC(intr_type == INTR_TYPE_RESERVED) ||
>-		    CC(intr_type == INTR_TYPE_OTHER_EVENT &&
>-		       !nested_cpu_supports_monitor_trap_flag(vcpu)))
>-			return -EINVAL;
>-
>-		/* VM-entry interruption-info field: vector */
>-		if (CC(intr_type == INTR_TYPE_NMI_INTR && vector != NMI_VECTOR) ||
>-		    CC(intr_type == INTR_TYPE_HARD_EXCEPTION && vector > 31) ||
>-		    CC(intr_type == INTR_TYPE_OTHER_EVENT && vector != 0))
>-			return -EINVAL;
>-
> 		/*
> 		 * Cannot deliver error code in real mode or if the interrupt
> 		 * type is not hardware exception. For other cases, do the
>@@ -3086,8 +3077,28 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
> 		if (CC(intr_info & INTR_INFO_RESVD_BITS_MASK))
> 			return -EINVAL;
> 
>-		/* VM-entry instruction length */
>+		/*
>+		 * When the CPU enumerates VMX nested-exception support, bit 13
>+		 * (set to indicate a nested exception) of the intr info field
>+		 * may have value 1.  Otherwise bit 13 is reserved.
>+		 */
>+		if (CC(!(has_nested_exception && intr_type == INTR_TYPE_HARD_EXCEPTION) &&
>+		       intr_info & INTR_INFO_NESTED_EXCEPTION_MASK))
>+			return -EINVAL;
>+
> 		switch (intr_type) {
>+		case INTR_TYPE_EXT_INTR:
>+			break;

This can be dropped, as the "default" case will handle it.

>+		case INTR_TYPE_RESERVED:
>+			return -EINVAL;

I think we need to add a CC() statement to make it easier to correlate a
VM-entry failure with a specific consistency check.
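
E.g., keeping the condition inside CC() (tautological, but the
consistency-check tracepoint then names the failed check):

		case INTR_TYPE_RESERVED:
			if (CC(intr_type == INTR_TYPE_RESERVED))
				return -EINVAL;
			break;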

>+		case INTR_TYPE_NMI_INTR:
>+			if (CC(vector != NMI_VECTOR))
>+				return -EINVAL;
>+			break;
>+		case INTR_TYPE_HARD_EXCEPTION:
>+			if (CC(vector > 31))
>+				return -EINVAL;
>+			break;
> 		case INTR_TYPE_SOFT_EXCEPTION:
> 		case INTR_TYPE_SOFT_INTR:
> 		case INTR_TYPE_PRIV_SW_EXCEPTION:
>@@ -3095,6 +3106,24 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
> 			    CC(vmcs12->vm_entry_instruction_len == 0 &&
> 			    CC(!nested_cpu_has_zero_length_injection(vcpu))))
> 				return -EINVAL;
>+			break;
>+		case INTR_TYPE_OTHER_EVENT:
>+			switch (vector) {
>+			case 0:
>+				if (CC(!nested_cpu_supports_monitor_trap_flag(vcpu)))
>+					return -EINVAL;

Does this nested_cpu_supports_monitor_trap_flag() check apply to case 1/2?

>+				break;
>+			case 1:
>+			case 2:
>+				if (CC(!fred_enabled))
>+					return -EINVAL;
>+				if (CC(vmcs12->vm_entry_instruction_len > X86_MAX_INSTRUCTION_LENGTH))
>+					return -EINVAL;
>+				break;
>+			default:
>+				return -EINVAL;

Again, I think -EINVAL should be accompanied by a CC() statement.

>+			}
>+			break;
> 		}
> 	}
> 
>@@ -3213,9 +3242,29 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
> 	if (ia32e) {
> 		if (CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
> 			return -EINVAL;
>+		if (vmcs12->vm_exit_controls & VM_EXIT_ACTIVATE_SECONDARY_CONTROLS &&
>+		    vmcs12->secondary_vm_exit_controls & SECONDARY_VM_EXIT_LOAD_IA32_FRED) {
>+			if (CC(vmcs12->host_ia32_fred_config &
>+			       (BIT_ULL(11) | GENMASK_ULL(5, 4) | BIT_ULL(2))) ||
>+			    CC(vmcs12->host_ia32_fred_rsp1 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->host_ia32_fred_rsp2 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->host_ia32_fred_rsp3 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->host_ia32_fred_ssp1 & GENMASK_ULL(2, 0)) ||
>+			    CC(vmcs12->host_ia32_fred_ssp2 & GENMASK_ULL(2, 0)) ||
>+			    CC(vmcs12->host_ia32_fred_ssp3 & GENMASK_ULL(2, 0)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_config & PAGE_MASK, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp1, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp2, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_rsp3, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp1, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp2, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->host_ia32_fred_ssp3, vcpu)))
>+				return -EINVAL;
>+		}
> 	} else {
> 		if (CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
> 		    CC(vmcs12->host_cr4 & X86_CR4_PCIDE) ||
>+		    CC(vmcs12->host_cr4 & X86_CR4_FRED) ||
> 		    CC((vmcs12->host_rip) >> 32))
> 			return -EINVAL;
> 	}
>@@ -3384,6 +3433,48 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
> 	     CC((vmcs12->guest_bndcfgs & MSR_IA32_BNDCFGS_RSVD))))
> 		return -EINVAL;
> 
>+	if (ia32e) {
>+		if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_FRED) {
>+			if (CC(vmcs12->guest_ia32_fred_config &
>+			       (BIT_ULL(11) | GENMASK_ULL(5, 4) | BIT_ULL(2))) ||
>+			    CC(vmcs12->guest_ia32_fred_rsp1 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->guest_ia32_fred_rsp2 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->guest_ia32_fred_rsp3 & GENMASK_ULL(5, 0)) ||
>+			    CC(vmcs12->guest_ia32_fred_ssp1 & GENMASK_ULL(2, 0)) ||
>+			    CC(vmcs12->guest_ia32_fred_ssp2 & GENMASK_ULL(2, 0)) ||
>+			    CC(vmcs12->guest_ia32_fred_ssp3 & GENMASK_ULL(2, 0)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_config & PAGE_MASK, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp1, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp2, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_rsp3, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp1, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp2, vcpu)) ||
>+			    CC(is_noncanonical_msr_address(vmcs12->guest_ia32_fred_ssp3, vcpu)))
>+				return -EINVAL;
>+		}
>+		if (vmcs12->guest_cr4 & X86_CR4_FRED) {
>+			unsigned int ss_dpl = VMX_AR_DPL(vmcs12->guest_ss_ar_bytes);
>+			switch (ss_dpl) {
>+			case 0:
>+				if (CC(!(vmcs12->guest_cs_ar_bytes & VMX_AR_L_MASK)))
>+					return -EINVAL;
>+				break;
>+			case 1:
>+			case 2:
>+				return -EINVAL;

Ditto.

>+			case 3:
>+				if (CC(vmcs12->guest_rflags & X86_EFLAGS_IOPL))
>+					return -EINVAL;
>+				if (CC(vmcs12->guest_interruptibility_info & GUEST_INTR_STATE_STI))
>+					return -EINVAL;
>+				break;
>+			}
>+		}
>+	} else {
>+		if (CC(vmcs12->guest_cr4 & X86_CR4_FRED))
>+			return -EINVAL;
>+	}
>+
> 	if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE) {
> 		if (nested_vmx_check_cet_state_common(vcpu, vmcs12->guest_s_cet,
> 						      vmcs12->guest_ssp,
>-- 
>2.51.0
>
>

* Re: [PATCH v9 22/22] KVM: nVMX: Enable VMX FRED controls
  2025-10-26 20:19 ` [PATCH v9 22/22] KVM: nVMX: Enable VMX FRED controls Xin Li (Intel)
@ 2025-11-13  3:20   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-13  3:20 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:10PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Permit use of VMX FRED controls in nested VMX now that support for nested
>FRED is implemented.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

>---
>
>Change in v5:
>* Add TB from Xuelian Guo.
>---
> arch/x86/kvm/vmx/nested.c | 5 +++--
> arch/x86/kvm/vmx/vmx.c    | 1 +
> 2 files changed, 4 insertions(+), 2 deletions(-)
>
>diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>index 37ab8250dd31..655257b34d15 100644
>--- a/arch/x86/kvm/vmx/nested.c
>+++ b/arch/x86/kvm/vmx/nested.c
>@@ -7397,7 +7397,8 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
> 		 * advertise any feature in it to nVMX until its nVMX support
> 		 * is ready.
> 		 */

Shouldn't this comment be removed? I suppose it was a note to reviewers
explaining why it's hard-coded to 0. Since some features have been added, the
comment can be dropped.

>-		msrs->secondary_exit_ctls &= 0;
>+		msrs->secondary_exit_ctls &= SECONDARY_VM_EXIT_SAVE_IA32_FRED |
>+					     SECONDARY_VM_EXIT_LOAD_IA32_FRED;
> 	}
> }
> 
>@@ -7413,7 +7414,7 @@ static void nested_vmx_setup_entry_ctls(struct vmcs_config *vmcs_conf,
> 		VM_ENTRY_IA32E_MODE |
> #endif
> 		VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS |
>-		VM_ENTRY_LOAD_CET_STATE;
>+		VM_ENTRY_LOAD_CET_STATE | VM_ENTRY_LOAD_IA32_FRED;
> 	msrs->entry_ctls_high |=
> 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER |
> 		 VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
>diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>index 04442f869abb..8f3805a71a97 100644
>--- a/arch/x86/kvm/vmx/vmx.c
>+++ b/arch/x86/kvm/vmx/vmx.c
>@@ -7994,6 +7994,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
> 
> 	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
> 	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
>+	cr4_fixed1_update(X86_CR4_FRED,       eax, feature_bit(FRED));
> 
> #undef cr4_fixed1_update
> }
>-- 
>2.51.0
>

* Re: [PATCH v9 00/22] Enable FRED with KVM VMX
  2025-11-06 17:35 ` [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li
@ 2025-11-13 22:20   ` Sean Christopherson
  0 siblings, 0 replies; 50+ messages in thread
From: Sean Christopherson @ 2025-11-13 22:20 UTC (permalink / raw)
  To: Xin Li
  Cc: linux-kernel, kvm, linux-doc, pbonzini, corbet, tglx, mingo, bp,
	dave.hansen, x86, hpa, luto, peterz, andrew.cooper3, chao.gao,
	hch, sohil.mehta

On Thu, Nov 06, 2025, Xin Li wrote:
> 
> > On Oct 26, 2025, at 1:18 PM, Xin Li (Intel) <xin@zytor.com> wrote:
> > Although FRED and CET supervisor shadow stacks are independent CPU
> > features, FRED unconditionally includes FRED shadow stack pointer
> > MSRs IA32_FRED_SSP[0123], and IA32_FRED_SSP0 is just an alias of the
> > CET MSR IA32_PL0_SSP.  IOW, the state management of MSR IA32_PL0_SSP
> > becomes an overlap area, and Sean requested that FRED virtualization
> > to land after CET virtualization [1].
> > 
> > With CET virtualization now merged in v6.18, the path is clear to submit
> > the FRED virtualization patch series :).
> 
> Sean, what is the plan for the FRED patch series?

Review and merge it asap.  Unfortunately, "asap" isn't all that soon, mostly
because I've been bogged down with non-upstream stuff.  I'm hoping to dive in
next week (but take that with a grain of salt; I've more or less said that exact
thing to someone else for other patches for three weeks running).

* Re: [PATCH v9 07/22] KVM: VMX: Initialize VMCS FRED fields
  2025-10-26 20:18 ` [PATCH v9 07/22] KVM: VMX: Initialize VMCS FRED fields Xin Li (Intel)
@ 2025-11-19  2:44   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-19  2:44 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:18:55PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Initialize host VMCS FRED fields with host FRED MSRs' value and
>guest VMCS FRED fields to 0.
>
>FRED CPU state is managed in 9 new FRED MSRs:
>        IA32_FRED_CONFIG,
>        IA32_FRED_STKLVLS,
>        IA32_FRED_RSP0,
>        IA32_FRED_RSP1,
>        IA32_FRED_RSP2,
>        IA32_FRED_RSP3,
>        IA32_FRED_SSP1,
>        IA32_FRED_SSP2,
>        IA32_FRED_SSP3,
>as well as a few existing CPU registers and MSRs:
>        CR4.FRED,
>        IA32_STAR,
>        IA32_KERNEL_GS_BASE,
>        IA32_PL0_SSP (also known as IA32_FRED_SSP0).
>
>CR4, IA32_KERNEL_GS_BASE and IA32_STAR are already well managed.
>Except IA32_FRED_RSP0 and IA32_FRED_SSP0, all other FRED CPU state
>MSRs have corresponding VMCS fields in both the host-state and
>guest-state areas.  So KVM just needs to initialize them, and with
>proper VM entry/exit FRED controls, a FRED CPU will keep tracking
>host and guest FRED CPU state in VMCS automatically.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

one nit below,

>@@ -8717,6 +8748,11 @@ __init int vmx_hardware_setup(void)
> 
> 	kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
> 
>+	if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
>+		rdmsrl(MSR_IA32_FRED_CONFIG, kvm_host.fred_config);
>+		rdmsrl(MSR_IA32_FRED_STKLVLS, kvm_host.fred_stklvls);

s/rdmsrl/rdmsrq
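
I.e., with the rename applied:

	if (kvm_cpu_cap_has(X86_FEATURE_FRED)) {
		rdmsrq(MSR_IA32_FRED_CONFIG, kvm_host.fred_config);
		rdmsrq(MSR_IA32_FRED_STKLVLS, kvm_host.fred_stklvls);
	}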

>+	}
>+
> 	return r;
> }
> 
>diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>index f3dc77f006f9..0c1fbf75442b 100644
>--- a/arch/x86/kvm/x86.h
>+++ b/arch/x86/kvm/x86.h
>@@ -52,6 +52,9 @@ struct kvm_host_values {
> 	u64 xss;
> 	u64 s_cet;
> 	u64 arch_capabilities;
>+
>+	u64 fred_config;
>+	u64 fred_stklvls;
> };
> 
> void kvm_spurious_fault(void);
>-- 
>2.51.0
>

* Re: [PATCH v9 12/22] KVM: VMX: Virtualize FRED event_data
  2025-10-26 20:19 ` [PATCH v9 12/22] KVM: VMX: Virtualize FRED event_data Xin Li (Intel)
@ 2025-11-19  3:24   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-19  3:24 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

>diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>index 4a74c9f64f90..0b5d04c863a8 100644
>--- a/arch/x86/kvm/vmx/vmx.c
>+++ b/arch/x86/kvm/vmx/vmx.c
>@@ -1860,6 +1860,9 @@ void vmx_inject_exception(struct kvm_vcpu *vcpu)
> 
> 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, intr_info);
> 
>+	if (is_fred_enabled(vcpu))
>+		vmcs_write64(INJECTED_EVENT_DATA, ex->event_data);

I think event_data should be reset to 0 in kvm_clear_exception_queue().
Otherwise, ex->event_data may be stale here, i.e., the event_data from the
previous event may be injected along with the next event.
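
E.g. (sketch, assuming the event_data field that patch 12 adds to
vcpu->arch.exception):

@@ static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu)
 	vcpu->arch.exception.pending = false;
 	vcpu->arch.exception.injected = false;
+	vcpu->arch.exception.event_data = 0;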

<snip>

>+
> 	vmx_clear_hlt(vcpu);
> }
> 

> 	/*
>@@ -950,6 +963,7 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
> 	vcpu->arch.exception.error_code = error_code;
> 	vcpu->arch.exception.has_payload = false;
> 	vcpu->arch.exception.payload = 0;
>+	vcpu->arch.exception.event_data = event_data;

If userspace saves guest events (via kvm_vcpu_ioctl_x86_get_vcpu_events())
right after an event is requeued, event_data will be lost (as that uAPI only
saves the payload and KVM doesn't convert the event_data back to a payload
there). So this event will be delivered with incorrect event_data if the
event is restored on another system after migration.

> }
> EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_requeue_exception);
> 
>-- 
>2.51.0
>

* Re: [PATCH v9 14/22] KVM: x86: Save/restore the nested flag of an exception
  2025-10-26 20:19 ` [PATCH v9 14/22] KVM: x86: Save/restore the nested flag of an exception Xin Li (Intel)
@ 2025-11-19  6:13   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-19  6:13 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:02PM -0700, Xin Li (Intel) wrote:
>Save/restore the nested flag of an exception during VM save/restore
>and live migration to ensure a correct event stack level is chosen
>when a nested exception is injected through FRED event delivery.
>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 13/22] KVM: VMX: Virtualize FRED nested exception tracking
  2025-10-26 20:19 ` [PATCH v9 13/22] KVM: VMX: Virtualize FRED nested exception tracking Xin Li (Intel)
@ 2025-11-19  6:54   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-19  6:54 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:01PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Set the VMX nested exception bit in VM-entry interruption information
>field when injecting a nested exception using FRED event delivery to
>ensure:
>  1) A nested exception is injected on a correct stack level.
>  2) The nested bit defined in FRED stack frame is set.
>
>The event stack level used by FRED event delivery depends on whether
>the event was a nested exception encountered during delivery of an
>earlier event, because a nested exception is "regarded" as happening
>on ring 0.  E.g., when #PF is configured to use stack level 1 in
>IA32_FRED_STKLVLS MSR:
>  - nested #PF will be delivered on the stack pointed by IA32_FRED_RSP1
>    MSR when encountered in ring 3 and ring 0.
>  - normal #PF will be delivered on the stack pointed by IA32_FRED_RSP0
>    MSR when encountered in ring 3.
>
>The VMX nested-exception support ensures a correct event stack level is
>chosen when a VM entry injects a nested exception.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>[ Sean: reworked kvm_requeue_exception() to simplify the code changes ]
>Signed-off-by: Sean Christopherson <seanjc@google.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 15/22] KVM: x86: Mark CR4.FRED as not reserved
  2025-10-26 20:19 ` [PATCH v9 15/22] KVM: x86: Mark CR4.FRED as not reserved Xin Li (Intel)
@ 2025-11-19  7:26   ` Chao Gao
  0 siblings, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-11-19  7:26 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:03PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>The CR4.FRED bit, i.e., CR4[32], is no longer a reserved bit when
>guest cpu cap has FRED, i.e.,
>  1) All of FRED KVM support is in place.
>  2) Guest enumerates FRED.
>
>Otherwise it is still a reserved bit.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

I am not sure about two things regarding CR4.FRED and emulator code:

1. Should kvm_set_cr4() reject setting CR4.FRED when the vCPU isn't in long
   mode? The concern is that emulator code may call kvm_set_cr4(). This could
   cause VM-entry failure if CR4.FRED is set in other modes (see the sketch
   below).

2. mk_cr_64() drops the high 32 bits of the new CR4 value. So, CR4.FRED is always
   dropped. This may need an update.
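
For point 1, the check could look something like this (untested sketch;
whether kvm_set_cr4() or __kvm_is_valid_cr4() is the right home for it is
part of the question):

	if ((cr4 & X86_CR4_FRED) && !is_long_mode(vcpu))
		return 1;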


This patch itself looks good, so:

Reviewed-by: Chao Gao <chao.gao@intel.com>

>---
>
>Change in v5:
>* Add TB from Xuelian Guo.
>
>Change in v4:
>* Rebase on top of "guest_cpu_cap".
>
>Change in v3:
>* Don't allow CR4.FRED=1 before all of FRED KVM support is in place
>  (Sean Christopherson).
>---
> arch/x86/include/asm/kvm_host.h | 2 +-
> arch/x86/kvm/x86.h              | 2 ++
> 2 files changed, 3 insertions(+), 1 deletion(-)
>
>diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>index 5fff22d837aa..558f260a1afd 100644
>--- a/arch/x86/include/asm/kvm_host.h
>+++ b/arch/x86/include/asm/kvm_host.h
>@@ -142,7 +142,7 @@
> 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
> 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> 			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
>-			  | X86_CR4_LAM_SUP | X86_CR4_CET))
>+			  | X86_CR4_LAM_SUP | X86_CR4_CET | X86_CR4_FRED))
> 
> #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
> 
>diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>index 4f5d12d7136e..e9c6f304b02e 100644
>--- a/arch/x86/kvm/x86.h
>+++ b/arch/x86/kvm/x86.h
>@@ -687,6 +687,8 @@ static inline bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> 	if (!__cpu_has(__c, X86_FEATURE_SHSTK) &&       \
> 	    !__cpu_has(__c, X86_FEATURE_IBT))           \
> 		__reserved_bits |= X86_CR4_CET;         \
>+	if (!__cpu_has(__c, X86_FEATURE_FRED))          \
>+		__reserved_bits |= X86_CR4_FRED;        \
> 	__reserved_bits;                                \
> })
> 
>-- 
>2.51.0
>

* Re: [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs()
  2025-10-26 20:19 ` [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs() Xin Li (Intel)
@ 2025-11-19  7:40   ` Chao Gao
  2025-11-30 18:42     ` Xin Li
  0 siblings, 1 reply; 50+ messages in thread
From: Chao Gao @ 2025-11-19  7:40 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:04PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Add FRED related VMCS fields to dump_vmcs() to dump FRED context.

Why are SSPx not dumped?

* Re: [PATCH v9 16/22] KVM: VMX: Dump FRED context in dump_vmcs()
  2025-11-19  7:40   ` Chao Gao
@ 2025-11-30 18:42     ` Xin Li
  0 siblings, 0 replies; 50+ messages in thread
From: Xin Li @ 2025-11-30 18:42 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta


>> Add FRED related VMCS fields to dump_vmcs() to dump FRED context.
> 
> Why are SSPx not dumped?

Good eye!

It needs extra logic to extract FRED_SSP0, and I’m a bit lazy to do it now ;)
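
FWIW, SSP1-SSP3 are plain VMCS fields, so dumping them would just be
something like:

	pr_err("FRED SSP1=0x%016llx SSP2=0x%016llx SSP3=0x%016llx\n",
	       vmcs_read64(GUEST_IA32_FRED_SSP1),
	       vmcs_read64(GUEST_IA32_FRED_SSP2),
	       vmcs_read64(GUEST_IA32_FRED_SSP3));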


* Re: [PATCH v9 10/22] KVM: VMX: Add support for saving and restoring FRED MSRs
  2025-11-12  6:16   ` Chao Gao
@ 2025-12-01  6:20     ` Xin Li
  0 siblings, 0 replies; 50+ messages in thread
From: Xin Li @ 2025-12-01  6:20 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

> 
>> @@ -4316,6 +4374,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>> #endif
>> case MSR_IA32_U_CET:
>> case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
>> + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
>> + WARN_ON_ONCE(msr != MSR_IA32_FRED_SSP0);
> 
> This will be triggered if the guest only supports IBT and tries to write U_CET here.

You’re right, my mistake.

* Re: [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context
  2025-10-26 20:19 ` [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context Xin Li (Intel)
@ 2025-12-02  6:32   ` Chao Gao
  2025-12-08 22:37   ` Sean Christopherson
  1 sibling, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-12-02  6:32 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

>diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
>index f390f9f883c3..5eba2530ffb4 100644
>--- a/arch/x86/kvm/vmx/capabilities.h
>+++ b/arch/x86/kvm/vmx/capabilities.h
>@@ -80,6 +80,11 @@ static inline bool cpu_has_vmx_basic_no_hw_errcode_cc(void)
> 	return	vmcs_config.basic & VMX_BASIC_NO_HW_ERROR_CODE_CC;
> }
> 
>+static inline bool cpu_has_vmx_nested_exception(void)
>+{
>+	return vmcs_config.basic & VMX_BASIC_NESTED_EXCEPTION;
>+}
>+
> static inline bool cpu_has_virtual_nmis(void)
> {
> 	return vmcs_config.pin_based_exec_ctrl & PIN_BASED_VIRTUAL_NMIS &&
>diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>index cbb682424a5b..63cdfffba58b 100644
>--- a/arch/x86/kvm/vmx/nested.c
>+++ b/arch/x86/kvm/vmx/nested.c
>@@ -708,6 +708,9 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
> 
> 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
> 					 MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
>+
>+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
>+					 MSR_IA32_FRED_RSP0, MSR_TYPE_RW);

Why is only this specific MSR handled? What about other FRED MSRs?

> #endif
> 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
> 					 MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
>@@ -1294,9 +1297,11 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
> 	const u64 feature_bits = VMX_BASIC_DUAL_MONITOR_TREATMENT |
> 				 VMX_BASIC_INOUT |
> 				 VMX_BASIC_TRUE_CTLS |
>-				 VMX_BASIC_NO_HW_ERROR_CODE_CC;
>+				 VMX_BASIC_NO_HW_ERROR_CODE_CC |
>+				 VMX_BASIC_NESTED_EXCEPTION;
> 
>-	const u64 reserved_bits = GENMASK_ULL(63, 57) |
>+	const u64 reserved_bits = GENMASK_ULL(63, 59) |
>+				  BIT_ULL(57) |
> 				  GENMASK_ULL(47, 45) |
> 				  BIT_ULL(31);
> 
>@@ -2539,6 +2544,8 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
> 			     vmcs12->vm_entry_instruction_len);
> 		vmcs_write32(GUEST_INTERRUPTIBILITY_INFO,
> 			     vmcs12->guest_interruptibility_info);
>+		if (cpu_has_vmx_fred())
>+			vmcs_write64(INJECTED_EVENT_DATA, vmcs12->injected_event_data);
> 		vmx->loaded_vmcs->nmi_known_unmasked =
> 			!(vmcs12->guest_interruptibility_info & GUEST_INTR_STATE_NMI);
> 	} else {
>@@ -2693,6 +2700,18 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
> 				     vmcs12->guest_ssp, vmcs12->guest_ssp_tbl);
> 
> 	set_cr4_guest_host_mask(vmx);
>+
>+	if (guest_cpu_cap_has(&vmx->vcpu, X86_FEATURE_FRED) &&
>+	    nested_cpu_load_guest_fred_state(vmcs12)) {
>+		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->guest_ia32_fred_config);
>+		vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->guest_ia32_fred_rsp1);
>+		vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->guest_ia32_fred_rsp2);
>+		vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->guest_ia32_fred_rsp3);
>+		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->guest_ia32_fred_stklvls);
>+		vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->guest_ia32_fred_ssp1);
>+		vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->guest_ia32_fred_ssp2);
>+		vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->guest_ia32_fred_ssp3);

...

>+	}
> }
> 
> /*
>@@ -2759,6 +2778,18 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
> 		vmcs_write64(GUEST_IA32_PAT, vcpu->arch.pat);
> 	}
> 
>+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED) &&
>+	    (!vmx->nested.nested_run_pending || !nested_cpu_load_guest_fred_state(vmcs12))) {
>+		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmx->nested.pre_vmenter_fred_config);
>+		vmcs_write64(GUEST_IA32_FRED_RSP1, vmx->nested.pre_vmenter_fred_rsp1);
>+		vmcs_write64(GUEST_IA32_FRED_RSP2, vmx->nested.pre_vmenter_fred_rsp2);
>+		vmcs_write64(GUEST_IA32_FRED_RSP3, vmx->nested.pre_vmenter_fred_rsp3);
>+		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.pre_vmenter_fred_stklvls);
>+		vmcs_write64(GUEST_IA32_FRED_SSP1, vmx->nested.pre_vmenter_fred_ssp1);
>+		vmcs_write64(GUEST_IA32_FRED_SSP2, vmx->nested.pre_vmenter_fred_ssp2);
>+		vmcs_write64(GUEST_IA32_FRED_SSP3, vmx->nested.pre_vmenter_fred_ssp3);

Would it be clearer to add two helpers to read/write FRED VMCS fields? e.g., (compile test only)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c8edbe9c7e00..b709f4cdcba3 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2614,6 +2614,30 @@ static void vmcs_write_cet_state(struct kvm_vcpu *vcpu, u64 s_cet,
	}
 }
 
+static void vmcs_read_fred_msrs(struct fred_msrs *msrs)
+{
+	msrs->fred_config = vmcs_read64(GUEST_IA32_FRED_CONFIG);
+	msrs->fred_rsp1 = vmcs_read64(GUEST_IA32_FRED_RSP1);
+	msrs->fred_rsp2 = vmcs_read64(GUEST_IA32_FRED_RSP2);
+	msrs->fred_rsp3 = vmcs_read64(GUEST_IA32_FRED_RSP3);
+	msrs->fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
+	msrs->fred_ssp1 = vmcs_read64(GUEST_IA32_FRED_SSP1);
+	msrs->fred_ssp2 = vmcs_read64(GUEST_IA32_FRED_SSP2);
+	msrs->fred_ssp3 = vmcs_read64(GUEST_IA32_FRED_SSP3);
+}
+
+static void vmcs_write_fred_msrs(struct fred_msrs *msrs)
+{
+	vmcs_write64(GUEST_IA32_FRED_CONFIG, msrs->fred_config);
+	vmcs_write64(GUEST_IA32_FRED_RSP1, msrs->fred_rsp1);
+	vmcs_write64(GUEST_IA32_FRED_RSP2, msrs->fred_rsp2);
+	vmcs_write64(GUEST_IA32_FRED_RSP3, msrs->fred_rsp3);
+	vmcs_write64(GUEST_IA32_FRED_STKLVLS, msrs->fred_stklvls);
+	vmcs_write64(GUEST_IA32_FRED_SSP1, msrs->fred_ssp1);
+	vmcs_write64(GUEST_IA32_FRED_SSP2, msrs->fred_ssp2);
+	vmcs_write64(GUEST_IA32_FRED_SSP3, msrs->fred_ssp3);
+}
+
 static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 {
	struct hv_enlightened_vmcs *hv_evmcs = nested_vmx_evmcs(vmx);
@@ -2736,16 +2760,8 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 
	set_cr4_guest_host_mask(vmx);
 
-	if (nested_cpu_load_guest_fred_state(vmcs12)) {
-		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->guest_ia32_fred_config);
-		vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->guest_ia32_fred_rsp1);
-		vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->guest_ia32_fred_rsp2);
-		vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->guest_ia32_fred_rsp3);
-		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->guest_ia32_fred_stklvls);
-		vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->guest_ia32_fred_ssp1);
-		vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->guest_ia32_fred_ssp2);
-		vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->guest_ia32_fred_ssp3);
-	}
+	if (nested_cpu_load_guest_fred_state(vmcs12))
+		vmcs_write_fred_msrs((struct fred_msrs *)&vmcs12->guest_ia32_fred_config);
 }
 
 /*
@@ -2813,16 +2829,8 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
	}
 
	if (!vmx->nested.nested_run_pending ||
-	    !nested_cpu_load_guest_fred_state(vmcs12)) {
-		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmx->nested.pre_vmenter_fred_config);
-		vmcs_write64(GUEST_IA32_FRED_RSP1, vmx->nested.pre_vmenter_fred_rsp1);
-		vmcs_write64(GUEST_IA32_FRED_RSP2, vmx->nested.pre_vmenter_fred_rsp2);
-		vmcs_write64(GUEST_IA32_FRED_RSP3, vmx->nested.pre_vmenter_fred_rsp3);
-		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.pre_vmenter_fred_stklvls);
-		vmcs_write64(GUEST_IA32_FRED_SSP1, vmx->nested.pre_vmenter_fred_ssp1);
-		vmcs_write64(GUEST_IA32_FRED_SSP2, vmx->nested.pre_vmenter_fred_ssp2);
-		vmcs_write64(GUEST_IA32_FRED_SSP3, vmx->nested.pre_vmenter_fred_ssp3);
-	}
+	    !nested_cpu_load_guest_fred_state(vmcs12))
+		vmcs_write_fred_msrs(&vmx->nested.fred_msrs_pre_vmenter);
 
	vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
			vcpu->arch.l1_tsc_offset,
@@ -3830,16 +3838,8 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
				    &vmx->nested.pre_vmenter_ssp_tbl);
 
	if (!vmx->nested.nested_run_pending ||
-	    !nested_cpu_load_guest_fred_state(vmcs12)) {
-		vmx->nested.pre_vmenter_fred_config = vmcs_read64(GUEST_IA32_FRED_CONFIG);
-		vmx->nested.pre_vmenter_fred_rsp1 = vmcs_read64(GUEST_IA32_FRED_RSP1);
-		vmx->nested.pre_vmenter_fred_rsp2 = vmcs_read64(GUEST_IA32_FRED_RSP2);
-		vmx->nested.pre_vmenter_fred_rsp3 = vmcs_read64(GUEST_IA32_FRED_RSP3);
-		vmx->nested.pre_vmenter_fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
-		vmx->nested.pre_vmenter_fred_ssp1 = vmcs_read64(GUEST_IA32_FRED_SSP1);
-		vmx->nested.pre_vmenter_fred_ssp2 = vmcs_read64(GUEST_IA32_FRED_SSP2);
-		vmx->nested.pre_vmenter_fred_ssp3 = vmcs_read64(GUEST_IA32_FRED_SSP3);
-	}
+	    !nested_cpu_load_guest_fred_state(vmcs12))
+		vmcs_read_fred_msrs(&vmx->nested.fred_msrs_pre_vmenter);
 
	/*
	 * Overwrite vmcs01.GUEST_CR3 with L1's CR3 if EPT is disabled *and*
@@ -4938,25 +4938,10 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
			    &vmcs12->guest_ssp,
			    &vmcs12->guest_ssp_tbl);
 
-	vmx->nested.fred_msr_at_vmexit.fred_config = vmcs_read64(GUEST_IA32_FRED_CONFIG);
-	vmx->nested.fred_msr_at_vmexit.fred_rsp1 = vmcs_read64(GUEST_IA32_FRED_RSP1);
-	vmx->nested.fred_msr_at_vmexit.fred_rsp2 = vmcs_read64(GUEST_IA32_FRED_RSP2);
-	vmx->nested.fred_msr_at_vmexit.fred_rsp3 = vmcs_read64(GUEST_IA32_FRED_RSP3);
-	vmx->nested.fred_msr_at_vmexit.fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
-	vmx->nested.fred_msr_at_vmexit.fred_ssp1 = vmcs_read64(GUEST_IA32_FRED_SSP1);
-	vmx->nested.fred_msr_at_vmexit.fred_ssp2 = vmcs_read64(GUEST_IA32_FRED_SSP2);
-	vmx->nested.fred_msr_at_vmexit.fred_ssp3 = vmcs_read64(GUEST_IA32_FRED_SSP3);
+	vmcs_read_fred_msrs(&vmx->nested.fred_msrs_at_vmexit);
 
-	if (nested_cpu_save_guest_fred_state(vmcs12)) {
-		vmcs12->guest_ia32_fred_config = vmx->nested.fred_msr_at_vmexit.fred_config;
-		vmcs12->guest_ia32_fred_rsp1 = vmx->nested.fred_msr_at_vmexit.fred_rsp1;
-		vmcs12->guest_ia32_fred_rsp2 = vmx->nested.fred_msr_at_vmexit.fred_rsp2;
-		vmcs12->guest_ia32_fred_rsp3 = vmx->nested.fred_msr_at_vmexit.fred_rsp3;
-		vmcs12->guest_ia32_fred_stklvls = vmx->nested.fred_msr_at_vmexit.fred_stklvls;
-		vmcs12->guest_ia32_fred_ssp1 = vmx->nested.fred_msr_at_vmexit.fred_ssp1;
-		vmcs12->guest_ia32_fred_ssp2 = vmx->nested.fred_msr_at_vmexit.fred_ssp2;
-		vmcs12->guest_ia32_fred_ssp3 = vmx->nested.fred_msr_at_vmexit.fred_ssp3;
-	}
+	if (nested_cpu_save_guest_fred_state(vmcs12))
+		memcpy(&vmcs12->guest_ia32_fred_config, &vmx->nested.fred_msrs_at_vmexit, sizeof(struct fred_msrs));
 }
 
 /*
@@ -5119,25 +5104,10 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
		WARN_ON_ONCE(__kvm_emulate_msr_write(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
						     vmcs12->host_ia32_perf_global_ctrl));
 
-	if (nested_cpu_load_host_fred_state(vmcs12)) {
-		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->host_ia32_fred_config);
-		vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->host_ia32_fred_rsp1);
-		vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->host_ia32_fred_rsp2);
-		vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->host_ia32_fred_rsp3);
-		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->host_ia32_fred_stklvls);
-		vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->host_ia32_fred_ssp1);
-		vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->host_ia32_fred_ssp2);
-		vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->host_ia32_fred_ssp3);
-	} else {
-		vmcs_write64(GUEST_IA32_FRED_CONFIG, vmx->nested.fred_msr_at_vmexit.fred_config);
-		vmcs_write64(GUEST_IA32_FRED_RSP1, vmx->nested.fred_msr_at_vmexit.fred_rsp1);
-		vmcs_write64(GUEST_IA32_FRED_RSP2, vmx->nested.fred_msr_at_vmexit.fred_rsp2);
-		vmcs_write64(GUEST_IA32_FRED_RSP3, vmx->nested.fred_msr_at_vmexit.fred_rsp3);
-		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.fred_msr_at_vmexit.fred_stklvls);
-		vmcs_write64(GUEST_IA32_FRED_SSP1, vmx->nested.fred_msr_at_vmexit.fred_ssp1);
-		vmcs_write64(GUEST_IA32_FRED_SSP2, vmx->nested.fred_msr_at_vmexit.fred_ssp2);
-		vmcs_write64(GUEST_IA32_FRED_SSP3, vmx->nested.fred_msr_at_vmexit.fred_ssp3);
-	}
+	if (nested_cpu_load_host_fred_state(vmcs12))
+		vmcs_write_fred_msrs((struct fred_msrs *)&vmcs12->host_ia32_fred_config);
+	else
+		vmcs_write_fred_msrs(&vmx->nested.fred_msrs_at_vmexit);
 
	/* Set L1 segment info according to Intel SDM
	    27.5.2 Loading Host Segment and Descriptor-Table Registers */
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 36dcc888e5c6..c1c32e8ae068 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -87,7 +87,7 @@ struct pt_desc {
  * running the L1 VMM if SECONDARY_VM_EXIT_LOAD_IA32_FRED is cleared in
  * vmcs12.
  */
-struct fred_msr_at_vmexit {
+struct fred_msrs {
	u64 fred_config;
	u64 fred_rsp1;
	u64 fred_rsp2;
@@ -215,16 +215,8 @@ struct nested_vmx {
	u64 pre_vmenter_s_cet;
	u64 pre_vmenter_ssp;
	u64 pre_vmenter_ssp_tbl;
-	u64 pre_vmenter_fred_config;
-	u64 pre_vmenter_fred_rsp1;
-	u64 pre_vmenter_fred_rsp2;
-	u64 pre_vmenter_fred_rsp3;
-	u64 pre_vmenter_fred_stklvls;
-	u64 pre_vmenter_fred_ssp1;
-	u64 pre_vmenter_fred_ssp2;
-	u64 pre_vmenter_fred_ssp3;
-
-	struct fred_msr_at_vmexit fred_msr_at_vmexit;
+
+	struct fred_msrs fred_msrs_at_vmexit, fred_msrs_pre_vmenter;
 
	/* to migrate it to L1 if L2 writes to L1's CR8 directly */
	int l1_tpr_threshold;

* Re: [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks
  2025-10-26 20:19 ` [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks Xin Li (Intel)
@ 2025-12-02  6:35   ` Chao Gao
  2025-12-08 22:49   ` Sean Christopherson
  1 sibling, 0 replies; 50+ messages in thread
From: Chao Gao @ 2025-12-02  6:35 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, seanjc, corbet, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, andrew.cooper3,
	hch, sohil.mehta

On Sun, Oct 26, 2025 at 01:19:09PM -0700, Xin Li (Intel) wrote:
>From: Xin Li <xin3.li@intel.com>
>
>Add VMX feature checks to the SHADOW_FIELD_R[OW] macros to prevent access
>to VMCS fields that may be unsupported on some CPUs.
>
>Functions like copy_shadow_to_vmcs12() and copy_vmcs12_to_shadow() access
>VMCS fields that may not exist on certain hardware, such as
>INJECTED_EVENT_DATA.  To avoid VMREAD/VMWRITE warnings, skip syncing fields
>tied to unsupported VMX features.
>
>Signed-off-by: Xin Li <xin3.li@intel.com>
>Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>Tested-by: Shan Kang <shan.kang@intel.com>
>Tested-by: Xuelian Guo <xuelian.guo@intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

* Re: [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context
  2025-10-26 20:19 ` [PATCH v9 19/22] KVM: nVMX: Handle FRED VMCS fields in nested VMX context Xin Li (Intel)
  2025-12-02  6:32   ` Chao Gao
@ 2025-12-08 22:37   ` Sean Christopherson
  1 sibling, 0 replies; 50+ messages in thread
From: Sean Christopherson @ 2025-12-08 22:37 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, corbet, tglx, mingo, bp,
	dave.hansen, x86, hpa, luto, peterz, andrew.cooper3, chao.gao,
	hch, sohil.mehta

On Sun, Oct 26, 2025, Xin Li (Intel) wrote:
> diff --git a/arch/x86/kvm/vmx/vmcs_shadow_fields.h b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
> index cad128d1657b..da338327c2b3 100644
> --- a/arch/x86/kvm/vmx/vmcs_shadow_fields.h
> +++ b/arch/x86/kvm/vmx/vmcs_shadow_fields.h
> @@ -74,6 +74,10 @@ SHADOW_FIELD_RW(HOST_GS_BASE, host_gs_base)
>  /* 64-bit */
>  SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS, guest_physical_address)
>  SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS_HIGH, guest_physical_address)
> +SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA, original_event_data)
> +SHADOW_FIELD_RO(ORIGINAL_EVENT_DATA_HIGH, original_event_data)
> +SHADOW_FIELD_RW(INJECTED_EVENT_DATA, injected_event_data)
> +SHADOW_FIELD_RW(INJECTED_EVENT_DATA_HIGH, injected_event_data)

Please add shadow fields in a separate patch, with a sufficient explanation to
justify why KVM needs to enable VMCS shadowing for the fields (it's purely a
performance optimization).


* Re: [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks
  2025-10-26 20:19 ` [PATCH v9 21/22] KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks Xin Li (Intel)
  2025-12-02  6:35   ` Chao Gao
@ 2025-12-08 22:49   ` Sean Christopherson
  1 sibling, 0 replies; 50+ messages in thread
From: Sean Christopherson @ 2025-12-08 22:49 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, corbet, tglx, mingo, bp,
	dave.hansen, x86, hpa, luto, peterz, andrew.cooper3, chao.gao,
	hch, sohil.mehta, Yosry Ahmed

+Yosry

On Sun, Oct 26, 2025, Xin Li (Intel) wrote:
> From: Xin Li <xin3.li@intel.com>
> 
> Add VMX feature checks to the SHADOW_FIELD_R[OW] macros to prevent access
> to VMCS fields that may be unsupported on some CPUs.
> 
> Functions like copy_shadow_to_vmcs12() and copy_vmcs12_to_shadow() access
> VMCS fields that may not exist on certain hardware, such as
> INJECTED_EVENT_DATA.  To avoid VMREAD/VMWRITE warnings, skip syncing fields
> tied to unsupported VMX features.
> 
> Signed-off-by: Xin Li <xin3.li@intel.com>
> Signed-off-by: Xin Li (Intel) <xin@zytor.com>
> Tested-by: Shan Kang <shan.kang@intel.com>
> Tested-by: Xuelian Guo <xuelian.guo@intel.com>
> ---
> 
> Change in v5:
> * Add TB from Xuelian Guo.
> 
> Change since v2:
> * Add __SHADOW_FIELD_R[OW] for better readability or maintability (Sean).

Coming back to this with fresh eyes, handling fields that conditionally exist
_only_ for VMCS shadowing is somewhat ridiculous.  For PML and the VMX preemption
timer, the special case handling makes sense because the fields are emulated by
KVM irrespective of hardware support.  But for fields that KVM doesn't emulate in
software, e.g. GUEST_INTR_STATUS and the FRED fields, allowing accesses through
emulated VMREAD/VMWRITE and then filtering out VMCS shadowing accesses is just us
being stubborn.

I still 100% think that not restricting based on the virtual CPU model defined by
userspace is the way to go[*], because that'd require an absurd amount of effort,
complexity, and memory to solve a problem no one actually cares about.  But
updating KVM's array of vmcs12 fields once during kvm-intel.ko load isn't difficult,
and would make KVM suck a little less when running on old hardware.

E.g. running the test_vmwrite_vmread KUT subtest on CPUs without TSC scaling still
fails with the wonderful:

  FAIL: VMX_VMCS_ENUM.MAX_INDEX expected: 19, actual: 17

due to QEMU (sanely) setting the max index to 17 (VMX preemption timer) when the
virtual CPU model doesn't support TSC scaling.

And looking forward, we're going to have the same mess with FRED due to QEMU
(again, sanely) basing its MSR_IA32_VMX_VMCS_ENUM value on the virtual CPU
model:

    if (f[FEAT_7_1_EAX] & CPUID_7_1_EAX_FRED) {
        /* FRED injected-event data (0x2052).  */
        kvm_msr_entry_add(cpu, MSR_IA32_VMX_VMCS_ENUM, 0x52);
    } else if (f[FEAT_VMX_EXIT_CTLS] &
               VMX_VM_EXIT_ACTIVATE_SECONDARY_CONTROLS) {
        /* Secondary VM-exit controls (0x2044).  */
        kvm_msr_entry_add(cpu, MSR_IA32_VMX_VMCS_ENUM, 0x44);
    } else if (f[FEAT_VMX_SECONDARY_CTLS] & VMX_SECONDARY_EXEC_TSC_SCALING) {
        /* TSC multiplier (0x2032).  */
        kvm_msr_entry_add(cpu, MSR_IA32_VMX_VMCS_ENUM, 0x32);
    } else {
        /* Preemption timer (0x482E).  */
        kvm_msr_entry_add(cpu, MSR_IA32_VMX_VMCS_ENUM, 0x2E);
    }

KVM will still have virtualization holes, e.g. if userspace hides TSC scaling when
running on hardware+KVM that supports TSC scaling, but as above I don't think that's
a problem worth solving.

I'll post a patch (just need to test on bare metal) to sanitize vmcs12 fields,
at which point FRED nVMX support shouldn't have to do anything special beyond
noting the dependency, i.e. it should only take a few lines of code.

[*] https://lore.kernel.org/all/YR2Tf9WPNEzrE7Xg@google.com

* Re: [PATCH v9 00/22] Enable FRED with KVM VMX
  2025-10-26 20:18 [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li (Intel)
                   ` (22 preceding siblings ...)
  2025-11-06 17:35 ` [PATCH v9 00/22] Enable FRED with KVM VMX Xin Li
@ 2025-12-08 22:51 ` Sean Christopherson
  2025-12-09 17:08   ` Xin Li
  23 siblings, 1 reply; 50+ messages in thread
From: Sean Christopherson @ 2025-12-08 22:51 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, linux-doc, pbonzini, corbet, tglx, mingo, bp,
	dave.hansen, x86, hpa, luto, peterz, andrew.cooper3, chao.gao,
	hch, sohil.mehta

On Sun, Oct 26, 2025, Xin Li (Intel) wrote:
> Xin Li (18):
>   KVM: VMX: Enable support for secondary VM exit controls
>   KVM: VMX: Initialize VM entry/exit FRED controls in vmcs_config
>   KVM: VMX: Disable FRED if FRED consistency checks fail
>   KVM: VMX: Initialize VMCS FRED fields
>   KVM: VMX: Set FRED MSR intercepts
>   KVM: VMX: Save/restore guest FRED RSP0
>   KVM: VMX: Add support for saving and restoring FRED MSRs
>   KVM: x86: Add a helper to detect if FRED is enabled for a vCPU
>   KVM: VMX: Virtualize FRED event_data
>   KVM: VMX: Virtualize FRED nested exception tracking
>   KVM: x86: Mark CR4.FRED as not reserved
>   KVM: VMX: Dump FRED context in dump_vmcs()
>   KVM: x86: Advertise support for FRED
>   KVM: nVMX: Enable support for secondary VM exit controls
>   KVM: nVMX: Handle FRED VMCS fields in nested VMX context
>   KVM: nVMX: Validate FRED-related VMCS fields
>   KVM: nVMX: Guard SHADOW_FIELD_R[OW] macros with VMX feature checks
>   KVM: nVMX: Enable VMX FRED controls
> 
> Xin Li (Intel) (4):

I'm guessing the two different "names" isn't intended?

>   x86/cea: Prefix event stack names with ESTACK_
>   x86/cea: Use array indexing to simplify exception stack access
>   x86/cea: Export __this_cpu_ist_top_va() to KVM
>   KVM: x86: Save/restore the nested flag of an exception

* Re: [PATCH v9 00/22] Enable FRED with KVM VMX
  2025-12-08 22:51 ` Sean Christopherson
@ 2025-12-09 17:08   ` Xin Li
  0 siblings, 0 replies; 50+ messages in thread
From: Xin Li @ 2025-12-09 17:08 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: linux-kernel, kvm, linux-doc, pbonzini, corbet, tglx, mingo, bp,
	dave.hansen, x86, hpa, luto, peterz, andrew.cooper3, chao.gao,
	hch, sohil.mehta


> On Dec 8, 2025, at 2:51 PM, Sean Christopherson <seanjc@google.com> wrote:
> 
>> Xin Li (Intel) (4):
> 
> I'm guessing the two different "names" isn't intended?

Not at all :’(

I've switched from using my Intel email to xin@zytor.com due to recipient
threshold limits.  Earlier versions of the patch series were sent to LKML
with the Intel address, while all new patches use xin@zytor.com.

