* [PATCH v1 0/4] KVM: VMX: Handle the immediate form of MSR instructions
@ 2025-07-30 17:46 Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 1/4] x86/cpufeatures: Add a CPU feature bit for MSR immediate form instructions Xin Li (Intel)
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Xin Li (Intel) @ 2025-07-30 17:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: pbonzini, seanjc, tglx, mingo, bp, dave.hansen, x86, hpa, xin,
	chao.gao

This patch set handles two newly introduced VM exit reasons associated
with the immediate form of MSR instructions to ensure proper
virtualization of these instructions.

The immediate form of MSR access instructions is primarily motivated
by performance, not code size: by having the MSR number in an immediate,
it is available *much* earlier in the pipeline, which gives the
hardware much more leeway in how a particular MSR is handled.
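
For illustration, the new forms encode the MSR index directly in the
instruction and move the data through one explicit general-purpose
register, instead of the implicit ECX index and EDX:EAX data pair of
the legacy forms (the operand syntax below is illustrative only; see
the ISE for the authoritative encoding):

	rdmsr	$0x6e0, %r12	/* read MSR_IA32_TSC_DEADLINE into R12 */
	wrmsrns	%r12, $0x6e0	/* non-serializing write of R12 to the MSR */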

For proper virtualization of the immediate form of MSR instructions,
the Intel VMX architecture adds the following changes:

  1) The immediate form of RDMSR uses VM exit reason 84.

  2) The immediate form of WRMSRNS uses VM exit reason 85.

  3) For both VM exit reasons 84 and 85, the exit qualification is set
     to the MSR address causing the VM exit.

  4) Bits 3-6 of the VM-exit instruction information field identify
     the operand register used in the immediate form of the MSR instruction.

  5) The VM-exit instruction length field records the size of the
     immediate form of the MSR instruction.
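
Taken together, an exit handler for these two exit reasons can recover
everything it needs directly from the VMCS.  A minimal sketch (the
handler name here is made up; patch 3 implements the real thing):

	static int handle_msr_imm_exit(struct kvm_vcpu *vcpu)
	{
		/* Per 3), the exit qualification holds the MSR index. */
		u32 msr = vmx_get_exit_qual(vcpu);
		/* Per 4), bits 3-6 of instruction info name the operand register. */
		int reg = (vmcs_read32(VMX_INSTRUCTION_INFO) >> 3) & 0xf;

		/*
		 * ... emulate the MSR access using msr/reg, then advance RIP
		 * past the instruction using the VM-exit instruction length
		 * from 5) ...
		 */
		return kvm_skip_emulated_instruction(vcpu);
	}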

Note: The VMX specification for the immediate form of MSR instructions
was inadvertently omitted from the last published ISE (Intel's
Instruction Set Extensions and Future Features document), but it will
be included in the upcoming edition.

Linux bare metal support of the immediate form of MSR instructions is
still under development; however, the KVM support effort is proceeding
independently of the bare metal implementation.


Xin Li (Intel) (4):
  x86/cpufeatures: Add a CPU feature bit for MSR immediate form
    instructions
  KVM: x86: Introduce MSR read/write emulation helpers
  KVM: VMX: Handle the immediate form of MSR instructions
  KVM: x86: Advertise support for the immediate form of MSR instructions

 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  5 ++
 arch/x86/include/uapi/asm/vmx.h    |  6 +-
 arch/x86/kernel/cpu/scattered.c    |  1 +
 arch/x86/kvm/cpuid.c               |  6 +-
 arch/x86/kvm/reverse_cpuid.h       |  5 ++
 arch/x86/kvm/vmx/vmx.c             | 26 ++++++++
 arch/x86/kvm/vmx/vmx.h             |  5 ++
 arch/x86/kvm/x86.c                 | 96 +++++++++++++++++++++++-------
 arch/x86/kvm/x86.h                 |  1 +
 10 files changed, 130 insertions(+), 22 deletions(-)


base-commit: 33f843444e28920d6e624c6c24637b4bb5d3c8de
-- 
2.50.1



* [PATCH v1 1/4] x86/cpufeatures: Add a CPU feature bit for MSR immediate form instructions
  2025-07-30 17:46 [PATCH v1 0/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
@ 2025-07-30 17:46 ` Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers Xin Li (Intel)
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Xin Li (Intel) @ 2025-07-30 17:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: pbonzini, seanjc, tglx, mingo, bp, dave.hansen, x86, hpa, xin,
	chao.gao

The immediate form of MSR access instructions is primarily motivated
by performance, not code size: by having the MSR number in an immediate,
it is available *much* earlier in the pipeline, which gives the
hardware much more leeway in how a particular MSR is handled.

Use a scattered CPU feature bit for MSR immediate form instructions.
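
With the bit in place, kernel code can gate on the feature with the
standard helpers, e.g. (purely illustrative; there is no in-tree user
yet):

	if (cpu_feature_enabled(X86_FEATURE_MSR_IMM))
		pr_info("immediate forms of RDMSR/WRMSRNS available\n");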

Suggested-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/scattered.c    | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 286d509f9363..75b43bbe2a6d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -491,6 +491,7 @@
 #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
 #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
 #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
+#define X86_FEATURE_MSR_IMM		(21*32+14) /* MSR immediate form instructions */
 
 /*
  * BUG word(s)
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index b4a1f6732a3a..5fe19bbe538e 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -27,6 +27,7 @@ static const struct cpuid_bit cpuid_bits[] = {
 	{ X86_FEATURE_APERFMPERF,		CPUID_ECX,  0, 0x00000006, 0 },
 	{ X86_FEATURE_EPB,			CPUID_ECX,  3, 0x00000006, 0 },
 	{ X86_FEATURE_INTEL_PPIN,		CPUID_EBX,  0, 0x00000007, 1 },
+	{ X86_FEATURE_MSR_IMM,			CPUID_ECX,  5, 0x00000007, 1 },
 	{ X86_FEATURE_APX,			CPUID_EDX, 21, 0x00000007, 1 },
 	{ X86_FEATURE_RRSBA_CTRL,		CPUID_EDX,  2, 0x00000007, 2 },
 	{ X86_FEATURE_BHI_CTRL,			CPUID_EDX,  4, 0x00000007, 2 },
-- 
2.50.1



* [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-30 17:46 [PATCH v1 0/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 1/4] x86/cpufeatures: Add a CPU feature bit for MSR immediate form instructions Xin Li (Intel)
@ 2025-07-30 17:46 ` Xin Li (Intel)
  2025-07-31 10:34   ` Chao Gao
  2025-08-01 14:37   ` Sean Christopherson
  2025-07-30 17:46 ` [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 4/4] KVM: x86: Advertise support for " Xin Li (Intel)
  3 siblings, 2 replies; 17+ messages in thread
From: Xin Li (Intel) @ 2025-07-30 17:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: pbonzini, seanjc, tglx, mingo, bp, dave.hansen, x86, hpa, xin,
	chao.gao

Add helper functions to centralize guest MSR read and write emulation.
This change consolidates the MSR emulation logic and makes it easier
to extend support for new MSR-related VM exit reasons introduced with
the immediate form of MSR instructions.

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c              | 67 +++++++++++++++++++++++----------
 2 files changed, 49 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f19a76d3ca0e..a854d9a166fe 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -201,6 +201,7 @@ enum kvm_reg {
 	VCPU_EXREG_SEGMENTS,
 	VCPU_EXREG_EXIT_INFO_1,
 	VCPU_EXREG_EXIT_INFO_2,
+	VCPU_EXREG_EDX_EAX,
 };
 
 enum {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a1c49bc681c4..5086c3b30345 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2024,54 +2024,71 @@ static int kvm_msr_user_space(struct kvm_vcpu *vcpu, u32 index,
 	return 1;
 }
 
-int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
+static int kvm_emulate_get_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
-	u32 ecx = kvm_rcx_read(vcpu);
 	u64 data;
 	int r;
 
-	r = kvm_get_msr_with_filter(vcpu, ecx, &data);
+	r = kvm_get_msr_with_filter(vcpu, msr, &data);
 
 	if (!r) {
-		trace_kvm_msr_read(ecx, data);
+		trace_kvm_msr_read(msr, data);
 
-		kvm_rax_write(vcpu, data & -1u);
-		kvm_rdx_write(vcpu, (data >> 32) & -1u);
+		if (reg == VCPU_EXREG_EDX_EAX) {
+			kvm_rax_write(vcpu, data & -1u);
+			kvm_rdx_write(vcpu, (data >> 32) & -1u);
+		} else {
+			kvm_register_write(vcpu, reg, data);
+		}
 	} else {
 		/* MSR read failed? See if we should ask user space */
-		if (kvm_msr_user_space(vcpu, ecx, KVM_EXIT_X86_RDMSR, 0,
+		if (kvm_msr_user_space(vcpu, msr, KVM_EXIT_X86_RDMSR, 0,
 				       complete_fast_rdmsr, r))
 			return 0;
-		trace_kvm_msr_read_ex(ecx);
+		trace_kvm_msr_read_ex(msr);
 	}
 
 	return kvm_x86_call(complete_emulated_msr)(vcpu, r);
 }
+
+int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
+{
+	return kvm_emulate_get_msr(vcpu, kvm_rcx_read(vcpu), VCPU_EXREG_EDX_EAX);
+}
 EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
 
-int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
+static int kvm_emulate_set_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
-	u32 ecx = kvm_rcx_read(vcpu);
-	u64 data = kvm_read_edx_eax(vcpu);
+	u64 data;
 	int r;
 
-	r = kvm_set_msr_with_filter(vcpu, ecx, data);
+	if (reg == VCPU_EXREG_EDX_EAX)
+		data = kvm_read_edx_eax(vcpu);
+	else
+		data = kvm_register_read(vcpu, reg);
+
+	r = kvm_set_msr_with_filter(vcpu, msr, data);
 
 	if (!r) {
-		trace_kvm_msr_write(ecx, data);
+		trace_kvm_msr_write(msr, data);
 	} else {
 		/* MSR write failed? See if we should ask user space */
-		if (kvm_msr_user_space(vcpu, ecx, KVM_EXIT_X86_WRMSR, data,
+		if (kvm_msr_user_space(vcpu, msr, KVM_EXIT_X86_WRMSR, data,
 				       complete_fast_msr_access, r))
 			return 0;
 		/* Signal all other negative errors to userspace */
 		if (r < 0)
 			return r;
-		trace_kvm_msr_write_ex(ecx, data);
+		trace_kvm_msr_write_ex(msr, data);
 	}
 
 	return kvm_x86_call(complete_emulated_msr)(vcpu, r);
 }
+
+int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
+{
+	return kvm_emulate_set_msr(vcpu, kvm_rcx_read(vcpu), VCPU_EXREG_EDX_EAX);
+}
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
 int kvm_emulate_as_nop(struct kvm_vcpu *vcpu)
@@ -2163,9 +2180,8 @@ static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
+static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
-	u32 msr = kvm_rcx_read(vcpu);
 	u64 data;
 	fastpath_t ret;
 	bool handled;
@@ -2174,11 +2190,19 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 
 	switch (msr) {
 	case APIC_BASE_MSR + (APIC_ICR >> 4):
-		data = kvm_read_edx_eax(vcpu);
+		if (reg == VCPU_EXREG_EDX_EAX)
+			data = kvm_read_edx_eax(vcpu);
+		else
+			data = kvm_register_read(vcpu, reg);
+
 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
 		break;
 	case MSR_IA32_TSC_DEADLINE:
-		data = kvm_read_edx_eax(vcpu);
+		if (reg == VCPU_EXREG_EDX_EAX)
+			data = kvm_read_edx_eax(vcpu);
+		else
+			data = kvm_register_read(vcpu, reg);
+
 		handled = !handle_fastpath_set_tscdeadline(vcpu, data);
 		break;
 	default:
@@ -2200,6 +2224,11 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 
 	return ret;
 }
+
+fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
+{
+	return handle_set_msr_irqoff(vcpu, kvm_rcx_read(vcpu), VCPU_EXREG_EDX_EAX);
+}
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
 
 /*
-- 
2.50.1



* [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions
  2025-07-30 17:46 [PATCH v1 0/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 1/4] x86/cpufeatures: Add a CPU feature bit for MSR immediate form instructions Xin Li (Intel)
  2025-07-30 17:46 ` [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers Xin Li (Intel)
@ 2025-07-30 17:46 ` Xin Li (Intel)
  2025-07-31 11:04   ` Chao Gao
  2025-07-30 17:46 ` [PATCH v1 4/4] KVM: x86: Advertise support for " Xin Li (Intel)
  3 siblings, 1 reply; 17+ messages in thread
From: Xin Li (Intel) @ 2025-07-30 17:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: pbonzini, seanjc, tglx, mingo, bp, dave.hansen, x86, hpa, xin,
	chao.gao

Handle two newly introduced VM exit reasons associated with the
immediate form of MSR instructions.

For proper virtualization of the immediate form of MSR instructions,
the Intel VMX architecture adds the following changes:

  1) The immediate form of RDMSR uses VM exit reason 84.

  2) The immediate form of WRMSRNS uses VM exit reason 85.

  3) For both VM exit reasons 84 and 85, the exit qualification is set
     to the MSR address causing the VM exit.

  4) Bits 3-6 of the VM-exit instruction information field identify
     the operand register used in the immediate form of the MSR instruction.

  5) The VM-exit instruction length field records the size of the
     immediate form of the MSR instruction.

Add code to properly virtualize the immediate form of MSR instructions.

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/include/uapi/asm/vmx.h |  6 +++++-
 arch/x86/kvm/vmx/vmx.c          | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h          |  5 +++++
 arch/x86/kvm/x86.c              | 29 ++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.h              |  1 +
 6 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a854d9a166fe..f8d85efd47b6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -979,6 +979,7 @@ struct kvm_vcpu_arch {
 	unsigned long guest_debug_dr7;
 	u64 msr_platform_info;
 	u64 msr_misc_features_enables;
+	int rdmsr_reg;
 
 	u64 mcg_cap;
 	u64 mcg_status;
@@ -2156,7 +2157,9 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiat
 int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
 int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu);
+int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
 int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu);
+int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
 int kvm_emulate_as_nop(struct kvm_vcpu *vcpu);
 int kvm_emulate_invd(struct kvm_vcpu *vcpu);
 int kvm_emulate_mwait(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index f0f4a4cf84a7..9792e329343e 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -94,6 +94,8 @@
 #define EXIT_REASON_BUS_LOCK            74
 #define EXIT_REASON_NOTIFY              75
 #define EXIT_REASON_TDCALL              77
+#define EXIT_REASON_MSR_READ_IMM        84
+#define EXIT_REASON_MSR_WRITE_IMM       85
 
 #define VMX_EXIT_REASONS \
 	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
@@ -158,7 +160,9 @@
 	{ EXIT_REASON_TPAUSE,                "TPAUSE" }, \
 	{ EXIT_REASON_BUS_LOCK,              "BUS_LOCK" }, \
 	{ EXIT_REASON_NOTIFY,                "NOTIFY" }, \
-	{ EXIT_REASON_TDCALL,                "TDCALL" }
+	{ EXIT_REASON_TDCALL,                "TDCALL" }, \
+	{ EXIT_REASON_MSR_READ_IMM,          "MSR_READ_IMM" }, \
+	{ EXIT_REASON_MSR_WRITE_IMM,         "MSR_WRITE_IMM" }
 
 #define VMX_EXIT_REASON_FLAGS \
 	{ VMX_EXIT_REASONS_FAILED_VMENTRY,	"FAILED_VMENTRY" }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index aa157fe5b7b3..7129e7b1ef03 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6003,6 +6003,22 @@ static int handle_notify(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int handle_rdmsr_imm(struct kvm_vcpu *vcpu)
+{
+	u32 msr = vmx_get_exit_qual(vcpu);
+	int reg = vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
+
+	return kvm_emulate_rdmsr_imm(vcpu, msr, reg);
+}
+
+static int handle_wrmsr_imm(struct kvm_vcpu *vcpu)
+{
+	u32 msr = vmx_get_exit_qual(vcpu);
+	int reg = vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
+
+	return kvm_emulate_wrmsr_imm(vcpu, msr, reg);
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume.  Otherwise they set the kvm_run parameter to indicate what needs
@@ -6061,6 +6077,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_ENCLS]		      = handle_encls,
 	[EXIT_REASON_BUS_LOCK]                = handle_bus_lock_vmexit,
 	[EXIT_REASON_NOTIFY]		      = handle_notify,
+	[EXIT_REASON_MSR_READ_IMM]            = handle_rdmsr_imm,
+	[EXIT_REASON_MSR_WRITE_IMM]           = handle_wrmsr_imm,
 };
 
 static const int kvm_vmx_max_exit_handlers =
@@ -6495,6 +6513,8 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 #ifdef CONFIG_MITIGATION_RETPOLINE
 	if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
 		return kvm_emulate_wrmsr(vcpu);
+	else if (exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
+		return handle_wrmsr_imm(vcpu);
 	else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
 		return handle_preemption_timer(vcpu);
 	else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)
@@ -7171,6 +7191,12 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
 	switch (vmx_get_exit_reason(vcpu).basic) {
 	case EXIT_REASON_MSR_WRITE:
 		return handle_fastpath_set_msr_irqoff(vcpu);
+	case EXIT_REASON_MSR_WRITE_IMM: {
+		u32 msr = vmx_get_exit_qual(vcpu);
+		int reg = vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
+
+		return handle_fastpath_set_msr_imm_irqoff(vcpu, msr, reg);
+	}
 	case EXIT_REASON_PREEMPTION_TIMER:
 		return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
 	case EXIT_REASON_HLT:
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index d3389baf3ab3..24d65dac5e89 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -706,6 +706,11 @@ static inline bool vmx_guest_state_valid(struct kvm_vcpu *vcpu)
 
 void dump_vmcs(struct kvm_vcpu *vcpu);
 
+static inline int vmx_get_instr_info_reg(u32 vmx_instr_info)
+{
+	return (vmx_instr_info >> 3) & 0xf;
+}
+
 static inline int vmx_get_instr_info_reg2(u32 vmx_instr_info)
 {
 	return (vmx_instr_info >> 28) & 0xf;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5086c3b30345..ed41d583aaae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1962,9 +1962,14 @@ EXPORT_SYMBOL_GPL(kvm_set_msr);
 
 static void complete_userspace_rdmsr(struct kvm_vcpu *vcpu)
 {
-	if (!vcpu->run->msr.error) {
+	if (vcpu->run->msr.error)
+		return;
+
+	if (vcpu->arch.rdmsr_reg == VCPU_EXREG_EDX_EAX) {
 		kvm_rax_write(vcpu, (u32)vcpu->run->msr.data);
 		kvm_rdx_write(vcpu, vcpu->run->msr.data >> 32);
+	} else {
+		kvm_register_write(vcpu, vcpu->arch.rdmsr_reg, vcpu->run->msr.data);
 	}
 }
 
@@ -2041,6 +2046,8 @@ static int kvm_emulate_get_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)
 			kvm_register_write(vcpu, reg, data);
 		}
 	} else {
+		vcpu->arch.rdmsr_reg = reg;
+
 		/* MSR read failed? See if we should ask user space */
 		if (kvm_msr_user_space(vcpu, msr, KVM_EXIT_X86_RDMSR, 0,
 				       complete_fast_rdmsr, r))
@@ -2057,6 +2064,12 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
 
+int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
+{
+	return kvm_emulate_get_msr(vcpu, msr, reg);
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr_imm);
+
 static int kvm_emulate_set_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
 	u64 data;
@@ -2091,6 +2104,12 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
+int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
+{
+	return kvm_emulate_set_msr(vcpu, msr, reg);
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr_imm);
+
 int kvm_emulate_as_nop(struct kvm_vcpu *vcpu)
 {
 	return kvm_skip_emulated_instruction(vcpu);
@@ -2231,6 +2250,12 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
 
+fastpath_t handle_fastpath_set_msr_imm_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
+{
+	return handle_set_msr_irqoff(vcpu, msr, reg);
+}
+EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_imm_irqoff);
+
 /*
  * Adapt set_msr() to msr_io()'s calling convention
  */
@@ -8387,6 +8412,8 @@ static int emulator_get_msr_with_filter(struct x86_emulate_ctxt *ctxt,
 		return X86EMUL_UNHANDLEABLE;
 
 	if (r) {
+		vcpu->arch.rdmsr_reg = VCPU_EXREG_EDX_EAX;
+
 		if (kvm_msr_user_space(vcpu, msr_index, KVM_EXIT_X86_RDMSR, 0,
 				       complete_emulated_rdmsr, r))
 			return X86EMUL_IO_NEEDED;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index bcfd9b719ada..f8d117a17c46 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -438,6 +438,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
 int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			    int emulation_type, void *insn, int insn_len);
 fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
+fastpath_t handle_fastpath_set_msr_imm_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg);
 fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu);
 
 extern struct kvm_caps kvm_caps;
-- 
2.50.1



* [PATCH v1 4/4] KVM: x86: Advertise support for the immediate form of MSR instructions
  2025-07-30 17:46 [PATCH v1 0/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
                   ` (2 preceding siblings ...)
  2025-07-30 17:46 ` [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
@ 2025-07-30 17:46 ` Xin Li (Intel)
  2025-08-01 14:39   ` Sean Christopherson
  3 siblings, 1 reply; 17+ messages in thread
From: Xin Li (Intel) @ 2025-07-30 17:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: pbonzini, seanjc, tglx, mingo, bp, dave.hansen, x86, hpa, xin,
	chao.gao

Advertise support for the immediate form of MSR instructions to userspace
if the instructions are supported by the underlying CPU.

The immediate form of MSR access instructions is primarily motivated
by performance, not code size: by having the MSR number in an immediate,
it is available *much* earlier in the pipeline, which gives the
hardware much more leeway in how a particular MSR is handled.
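
Once advertised, a guest can detect the feature via
CPUID.(EAX=7,ECX=1):ECX[5], matching the scattered.c entry from patch 1.
An illustrative guest-side check:

	unsigned int eax, ebx, ecx, edx;

	cpuid_count(7, 1, &eax, &ebx, &ecx, &edx);
	if (ecx & BIT(5))
		pr_info("immediate form of RDMSR/WRMSRNS supported\n");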

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/cpuid.c            | 6 +++++-
 arch/x86/kvm/reverse_cpuid.h    | 5 +++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8d85efd47b6..9ca7ec17c1c5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -772,6 +772,7 @@ enum kvm_only_cpuid_leafs {
 	CPUID_7_2_EDX,
 	CPUID_24_0_EBX,
 	CPUID_8000_0021_ECX,
+	CPUID_7_1_ECX,
 	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e2836a255b16..eaaa9203d4d9 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -985,6 +985,10 @@ void kvm_set_cpu_caps(void)
 		F(LAM),
 	);
 
+	kvm_cpu_cap_init(CPUID_7_1_ECX,
+		SCATTERED_F(MSR_IMM),
+	);
+
 	kvm_cpu_cap_init(CPUID_7_1_EDX,
 		F(AVX_VNNI_INT8),
 		F(AVX_NE_CONVERT),
@@ -1411,9 +1415,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 				goto out;
 
 			cpuid_entry_override(entry, CPUID_7_1_EAX);
+			cpuid_entry_override(entry, CPUID_7_1_ECX);
 			cpuid_entry_override(entry, CPUID_7_1_EDX);
 			entry->ebx = 0;
-			entry->ecx = 0;
 		}
 		if (max_idx >= 2) {
 			entry = do_host_cpuid(array, function, 2);
diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
index c53b92379e6e..743ab25ba787 100644
--- a/arch/x86/kvm/reverse_cpuid.h
+++ b/arch/x86/kvm/reverse_cpuid.h
@@ -25,6 +25,9 @@
 #define KVM_X86_FEATURE_SGX2		KVM_X86_FEATURE(CPUID_12_EAX, 1)
 #define KVM_X86_FEATURE_SGX_EDECCSSA	KVM_X86_FEATURE(CPUID_12_EAX, 11)
 
+/* Intel-defined sub-features, CPUID level 0x00000007:1 (ECX) */
+#define KVM_X86_FEATURE_MSR_IMM		KVM_X86_FEATURE(CPUID_7_1_ECX, 5)
+
 /* Intel-defined sub-features, CPUID level 0x00000007:1 (EDX) */
 #define X86_FEATURE_AVX_VNNI_INT8       KVM_X86_FEATURE(CPUID_7_1_EDX, 4)
 #define X86_FEATURE_AVX_NE_CONVERT      KVM_X86_FEATURE(CPUID_7_1_EDX, 5)
@@ -87,6 +90,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
 	[CPUID_7_2_EDX]       = {         7, 2, CPUID_EDX},
 	[CPUID_24_0_EBX]      = {      0x24, 0, CPUID_EBX},
 	[CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
+	[CPUID_7_1_ECX]       = {         7, 1, CPUID_ECX},
 };
 
 /*
@@ -128,6 +132,7 @@ static __always_inline u32 __feature_translate(int x86_feature)
 	KVM_X86_TRANSLATE_FEATURE(BHI_CTRL);
 	KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO);
 	KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO);
+	KVM_X86_TRANSLATE_FEATURE(MSR_IMM);
 	default:
 		return x86_feature;
 	}
-- 
2.50.1



* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-30 17:46 ` [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers Xin Li (Intel)
@ 2025-07-31 10:34   ` Chao Gao
  2025-07-31 16:40     ` Xin Li
  2025-08-01 14:37   ` Sean Christopherson
  1 sibling, 1 reply; 17+ messages in thread
From: Chao Gao @ 2025-07-31 10:34 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

>-fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
>+static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)

How about __handle_fastpath_set_msr_irqoff()? It's better to keep
"fastpath" in the function name to convey that this function is for
fastpath only.

> {
>-	u32 msr = kvm_rcx_read(vcpu);
> 	u64 data;
> 	fastpath_t ret;
> 	bool handled;
>@@ -2174,11 +2190,19 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> 
> 	switch (msr) {
> 	case APIC_BASE_MSR + (APIC_ICR >> 4):
>-		data = kvm_read_edx_eax(vcpu);
>+		if (reg == VCPU_EXREG_EDX_EAX)
>+			data = kvm_read_edx_eax(vcpu);
>+		else
>+			data = kvm_register_read(vcpu, reg);

...

>+
> 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
> 		break;
> 	case MSR_IA32_TSC_DEADLINE:
>-		data = kvm_read_edx_eax(vcpu);
>+		if (reg == VCPU_EXREG_EDX_EAX)
>+			data = kvm_read_edx_eax(vcpu);
>+		else
>+			data = kvm_register_read(vcpu, reg);
>+

Hoist this chunk out of the switch clause to avoid duplication.

> 		handled = !handle_fastpath_set_tscdeadline(vcpu, data);
> 		break;
> 	default:
>@@ -2200,6 +2224,11 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> 
> 	return ret;
> }
>+
>+fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
>+{
>+	return handle_set_msr_irqoff(vcpu, kvm_rcx_read(vcpu), VCPU_EXREG_EDX_EAX);
>+}
> EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
> 
> /*
>-- 
>2.50.1
>


* Re: [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions
  2025-07-30 17:46 ` [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions Xin Li (Intel)
@ 2025-07-31 11:04   ` Chao Gao
  2025-07-31 16:53     ` Xin Li
  0 siblings, 1 reply; 17+ messages in thread
From: Chao Gao @ 2025-07-31 11:04 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

On Wed, Jul 30, 2025 at 10:46:04AM -0700, Xin Li (Intel) wrote:
>Handle two newly introduced VM exit reasons associated with the
>immediate form of MSR instructions.
>
>For proper virtualization of the immediate form of MSR instructions,
>Intel VMX architecture adds the following changes:

The CPUID feature bit also indicates support for the two new VM-exit reasons.
Therefore, KVM needs to reflect EXIT_REASON_MSR_READ/WRITE_IMM VM-exits to
L1 guests in nested cases if KVM claims it supports the new form of MSR
instructions.

I'm also wondering if the emulator needs to support this new instruction. I
suppose it does.


* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-31 10:34   ` Chao Gao
@ 2025-07-31 16:40     ` Xin Li
  2025-07-31 17:19       ` Xin Li
  2025-08-01  0:47       ` Sean Christopherson
  0 siblings, 2 replies; 17+ messages in thread
From: Xin Li @ 2025-07-31 16:40 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

On 7/31/2025 3:34 AM, Chao Gao wrote:
>> -fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
>> +static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
> 
> How about __handle_fastpath_set_msr_irqoff()? It's better to keep
> "fastpath" in the function name to convey that this function is for
> fastpath only.

This is now a static function with return type fastpath_t, so I guess
it's okay to remove fastpath from its name (it looks like Sean prefers
shorter function names as long as they contain enough information).

But if the convention is to have "fastpath" in all fast-path function
names, I can change it.

> 
>> {
>> -	u32 msr = kvm_rcx_read(vcpu);
>> 	u64 data;
>> 	fastpath_t ret;
>> 	bool handled;
>> @@ -2174,11 +2190,19 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
>>
>> 	switch (msr) {
>> 	case APIC_BASE_MSR + (APIC_ICR >> 4):
>> -		data = kvm_read_edx_eax(vcpu);
>> +		if (reg == VCPU_EXREG_EDX_EAX)
>> +			data = kvm_read_edx_eax(vcpu);
>> +		else
>> +			data = kvm_register_read(vcpu, reg);
> 
> ...
> 
>> +
>> 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
>> 		break;
>> 	case MSR_IA32_TSC_DEADLINE:
>> -		data = kvm_read_edx_eax(vcpu);
>> +		if (reg == VCPU_EXREG_EDX_EAX)
>> +			data = kvm_read_edx_eax(vcpu);
>> +		else
>> +			data = kvm_register_read(vcpu, reg);
>> +
> 
> Hoist this chunk out of the switch clause to avoid duplication.

I thought about it, but didn't do so because the original code doesn't
read the MSR data from registers when an MSR is not being handled in the
fast path, which saves some cycles in most cases.



* Re: [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions
  2025-07-31 11:04   ` Chao Gao
@ 2025-07-31 16:53     ` Xin Li
  2025-07-31 22:10       ` Xin Li
  0 siblings, 1 reply; 17+ messages in thread
From: Xin Li @ 2025-07-31 16:53 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

On 7/31/2025 4:04 AM, Chao Gao wrote:
> On Wed, Jul 30, 2025 at 10:46:04AM -0700, Xin Li (Intel) wrote:
>> Handle two newly introduced VM exit reasons associated with the
>> immediate form of MSR instructions.
>>
>> For proper virtualization of the immediate form of MSR instructions,
>> Intel VMX architecture adds the following changes:
> 
> The CPUID feature bit also indicates support for the two new VM-exit reasons.
> Therefore, KVM needs to reflect EXIT_REASON_MSR_READ/WRITE_IMM VM-exits to
> L1 guests in nested cases if KVM claims it supports the new form of MSR
> instructions.

Damn, forgot about nested...

> 
> I'm also wondering if the emulator needs to support this new instruction. I
> suppose it does.

Yes, I thought about it.  However, the new instructions use the VEX
prefix, which KVM's emulator doesn't support today.



* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-31 16:40     ` Xin Li
@ 2025-07-31 17:19       ` Xin Li
  2025-08-01  0:47       ` Sean Christopherson
  1 sibling, 0 replies; 17+ messages in thread
From: Xin Li @ 2025-07-31 17:19 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

On 7/31/2025 9:40 AM, Xin Li wrote:
>>> +
>>>         handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
>>>         break;
>>>     case MSR_IA32_TSC_DEADLINE:
>>> -        data = kvm_read_edx_eax(vcpu);
>>> +        if (reg == VCPU_EXREG_EDX_EAX)
>>> +            data = kvm_read_edx_eax(vcpu);
>>> +        else
>>> +            data = kvm_register_read(vcpu, reg);
>>> +
>>
>> Hoist this chunk out of the switch clause to avoid duplication.
> 
> I thought about it, but didn't do so because the original code doesn't 
> read the MSR data from registers when a MSR is not being handled in the
> fast path, which saves some cycles in most cases.

I think I can make it an inline function.
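
Something along these lines, presumably (the helper name is made up):

	static inline u64 kvm_get_wrmsr_data(struct kvm_vcpu *vcpu, int reg)
	{
		if (reg == VCPU_EXREG_EDX_EAX)
			return kvm_read_edx_eax(vcpu);

		return kvm_register_read(vcpu, reg);
	}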


* Re: [PATCH v1 3/4] KVM: VMX: Handle the immediate form of MSR instructions
  2025-07-31 16:53     ` Xin Li
@ 2025-07-31 22:10       ` Xin Li
  0 siblings, 0 replies; 17+ messages in thread
From: Xin Li @ 2025-07-31 22:10 UTC (permalink / raw)
  To: Chao Gao
  Cc: linux-kernel, kvm, pbonzini, seanjc, tglx, mingo, bp, dave.hansen,
	x86, hpa

On 7/31/2025 9:53 AM, Xin Li wrote:
>> The CPUID feature bit also indicates support for the two new VM-exit 
>> reasons.
>> Therefore, KVM needs to reflect EXIT_REASON_MSR_READ/WRITE_IMM VM- 
>> exits to
>> L1 guests in nested cases if KVM claims it supports the new form of MSR
>> instructions.
> 
> Damn, forgot about nested...

The current nested KVM VMX implementation already handles VM exits
caused by the immediate form of MSR instructions, forwarding them to L1
as intended by design.

I just need to add MSR bitmap checks to nested_vmx_exit_handled_msr().
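
Roughly like this (a sketch only; the switch lives in
nested_vmx_l1_wants_exit(), and nested_vmx_exit_handled_msr() would need
to pull the MSR index from the exit qualification for the immediate
forms instead of from RCX):

	case EXIT_REASON_MSR_READ_IMM:
	case EXIT_REASON_MSR_WRITE_IMM:
		return nested_vmx_exit_handled_msr(vcpu, vmcs12, exit_reason);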

Thanks!
     Xin



* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-31 16:40     ` Xin Li
  2025-07-31 17:19       ` Xin Li
@ 2025-08-01  0:47       ` Sean Christopherson
  2025-08-01  1:35         ` Xin Li
  1 sibling, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-08-01  0:47 UTC (permalink / raw)
  To: Xin Li
  Cc: Chao Gao, linux-kernel, kvm, pbonzini, tglx, mingo, bp,
	dave.hansen, x86, hpa

On Thu, Jul 31, 2025, Xin Li wrote:
> On 7/31/2025 3:34 AM, Chao Gao wrote:
> > > -fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> > > +static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
> > 
> > How about __handle_fastpath_set_msr_irqoff()? It's better to keep
> > "fastpath" in the function name to convey that this function is for
> > fastpath only.
> 
> This is now a static function with return type fastpath_t, so I guess
> it's okay to remove fastpath from its name (It looks that Sean prefers
> shorter function names if they contains enough information).
> 
> But if the protocol is to have "fastpath" in all fast path function
> names, I can change it.

I'm also greedy and want it both ways :-)

Spoiler alert, this is what I ended up with (completely untested at this point):

static fastpath_t __handle_fastpath_wrmsr(struct kvm_vcpu *vcpu, u32 msr,
					  u64 data)
{
	switch (msr) {
	case APIC_BASE_MSR + (APIC_ICR >> 4):
		if (!lapic_in_kernel(vcpu) || !apic_x2apic_mode(vcpu->arch.apic) ||
		    kvm_x2apic_icr_write_fast(vcpu->arch.apic, data))
			return EXIT_FASTPATH_NONE;
		break;
	case MSR_IA32_TSC_DEADLINE:
		if (!kvm_can_use_hv_timer(vcpu))
			return EXIT_FASTPATH_NONE;

		kvm_set_lapic_tscdeadline_msr(vcpu, data);
		break;
	default:
		return EXIT_FASTPATH_NONE;
	}

	trace_kvm_msr_write(msr, data);

	if (!kvm_skip_emulated_instruction(vcpu))
		return EXIT_FASTPATH_EXIT_USERSPACE;

	return EXIT_FASTPATH_REENTER_GUEST;
}

fastpath_t handle_fastpath_wrmsr(struct kvm_vcpu *vcpu)
{
	return __handle_fastpath_wrmsr(vcpu, kvm_rcx_read(vcpu),
				       kvm_read_edx_eax(vcpu));
}
EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr);

fastpath_t handle_fastpath_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
{
	return __handle_fastpath_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
}
EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr_imm);


> > > {
> > > -	u32 msr = kvm_rcx_read(vcpu);
> > > 	u64 data;
> > > 	fastpath_t ret;
> > > 	bool handled;
> > > @@ -2174,11 +2190,19 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> > > 
> > > 	switch (msr) {
> > > 	case APIC_BASE_MSR + (APIC_ICR >> 4):
> > > -		data = kvm_read_edx_eax(vcpu);
> > > +		if (reg == VCPU_EXREG_EDX_EAX)
> > > +			data = kvm_read_edx_eax(vcpu);
> > > +		else
> > > +			data = kvm_register_read(vcpu, reg);
> > 
> > ...
> > 
> > > +
> > > 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
> > > 		break;
> > > 	case MSR_IA32_TSC_DEADLINE:
> > > -		data = kvm_read_edx_eax(vcpu);
> > > +		if (reg == VCPU_EXREG_EDX_EAX)
> > > +			data = kvm_read_edx_eax(vcpu);
> > > +		else
> > > +			data = kvm_register_read(vcpu, reg);
> > > +
> > 
> > Hoist this chunk out of the switch clause to avoid duplication.
> 
> I thought about it, but didn't do so because the original code doesn't read
> the MSR data from registers when a MSR is not being handled in the
> fast path, which saves some cycles in most cases.

Can you hold off on doing anything with this series?  Mostly to save your time.

Long story short, I unexpectedly dove into the fastpath code this week while sorting
out an issue with the mediated PMU series, and I ended up with a series of patches
to clean things up for both the mediated PMU series and for this series.

With luck, I'll get the cleanups, the mediated PMU series, and a v2 of this series
posted tomorrow (I also have some feedback on VCPU_EXREG_EDX_EAX; we can avoid it
entirely without much fuss).



* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-08-01  0:47       ` Sean Christopherson
@ 2025-08-01  1:35         ` Xin Li
  0 siblings, 0 replies; 17+ messages in thread
From: Xin Li @ 2025-08-01  1:35 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Chao Gao, linux-kernel, kvm, pbonzini, tglx, mingo, bp,
	dave.hansen, x86, hpa

>>>> +
>>>> 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
>>>> 		break;
>>>> 	case MSR_IA32_TSC_DEADLINE:
>>>> -		data = kvm_read_edx_eax(vcpu);
>>>> +		if (reg == VCPU_EXREG_EDX_EAX)
>>>> +			data = kvm_read_edx_eax(vcpu);
>>>> +		else
>>>> +			data = kvm_register_read(vcpu, reg);
>>>> +
>>>
>>> Hoist this chunk out of the switch clause to avoid duplication.
>>
>> I thought about it, but didn't do so because the original code doesn't read
>> the MSR data from registers when a MSR is not being handled in the
>> fast path, which saves some cycles in most cases.
> 
> Can you hold off on doing anything with this series?  Mostly to save your time.

Sure.

> 
> Long story short, I unexpectedly dove into the fastpath code this week while sorting
> out an issue with the mediated PMU series, and I ended up with a series of patches
> to clean things up for both the mediated PMU series and for this series.
> 
> With luck, I'll get the cleanups, the mediated PMU series, and a v2 of this series
> posted tomorrow (I also have some feedback on VCPU_EXREG_EDX_EAX; we can avoid it
> entirely without much fuss).
> 

Will wait and take a look when you post them.



* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-07-30 17:46 ` [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers Xin Li (Intel)
  2025-07-31 10:34   ` Chao Gao
@ 2025-08-01 14:37   ` Sean Christopherson
  2025-08-01 16:27     ` Xin Li
  1 sibling, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-08-01 14:37 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, pbonzini, tglx, mingo, bp, dave.hansen, x86,
	hpa, chao.gao

On Wed, Jul 30, 2025, Xin Li (Intel) wrote:
> Add helper functions to centralize guest MSR read and write emulation.
> This change consolidates the MSR emulation logic and makes it easier
> to extend support for new MSR-related VM exit reasons introduced with
> the immediate form of MSR instructions.
> 
> Signed-off-by: Xin Li (Intel) <xin@zytor.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/x86.c              | 67 +++++++++++++++++++++++----------
>  2 files changed, 49 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f19a76d3ca0e..a854d9a166fe 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -201,6 +201,7 @@ enum kvm_reg {
>  	VCPU_EXREG_SEGMENTS,
>  	VCPU_EXREG_EXIT_INFO_1,
>  	VCPU_EXREG_EXIT_INFO_2,
> +	VCPU_EXREG_EDX_EAX,

I really, really don't want to add a "reg" for this.  It's not an actual register,
and bleeds details of one specific flow throughout KVM.

The only path where KVM _needs_ to differentiate between the "legacy" instructions
and the immediate variants is in the inner RDMSR helper.

For the WRMSR helper, KVM can and should simply pass in @data, not pass in a reg
and then have the helper do an if-else on the reg:

  int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
  {
  	return __kvm_emulate_wrmsr(vcpu, kvm_rcx_read(vcpu),
  				   kvm_read_edx_eax(vcpu));
  }
  EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
  
  int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
  {
  	return __kvm_emulate_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
  }
  EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr_imm);

And for the RDMSR userspace completion, KVM is already eating an indirect function
call, so the wrappers can simply pass in the appropriate completion helper.  It
does mean having to duplicate the vcpu->run->msr.error check, but sharing a
callback would mean duplicating the "reg == VCPU_EXREG_EDX_EAX" check, *and* we'd
also need to be very careful about setting the effective register in the other
existing flows that utilize complete_fast_rdmsr.

Then to communicate that the legacy form with implicit destination operands is
being emulated, pass -1 for the register.  It's not the prettiest, but I do like
using "reg invalid" to communicate that the destination is implicit.

  static int __kvm_emulate_rdmsr(struct kvm_vcpu *vcpu, u32 msr, int reg,
  			       int (*complete_rdmsr)(struct kvm_vcpu *))
  {
  	u64 data;
  	int r;
  
  	r = kvm_get_msr_with_filter(vcpu, msr, &data);
  	if (!r) {
  		trace_kvm_msr_read(msr, data);
  
  		if (reg < 0) {
  			kvm_rax_write(vcpu, data & -1u);
  			kvm_rdx_write(vcpu, (data >> 32) & -1u);
  		} else {
  			kvm_register_write(vcpu, reg, data);
  		}
  	} else {
  		/* MSR read failed? See if we should ask user space */
  		if (kvm_msr_user_space(vcpu, msr, KVM_EXIT_X86_RDMSR, 0,
  				       complete_rdmsr, r))
  			return 0;
  		trace_kvm_msr_read_ex(msr);
  	}
  
  	return kvm_x86_call(complete_emulated_msr)(vcpu, r);
  }
  
  int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
  {
  	return __kvm_emulate_rdmsr(vcpu, kvm_rcx_read(vcpu), -1,
  				   complete_fast_rdmsr);
  }
  EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
  
  int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
  {
  	vcpu->arch.cui_rdmsr_imm_reg = reg;
  
  	return __kvm_emulate_rdmsr(vcpu, msr, reg, complete_fast_rdmsr_imm);
  }
  EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr_imm);
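
(A sketch of the completion helper referenced above, which isn't shown
in this mail; the shape is inferred from the existing complete_fast_rdmsr:)

  static int complete_fast_rdmsr_imm(struct kvm_vcpu *vcpu)
  {
  	if (!vcpu->run->msr.error)
  		kvm_register_write(vcpu, vcpu->arch.cui_rdmsr_imm_reg,
  				   vcpu->run->msr.data);

  	return complete_fast_msr_access(vcpu);
  }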

>  };
>  
>  enum {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a1c49bc681c4..5086c3b30345 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2024,54 +2024,71 @@ static int kvm_msr_user_space(struct kvm_vcpu *vcpu, u32 index,
>  	return 1;
>  }
>  
> -int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
> +static int kvm_emulate_get_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)

Please keep "rdmsr" and "wrmsr" when dealing with emulation of those instructions to
help differentiate from the many other MSR get/set paths.  (ignore the actual
emulator hooks; that code is crusty, but not worth the churn to clean up).

> @@ -2163,9 +2180,8 @@ static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
>  	return 0;
>  }
>  
> -fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> +static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)

I think it makes sense to (a) add the x86.c code and the vmx.c code in the same
patch, and then (b) add fastpath support in a separate patch to make the initial
(combined x86.c + vmx.c) patch easier to review.  Adding the x86.c plumbing/logic
before the VMX support makes the x86.c change difficult to review, as there are
no users of the new paths, and the VMX changes are quite tiny.  Ignoring the arch
boilerplate, the VMX changes barely add anything relative to the x86.c changes.

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae2c8c10e5d2..757e4bb89f36 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6003,6 +6003,23 @@ static int handle_notify(struct kvm_vcpu *vcpu)
        return 1;
 }
 
+static int vmx_get_msr_imm_reg(struct kvm_vcpu *vcpu)
+{
+       return vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
+}
+
+static int handle_rdmsr_imm(struct kvm_vcpu *vcpu)
+{
+       return kvm_emulate_rdmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
+                                    vmx_get_msr_imm_reg(vcpu));
+}
+
+static int handle_wrmsr_imm(struct kvm_vcpu *vcpu)
+{
+       return kvm_emulate_wrmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
+                                    vmx_get_msr_imm_reg(vcpu));
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume.  Otherwise they set the kvm_run parameter to indicate what needs
@@ -6061,6 +6078,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
        [EXIT_REASON_ENCLS]                   = handle_encls,
        [EXIT_REASON_BUS_LOCK]                = handle_bus_lock_vmexit,
        [EXIT_REASON_NOTIFY]                  = handle_notify,
+       [EXIT_REASON_MSR_READ_IMM]            = handle_rdmsr_imm,
+       [EXIT_REASON_MSR_WRITE_IMM]           = handle_wrmsr_imm,
 };
 
 static const int kvm_vmx_max_exit_handlers =
@@ -6495,6 +6514,8 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 #ifdef CONFIG_MITIGATION_RETPOLINE
        if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
                return kvm_emulate_wrmsr(vcpu);
+       else if (exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
+               return handle_wrmsr_imm(vcpu);
        else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
                return handle_preemption_timer(vcpu);
        else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)


* Re: [PATCH v1 4/4] KVM: x86: Advertise support for the immediate form of MSR instructions
  2025-07-30 17:46 ` [PATCH v1 4/4] KVM: x86: Advertise support for " Xin Li (Intel)
@ 2025-08-01 14:39   ` Sean Christopherson
  2025-08-01 16:11     ` Xin Li
  0 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-08-01 14:39 UTC (permalink / raw)
  To: Xin Li (Intel)
  Cc: linux-kernel, kvm, pbonzini, tglx, mingo, bp, dave.hansen, x86,
	hpa, chao.gao

On Wed, Jul 30, 2025, Xin Li (Intel) wrote:
> Advertise support for the immediate form of MSR instructions to userspace
> if the instructions are supported by the underlying CPU.

SVM needs to explicitly clear the capability so that KVM doesn't over-advertise
support if AMD ever implements X86_FEATURE_MSR_IMM.

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ca550c4fa174..7e7821ee8ee1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5311,8 +5311,12 @@ static __init void svm_set_cpu_caps(void)
        /* CPUID 0x8000001F (SME/SEV features) */
        sev_set_cpu_caps();
 
-       /* Don't advertise Bus Lock Detect to guest if SVM support is absent */
+       /*
+        * Clear capabilities that are automatically configured by common code,
+        * but that require explicit SVM support (that isn't yet implemented).
+        */
        kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
+       kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
 }
 
 static __init int svm_hardware_setup(void)


* Re: [PATCH v1 4/4] KVM: x86: Advertise support for the immediate form of MSR instructions
  2025-08-01 14:39   ` Sean Christopherson
@ 2025-08-01 16:11     ` Xin Li
  0 siblings, 0 replies; 17+ messages in thread
From: Xin Li @ 2025-08-01 16:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: linux-kernel, kvm, pbonzini, tglx, mingo, bp, dave.hansen, x86,
	hpa, chao.gao

On 8/1/2025 7:39 AM, Sean Christopherson wrote:
> On Wed, Jul 30, 2025, Xin Li (Intel) wrote:
>> Advertise support for the immediate form of MSR instructions to userspace
>> if the instructions are supported by the underlying CPU.
> 
> SVM needs to explicitly clear the capability so that KVM doesn't over-advertise
> support if AMD ever implements X86_FEATURE_MSR_IMM.
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index ca550c4fa174..7e7821ee8ee1 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -5311,8 +5311,12 @@ static __init void svm_set_cpu_caps(void)
>          /* CPUID 0x8000001F (SME/SEV features) */
>          sev_set_cpu_caps();
>   
> -       /* Don't advertise Bus Lock Detect to guest if SVM support is absent */
> +       /*
> +        * Clear capabilities that are automatically configured by common code,
> +        * but that require explicit SVM support (that isn't yet implemented).
> +        */
>          kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
> +       kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
>   }
>   
>   static __init int svm_hardware_setup(void)
> 

Nice catch!

Yes, a feature needing explicit enabling work can't be blindly
advertised until support is ready on every sub-arch.  I.e., I need to
disable it on non-Intel CPUs because the enabling is only done for Intel.


* Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers
  2025-08-01 14:37   ` Sean Christopherson
@ 2025-08-01 16:27     ` Xin Li
  0 siblings, 0 replies; 17+ messages in thread
From: Xin Li @ 2025-08-01 16:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: linux-kernel, kvm, pbonzini, tglx, mingo, bp, dave.hansen, x86,
	hpa, chao.gao

On 8/1/2025 7:37 AM, Sean Christopherson wrote:
> On Wed, Jul 30, 2025, Xin Li (Intel) wrote:
>> Add helper functions to centralize guest MSR read and write emulation.
>> This change consolidates the MSR emulation logic and makes it easier
>> to extend support for new MSR-related VM exit reasons introduced with
>> the immediate form of MSR instructions.
>>
>> Signed-off-by: Xin Li (Intel) <xin@zytor.com>
>> ---
>>   arch/x86/include/asm/kvm_host.h |  1 +
>>   arch/x86/kvm/x86.c              | 67 +++++++++++++++++++++++----------
>>   2 files changed, 49 insertions(+), 19 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index f19a76d3ca0e..a854d9a166fe 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -201,6 +201,7 @@ enum kvm_reg {
>>   	VCPU_EXREG_SEGMENTS,
>>   	VCPU_EXREG_EXIT_INFO_1,
>>   	VCPU_EXREG_EXIT_INFO_2,
>> +	VCPU_EXREG_EDX_EAX,
> 
> I really, really don't want to add a "reg" for this.  It's not an actual register,
> and bleeds details of one specific flow throughout KVM.

Sure.

> 
> The only path where KVM _needs_ to differentiate between the "legacy" instructions
> and the immediate variants instruction is in the inner RDMSR helper.
> 
> For the WRMSR helper, KVM can and should simply pass in @data, not pass in a reg
> and then have the helper do an if-else on the reg:

My initial patch passed @data in the WRMSR path, but I changed it to
@reg to make it consistent with the handling of RDMSR.

Yes, passing @data makes more sense because it hides unnecessary details.

> 
>    int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
>    {
>    	return __kvm_emulate_wrmsr(vcpu, kvm_rcx_read(vcpu),
>    				   kvm_read_edx_eax(vcpu));
>    }
>    EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
>    
>    int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
>    {
>    	return __kvm_emulate_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
>    }
>    EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr_imm);
> 
> And for the RDMSR userspace completion, KVM is already eating an indirect function
> call, so the wrappers can simply pass in the appropriate completion helper.  It
> does mean having to duplicate the vcpu->run->msr.error check, but we'd have to
> duplicate the "r == VCPU_EXREG_EDX_EAX" by sharing a callback, *and* we'd also
> need to be very careful about setting the effective register in the other existing
> flows that utilize complete_fast_rdmsr.
> 
> Then to communicate that the legacy form with implicit destination operands is
> being emulated, pass -1 for the register.  It's not the prettiest, but I do like
> using "reg invalid" to communicate that the destination is implicit.
> 
>    static int __kvm_emulate_rdmsr(struct kvm_vcpu *vcpu, u32 msr, int reg,
>    			       int (*complete_rdmsr)(struct kvm_vcpu *))

Yeah, it is a clean way to pass a userspace completion callback.

>    {
>    	u64 data;
>    	int r;
>    
>    	r = kvm_get_msr_with_filter(vcpu, msr, &data);
>    	if (!r) {
>    		trace_kvm_msr_read(msr, data);
>    
>    		if (reg < 0) {
>    			kvm_rax_write(vcpu, data & -1u);
>    			kvm_rdx_write(vcpu, (data >> 32) & -1u);
>    		} else {
>    			kvm_register_write(vcpu, reg, data);
>    		}
>    	} else {
>    		/* MSR read failed? See if we should ask user space */
>    		if (kvm_msr_user_space(vcpu, msr, KVM_EXIT_X86_RDMSR, 0,
>    				       complete_rdmsr, r))
>    			return 0;
>    		trace_kvm_msr_read_ex(msr);
>    	}
>    
>    	return kvm_x86_call(complete_emulated_msr)(vcpu, r);
>    }
>    
>    int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
>    {
>    	return __kvm_emulate_rdmsr(vcpu, kvm_rcx_read(vcpu), -1,
>    				   complete_fast_rdmsr);
>    }
>    EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
>    
>    int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
>    {
>    	vcpu->arch.cui_rdmsr_imm_reg = reg;
>    
>    	return __kvm_emulate_rdmsr(vcpu, msr, reg, complete_fast_rdmsr_imm);
>    }
>    EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr_imm);
> 
>>   };
>>   
>>   enum {
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index a1c49bc681c4..5086c3b30345 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -2024,54 +2024,71 @@ static int kvm_msr_user_space(struct kvm_vcpu *vcpu, u32 index,
>>   	return 1;
>>   }
>>   
>> -int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
>> +static int kvm_emulate_get_msr(struct kvm_vcpu *vcpu, u32 msr, int reg)
> 
> Please keep "rdmsr" and "wrmsr" when dealing emulation of those instructions to
> help differentiate from the many other MSR get/set paths.  (ignore the actual
> emulator hooks; that code is crusty, but not worth the churn to clean up).

Once the rules are laid out, it's easy to act :)

> 
>> @@ -2163,9 +2180,8 @@ static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
>>   	return 0;
>>   }
>>   
>> -fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
>> +static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
> 
> I think it makes sense to (a) add the x86.c code and the vmx.c code in the same
> patch, and then (b) add fastpath support in a separate patch to make the initial
> (combined x86.c + vmx.c) patch easier to review.  Adding the x86.c plumbing/logic
> before the VMX support makes the x86.c change difficult to review, as there are
> no users of the new paths, and the VMX changes are quite tiny.  Ignoring the arch
> boilerplate, the VMX changes barely add anything relative to the x86.c changes.

Will do.

> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ae2c8c10e5d2..757e4bb89f36 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6003,6 +6003,23 @@ static int handle_notify(struct kvm_vcpu *vcpu)
>          return 1;
>   }
>   
> +static int vmx_get_msr_imm_reg(struct kvm_vcpu *vcpu)
> +{
> +       return vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO))
> +}
> +
> +static int handle_rdmsr_imm(struct kvm_vcpu *vcpu)
> +{
> +       return kvm_emulate_rdmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
> +                                    vmx_get_msr_imm_reg(vcpu));
> +}
> +
> +static int handle_wrmsr_imm(struct kvm_vcpu *vcpu)
> +{
> +       return kvm_emulate_wrmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
> +                                    vmx_get_msr_imm_reg(vcpu));
> +}
> +
>   /*
>    * The exit handlers return 1 if the exit was handled fully and guest execution
>    * may resume.  Otherwise they set the kvm_run parameter to indicate what needs
> @@ -6061,6 +6078,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>          [EXIT_REASON_ENCLS]                   = handle_encls,
>          [EXIT_REASON_BUS_LOCK]                = handle_bus_lock_vmexit,
>          [EXIT_REASON_NOTIFY]                  = handle_notify,
> +       [EXIT_REASON_MSR_READ_IMM]            = handle_rdmsr_imm,
> +       [EXIT_REASON_MSR_WRITE_IMM]           = handle_wrmsr_imm,
>   };
>   
>   static const int kvm_vmx_max_exit_handlers =
> @@ -6495,6 +6514,8 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
>   #ifdef CONFIG_MITIGATION_RETPOLINE
>          if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
>                  return kvm_emulate_wrmsr(vcpu);
> +       else if (exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
> +               return handle_wrmsr_imm(vcpu);
>          else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
>                  return handle_preemption_timer(vcpu);
>          else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)
> 

Thanks!
     Xin

