public inbox for linux-hyperv@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions
@ 2026-02-18  8:21 Juergen Gross
  2026-02-18  8:21 ` [PATCH v3 05/16] x86/msr: Minimize usage of native_*() msr access functions Juergen Gross
  2026-02-18 20:37 ` [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions H. Peter Anvin
  0 siblings, 2 replies; 4+ messages in thread
From: Juergen Gross @ 2026-02-18  8:21 UTC (permalink / raw)
  To: linux-kernel, x86, linux-coco, kvm, linux-hyperv, virtualization,
	llvm
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Kiryl Shutsemau, Rick Edgecombe,
	Sean Christopherson, Paolo Bonzini, K. Y. Srinivasan,
	Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li, Vitaly Kuznetsov,
	Boris Ostrovsky, xen-devel, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Andy Lutomirski,
	Peter Zijlstra, Xin Li, Nathan Chancellor, Nick Desaulniers,
	Bill Wendling, Justin Stitt, Josh Poimboeuf

When building a kernel with CONFIG_PARAVIRT_XXL the paravirt
infrastructure will always use functions for reading or writing MSRs,
even when running on bare metal.

Switch to inline RDMSR/WRMSR instructions in this case, reducing the
paravirt overhead.

The first patch is a prerequisite fix for alternative patching. It is
needed because, with the following patches, the initial indirect call
needs to be padded with NOPs in some cases.

In order to make this less intrusive, some further reorganization of
the MSR access helpers is done in the patches 1-6.

The next 4 patches are converting the non-paravirt case to use direct
inlining of the MSR access instructions, including the WRMSRNS
instruction and the immediate variants of RDMSR and WRMSR if possible.

Patches 11-13 are some further preparations for making the real switch
to directly patch in the native MSR instructions easier.

Patch 14 is switching the paravirt MSR function interface from normal
call ABI to one more similar to the native MSR instructions.

Patch 15 is a little cleanup patch.

Patch 16 is the final step for patching in the native MSR instructions
when not running as a Xen PV guest.

This series has been tested to work with Xen PV and on bare metal.

Note that there is more room for improvement. This series is sent out
to get a first impression of how the code will basically look.

Right now the same problem is solved differently for the paravirt and
the non-paravirt cases. In case this is not desired, there are two
possibilities to merge the two implementations. Both solutions have
the common idea to have rather similar code for paravirt and
non-paravirt variants, but just use a different main macro for
generating the respective code. For making the code of both possible
scenarios more similar, the following variants are possible:

1. Remove the micro-optimizations of the non-paravirt case, making
   it similar to the paravirt code in my series. This has the
   advantage of being simpler, but might have a very small
   negative performance impact (probably not really detectable).

2. Add the same micro-optimizations to the paravirt case, requiring
   paravirt patching to be enhanced to support a to-be-patched
   indirect call in the middle of the initial code snippet.

In both cases the native MSR function variants would no longer be
usable in the paravirt case, but this would mostly affect Xen, as it
would need to open code the WRMSR/RDMSR instructions to be used
instead of the native_*msr*() functions.

Changes since V2:
- switch back to the paravirt approach

Changes since V1:
- Use Xin Li's approach for inlining
- Several new patches

Juergen Gross (16):
  x86/alternative: Support alt_replace_call() with instructions after
    call
  coco/tdx: Rename MSR access helpers
  x86/sev: Replace call of native_wrmsr() with native_wrmsrq()
  KVM: x86: Remove the KVM private read_msr() function
  x86/msr: Minimize usage of native_*() msr access functions
  x86/msr: Move MSR trace calls one function level up
  x86/opcode: Add immediate form MSR instructions
  x86/extable: Add support for immediate form MSR instructions
  x86/msr: Use the alternatives mechanism for WRMSR
  x86/msr: Use the alternatives mechanism for RDMSR
  x86/alternatives: Add ALTERNATIVE_4()
  x86/paravirt: Split off MSR related hooks into new header
  x86/paravirt: Prepare support of MSR instruction interfaces
  x86/paravirt: Switch MSR access pv_ops functions to instruction
    interfaces
  x86/msr: Reduce number of low level MSR access helpers
  x86/paravirt: Use alternatives for MSR access with paravirt

 arch/x86/coco/sev/internal.h              |   7 +-
 arch/x86/coco/tdx/tdx.c                   |   8 +-
 arch/x86/hyperv/ivm.c                     |   2 +-
 arch/x86/include/asm/alternative.h        |   6 +
 arch/x86/include/asm/fred.h               |   2 +-
 arch/x86/include/asm/kvm_host.h           |  10 -
 arch/x86/include/asm/msr.h                | 345 ++++++++++++++++------
 arch/x86/include/asm/paravirt-msr.h       | 148 ++++++++++
 arch/x86/include/asm/paravirt.h           |  67 -----
 arch/x86/include/asm/paravirt_types.h     |  57 ++--
 arch/x86/include/asm/qspinlock_paravirt.h |   4 +-
 arch/x86/kernel/alternative.c             |   5 +-
 arch/x86/kernel/cpu/mshyperv.c            |   7 +-
 arch/x86/kernel/kvmclock.c                |   2 +-
 arch/x86/kernel/paravirt.c                |  42 ++-
 arch/x86/kvm/svm/svm.c                    |  16 +-
 arch/x86/kvm/vmx/tdx.c                    |   2 +-
 arch/x86/kvm/vmx/vmx.c                    |   8 +-
 arch/x86/lib/x86-opcode-map.txt           |   5 +-
 arch/x86/mm/extable.c                     |  35 ++-
 arch/x86/xen/enlighten_pv.c               |  52 +++-
 arch/x86/xen/pmu.c                        |   4 +-
 tools/arch/x86/lib/x86-opcode-map.txt     |   5 +-
 tools/objtool/check.c                     |   1 +
 24 files changed, 576 insertions(+), 264 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt-msr.h

-- 
2.53.0


^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH v3 05/16] x86/msr: Minimize usage of native_*() msr access functions
  2026-02-18  8:21 [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions Juergen Gross
@ 2026-02-18  8:21 ` Juergen Gross
  2026-02-18 20:37 ` [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions H. Peter Anvin
  1 sibling, 0 replies; 4+ messages in thread
From: Juergen Gross @ 2026-02-18  8:21 UTC (permalink / raw)
  To: linux-kernel, x86, linux-hyperv, kvm
  Cc: Juergen Gross, K. Y. Srinivasan, Haiyang Zhang, Wei Liu,
	Dexuan Cui, Long Li, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Paolo Bonzini,
	Vitaly Kuznetsov, Sean Christopherson, Boris Ostrovsky, xen-devel

In order to prepare for some MSR access function reorg work, switch
most users of native_{read|write}_msr[_safe]() to the more generic
rdmsr*()/wrmsr*() variants.

For now this will have some intermediate performance impact when
paravirtualization is configured and the system is running on bare
metal, but it is a prerequisite for the planned direct inlining of
the rdmsr/wrmsr instructions in this configuration.

The main reason for this switch is the planned move of the MSR trace
function invocation from the native_*() functions to the generic
rdmsr*()/wrmsr*() variants. Without this switch the users of the
native_*() functions would lose the related tracing entries.

Note that the Xen related MSR access functions will not be switched,
as these will be handled after the move of the trace hooks.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/hyperv/ivm.c          |  2 +-
 arch/x86/kernel/cpu/mshyperv.c |  7 +++++--
 arch/x86/kernel/kvmclock.c     |  2 +-
 arch/x86/kvm/svm/svm.c         | 16 ++++++++--------
 arch/x86/xen/pmu.c             |  4 ++--
 5 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 651771534cae..1b2222036a0b 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -327,7 +327,7 @@ int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip, unsigned int cpu)
 	asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector));
 	hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base);
 
-	vmsa->efer = native_read_msr(MSR_EFER);
+	rdmsrq(MSR_EFER, vmsa->efer);
 
 	vmsa->cr4 = native_read_cr4();
 	vmsa->cr3 = __native_read_cr3();
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 579fb2c64cfd..9bebb1a1ebee 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -111,9 +111,12 @@ void hv_para_set_sint_proxy(bool enable)
  */
 u64 hv_para_get_synic_register(unsigned int reg)
 {
+	u64 val;
+
 	if (WARN_ON(!ms_hyperv.paravisor_present || !hv_is_synic_msr(reg)))
 		return ~0ULL;
-	return native_read_msr(reg);
+	rdmsrq(reg, val);
+	return val;
 }
 
 /*
@@ -123,7 +126,7 @@ void hv_para_set_synic_register(unsigned int reg, u64 val)
 {
 	if (WARN_ON(!ms_hyperv.paravisor_present || !hv_is_synic_msr(reg)))
 		return;
-	native_write_msr(reg, val);
+	wrmsrq(reg, val);
 }
 
 u64 hv_get_msr(unsigned int reg)
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index b5991d53fc0e..1002bdd45c0f 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -197,7 +197,7 @@ static void kvm_setup_secondary_clock(void)
 void kvmclock_disable(void)
 {
 	if (msr_kvm_system_time)
-		native_write_msr(msr_kvm_system_time, 0);
+		wrmsrq(msr_kvm_system_time, 0);
 }
 
 static void __init kvmclock_init_mem(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8f8bc863e214..1c0e7cae9e49 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -389,12 +389,12 @@ static void svm_init_erratum_383(void)
 		return;
 
 	/* Use _safe variants to not break nested virtualization */
-	if (native_read_msr_safe(MSR_AMD64_DC_CFG, &val))
+	if (rdmsrq_safe(MSR_AMD64_DC_CFG, &val))
 		return;
 
 	val |= (1ULL << 47);
 
-	native_write_msr_safe(MSR_AMD64_DC_CFG, val);
+	wrmsrq_safe(MSR_AMD64_DC_CFG, val);
 
 	erratum_383_found = true;
 }
@@ -554,9 +554,9 @@ static int svm_enable_virtualization_cpu(void)
 		u64 len, status = 0;
 		int err;
 
-		err = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
+		err = rdmsrq_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
 		if (!err)
-			err = native_read_msr_safe(MSR_AMD64_OSVW_STATUS, &status);
+			err = rdmsrq_safe(MSR_AMD64_OSVW_STATUS, &status);
 
 		if (err)
 			osvw_status = osvw_len = 0;
@@ -2029,7 +2029,7 @@ static bool is_erratum_383(void)
 	if (!erratum_383_found)
 		return false;
 
-	if (native_read_msr_safe(MSR_IA32_MC0_STATUS, &value))
+	if (rdmsrq_safe(MSR_IA32_MC0_STATUS, &value))
 		return false;
 
 	/* Bit 62 may or may not be set for this mce */
@@ -2040,11 +2040,11 @@ static bool is_erratum_383(void)
 
 	/* Clear MCi_STATUS registers */
 	for (i = 0; i < 6; ++i)
-		native_write_msr_safe(MSR_IA32_MCx_STATUS(i), 0);
+		wrmsrq_safe(MSR_IA32_MCx_STATUS(i), 0);
 
-	if (!native_read_msr_safe(MSR_IA32_MCG_STATUS, &value)) {
+	if (!rdmsrq_safe(MSR_IA32_MCG_STATUS, &value)) {
 		value &= ~(1ULL << 2);
-		native_write_msr_safe(MSR_IA32_MCG_STATUS, value);
+		wrmsrq_safe(MSR_IA32_MCG_STATUS, value);
 	}
 
 	/* Flush tlb to evict multi-match entries */
diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index 8f89ce0b67e3..d49a3bdc448b 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -323,7 +323,7 @@ static u64 xen_amd_read_pmc(int counter)
 		u64 val;
 
 		msr = amd_counters_base + (counter * amd_msr_step);
-		native_read_msr_safe(msr, &val);
+		rdmsrq_safe(msr, &val);
 		return val;
 	}
 
@@ -349,7 +349,7 @@ static u64 xen_intel_read_pmc(int counter)
 		else
 			msr = MSR_IA32_PERFCTR0 + counter;
 
-		native_read_msr_safe(msr, &val);
+		rdmsrq_safe(msr, &val);
 		return val;
 	}
 
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions
  2026-02-18  8:21 [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions Juergen Gross
  2026-02-18  8:21 ` [PATCH v3 05/16] x86/msr: Minimize usage of native_*() msr access functions Juergen Gross
@ 2026-02-18 20:37 ` H. Peter Anvin
  2026-02-19  6:28   ` Jürgen Groß
  1 sibling, 1 reply; 4+ messages in thread
From: H. Peter Anvin @ 2026-02-18 20:37 UTC (permalink / raw)
  To: Juergen Gross, linux-kernel, x86, linux-coco, kvm, linux-hyperv,
	virtualization, llvm
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Kiryl Shutsemau, Rick Edgecombe, Sean Christopherson,
	Paolo Bonzini, K. Y. Srinivasan, Haiyang Zhang, Wei Liu,
	Dexuan Cui, Long Li, Vitaly Kuznetsov, Boris Ostrovsky, xen-devel,
	Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list,
	Andy Lutomirski, Peter Zijlstra, Xin Li, Nathan Chancellor,
	Nick Desaulniers, Bill Wendling, Justin Stitt, Josh Poimboeuf,
	andy.cooper

On February 18, 2026 12:21:17 AM PST, Juergen Gross <jgross@suse.com> wrote:
>When building a kernel with CONFIG_PARAVIRT_XXL the paravirt
>infrastructure will always use functions for reading or writing MSRs,
>even when running on bare metal.
>
>Switch to inline RDMSR/WRMSR instructions in this case, reducing the
>paravirt overhead.
>
>The first patch is a prerequisite fix for alternative patching. It is
>needed because, with the following patches, the initial indirect call
>needs to be padded with NOPs in some cases.
>
>In order to make this less intrusive, some further reorganization of
>the MSR access helpers is done in the patches 1-6.
>
>The next 4 patches are converting the non-paravirt case to use direct
>inlining of the MSR access instructions, including the WRMSRNS
>instruction and the immediate variants of RDMSR and WRMSR if possible.
>
>Patches 11-13 are some further preparations for making the real switch
>to directly patch in the native MSR instructions easier.
>
>Patch 14 is switching the paravirt MSR function interface from normal
>call ABI to one more similar to the native MSR instructions.
>
>Patch 15 is a little cleanup patch.
>
>Patch 16 is the final step for patching in the native MSR instructions
>when not running as a Xen PV guest.
>
>This series has been tested to work with Xen PV and on bare metal.
>
>Note that there is more room for improvement. This series is sent out
>to get a first impression of how the code will basically look.

Does that mean you are considering this patchset an RFC? If so, you should put that in the subject header. 

>Right now the same problem is solved differently for the paravirt and
>the non-paravirt cases. In case this is not desired, there are two
>possibilities to merge the two implementations. Both solutions have
>the common idea to have rather similar code for paravirt and
>non-paravirt variants, but just use a different main macro for
>generating the respective code. For making the code of both possible
>scenarios more similar, the following variants are possible:
>
>1. Remove the micro-optimizations of the non-paravirt case, making
>   it similar to the paravirt code in my series. This has the
>   advantage of being simpler, but might have a very small
>   negative performance impact (probably not really detectable).
>
>2. Add the same micro-optimizations to the paravirt case, requiring
>   paravirt patching to be enhanced to support a to-be-patched
>   indirect call in the middle of the initial code snippet.
>
>In both cases the native MSR function variants would no longer be
>usable in the paravirt case, but this would mostly affect Xen, as it
>would need to open code the WRMSR/RDMSR instructions to be used
>instead of the native_*msr*() functions.
>
>Changes since V2:
>- switch back to the paravirt approach
>
>Changes since V1:
>- Use Xin Li's approach for inlining
>- Several new patches
>
>Juergen Gross (16):
>  x86/alternative: Support alt_replace_call() with instructions after
>    call
>  coco/tdx: Rename MSR access helpers
>  x86/sev: Replace call of native_wrmsr() with native_wrmsrq()
>  KVM: x86: Remove the KVM private read_msr() function
>  x86/msr: Minimize usage of native_*() msr access functions
>  x86/msr: Move MSR trace calls one function level up
>  x86/opcode: Add immediate form MSR instructions
>  x86/extable: Add support for immediate form MSR instructions
>  x86/msr: Use the alternatives mechanism for WRMSR
>  x86/msr: Use the alternatives mechanism for RDMSR
>  x86/alternatives: Add ALTERNATIVE_4()
>  x86/paravirt: Split off MSR related hooks into new header
>  x86/paravirt: Prepare support of MSR instruction interfaces
>  x86/paravirt: Switch MSR access pv_ops functions to instruction
>    interfaces
>  x86/msr: Reduce number of low level MSR access helpers
>  x86/paravirt: Use alternatives for MSR access with paravirt
>
> arch/x86/coco/sev/internal.h              |   7 +-
> arch/x86/coco/tdx/tdx.c                   |   8 +-
> arch/x86/hyperv/ivm.c                     |   2 +-
> arch/x86/include/asm/alternative.h        |   6 +
> arch/x86/include/asm/fred.h               |   2 +-
> arch/x86/include/asm/kvm_host.h           |  10 -
> arch/x86/include/asm/msr.h                | 345 ++++++++++++++++------
> arch/x86/include/asm/paravirt-msr.h       | 148 ++++++++++
> arch/x86/include/asm/paravirt.h           |  67 -----
> arch/x86/include/asm/paravirt_types.h     |  57 ++--
> arch/x86/include/asm/qspinlock_paravirt.h |   4 +-
> arch/x86/kernel/alternative.c             |   5 +-
> arch/x86/kernel/cpu/mshyperv.c            |   7 +-
> arch/x86/kernel/kvmclock.c                |   2 +-
> arch/x86/kernel/paravirt.c                |  42 ++-
> arch/x86/kvm/svm/svm.c                    |  16 +-
> arch/x86/kvm/vmx/tdx.c                    |   2 +-
> arch/x86/kvm/vmx/vmx.c                    |   8 +-
> arch/x86/lib/x86-opcode-map.txt           |   5 +-
> arch/x86/mm/extable.c                     |  35 ++-
> arch/x86/xen/enlighten_pv.c               |  52 +++-
> arch/x86/xen/pmu.c                        |   4 +-
> tools/arch/x86/lib/x86-opcode-map.txt     |   5 +-
> tools/objtool/check.c                     |   1 +
> 24 files changed, 576 insertions(+), 264 deletions(-)
> create mode 100644 arch/x86/include/asm/paravirt-msr.h
>

Could you clarify *on the high design level* what "go back to the paravirt approach" means, and the motivation for that?

Note that for Xen *most* MSRs fall in one of two categories: those that are dropped entirely and those that are just passed straight on to the hardware.

I don't know if anyone cares about optimizing PV Xen anymore, but at least in theory Xen can un-paravirtualize most sites.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions
  2026-02-18 20:37 ` [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions H. Peter Anvin
@ 2026-02-19  6:28   ` Jürgen Groß
  0 siblings, 0 replies; 4+ messages in thread
From: Jürgen Groß @ 2026-02-19  6:28 UTC (permalink / raw)
  To: H. Peter Anvin, linux-kernel, x86, linux-coco, kvm, linux-hyperv,
	virtualization, llvm
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Kiryl Shutsemau, Rick Edgecombe, Sean Christopherson,
	Paolo Bonzini, K. Y. Srinivasan, Haiyang Zhang, Wei Liu,
	Dexuan Cui, Long Li, Vitaly Kuznetsov, Boris Ostrovsky, xen-devel,
	Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list,
	Andy Lutomirski, Peter Zijlstra, Xin Li, Nathan Chancellor,
	Nick Desaulniers, Bill Wendling, Justin Stitt, Josh Poimboeuf,
	andy.cooper


[-- Attachment #1.1.1: Type: text/plain, Size: 6721 bytes --]

On 18.02.26 21:37, H. Peter Anvin wrote:
> On February 18, 2026 12:21:17 AM PST, Juergen Gross <jgross@suse.com> wrote:
>> When building a kernel with CONFIG_PARAVIRT_XXL the paravirt
>> infrastructure will always use functions for reading or writing MSRs,
>> even when running on bare metal.
>>
>> Switch to inline RDMSR/WRMSR instructions in this case, reducing the
>> paravirt overhead.
>>
>> The first patch is a prerequisite fix for alternative patching. It is
>> needed because, with the following patches, the initial indirect call
>> needs to be padded with NOPs in some cases.
>>
>> In order to make this less intrusive, some further reorganization of
>> the MSR access helpers is done in the patches 1-6.
>>
>> The next 4 patches are converting the non-paravirt case to use direct
>> inlining of the MSR access instructions, including the WRMSRNS
>> instruction and the immediate variants of RDMSR and WRMSR if possible.
>>
>> Patches 11-13 are some further preparations for making the real switch
>> to directly patch in the native MSR instructions easier.
>>
>> Patch 14 is switching the paravirt MSR function interface from normal
>> call ABI to one more similar to the native MSR instructions.
>>
>> Patch 15 is a little cleanup patch.
>>
>> Patch 16 is the final step for patching in the native MSR instructions
>> when not running as a Xen PV guest.
>>
>> This series has been tested to work with Xen PV and on bare metal.
>>
>> Note that there is more room for improvement. This series is sent out
>> to get a first impression of how the code will basically look.
> 
> Does that mean you are considering this patchset an RFC? If so, you should put that in the subject header.

It is one possible solution.

> 
>> Right now the same problem is solved differently for the paravirt and
>> the non-paravirt cases. In case this is not desired, there are two
>> possibilities to merge the two implementations. Both solutions have
>> the common idea to have rather similar code for paravirt and
>> non-paravirt variants, but just use a different main macro for
>> generating the respective code. For making the code of both possible
>> scenarios more similar, the following variants are possible:
>>
>> 1. Remove the micro-optimizations of the non-paravirt case, making
>>    it similar to the paravirt code in my series. This has the
>>    advantage of being simpler, but might have a very small
>>    negative performance impact (probably not really detectable).
>>
>> 2. Add the same micro-optimizations to the paravirt case, requiring
>>    paravirt patching to be enhanced to support a to-be-patched
>>    indirect call in the middle of the initial code snippet.
>>
>> In both cases the native MSR function variants would no longer be
>> usable in the paravirt case, but this would mostly affect Xen, as it
>> would need to open code the WRMSR/RDMSR instructions to be used
>> instead of the native_*msr*() functions.
>>
>> Changes since V2:
>> - switch back to the paravirt approach
>>
>> Changes since V1:
>> - Use Xin Li's approach for inlining
>> - Several new patches
>>
>> Juergen Gross (16):
>>   x86/alternative: Support alt_replace_call() with instructions after
>>     call
>>   coco/tdx: Rename MSR access helpers
>>   x86/sev: Replace call of native_wrmsr() with native_wrmsrq()
>>   KVM: x86: Remove the KVM private read_msr() function
>>   x86/msr: Minimize usage of native_*() msr access functions
>>   x86/msr: Move MSR trace calls one function level up
>>   x86/opcode: Add immediate form MSR instructions
>>   x86/extable: Add support for immediate form MSR instructions
>>   x86/msr: Use the alternatives mechanism for WRMSR
>>   x86/msr: Use the alternatives mechanism for RDMSR
>>   x86/alternatives: Add ALTERNATIVE_4()
>>   x86/paravirt: Split off MSR related hooks into new header
>>   x86/paravirt: Prepare support of MSR instruction interfaces
>>   x86/paravirt: Switch MSR access pv_ops functions to instruction
>>     interfaces
>>   x86/msr: Reduce number of low level MSR access helpers
>>   x86/paravirt: Use alternatives for MSR access with paravirt
>>
>> arch/x86/coco/sev/internal.h              |   7 +-
>> arch/x86/coco/tdx/tdx.c                   |   8 +-
>> arch/x86/hyperv/ivm.c                     |   2 +-
>> arch/x86/include/asm/alternative.h        |   6 +
>> arch/x86/include/asm/fred.h               |   2 +-
>> arch/x86/include/asm/kvm_host.h           |  10 -
>> arch/x86/include/asm/msr.h                | 345 ++++++++++++++++------
>> arch/x86/include/asm/paravirt-msr.h       | 148 ++++++++++
>> arch/x86/include/asm/paravirt.h           |  67 -----
>> arch/x86/include/asm/paravirt_types.h     |  57 ++--
>> arch/x86/include/asm/qspinlock_paravirt.h |   4 +-
>> arch/x86/kernel/alternative.c             |   5 +-
>> arch/x86/kernel/cpu/mshyperv.c            |   7 +-
>> arch/x86/kernel/kvmclock.c                |   2 +-
>> arch/x86/kernel/paravirt.c                |  42 ++-
>> arch/x86/kvm/svm/svm.c                    |  16 +-
>> arch/x86/kvm/vmx/tdx.c                    |   2 +-
>> arch/x86/kvm/vmx/vmx.c                    |   8 +-
>> arch/x86/lib/x86-opcode-map.txt           |   5 +-
>> arch/x86/mm/extable.c                     |  35 ++-
>> arch/x86/xen/enlighten_pv.c               |  52 +++-
>> arch/x86/xen/pmu.c                        |   4 +-
>> tools/arch/x86/lib/x86-opcode-map.txt     |   5 +-
>> tools/objtool/check.c                     |   1 +
>> 24 files changed, 576 insertions(+), 264 deletions(-)
>> create mode 100644 arch/x86/include/asm/paravirt-msr.h
>>
> 
> Could you clarify *on the high design level* what "go back to the paravirt approach" means, and the motivation for that?

This is related to V2 of this series, where I used a static branch for
special-casing Xen PV.

Peter Zijlstra commented on that, asking me to try harder to use the
pv_ops hooks for Xen PV, too.

> Note that for Xen *most* MSRs fall in one of two categories: those that are dropped entirely and those that are just passed straight on to the hardware.
> 
> I don't know if anyone cares about optimizing PV Xen anymore, but at least in theory Xen can un-paravirtualize most sites.

The problem with that is that it would need to be taken care of at the
callers' sites, "poisoning" a lot of code with Xen specific paths. Or
we'd need to use the native variants explicitly at all places where
Xen PV would just use the MSR instructions itself. But please be aware
that there are plans to introduce a hypercall for Xen to speed up MSR
accesses, which would reduce the "passed through to hardware" cases to
zero.


Juergen

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 3743 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2026-02-19  6:28 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-18  8:21 [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions Juergen Gross
2026-02-18  8:21 ` [PATCH v3 05/16] x86/msr: Minimize usage of native_*() msr access functions Juergen Gross
2026-02-18 20:37 ` [PATCH v3 00/16] x86/msr: Inline rdmsr/wrmsr instructions H. Peter Anvin
2026-02-19  6:28   ` Jürgen Groß

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox