* [PATCH v3 00/21] paravirt: cleanup and reorg
@ 2025-10-06 7:45 Juergen Gross
2025-10-06 7:45 ` [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations Juergen Gross
` (17 more replies)
0 siblings, 18 replies; 24+ messages in thread
From: Juergen Gross @ 2025-10-06 7:45 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, loongarch,
linuxppc-dev, linux-riscv, kvm
Cc: Juergen Gross, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, K. Y. Srinivasan,
Haiyang Zhang, Wei Liu, Dexuan Cui, Peter Zijlstra, Will Deacon,
Boqun Feng, Waiman Long, Jiri Kosina, Josh Poimboeuf, Pawan Gupta,
Boris Ostrovsky, xen-devel, Ajay Kaher, Alexey Makhalov,
Broadcom internal kernel review list, Russell King,
Catalin Marinas, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, linux-arm-kernel,
Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini,
Oleksandr Tyshchenko, Daniel Lezcano, Oleg Nesterov
Some cleanups and reorg of paravirt code and headers:
- The first 2 patches should not be controversial at all, as they
  just remove some no longer needed #includes and struct forward
  declarations.
- The 3rd patch removes CONFIG_PARAVIRT_DEBUG, which IMO has no
  real value, as it just changes a crash into a BUG() (the stack
  trace will basically be the same). As the maintainer of the main
  paravirt user (Xen) I have never seen this crash/BUG() happen.
- The 4th patch is just code movement.
- I don't know why asm/paravirt_api_clock.h was added, as all archs
  supporting it implement it in exactly the same way. Patch 5
  removes it.
- Patches 6-14 are streamlining the paravirt clock interfaces by
using a common implementation across architectures where possible
and by moving the related code into common sched code, as this is
where it should live.
- Patches 15-20 are more like RFC material, preparing the paravirt
  infrastructure to support multiple pv_ops function arrays.
  As a prerequisite, dropping the Xen static initializers of the
  pv_ops sub-structures makes life in objtool much easier; this is
  done in patches 15-17.
  Patches 18-20 do the real preparations for multiple pv_ops arrays
  and use those arrays in multiple headers.
- Patch 21 is an example of how the new scheme can look, using the
  PV-spinlocks.
Changes in V2:
- new patches 13-18 and 20
- complete rework of patch 21
Changes in V3:
- fixed 2 issues detected by kernel test robot
Juergen Gross (21):
x86/paravirt: Remove not needed includes of paravirt.h
x86/paravirt: Remove some unneeded struct declarations
x86/paravirt: Remove PARAVIRT_DEBUG config option
x86/paravirt: Move thunk macros to paravirt_types.h
paravirt: Remove asm/paravirt_api_clock.h
sched: Move clock related paravirt code to kernel/sched
arm/paravirt: Use common code for paravirt_steal_clock()
arm64/paravirt: Use common code for paravirt_steal_clock()
loongarch/paravirt: Use common code for paravirt_steal_clock()
riscv/paravirt: Use common code for paravirt_steal_clock()
x86/paravirt: Use common code for paravirt_steal_clock()
x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
x86/paravirt: Introduce new paravirt-base.h header
x86/paravirt: Move pv_native_*() prototypes to paravirt.c
x86/xen: Drop xen_irq_ops
x86/xen: Drop xen_cpu_ops
x86/xen: Drop xen_mmu_ops
objtool: Allow multiple pv_ops arrays
x86/paravirt: Allow pv-calls outside paravirt.h
x86/paravirt: Specify pv_ops array in paravirt macros
x86/pvlocks: Move paravirt spinlock functions into own header
arch/Kconfig | 3 +
arch/arm/Kconfig | 1 +
arch/arm/include/asm/paravirt.h | 22 --
arch/arm/include/asm/paravirt_api_clock.h | 1 -
arch/arm/kernel/Makefile | 1 -
arch/arm/kernel/paravirt.c | 23 --
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/paravirt.h | 14 -
arch/arm64/include/asm/paravirt_api_clock.h | 1 -
arch/arm64/kernel/paravirt.c | 11 +-
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/paravirt.h | 13 -
.../include/asm/paravirt_api_clock.h | 1 -
arch/loongarch/kernel/paravirt.c | 10 +-
arch/powerpc/include/asm/paravirt.h | 3 -
arch/powerpc/include/asm/paravirt_api_clock.h | 2 -
arch/powerpc/platforms/pseries/setup.c | 4 +-
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/paravirt.h | 14 -
arch/riscv/include/asm/paravirt_api_clock.h | 1 -
arch/riscv/kernel/paravirt.c | 11 +-
arch/x86/Kconfig | 8 +-
arch/x86/entry/entry_64.S | 1 -
arch/x86/entry/vsyscall/vsyscall_64.c | 1 -
arch/x86/hyperv/hv_spinlock.c | 11 +-
arch/x86/include/asm/apic.h | 4 -
arch/x86/include/asm/highmem.h | 1 -
arch/x86/include/asm/mshyperv.h | 1 -
arch/x86/include/asm/paravirt-base.h | 29 ++
arch/x86/include/asm/paravirt-spinlock.h | 146 ++++++++
arch/x86/include/asm/paravirt.h | 331 +++++-------------
arch/x86/include/asm/paravirt_api_clock.h | 1 -
arch/x86/include/asm/paravirt_types.h | 269 +++++++-------
arch/x86/include/asm/pgtable_32.h | 1 -
arch/x86/include/asm/ptrace.h | 2 +-
arch/x86/include/asm/qspinlock.h | 89 +----
arch/x86/include/asm/spinlock.h | 1 -
arch/x86/include/asm/timer.h | 1 +
arch/x86/include/asm/tlbflush.h | 4 -
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/apm_32.c | 1 -
arch/x86/kernel/callthunks.c | 1 -
arch/x86/kernel/cpu/bugs.c | 1 -
arch/x86/kernel/cpu/vmware.c | 1 +
arch/x86/kernel/kvm.c | 11 +-
arch/x86/kernel/kvmclock.c | 1 +
arch/x86/kernel/paravirt-spinlocks.c | 26 +-
arch/x86/kernel/paravirt.c | 42 +--
arch/x86/kernel/tsc.c | 10 +-
arch/x86/kernel/vsmp_64.c | 1 -
arch/x86/kernel/x86_init.c | 1 -
arch/x86/lib/cache-smp.c | 1 -
arch/x86/mm/init.c | 1 -
arch/x86/xen/enlighten_pv.c | 82 ++---
arch/x86/xen/irq.c | 20 +-
arch/x86/xen/mmu_pv.c | 100 ++----
arch/x86/xen/spinlock.c | 11 +-
arch/x86/xen/time.c | 2 +
drivers/clocksource/hyperv_timer.c | 2 +
drivers/xen/time.c | 2 +-
include/linux/sched/cputime.h | 18 +
kernel/sched/core.c | 5 +
kernel/sched/cputime.c | 13 +
kernel/sched/sched.h | 3 +-
tools/objtool/arch/x86/decode.c | 8 +-
tools/objtool/check.c | 78 ++++-
tools/objtool/include/objtool/check.h | 2 +
67 files changed, 659 insertions(+), 827 deletions(-)
delete mode 100644 arch/arm/include/asm/paravirt.h
delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
delete mode 100644 arch/arm/kernel/paravirt.c
delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
create mode 100644 arch/x86/include/asm/paravirt-base.h
create mode 100644 arch/x86/include/asm/paravirt-spinlock.h
delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h
--
2.51.0
^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
@ 2025-10-06  7:45 ` Juergen Gross
  2025-10-23 12:14   ` Borislav Petkov
  2025-10-06  7:45 ` [PATCH v3 03/21] x86/paravirt: Remove PARAVIRT_DEBUG config option Juergen Gross
  ` (16 subsequent siblings)
  17 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:45 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization
  Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Peter Zijlstra (Intel)

In paravirt_types.h iand paravirt.h there are some struct declarations
which are not needed. Remove them.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
V2:
- remove mm_struct from paravirt.h, too
---
 arch/x86/include/asm/paravirt.h       | 4 ----
 arch/x86/include/asm/paravirt_types.h | 6 ------
 2 files changed, 10 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b5e59a7ba0d0..612b3df65b1b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -6,10 +6,6 @@

 #include <asm/paravirt_types.h>

-#ifndef __ASSEMBLER__
-struct mm_struct;
-#endif
-
 #ifdef CONFIG_PARAVIRT
 #include <asm/pgtable_types.h>
 #include <asm/asm.h>
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 37a8627d8277..84cc8c95713b 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -11,16 +11,11 @@
 #include <asm/pgtable_types.h>
 #include <asm/nospec-branch.h>

-struct page;
 struct thread_struct;
-struct desc_ptr;
-struct tss_struct;
 struct mm_struct;
-struct desc_struct;
 struct task_struct;
 struct cpumask;
 struct flush_tlb_info;
-struct mmu_gather;
 struct vm_area_struct;

 /*
@@ -205,7 +200,6 @@ struct pv_mmu_ops {
 #endif
 } __no_randomize_layout;

-struct arch_spinlock;
 #ifdef CONFIG_SMP
 #include <asm/spinlock_types.h>
 #endif
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations
  2025-10-06  7:45 ` [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations Juergen Gross
@ 2025-10-23 12:14   ` Borislav Petkov
  0 siblings, 0 replies; 24+ messages in thread
From: Borislav Petkov @ 2025-10-23 12:14 UTC (permalink / raw)
  To: Juergen Gross
  Cc: linux-kernel, x86, virtualization, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin, Peter Zijlstra (Intel)

On Mon, Oct 06, 2025 at 09:45:47AM +0200, Juergen Gross wrote:
> In paravirt_types.h iand paravirt.h there are some struct declarations
                      ^^^^^

Spellchecker pls.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 24+ messages in thread
* [PATCH v3 03/21] x86/paravirt: Remove PARAVIRT_DEBUG config option
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
  2025-10-06  7:45 ` [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations Juergen Gross
@ 2025-10-06  7:45 ` Juergen Gross
  2025-10-06  7:45 ` [PATCH v3 04/21] x86/paravirt: Move thunk macros to paravirt_types.h Juergen Gross
  ` (15 subsequent siblings)
  17 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:45 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Peter Zijlstra (Intel)

The only effect of CONFIG_PARAVIRT_DEBUG set is that instead of doing
a call using a NULL pointer a BUG() is being raised. While the BUG()
will be a little bit easier to analyse, the call of NULL isn't really
that difficult to find the reason for.

Remove the config option to make paravirt coding a little bit less
annoying.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/Kconfig                      | 7 -------
 arch/x86/include/asm/paravirt.h       | 1 -
 arch/x86/include/asm/paravirt_types.h | 8 --------
 3 files changed, 16 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9d034a987c6e..451c3adffacb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -809,13 +809,6 @@ config PARAVIRT_XXL
 	bool
 	depends on X86_64

-config PARAVIRT_DEBUG
-	bool "paravirt-ops debugging"
-	depends on PARAVIRT && DEBUG_KERNEL
-	help
-	  Enable to debug paravirt_ops internals.  Specifically, BUG if
-	  a paravirt_op is missing when it is called.
-
 config PARAVIRT_SPINLOCKS
 	bool "Paravirtualization layer for spinlocks"
 	depends on PARAVIRT && SMP
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 612b3df65b1b..fd9826397419 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -12,7 +12,6 @@
 #include <asm/nospec-branch.h>

 #ifndef __ASSEMBLER__
-#include <linux/bug.h>
 #include <linux/types.h>
 #include <linux/cpumask.h>
 #include <linux/static_call_types.h>
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 84cc8c95713b..085095f94f97 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -354,12 +354,6 @@ extern struct paravirt_patch_template pv_ops;
 #define VEXTRA_CLOBBERS	, "rax", "r8", "r9", "r10", "r11"
 #endif	/* CONFIG_X86_32 */

-#ifdef CONFIG_PARAVIRT_DEBUG
-#define PVOP_TEST_NULL(op)	BUG_ON(pv_ops.op == NULL)
-#else
-#define PVOP_TEST_NULL(op)	((void)pv_ops.op)
-#endif
-
 #define PVOP_RETVAL(rettype)						\
 	({	unsigned long __mask = ~0UL;				\
 		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
@@ -388,7 +382,6 @@ extern struct paravirt_patch_template pv_ops;
 #define ____PVOP_CALL(ret, op, call_clbr, extra_clbr, ...)	\
 	({								\
 		PVOP_CALL_ARGS;						\
-		PVOP_TEST_NULL(op);					\
 		asm volatile(ALTERNATIVE(PARAVIRT_CALL, ALT_CALL_INSTR,	\
 			     ALT_CALL_ALWAYS)				\
 			     : call_clbr, ASM_CALL_CONSTRAINT		\
@@ -402,7 +395,6 @@ extern struct paravirt_patch_template pv_ops;
 			    extra_clbr, ...)				\
 	({								\
 		PVOP_CALL_ARGS;						\
-		PVOP_TEST_NULL(op);					\
 		asm volatile(ALTERNATIVE_2(PARAVIRT_CALL,		\
 			     ALT_CALL_INSTR, ALT_CALL_ALWAYS,		\
 			     alt, cond)					\
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [PATCH v3 04/21] x86/paravirt: Move thunk macros to paravirt_types.h
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
  2025-10-06  7:45 ` [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations Juergen Gross
  2025-10-06  7:45 ` [PATCH v3 03/21] x86/paravirt: Remove PARAVIRT_DEBUG config option Juergen Gross
@ 2025-10-06  7:45 ` Juergen Gross
  2025-10-06  7:45 ` [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h Juergen Gross
  ` (14 subsequent siblings)
  17 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:45 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization
  Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Peter Zijlstra (Intel)

The macros for generating PV-thunks are part of the generic paravirt
infrastructure, so they should be in paravirt_types.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/paravirt.h       | 68 ---------------------------
 arch/x86/include/asm/paravirt_types.h | 68 +++++++++++++++++++++++++++
 2 files changed, 68 insertions(+), 68 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index fd9826397419..1344d2fb2b86 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -581,74 +581,6 @@
 bool __raw_callee_save___native_vcpu_is_preempted(long cpu);

 #endif /* SMP && PARAVIRT_SPINLOCKS */
-#ifdef CONFIG_X86_32
-/* save and restore all caller-save registers, except return value */
-#define PV_SAVE_ALL_CALLER_REGS		"pushl %ecx;"
-#define PV_RESTORE_ALL_CALLER_REGS	"popl %ecx;"
-#else
-/* save and restore all caller-save registers, except return value */
-#define PV_SAVE_ALL_CALLER_REGS						\
-	"push %rcx;"							\
-	"push %rdx;"							\
-	"push %rsi;"							\
-	"push %rdi;"							\
-	"push %r8;"							\
-	"push %r9;"							\
-	"push %r10;"							\
-	"push %r11;"
-#define PV_RESTORE_ALL_CALLER_REGS					\
-	"pop %r11;"							\
-	"pop %r10;"							\
-	"pop %r9;"							\
-	"pop %r8;"							\
-	"pop %rdi;"							\
-	"pop %rsi;"							\
-	"pop %rdx;"							\
-	"pop %rcx;"
-#endif
-
-/*
- * Generate a thunk around a function which saves all caller-save
- * registers except for the return value. This allows C functions to
- * be called from assembler code where fewer than normal registers are
- * available. It may also help code generation around calls from C
- * code if the common case doesn't use many registers.
- *
- * When a callee is wrapped in a thunk, the caller can assume that all
- * arg regs and all scratch registers are preserved across the
- * call. The return value in rax/eax will not be saved, even for void
- * functions.
- */
-#define PV_THUNK_NAME(func) "__raw_callee_save_" #func
-#define __PV_CALLEE_SAVE_REGS_THUNK(func, section)			\
-	extern typeof(func) __raw_callee_save_##func;			\
-									\
-	asm(".pushsection " section ", \"ax\";"				\
-	    ".globl " PV_THUNK_NAME(func) ";"				\
-	    ".type " PV_THUNK_NAME(func) ", @function;"			\
-	    ASM_FUNC_ALIGN						\
-	    PV_THUNK_NAME(func) ":"					\
-	    ASM_ENDBR							\
-	    FRAME_BEGIN							\
-	    PV_SAVE_ALL_CALLER_REGS					\
-	    "call " #func ";"						\
-	    PV_RESTORE_ALL_CALLER_REGS					\
-	    FRAME_END							\
-	    ASM_RET							\
-	    ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";"	\
-	    ".popsection")
-
-#define PV_CALLEE_SAVE_REGS_THUNK(func)					\
-	__PV_CALLEE_SAVE_REGS_THUNK(func, ".text")
-
-/* Get a reference to a callee-save function */
-#define PV_CALLEE_SAVE(func)						\
-	((struct paravirt_callee_save) { __raw_callee_save_##func })
-
-/* Promise that "func" already uses the right calling convention */
-#define __PV_IS_CALLEE_SAVE(func)					\
-	((struct paravirt_callee_save) { func })
-
 #ifdef CONFIG_PARAVIRT_XXL
 static __always_inline unsigned long arch_local_save_flags(void)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 085095f94f97..7acff40cc159 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -512,5 +512,73 @@ unsigned long pv_native_read_cr2(void);

 #define ALT_NOT_XEN	ALT_NOT(X86_FEATURE_XENPV)

+#ifdef CONFIG_X86_32
+/* save and restore all caller-save registers, except return value */
+#define PV_SAVE_ALL_CALLER_REGS		"pushl %ecx;"
+#define PV_RESTORE_ALL_CALLER_REGS	"popl %ecx;"
+#else
+/* save and restore all caller-save registers, except return value */
+#define PV_SAVE_ALL_CALLER_REGS						\
+	"push %rcx;"							\
+	"push %rdx;"							\
+	"push %rsi;"							\
+	"push %rdi;"							\
+	"push %r8;"							\
+	"push %r9;"							\
+	"push %r10;"							\
+	"push %r11;"
+#define PV_RESTORE_ALL_CALLER_REGS					\
+	"pop %r11;"							\
+	"pop %r10;"							\
+	"pop %r9;"							\
+	"pop %r8;"							\
+	"pop %rdi;"							\
+	"pop %rsi;"							\
+	"pop %rdx;"							\
+	"pop %rcx;"
+#endif
+
+/*
+ * Generate a thunk around a function which saves all caller-save
+ * registers except for the return value. This allows C functions to
+ * be called from assembler code where fewer than normal registers are
+ * available. It may also help code generation around calls from C
+ * code if the common case doesn't use many registers.
+ *
+ * When a callee is wrapped in a thunk, the caller can assume that all
+ * arg regs and all scratch registers are preserved across the
+ * call. The return value in rax/eax will not be saved, even for void
+ * functions.
+ */
+#define PV_THUNK_NAME(func) "__raw_callee_save_" #func
+#define __PV_CALLEE_SAVE_REGS_THUNK(func, section)			\
+	extern typeof(func) __raw_callee_save_##func;			\
+									\
+	asm(".pushsection " section ", \"ax\";"				\
+	    ".globl " PV_THUNK_NAME(func) ";"				\
+	    ".type " PV_THUNK_NAME(func) ", @function;"			\
+	    ASM_FUNC_ALIGN						\
+	    PV_THUNK_NAME(func) ":"					\
+	    ASM_ENDBR							\
+	    FRAME_BEGIN							\
+	    PV_SAVE_ALL_CALLER_REGS					\
+	    "call " #func ";"						\
+	    PV_RESTORE_ALL_CALLER_REGS					\
+	    FRAME_END							\
+	    ASM_RET							\
+	    ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";"	\
+	    ".popsection")
+
+#define PV_CALLEE_SAVE_REGS_THUNK(func)					\
+	__PV_CALLEE_SAVE_REGS_THUNK(func, ".text")
+
+/* Get a reference to a callee-save function */
+#define PV_CALLEE_SAVE(func)						\
+	((struct paravirt_callee_save) { __raw_callee_save_##func })
+
+/* Promise that "func" already uses the right calling convention */
+#define __PV_IS_CALLEE_SAVE(func)					\
+	((struct paravirt_callee_save) { func })
+
 #endif /* CONFIG_PARAVIRT */
 #endif	/* _ASM_X86_PARAVIRT_TYPES_H */
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
                   ` (2 preceding siblings ...)
  2025-10-06  7:45 ` [PATCH v3 04/21] x86/paravirt: Move thunk macros to paravirt_types.h Juergen Gross
@ 2025-10-06  7:45 ` Juergen Gross
  2025-10-15 16:02   ` Shrikanth Hegde
  2025-10-06  7:45 ` [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched Juergen Gross
  ` (13 subsequent siblings)
  17 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:45 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization, loongarch, linuxppc-dev,
	linux-riscv
  Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Russell King, Catalin Marinas,
	Will Deacon, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
	linux-arm-kernel

All architectures supporting CONFIG_PARAVIRT share the same contents
of asm/paravirt_api_clock.h:

  #include <asm/paravirt.h>

So remove all incarnations of asm/paravirt_api_clock.h and remove the
only place where it is included, as there asm/paravirt.h is included
anyway.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/include/asm/paravirt_api_clock.h       | 1 -
 arch/arm64/include/asm/paravirt_api_clock.h     | 1 -
 arch/loongarch/include/asm/paravirt_api_clock.h | 1 -
 arch/powerpc/include/asm/paravirt_api_clock.h   | 2 --
 arch/riscv/include/asm/paravirt_api_clock.h     | 1 -
 arch/x86/include/asm/paravirt_api_clock.h       | 1 -
 kernel/sched/sched.h                            | 1 -
 7 files changed, 8 deletions(-)
 delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h

diff --git a/arch/arm/include/asm/paravirt_api_clock.h b/arch/arm/include/asm/paravirt_api_clock.h
deleted file mode 100644
index 65ac7cee0dad..000000000000
--- a/arch/arm/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm/paravirt.h>
diff --git a/arch/arm64/include/asm/paravirt_api_clock.h b/arch/arm64/include/asm/paravirt_api_clock.h
deleted file mode 100644
index 65ac7cee0dad..000000000000
--- a/arch/arm64/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm/paravirt.h>
diff --git a/arch/loongarch/include/asm/paravirt_api_clock.h b/arch/loongarch/include/asm/paravirt_api_clock.h
deleted file mode 100644
index 65ac7cee0dad..000000000000
--- a/arch/loongarch/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm/paravirt.h>
diff --git a/arch/powerpc/include/asm/paravirt_api_clock.h b/arch/powerpc/include/asm/paravirt_api_clock.h
deleted file mode 100644
index d25ca7ac57c7..000000000000
--- a/arch/powerpc/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1,2 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#include <asm/paravirt.h>
diff --git a/arch/riscv/include/asm/paravirt_api_clock.h b/arch/riscv/include/asm/paravirt_api_clock.h
deleted file mode 100644
index 65ac7cee0dad..000000000000
--- a/arch/riscv/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm/paravirt.h>
diff --git a/arch/x86/include/asm/paravirt_api_clock.h b/arch/x86/include/asm/paravirt_api_clock.h
deleted file mode 100644
index 65ac7cee0dad..000000000000
--- a/arch/x86/include/asm/paravirt_api_clock.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm/paravirt.h>
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1f5d07067f60..0d0fa13cab5c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -84,7 +84,6 @@ struct cpuidle_state;

 #ifdef CONFIG_PARAVIRT
 # include <asm/paravirt.h>
-# include <asm/paravirt_api_clock.h>
 #endif

 #include <asm/barrier.h>
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h
  2025-10-06  7:45 ` [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h Juergen Gross
@ 2025-10-15 16:02   ` Shrikanth Hegde
  0 siblings, 0 replies; 24+ messages in thread
From: Shrikanth Hegde @ 2025-10-15 16:02 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Russell King, Catalin Marinas,
	Will Deacon, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
	linux-arm-kernel, linux-kernel, x86, virtualization, loongarch,
	linuxppc-dev, linux-riscv

On 10/6/25 1:15 PM, Juergen Gross wrote:
> All architectures supporting CONFIG_PARAVIRT share the same contents
> of asm/paravirt_api_clock.h:
> 
>   #include <asm/paravirt.h>
> 
> So remove all incarnations of asm/paravirt_api_clock.h and remove the
> only place where it is included, as there asm/paravirt.h is included
> anyway.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>   arch/arm/include/asm/paravirt_api_clock.h       | 1 -
>   arch/arm64/include/asm/paravirt_api_clock.h     | 1 -
>   arch/loongarch/include/asm/paravirt_api_clock.h | 1 -
>   arch/powerpc/include/asm/paravirt_api_clock.h   | 2 --
>   arch/riscv/include/asm/paravirt_api_clock.h     | 1 -
>   arch/x86/include/asm/paravirt_api_clock.h       | 1 -
>   kernel/sched/sched.h                            | 1 -
>   7 files changed, 8 deletions(-)
>   delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
>   delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
>   delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
>   delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
>   delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
>   delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h
> 

For powerpc, scheduler bits
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread
* [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
                   ` (3 preceding siblings ...)
  2025-10-06  7:45 ` [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h Juergen Gross
@ 2025-10-06  7:45 ` Juergen Gross
  2026-01-07 22:48   ` Alexey Makhalov
  2025-10-06  7:45 ` [PATCH v3 07/21] arm/paravirt: Use common code for paravirt_steal_clock() Juergen Gross
  ` (12 subsequent siblings)
  17 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:45 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization, loongarch, linuxppc-dev,
	linux-riscv, kvm
  Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Russell King, Catalin Marinas,
	Will Deacon, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini,
	Oleksandr Tyshchenko, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, linux-arm-kernel, xen-devel

Paravirt clock related functions are available in multiple archs. In
order to share the common parts, move the common static keys to
kernel/sched/ and remove them from the arch specific files.

Make a common paravirt_steal_clock() implementation available in
kernel/sched/cputime.c, guarding it with a new config option
CONFIG_HAVE_PV_STEAL_CLOCK_GEN, which can be selected by an arch in
case it wants to use that common variant.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/Kconfig                           |  3 +++
 arch/arm/include/asm/paravirt.h        |  4 ----
 arch/arm/kernel/paravirt.c             |  3 ---
 arch/arm64/include/asm/paravirt.h      |  4 ----
 arch/arm64/kernel/paravirt.c           |  4 +---
 arch/loongarch/include/asm/paravirt.h  |  3 ---
 arch/loongarch/kernel/paravirt.c       |  3 +--
 arch/powerpc/include/asm/paravirt.h    |  3 ---
 arch/powerpc/platforms/pseries/setup.c |  4 +---
 arch/riscv/include/asm/paravirt.h      |  4 ----
 arch/riscv/kernel/paravirt.c           |  4 +---
 arch/x86/include/asm/paravirt.h        |  4 ----
 arch/x86/kernel/cpu/vmware.c           |  1 +
 arch/x86/kernel/kvm.c                  |  1 +
 arch/x86/kernel/paravirt.c             |  3 ---
 drivers/xen/time.c                     |  1 +
 include/linux/sched/cputime.h          | 18 ++++++++++++++++++
 kernel/sched/core.c                    |  5 +++++
 kernel/sched/cputime.c                 | 13 +++++++++++++
 kernel/sched/sched.h                   |  2 +-
 20 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index ebe08b9186ad..f310ac346fa4 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1051,6 +1051,9 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call enable_sched_clock_irqtime().

+config HAVE_PV_STEAL_CLOCK_GEN
+	bool
+
 config HAVE_MOVE_PUD
 	bool
 	help
diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
index 95d5b0d625cd..69da4bdcf856 100644
--- a/arch/arm/include/asm/paravirt.h
+++ b/arch/arm/include/asm/paravirt.h
@@ -5,10 +5,6 @@
 #ifdef CONFIG_PARAVIRT
 #include <linux/static_call_types.h>

-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 u64 dummy_steal_clock(int cpu);

 DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
index 7dd9806369fb..3895a5578852 100644
--- a/arch/arm/kernel/paravirt.c
+++ b/arch/arm/kernel/paravirt.c
@@ -12,9 +12,6 @@
 #include <linux/static_call.h>
 #include <asm/paravirt.h>

-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
-
 static u64 native_steal_clock(int cpu)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..c9f7590baacb 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -5,10 +5,6 @@
 #ifdef CONFIG_PARAVIRT
 #include <linux/static_call_types.h>

-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 u64 dummy_steal_clock(int cpu);

 DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index aa718d6a9274..943b60ce12f4 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -19,14 +19,12 @@
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/static_call.h>
+#include <linux/sched/cputime.h>

 #include <asm/paravirt.h>
 #include <asm/pvclock-abi.h>
 #include <asm/smp_plat.h>

-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
-
 static u64 native_steal_clock(int cpu)
 {
 	return 0;
diff --git a/arch/loongarch/include/asm/paravirt.h b/arch/loongarch/include/asm/paravirt.h
index 3f4323603e6a..d219ea0d98ac 100644
--- a/arch/loongarch/include/asm/paravirt.h
+++ b/arch/loongarch/include/asm/paravirt.h
@@ -5,9 +5,6 @@
 #ifdef CONFIG_PARAVIRT
 #include <linux/static_call_types.h>

-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
 u64 dummy_steal_clock(int cpu);

 DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
index b1b51f920b23..8caaa94fed1a 100644
--- a/arch/loongarch/kernel/paravirt.c
+++ b/arch/loongarch/kernel/paravirt.c
@@ -6,11 +6,10 @@
 #include <linux/kvm_para.h>
 #include <linux/reboot.h>
 #include <linux/static_call.h>
+#include <linux/sched/cputime.h>
 #include <asm/paravirt.h>

 static int has_steal_clock;
-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
 static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
 DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index b78b82d66057..92343a23ad15 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -23,9 +23,6 @@ static inline bool is_shared_processor(void)
 }

 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 u64 pseries_paravirt_steal_clock(int cpu);

 static inline u64 paravirt_steal_clock(int cpu)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index b10a25325238..50b26ed8432d 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -42,6 +42,7 @@
 #include <linux/memblock.h>
 #include <linux/swiotlb.h>
 #include <linux/seq_buf.h>
+#include <linux/sched/cputime.h>

 #include <asm/mmu.h>
 #include <asm/processor.h>
@@ -83,9 +84,6 @@ DEFINE_STATIC_KEY_FALSE(shared_processor);
 EXPORT_SYMBOL(shared_processor);

 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
-
 static bool steal_acc = true;
 static int __init parse_no_stealacc(char *arg)
 {
diff --git a/arch/riscv/include/asm/paravirt.h b/arch/riscv/include/asm/paravirt.h
index c0abde70fc2c..17e5e39c72c0 100644
--- a/arch/riscv/include/asm/paravirt.h
+++ b/arch/riscv/include/asm/paravirt.h
@@ -5,10 +5,6 @@
 #ifdef CONFIG_PARAVIRT
 #include <linux/static_call_types.h>

-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 u64 dummy_steal_clock(int cpu);

 DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
diff --git a/arch/riscv/kernel/paravirt.c b/arch/riscv/kernel/paravirt.c
index fa6b0339a65d..d3c334f16172 100644
--- a/arch/riscv/kernel/paravirt.c
+++ b/arch/riscv/kernel/paravirt.c
@@ -16,15 +16,13 @@
 #include <linux/printk.h>
 #include <linux/static_call.h>
 #include <linux/types.h>
+#include <linux/sched/cputime.h>

 #include <asm/barrier.h>
 #include <asm/page.h>
 #include <asm/paravirt.h>
 #include <asm/sbi.h>

-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
-
 static u64 native_steal_clock(int cpu)
 {
 	return 0;
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 1344d2fb2b86..0ef797ea8440 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -30,10 +30,6 @@ static __always_inline u64 paravirt_sched_clock(void)
 	return static_call(pv_sched_clock)();
 }

-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 __visible void __native_queued_spin_unlock(struct qspinlock *lock);
 bool pv_is_native_spin_unlock(void);
 __visible bool __native_vcpu_is_preempted(long cpu);
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index cb3f900c46fc..a3e6936839b1 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -29,6 +29,7 @@
 #include <linux/efi.h>
 #include <linux/reboot.h>
 #include <linux/static_call.h>
+#include <linux/sched/cputime.h>
 #include <asm/div64.h>
 #include <asm/x86_init.h>
 #include <asm/hypervisor.h>
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b67d7c59dca0..d54fd2bc0402 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -29,6 +29,7 @@
 #include <linux/syscore_ops.h>
 #include <linux/cc_platform.h>
 #include <linux/efi.h>
+#include <linux/sched/cputime.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..a3ba4747be1c 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,9 +60,6 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }

-struct static_key paravirt_steal_enabled;
-struct static_key paravirt_steal_rq_enabled;
-
 static u64 native_steal_clock(int cpu)
 {
 	return 0;
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 5683383d2305..d360ded2ef39 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -8,6 +8,7 @@
 #include <linux/gfp.h>
 #include <linux/slab.h>
 #include <linux/static_call.h>
+#include <linux/sched/cputime.h>

 #include <asm/paravirt.h>
 #include <asm/xen/hypervisor.h>
diff --git a/include/linux/sched/cputime.h b/include/linux/sched/cputime.h
index 5f8fd5b24a2e..e90efaf6d26e 100644
--- a/include/linux/sched/cputime.h
+++ b/include/linux/sched/cputime.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_SCHED_CPUTIME_H
 #define _LINUX_SCHED_CPUTIME_H

+#include <linux/static_call_types.h>
 #include <linux/sched/signal.h>

 /*
@@ -180,4 +181,21 @@ static inline void prev_cputime_init(struct prev_cputime *prev)

 extern unsigned long long task_sched_runtime(struct task_struct *task);

+#ifdef 
CONFIG_PARAVIRT +struct static_key; +extern struct static_key paravirt_steal_enabled; +extern struct static_key paravirt_steal_rq_enabled; + +#ifdef CONFIG_HAVE_PV_STEAL_CLOCK_GEN +u64 dummy_steal_clock(int cpu); + +DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); + +static inline u64 paravirt_steal_clock(int cpu) +{ + return static_call(pv_steal_clock)(cpu); +} +#endif +#endif + #endif /* _LINUX_SCHED_CPUTIME_H */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 198d2dd45f59..06a9a20820d4 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -769,6 +769,11 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf) * RQ-clock updating methods: */ +/* Use CONFIG_PARAVIRT as this will avoid more #ifdef in arch code. */ +#ifdef CONFIG_PARAVIRT +struct static_key paravirt_steal_rq_enabled; +#endif + static void update_rq_clock_task(struct rq *rq, s64 delta) { /* diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index 7097de2c8cda..ed8f71e08047 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -251,6 +251,19 @@ void __account_forceidle_time(struct task_struct *p, u64 delta) * ticks are not redelivered later. Due to that, this function may on * occasion account more time than the calling functions think elapsed. 
*/ +#ifdef CONFIG_PARAVIRT +struct static_key paravirt_steal_enabled; + +#ifdef CONFIG_HAVE_PV_STEAL_CLOCK_GEN +static u64 native_steal_clock(int cpu) +{ + return 0; +} + +DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); +#endif +#endif + static __always_inline u64 steal_account_process_time(u64 maxtime) { #ifdef CONFIG_PARAVIRT diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 0d0fa13cab5c..72fd9268008e 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -82,7 +82,7 @@ struct rt_rq; struct sched_group; struct cpuidle_state; -#ifdef CONFIG_PARAVIRT +#if defined(CONFIG_PARAVIRT) && !defined(CONFIG_HAVE_PV_STEAL_CLOCK_GEN) # include <asm/paravirt.h> #endif -- 2.51.0 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched 2025-10-06 7:45 ` [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched Juergen Gross @ 2026-01-07 22:48 ` Alexey Makhalov 0 siblings, 0 replies; 24+ messages in thread From: Alexey Makhalov @ 2026-01-07 22:48 UTC (permalink / raw) To: Juergen Gross Cc: Ajay Kaher, loongarch, x86, virtualization, linux-kernel, linuxppc-dev, linux-riscv, kvm, Broadcom internal kernel review list, Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin, Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini, Oleksandr Tyshchenko, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, linux-arm-kernel, xen-devel On 10/6/25 12:45 AM, Juergen Gross wrote: > Paravirt clock related functions are available in multiple archs. > > In order to share the common parts, move the common static keys > to kernel/sched/ and remove them from the arch specific files. > > Make a common paravirt_steal_clock() implementation available in > kernel/sched/cputime.c, guarding it with a new config option > CONFIG_HAVE_PV_STEAL_CLOCK_GEN, which can be selected by an arch > in case it wants to use that common variant. 
> > Signed-off-by: Juergen Gross <jgross@suse.com> > Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> > --- > arch/Kconfig | 3 +++ > arch/arm/include/asm/paravirt.h | 4 ---- > arch/arm/kernel/paravirt.c | 3 --- > arch/arm64/include/asm/paravirt.h | 4 ---- > arch/arm64/kernel/paravirt.c | 4 +--- > arch/loongarch/include/asm/paravirt.h | 3 --- > arch/loongarch/kernel/paravirt.c | 3 +-- > arch/powerpc/include/asm/paravirt.h | 3 --- > arch/powerpc/platforms/pseries/setup.c | 4 +--- > arch/riscv/include/asm/paravirt.h | 4 ---- > arch/riscv/kernel/paravirt.c | 4 +--- > arch/x86/include/asm/paravirt.h | 4 ---- > arch/x86/kernel/cpu/vmware.c | 1 + > arch/x86/kernel/kvm.c | 1 + > arch/x86/kernel/paravirt.c | 3 --- > drivers/xen/time.c | 1 + > include/linux/sched/cputime.h | 18 ++++++++++++++++++ > kernel/sched/core.c | 5 +++++ > kernel/sched/cputime.c | 13 +++++++++++++ > kernel/sched/sched.h | 2 +- > 20 files changed, 47 insertions(+), 40 deletions(-) > > diff --git a/arch/Kconfig b/arch/Kconfig > index ebe08b9186ad..f310ac346fa4 100644 > --- a/arch/Kconfig > +++ b/arch/Kconfig > @@ -1051,6 +1051,9 @@ config HAVE_IRQ_TIME_ACCOUNTING > Archs need to ensure they use a high enough resolution clock to > support irq time accounting and then call enable_sched_clock_irqtime(). 
> > +config HAVE_PV_STEAL_CLOCK_GEN > + bool > + > config HAVE_MOVE_PUD > bool > help > diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h > index 95d5b0d625cd..69da4bdcf856 100644 > --- a/arch/arm/include/asm/paravirt.h > +++ b/arch/arm/include/asm/paravirt.h > @@ -5,10 +5,6 @@ > #ifdef CONFIG_PARAVIRT > #include <linux/static_call_types.h> > > -struct static_key; > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > - > u64 dummy_steal_clock(int cpu); > > DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c > index 7dd9806369fb..3895a5578852 100644 > --- a/arch/arm/kernel/paravirt.c > +++ b/arch/arm/kernel/paravirt.c > @@ -12,9 +12,6 @@ > #include <linux/static_call.h> > #include <asm/paravirt.h> > > -struct static_key paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > - > static u64 native_steal_clock(int cpu) > { > return 0; > diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h > index 9aa193e0e8f2..c9f7590baacb 100644 > --- a/arch/arm64/include/asm/paravirt.h > +++ b/arch/arm64/include/asm/paravirt.h > @@ -5,10 +5,6 @@ > #ifdef CONFIG_PARAVIRT > #include <linux/static_call_types.h> > > -struct static_key; > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > - > u64 dummy_steal_clock(int cpu); > > DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c > index aa718d6a9274..943b60ce12f4 100644 > --- a/arch/arm64/kernel/paravirt.c > +++ b/arch/arm64/kernel/paravirt.c > @@ -19,14 +19,12 @@ > #include <linux/slab.h> > #include <linux/types.h> > #include <linux/static_call.h> > +#include <linux/sched/cputime.h> > > #include <asm/paravirt.h> > #include <asm/pvclock-abi.h> > #include <asm/smp_plat.h> > > -struct static_key 
paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > - > static u64 native_steal_clock(int cpu) > { > return 0; > diff --git a/arch/loongarch/include/asm/paravirt.h b/arch/loongarch/include/asm/paravirt.h > index 3f4323603e6a..d219ea0d98ac 100644 > --- a/arch/loongarch/include/asm/paravirt.h > +++ b/arch/loongarch/include/asm/paravirt.h > @@ -5,9 +5,6 @@ > #ifdef CONFIG_PARAVIRT > > #include <linux/static_call_types.h> > -struct static_key; > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > > u64 dummy_steal_clock(int cpu); > DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c > index b1b51f920b23..8caaa94fed1a 100644 > --- a/arch/loongarch/kernel/paravirt.c > +++ b/arch/loongarch/kernel/paravirt.c > @@ -6,11 +6,10 @@ > #include <linux/kvm_para.h> > #include <linux/reboot.h> > #include <linux/static_call.h> > +#include <linux/sched/cputime.h> > #include <asm/paravirt.h> > > static int has_steal_clock; > -struct static_key paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64); > DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key); > > diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h > index b78b82d66057..92343a23ad15 100644 > --- a/arch/powerpc/include/asm/paravirt.h > +++ b/arch/powerpc/include/asm/paravirt.h > @@ -23,9 +23,6 @@ static inline bool is_shared_processor(void) > } > > #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > - > u64 pseries_paravirt_steal_clock(int cpu); > > static inline u64 paravirt_steal_clock(int cpu) > diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c > index b10a25325238..50b26ed8432d 100644 > --- 
a/arch/powerpc/platforms/pseries/setup.c > +++ b/arch/powerpc/platforms/pseries/setup.c > @@ -42,6 +42,7 @@ > #include <linux/memblock.h> > #include <linux/swiotlb.h> > #include <linux/seq_buf.h> > +#include <linux/sched/cputime.h> > > #include <asm/mmu.h> > #include <asm/processor.h> > @@ -83,9 +84,6 @@ DEFINE_STATIC_KEY_FALSE(shared_processor); > EXPORT_SYMBOL(shared_processor); > > #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING > -struct static_key paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > - > static bool steal_acc = true; > static int __init parse_no_stealacc(char *arg) > { > diff --git a/arch/riscv/include/asm/paravirt.h b/arch/riscv/include/asm/paravirt.h > index c0abde70fc2c..17e5e39c72c0 100644 > --- a/arch/riscv/include/asm/paravirt.h > +++ b/arch/riscv/include/asm/paravirt.h > @@ -5,10 +5,6 @@ > #ifdef CONFIG_PARAVIRT > #include <linux/static_call_types.h> > > -struct static_key; > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > - > u64 dummy_steal_clock(int cpu); > > DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > diff --git a/arch/riscv/kernel/paravirt.c b/arch/riscv/kernel/paravirt.c > index fa6b0339a65d..d3c334f16172 100644 > --- a/arch/riscv/kernel/paravirt.c > +++ b/arch/riscv/kernel/paravirt.c > @@ -16,15 +16,13 @@ > #include <linux/printk.h> > #include <linux/static_call.h> > #include <linux/types.h> > +#include <linux/sched/cputime.h> > > #include <asm/barrier.h> > #include <asm/page.h> > #include <asm/paravirt.h> > #include <asm/sbi.h> > > -struct static_key paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > - > static u64 native_steal_clock(int cpu) > { > return 0; > diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h > index 1344d2fb2b86..0ef797ea8440 100644 > --- a/arch/x86/include/asm/paravirt.h > +++ b/arch/x86/include/asm/paravirt.h > @@ -30,10 +30,6 @@ static __always_inline u64 
paravirt_sched_clock(void) > return static_call(pv_sched_clock)(); > } > > -struct static_key; > -extern struct static_key paravirt_steal_enabled; > -extern struct static_key paravirt_steal_rq_enabled; > - > __visible void __native_queued_spin_unlock(struct qspinlock *lock); > bool pv_is_native_spin_unlock(void); > __visible bool __native_vcpu_is_preempted(long cpu); > diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c > index cb3f900c46fc..a3e6936839b1 100644 > --- a/arch/x86/kernel/cpu/vmware.c > +++ b/arch/x86/kernel/cpu/vmware.c > @@ -29,6 +29,7 @@ > #include <linux/efi.h> > #include <linux/reboot.h> > #include <linux/static_call.h> > +#include <linux/sched/cputime.h> > #include <asm/div64.h> > #include <asm/x86_init.h> > #include <asm/hypervisor.h> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c > index b67d7c59dca0..d54fd2bc0402 100644 > --- a/arch/x86/kernel/kvm.c > +++ b/arch/x86/kernel/kvm.c > @@ -29,6 +29,7 @@ > #include <linux/syscore_ops.h> > #include <linux/cc_platform.h> > #include <linux/efi.h> > +#include <linux/sched/cputime.h> > #include <asm/timer.h> > #include <asm/cpu.h> > #include <asm/traps.h> > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c > index ab3e172dcc69..a3ba4747be1c 100644 > --- a/arch/x86/kernel/paravirt.c > +++ b/arch/x86/kernel/paravirt.c > @@ -60,9 +60,6 @@ void __init native_pv_lock_init(void) > static_branch_enable(&virt_spin_lock_key); > } > > -struct static_key paravirt_steal_enabled; > -struct static_key paravirt_steal_rq_enabled; > - > static u64 native_steal_clock(int cpu) > { > return 0; > diff --git a/drivers/xen/time.c b/drivers/xen/time.c > index 5683383d2305..d360ded2ef39 100644 > --- a/drivers/xen/time.c > +++ b/drivers/xen/time.c > @@ -8,6 +8,7 @@ > #include <linux/gfp.h> > #include <linux/slab.h> > #include <linux/static_call.h> > +#include <linux/sched/cputime.h> > > #include <asm/paravirt.h> > #include <asm/xen/hypervisor.h> > diff --git 
a/include/linux/sched/cputime.h b/include/linux/sched/cputime.h > index 5f8fd5b24a2e..e90efaf6d26e 100644 > --- a/include/linux/sched/cputime.h > +++ b/include/linux/sched/cputime.h > @@ -2,6 +2,7 @@ > #ifndef _LINUX_SCHED_CPUTIME_H > #define _LINUX_SCHED_CPUTIME_H > > +#include <linux/static_call_types.h> > #include <linux/sched/signal.h> > > /* > @@ -180,4 +181,21 @@ static inline void prev_cputime_init(struct prev_cputime *prev) > extern unsigned long long > task_sched_runtime(struct task_struct *task); > > +#ifdef CONFIG_PARAVIRT > +struct static_key; > +extern struct static_key paravirt_steal_enabled; > +extern struct static_key paravirt_steal_rq_enabled; > + > +#ifdef CONFIG_HAVE_PV_STEAL_CLOCK_GEN > +u64 dummy_steal_clock(int cpu); > + > +DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > + > +static inline u64 paravirt_steal_clock(int cpu) > +{ > + return static_call(pv_steal_clock)(cpu); > +} > +#endif > +#endif > + > #endif /* _LINUX_SCHED_CPUTIME_H */ > diff --git a/kernel/sched/core.c b/kernel/sched/core.c > index 198d2dd45f59..06a9a20820d4 100644 > --- a/kernel/sched/core.c > +++ b/kernel/sched/core.c > @@ -769,6 +769,11 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf) > * RQ-clock updating methods: > */ > > +/* Use CONFIG_PARAVIRT as this will avoid more #ifdef in arch code. */ > +#ifdef CONFIG_PARAVIRT > +struct static_key paravirt_steal_rq_enabled; > +#endif > + > static void update_rq_clock_task(struct rq *rq, s64 delta) > { > /* > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c > index 7097de2c8cda..ed8f71e08047 100644 > --- a/kernel/sched/cputime.c > +++ b/kernel/sched/cputime.c > @@ -251,6 +251,19 @@ void __account_forceidle_time(struct task_struct *p, u64 delta) > * ticks are not redelivered later. Due to that, this function may on > * occasion account more time than the calling functions think elapsed. 
> */ > +#ifdef CONFIG_PARAVIRT > +struct static_key paravirt_steal_enabled; > + > +#ifdef CONFIG_HAVE_PV_STEAL_CLOCK_GEN > +static u64 native_steal_clock(int cpu) > +{ > + return 0; > +} > + > +DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); > +#endif > +#endif > + > static __always_inline u64 steal_account_process_time(u64 maxtime) > { > #ifdef CONFIG_PARAVIRT > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h > index 0d0fa13cab5c..72fd9268008e 100644 > --- a/kernel/sched/sched.h > +++ b/kernel/sched/sched.h > @@ -82,7 +82,7 @@ struct rt_rq; > struct sched_group; > struct cpuidle_state; > > -#ifdef CONFIG_PARAVIRT > +#if defined(CONFIG_PARAVIRT) && !defined(CONFIG_HAVE_PV_STEAL_CLOCK_GEN) > # include <asm/paravirt.h> > #endif > Acked-by: Alexey Makhalov <alexey.makhalov@broadcom.com> ^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 07/21] arm/paravirt: Use common code for paravirt_steal_clock() 2025-10-06 7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross ` (4 preceding siblings ...) 2025-10-06 7:45 ` [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched Juergen Gross @ 2025-10-06 7:45 ` Juergen Gross 2025-10-06 7:45 ` [PATCH v3 08/21] arm64/paravirt: " Juergen Gross ` (11 subsequent siblings) 17 siblings, 0 replies; 24+ messages in thread From: Juergen Gross @ 2025-10-06 7:45 UTC (permalink / raw) To: linux-kernel, virtualization, x86 Cc: Juergen Gross, Russell King, Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list, Stefano Stabellini, Oleksandr Tyshchenko, linux-arm-kernel, xen-devel, Peter Zijlstra (Intel) Remove the arch specific variant of paravirt_steal_clock() and use the common one instead. This allows to remove paravirt.c and paravirt.h from arch/arm. Until all archs supporting Xen have been switched to the common code of paravirt_steal_clock(), drivers/xen/time.c needs to include asm/paravirt.h for those archs, while this is not necessary for arm any longer. 
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- arch/arm/Kconfig | 1 + arch/arm/include/asm/paravirt.h | 18 ------------------ arch/arm/kernel/Makefile | 1 - arch/arm/kernel/paravirt.c | 20 -------------------- drivers/xen/time.c | 2 ++ 5 files changed, 3 insertions(+), 39 deletions(-) delete mode 100644 arch/arm/include/asm/paravirt.h delete mode 100644 arch/arm/kernel/paravirt.c diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 2a124c92e4f6..39ab0860bfbc 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1319,6 +1319,7 @@ config UACCESS_WITH_MEMCPY config PARAVIRT bool "Enable paravirtualization code" + select HAVE_PV_STEAL_CLOCK_GEN help This changes the kernel so it can modify itself when it is run under a hypervisor, potentially improving performance significantly diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h deleted file mode 100644 index 69da4bdcf856..000000000000 --- a/arch/arm/include/asm/paravirt.h +++ /dev/null @@ -1,18 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_ARM_PARAVIRT_H -#define _ASM_ARM_PARAVIRT_H - -#ifdef CONFIG_PARAVIRT -#include <linux/static_call_types.h> - -u64 dummy_steal_clock(int cpu); - -DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); - -static inline u64 paravirt_steal_clock(int cpu) -{ - return static_call(pv_steal_clock)(cpu); -} -#endif - -#endif diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile index afc9de7ef9a1..b36cf0cfd4a7 100644 --- a/arch/arm/kernel/Makefile +++ b/arch/arm/kernel/Makefile @@ -83,7 +83,6 @@ AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o obj-$(CONFIG_VDSO) += vdso.o obj-$(CONFIG_EFI) += efi.o -obj-$(CONFIG_PARAVIRT) += paravirt.o obj-y += head$(MMUEXT).o obj-$(CONFIG_DEBUG_LL) += debug.o diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c deleted file mode 100644 index 3895a5578852..000000000000 --- 
a/arch/arm/kernel/paravirt.c +++ /dev/null @@ -1,20 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * - * Copyright (C) 2013 Citrix Systems - * - * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com> - */ - -#include <linux/export.h> -#include <linux/jump_label.h> -#include <linux/types.h> -#include <linux/static_call.h> -#include <asm/paravirt.h> - -static u64 native_steal_clock(int cpu) -{ - return 0; -} - -DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); diff --git a/drivers/xen/time.c b/drivers/xen/time.c index d360ded2ef39..53b12f5ac465 100644 --- a/drivers/xen/time.c +++ b/drivers/xen/time.c @@ -10,7 +10,9 @@ #include <linux/static_call.h> #include <linux/sched/cputime.h> +#ifndef CONFIG_HAVE_PV_STEAL_CLOCK_GEN #include <asm/paravirt.h> +#endif #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> -- 2.51.0 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v3 08/21] arm64/paravirt: Use common code for paravirt_steal_clock() 2025-10-06 7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross ` (5 preceding siblings ...) 2025-10-06 7:45 ` [PATCH v3 07/21] arm/paravirt: Use common code for paravirt_steal_clock() Juergen Gross @ 2025-10-06 7:45 ` Juergen Gross 2025-10-06 7:45 ` [PATCH v3 09/21] loongarch/paravirt: " Juergen Gross ` (10 subsequent siblings) 17 siblings, 0 replies; 24+ messages in thread From: Juergen Gross @ 2025-10-06 7:45 UTC (permalink / raw) To: linux-kernel, virtualization, x86 Cc: Juergen Gross, Catalin Marinas, Will Deacon, Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list, linux-arm-kernel, Peter Zijlstra (Intel) Remove the arch specific variant of paravirt_steal_clock() and use the common one instead. Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/paravirt.h | 10 ---------- arch/arm64/kernel/paravirt.c | 7 ------- 3 files changed, 1 insertion(+), 17 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6663ffd23f25..3a463027538e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1560,6 +1560,7 @@ config CC_HAVE_SHADOW_CALL_STACK config PARAVIRT bool "Enable paravirtualization code" + select HAVE_PV_STEAL_CLOCK_GEN help This changes the kernel so it can modify itself when it is run under a hypervisor, potentially improving performance significantly diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h index c9f7590baacb..cb037e742372 100644 --- a/arch/arm64/include/asm/paravirt.h +++ b/arch/arm64/include/asm/paravirt.h @@ -3,16 +3,6 @@ #define _ASM_ARM64_PARAVIRT_H #ifdef CONFIG_PARAVIRT -#include <linux/static_call_types.h> - -u64 dummy_steal_clock(int cpu); - -DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); - -static inline u64 paravirt_steal_clock(int cpu) -{ - return 
static_call(pv_steal_clock)(cpu); -} int __init pv_time_init(void); diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c index 943b60ce12f4..572efb96b23f 100644 --- a/arch/arm64/kernel/paravirt.c +++ b/arch/arm64/kernel/paravirt.c @@ -25,13 +25,6 @@ #include <asm/pvclock-abi.h> #include <asm/smp_plat.h> -static u64 native_steal_clock(int cpu) -{ - return 0; -} - -DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); - struct pv_time_stolen_time_region { struct pvclock_vcpu_stolen_time __rcu *kaddr; }; -- 2.51.0 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v3 09/21] loongarch/paravirt: Use common code for paravirt_steal_clock() 2025-10-06 7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross ` (6 preceding siblings ...) 2025-10-06 7:45 ` [PATCH v3 08/21] arm64/paravirt: " Juergen Gross @ 2025-10-06 7:45 ` Juergen Gross 2025-11-24 9:52 ` Bibo Mao 2025-10-06 7:45 ` [PATCH v3 10/21] riscv/paravirt: " Juergen Gross ` (9 subsequent siblings) 17 siblings, 1 reply; 24+ messages in thread From: Juergen Gross @ 2025-10-06 7:45 UTC (permalink / raw) To: linux-kernel, loongarch, virtualization, x86 Cc: Juergen Gross, Huacai Chen, WANG Xuerui, Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list, Peter Zijlstra (Intel) Remove the arch specific variant of paravirt_steal_clock() and use the common one instead. Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- arch/loongarch/Kconfig | 1 + arch/loongarch/include/asm/paravirt.h | 10 ---------- arch/loongarch/kernel/paravirt.c | 7 ------- 3 files changed, 1 insertion(+), 17 deletions(-) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index ea683bcea14c..7a9d1d0edc92 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -671,6 +671,7 @@ source "kernel/livepatch/Kconfig" config PARAVIRT bool "Enable paravirtualization code" depends on AS_HAS_LVZ_EXTENSION + select HAVE_PV_STEAL_CLOCK_GEN help This changes the kernel so it can modify itself when it is run under a hypervisor, potentially improving performance significantly diff --git a/arch/loongarch/include/asm/paravirt.h b/arch/loongarch/include/asm/paravirt.h index d219ea0d98ac..0111f0ad5f73 100644 --- a/arch/loongarch/include/asm/paravirt.h +++ b/arch/loongarch/include/asm/paravirt.h @@ -4,16 +4,6 @@ #ifdef CONFIG_PARAVIRT -#include <linux/static_call_types.h> - -u64 dummy_steal_clock(int cpu); -DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); - -static inline u64 paravirt_steal_clock(int cpu) -{ - return 
static_call(pv_steal_clock)(cpu); -} - int __init pv_ipi_init(void); int __init pv_time_init(void); int __init pv_spinlock_init(void); diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c index 8caaa94fed1a..c5e526098c0b 100644 --- a/arch/loongarch/kernel/paravirt.c +++ b/arch/loongarch/kernel/paravirt.c @@ -13,13 +13,6 @@ static int has_steal_clock; static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64); DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key); -static u64 native_steal_clock(int cpu) -{ - return 0; -} - -DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); - static bool steal_acc = true; static int __init parse_no_stealacc(char *arg) -- 2.51.0 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v3 09/21] loongarch/paravirt: Use common code for paravirt_steal_clock() 2025-10-06 7:45 ` [PATCH v3 09/21] loongarch/paravirt: " Juergen Gross @ 2025-11-24 9:52 ` Bibo Mao 0 siblings, 0 replies; 24+ messages in thread From: Bibo Mao @ 2025-11-24 9:52 UTC (permalink / raw) To: Juergen Gross, linux-kernel, loongarch, virtualization, x86 Cc: Huacai Chen, WANG Xuerui, Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list, Peter Zijlstra (Intel) Reviewed-by: Bibo Mao <maobibo@loongson.cn> On 2025/10/6 下午3:45, Juergen Gross wrote: > Remove the arch specific variant of paravirt_steal_clock() and use > the common one instead. > > Signed-off-by: Juergen Gross <jgross@suse.com> > Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> > --- > arch/loongarch/Kconfig | 1 + > arch/loongarch/include/asm/paravirt.h | 10 ---------- > arch/loongarch/kernel/paravirt.c | 7 ------- > 3 files changed, 1 insertion(+), 17 deletions(-) > > diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig > index ea683bcea14c..7a9d1d0edc92 100644 > --- a/arch/loongarch/Kconfig > +++ b/arch/loongarch/Kconfig > @@ -671,6 +671,7 @@ source "kernel/livepatch/Kconfig" > config PARAVIRT > bool "Enable paravirtualization code" > depends on AS_HAS_LVZ_EXTENSION > + select HAVE_PV_STEAL_CLOCK_GEN > help > This changes the kernel so it can modify itself when it is run > under a hypervisor, potentially improving performance significantly > diff --git a/arch/loongarch/include/asm/paravirt.h b/arch/loongarch/include/asm/paravirt.h > index d219ea0d98ac..0111f0ad5f73 100644 > --- a/arch/loongarch/include/asm/paravirt.h > +++ b/arch/loongarch/include/asm/paravirt.h > @@ -4,16 +4,6 @@ > > #ifdef CONFIG_PARAVIRT > > -#include <linux/static_call_types.h> > - > -u64 dummy_steal_clock(int cpu); > -DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock); > - > -static inline u64 paravirt_steal_clock(int cpu) > -{ > - return static_call(pv_steal_clock)(cpu); > -} > - > int __init 
pv_ipi_init(void); > int __init pv_time_init(void); > int __init pv_spinlock_init(void); > diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c > index 8caaa94fed1a..c5e526098c0b 100644 > --- a/arch/loongarch/kernel/paravirt.c > +++ b/arch/loongarch/kernel/paravirt.c > @@ -13,13 +13,6 @@ static int has_steal_clock; > static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64); > DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key); > > -static u64 native_steal_clock(int cpu) > -{ > - return 0; > -} > - > -DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock); > - > static bool steal_acc = true; > > static int __init parse_no_stealacc(char *arg) > ^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 10/21] riscv/paravirt: Use common code for paravirt_steal_clock()
From: Juergen Gross @ 2025-10-06  7:45 UTC
To: linux-kernel, linux-riscv, virtualization, x86
Cc: Juergen Gross, Paul Walmsley, Palmer Dabbelt, Albert Ou,
        Alexandre Ghiti, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Peter Zijlstra (Intel),
        Andrew Jones

Remove the arch specific variant of paravirt_steal_clock() and use the
common one instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
 arch/riscv/Kconfig                |  1 +
 arch/riscv/include/asm/paravirt.h | 10 ----------
 arch/riscv/kernel/paravirt.c      |  7 -------
 3 files changed, 1 insertion(+), 17 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0c6038dc5dfd..68edcf741134 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -1095,6 +1095,7 @@ config COMPAT
 config PARAVIRT
        bool "Enable paravirtualization code"
        depends on RISCV_SBI
+       select HAVE_PV_STEAL_CLOCK_GEN
        help
          This changes the kernel so it can modify itself when it is run
          under a hypervisor, potentially improving performance significantly

diff --git a/arch/riscv/include/asm/paravirt.h b/arch/riscv/include/asm/paravirt.h
index 17e5e39c72c0..c49c55b266f3 100644
--- a/arch/riscv/include/asm/paravirt.h
+++ b/arch/riscv/include/asm/paravirt.h
@@ -3,16 +3,6 @@
 #define _ASM_RISCV_PARAVIRT_H
 
 #ifdef CONFIG_PARAVIRT
-#include <linux/static_call_types.h>
-
-u64 dummy_steal_clock(int cpu);
-
-DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
-
-static inline u64 paravirt_steal_clock(int cpu)
-{
-        return static_call(pv_steal_clock)(cpu);
-}
 
 int __init pv_time_init(void);
 
diff --git a/arch/riscv/kernel/paravirt.c b/arch/riscv/kernel/paravirt.c
index d3c334f16172..5f56be79cd06 100644
--- a/arch/riscv/kernel/paravirt.c
+++ b/arch/riscv/kernel/paravirt.c
@@ -23,13 +23,6 @@
 #include <asm/paravirt.h>
 #include <asm/sbi.h>
 
-static u64 native_steal_clock(int cpu)
-{
-        return 0;
-}
-
-DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
-
 static bool steal_acc = true;
 
 static int __init parse_no_stealacc(char *arg)
 {
-- 
2.51.0
* [PATCH v3 11/21] x86/paravirt: Use common code for paravirt_steal_clock()
From: Juergen Gross @ 2025-10-06  7:45 UTC
To: linux-kernel, x86, virtualization
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
        Dave Hansen, H. Peter Anvin, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Boris Ostrovsky,
        Stefano Stabellini, Oleksandr Tyshchenko, xen-devel,
        Peter Zijlstra (Intel)

Remove the arch specific variant of paravirt_steal_clock() and use the
common one instead.

With all archs supporting Xen now having been switched to the common
variant, including paravirt.h can be dropped from drivers/xen/time.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/Kconfig                |  1 +
 arch/x86/include/asm/paravirt.h |  7 -------
 arch/x86/kernel/paravirt.c      |  6 ------
 arch/x86/xen/time.c             |  1 +
 drivers/xen/time.c              |  3 ---
 5 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 451c3adffacb..f134cfff090b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -799,6 +799,7 @@ if HYPERVISOR_GUEST
 config PARAVIRT
        bool "Enable paravirtualization code"
        depends on HAVE_STATIC_CALL
+       select HAVE_PV_STEAL_CLOCK_GEN
        help
          This changes the kernel so it can modify itself when it is run
          under a hypervisor, potentially improving performance significantly

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 0ef797ea8440..766a7cee3d64 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -17,10 +17,8 @@
 #include <linux/static_call_types.h>
 #include <asm/frame.h>
 
-u64 dummy_steal_clock(int cpu);
 u64 dummy_sched_clock(void);
 
-DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
 
 void paravirt_set_sched_clock(u64 (*func)(void));
@@ -35,11 +33,6 @@ bool pv_is_native_spin_unlock(void);
 __visible bool __native_vcpu_is_preempted(long cpu);
 bool pv_is_native_vcpu_is_preempted(void);
 
-static inline u64 paravirt_steal_clock(int cpu)
-{
-        return static_call(pv_steal_clock)(cpu);
-}
-
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 void __init paravirt_set_cap(void);
 #endif

diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index a3ba4747be1c..42991d471bf3 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,12 +60,6 @@ void __init native_pv_lock_init(void)
                static_branch_enable(&virt_spin_lock_key);
 }
 
-static u64 native_steal_clock(int cpu)
-{
-        return 0;
-}
-
-DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
 DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
 
 void paravirt_set_sched_clock(u64 (*func)(void))

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 96521b1874ac..e4754b2fa900 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -16,6 +16,7 @@
 #include <linux/slab.h>
 #include <linux/pvclock_gtod.h>
 #include <linux/timekeeper_internal.h>
+#include <linux/sched/cputime.h>
 
 #include <asm/pvclock.h>
 #include <asm/xen/hypervisor.h>

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 53b12f5ac465..0b18d8a5a2dd 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -10,9 +10,6 @@
 #include <linux/static_call.h>
 #include <linux/sched/cputime.h>
 
-#ifndef CONFIG_HAVE_PV_STEAL_CLOCK_GEN
-#include <asm/paravirt.h>
-#endif
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
-- 
2.51.0
* [PATCH v3 12/21] x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
From: Juergen Gross @ 2025-10-06  7:45 UTC
To: linux-kernel, x86, virtualization, kvm, linux-hyperv
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, Dave Hansen, H. Peter Anvin, Paolo Bonzini,
        Vitaly Kuznetsov, Boris Ostrovsky, K. Y. Srinivasan, Haiyang Zhang,
        Wei Liu, Dexuan Cui, Daniel Lezcano, xen-devel, Peter Zijlstra (Intel)

The only user of paravirt_sched_clock() is in tsc.c, so move the code
from paravirt.c and paravirt.h to tsc.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/paravirt.h    | 12 ------------
 arch/x86/include/asm/timer.h       |  1 +
 arch/x86/kernel/kvmclock.c         |  1 +
 arch/x86/kernel/paravirt.c         |  7 -------
 arch/x86/kernel/tsc.c              | 10 +++++++++-
 arch/x86/xen/time.c                |  1 +
 drivers/clocksource/hyperv_timer.c |  2 ++
 7 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 766a7cee3d64..b69e75a5c872 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -14,20 +14,8 @@
 #ifndef __ASSEMBLER__
 #include <linux/types.h>
 #include <linux/cpumask.h>
-#include <linux/static_call_types.h>
 #include <asm/frame.h>
 
-u64 dummy_sched_clock(void);
-
-DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
-
-void paravirt_set_sched_clock(u64 (*func)(void));
-
-static __always_inline u64 paravirt_sched_clock(void)
-{
-        return static_call(pv_sched_clock)();
-}
-
 __visible void __native_queued_spin_unlock(struct qspinlock *lock);
 bool pv_is_native_spin_unlock(void);
 __visible bool __native_vcpu_is_preempted(long cpu);

diff --git a/arch/x86/include/asm/timer.h b/arch/x86/include/asm/timer.h
index 23baf8c9b34c..fda18bcb19b4 100644
--- a/arch/x86/include/asm/timer.h
+++ b/arch/x86/include/asm/timer.h
@@ -12,6 +12,7 @@ extern void recalibrate_cpu_khz(void);
 extern int no_timer_check;
 
 extern bool using_native_sched_clock(void);
+void paravirt_set_sched_clock(u64 (*func)(void));
 
 /*
  * We use the full linear equation: f(x) = a + b*x, in order to allow

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ca0a49eeac4a..b5991d53fc0e 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -19,6 +19,7 @@
 #include <linux/cc_platform.h>
 
 #include <asm/hypervisor.h>
+#include <asm/timer.h>
 #include <asm/x86_init.h>
 #include <asm/kvmclock.h>

diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 42991d471bf3..4e37db8073f9 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,13 +60,6 @@ void __init native_pv_lock_init(void)
                static_branch_enable(&virt_spin_lock_key);
 }
 
-DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
-
-void paravirt_set_sched_clock(u64 (*func)(void))
-{
-        static_call_update(pv_sched_clock, func);
-}
-
 static noinstr void pv_native_safe_halt(void)
 {
        native_safe_halt();

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 87e749106dda..554b54783a04 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -266,19 +266,27 @@ u64 native_sched_clock_from_tsc(u64 tsc)
 /* We need to define a real function for sched_clock, to override the
    weak default version */
 #ifdef CONFIG_PARAVIRT
+DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
+
 noinstr u64 sched_clock_noinstr(void)
 {
-        return paravirt_sched_clock();
+        return static_call(pv_sched_clock)();
 }
 
 bool using_native_sched_clock(void)
 {
        return static_call_query(pv_sched_clock) == native_sched_clock;
 }
+
+void paravirt_set_sched_clock(u64 (*func)(void))
+{
+        static_call_update(pv_sched_clock, func);
+}
 #else
 u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
 
 bool using_native_sched_clock(void) { return true; }
+
+void paravirt_set_sched_clock(u64 (*func)(void)) { }
 #endif
 
 notrace u64 sched_clock(void)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index e4754b2fa900..6f9f665bb7ae 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -19,6 +19,7 @@
 #include <linux/sched/cputime.h>
 
 #include <asm/pvclock.h>
+#include <asm/timer.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/cpuid.h>

diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index 2edc13ca184e..6397a7ba4a98 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -535,6 +535,8 @@ static __always_inline void hv_setup_sched_clock(void *sched_clock)
        sched_clock_register(sched_clock, 64, NSEC_PER_SEC);
 }
 #elif defined CONFIG_PARAVIRT
+#include <asm/timer.h>
+
 static __always_inline void hv_setup_sched_clock(void *sched_clock)
 {
        /* We're on x86/x64 *and* using PV ops */
-- 
2.51.0
* [PATCH v3 13/21] x86/paravirt: Introduce new paravirt-base.h header
From: Juergen Gross @ 2025-10-06  7:45 UTC
To: linux-kernel, x86, virtualization
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, Dave Hansen, H. Peter Anvin, Oleg Nesterov

Move the pv_info related definitions and the declarations of the global
paravirt function primitives into a new header file paravirt-base.h.

This enables using that header instead of paravirt_types.h in ptrace.h.
It also prepares for reducing include hell with paravirt enabled.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/paravirt-base.h  | 29 +++++++++++++++++++++++++++
 arch/x86/include/asm/paravirt.h       |  4 +++-
 arch/x86/include/asm/paravirt_types.h | 23 +--------------------
 arch/x86/include/asm/ptrace.h         |  2 +-
 4 files changed, 34 insertions(+), 24 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt-base.h

diff --git a/arch/x86/include/asm/paravirt-base.h b/arch/x86/include/asm/paravirt-base.h
new file mode 100644
index 000000000000..3827ea20de18
--- /dev/null
+++ b/arch/x86/include/asm/paravirt-base.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_X86_PARAVIRT_BASE_H
+#define _ASM_X86_PARAVIRT_BASE_H
+
+/*
+ * Wrapper type for pointers to code which uses the non-standard
+ * calling convention.  See PV_CALL_SAVE_REGS_THUNK below.
+ */
+struct paravirt_callee_save {
+        void *func;
+};
+
+struct pv_info {
+#ifdef CONFIG_PARAVIRT_XXL
+        u16 extra_user_64bit_cs;  /* __USER_CS if none */
+#endif
+        const char *name;
+};
+
+void default_banner(void);
+extern struct pv_info pv_info;
+unsigned long paravirt_ret0(void);
+#ifdef CONFIG_PARAVIRT_XXL
+u64 _paravirt_ident_64(u64);
+#endif
+#define paravirt_nop    ((void *)nop_func)
+
+#endif /* _ASM_X86_PARAVIRT_BASE_H */

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b69e75a5c872..62399f5d037d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -4,6 +4,9 @@
 /* Various instructions on x86 need to be replaced for
  * para-virtualization: those hooks are defined here. */
 
+#ifndef __ASSEMBLER__
+#include <asm/paravirt-base.h>
+#endif
 #include <asm/paravirt_types.h>
 
 #ifdef CONFIG_PARAVIRT
@@ -601,7 +604,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #undef PVOP_VCALL4
 #undef PVOP_CALL4
 
-extern void default_banner(void);
 void native_pv_lock_init(void) __init;
 
 #else  /* __ASSEMBLER__ */

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7acff40cc159..148d157e2a4a 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -7,6 +7,7 @@
 #ifndef __ASSEMBLER__
 #include <linux/types.h>
 
+#include <asm/paravirt-base.h>
 #include <asm/desc_defs.h>
 #include <asm/pgtable_types.h>
 #include <asm/nospec-branch.h>
@@ -18,23 +19,6 @@ struct cpumask;
 struct flush_tlb_info;
 struct vm_area_struct;
 
-/*
- * Wrapper type for pointers to code which uses the non-standard
- * calling convention.  See PV_CALL_SAVE_REGS_THUNK below.
- */
-struct paravirt_callee_save {
-        void *func;
-};
-
-/* general info */
-struct pv_info {
-#ifdef CONFIG_PARAVIRT_XXL
-        u16 extra_user_64bit_cs;  /* __USER_CS if none */
-#endif
-
-        const char *name;
-};
-
 #ifdef CONFIG_PARAVIRT_XXL
 struct pv_lazy_ops {
        /* Set deferred update mode, used for batching operations. */
@@ -226,7 +210,6 @@ struct paravirt_patch_template {
        struct pv_lock_ops      lock;
 } __no_randomize_layout;
 
-extern struct pv_info pv_info;
 extern struct paravirt_patch_template pv_ops;
 
 #define paravirt_ptr(op)        [paravirt_opptr] "m" (pv_ops.op)
@@ -497,17 +480,13 @@ extern struct paravirt_patch_template pv_ops;
        __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),    \
                     PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
 
-unsigned long paravirt_ret0(void);
 #ifdef CONFIG_PARAVIRT_XXL
-u64 _paravirt_ident_64(u64);
 unsigned long pv_native_save_fl(void);
 void pv_native_irq_disable(void);
 void pv_native_irq_enable(void);
 unsigned long pv_native_read_cr2(void);
 #endif
 
-#define paravirt_nop    ((void *)nop_func)
-
 #endif  /* __ASSEMBLER__ */
 
 #define ALT_NOT_XEN     ALT_NOT(X86_FEATURE_XENPV)

diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 50f75467f73d..fe2dab7d74e3 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -172,7 +172,7 @@ struct pt_regs {
 #endif /* !__i386__ */
 
 #ifdef CONFIG_PARAVIRT
-#include <asm/paravirt_types.h>
+#include <asm/paravirt-base.h>
 #endif
 
 #include <asm/proto.h>
-- 
2.51.0
* [PATCH v3 14/21] x86/paravirt: Move pv_native_*() prototypes to paravirt.c
From: Juergen Gross @ 2025-10-06  7:45 UTC
To: linux-kernel, x86, virtualization
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, Dave Hansen, H. Peter Anvin

The only reason the pv_native_*() prototypes are needed is the complete
definition of those functions via an asm() statement, which makes it
impossible to have those functions as static ones.

Move the prototypes from paravirt_types.h into paravirt.c, which is the
only source referencing the functions.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/paravirt_types.h | 7 -------
 arch/x86/kernel/paravirt.c            | 5 +++++
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 148d157e2a4a..1e50f13e6543 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -480,13 +480,6 @@ extern struct paravirt_patch_template pv_ops;
        __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),    \
                     PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
 
-#ifdef CONFIG_PARAVIRT_XXL
-unsigned long pv_native_save_fl(void);
-void pv_native_irq_disable(void);
-void pv_native_irq_enable(void);
-unsigned long pv_native_read_cr2(void);
-#endif
-
 #endif  /* __ASSEMBLER__ */
 
 #define ALT_NOT_XEN     ALT_NOT(X86_FEATURE_XENPV)

diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 4e37db8073f9..5dfbd3f55792 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -45,6 +45,11 @@ void __init default_banner(void)
 }
 
 #ifdef CONFIG_PARAVIRT_XXL
+unsigned long pv_native_save_fl(void);
+void pv_native_irq_disable(void);
+void pv_native_irq_enable(void);
+unsigned long pv_native_read_cr2(void);
+
 DEFINE_ASM_FUNC(_paravirt_ident_64, "mov %rdi, %rax", .text);
 DEFINE_ASM_FUNC(pv_native_save_fl, "pushf; pop %rax", .noinstr.text);
 DEFINE_ASM_FUNC(pv_native_irq_disable, "cli", .noinstr.text);
-- 
2.51.0
* [PATCH v3 19/21] x86/paravirt: Allow pv-calls outside paravirt.h
From: Juergen Gross @ 2025-10-06  7:46 UTC
To: linux-kernel, x86, virtualization
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, Dave Hansen, H. Peter Anvin

In order to prepare for defining paravirt functions outside of
paravirt.h, don't #undef the paravirt call macros.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt.h | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 62399f5d037d..ba6b14b6f36a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -588,22 +588,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 }
 #endif
 
-
-/* Make sure as little as possible of this mess escapes. */
-#undef PARAVIRT_CALL
-#undef __PVOP_CALL
-#undef __PVOP_VCALL
-#undef PVOP_VCALL0
-#undef PVOP_CALL0
-#undef PVOP_VCALL1
-#undef PVOP_CALL1
-#undef PVOP_VCALL2
-#undef PVOP_CALL2
-#undef PVOP_VCALL3
-#undef PVOP_CALL3
-#undef PVOP_VCALL4
-#undef PVOP_CALL4
-
 void native_pv_lock_init(void) __init;
 
 #else  /* __ASSEMBLER__ */
-- 
2.51.0
* [PATCH v3 20/21] x86/paravirt: Specify pv_ops array in paravirt macros
From: Juergen Gross @ 2025-10-06  7:46 UTC
To: linux-kernel, x86, virtualization
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
        Broadcom internal kernel review list, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, Dave Hansen, H. Peter Anvin

In order to prepare for having multiple pv_ops arrays, specify the
array in the paravirt macros.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/paravirt.h       | 166 +++++++++++++-------------
 arch/x86/include/asm/paravirt_types.h | 140 +++++++++++-----------
 2 files changed, 153 insertions(+), 153 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ba6b14b6f36a..ec274d13bae0 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -31,11 +31,11 @@ void __init paravirt_set_cap(void);
 /* The paravirtualized I/O functions */
 static inline void slow_down_io(void)
 {
-        PVOP_VCALL0(cpu.io_delay);
+        PVOP_VCALL0(pv_ops, cpu.io_delay);
 #ifdef REALLY_SLOW_IO
-        PVOP_VCALL0(cpu.io_delay);
-        PVOP_VCALL0(cpu.io_delay);
-        PVOP_VCALL0(cpu.io_delay);
+        PVOP_VCALL0(pv_ops, cpu.io_delay);
+        PVOP_VCALL0(pv_ops, cpu.io_delay);
+        PVOP_VCALL0(pv_ops, cpu.io_delay);
 #endif
 }
@@ -47,57 +47,57 @@ void native_flush_tlb_multi(const struct cpumask *cpumask,
 
 static inline void __flush_tlb_local(void)
 {
-        PVOP_VCALL0(mmu.flush_tlb_user);
+        PVOP_VCALL0(pv_ops, mmu.flush_tlb_user);
 }
 
 static inline void __flush_tlb_global(void)
 {
-        PVOP_VCALL0(mmu.flush_tlb_kernel);
+        PVOP_VCALL0(pv_ops, mmu.flush_tlb_kernel);
 }
 
 static inline void __flush_tlb_one_user(unsigned long addr)
 {
-        PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
+        PVOP_VCALL1(pv_ops, mmu.flush_tlb_one_user, addr);
 }
 
 static inline void __flush_tlb_multi(const struct cpumask *cpumask,
                                      const struct flush_tlb_info *info)
 {
-        PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
+        PVOP_VCALL2(pv_ops, mmu.flush_tlb_multi, cpumask, info);
 }
 
 static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 {
-        PVOP_VCALL1(mmu.exit_mmap, mm);
+        PVOP_VCALL1(pv_ops, mmu.exit_mmap, mm);
 }
 
 static inline void notify_page_enc_status_changed(unsigned long pfn,
                                                   int npages, bool enc)
 {
-        PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
+        PVOP_VCALL3(pv_ops, mmu.notify_page_enc_status_changed, pfn, npages, enc);
 }
 
 static __always_inline void arch_safe_halt(void)
 {
-        PVOP_VCALL0(irq.safe_halt);
+        PVOP_VCALL0(pv_ops, irq.safe_halt);
 }
 
 static inline void halt(void)
 {
-        PVOP_VCALL0(irq.halt);
+        PVOP_VCALL0(pv_ops, irq.halt);
 }
 
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
-        PVOP_VCALL1(cpu.load_sp0, sp0);
+        PVOP_VCALL1(pv_ops, cpu.load_sp0, sp0);
 }
 
 /* The paravirtualized CPUID instruction. */
 static inline void __cpuid(unsigned int *eax, unsigned int *ebx,
                            unsigned int *ecx, unsigned int *edx)
 {
-        PVOP_VCALL4(cpu.cpuid, eax, ebx, ecx, edx);
+        PVOP_VCALL4(pv_ops, cpu.cpuid, eax, ebx, ecx, edx);
 }
 
 /*
@@ -105,69 +105,69 @@
  */
 static __always_inline unsigned long paravirt_get_debugreg(int reg)
 {
-        return PVOP_CALL1(unsigned long, cpu.get_debugreg, reg);
+        return PVOP_CALL1(unsigned long, pv_ops, cpu.get_debugreg, reg);
 }
 #define get_debugreg(var, reg) var = paravirt_get_debugreg(reg)
 static __always_inline void set_debugreg(unsigned long val, int reg)
 {
-        PVOP_VCALL2(cpu.set_debugreg, reg, val);
+        PVOP_VCALL2(pv_ops, cpu.set_debugreg, reg, val);
 }
 
 static inline unsigned long read_cr0(void)
 {
-        return PVOP_CALL0(unsigned long, cpu.read_cr0);
+        return PVOP_CALL0(unsigned long, pv_ops, cpu.read_cr0);
 }
 
 static inline void write_cr0(unsigned long x)
 {
-        PVOP_VCALL1(cpu.write_cr0, x);
+        PVOP_VCALL1(pv_ops, cpu.write_cr0, x);
 }
 
 static __always_inline unsigned long read_cr2(void)
 {
-        return PVOP_ALT_CALLEE0(unsigned long, mmu.read_cr2,
+        return PVOP_ALT_CALLEE0(unsigned long, pv_ops, mmu.read_cr2,
                                "mov %%cr2, %%rax;", ALT_NOT_XEN);
 }
 
 static __always_inline void write_cr2(unsigned long x)
 {
-        PVOP_VCALL1(mmu.write_cr2, x);
+        PVOP_VCALL1(pv_ops, mmu.write_cr2, x);
 }
 
 static inline unsigned long __read_cr3(void)
 {
-        return PVOP_ALT_CALL0(unsigned long, mmu.read_cr3,
+        return PVOP_ALT_CALL0(unsigned long, pv_ops, mmu.read_cr3,
                              "mov %%cr3, %%rax;", ALT_NOT_XEN);
 }
 
 static inline void write_cr3(unsigned long x)
 {
-        PVOP_ALT_VCALL1(mmu.write_cr3, x, "mov %%rdi, %%cr3", ALT_NOT_XEN);
+        PVOP_ALT_VCALL1(pv_ops, mmu.write_cr3, x, "mov %%rdi, %%cr3", ALT_NOT_XEN);
 }
 
 static inline void __write_cr4(unsigned long x)
 {
-        PVOP_VCALL1(cpu.write_cr4, x);
+        PVOP_VCALL1(pv_ops, cpu.write_cr4, x);
 }
 
 static inline u64 paravirt_read_msr(u32 msr)
 {
-        return PVOP_CALL1(u64, cpu.read_msr, msr);
+        return PVOP_CALL1(u64, pv_ops, cpu.read_msr, msr);
 }
 
 static inline void paravirt_write_msr(u32 msr, u64 val)
 {
-        PVOP_VCALL2(cpu.write_msr, msr, val);
+        PVOP_VCALL2(pv_ops, cpu.write_msr, msr, val);
 }
 
 static inline int paravirt_read_msr_safe(u32 msr, u64 *val)
 {
-        return PVOP_CALL2(int, cpu.read_msr_safe, msr, val);
+        return PVOP_CALL2(int, pv_ops, cpu.read_msr_safe, msr, val);
 }
 
 static inline int paravirt_write_msr_safe(u32 msr, u64 val)
 {
-        return PVOP_CALL2(int, cpu.write_msr_safe, msr, val);
+        return PVOP_CALL2(int, pv_ops, cpu.write_msr_safe, msr, val);
 }
 
 #define rdmsr(msr, val1, val2)                  \
@@ -214,154 +214,154 @@ static __always_inline int rdmsrq_safe(u32 msr, u64 *p)
 
 static __always_inline u64 rdpmc(int counter)
 {
-        return PVOP_CALL1(u64, cpu.read_pmc, counter);
+        return PVOP_CALL1(u64, pv_ops, cpu.read_pmc, counter);
 }
 
 static inline void paravirt_alloc_ldt(struct desc_struct *ldt, unsigned entries)
 {
-        PVOP_VCALL2(cpu.alloc_ldt, ldt, entries);
+        PVOP_VCALL2(pv_ops, cpu.alloc_ldt, ldt, entries);
 }
 
 static inline void paravirt_free_ldt(struct desc_struct *ldt, unsigned entries)
 {
-        PVOP_VCALL2(cpu.free_ldt, ldt, entries);
+        PVOP_VCALL2(pv_ops, cpu.free_ldt, ldt, entries);
 }
 
 static inline void load_TR_desc(void)
 {
-        PVOP_VCALL0(cpu.load_tr_desc);
+        PVOP_VCALL0(pv_ops, cpu.load_tr_desc);
 }
 static inline void load_gdt(const struct desc_ptr *dtr)
 {
-        PVOP_VCALL1(cpu.load_gdt, dtr);
+        PVOP_VCALL1(pv_ops, cpu.load_gdt, dtr);
 }
 static inline void load_idt(const struct desc_ptr *dtr)
 {
-        PVOP_VCALL1(cpu.load_idt, dtr);
+        PVOP_VCALL1(pv_ops, cpu.load_idt, dtr);
 }
 static inline void set_ldt(const void *addr, unsigned entries)
 {
-        PVOP_VCALL2(cpu.set_ldt, addr, entries);
+        PVOP_VCALL2(pv_ops, cpu.set_ldt, addr, entries);
 }
 static inline unsigned long paravirt_store_tr(void)
 {
-        return PVOP_CALL0(unsigned long, cpu.store_tr);
+        return PVOP_CALL0(unsigned long, pv_ops, cpu.store_tr);
 }
 
 #define store_tr(tr)    ((tr) = paravirt_store_tr())
 static inline void load_TLS(struct thread_struct *t, unsigned cpu)
 {
-        PVOP_VCALL2(cpu.load_tls, t, cpu);
+        PVOP_VCALL2(pv_ops, cpu.load_tls, t, cpu);
 }
 
 static inline void load_gs_index(unsigned int gs)
 {
-        PVOP_VCALL1(cpu.load_gs_index, gs);
+        PVOP_VCALL1(pv_ops, cpu.load_gs_index, gs);
 }
 
 static inline void write_ldt_entry(struct desc_struct *dt, int entry,
                                    const void *desc)
 {
-        PVOP_VCALL3(cpu.write_ldt_entry, dt, entry, desc);
+        PVOP_VCALL3(pv_ops, cpu.write_ldt_entry, dt, entry, desc);
 }
 
 static inline void write_gdt_entry(struct desc_struct *dt, int entry,
                                    void *desc, int type)
 {
-        PVOP_VCALL4(cpu.write_gdt_entry, dt, entry, desc, type);
+        PVOP_VCALL4(pv_ops, cpu.write_gdt_entry, dt, entry, desc, type);
 }
 
 static inline void write_idt_entry(gate_desc *dt, int entry, const gate_desc *g)
 {
-        PVOP_VCALL3(cpu.write_idt_entry, dt, entry, g);
+        PVOP_VCALL3(pv_ops, cpu.write_idt_entry, dt, entry, g);
 }
 
 #ifdef CONFIG_X86_IOPL_IOPERM
 static inline void tss_invalidate_io_bitmap(void)
 {
-        PVOP_VCALL0(cpu.invalidate_io_bitmap);
+        PVOP_VCALL0(pv_ops, cpu.invalidate_io_bitmap);
 }
 
 static inline void tss_update_io_bitmap(void)
 {
-        PVOP_VCALL0(cpu.update_io_bitmap);
+        PVOP_VCALL0(pv_ops, cpu.update_io_bitmap);
 }
 #endif
 
 static inline void paravirt_enter_mmap(struct mm_struct *next)
 {
-        PVOP_VCALL1(mmu.enter_mmap, next);
+        PVOP_VCALL1(pv_ops, mmu.enter_mmap, next);
 }
 
 static inline int paravirt_pgd_alloc(struct mm_struct *mm)
 {
-        return PVOP_CALL1(int, mmu.pgd_alloc, mm);
+        return PVOP_CALL1(int, pv_ops, mmu.pgd_alloc, mm);
 }
 
 static inline void paravirt_pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-        PVOP_VCALL2(mmu.pgd_free, mm, pgd);
+        PVOP_VCALL2(pv_ops, mmu.pgd_free, mm, pgd);
 }
 
 static inline void paravirt_alloc_pte(struct mm_struct *mm, unsigned long pfn)
 {
-        PVOP_VCALL2(mmu.alloc_pte, mm, pfn);
+        PVOP_VCALL2(pv_ops, mmu.alloc_pte, mm, pfn);
 }
 static inline void paravirt_release_pte(unsigned long pfn)
 {
-        PVOP_VCALL1(mmu.release_pte, pfn);
+        PVOP_VCALL1(pv_ops, mmu.release_pte, pfn);
 }
 
 static inline void paravirt_alloc_pmd(struct mm_struct *mm, unsigned long pfn)
 {
-        PVOP_VCALL2(mmu.alloc_pmd, mm, pfn);
+        PVOP_VCALL2(pv_ops, mmu.alloc_pmd, mm, pfn);
 }
 
 static inline void paravirt_release_pmd(unsigned long pfn)
 {
-        PVOP_VCALL1(mmu.release_pmd, pfn);
+        PVOP_VCALL1(pv_ops, mmu.release_pmd, pfn);
 }
 
 static inline void paravirt_alloc_pud(struct mm_struct *mm, unsigned long pfn)
 {
-        PVOP_VCALL2(mmu.alloc_pud, mm, pfn);
+        PVOP_VCALL2(pv_ops, mmu.alloc_pud, mm, pfn);
 }
 static inline void paravirt_release_pud(unsigned long pfn)
 {
-        PVOP_VCALL1(mmu.release_pud, pfn);
+        PVOP_VCALL1(pv_ops, mmu.release_pud, pfn);
 }
 
 static inline void paravirt_alloc_p4d(struct mm_struct *mm, unsigned long pfn)
 {
-        PVOP_VCALL2(mmu.alloc_p4d, mm, pfn);
+        PVOP_VCALL2(pv_ops, mmu.alloc_p4d, mm, pfn);
 }
 
 static inline void paravirt_release_p4d(unsigned long pfn)
 {
-        PVOP_VCALL1(mmu.release_p4d, pfn);
+        PVOP_VCALL1(pv_ops, mmu.release_p4d, pfn);
 }
 
 static inline pte_t __pte(pteval_t val)
 {
-        return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, mmu.make_pte, val,
+        return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, pv_ops, mmu.make_pte, val,
                                          "mov %%rdi, %%rax", ALT_NOT_XEN) };
 }
 
 static inline pteval_t pte_val(pte_t pte)
 {
-        return PVOP_ALT_CALLEE1(pteval_t, mmu.pte_val, pte.pte,
+        return PVOP_ALT_CALLEE1(pteval_t, pv_ops, mmu.pte_val, pte.pte,
                                "mov %%rdi, %%rax", ALT_NOT_XEN);
 }
 
 static inline pgd_t __pgd(pgdval_t val)
 {
-        return (pgd_t) { PVOP_ALT_CALLEE1(pgdval_t, mmu.make_pgd, val,
+        return (pgd_t) { PVOP_ALT_CALLEE1(pgdval_t, pv_ops, mmu.make_pgd, val,
                                          "mov %%rdi, %%rax", ALT_NOT_XEN) };
 }
 
 static inline pgdval_t pgd_val(pgd_t pgd)
 {
-        return PVOP_ALT_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd,
+        return PVOP_ALT_CALLEE1(pgdval_t, pv_ops, mmu.pgd_val, pgd.pgd,
                                "mov %%rdi, %%rax", ALT_NOT_XEN);
 }
 
@@ -371,7 +371,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned
 {
        pteval_t ret;
 
-        ret = PVOP_CALL3(pteval_t, mmu.ptep_modify_prot_start, vma, addr, ptep);
+        ret = PVOP_CALL3(pteval_t, pv_ops, mmu.ptep_modify_prot_start, vma, addr, ptep);
 
        return (pte_t) { .pte = ret };
 }
@@ -380,41 +380,41 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned
                                           pte_t *ptep, pte_t old_pte, pte_t pte)
 {
-        PVOP_VCALL4(mmu.ptep_modify_prot_commit, vma, addr, ptep, pte.pte);
+        PVOP_VCALL4(pv_ops, mmu.ptep_modify_prot_commit, vma, addr, ptep, pte.pte);
 }
 
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-        PVOP_VCALL2(mmu.set_pte, ptep, pte.pte);
+        PVOP_VCALL2(pv_ops, mmu.set_pte, ptep, pte.pte);
 }
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
-        PVOP_VCALL2(mmu.set_pmd, pmdp, native_pmd_val(pmd));
+        PVOP_VCALL2(pv_ops, mmu.set_pmd, pmdp, native_pmd_val(pmd));
 }
 
 static inline pmd_t __pmd(pmdval_t val)
 {
-        return (pmd_t) { PVOP_ALT_CALLEE1(pmdval_t, mmu.make_pmd, val,
+        return (pmd_t) { PVOP_ALT_CALLEE1(pmdval_t, pv_ops, mmu.make_pmd, val,
                                          "mov %%rdi, %%rax", ALT_NOT_XEN) };
 }
 
 static inline pmdval_t pmd_val(pmd_t pmd)
 {
-        return PVOP_ALT_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd,
+        return PVOP_ALT_CALLEE1(pmdval_t, pv_ops, mmu.pmd_val, pmd.pmd,
                                "mov %%rdi, %%rax", ALT_NOT_XEN);
 }
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
-        PVOP_VCALL2(mmu.set_pud, pudp, native_pud_val(pud));
+        PVOP_VCALL2(pv_ops, mmu.set_pud, pudp, native_pud_val(pud));
 }
 
 static inline pud_t __pud(pudval_t val)
 {
        pudval_t ret;
 
-        ret = PVOP_ALT_CALLEE1(pudval_t, mmu.make_pud, val,
+        ret = PVOP_ALT_CALLEE1(pudval_t, pv_ops, mmu.make_pud, val,
                               "mov %%rdi, %%rax", ALT_NOT_XEN);
 
        return (pud_t) { ret };
@@ -422,7 +422,7 @@ static inline pud_t __pud(pudval_t val)
 
 static inline pudval_t pud_val(pud_t pud)
 {
-        return PVOP_ALT_CALLEE1(pudval_t, mmu.pud_val, pud.pud,
+        return PVOP_ALT_CALLEE1(pudval_t, pv_ops, mmu.pud_val, pud.pud,
                                "mov %%rdi, %%rax", ALT_NOT_XEN);
 }
 
@@ -435,12 +435,12 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
        p4dval_t val = native_p4d_val(p4d);
 
-        PVOP_VCALL2(mmu.set_p4d, p4dp, val);
+        PVOP_VCALL2(pv_ops, mmu.set_p4d, p4dp, val);
 }
 
 static inline p4d_t __p4d(p4dval_t val)
 {
-        p4dval_t ret = PVOP_ALT_CALLEE1(p4dval_t, mmu.make_p4d, val,
+        p4dval_t ret = PVOP_ALT_CALLEE1(p4dval_t, pv_ops, mmu.make_p4d, val,
                                        "mov %%rdi, %%rax", ALT_NOT_XEN);
 
        return (p4d_t) { ret };
@@ -448,13 +448,13 @@ static inline p4d_t __p4d(p4dval_t val)
 
 static inline p4dval_t p4d_val(p4d_t p4d)
 {
-        return PVOP_ALT_CALLEE1(p4dval_t, mmu.p4d_val, p4d.p4d,
+        return PVOP_ALT_CALLEE1(p4dval_t, pv_ops, mmu.p4d_val, p4d.p4d,
                                "mov %%rdi, %%rax", ALT_NOT_XEN);
 }
 
 static inline void __set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
-        PVOP_VCALL2(mmu.set_pgd, pgdp, native_pgd_val(pgd));
+        PVOP_VCALL2(pv_ops, mmu.set_pgd, pgdp, native_pgd_val(pgd));
 }
 
 #define set_pgd(pgdp, pgdval) do {                                      \
@@ -493,28 +493,28 @@ static inline void pmd_clear(pmd_t *pmdp)
 #define  __HAVE_ARCH_START_CONTEXT_SWITCH
 static inline void arch_start_context_switch(struct task_struct *prev)
 {
-        PVOP_VCALL1(cpu.start_context_switch, prev);
+        PVOP_VCALL1(pv_ops, cpu.start_context_switch, prev);
 }
 
 static inline void arch_end_context_switch(struct task_struct *next)
 {
-        PVOP_VCALL1(cpu.end_context_switch, next);
+        PVOP_VCALL1(pv_ops, cpu.end_context_switch, next);
 }
 
 #define  __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-        PVOP_VCALL0(mmu.lazy_mode.enter);
+        PVOP_VCALL0(pv_ops, mmu.lazy_mode.enter);
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-        PVOP_VCALL0(mmu.lazy_mode.leave);
+        PVOP_VCALL0(pv_ops, mmu.lazy_mode.leave);
 }
 
 static inline void arch_flush_lazy_mmu_mode(void)
 {
-        PVOP_VCALL0(mmu.lazy_mode.flush);
+        PVOP_VCALL0(pv_ops, mmu.lazy_mode.flush);
 }
 
 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
@@ -529,29 +529,29 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
                                                         u32 val)
 {
-        PVOP_VCALL2(lock.queued_spin_lock_slowpath, lock, val);
+        PVOP_VCALL2(pv_ops, lock.queued_spin_lock_slowpath, lock, val);
 }
 
 static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
 {
-        PVOP_ALT_VCALLEE1(lock.queued_spin_unlock, lock,
+        PVOP_ALT_VCALLEE1(pv_ops, lock.queued_spin_unlock, lock,
                          "movb $0, (%%" _ASM_ARG1 ");",
                          ALT_NOT(X86_FEATURE_PVUNLOCK));
 }
 
 static __always_inline void pv_wait(u8 *ptr, u8 val)
 {
-        PVOP_VCALL2(lock.wait, ptr, val);
+        PVOP_VCALL2(pv_ops, lock.wait, ptr, val);
 }
 
 static __always_inline void pv_kick(int cpu)
 {
-        PVOP_VCALL1(lock.kick, cpu);
+        PVOP_VCALL1(pv_ops, lock.kick, cpu);
 }
 
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
-        return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
+        return PVOP_ALT_CALLEE1(bool, pv_ops, lock.vcpu_is_preempted, cpu,
                                "xor %%" _ASM_AX ", %%" _ASM_AX ";",
                                ALT_NOT(X86_FEATURE_VCPUPREEMPT));
 }
@@ -564,18 +564,18 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
 #ifdef CONFIG_PARAVIRT_XXL
 static __always_inline unsigned long arch_local_save_flags(void)
 {
-        return PVOP_ALT_CALLEE0(unsigned long, irq.save_fl, "pushf; pop %%rax;",
+        return PVOP_ALT_CALLEE0(unsigned long, pv_ops, irq.save_fl, "pushf; pop %%rax;",
                                ALT_NOT_XEN);
 }
 
 static __always_inline void arch_local_irq_disable(void)
 {
-        PVOP_ALT_VCALLEE0(irq.irq_disable, "cli;", ALT_NOT_XEN);
+        PVOP_ALT_VCALLEE0(pv_ops, irq.irq_disable, "cli;", ALT_NOT_XEN);
 }
 
 static __always_inline void arch_local_irq_enable(void)
 {
-        PVOP_ALT_VCALLEE0(irq.irq_enable, "sti;", ALT_NOT_XEN);
+        PVOP_ALT_VCALLEE0(pv_ops, irq.irq_enable, "sti;", ALT_NOT_XEN);
 }
 
 static __always_inline unsigned long arch_local_irq_save(void)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 1e50f13e6543..01a485f1a7f1 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -212,7 +212,7 @@ struct paravirt_patch_template {
 
 extern struct paravirt_patch_template pv_ops;
 
-#define paravirt_ptr(op)        [paravirt_opptr] "m" (pv_ops.op)
+#define paravirt_ptr(array, op) [paravirt_opptr] "m" (array.op)
 
 /*
  * This generates an indirect call based on the
operation type number. @@ -362,19 +362,19 @@ extern struct paravirt_patch_template pv_ops; * feature is not active, the direct call is used as above via the * ALT_FLAG_DIRECT_CALL special case and the "always on" feature. */ -#define ____PVOP_CALL(ret, op, call_clbr, extra_clbr, ...) \ +#define ____PVOP_CALL(ret, array, op, call_clbr, extra_clbr, ...) \ ({ \ PVOP_CALL_ARGS; \ asm volatile(ALTERNATIVE(PARAVIRT_CALL, ALT_CALL_INSTR, \ ALT_CALL_ALWAYS) \ : call_clbr, ASM_CALL_CONSTRAINT \ - : paravirt_ptr(op), \ + : paravirt_ptr(array, op), \ ##__VA_ARGS__ \ : "memory", "cc" extra_clbr); \ ret; \ }) -#define ____PVOP_ALT_CALL(ret, op, alt, cond, call_clbr, \ +#define ____PVOP_ALT_CALL(ret, array, op, alt, cond, call_clbr, \ extra_clbr, ...) \ ({ \ PVOP_CALL_ARGS; \ @@ -382,102 +382,102 @@ extern struct paravirt_patch_template pv_ops; ALT_CALL_INSTR, ALT_CALL_ALWAYS, \ alt, cond) \ : call_clbr, ASM_CALL_CONSTRAINT \ - : paravirt_ptr(op), \ + : paravirt_ptr(array, op), \ ##__VA_ARGS__ \ : "memory", "cc" extra_clbr); \ ret; \ }) -#define __PVOP_CALL(rettype, op, ...) \ - ____PVOP_CALL(PVOP_RETVAL(rettype), op, \ +#define __PVOP_CALL(rettype, array, op, ...) \ + ____PVOP_CALL(PVOP_RETVAL(rettype), array, op, \ PVOP_CALL_CLOBBERS, EXTRA_CLOBBERS, ##__VA_ARGS__) -#define __PVOP_ALT_CALL(rettype, op, alt, cond, ...) \ - ____PVOP_ALT_CALL(PVOP_RETVAL(rettype), op, alt, cond, \ +#define __PVOP_ALT_CALL(rettype, array, op, alt, cond, ...) \ + ____PVOP_ALT_CALL(PVOP_RETVAL(rettype), array, op, alt, cond, \ PVOP_CALL_CLOBBERS, EXTRA_CLOBBERS, \ ##__VA_ARGS__) -#define __PVOP_CALLEESAVE(rettype, op, ...) \ - ____PVOP_CALL(PVOP_RETVAL(rettype), op.func, \ +#define __PVOP_CALLEESAVE(rettype, array, op, ...) \ + ____PVOP_CALL(PVOP_RETVAL(rettype), array, op.func, \ PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__) -#define __PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, ...) 
\ - ____PVOP_ALT_CALL(PVOP_RETVAL(rettype), op.func, alt, cond, \ +#define __PVOP_ALT_CALLEESAVE(rettype, array, op, alt, cond, ...) \ + ____PVOP_ALT_CALL(PVOP_RETVAL(rettype), array, op.func, alt, cond, \ PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__) -#define __PVOP_VCALL(op, ...) \ - (void)____PVOP_CALL(, op, PVOP_VCALL_CLOBBERS, \ +#define __PVOP_VCALL(array, op, ...) \ + (void)____PVOP_CALL(, array, op, PVOP_VCALL_CLOBBERS, \ VEXTRA_CLOBBERS, ##__VA_ARGS__) -#define __PVOP_ALT_VCALL(op, alt, cond, ...) \ - (void)____PVOP_ALT_CALL(, op, alt, cond, \ +#define __PVOP_ALT_VCALL(array, op, alt, cond, ...) \ + (void)____PVOP_ALT_CALL(, array, op, alt, cond, \ PVOP_VCALL_CLOBBERS, VEXTRA_CLOBBERS, \ ##__VA_ARGS__) -#define __PVOP_VCALLEESAVE(op, ...) \ - (void)____PVOP_CALL(, op.func, \ +#define __PVOP_VCALLEESAVE(array, op, ...) \ + (void)____PVOP_CALL(, array, op.func, \ PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__) -#define __PVOP_ALT_VCALLEESAVE(op, alt, cond, ...) \ - (void)____PVOP_ALT_CALL(, op.func, alt, cond, \ +#define __PVOP_ALT_VCALLEESAVE(array, op, alt, cond, ...) 
\ + (void)____PVOP_ALT_CALL(, array, op.func, alt, cond, \ PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__) -#define PVOP_CALL0(rettype, op) \ - __PVOP_CALL(rettype, op) -#define PVOP_VCALL0(op) \ - __PVOP_VCALL(op) -#define PVOP_ALT_CALL0(rettype, op, alt, cond) \ - __PVOP_ALT_CALL(rettype, op, alt, cond) -#define PVOP_ALT_VCALL0(op, alt, cond) \ - __PVOP_ALT_VCALL(op, alt, cond) - -#define PVOP_CALLEE0(rettype, op) \ - __PVOP_CALLEESAVE(rettype, op) -#define PVOP_VCALLEE0(op) \ - __PVOP_VCALLEESAVE(op) -#define PVOP_ALT_CALLEE0(rettype, op, alt, cond) \ - __PVOP_ALT_CALLEESAVE(rettype, op, alt, cond) -#define PVOP_ALT_VCALLEE0(op, alt, cond) \ - __PVOP_ALT_VCALLEESAVE(op, alt, cond) - - -#define PVOP_CALL1(rettype, op, arg1) \ - __PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1)) -#define PVOP_VCALL1(op, arg1) \ - __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1)) -#define PVOP_ALT_VCALL1(op, arg1, alt, cond) \ - __PVOP_ALT_VCALL(op, alt, cond, PVOP_CALL_ARG1(arg1)) - -#define PVOP_CALLEE1(rettype, op, arg1) \ - __PVOP_CALLEESAVE(rettype, op, PVOP_CALL_ARG1(arg1)) -#define PVOP_VCALLEE1(op, arg1) \ - __PVOP_VCALLEESAVE(op, PVOP_CALL_ARG1(arg1)) -#define PVOP_ALT_CALLEE1(rettype, op, arg1, alt, cond) \ - __PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, PVOP_CALL_ARG1(arg1)) -#define PVOP_ALT_VCALLEE1(op, arg1, alt, cond) \ - __PVOP_ALT_VCALLEESAVE(op, alt, cond, PVOP_CALL_ARG1(arg1)) - - -#define PVOP_CALL2(rettype, op, arg1, arg2) \ - __PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2)) -#define PVOP_VCALL2(op, arg1, arg2) \ - __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2)) - -#define PVOP_CALL3(rettype, op, arg1, arg2, arg3) \ - __PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1), \ +#define PVOP_CALL0(rettype, array, op) \ + __PVOP_CALL(rettype, array, op) +#define PVOP_VCALL0(array, op) \ + __PVOP_VCALL(array, op) +#define PVOP_ALT_CALL0(rettype, array, op, alt, cond) \ + __PVOP_ALT_CALL(rettype, array, op, alt, cond) +#define PVOP_ALT_VCALL0(array, op, alt, cond) \ 
+ __PVOP_ALT_VCALL(array, op, alt, cond) + +#define PVOP_CALLEE0(rettype, array, op) \ + __PVOP_CALLEESAVE(rettype, array, op) +#define PVOP_VCALLEE0(array, op) \ + __PVOP_VCALLEESAVE(array, op) +#define PVOP_ALT_CALLEE0(rettype, array, op, alt, cond) \ + __PVOP_ALT_CALLEESAVE(rettype, array, op, alt, cond) +#define PVOP_ALT_VCALLEE0(array, op, alt, cond) \ + __PVOP_ALT_VCALLEESAVE(array, op, alt, cond) + + +#define PVOP_CALL1(rettype, array, op, arg1) \ + __PVOP_CALL(rettype, array, op, PVOP_CALL_ARG1(arg1)) +#define PVOP_VCALL1(array, op, arg1) \ + __PVOP_VCALL(array, op, PVOP_CALL_ARG1(arg1)) +#define PVOP_ALT_VCALL1(array, op, arg1, alt, cond) \ + __PVOP_ALT_VCALL(array, op, alt, cond, PVOP_CALL_ARG1(arg1)) + +#define PVOP_CALLEE1(rettype, array, op, arg1) \ + __PVOP_CALLEESAVE(rettype, array, op, PVOP_CALL_ARG1(arg1)) +#define PVOP_VCALLEE1(array, op, arg1) \ + __PVOP_VCALLEESAVE(array, op, PVOP_CALL_ARG1(arg1)) +#define PVOP_ALT_CALLEE1(rettype, array, op, arg1, alt, cond) \ + __PVOP_ALT_CALLEESAVE(rettype, array, op, alt, cond, PVOP_CALL_ARG1(arg1)) +#define PVOP_ALT_VCALLEE1(array, op, arg1, alt, cond) \ + __PVOP_ALT_VCALLEESAVE(array, op, alt, cond, PVOP_CALL_ARG1(arg1)) + + +#define PVOP_CALL2(rettype, array, op, arg1, arg2) \ + __PVOP_CALL(rettype, array, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2)) +#define PVOP_VCALL2(array, op, arg1, arg2) \ + __PVOP_VCALL(array, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2)) + +#define PVOP_CALL3(rettype, array, op, arg1, arg2, arg3) \ + __PVOP_CALL(rettype, array, op, PVOP_CALL_ARG1(arg1), \ PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3)) -#define PVOP_VCALL3(op, arg1, arg2, arg3) \ - __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), \ +#define PVOP_VCALL3(array, op, arg1, arg2, arg3) \ + __PVOP_VCALL(array, op, PVOP_CALL_ARG1(arg1), \ PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3)) -#define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4) \ - __PVOP_CALL(rettype, op, \ +#define PVOP_CALL4(rettype, array, op, arg1, arg2, arg3, 
arg4) \ + __PVOP_CALL(rettype, array, op, \ PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2), \ PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4)) -#define PVOP_VCALL4(op, arg1, arg2, arg3, arg4) \ - __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2), \ +#define PVOP_VCALL4(array, op, arg1, arg2, arg3, arg4) \ + __PVOP_VCALL(array, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2), \ PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4)) #endif /* __ASSEMBLER__ */ -- 2.51.0 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
                   ` (14 preceding siblings ...)
  2025-10-06  7:46 ` [PATCH v3 20/21] x86/paravirt: Specify pv_ops array in paravirt macros Juergen Gross
@ 2025-10-06  7:46 ` Juergen Gross
  2025-10-15  8:53   ` kernel test robot
  2025-11-24  9:42 ` [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
  2026-02-20  4:10 ` patchwork-bot+linux-riscv
  17 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2025-10-06  7:46 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, kvm
Cc: Juergen Gross, K. Y. Srinivasan, Haiyang Zhang, Wei Liu,
	Dexuan Cui, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Paolo Bonzini,
	Vitaly Kuznetsov, Boris Ostrovsky, Josh Poimboeuf,
	Peter Zijlstra, xen-devel

Instead of having the pv spinlock function definitions in paravirt.h,
move them into the new header paravirt-spinlock.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use new header instead of qspinlock.h
- use dedicated pv_ops_lock array
- move more paravirt related lock code
V3:
- hide native_pv_lock_init() with CONFIG_SMP (kernel test robot)
---
 arch/x86/hyperv/hv_spinlock.c            |  10 +-
 arch/x86/include/asm/paravirt-spinlock.h | 146 +++++++++++++++++++++++
 arch/x86/include/asm/paravirt.h          |  61 ----------
 arch/x86/include/asm/paravirt_types.h    |  17 ---
 arch/x86/include/asm/qspinlock.h         |  89 ++------------
 arch/x86/kernel/Makefile                 |   2 +-
 arch/x86/kernel/kvm.c                    |  10 +-
 arch/x86/kernel/paravirt-spinlocks.c     |  26 +++-
 arch/x86/kernel/paravirt.c               |  21 ----
 arch/x86/xen/spinlock.c                  |  10 +-
 tools/objtool/check.c                    |   1 +
 11 files changed, 194 insertions(+), 199 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt-spinlock.h

diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 2a3c2afb0154..210b494e4de0 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -78,11 +78,11 @@ void __init hv_init_spinlocks(void)
 	pr_info("PV spinlocks enabled\n");

 	__pv_init_lock_hash();
-	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
-	pv_ops.lock.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
-	pv_ops.lock.wait = hv_qlock_wait;
-	pv_ops.lock.kick = hv_qlock_kick;
-	pv_ops.lock.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
+	pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+	pv_ops_lock.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
+	pv_ops_lock.wait = hv_qlock_wait;
+	pv_ops_lock.kick = hv_qlock_kick;
+	pv_ops_lock.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
 }

 static __init int hv_parse_nopvspin(char *arg)
diff --git a/arch/x86/include/asm/paravirt-spinlock.h b/arch/x86/include/asm/paravirt-spinlock.h
new file mode 100644
index 000000000000..ed3ed343903d
--- /dev/null
+++ b/arch/x86/include/asm/paravirt-spinlock.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_X86_PARAVIRT_SPINLOCK_H
+#define _ASM_X86_PARAVIRT_SPINLOCK_H
+
+#include <asm/paravirt_types.h>
+
+#ifdef CONFIG_SMP
+#include <asm/spinlock_types.h>
+#endif
+
+struct qspinlock;
+
+struct pv_lock_ops {
+	void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
+	struct paravirt_callee_save queued_spin_unlock;
+
+	void (*wait)(u8 *ptr, u8 val);
+	void (*kick)(int cpu);
+
+	struct paravirt_callee_save vcpu_is_preempted;
+} __no_randomize_layout;
+
+extern struct pv_lock_ops pv_ops_lock;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+void __init paravirt_set_cap(void);
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+extern bool nopvspin;
+
+static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
+							 u32 val)
+{
+	PVOP_VCALL2(pv_ops_lock, queued_spin_lock_slowpath, lock, val);
+}
+
+static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
+{
+	PVOP_ALT_VCALLEE1(pv_ops_lock, queued_spin_unlock, lock,
+			  "movb $0, (%%" _ASM_ARG1 ");",
+			  ALT_NOT(X86_FEATURE_PVUNLOCK));
+}
+
+static __always_inline bool pv_vcpu_is_preempted(long cpu)
+{
+	return PVOP_ALT_CALLEE1(bool, pv_ops_lock, vcpu_is_preempted, cpu,
+				"xor %%" _ASM_AX ", %%" _ASM_AX ";",
+				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
+}
+
+#define queued_spin_unlock queued_spin_unlock
+/**
+ * queued_spin_unlock - release a queued spinlock
+ * @lock : Pointer to queued spinlock structure
+ *
+ * A smp_store_release() on the least-significant byte.
+ */
+static inline void native_queued_spin_unlock(struct qspinlock *lock)
+{
+	smp_store_release(&lock->locked, 0);
+}
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	pv_queued_spin_lock_slowpath(lock, val);
+}
+
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	kcsan_release();
+	pv_queued_spin_unlock(lock);
+}
+
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(long cpu)
+{
+	return pv_vcpu_is_preempted(cpu);
+}
+
+static __always_inline void pv_wait(u8 *ptr, u8 val)
+{
+	PVOP_VCALL2(pv_ops_lock, wait, ptr, val);
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+	PVOP_VCALL1(pv_ops_lock, kick, cpu);
+}
+
+void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
+bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+void __init native_pv_lock_init(void);
+__visible void __native_queued_spin_unlock(struct qspinlock *lock);
+bool pv_is_native_spin_unlock(void);
+__visible bool __native_vcpu_is_preempted(long cpu);
+bool pv_is_native_vcpu_is_preempted(void);
+
+/*
+ * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
+ *
+ * Native (and PV wanting native due to vCPU pinning) should keep this key
+ * disabled. Native does not touch the key.
+ *
+ * When in a guest then native_pv_lock_init() enables the key first and
+ * KVM/XEN might conditionally disable it later in the boot process again.
+ */
+DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
+
+/*
+ * Shortcut for the queued_spin_lock_slowpath() function that allows
+ * virt to hijack it.
+ *
+ * Returns:
+ *   true - lock has been negotiated, all done;
+ *   false - queued_spin_lock_slowpath() will do its thing.
+ */
+#define virt_spin_lock virt_spin_lock
+static inline bool virt_spin_lock(struct qspinlock *lock)
+{
+	int val;
+
+	if (!static_branch_likely(&virt_spin_lock_key))
+		return false;
+
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+
+ __retry:
+	val = atomic_read(&lock->val);
+
+	if (val || !atomic_try_cmpxchg(&lock->val, &val, _Q_LOCKED_VAL)) {
+		cpu_relax();
+		goto __retry;
+	}
+
+	return true;
+}
+
+#endif /* _ASM_X86_PARAVIRT_SPINLOCK_H */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ec274d13bae0..b21072af731d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -19,15 +19,6 @@
 #include <linux/cpumask.h>
 #include <asm/frame.h>

-__visible void __native_queued_spin_unlock(struct qspinlock *lock);
-bool pv_is_native_spin_unlock(void);
-__visible bool __native_vcpu_is_preempted(long cpu);
-bool pv_is_native_vcpu_is_preempted(void);
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-void __init paravirt_set_cap(void);
-#endif
-
 /* The paravirtualized I/O functions */
 static inline void slow_down_io(void)
 {
@@ -522,46 +513,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 {
 	pv_ops.mmu.set_fixmap(idx, phys, flags);
 }
-#endif
-
-#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
-static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
-							 u32 val)
-{
-	PVOP_VCALL2(pv_ops, lock.queued_spin_lock_slowpath, lock, val);
-}
-
-static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
-{
-	PVOP_ALT_VCALLEE1(pv_ops, lock.queued_spin_unlock, lock,
-			  "movb $0, (%%" _ASM_ARG1 ");",
-			  ALT_NOT(X86_FEATURE_PVUNLOCK));
-}
-
-static __always_inline void pv_wait(u8 *ptr, u8 val)
-{
-	PVOP_VCALL2(pv_ops, lock.wait, ptr, val);
-}
-
-static __always_inline void pv_kick(int cpu)
-{
-	PVOP_VCALL1(pv_ops, lock.kick, cpu);
-}
-
-static __always_inline bool pv_vcpu_is_preempted(long cpu)
-{
-	return PVOP_ALT_CALLEE1(bool, pv_ops, lock.vcpu_is_preempted, cpu,
-				"xor %%" _ASM_AX ", %%" _ASM_AX ";",
-				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
-}
-
-void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
-bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
-
-#endif /* SMP && PARAVIRT_SPINLOCKS */
-
-#ifdef CONFIG_PARAVIRT_XXL
 static __always_inline unsigned long arch_local_save_flags(void)
 {
 	return PVOP_ALT_CALLEE0(unsigned long, pv_ops, irq.save_fl, "pushf; pop %%rax;",
@@ -588,8 +540,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 }
 #endif

-void native_pv_lock_init(void) __init;
-
 #else  /* __ASSEMBLER__ */

 #ifdef CONFIG_X86_64
@@ -613,12 +563,6 @@ void native_pv_lock_init(void) __init;
 #endif /* __ASSEMBLER__ */
 #else  /* CONFIG_PARAVIRT */
 # define default_banner x86_init_noop
-
-#ifndef __ASSEMBLER__
-static inline void native_pv_lock_init(void)
-{
-}
-#endif
 #endif /* !CONFIG_PARAVIRT */

 #ifndef __ASSEMBLER__
@@ -634,10 +578,5 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 }
 #endif

-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-static inline void paravirt_set_cap(void)
-{
-}
-#endif
 #endif /* __ASSEMBLER__ */
 #endif /* _ASM_X86_PARAVIRT_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 01a485f1a7f1..e2b487d35d14 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -184,22 +184,6 @@ struct pv_mmu_ops {
 #endif
 } __no_randomize_layout;

-#ifdef CONFIG_SMP
-#include <asm/spinlock_types.h>
-#endif
-
-struct qspinlock;
-
-struct pv_lock_ops {
-	void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
-	struct paravirt_callee_save queued_spin_unlock;
-
-	void (*wait)(u8 *ptr, u8 val);
-	void (*kick)(int cpu);
-
-	struct paravirt_callee_save vcpu_is_preempted;
-} __no_randomize_layout;
-
 /* This contains all the paravirt structures: we get a convenient
  * number for each function using the offset which we use to indicate
  * what to patch. */
@@ -207,7 +191,6 @@ struct paravirt_patch_template {
 	struct pv_cpu_ops	cpu;
 	struct pv_irq_ops	irq;
 	struct pv_mmu_ops	mmu;
-	struct pv_lock_ops	lock;
 } __no_randomize_layout;

 extern struct paravirt_patch_template pv_ops;
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 68da67df304d..a2668bdf4c84 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,6 +7,9 @@
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
 #include <asm/rmwcc.h>
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt-spinlock.h>
+#endif

 #define _Q_PENDING_LOOPS	(1 << 9)

@@ -27,89 +30,13 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
 	return val;
 }

-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __pv_init_lock_hash(void);
-extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
-extern bool nopvspin;
-
-#define queued_spin_unlock queued_spin_unlock
-/**
- * queued_spin_unlock - release a queued spinlock
- * @lock : Pointer to queued spinlock structure
- *
- * A smp_store_release() on the least-significant byte.
- */
-static inline void native_queued_spin_unlock(struct qspinlock *lock)
-{
-	smp_store_release(&lock->locked, 0);
-}
-
-static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
-{
-	pv_queued_spin_lock_slowpath(lock, val);
-}
-
-static inline void queued_spin_unlock(struct qspinlock *lock)
-{
-	kcsan_release();
-	pv_queued_spin_unlock(lock);
-}
-
-#define vcpu_is_preempted vcpu_is_preempted
-static inline bool vcpu_is_preempted(long cpu)
-{
-	return pv_vcpu_is_preempted(cpu);
-}
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+static inline void paravirt_set_cap(void) { }
 #endif

-#ifdef CONFIG_PARAVIRT
-/*
- * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
- *
- * Native (and PV wanting native due to vCPU pinning) should keep this key
- * disabled. Native does not touch the key.
- *
- * When in a guest then native_pv_lock_init() enables the key first and
- * KVM/XEN might conditionally disable it later in the boot process again.
- */
-DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
-
-/*
- * Shortcut for the queued_spin_lock_slowpath() function that allows
- * virt to hijack it.
- *
- * Returns:
- *   true - lock has been negotiated, all done;
- *   false - queued_spin_lock_slowpath() will do its thing.
- */
-#define virt_spin_lock virt_spin_lock
-static inline bool virt_spin_lock(struct qspinlock *lock)
-{
-	int val;
-
-	if (!static_branch_likely(&virt_spin_lock_key))
-		return false;
-
-	/*
-	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
-	 * back to a Test-and-Set spinlock, because fair locks have
-	 * horrible lock 'holder' preemption issues.
-	 */
-
- __retry:
-	val = atomic_read(&lock->val);
-
-	if (val || !atomic_try_cmpxchg(&lock->val, &val, _Q_LOCKED_VAL)) {
-		cpu_relax();
-		goto __retry;
-	}
-
-	return true;
-}
-
-#endif /* CONFIG_PARAVIRT */
+#ifndef CONFIG_PARAVIRT
+static inline void native_pv_lock_init(void) { }
+#endif

 #include <asm-generic/qspinlock.h>

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index bc184dd38d99..e9aeeeafad17 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -126,7 +126,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST)	+= nmi_selftest.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o
-obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT)		+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index d54fd2bc0402..47426538b579 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -824,7 +824,7 @@ static void __init kvm_guest_init(void)
 		has_steal_clock = 1;
 		static_call_update(pv_steal_clock, kvm_steal_clock);

-		pv_ops.lock.vcpu_is_preempted =
+		pv_ops_lock.vcpu_is_preempted =
 			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
 	}

@@ -1121,11 +1121,11 @@ void __init kvm_spinlock_init(void)
 	pr_info("PV spinlocks enabled\n");

 	__pv_init_lock_hash();
-	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
-	pv_ops.lock.queued_spin_unlock =
+	pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+	pv_ops_lock.queued_spin_unlock =
 		PV_CALLEE_SAVE(__pv_queued_spin_unlock);
-	pv_ops.lock.wait = kvm_wait;
-	pv_ops.lock.kick = kvm_kick_cpu;
+	pv_ops_lock.wait = kvm_wait;
+	pv_ops_lock.kick = kvm_kick_cpu;

 	/*
 	 * When PV spinlock is enabled which is preferred over
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 9e1ea99ad9df..95452444868f 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -3,12 +3,22 @@
  * Split spinlock implementation out into its own file, so it can be
  * compiled in a FTRACE-compatible way.
  */
+#include <linux/static_call.h>
 #include <linux/spinlock.h>
 #include <linux/export.h>
 #include <linux/jump_label.h>

-#include <asm/paravirt.h>
+DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);

+#ifdef CONFIG_SMP
+void __init native_pv_lock_init(void)
+{
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		static_branch_enable(&virt_spin_lock_key);
+}
+#endif
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 __visible void __native_queued_spin_unlock(struct qspinlock *lock)
 {
 	native_queued_spin_unlock(lock);
@@ -17,7 +27,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__native_queued_spin_unlock);

 bool pv_is_native_spin_unlock(void)
 {
-	return pv_ops.lock.queued_spin_unlock.func ==
+	return pv_ops_lock.queued_spin_unlock.func ==
 	       __raw_callee_save___native_queued_spin_unlock;
 }

@@ -29,7 +39,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__native_vcpu_is_preempted);

 bool pv_is_native_vcpu_is_preempted(void)
 {
-	return pv_ops.lock.vcpu_is_preempted.func ==
+	return pv_ops_lock.vcpu_is_preempted.func ==
 	       __raw_callee_save___native_vcpu_is_preempted;
 }

@@ -41,3 +51,13 @@ void __init paravirt_set_cap(void)
 	if (!pv_is_native_vcpu_is_preempted())
 		setup_force_cpu_cap(X86_FEATURE_VCPUPREEMPT);
 }
+
+struct pv_lock_ops pv_ops_lock = {
+	.queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,
+	.queued_spin_unlock = PV_CALLEE_SAVE(__native_queued_spin_unlock),
+	.wait = paravirt_nop,
+	.kick = paravirt_nop,
+	.vcpu_is_preempted = PV_CALLEE_SAVE(__native_vcpu_is_preempted),
+};
+EXPORT_SYMBOL(pv_ops_lock);
+#endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5dfbd3f55792..a6ed52cae003 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -57,14 +57,6 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
 DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
 #endif

-DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
-
-void __init native_pv_lock_init(void)
-{
-	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		static_branch_enable(&virt_spin_lock_key);
-}
-
 static noinstr void pv_native_safe_halt(void)
 {
 	native_safe_halt();
@@ -221,19 +213,6 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.set_fixmap = native_set_fixmap,
 #endif /* CONFIG_PARAVIRT_XXL */
-
-#if defined(CONFIG_PARAVIRT_SPINLOCKS)
-	/* Lock ops. */
-#ifdef CONFIG_SMP
-	.lock.queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,
-	.lock.queued_spin_unlock =
-		PV_CALLEE_SAVE(__native_queued_spin_unlock),
-	.lock.wait = paravirt_nop,
-	.lock.kick = paravirt_nop,
-	.lock.vcpu_is_preempted =
-		PV_CALLEE_SAVE(__native_vcpu_is_preempted),
-#endif /* SMP */
-#endif
 };

 #ifdef CONFIG_PARAVIRT_XXL
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index fe56646d6919..83ac24ead289 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -134,10 +134,10 @@ void __init xen_init_spinlocks(void)
 	printk(KERN_DEBUG "xen: PV spinlocks enabled\n");
 	__pv_init_lock_hash();

-	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
-	pv_ops.lock.queued_spin_unlock =
+	pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+	pv_ops_lock.queued_spin_unlock =
 		PV_CALLEE_SAVE(__pv_queued_spin_unlock);
-	pv_ops.lock.wait = xen_qlock_wait;
-	pv_ops.lock.kick = xen_qlock_kick;
-	pv_ops.lock.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+	pv_ops_lock.wait = xen_qlock_wait;
+	pv_ops_lock.kick = xen_qlock_kick;
+	pv_ops_lock.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
 }
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 1675c16c3793..663fa5f281bd 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -549,6 +549,7 @@ static struct {
 	int idx_off;
 } pv_ops_tables[] = {
 	{ .name = "pv_ops", },
+	{ .name = "pv_ops_lock", },
 	{ .name = NULL, .idx_off = -1 }
 };
--
2.51.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
  2025-10-06  7:46 ` [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
@ 2025-10-15  8:53   ` kernel test robot
  0 siblings, 0 replies; 24+ messages in thread
From: kernel test robot @ 2025-10-15  8:53 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, x86, linux-hyperv, virtualization, kvm
Cc: oe-kbuild-all, Juergen Gross, K. Y. Srinivasan, Haiyang Zhang,
	Wei Liu, Dexuan Cui, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Ajay Kaher,
	Alexey Makhalov, Broadcom internal kernel review list,
	Paolo Bonzini, Vitaly Kuznetsov, Boris Ostrovsky,
	Josh Poimboeuf, Peter Zijlstra, xen-devel

Hi Juergen,

kernel test robot noticed the following build errors:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on kvm/queue kvm/next linus/master v6.18-rc1 next-20251014]
[cannot apply to tip/x86/core kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Juergen-Gross/x86-paravirt-Remove-not-needed-includes-of-paravirt-h/20251010-094850
base:   tip/sched/core
patch link:    https://lore.kernel.org/r/20251006074606.1266-22-jgross%40suse.com
patch subject: [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
config: x86_64-randconfig-001-20251015 (https://download.01.org/0day-ci/archive/20251015/202510151611.uYXVunzo-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251015/202510151611.uYXVunzo-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510151611.uYXVunzo-lkp@intel.com/

All errors (new ones prefixed by >>):

   ld: vmlinux.o: in function `kvm_guest_init':
   arch/x86/kernel/kvm.c:828:(.init.text+0x440f4): undefined reference to `pv_ops_lock'
>> ld: arch/x86/kernel/kvm.c:828:(.init.text+0x4410e): undefined reference to `pv_ops_lock'
   ld: arch/x86/kernel/kvm.c:828:(.init.text+0x4411a): undefined reference to `pv_ops_lock'

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 00/21] paravirt: cleanup and reorg
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
                   ` (15 preceding siblings ...)
  2025-10-06  7:46 ` [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
@ 2025-11-24  9:42 ` Juergen Gross
  2026-02-20  4:10 ` patchwork-bot+linux-riscv
  17 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2025-11-24 9:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, loongarch,
	linuxppc-dev, linux-riscv, kvm
Cc: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, K. Y. Srinivasan, Haiyang Zhang,
	Wei Liu, Dexuan Cui, Peter Zijlstra, Will Deacon, Boqun Feng,
	Waiman Long, Jiri Kosina, Josh Poimboeuf, Pawan Gupta,
	Boris Ostrovsky, xen-devel, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list, Russell King,
	Catalin Marinas, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Valentin Schneider, linux-arm-kernel,
	Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini,
	Oleksandr Tyshchenko, Daniel Lezcano, Oleg Nesterov

[-- Attachment #1.1.1: Type: text/plain, Size: 7778 bytes --]

Ping?

I think at least the first 12 patches can just go in. The others still
lack review.


Juergen

On 06.10.25 09:45, Juergen Gross wrote:
> Some cleanups and reorg of paravirt code and headers:
> 
> - The first 2 patches should be not controversial at all, as they
>   remove just some no longer needed #include and struct forward
>   declarations.
> 
> - The 3rd patch is removing CONFIG_PARAVIRT_DEBUG, which IMO has
>   no real value, as it just changes a crash to a BUG() (the stack
>   trace will basically be the same). As the maintainer of the main
>   paravirt user (Xen) I have never seen this crash/BUG() to happen.
> 
> - The 4th patch is just a movement of code.
> 
> - I don't know for what reason asm/paravirt_api_clock.h was added,
>   as all archs supporting it do it exactly in the same way. Patch
>   5 is removing it.
> 
> - Patches 6-14 are streamlining the paravirt clock interfaces by
>   using a common implementation across architectures where possible
>   and by moving the related code into common sched code, as this is
>   where it should live.
> 
> - Patches 15-20 are more like RFC material preparing the paravirt
>   infrastructure to support multiple pv_ops function arrays.
>   As a prerequisite for that it makes life in objtool much easier
>   with dropping the Xen static initializers of the pv_ops sub-
>   structures, which is done in patches 15-17.
>   Patches 18-20 are doing the real preparations for multiple pv_ops
>   arrays and using those arrays in multiple headers.
> 
> - Patch 21 is an example how the new scheme can look like using the
>   PV-spinlocks.
> 
> Changes in V2:
> - new patches 13-18 and 20
> - complete rework of patch 21
> 
> Changes in V3:
> - fixed 2 issues detected by kernel test robot
> 
> Juergen Gross (21):
>   x86/paravirt: Remove not needed includes of paravirt.h
>   x86/paravirt: Remove some unneeded struct declarations
>   x86/paravirt: Remove PARAVIRT_DEBUG config option
>   x86/paravirt: Move thunk macros to paravirt_types.h
>   paravirt: Remove asm/paravirt_api_clock.h
>   sched: Move clock related paravirt code to kernel/sched
>   arm/paravirt: Use common code for paravirt_steal_clock()
>   arm64/paravirt: Use common code for paravirt_steal_clock()
>   loongarch/paravirt: Use common code for paravirt_steal_clock()
>   riscv/paravirt: Use common code for paravirt_steal_clock()
>   x86/paravirt: Use common code for paravirt_steal_clock()
>   x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
>   x86/paravirt: Introduce new paravirt-base.h header
>   x86/paravirt: Move pv_native_*() prototypes to paravirt.c
>   x86/xen: Drop xen_irq_ops
>   x86/xen: Drop xen_cpu_ops
>   x86/xen: Drop xen_mmu_ops
>   objtool: Allow multiple pv_ops arrays
>   x86/paravirt: Allow pv-calls outside paravirt.h
>   x86/paravirt: Specify pv_ops array in paravirt macros
>   x86/pvlocks: Move paravirt spinlock functions into own header
> 
>  arch/Kconfig                                   |   3 +
>  arch/arm/Kconfig                               |   1 +
>  arch/arm/include/asm/paravirt.h                |  22 --
>  arch/arm/include/asm/paravirt_api_clock.h      |   1 -
>  arch/arm/kernel/Makefile                       |   1 -
>  arch/arm/kernel/paravirt.c                     |  23 --
>  arch/arm64/Kconfig                             |   1 +
>  arch/arm64/include/asm/paravirt.h              |  14 -
>  arch/arm64/include/asm/paravirt_api_clock.h    |   1 -
>  arch/arm64/kernel/paravirt.c                   |  11 +-
>  arch/loongarch/Kconfig                         |   1 +
>  arch/loongarch/include/asm/paravirt.h          |  13 -
>  .../include/asm/paravirt_api_clock.h           |   1 -
>  arch/loongarch/kernel/paravirt.c               |  10 +-
>  arch/powerpc/include/asm/paravirt.h            |   3 -
>  arch/powerpc/include/asm/paravirt_api_clock.h  |   2 -
>  arch/powerpc/platforms/pseries/setup.c         |   4 +-
>  arch/riscv/Kconfig                             |   1 +
>  arch/riscv/include/asm/paravirt.h              |  14 -
>  arch/riscv/include/asm/paravirt_api_clock.h    |   1 -
>  arch/riscv/kernel/paravirt.c                   |  11 +-
>  arch/x86/Kconfig                               |   8 +-
>  arch/x86/entry/entry_64.S                      |   1 -
>  arch/x86/entry/vsyscall/vsyscall_64.c          |   1 -
>  arch/x86/hyperv/hv_spinlock.c                  |  11 +-
>  arch/x86/include/asm/apic.h                    |   4 -
>  arch/x86/include/asm/highmem.h                 |   1 -
>  arch/x86/include/asm/mshyperv.h                |   1 -
>  arch/x86/include/asm/paravirt-base.h           |  29 ++
>  arch/x86/include/asm/paravirt-spinlock.h       | 146 ++++++++
>  arch/x86/include/asm/paravirt.h                | 331 +++++-------------
>  arch/x86/include/asm/paravirt_api_clock.h      |   1 -
>  arch/x86/include/asm/paravirt_types.h          | 269 +++++++-------
>  arch/x86/include/asm/pgtable_32.h              |   1 -
>  arch/x86/include/asm/ptrace.h                  |   2 +-
>  arch/x86/include/asm/qspinlock.h               |  89 +----
>  arch/x86/include/asm/spinlock.h                |   1 -
>  arch/x86/include/asm/timer.h                   |   1 +
>  arch/x86/include/asm/tlbflush.h                |   4 -
>  arch/x86/kernel/Makefile                       |   2 +-
>  arch/x86/kernel/apm_32.c                       |   1 -
>  arch/x86/kernel/callthunks.c                   |   1 -
>  arch/x86/kernel/cpu/bugs.c                     |   1 -
>  arch/x86/kernel/cpu/vmware.c                   |   1 +
>  arch/x86/kernel/kvm.c                          |  11 +-
>  arch/x86/kernel/kvmclock.c                     |   1 +
>  arch/x86/kernel/paravirt-spinlocks.c           |  26 +-
>  arch/x86/kernel/paravirt.c                     |  42 +--
>  arch/x86/kernel/tsc.c                          |  10 +-
>  arch/x86/kernel/vsmp_64.c                      |   1 -
>  arch/x86/kernel/x86_init.c                     |   1 -
>  arch/x86/lib/cache-smp.c                       |   1 -
>  arch/x86/mm/init.c                             |   1 -
>  arch/x86/xen/enlighten_pv.c                    |  82 ++---
>  arch/x86/xen/irq.c                             |  20 +-
>  arch/x86/xen/mmu_pv.c                          | 100 ++----
>  arch/x86/xen/spinlock.c                        |  11 +-
>  arch/x86/xen/time.c                            |   2 +
>  drivers/clocksource/hyperv_timer.c             |   2 +
>  drivers/xen/time.c                             |   2 +-
>  include/linux/sched/cputime.h                  |  18 +
>  kernel/sched/core.c                            |   5 +
>  kernel/sched/cputime.c                         |  13 +
>  kernel/sched/sched.h                           |   3 +-
>  tools/objtool/arch/x86/decode.c                |   8 +-
>  tools/objtool/check.c                          |  78 ++++-
>  tools/objtool/include/objtool/check.h          |   2 +
>  67 files changed, 659 insertions(+), 827 deletions(-)
>  delete mode 100644 arch/arm/include/asm/paravirt.h
>  delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
>  delete mode 100644 arch/arm/kernel/paravirt.c
>  delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
>  delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
>  delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
>  delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
>  create mode 100644 arch/x86/include/asm/paravirt-base.h
>  create mode 100644 arch/x86/include/asm/paravirt-spinlock.h
>  delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 3743 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]
* Re: [PATCH v3 00/21] paravirt: cleanup and reorg
  2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
                   ` (16 preceding siblings ...)
  2025-11-24  9:42 ` [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
@ 2026-02-20  4:10 ` patchwork-bot+linux-riscv
  17 siblings, 0 replies; 24+ messages in thread
From: patchwork-bot+linux-riscv @ 2026-02-20 4:10 UTC (permalink / raw)
To: Jürgen Groß <jgross@suse.com>
Cc: linux-riscv, linux-kernel, x86, linux-hyperv, virtualization,
	loongarch, linuxppc-dev, kvm, luto, tglx, mingo, bp, dave.hansen,
	hpa, kys, haiyangz, wei.liu, decui, peterz, will, boqun.feng,
	longman, jikos, jpoimboe, pawan.kumar.gupta, boris.ostrovsky,
	xen-devel, ajay.kaher, alexey.makhalov, bcm-kernel-feedback-list,
	linux, catalin.marinas, chenhuacai, kernel, maddy, mpe, npiggin,
	christophe.leroy, pjw, palmer, aou, alex, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, linux-arm-kernel, pbonzini, vkuznets, sstabellini,
	oleksandr_tyshchenko, daniel.lezcano, oleg

Hello:

This series was applied to riscv/linux.git (fixes)
by Borislav Petkov (AMD) <bp@alien8.de>:

On Mon,  6 Oct 2025 09:45:45 +0200 you wrote:
> Some cleanups and reorg of paravirt code and headers:
> 
> - The first 2 patches should be not controversial at all, as they
>   remove just some no longer needed #include and struct forward
>   declarations.
> 
> - The 3rd patch is removing CONFIG_PARAVIRT_DEBUG, which IMO has
>   no real value, as it just changes a crash to a BUG() (the stack
>   trace will basically be the same). As the maintainer of the main
>   paravirt user (Xen) I have never seen this crash/BUG() to happen.
> 
> [...]
Here is the summary with links:
  - [v3,05/21] paravirt: Remove asm/paravirt_api_clock.h
    https://git.kernel.org/riscv/c/68b10fd40d49
  - [v3,06/21] sched: Move clock related paravirt code to kernel/sched
    (no matching commit)
  - [v3,10/21] riscv/paravirt: Use common code for paravirt_steal_clock()
    https://git.kernel.org/riscv/c/ee9ffcf99f07

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2026-02-20  4:10 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2025-10-06  7:45 [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
2025-10-06  7:45 ` [PATCH v3 02/21] x86/paravirt: Remove some unneeded struct declarations Juergen Gross
2025-10-23 12:14   ` Borislav Petkov
2025-10-06  7:45 ` [PATCH v3 03/21] x86/paravirt: Remove PARAVIRT_DEBUG config option Juergen Gross
2025-10-06  7:45 ` [PATCH v3 04/21] x86/paravirt: Move thunk macros to paravirt_types.h Juergen Gross
2025-10-06  7:45 ` [PATCH v3 05/21] paravirt: Remove asm/paravirt_api_clock.h Juergen Gross
2025-10-15 16:02   ` Shrikanth Hegde
2025-10-06  7:45 ` [PATCH v3 06/21] sched: Move clock related paravirt code to kernel/sched Juergen Gross
2026-01-07 22:48   ` Alexey Makhalov
2025-10-06  7:45 ` [PATCH v3 07/21] arm/paravirt: Use common code for paravirt_steal_clock() Juergen Gross
2025-10-06  7:45 ` [PATCH v3 08/21] arm64/paravirt: " Juergen Gross
2025-10-06  7:45 ` [PATCH v3 09/21] loongarch/paravirt: " Juergen Gross
2025-11-24  9:52   ` Bibo Mao
2025-10-06  7:45 ` [PATCH v3 10/21] riscv/paravirt: " Juergen Gross
2025-10-06  7:45 ` [PATCH v3 11/21] x86/paravirt: " Juergen Gross
2025-10-06  7:45 ` [PATCH v3 12/21] x86/paravirt: Move paravirt_sched_clock() related code into tsc.c Juergen Gross
2025-10-06  7:45 ` [PATCH v3 13/21] x86/paravirt: Introduce new paravirt-base.h header Juergen Gross
2025-10-06  7:45 ` [PATCH v3 14/21] x86/paravirt: Move pv_native_*() prototypes to paravirt.c Juergen Gross
2025-10-06  7:46 ` [PATCH v3 19/21] x86/paravirt: Allow pv-calls outside paravirt.h Juergen Gross
2025-10-06  7:46 ` [PATCH v3 20/21] x86/paravirt: Specify pv_ops array in paravirt macros Juergen Gross
2025-10-06  7:46 ` [PATCH v3 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
2025-10-15  8:53   ` kernel test robot
2025-11-24  9:42 ` [PATCH v3 00/21] paravirt: cleanup and reorg Juergen Gross
2026-02-20  4:10 ` patchwork-bot+linux-riscv