* [PATCH v4 00/21] paravirt: cleanup and reorg
@ 2025-11-27 7:08 Juergen Gross
2025-11-27 7:08 ` [PATCH v4 01/21] x86/paravirt: Remove not needed includes of paravirt.h Juergen Gross
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Juergen Gross @ 2025-11-27 7:08 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, loongarch,
linuxppc-dev, linux-riscv, kvm
Cc: Juergen Gross, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, K. Y. Srinivasan,
Haiyang Zhang, Wei Liu, Dexuan Cui, Peter Zijlstra, Will Deacon,
Boqun Feng, Waiman Long, Jiri Kosina, Josh Poimboeuf, Pawan Gupta,
Boris Ostrovsky, xen-devel, Ajay Kaher, Alexey Makhalov,
Broadcom internal kernel review list, Russell King,
Catalin Marinas, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, linux-arm-kernel,
Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini,
Oleksandr Tyshchenko, Daniel Lezcano, Oleg Nesterov
Some cleanups and reorg of paravirt code and headers:
- The first 2 patches should not be controversial at all, as they
just remove some no longer needed #includes and struct forward
declarations.
- The 3rd patch removes CONFIG_PARAVIRT_DEBUG, which IMO has no
real value, as it just turns a crash into a BUG() (the stack
trace will basically be the same). As the maintainer of the main
paravirt user (Xen) I have never seen this crash/BUG() happen.
- The 4th patch just moves some code around.
- I don't know why asm/paravirt_api_clock.h was added, as all
archs supporting it implement it in exactly the same way. Patch
5 removes it.
- Patches 6-14 are streamlining the paravirt clock interfaces by
using a common implementation across architectures where possible
and by moving the related code into common sched code, as this is
where it should live.
- Patches 15-20 are more like RFC material, preparing the paravirt
infrastructure to support multiple pv_ops function arrays.
As a prerequisite, patches 15-17 drop the Xen static initializers
of the pv_ops sub-structures, which makes life in objtool much
easier.
Patches 18-20 do the real preparations for multiple pv_ops
arrays and use those arrays in multiple headers.
- Patch 21 is an example of how the new scheme can look, using the
PV spinlocks.
Changes in V2:
- new patches 13-18 and 20
- complete rework of patch 21
Changes in V3:
- fixed 2 issues detected by kernel test robot
Changes in V4:
- fixed one build issue
Juergen Gross (21):
x86/paravirt: Remove not needed includes of paravirt.h
x86/paravirt: Remove some unneeded struct declarations
x86/paravirt: Remove PARAVIRT_DEBUG config option
x86/paravirt: Move thunk macros to paravirt_types.h
paravirt: Remove asm/paravirt_api_clock.h
sched: Move clock related paravirt code to kernel/sched
arm/paravirt: Use common code for paravirt_steal_clock()
arm64/paravirt: Use common code for paravirt_steal_clock()
loongarch/paravirt: Use common code for paravirt_steal_clock()
riscv/paravirt: Use common code for paravirt_steal_clock()
x86/paravirt: Use common code for paravirt_steal_clock()
x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
x86/paravirt: Introduce new paravirt-base.h header
x86/paravirt: Move pv_native_*() prototypes to paravirt.c
x86/xen: Drop xen_irq_ops
x86/xen: Drop xen_cpu_ops
x86/xen: Drop xen_mmu_ops
objtool: Allow multiple pv_ops arrays
x86/paravirt: Allow pv-calls outside paravirt.h
x86/paravirt: Specify pv_ops array in paravirt macros
x86/pvlocks: Move paravirt spinlock functions into own header
arch/Kconfig | 3 +
arch/arm/Kconfig | 1 +
arch/arm/include/asm/paravirt.h | 22 --
arch/arm/include/asm/paravirt_api_clock.h | 1 -
arch/arm/kernel/Makefile | 1 -
arch/arm/kernel/paravirt.c | 23 --
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/paravirt.h | 14 -
arch/arm64/include/asm/paravirt_api_clock.h | 1 -
arch/arm64/kernel/paravirt.c | 11 +-
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/paravirt.h | 13 -
.../include/asm/paravirt_api_clock.h | 1 -
arch/loongarch/kernel/paravirt.c | 10 +-
arch/powerpc/include/asm/paravirt.h | 3 -
arch/powerpc/include/asm/paravirt_api_clock.h | 2 -
arch/powerpc/platforms/pseries/setup.c | 4 +-
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/paravirt.h | 14 -
arch/riscv/include/asm/paravirt_api_clock.h | 1 -
arch/riscv/kernel/paravirt.c | 11 +-
arch/x86/Kconfig | 8 +-
arch/x86/entry/entry_64.S | 1 -
arch/x86/entry/vsyscall/vsyscall_64.c | 1 -
arch/x86/hyperv/hv_spinlock.c | 11 +-
arch/x86/include/asm/apic.h | 4 -
arch/x86/include/asm/highmem.h | 1 -
arch/x86/include/asm/mshyperv.h | 1 -
arch/x86/include/asm/paravirt-base.h | 29 ++
arch/x86/include/asm/paravirt-spinlock.h | 146 ++++++++
arch/x86/include/asm/paravirt.h | 331 +++++-------------
arch/x86/include/asm/paravirt_api_clock.h | 1 -
arch/x86/include/asm/paravirt_types.h | 269 +++++++-------
arch/x86/include/asm/pgtable_32.h | 1 -
arch/x86/include/asm/ptrace.h | 2 +-
arch/x86/include/asm/qspinlock.h | 89 +----
arch/x86/include/asm/spinlock.h | 1 -
arch/x86/include/asm/timer.h | 1 +
arch/x86/include/asm/tlbflush.h | 4 -
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/apm_32.c | 1 -
arch/x86/kernel/callthunks.c | 1 -
arch/x86/kernel/cpu/bugs.c | 1 -
arch/x86/kernel/cpu/vmware.c | 1 +
arch/x86/kernel/kvm.c | 13 +-
arch/x86/kernel/kvmclock.c | 1 +
arch/x86/kernel/paravirt-spinlocks.c | 26 +-
arch/x86/kernel/paravirt.c | 42 +--
arch/x86/kernel/tsc.c | 10 +-
arch/x86/kernel/vsmp_64.c | 1 -
arch/x86/lib/cache-smp.c | 1 -
arch/x86/mm/init.c | 1 -
arch/x86/xen/enlighten_pv.c | 82 ++---
arch/x86/xen/irq.c | 20 +-
arch/x86/xen/mmu_pv.c | 100 ++----
arch/x86/xen/spinlock.c | 11 +-
arch/x86/xen/time.c | 2 +
drivers/clocksource/hyperv_timer.c | 2 +
drivers/xen/time.c | 2 +-
include/linux/sched/cputime.h | 18 +
kernel/sched/core.c | 5 +
kernel/sched/cputime.c | 13 +
kernel/sched/sched.h | 3 +-
tools/objtool/arch/x86/decode.c | 8 +-
tools/objtool/check.c | 78 ++++-
tools/objtool/include/objtool/check.h | 2 +
66 files changed, 661 insertions(+), 826 deletions(-)
delete mode 100644 arch/arm/include/asm/paravirt.h
delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
delete mode 100644 arch/arm/kernel/paravirt.c
delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
create mode 100644 arch/x86/include/asm/paravirt-base.h
create mode 100644 arch/x86/include/asm/paravirt-spinlock.h
delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h
--
2.51.0
* [PATCH v4 01/21] x86/paravirt: Remove not needed includes of paravirt.h
From: Juergen Gross @ 2025-11-27 7:08 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv
Cc: Juergen Gross, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, K. Y. Srinivasan,
Haiyang Zhang, Wei Liu, Dexuan Cui, Peter Zijlstra, Will Deacon,
Boqun Feng, Waiman Long, Jiri Kosina, Josh Poimboeuf, Pawan Gupta,
Boris Ostrovsky, xen-devel
In some places asm/paravirt.h is included without really being needed.
Remove the related #include statements.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
V3:
- reinstate the include in mmu_context.h (kernel test robot)
V4:
- reinstate the include in arch/x86/kernel/x86_init.c (Boris Petkov)
---
arch/x86/entry/entry_64.S | 1 -
arch/x86/entry/vsyscall/vsyscall_64.c | 1 -
arch/x86/hyperv/hv_spinlock.c | 1 -
arch/x86/include/asm/apic.h | 4 ----
arch/x86/include/asm/highmem.h | 1 -
arch/x86/include/asm/mshyperv.h | 1 -
arch/x86/include/asm/pgtable_32.h | 1 -
arch/x86/include/asm/spinlock.h | 1 -
arch/x86/include/asm/tlbflush.h | 4 ----
arch/x86/kernel/apm_32.c | 1 -
arch/x86/kernel/callthunks.c | 1 -
arch/x86/kernel/cpu/bugs.c | 1 -
arch/x86/kernel/vsmp_64.c | 1 -
arch/x86/lib/cache-smp.c | 1 -
arch/x86/mm/init.c | 1 -
arch/x86/xen/spinlock.c | 1 -
16 files changed, 22 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index ed04a968cc7d..7a82305405af 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -30,7 +30,6 @@
#include <asm/hw_irq.h>
#include <asm/page_types.h>
#include <asm/irqflags.h>
-#include <asm/paravirt.h>
#include <asm/percpu.h>
#include <asm/asm.h>
#include <asm/smap.h>
diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index 6e6c0a740837..4bd1e271bb22 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -37,7 +37,6 @@
#include <asm/unistd.h>
#include <asm/fixmap.h>
#include <asm/traps.h>
-#include <asm/paravirt.h>
#define CREATE_TRACE_POINTS
#include "vsyscall_trace.h"
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 81b006601370..2a3c2afb0154 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -13,7 +13,6 @@
#include <linux/spinlock.h>
#include <asm/mshyperv.h>
-#include <asm/paravirt.h>
#include <asm/apic.h>
#include <asm/msr.h>
diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index a26e66d66444..9cd493d467d4 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -90,10 +90,6 @@ static inline bool apic_from_smp_config(void)
/*
* Basic functions accessing APICs.
*/
-#ifdef CONFIG_PARAVIRT
-#include <asm/paravirt.h>
-#endif
-
static inline void native_apic_mem_write(u32 reg, u32 v)
{
volatile u32 *addr = (volatile u32 *)(APIC_BASE + reg);
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 585bdadba47d..decfaaf52326 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -24,7 +24,6 @@
#include <linux/interrupt.h>
#include <linux/threads.h>
#include <asm/tlbflush.h>
-#include <asm/paravirt.h>
#include <asm/fixmap.h>
#include <asm/pgtable_areas.h>
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 605abd02158d..15e2693b8070 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -8,7 +8,6 @@
#include <linux/io.h>
#include <linux/static_call.h>
#include <asm/nospec-branch.h>
-#include <asm/paravirt.h>
#include <asm/msr.h>
#include <hyperv/hvhdk.h>
diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
index b612cc57a4d3..acea0cfa2460 100644
--- a/arch/x86/include/asm/pgtable_32.h
+++ b/arch/x86/include/asm/pgtable_32.h
@@ -16,7 +16,6 @@
#ifndef __ASSEMBLER__
#include <asm/processor.h>
#include <linux/threads.h>
-#include <asm/paravirt.h>
#include <linux/bitops.h>
#include <linux/list.h>
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 5b6bc7016c22..934632b78d09 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -7,7 +7,6 @@
#include <asm/page.h>
#include <asm/processor.h>
#include <linux/compiler.h>
-#include <asm/paravirt.h>
#include <asm/bitops.h>
/*
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 00daedfefc1b..238a6b807da5 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -300,10 +300,6 @@ static inline void mm_clear_asid_transition(struct mm_struct *mm) { }
static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
#endif /* CONFIG_BROADCAST_TLB_FLUSH */
-#ifdef CONFIG_PARAVIRT
-#include <asm/paravirt.h>
-#endif
-
#define flush_tlb_mm(mm) \
flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL, true)
diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
index b37ab1095707..3175d7c134e9 100644
--- a/arch/x86/kernel/apm_32.c
+++ b/arch/x86/kernel/apm_32.c
@@ -229,7 +229,6 @@
#include <linux/uaccess.h>
#include <asm/desc.h>
#include <asm/olpc.h>
-#include <asm/paravirt.h>
#include <asm/reboot.h>
#include <asm/nospec-branch.h>
#include <asm/ibt.h>
diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c
index a951333c5995..e37728f70322 100644
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -15,7 +15,6 @@
#include <asm/insn.h>
#include <asm/kexec.h>
#include <asm/nospec-branch.h>
-#include <asm/paravirt.h>
#include <asm/sections.h>
#include <asm/switch_to.h>
#include <asm/sync_core.h>
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d7fa03bf51b4..cb200930510e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -25,7 +25,6 @@
#include <asm/fpu/api.h>
#include <asm/msr.h>
#include <asm/vmx.h>
-#include <asm/paravirt.h>
#include <asm/cpu_device_id.h>
#include <asm/e820/api.h>
#include <asm/hypervisor.h>
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index 73511332bb67..25625e3fc183 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -18,7 +18,6 @@
#include <asm/apic.h>
#include <asm/pci-direct.h>
#include <asm/io.h>
-#include <asm/paravirt.h>
#include <asm/setup.h>
#define TOPOLOGY_REGISTER_OFFSET 0x10
diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index c5c60d07308c..ae5a5dfd33c7 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -1,5 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
-#include <asm/paravirt.h>
#include <linux/smp.h>
#include <linux/export.h>
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 8bf6ad4b9400..76537d40493c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -27,7 +27,6 @@
#include <asm/pti.h>
#include <asm/text-patching.h>
#include <asm/memtype.h>
-#include <asm/paravirt.h>
#include <asm/mmu_context.h>
/*
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 8e4efe0fb6f9..fe56646d6919 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -8,7 +8,6 @@
#include <linux/slab.h>
#include <linux/atomic.h>
-#include <asm/paravirt.h>
#include <asm/qspinlock.h>
#include <xen/events.h>
--
2.51.0
* [PATCH v4 12/21] x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
From: Juergen Gross @ 2025-11-27 7:08 UTC (permalink / raw)
To: linux-kernel, x86, virtualization, kvm, linux-hyperv
Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov,
Broadcom internal kernel review list, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
Paolo Bonzini, Vitaly Kuznetsov, Boris Ostrovsky,
K. Y. Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui,
Daniel Lezcano, xen-devel, Peter Zijlstra (Intel)
The only user of paravirt_sched_clock() is in tsc.c, so move the code
from paravirt.c and paravirt.h to tsc.c.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/include/asm/paravirt.h | 12 ------------
arch/x86/include/asm/timer.h | 1 +
arch/x86/kernel/kvmclock.c | 1 +
arch/x86/kernel/paravirt.c | 7 -------
arch/x86/kernel/tsc.c | 10 +++++++++-
arch/x86/xen/time.c | 1 +
drivers/clocksource/hyperv_timer.c | 2 ++
7 files changed, 14 insertions(+), 20 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 766a7cee3d64..b69e75a5c872 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -14,20 +14,8 @@
#ifndef __ASSEMBLER__
#include <linux/types.h>
#include <linux/cpumask.h>
-#include <linux/static_call_types.h>
#include <asm/frame.h>
-u64 dummy_sched_clock(void);
-
-DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
-
-void paravirt_set_sched_clock(u64 (*func)(void));
-
-static __always_inline u64 paravirt_sched_clock(void)
-{
- return static_call(pv_sched_clock)();
-}
-
__visible void __native_queued_spin_unlock(struct qspinlock *lock);
bool pv_is_native_spin_unlock(void);
__visible bool __native_vcpu_is_preempted(long cpu);
diff --git a/arch/x86/include/asm/timer.h b/arch/x86/include/asm/timer.h
index 23baf8c9b34c..fda18bcb19b4 100644
--- a/arch/x86/include/asm/timer.h
+++ b/arch/x86/include/asm/timer.h
@@ -12,6 +12,7 @@ extern void recalibrate_cpu_khz(void);
extern int no_timer_check;
extern bool using_native_sched_clock(void);
+void paravirt_set_sched_clock(u64 (*func)(void));
/*
* We use the full linear equation: f(x) = a + b*x, in order to allow
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ca0a49eeac4a..b5991d53fc0e 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -19,6 +19,7 @@
#include <linux/cc_platform.h>
#include <asm/hypervisor.h>
+#include <asm/timer.h>
#include <asm/x86_init.h>
#include <asm/kvmclock.h>
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 42991d471bf3..4e37db8073f9 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,13 +60,6 @@ void __init native_pv_lock_init(void)
static_branch_enable(&virt_spin_lock_key);
}
-DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
-
-void paravirt_set_sched_clock(u64 (*func)(void))
-{
- static_call_update(pv_sched_clock, func);
-}
-
static noinstr void pv_native_safe_halt(void)
{
native_safe_halt();
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 87e749106dda..554b54783a04 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -266,19 +266,27 @@ u64 native_sched_clock_from_tsc(u64 tsc)
/* We need to define a real function for sched_clock, to override the
weak default version */
#ifdef CONFIG_PARAVIRT
+DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
+
noinstr u64 sched_clock_noinstr(void)
{
- return paravirt_sched_clock();
+ return static_call(pv_sched_clock)();
}
bool using_native_sched_clock(void)
{
return static_call_query(pv_sched_clock) == native_sched_clock;
}
+
+void paravirt_set_sched_clock(u64 (*func)(void))
+{
+ static_call_update(pv_sched_clock, func);
+}
#else
u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
bool using_native_sched_clock(void) { return true; }
+void paravirt_set_sched_clock(u64 (*func)(void)) { }
#endif
notrace u64 sched_clock(void)
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index e4754b2fa900..6f9f665bb7ae 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -19,6 +19,7 @@
#include <linux/sched/cputime.h>
#include <asm/pvclock.h>
+#include <asm/timer.h>
#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/cpuid.h>
diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index 10356d4ec55c..e9f5034a1bc8 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -535,6 +535,8 @@ static __always_inline void hv_setup_sched_clock(void *sched_clock)
sched_clock_register(sched_clock, 64, NSEC_PER_SEC);
}
#elif defined CONFIG_PARAVIRT
+#include <asm/timer.h>
+
static __always_inline void hv_setup_sched_clock(void *sched_clock)
{
/* We're on x86/x64 *and* using PV ops */
--
2.51.0
* [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
From: Juergen Gross @ 2025-11-27 7:08 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, kvm
Cc: Juergen Gross, K. Y. Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Ajay Kaher, Alexey Makhalov,
Broadcom internal kernel review list, Paolo Bonzini,
Vitaly Kuznetsov, Boris Ostrovsky, Josh Poimboeuf, Peter Zijlstra,
xen-devel
Instead of having the pv spinlock function definitions in paravirt.h,
move them into the new header paravirt-spinlock.h.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use new header instead of qspinlock.h
- use dedicated pv_ops_lock array
- move more paravirt related lock code
V3:
- hide native_pv_lock_init() with CONFIG_SMP (kernel test robot)
V4:
- don't reference pv_ops_lock without CONFIG_PARAVIRT_SPINLOCKS
(kernel test robot)
---
arch/x86/hyperv/hv_spinlock.c | 10 +-
arch/x86/include/asm/paravirt-spinlock.h | 146 +++++++++++++++++++++++
arch/x86/include/asm/paravirt.h | 61 ----------
arch/x86/include/asm/paravirt_types.h | 17 ---
arch/x86/include/asm/qspinlock.h | 89 ++------------
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/kvm.c | 12 +-
arch/x86/kernel/paravirt-spinlocks.c | 26 +++-
arch/x86/kernel/paravirt.c | 21 ----
arch/x86/xen/spinlock.c | 10 +-
tools/objtool/check.c | 1 +
11 files changed, 196 insertions(+), 199 deletions(-)
create mode 100644 arch/x86/include/asm/paravirt-spinlock.h
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 2a3c2afb0154..210b494e4de0 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -78,11 +78,11 @@ void __init hv_init_spinlocks(void)
pr_info("PV spinlocks enabled\n");
__pv_init_lock_hash();
- pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
- pv_ops.lock.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
- pv_ops.lock.wait = hv_qlock_wait;
- pv_ops.lock.kick = hv_qlock_kick;
- pv_ops.lock.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
+ pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+ pv_ops_lock.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
+ pv_ops_lock.wait = hv_qlock_wait;
+ pv_ops_lock.kick = hv_qlock_kick;
+ pv_ops_lock.vcpu_is_preempted = PV_CALLEE_SAVE(hv_vcpu_is_preempted);
}
static __init int hv_parse_nopvspin(char *arg)
diff --git a/arch/x86/include/asm/paravirt-spinlock.h b/arch/x86/include/asm/paravirt-spinlock.h
new file mode 100644
index 000000000000..ed3ed343903d
--- /dev/null
+++ b/arch/x86/include/asm/paravirt-spinlock.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_X86_PARAVIRT_SPINLOCK_H
+#define _ASM_X86_PARAVIRT_SPINLOCK_H
+
+#include <asm/paravirt_types.h>
+
+#ifdef CONFIG_SMP
+#include <asm/spinlock_types.h>
+#endif
+
+struct qspinlock;
+
+struct pv_lock_ops {
+ void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
+ struct paravirt_callee_save queued_spin_unlock;
+
+ void (*wait)(u8 *ptr, u8 val);
+ void (*kick)(int cpu);
+
+ struct paravirt_callee_save vcpu_is_preempted;
+} __no_randomize_layout;
+
+extern struct pv_lock_ops pv_ops_lock;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+void __init paravirt_set_cap(void);
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+extern bool nopvspin;
+
+static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
+ u32 val)
+{
+ PVOP_VCALL2(pv_ops_lock, queued_spin_lock_slowpath, lock, val);
+}
+
+static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
+{
+ PVOP_ALT_VCALLEE1(pv_ops_lock, queued_spin_unlock, lock,
+ "movb $0, (%%" _ASM_ARG1 ");",
+ ALT_NOT(X86_FEATURE_PVUNLOCK));
+}
+
+static __always_inline bool pv_vcpu_is_preempted(long cpu)
+{
+ return PVOP_ALT_CALLEE1(bool, pv_ops_lock, vcpu_is_preempted, cpu,
+ "xor %%" _ASM_AX ", %%" _ASM_AX ";",
+ ALT_NOT(X86_FEATURE_VCPUPREEMPT));
+}
+
+#define queued_spin_unlock queued_spin_unlock
+/**
+ * queued_spin_unlock - release a queued spinlock
+ * @lock : Pointer to queued spinlock structure
+ *
+ * A smp_store_release() on the least-significant byte.
+ */
+static inline void native_queued_spin_unlock(struct qspinlock *lock)
+{
+ smp_store_release(&lock->locked, 0);
+}
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ pv_queued_spin_lock_slowpath(lock, val);
+}
+
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+ kcsan_release();
+ pv_queued_spin_unlock(lock);
+}
+
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(long cpu)
+{
+ return pv_vcpu_is_preempted(cpu);
+}
+
+static __always_inline void pv_wait(u8 *ptr, u8 val)
+{
+ PVOP_VCALL2(pv_ops_lock, wait, ptr, val);
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+ PVOP_VCALL1(pv_ops_lock, kick, cpu);
+}
+
+void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
+bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+void __init native_pv_lock_init(void);
+__visible void __native_queued_spin_unlock(struct qspinlock *lock);
+bool pv_is_native_spin_unlock(void);
+__visible bool __native_vcpu_is_preempted(long cpu);
+bool pv_is_native_vcpu_is_preempted(void);
+
+/*
+ * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
+ *
+ * Native (and PV wanting native due to vCPU pinning) should keep this key
+ * disabled. Native does not touch the key.
+ *
+ * When in a guest then native_pv_lock_init() enables the key first and
+ * KVM/XEN might conditionally disable it later in the boot process again.
+ */
+DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
+
+/*
+ * Shortcut for the queued_spin_lock_slowpath() function that allows
+ * virt to hijack it.
+ *
+ * Returns:
+ * true - lock has been negotiated, all done;
+ * false - queued_spin_lock_slowpath() will do its thing.
+ */
+#define virt_spin_lock virt_spin_lock
+static inline bool virt_spin_lock(struct qspinlock *lock)
+{
+ int val;
+
+ if (!static_branch_likely(&virt_spin_lock_key))
+ return false;
+
+ /*
+ * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+ * back to a Test-and-Set spinlock, because fair locks have
+ * horrible lock 'holder' preemption issues.
+ */
+
+ __retry:
+ val = atomic_read(&lock->val);
+
+ if (val || !atomic_try_cmpxchg(&lock->val, &val, _Q_LOCKED_VAL)) {
+ cpu_relax();
+ goto __retry;
+ }
+
+ return true;
+}
+
+#endif /* _ASM_X86_PARAVIRT_SPINLOCK_H */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ec274d13bae0..b21072af731d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -19,15 +19,6 @@
#include <linux/cpumask.h>
#include <asm/frame.h>
-__visible void __native_queued_spin_unlock(struct qspinlock *lock);
-bool pv_is_native_spin_unlock(void);
-__visible bool __native_vcpu_is_preempted(long cpu);
-bool pv_is_native_vcpu_is_preempted(void);
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-void __init paravirt_set_cap(void);
-#endif
-
/* The paravirtualized I/O functions */
static inline void slow_down_io(void)
{
@@ -522,46 +513,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
{
pv_ops.mmu.set_fixmap(idx, phys, flags);
}
-#endif
-
-#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
-static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
- u32 val)
-{
- PVOP_VCALL2(pv_ops, lock.queued_spin_lock_slowpath, lock, val);
-}
-
-static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
-{
- PVOP_ALT_VCALLEE1(pv_ops, lock.queued_spin_unlock, lock,
- "movb $0, (%%" _ASM_ARG1 ");",
- ALT_NOT(X86_FEATURE_PVUNLOCK));
-}
-
-static __always_inline void pv_wait(u8 *ptr, u8 val)
-{
- PVOP_VCALL2(pv_ops, lock.wait, ptr, val);
-}
-
-static __always_inline void pv_kick(int cpu)
-{
- PVOP_VCALL1(pv_ops, lock.kick, cpu);
-}
-
-static __always_inline bool pv_vcpu_is_preempted(long cpu)
-{
- return PVOP_ALT_CALLEE1(bool, pv_ops, lock.vcpu_is_preempted, cpu,
- "xor %%" _ASM_AX ", %%" _ASM_AX ";",
- ALT_NOT(X86_FEATURE_VCPUPREEMPT));
-}
-void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
-bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
-
-#endif /* SMP && PARAVIRT_SPINLOCKS */
-
-#ifdef CONFIG_PARAVIRT_XXL
static __always_inline unsigned long arch_local_save_flags(void)
{
return PVOP_ALT_CALLEE0(unsigned long, pv_ops, irq.save_fl, "pushf; pop %%rax;",
@@ -588,8 +540,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
}
#endif
-void native_pv_lock_init(void) __init;
-
#else /* __ASSEMBLER__ */
#ifdef CONFIG_X86_64
@@ -613,12 +563,6 @@ void native_pv_lock_init(void) __init;
#endif /* __ASSEMBLER__ */
#else /* CONFIG_PARAVIRT */
# define default_banner x86_init_noop
-
-#ifndef __ASSEMBLER__
-static inline void native_pv_lock_init(void)
-{
-}
-#endif
#endif /* !CONFIG_PARAVIRT */
#ifndef __ASSEMBLER__
@@ -634,10 +578,5 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
}
#endif
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-static inline void paravirt_set_cap(void)
-{
-}
-#endif
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_PARAVIRT_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 01a485f1a7f1..e2b487d35d14 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -184,22 +184,6 @@ struct pv_mmu_ops {
#endif
} __no_randomize_layout;
-#ifdef CONFIG_SMP
-#include <asm/spinlock_types.h>
-#endif
-
-struct qspinlock;
-
-struct pv_lock_ops {
- void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
- struct paravirt_callee_save queued_spin_unlock;
-
- void (*wait)(u8 *ptr, u8 val);
- void (*kick)(int cpu);
-
- struct paravirt_callee_save vcpu_is_preempted;
-} __no_randomize_layout;
-
/* This contains all the paravirt structures: we get a convenient
* number for each function using the offset which we use to indicate
* what to patch. */
@@ -207,7 +191,6 @@ struct paravirt_patch_template {
struct pv_cpu_ops cpu;
struct pv_irq_ops irq;
struct pv_mmu_ops mmu;
- struct pv_lock_ops lock;
} __no_randomize_layout;
extern struct paravirt_patch_template pv_ops;
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 68da67df304d..a2668bdf4c84 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,6 +7,9 @@
#include <asm-generic/qspinlock_types.h>
#include <asm/paravirt.h>
#include <asm/rmwcc.h>
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt-spinlock.h>
+#endif
#define _Q_PENDING_LOOPS (1 << 9)
@@ -27,89 +30,13 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
return val;
}
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __pv_init_lock_hash(void);
-extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
-extern bool nopvspin;
-
-#define queued_spin_unlock queued_spin_unlock
-/**
- * queued_spin_unlock - release a queued spinlock
- * @lock : Pointer to queued spinlock structure
- *
- * A smp_store_release() on the least-significant byte.
- */
-static inline void native_queued_spin_unlock(struct qspinlock *lock)
-{
- smp_store_release(&lock->locked, 0);
-}
-
-static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
-{
- pv_queued_spin_lock_slowpath(lock, val);
-}
-
-static inline void queued_spin_unlock(struct qspinlock *lock)
-{
- kcsan_release();
- pv_queued_spin_unlock(lock);
-}
-
-#define vcpu_is_preempted vcpu_is_preempted
-static inline bool vcpu_is_preempted(long cpu)
-{
- return pv_vcpu_is_preempted(cpu);
-}
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+static inline void paravirt_set_cap(void) { }
#endif
-#ifdef CONFIG_PARAVIRT
-/*
- * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
- *
- * Native (and PV wanting native due to vCPU pinning) should keep this key
- * disabled. Native does not touch the key.
- *
- * When in a guest then native_pv_lock_init() enables the key first and
- * KVM/XEN might conditionally disable it later in the boot process again.
- */
-DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
-
-/*
- * Shortcut for the queued_spin_lock_slowpath() function that allows
- * virt to hijack it.
- *
- * Returns:
- * true - lock has been negotiated, all done;
- * false - queued_spin_lock_slowpath() will do its thing.
- */
-#define virt_spin_lock virt_spin_lock
-static inline bool virt_spin_lock(struct qspinlock *lock)
-{
- int val;
-
- if (!static_branch_likely(&virt_spin_lock_key))
- return false;
-
- /*
- * On hypervisors without PARAVIRT_SPINLOCKS support we fall
- * back to a Test-and-Set spinlock, because fair locks have
- * horrible lock 'holder' preemption issues.
- */
-
- __retry:
- val = atomic_read(&lock->val);
-
- if (val || !atomic_try_cmpxchg(&lock->val, &val, _Q_LOCKED_VAL)) {
- cpu_relax();
- goto __retry;
- }
-
- return true;
-}
-
-#endif /* CONFIG_PARAVIRT */
+#ifndef CONFIG_PARAVIRT
+static inline void native_pv_lock_init(void) { }
+#endif
#include <asm-generic/qspinlock.h>
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index bc184dd38d99..e9aeeeafad17 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -126,7 +126,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
obj-$(CONFIG_KVM_GUEST) += kvm.o kvmclock.o
obj-$(CONFIG_PARAVIRT) += paravirt.o
-obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT) += paravirt-spinlocks.o
obj-$(CONFIG_PARAVIRT_CLOCK) += pvclock.o
obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index d54fd2bc0402..e767f8ed405a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -824,8 +824,10 @@ static void __init kvm_guest_init(void)
has_steal_clock = 1;
static_call_update(pv_steal_clock, kvm_steal_clock);
- pv_ops.lock.vcpu_is_preempted =
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+ pv_ops_lock.vcpu_is_preempted =
PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
+#endif
}
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
@@ -1121,11 +1123,11 @@ void __init kvm_spinlock_init(void)
pr_info("PV spinlocks enabled\n");
__pv_init_lock_hash();
- pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
- pv_ops.lock.queued_spin_unlock =
+ pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+ pv_ops_lock.queued_spin_unlock =
PV_CALLEE_SAVE(__pv_queued_spin_unlock);
- pv_ops.lock.wait = kvm_wait;
- pv_ops.lock.kick = kvm_kick_cpu;
+ pv_ops_lock.wait = kvm_wait;
+ pv_ops_lock.kick = kvm_kick_cpu;
/*
* When PV spinlock is enabled which is preferred over
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 9e1ea99ad9df..95452444868f 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -3,12 +3,22 @@
* Split spinlock implementation out into its own file, so it can be
* compiled in a FTRACE-compatible way.
*/
+#include <linux/static_call.h>
#include <linux/spinlock.h>
#include <linux/export.h>
#include <linux/jump_label.h>
-#include <asm/paravirt.h>
+DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
+#ifdef CONFIG_SMP
+void __init native_pv_lock_init(void)
+{
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ static_branch_enable(&virt_spin_lock_key);
+}
+#endif
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
__visible void __native_queued_spin_unlock(struct qspinlock *lock)
{
native_queued_spin_unlock(lock);
@@ -17,7 +27,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__native_queued_spin_unlock);
bool pv_is_native_spin_unlock(void)
{
- return pv_ops.lock.queued_spin_unlock.func ==
+ return pv_ops_lock.queued_spin_unlock.func ==
__raw_callee_save___native_queued_spin_unlock;
}
@@ -29,7 +39,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__native_vcpu_is_preempted);
bool pv_is_native_vcpu_is_preempted(void)
{
- return pv_ops.lock.vcpu_is_preempted.func ==
+ return pv_ops_lock.vcpu_is_preempted.func ==
__raw_callee_save___native_vcpu_is_preempted;
}
@@ -41,3 +51,13 @@ void __init paravirt_set_cap(void)
if (!pv_is_native_vcpu_is_preempted())
setup_force_cpu_cap(X86_FEATURE_VCPUPREEMPT);
}
+
+struct pv_lock_ops pv_ops_lock = {
+ .queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,
+ .queued_spin_unlock = PV_CALLEE_SAVE(__native_queued_spin_unlock),
+ .wait = paravirt_nop,
+ .kick = paravirt_nop,
+ .vcpu_is_preempted = PV_CALLEE_SAVE(__native_vcpu_is_preempted),
+};
+EXPORT_SYMBOL(pv_ops_lock);
+#endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5dfbd3f55792..a6ed52cae003 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -57,14 +57,6 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
#endif
-DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
-
-void __init native_pv_lock_init(void)
-{
- if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
- static_branch_enable(&virt_spin_lock_key);
-}
-
static noinstr void pv_native_safe_halt(void)
{
native_safe_halt();
@@ -221,19 +213,6 @@ struct paravirt_patch_template pv_ops = {
.mmu.set_fixmap = native_set_fixmap,
#endif /* CONFIG_PARAVIRT_XXL */
-
-#if defined(CONFIG_PARAVIRT_SPINLOCKS)
- /* Lock ops. */
-#ifdef CONFIG_SMP
- .lock.queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,
- .lock.queued_spin_unlock =
- PV_CALLEE_SAVE(__native_queued_spin_unlock),
- .lock.wait = paravirt_nop,
- .lock.kick = paravirt_nop,
- .lock.vcpu_is_preempted =
- PV_CALLEE_SAVE(__native_vcpu_is_preempted),
-#endif /* SMP */
-#endif
};
#ifdef CONFIG_PARAVIRT_XXL
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index fe56646d6919..83ac24ead289 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -134,10 +134,10 @@ void __init xen_init_spinlocks(void)
printk(KERN_DEBUG "xen: PV spinlocks enabled\n");
__pv_init_lock_hash();
- pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
- pv_ops.lock.queued_spin_unlock =
+ pv_ops_lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
+ pv_ops_lock.queued_spin_unlock =
PV_CALLEE_SAVE(__pv_queued_spin_unlock);
- pv_ops.lock.wait = xen_qlock_wait;
- pv_ops.lock.kick = xen_qlock_kick;
- pv_ops.lock.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+ pv_ops_lock.wait = xen_qlock_wait;
+ pv_ops_lock.kick = xen_qlock_kick;
+ pv_ops_lock.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index d63d0891924a..36e04988babe 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -550,6 +550,7 @@ static struct {
int idx_off;
} pv_ops_tables[] = {
{ .name = "pv_ops", },
+ { .name = "pv_ops_lock", },
{ .name = NULL, .idx_off = -1 }
};
--
2.51.0
* Re: [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
2025-11-27 7:08 ` [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
@ 2025-11-27 9:24 ` kernel test robot
2025-11-27 9:24 ` kernel test robot
1 sibling, 0 replies; 7+ messages in thread
From: kernel test robot @ 2025-11-27 9:24 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, x86, linux-hyperv, virtualization,
kvm
Cc: llvm, oe-kbuild-all, Juergen Gross, K. Y. Srinivasan,
Haiyang Zhang, Wei Liu, Dexuan Cui, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, Ajay Kaher,
Alexey Makhalov, Broadcom internal kernel review list,
Paolo Bonzini, Vitaly Kuznetsov, Boris Ostrovsky, Josh Poimboeuf,
Peter Zijlstra, xen-devel
Hi Juergen,
kernel test robot noticed the following build errors:
[auto build test ERROR on tip/x86/core]
[also build test ERROR on tip/sched/core kvm/queue kvm/next linus/master v6.18-rc7]
[cannot apply to kvm/linux-next next-20251127]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Juergen-Gross/x86-paravirt-Remove-not-needed-includes-of-paravirt-h/20251127-152054
base: tip/x86/core
patch link: https://lore.kernel.org/r/20251127070844.21919-22-jgross%40suse.com
patch subject: [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20251127/202511271747.smpLdjsz-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251127/202511271747.smpLdjsz-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511271747.smpLdjsz-lkp@intel.com/
All errors (new ones prefixed by >>):
>> arch/x86/kernel/alternative.c:2373:2: error: call to undeclared function 'paravirt_set_cap'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
2373 | paravirt_set_cap();
| ^
arch/x86/kernel/alternative.c:2373:2: note: did you mean 'paravirt_ret0'?
arch/x86/include/asm/paravirt-base.h:23:15: note: 'paravirt_ret0' declared here
23 | unsigned long paravirt_ret0(void);
| ^
1 error generated.
vim +/paravirt_set_cap +2373 arch/x86/kernel/alternative.c
270a69c4485d7d arch/x86/kernel/alternative.c Peter Zijlstra 2023-02-08 2344
9a0b5817ad97bb arch/i386/kernel/alternative.c Gerd Hoffmann 2006-03-23 2345 void __init alternative_instructions(void)
9a0b5817ad97bb arch/i386/kernel/alternative.c Gerd Hoffmann 2006-03-23 2346 {
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2347 u64 ibt;
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2348
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2349 int3_selftest();
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2350
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2351 /*
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2352 * The patching is not fully atomic, so try to avoid local
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2353 * interruptions that might execute the to be patched code.
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2354 * Other CPUs are not running.
7457c0da024b18 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2355 */
8f4e956b313dcc arch/i386/kernel/alternative.c Andi Kleen 2007-07-22 2356 stop_nmi();
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2357
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2358 /*
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2359 * Don't stop machine check exceptions while patching.
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2360 * MCEs only happen when something got corrupted and in this
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2361 * case we must do something about the corruption.
32b1cbe380417f arch/x86/kernel/alternative.c Marco Ammon 2019-09-02 2362 * Ignoring it is worse than an unlikely patching race.
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2363 * Also machine checks tend to be broadcast and if one CPU
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2364 * goes into machine check the others follow quickly, so we don't
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2365 * expect a machine check to cause undue problems during to code
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2366 * patching.
123aa76ec0cab5 arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2367 */
8f4e956b313dcc arch/i386/kernel/alternative.c Andi Kleen 2007-07-22 2368
4e6292114c7412 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2369 /*
f7af6977621a41 arch/x86/kernel/alternative.c Juergen Gross 2023-12-10 2370 * Make sure to set (artificial) features depending on used paravirt
f7af6977621a41 arch/x86/kernel/alternative.c Juergen Gross 2023-12-10 2371 * functions which can later influence alternative patching.
4e6292114c7412 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2372 */
4e6292114c7412 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 @2373 paravirt_set_cap();
4e6292114c7412 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2374
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2375 /* Keep CET-IBT disabled until caller/callee are patched */
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2376 ibt = ibt_save(/*disable*/ true);
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2377
931ab63664f02b arch/x86/kernel/alternative.c Peter Zijlstra 2022-10-27 2378 __apply_fineibt(__retpoline_sites, __retpoline_sites_end,
1d7e707af44613 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2379) __cfi_sites, __cfi_sites_end, true);
026211c40b0554 arch/x86/kernel/alternative.c Kees Cook 2025-09-03 2380 cfi_debug = false;
931ab63664f02b arch/x86/kernel/alternative.c Peter Zijlstra 2022-10-27 2381
7508500900814d arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2382 /*
7508500900814d arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2383 * Rewrite the retpolines, must be done before alternatives since
7508500900814d arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2384 * those can rewrite the retpoline thunks.
7508500900814d arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2385 */
1d7e707af44613 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2386) apply_retpolines(__retpoline_sites, __retpoline_sites_end);
1d7e707af44613 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2387) apply_returns(__return_sites, __return_sites_end);
7508500900814d arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2388
a82b26451de126 arch/x86/kernel/alternative.c Peter Zijlstra (Intel 2025-06-03 2389) its_fini_core();
a82b26451de126 arch/x86/kernel/alternative.c Peter Zijlstra (Intel 2025-06-03 2390)
e81dc127ef6988 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2391 /*
ab9fea59487d8b arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2392 * Adjust all CALL instructions to point to func()-10, including
ab9fea59487d8b arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2393 * those in .altinstr_replacement.
e81dc127ef6988 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2394 */
e81dc127ef6988 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2395 callthunks_patch_builtin_calls();
e81dc127ef6988 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2396
ab9fea59487d8b arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2397 apply_alternatives(__alt_instructions, __alt_instructions_end);
ab9fea59487d8b arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2398
be0fffa5ca894a arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2399 /*
be0fffa5ca894a arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2400 * Seal all functions that do not have their address taken.
be0fffa5ca894a arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2401 */
1d7e707af44613 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2402) apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
ed53a0d971926e arch/x86/kernel/alternative.c Peter Zijlstra 2022-03-08 2403
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2404 ibt_restore(ibt);
ebebe30794d38c arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2405
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
2025-11-27 7:08 ` [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
2025-11-27 9:24 ` kernel test robot
@ 2025-11-27 9:24 ` kernel test robot
1 sibling, 0 replies; 7+ messages in thread
From: kernel test robot @ 2025-11-27 9:24 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, x86, linux-hyperv, virtualization,
kvm
Cc: oe-kbuild-all, Juergen Gross, K. Y. Srinivasan, Haiyang Zhang,
Wei Liu, Dexuan Cui, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, Ajay Kaher,
Alexey Makhalov, Broadcom internal kernel review list,
Paolo Bonzini, Vitaly Kuznetsov, Boris Ostrovsky, Josh Poimboeuf,
Peter Zijlstra, xen-devel
Hi Juergen,
kernel test robot noticed the following build errors:
[auto build test ERROR on tip/x86/core]
[also build test ERROR on tip/sched/core kvm/queue kvm/next linus/master v6.18-rc7]
[cannot apply to kvm/linux-next next-20251127]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Juergen-Gross/x86-paravirt-Remove-not-needed-includes-of-paravirt-h/20251127-152054
base: tip/x86/core
patch link: https://lore.kernel.org/r/20251127070844.21919-22-jgross%40suse.com
patch subject: [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header
config: i386-allnoconfig (https://download.01.org/0day-ci/archive/20251127/202511271704.MdDOB4pB-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251127/202511271704.MdDOB4pB-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511271704.MdDOB4pB-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/x86/kernel/alternative.c: In function 'alternative_instructions':
>> arch/x86/kernel/alternative.c:2373:9: error: implicit declaration of function 'paravirt_set_cap'; did you mean 'paravirt_ret0'? [-Wimplicit-function-declaration]
2373 | paravirt_set_cap();
| ^~~~~~~~~~~~~~~~
| paravirt_ret0
vim +2373 arch/x86/kernel/alternative.c
270a69c4485d7d0 arch/x86/kernel/alternative.c Peter Zijlstra 2023-02-08 2344
9a0b5817ad97bb7 arch/i386/kernel/alternative.c Gerd Hoffmann 2006-03-23 2345 void __init alternative_instructions(void)
9a0b5817ad97bb7 arch/i386/kernel/alternative.c Gerd Hoffmann 2006-03-23 2346 {
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2347 u64 ibt;
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2348
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2349 int3_selftest();
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2350
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2351 /*
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2352 * The patching is not fully atomic, so try to avoid local
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2353 * interruptions that might execute the to be patched code.
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2354 * Other CPUs are not running.
7457c0da024b181 arch/x86/kernel/alternative.c Peter Zijlstra 2019-05-03 2355 */
8f4e956b313dccc arch/i386/kernel/alternative.c Andi Kleen 2007-07-22 2356 stop_nmi();
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2357
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2358 /*
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2359 * Don't stop machine check exceptions while patching.
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2360 * MCEs only happen when something got corrupted and in this
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2361 * case we must do something about the corruption.
32b1cbe380417f2 arch/x86/kernel/alternative.c Marco Ammon 2019-09-02 2362 * Ignoring it is worse than an unlikely patching race.
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2363 * Also machine checks tend to be broadcast and if one CPU
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2364 * goes into machine check the others follow quickly, so we don't
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2365 * expect a machine check to cause undue problems during to code
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2366 * patching.
123aa76ec0cab5d arch/x86/kernel/alternative.c Andi Kleen 2009-02-12 2367 */
8f4e956b313dccc arch/i386/kernel/alternative.c Andi Kleen 2007-07-22 2368
4e6292114c74122 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2369 /*
f7af6977621a416 arch/x86/kernel/alternative.c Juergen Gross 2023-12-10 2370 * Make sure to set (artificial) features depending on used paravirt
f7af6977621a416 arch/x86/kernel/alternative.c Juergen Gross 2023-12-10 2371 * functions which can later influence alternative patching.
4e6292114c74122 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2372 */
4e6292114c74122 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 @2373 paravirt_set_cap();
4e6292114c74122 arch/x86/kernel/alternative.c Juergen Gross 2021-03-11 2374
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2375 /* Keep CET-IBT disabled until caller/callee are patched */
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2376 ibt = ibt_save(/*disable*/ true);
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2377
931ab63664f02b1 arch/x86/kernel/alternative.c Peter Zijlstra 2022-10-27 2378 __apply_fineibt(__retpoline_sites, __retpoline_sites_end,
1d7e707af446134 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2379) __cfi_sites, __cfi_sites_end, true);
026211c40b05548 arch/x86/kernel/alternative.c Kees Cook 2025-09-03 2380 cfi_debug = false;
931ab63664f02b1 arch/x86/kernel/alternative.c Peter Zijlstra 2022-10-27 2381
7508500900814d1 arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2382 /*
7508500900814d1 arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2383 * Rewrite the retpolines, must be done before alternatives since
7508500900814d1 arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2384 * those can rewrite the retpoline thunks.
7508500900814d1 arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2385 */
1d7e707af446134 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2386) apply_retpolines(__retpoline_sites, __retpoline_sites_end);
1d7e707af446134 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2387) apply_returns(__return_sites, __return_sites_end);
7508500900814d1 arch/x86/kernel/alternative.c Peter Zijlstra 2021-10-26 2388
a82b26451de126a arch/x86/kernel/alternative.c Peter Zijlstra (Intel 2025-06-03 2389) its_fini_core();
a82b26451de126a arch/x86/kernel/alternative.c Peter Zijlstra (Intel 2025-06-03 2390)
e81dc127ef69887 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2391 /*
ab9fea59487d8b5 arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2392 * Adjust all CALL instructions to point to func()-10, including
ab9fea59487d8b5 arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2393 * those in .altinstr_replacement.
e81dc127ef69887 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2394 */
e81dc127ef69887 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2395 callthunks_patch_builtin_calls();
e81dc127ef69887 arch/x86/kernel/alternative.c Thomas Gleixner 2022-09-15 2396
ab9fea59487d8b5 arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2397 apply_alternatives(__alt_instructions, __alt_instructions_end);
ab9fea59487d8b5 arch/x86/kernel/alternative.c Peter Zijlstra 2025-02-07 2398
be0fffa5ca894a9 arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2399 /*
be0fffa5ca894a9 arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2400 * Seal all functions that do not have their address taken.
be0fffa5ca894a9 arch/x86/kernel/alternative.c Peter Zijlstra 2023-06-22 2401 */
1d7e707af446134 arch/x86/kernel/alternative.c Mike Rapoport (Microsoft 2025-01-26 2402) apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
ed53a0d971926e4 arch/x86/kernel/alternative.c Peter Zijlstra 2022-03-08 2403
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2404 ibt_restore(ibt);
ebebe30794d38c5 arch/x86/kernel/alternative.c Pawan Gupta 2025-05-03 2405
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v4 00/21] paravirt: cleanup and reorg
2025-11-27 7:08 [PATCH v4 00/21] paravirt: cleanup and reorg Juergen Gross
` (2 preceding siblings ...)
2025-11-27 7:08 ` [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
@ 2025-12-15 8:27 ` Juergen Gross
3 siblings, 0 replies; 7+ messages in thread
From: Juergen Gross @ 2025-12-15 8:27 UTC (permalink / raw)
To: linux-kernel, x86, linux-hyperv, virtualization, loongarch,
linuxppc-dev, linux-riscv, kvm
Cc: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, K. Y. Srinivasan, Haiyang Zhang,
Wei Liu, Dexuan Cui, Peter Zijlstra, Will Deacon, Boqun Feng,
Waiman Long, Jiri Kosina, Josh Poimboeuf, Pawan Gupta,
Boris Ostrovsky, xen-devel, Ajay Kaher, Alexey Makhalov,
Broadcom internal kernel review list, Russell King,
Catalin Marinas, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, linux-arm-kernel,
Paolo Bonzini, Vitaly Kuznetsov, Stefano Stabellini,
Oleksandr Tyshchenko, Daniel Lezcano, Oleg Nesterov
At least for patches 1-12: Ping?
It would be nice to have some feedback for patches 13-21, too (patch 21
needs an update, though).
Juergen
On 27.11.25 08:08, Juergen Gross wrote:
> Some cleanups and reorg of paravirt code and headers:
>
> - The first 2 patches should be not controversial at all, as they
> remove just some no longer needed #include and struct forward
> declarations.
>
> - The 3rd patch is removing CONFIG_PARAVIRT_DEBUG, which IMO has
> no real value, as it just changes a crash to a BUG() (the stack
> trace will basically be the same). As the maintainer of the main
> paravirt user (Xen) I have never seen this crash/BUG() to happen.
>
> - The 4th patch is just a movement of code.
>
> - I don't know for what reason asm/paravirt_api_clock.h was added,
> as all archs supporting it do it exactly in the same way. Patch
> 5 is removing it.
>
> - Patches 6-14 are streamlining the paravirt clock interfaces by
> using a common implementation across architectures where possible
> and by moving the related code into common sched code, as this is
> where it should live.
>
> - Patches 15-20 are more like RFC material preparing the paravirt
> infrastructure to support multiple pv_ops function arrays.
> As a prerequisite for that it makes life in objtool much easier
> with dropping the Xen static initializers of the pv_ops sub-
> structures, which is done in patches 15-17.
> Patches 18-20 are doing the real preparations for multiple pv_ops
> arrays and using those arrays in multiple headers.
>
> - Patch 21 is an example how the new scheme can look like using the
> PV-spinlocks.
>
> Changes in V2:
> - new patches 13-18 and 20
> - complete rework of patch 21
>
> Changes in V3:
> - fixed 2 issues detected by kernel test robot
>
> Changes in V4:
> - fixed one build issue
>
> Juergen Gross (21):
> x86/paravirt: Remove not needed includes of paravirt.h
> x86/paravirt: Remove some unneeded struct declarations
> x86/paravirt: Remove PARAVIRT_DEBUG config option
> x86/paravirt: Move thunk macros to paravirt_types.h
> paravirt: Remove asm/paravirt_api_clock.h
> sched: Move clock related paravirt code to kernel/sched
> arm/paravirt: Use common code for paravirt_steal_clock()
> arm64/paravirt: Use common code for paravirt_steal_clock()
> loongarch/paravirt: Use common code for paravirt_steal_clock()
> riscv/paravirt: Use common code for paravirt_steal_clock()
> x86/paravirt: Use common code for paravirt_steal_clock()
> x86/paravirt: Move paravirt_sched_clock() related code into tsc.c
> x86/paravirt: Introduce new paravirt-base.h header
> x86/paravirt: Move pv_native_*() prototypes to paravirt.c
> x86/xen: Drop xen_irq_ops
> x86/xen: Drop xen_cpu_ops
> x86/xen: Drop xen_mmu_ops
> objtool: Allow multiple pv_ops arrays
> x86/paravirt: Allow pv-calls outside paravirt.h
> x86/paravirt: Specify pv_ops array in paravirt macros
> x86/pvlocks: Move paravirt spinlock functions into own header
>
> arch/Kconfig | 3 +
> arch/arm/Kconfig | 1 +
> arch/arm/include/asm/paravirt.h | 22 --
> arch/arm/include/asm/paravirt_api_clock.h | 1 -
> arch/arm/kernel/Makefile | 1 -
> arch/arm/kernel/paravirt.c | 23 --
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/paravirt.h | 14 -
> arch/arm64/include/asm/paravirt_api_clock.h | 1 -
> arch/arm64/kernel/paravirt.c | 11 +-
> arch/loongarch/Kconfig | 1 +
> arch/loongarch/include/asm/paravirt.h | 13 -
> .../include/asm/paravirt_api_clock.h | 1 -
> arch/loongarch/kernel/paravirt.c | 10 +-
> arch/powerpc/include/asm/paravirt.h | 3 -
> arch/powerpc/include/asm/paravirt_api_clock.h | 2 -
> arch/powerpc/platforms/pseries/setup.c | 4 +-
> arch/riscv/Kconfig | 1 +
> arch/riscv/include/asm/paravirt.h | 14 -
> arch/riscv/include/asm/paravirt_api_clock.h | 1 -
> arch/riscv/kernel/paravirt.c | 11 +-
> arch/x86/Kconfig | 8 +-
> arch/x86/entry/entry_64.S | 1 -
> arch/x86/entry/vsyscall/vsyscall_64.c | 1 -
> arch/x86/hyperv/hv_spinlock.c | 11 +-
> arch/x86/include/asm/apic.h | 4 -
> arch/x86/include/asm/highmem.h | 1 -
> arch/x86/include/asm/mshyperv.h | 1 -
> arch/x86/include/asm/paravirt-base.h | 29 ++
> arch/x86/include/asm/paravirt-spinlock.h | 146 ++++++++
> arch/x86/include/asm/paravirt.h | 331 +++++-------------
> arch/x86/include/asm/paravirt_api_clock.h | 1 -
> arch/x86/include/asm/paravirt_types.h | 269 +++++++-------
> arch/x86/include/asm/pgtable_32.h | 1 -
> arch/x86/include/asm/ptrace.h | 2 +-
> arch/x86/include/asm/qspinlock.h | 89 +----
> arch/x86/include/asm/spinlock.h | 1 -
> arch/x86/include/asm/timer.h | 1 +
> arch/x86/include/asm/tlbflush.h | 4 -
> arch/x86/kernel/Makefile | 2 +-
> arch/x86/kernel/apm_32.c | 1 -
> arch/x86/kernel/callthunks.c | 1 -
> arch/x86/kernel/cpu/bugs.c | 1 -
> arch/x86/kernel/cpu/vmware.c | 1 +
> arch/x86/kernel/kvm.c | 13 +-
> arch/x86/kernel/kvmclock.c | 1 +
> arch/x86/kernel/paravirt-spinlocks.c | 26 +-
> arch/x86/kernel/paravirt.c | 42 +--
> arch/x86/kernel/tsc.c | 10 +-
> arch/x86/kernel/vsmp_64.c | 1 -
> arch/x86/lib/cache-smp.c | 1 -
> arch/x86/mm/init.c | 1 -
> arch/x86/xen/enlighten_pv.c | 82 ++---
> arch/x86/xen/irq.c | 20 +-
> arch/x86/xen/mmu_pv.c | 100 ++----
> arch/x86/xen/spinlock.c | 11 +-
> arch/x86/xen/time.c | 2 +
> drivers/clocksource/hyperv_timer.c | 2 +
> drivers/xen/time.c | 2 +-
> include/linux/sched/cputime.h | 18 +
> kernel/sched/core.c | 5 +
> kernel/sched/cputime.c | 13 +
> kernel/sched/sched.h | 3 +-
> tools/objtool/arch/x86/decode.c | 8 +-
> tools/objtool/check.c | 78 ++++-
> tools/objtool/include/objtool/check.h | 2 +
> 66 files changed, 661 insertions(+), 826 deletions(-)
> delete mode 100644 arch/arm/include/asm/paravirt.h
> delete mode 100644 arch/arm/include/asm/paravirt_api_clock.h
> delete mode 100644 arch/arm/kernel/paravirt.c
> delete mode 100644 arch/arm64/include/asm/paravirt_api_clock.h
> delete mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
> delete mode 100644 arch/powerpc/include/asm/paravirt_api_clock.h
> delete mode 100644 arch/riscv/include/asm/paravirt_api_clock.h
> create mode 100644 arch/x86/include/asm/paravirt-base.h
> create mode 100644 arch/x86/include/asm/paravirt-spinlock.h
> delete mode 100644 arch/x86/include/asm/paravirt_api_clock.h
>
2025-11-27 7:08 [PATCH v4 00/21] paravirt: cleanup and reorg Juergen Gross
2025-11-27 7:08 ` [PATCH v4 01/21] x86/paravirt: Remove not needed includes of paravirt.h Juergen Gross
2025-11-27 7:08 ` [PATCH v4 12/21] x86/paravirt: Move paravirt_sched_clock() related code into tsc.c Juergen Gross
2025-11-27 7:08 ` [PATCH v4 21/21] x86/pvlocks: Move paravirt spinlock functions into own header Juergen Gross
2025-11-27 9:24 ` kernel test robot
2025-11-27 9:24 ` kernel test robot
2025-12-15 8:27 ` [PATCH v4 00/21] paravirt: cleanup and reorg Juergen Gross