* [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance
@ 2026-04-22  6:40 Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 01/12] smp: Disable preemption explicitly in __csd_lock_wait Chuyi Zhou
                   ` (11 more replies)
  0 siblings, 12 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Changes in v5:
 - Replace "smp: Remove get_cpu from smp_call_function_any" with a new
   approach that extracts a common __smp_call_function_single() to safely
   keep the remote CPU selection and IPI dispatch process within a single
   preemption-disabled region in [PATCH v5 3/12].
 - Fix a typo in comments (s/cpumask_stack/task_mask/) and remove the
   obsolete "Preemption must be disabled" constraint from the kernel-doc
   in [PATCH v5 6/12].
 - Adjust the WARN_ON_ONCE() validation condition to avoid a false positive
   warning caused by CPU hotplug races when use_cpus_read_lock is false in
   [PATCH v5 9/12].
 - Move the preemptible() check in smp_call_function_many_cond() from
   [PATCH v5 4/12] to [PATCH v5 6/12].

Changes in v4:
 - Use a task-local IPI cpumask rather than an on-stack cpumask in
   [PATCH v4 4/12] (suggested by Sebastian).
 - Skip freeing csd memory in smpcfd_dead_cpu() to guarantee csd memory
   access safety, instead of using the RCU mechanism, in [PATCH v4 5/12]
   (suggested by Sebastian).
 - Align flush_tlb_info with SMP_CACHE_BYTES to avoid performance
   degradation caused by unnecessary cache line movement in
   [PATCH v4 10/12] (suggested by Sebastian and Nadav).
 - Collect Acked-bys and Reviewed-bys.

Changes in v3:
 - Add benchmarks to measure the performance impact of changing
   flush_tlb_info to a stack variable in [PATCH v3 10/12] (suggested by
   Peter).
 - Adjust the rcu_read_unlock() location in [PATCH v3 5/12] (suggested
   by Muchun).
 - Use raw_smp_processor_id() to prevent the warning[1] from
   check_preemption_disabled() in [PATCH v3 12/12].
 - Collect Acked-bys and Reviewed-bys.

[1]: https://lore.kernel.org/lkml/20260302075216.2170675-1-zhouchuyi@bytedance.com/T/#mc39999cbeb3f50be176f0903d0fa4075688b073d

Changes in v2:
 - Simplify the code comments in [PATCH v2 2/12] (pointed out by Peter
   and Muchun).
 - Adjust the preemption disabling logic in smp_call_function_any() in
   [PATCH v2 3/12] (suggested by Peter).
 - Use an on-stack cpumask only when !CONFIG_CPUMASK_OFFSTACK in
   [PATCH v2 4/12] (pointed out by Peter).
 - Add [PATCH v2 5/12] to replace migrate_disable() with the RCU
   mechanism.
 - Adjust the preemption disabling logic to allow flush_tlb_multi() to be
   preemptible and migratable in [PATCH v2 11/12].
 - Collect Acked-bys and Reviewed-bys.

Introduction
============

The vast majority of smp_call_function*() callers block until remote CPUs
complete the IPI function execution. Because smp_call_function*() runs with
preemption disabled throughout, scheduling latency grows dramatically with
the number of remote CPUs and is worsened by other factors (such as
interrupts being disabled).

On x86-64, TLB flushes are performed via IPIs; thus, during process exit
or when a process's mapped pages are reclaimed, numerous IPI operations
must be awaited, leading to increased scheduling latency for
other threads on the current CPU. In our production environment, we
observed IPI wait-induced scheduling latency reaching up to 16ms on a
16-core machine. Our goal is to allow preemption during IPI completion
waiting to improve real-time performance.

Background
==========

In our production environments, latency-sensitive workloads (DPDK) are
configured with the highest priority to preempt lower-priority tasks at any
time. We discovered that DPDK's wake-up latency is primarily caused by the
current CPU having preemption disabled. Therefore, we collected the
maximum preemption-disabled duration within every 30-second interval and
then calculated the P50/P99 of these per-interval maxima:


                        p50(ns)               p99(ns)
cpu0                   254956                 5465050
cpu1                   115801                 120782
cpu2                   43324                  72957
cpu3                   256637                 16723307
cpu4                   58979                  87237
cpu5                   47464                  79815
cpu6                   48881                  81371
cpu7                   52263                  82294
cpu8                   263555                 4657713
cpu9                   44935                  73962
cpu10                  37659                  65026
cpu11                  257008                 2706878
cpu12                  49669                  90006
cpu13                  45186                  74666
cpu14                  60705                  83866
cpu15                  51311                  86885

Meanwhile, we collected the distribution of preemption-disabled events
exceeding 1ms across different CPUs over several hours (CPUs whose counts
were all zero are omitted):

CPU        1~10ms    10~50ms    50~100ms
cpu0           29          5           0
cpu3           38         13           0
cpu8           34          6           0
cpu11          24         10           0

The preemption-disabled sections lasting several milliseconds or even
10ms+ mostly originate from TLB flushes:

@stack[
    trace_preempt_on+143
    trace_preempt_on+143
    preempt_count_sub+67
    arch_tlbbatch_flush/flush_tlb_mm_range
    task_exit/page_reclaim/...
]

Further analysis confirms that the majority of the time is consumed in
csd_lock_wait().

Today smp_call*() always needs to disable preemption, mainly to protect
its internal per-CPU data structures and to synchronize with CPU offline
operations. This patchset attempts to make csd_lock_wait() preemptible,
thereby shrinking the preemption-disabled critical section and improving
kernel real-time performance.
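
The shape of the change, as a minimal pseudo-C sketch (simplified for
illustration; send_ipi() is a placeholder, not the exact kernel code):

	/* Before this series: the synchronous wait is non-preemptible. */
	preempt_disable();
	send_ipi(cpu, csd);
	csd_lock_wait(csd);	/* may spin for milliseconds */
	preempt_enable();

	/* After this series: only the IPI dispatch is non-preemptible. */
	preempt_disable();
	send_ipi(cpu, csd);
	preempt_enable();
	csd_lock_wait(csd);	/* preemptible; csd is stack- or task-local */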

Effect
======

After applying this patchset, we no longer observe preemption disabled for
more than 1ms on the arch_tlbbatch_flush/flush_tlb_mm_range path. The
overall P99 of the max preemption-disabled duration in every 30-second
interval is reduced to around 1.5ms (the remaining latency is primarily
due to lock contention).

                     before patch    after patch    reduced by
                     ------------    -----------    ----------
p99(ns)                  16723307        1556034       ~90.70%

Chuyi Zhou (12):
  smp: Disable preemption explicitly in __csd_lock_wait
  smp: Enable preemption early in smp_call_function_single
  smp: Refactor remote CPU selection in smp_call_function_any()
  smp: Use task-local IPI cpumask in smp_call_function_many_cond()
  smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once
  smp: Enable preemption early in smp_call_function_many_cond
  smp: Remove preempt_disable from smp_call_function
  smp: Remove preempt_disable from on_each_cpu_cond_mask
  scftorture: Remove preempt_disable in scftorture_invoke_one
  x86/mm: Move flush_tlb_info back to the stack
  x86/mm: Enable preemption during native_flush_tlb_multi
  x86/mm: Enable preemption during flush_tlb_kernel_range

 arch/x86/include/asm/tlbflush.h |   8 +-
 arch/x86/kernel/kvm.c           |   4 +-
 arch/x86/mm/tlb.c               |  86 +++++++-----------
 include/linux/sched.h           |   6 ++
 include/linux/smp.h             |  20 +++++
 kernel/fork.c                   |   9 +-
 kernel/scftorture.c             |  13 +--
 kernel/smp.c                    | 155 ++++++++++++++++++++++++--------
 8 files changed, 194 insertions(+), 107 deletions(-)

-- 
2.20.1


* [PATCH v5 01/12] smp: Disable preemption explicitly in __csd_lock_wait
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 02/12] smp: Enable preemption early in smp_call_function_single Chuyi Zhou
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Later patches will enable preemption before csd_lock_wait(), which could
break csdlock_debug: if the waiting task is preempted, the runtime of
other tasks on the CPU may be accounted between the
ktime_get_mono_fast_ns() calls. Disable preemption explicitly in
__csd_lock_wait(). This is a preparation for the following patches.
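
As a minimal sketch (simplified from the loop below; illustrative only),
the problem is that the wait loop measures wall-clock time:

	ts1 = ts0 = ktime_get_mono_fast_ns();
	for (;;) {
		/*
		 * Without a preemption guard, other tasks may run between
		 * consecutive timestamp reads; their runtime is then charged
		 * to this csd wait, and csd_lock_wait_toolong() may report
		 * a false "csd lock stuck" warning.
		 */
		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
			break;
		cpu_relax();
	}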

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/smp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/smp.c b/kernel/smp.c
index f349960f79ca..fc1f7a964616 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -323,6 +323,8 @@ static void __csd_lock_wait(call_single_data_t *csd)
 	int bug_id = 0;
 	u64 ts0, ts1;
 
+	guard(preempt)();
+
 	ts1 = ts0 = ktime_get_mono_fast_ns();
 	for (;;) {
 		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
-- 
2.20.1


* [PATCH v5 02/12] smp: Enable preemption early in smp_call_function_single
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 01/12] smp: Disable preemption explicitly in __csd_lock_wait Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 03/12] smp: Refactor remote CPU selection in smp_call_function_any() Chuyi Zhou
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Now smp_call_function_single() disables preemption mainly for the following
reasons:

- To protect the per-cpu csd_data from concurrent modification by other
tasks on the current CPU in the !wait case. For the wait case,
synchronization is not a concern, as an on-stack csd is used.

- To prevent the remote online CPU from being offlined. Specifically, we
want to ensure that no new IPIs are queued after smpcfd_dying_cpu() has
finished.

Disabling preemption for the entire execution is unnecessary; in
particular, the csd_lock_wait() part does not require preemption
protection. This patch enables preemption before csd_lock_wait() to reduce
the preemption-disabled critical section.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/smp.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index fc1f7a964616..b603d4229f95 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -685,11 +685,16 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 
 	err = generic_exec_single(cpu, csd);
 
+	/*
+	 * @csd is stack-allocated when @wait is true. No concurrent access
+	 * except from the IPI completion path, so we can re-enable preemption
+	 * early to reduce latency.
+	 */
+	put_cpu();
+
 	if (wait)
 		csd_lock_wait(csd);
 
-	put_cpu();
-
 	return err;
 }
 EXPORT_SYMBOL(smp_call_function_single);
-- 
2.20.1


* [PATCH v5 03/12] smp: Refactor remote CPU selection in smp_call_function_any()
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 01/12] smp: Disable preemption explicitly in __csd_lock_wait Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 02/12] smp: Enable preemption early in smp_call_function_single Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 04/12] smp: Use task-local IPI cpumask in smp_call_function_many_cond() Chuyi Zhou
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Currently, smp_call_function_any() disables preemption across the entire
process of picking a target CPU, enqueueing the IPI, and synchronously
waiting for the remote CPU. Since smp_call_function_single() has already
been optimized to re-enable preemption before the synchronous
csd_lock_wait(), callers of smp_call_function_any() should also benefit
from this optimization to reduce the preemption-disabled critical section.

A naive approach would be to simply remove get_cpu() and put_cpu() from
smp_call_function_any(), leaving the preemption disablement entirely to
smp_call_function_single(). However, doing so opens a dangerous
preemption window between picking the remote CPU (e.g., via
sched_numa_find_nth_cpu()) and dispatching the IPI inside
smp_call_function_single(). If the selected remote CPU is fully offlined
during this window, smp_call_function_single() will fail its
cpu_online() check and return -ENXIO directly to the caller, violating
the guarantee to execute on *any* online CPU in the mask.

To safely enable this optimization, this patch refactors the logic of
smp_call_function_any() and smp_call_function_single(): move the remote
CPU selection into a common __smp_call_function_single() and keep the
entire selection and IPI dispatch process within a single
preemption-disabled region.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 kernel/smp.c | 46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index b603d4229f95..f5bd648d6ae4 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -627,16 +627,8 @@ void flush_smp_call_function_queue(void)
 	local_irq_restore(flags);
 }
 
-/*
- * smp_call_function_single - Run a function on a specific CPU
- * @func: The function to run. This must be fast and non-blocking.
- * @info: An arbitrary pointer to pass to the function.
- * @wait: If true, wait until function has completed on other CPUs.
- *
- * Returns 0 on success, else a negative status code.
- */
-int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
-			     int wait)
+static int __smp_call_function_single(int cpu, smp_call_func_t func,
+			void *info, const struct cpumask *mask, int wait)
 {
 	call_single_data_t *csd;
 	call_single_data_t csd_stack = {
@@ -653,6 +645,14 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 	 */
 	this_cpu = get_cpu();
 
+	if (mask) {
+		/* Try for same CPU (cheapest) */
+		if (!cpumask_test_cpu(this_cpu, mask))
+			cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(this_cpu));
+		else
+			cpu = this_cpu;
+	}
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -697,6 +697,20 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 
 	return err;
 }
+
+/*
+ * smp_call_function_single - Run a function on a specific CPU
+ * @func: The function to run. This must be fast and non-blocking.
+ * @info: An arbitrary pointer to pass to the function.
+ * @wait: If true, wait until function has completed on other CPUs.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
+			     int wait)
+{
+	return __smp_call_function_single(cpu, func, info, NULL, wait);
+}
 EXPORT_SYMBOL(smp_call_function_single);
 
 /**
@@ -761,17 +775,7 @@ EXPORT_SYMBOL_GPL(smp_call_function_single_async);
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait)
 {
-	unsigned int cpu;
-	int ret;
-
-	/* Try for same CPU (cheapest) */
-	cpu = get_cpu();
-	if (!cpumask_test_cpu(cpu, mask))
-		cpu = sched_numa_find_nth_cpu(mask, 0, cpu_to_node(cpu));
-
-	ret = smp_call_function_single(cpu, func, info, wait);
-	put_cpu();
-	return ret;
+	return __smp_call_function_single(-1, func, info, mask, wait);
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
-- 
2.20.1


* [PATCH v5 04/12] smp: Use task-local IPI cpumask in smp_call_function_many_cond()
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (2 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 03/12] smp: Refactor remote CPU selection in smp_call_function_any() Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once Chuyi Zhou
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

This patch prepares a task-local IPI cpumask during thread creation and
uses that local cpumask to replace the percpu cfd cpumask in
smp_call_function_many_cond(). We will enable preemption during
csd_lock_wait() later, and the task-local mask prevents concurrent access
to the cpumask by other tasks on the current CPU. For cases where
cpumask_size() is smaller than or equal to the pointer size, the cpumask
is stashed in the pointer field itself to avoid extra memory allocations.
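
The stashing trick itself can be illustrated with a standalone sketch
(names here are illustrative, not the ones used by this patch):

	union mask_slot {
		unsigned long *ptr;	/* heap bitmap when the mask is wide */
		unsigned long val;	/* inline bitmap when it fits a word */
	};

	static unsigned long *mask_of(union mask_slot *slot, size_t mask_bytes)
	{
		/* Reuse the pointer's own storage when the bitmap fits. */
		if (mask_bytes <= sizeof(unsigned long))
			return &slot->val;
		return slot->ptr;	/* otherwise the allocated bitmap */
	}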

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 include/linux/sched.h |  6 +++++
 include/linux/smp.h   | 20 +++++++++++++++
 kernel/fork.c         |  9 ++++++-
 kernel/smp.c          | 59 ++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8ec3b6d7d718..022df8b9c62f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1347,6 +1347,12 @@ struct task_struct {
 	struct list_head		perf_event_list;
 	struct perf_ctx_data __rcu	*perf_ctx_data;
 #endif
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPTION)
+	union {
+		cpumask_t                       *ipi_mask_ptr;
+		unsigned long			ipi_mask_val;
+	};
+#endif
 #ifdef CONFIG_DEBUG_PREEMPT
 	unsigned long			preempt_disable_ip;
 #endif
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 1ebd88026119..c7b8cc82ad3c 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -167,6 +167,12 @@ void smp_call_function_many(const struct cpumask *mask,
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait);
 
+#ifdef CONFIG_PREEMPTION
+int smp_task_ipi_mask_alloc(struct task_struct *task);
+void smp_task_ipi_mask_free(struct task_struct *task);
+cpumask_t *smp_task_ipi_mask(struct task_struct *cur);
+#endif
+
 void kick_all_cpus_sync(void);
 void wake_up_all_idle_cpus(void);
 bool cpus_peek_for_pending_ipi(const struct cpumask *mask);
@@ -306,4 +312,18 @@ bool csd_lock_is_stuck(void);
 static inline bool csd_lock_is_stuck(void) { return false; }
 #endif
 
+#if !defined(CONFIG_SMP) || !defined(CONFIG_PREEMPTION)
+static inline int smp_task_ipi_mask_alloc(struct task_struct *task)
+{
+	return 0;
+}
+static inline void smp_task_ipi_mask_free(struct task_struct *task)
+{
+}
+static inline cpumask_t *smp_task_ipi_mask(struct task_struct *cur)
+{
+	return NULL;
+}
+#endif
+
 #endif /* __LINUX_SMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 079802cb6100..206dda0d5254 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -533,6 +533,7 @@ void free_task(struct task_struct *tsk)
 #endif
 	release_user_cpus_ptr(tsk);
 	scs_release(tsk);
+	smp_task_ipi_mask_free(tsk);
 
 #ifndef CONFIG_THREAD_INFO_IN_TASK
 	/*
@@ -930,10 +931,14 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 #endif
 	account_kernel_stack(tsk, 1);
 
-	err = scs_prepare(tsk, node);
+	err = smp_task_ipi_mask_alloc(tsk);
 	if (err)
 		goto free_stack;
 
+	err = scs_prepare(tsk, node);
+	if (err)
+		goto free_ipi_mask;
+
 #ifdef CONFIG_SECCOMP
 	/*
 	 * We must handle setting up seccomp filters once we're under
@@ -1004,6 +1009,8 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 #endif
 	return tsk;
 
+free_ipi_mask:
+	smp_task_ipi_mask_free(tsk);
 free_stack:
 	exit_task_stack_account(tsk);
 	free_thread_stack(tsk);
diff --git a/kernel/smp.c b/kernel/smp.c
index f5bd648d6ae4..488ffeec5cd1 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -779,6 +779,44 @@ int smp_call_function_any(const struct cpumask *mask,
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
+static DEFINE_STATIC_KEY_FALSE(ipi_mask_inlined);
+
+#ifdef CONFIG_PREEMPTION
+
+int smp_task_ipi_mask_alloc(struct task_struct *task)
+{
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return 0;
+
+	task->ipi_mask_ptr = kmalloc(cpumask_size(), GFP_KERNEL);
+	if (!task->ipi_mask_ptr)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void smp_task_ipi_mask_free(struct task_struct *task)
+{
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return;
+
+	kfree(task->ipi_mask_ptr);
+}
+
+cpumask_t *smp_task_ipi_mask(struct task_struct *cur)
+{
+	/*
+	 * If cpumask_size() is smaller than or equal to the pointer
+	 * size, it stashes the cpumask in the pointer itself to
+	 * avoid extra memory allocations.
+	 */
+	if (static_branch_unlikely(&ipi_mask_inlined))
+		return (cpumask_t *)&cur->ipi_mask_val;
+
+	return cur->ipi_mask_ptr;
+}
+#endif
+
 /*
  * Flags to be used as scf_flags argument of smp_call_function_many_cond().
  *
@@ -796,11 +834,18 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
+	struct cpumask *cpumask, *task_mask;
+	bool preemptible_wait;
 	int nr_cpus = 0;
 	bool run_remote = false;
 
 	lockdep_assert_preemption_disabled();
 
+	task_mask = smp_task_ipi_mask(current);
+	preemptible_wait = task_mask;
+	cfd = this_cpu_ptr(&cfd_data);
+	cpumask = preemptible_wait ? task_mask : cfd->cpumask;
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -821,16 +866,15 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 
 	/* Check if we need remote execution, i.e., any CPU excluding this one. */
 	if (cpumask_any_and_but(mask, cpu_online_mask, this_cpu) < nr_cpu_ids) {
-		cfd = this_cpu_ptr(&cfd_data);
-		cpumask_and(cfd->cpumask, mask, cpu_online_mask);
-		__cpumask_clear_cpu(this_cpu, cfd->cpumask);
+		cpumask_and(cpumask, mask, cpu_online_mask);
+		__cpumask_clear_cpu(this_cpu, cpumask);
 
 		cpumask_clear(cfd->cpumask_ipi);
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
 
 			if (cond_func && !cond_func(cpu, info)) {
-				__cpumask_clear_cpu(cpu, cfd->cpumask);
+				__cpumask_clear_cpu(cpu, cpumask);
 				continue;
 			}
 
@@ -881,7 +925,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	}
 
 	if (run_remote && wait) {
-		for_each_cpu(cpu, cfd->cpumask) {
+		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd;
 
 			csd = per_cpu_ptr(cfd->csd, cpu);
@@ -997,6 +1041,9 @@ EXPORT_SYMBOL(nr_cpu_ids);
 void __init setup_nr_cpu_ids(void)
 {
 	set_nr_cpu_ids(find_last_bit(cpumask_bits(cpu_possible_mask), NR_CPUS) + 1);
+
+	if (IS_ENABLED(CONFIG_PREEMPTION) && cpumask_size() <= sizeof(unsigned long))
+		static_branch_enable(&ipi_mask_inlined);
 }
 
 /* Called by boot processor to activate the rest. */
-- 
2.20.1


* [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (3 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 04/12] smp: Use task-local IPI cpumask in smp_call_function_many_cond() Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-23  7:29   ` Muchun Song
  2026-04-22  6:40 ` [PATCH v5 06/12] smp: Enable preemption early in smp_call_function_many_cond Chuyi Zhou
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

A later patch will enable preemption during csd_lock_wait() in
smp_call_function_many_cond(), which may lead to accessing cfd->csd data
that has already been freed by smpcfd_dead_cpu().

One way to fix the above issue is to use the RCU mechanism to protect the
csd data and wait for all read-side critical sections to exit before
freeing the memory in smpcfd_dead_cpu(), but this could delay CPU
shutdown. This patch chooses a simpler approach: allocate the percpu csd
on the CPU-up (prepare) side only once and skip freeing the csd memory in
smpcfd_dead_cpu().

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 kernel/smp.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 488ffeec5cd1..134b181fb593 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -63,7 +63,15 @@ int smpcfd_prepare_cpu(unsigned int cpu)
 		free_cpumask_var(cfd->cpumask);
 		return -ENOMEM;
 	}
-	cfd->csd = alloc_percpu(call_single_data_t);
+
+	/*
+	 * The percpu csd is allocated only once and never freed.
+	 * This ensures that smp_call_function_many_cond() can safely
+	 * access the csd of an offlined CPU if it gets preempted
+	 * during csd_lock_wait().
+	 */
+	if (!cfd->csd)
+		cfd->csd = alloc_percpu(call_single_data_t);
 	if (!cfd->csd) {
 		free_cpumask_var(cfd->cpumask);
 		free_cpumask_var(cfd->cpumask_ipi);
@@ -79,7 +87,6 @@ int smpcfd_dead_cpu(unsigned int cpu)
 
 	free_cpumask_var(cfd->cpumask);
 	free_cpumask_var(cfd->cpumask_ipi);
-	free_percpu(cfd->csd);
 	return 0;
 }
 
-- 
2.20.1


* [PATCH v5 06/12] smp: Enable preemption early in smp_call_function_many_cond
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (4 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 07/12] smp: Remove preempt_disable from smp_call_function Chuyi Zhou
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Disabling preemption entirely during smp_call_function_many_cond() was
primarily for the following reasons:

- To prevent the remote online CPU from going offline. Specifically, we
want to ensure that no new csds are queued after smpcfd_dying_cpu() has
finished. Therefore, preemption must be disabled until all necessary IPIs
are sent.

- To prevent the current CPU from going offline. If the waiting task were
migrated to another CPU, csd_lock_wait() could trigger a use-after-free
when smpcfd_dead_cpu() frees the csd data during the original CPU's
offline process.

- To protect the per-cpu cfd_data from concurrent modification by other
tasks on the current CPU. cfd_data contains cpumasks and per-cpu csds.
Before enqueueing a csd, we block on csd_lock() to ensure the previous
async csd->func() has completed, and then initialize csd->func and
csd->info. After sending the IPI, we spin-wait for the remote CPU to call
csd_unlock(). In fact, the csd_lock mechanism already guarantees csd
serialization. If preemption occurs during csd_lock_wait(), other
concurrent smp_call_function_many_cond() calls will simply block until
the previous csd->func() completes:

task A                    task B

csd->func = func_a
send ipis

                preempted by B
               --------------->
                        csd_lock(csd); // block until last
                                       // func_a finished

                        csd->func = func_b;
                        csd->info = info;
                            ...
                        send ipis

                switch back to A
                <---------------

csd_lock_wait(csd); // block until remote CPUs finish func_*

Previous patches replaced the per-cpu cfd->cpumask with a task-local
cpumask, and the percpu csd is now allocated only once and never freed, so
csd accesses remain safe. Now we can enable preemption before
csd_lock_wait(), which makes the potentially long csd_lock_wait()
preemptible and migratable.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 kernel/smp.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 134b181fb593..e0983d5f41a2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -838,7 +838,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 					unsigned int scf_flags,
 					smp_cond_func_t cond_func)
 {
-	int cpu, last_cpu, this_cpu = smp_processor_id();
+	int cpu, last_cpu, this_cpu;
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
 	struct cpumask *cpumask, *task_mask;
@@ -846,10 +846,10 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int nr_cpus = 0;
 	bool run_remote = false;
 
-	lockdep_assert_preemption_disabled();
-
 	task_mask = smp_task_ipi_mask(current);
-	preemptible_wait = task_mask;
+	preemptible_wait = task_mask && preemptible();
+
+	this_cpu = get_cpu();
 	cfd = this_cpu_ptr(&cfd_data);
 	cpumask = preemptible_wait ? task_mask : cfd->cpumask;
 
@@ -931,6 +931,19 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 		local_irq_restore(flags);
 	}
 
+	/*
+	 * We may block in csd_lock_wait() for a significant amount of time,
+	 * especially when interrupts are disabled or with a large number of
+	 * remote CPUs. Try to enable preemption before csd_lock_wait().
+	 *
+	 * Use the task_mask instead of cfd->cpumask to avoid concurrent
+	 * modification by tasks on the same CPU. If preemption occurs during
+	 * csd_lock_wait, other concurrent smp_call_function_many_cond() calls
+	 * will simply block until the previous csd->func() completes.
+	 */
+	if (preemptible_wait)
+		put_cpu();
+
 	if (run_remote && wait) {
 		for_each_cpu(cpu, cpumask) {
 			call_single_data_t *csd;
@@ -939,6 +952,9 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 			csd_lock_wait(csd);
 		}
 	}
+
+	if (!preemptible_wait)
+		put_cpu();
 }
 
 /**
@@ -950,8 +966,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
  *        on other CPUs.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler. Preemption
- * must be disabled when calling this function.
+ * hardware interrupt handler or from a bottom half handler.
  *
  * @func is not called on the local CPU even if @mask contains it.  Consider
  * using on_each_cpu_cond_mask() instead if this is not desirable.
-- 
2.20.1


* [PATCH v5 07/12] smp: Remove preempt_disable from smp_call_function
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (5 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 06/12] smp: Enable preemption early in smp_call_function_many_cond Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 08/12] smp: Remove preempt_disable from on_each_cpu_cond_mask Chuyi Zhou
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Now that smp_call_function_many_cond() internally handles the preemption
logic, smp_call_function() does not need to explicitly disable preemption.
Remove the preempt_disable()/preempt_enable() pair from
smp_call_function().

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
---
 kernel/smp.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index e0983d5f41a2..7200ce6043bc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -995,9 +995,8 @@ EXPORT_SYMBOL(smp_call_function_many);
  */
 void smp_call_function(smp_call_func_t func, void *info, int wait)
 {
-	preempt_disable();
-	smp_call_function_many(cpu_online_mask, func, info, wait);
-	preempt_enable();
+	smp_call_function_many_cond(cpu_online_mask, func, info,
+			wait ? SCF_WAIT : 0, NULL);
 }
 EXPORT_SYMBOL(smp_call_function);
 
-- 
2.20.1


* [PATCH v5 08/12] smp: Remove preempt_disable from on_each_cpu_cond_mask
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (6 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 07/12] smp: Remove preempt_disable from smp_call_function Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 09/12] scftorture: Remove preempt_disable in scftorture_invoke_one Chuyi Zhou
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Now that smp_call_function_many_cond() internally handles the preemption
logic, on_each_cpu_cond_mask() does not need to explicitly disable
preemption. Remove the preempt_disable()/preempt_enable() pair from
on_each_cpu_cond_mask().

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
---
 kernel/smp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 7200ce6043bc..8e28baa42bcf 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1118,9 +1118,7 @@ void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 	if (wait)
 		scf_flags |= SCF_WAIT;
 
-	preempt_disable();
 	smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
-	preempt_enable();
 }
 EXPORT_SYMBOL(on_each_cpu_cond_mask);
 
-- 
2.20.1


* [PATCH v5 09/12] scftorture: Remove preempt_disable in scftorture_invoke_one
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (7 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 08/12] smp: Remove preempt_disable from on_each_cpu_cond_mask Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 10/12] x86/mm: Move flush_tlb_info back to the stack Chuyi Zhou
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Previous patches made the smp_call*() functions handle the preemption
logic internally; thus, the explicit preempt_disable() surrounding these
calls becomes unnecessary. Furthermore, keeping the external
preempt_disable() would prevent scftorture from exercising the newly
narrowed internal preemption-disabled regions during IPI dispatch. This
patch removes the preempt_disable()/preempt_enable() pairs in
scftorture_invoke_one().

Removing this preemption protection could expose a race condition with
CPU hotplug when use_cpus_read_lock is false. Specifically, for
multi-cast operations (SCF_PRIM_MANY or SCF_PRIM_ALL), if only 1 CPU is
online, smp_call_function_many() correctly skips sending IPIs and leaves
scfc_out as false. Without preemption disabled, a CPU hotplug thread
could preempt the test thread, bring a second CPU online, and increment
num_online_cpus(). When the test thread resumes, the validation check
would see num_online_cpus() > 1 and falsely trigger the memory-ordering
warning, leaking the scfcp structure.

To avoid this potential false positive, restrict the num_online_cpus() > 1
condition to only apply when use_cpus_read_lock is true, ensuring the CPU
count remains stable during evaluation.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 kernel/scftorture.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 327c315f411c..2082f9b44370 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -348,6 +348,8 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	int ret = 0;
 	struct scf_check *scfcp = NULL;
 	struct scf_selector *scfsp = scf_sel_rand(trsp);
+	bool is_single = (scfsp->scfs_prim == SCF_PRIM_SINGLE ||
+			  scfsp->scfs_prim == SCF_PRIM_SINGLE_RPC);
 
 	if (scfsp->scfs_prim == SCF_PRIM_SINGLE || scfsp->scfs_wait) {
 		scfcp = kmalloc_obj(*scfcp, GFP_ATOMIC);
@@ -364,8 +366,6 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	}
 	if (use_cpus_read_lock)
 		cpus_read_lock();
-	else
-		preempt_disable();
 	switch (scfsp->scfs_prim) {
 	case SCF_PRIM_RESCHED:
 		if (IS_BUILTIN(CONFIG_SCF_TORTURE_TEST)) {
@@ -411,13 +411,10 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 		if (!ret) {
 			if (use_cpus_read_lock)
 				cpus_read_unlock();
-			else
-				preempt_enable();
+
 			wait_for_completion(&scfcp->scfc_completion);
 			if (use_cpus_read_lock)
 				cpus_read_lock();
-			else
-				preempt_disable();
 		} else {
 			scfp->n_single_rpc_ofl++;
 			scf_add_to_free_list(scfcp);
@@ -452,7 +449,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 			scfcp->scfc_out = true;
 	}
 	if (scfcp && scfsp->scfs_wait) {
-		if (WARN_ON_ONCE((num_online_cpus() > 1 || scfsp->scfs_prim == SCF_PRIM_SINGLE) &&
+		if (WARN_ON_ONCE(((use_cpus_read_lock && num_online_cpus() > 1) || is_single) &&
 				 !scfcp->scfc_out)) {
 			pr_warn("%s: Memory-ordering failure, scfs_prim: %d.\n", __func__, scfsp->scfs_prim);
 			atomic_inc(&n_mb_out_errs); // Leak rather than trash!
@@ -463,8 +460,6 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
 	}
 	if (use_cpus_read_lock)
 		cpus_read_unlock();
-	else
-		preempt_enable();
 	if (allocfail)
 		schedule_timeout_idle((1 + longwait) * HZ);  // Let no-wait handlers complete.
 	else if (!(torture_random(trsp) & 0xfff))
-- 
2.20.1


* [PATCH v5 10/12] x86/mm: Move flush_tlb_info back to the stack
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (8 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 09/12] scftorture: Remove preempt_disable in scftorture_invoke_one Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 11/12] x86/mm: Enable preemption during native_flush_tlb_multi Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 12/12] x86/mm: Enable preemption during flush_tlb_kernel_range Chuyi Zhou
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

Commit 3db6d5a5ecaf ("x86/mm/tlb: Remove 'struct flush_tlb_info' from the
stack") converted flush_tlb_info from stack variable to per-CPU variable.
This brought a performance improvement of around 3% in an extreme test.
However, it also required that all flush_tlb* operations keep preemption
disabled entirely to prevent concurrent modifications of flush_tlb_info.
flush_tlb* needs to send IPIs to remote CPUs and synchronously wait for
all remote CPUs to complete their local TLB flushes. The process could
take tens of milliseconds when interrupts are disabled or with a large
number of remote CPUs.

From the perspective of improving kernel real-time performance, this patch
reverts flush_tlb_info back to a stack variable and aligns it to
SMP_CACHE_BYTES. In certain configurations, SMP_CACHE_BYTES may be large,
so the alignment is capped at 64 bytes. This is a preparation for enabling
preemption during TLB flushes in the next patch.

To evaluate the performance impact of this patch, use the following script
to reproduce the microbenchmark mentioned in commit 3db6d5a5ecaf
("x86/mm/tlb: Remove 'struct flush_tlb_info' from the stack"). The test
environment is an Ice Lake system (Intel(R) Xeon(R) Platinum 8336C) with
128 CPUs and 2 NUMA nodes. During the test, the threads were bound to
specific CPUs, and both pti and mitigations were disabled:

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define NUM_OPS 1000000
    #define NUM_THREADS 3
    #define NUM_RUNS 5
    #define PAGE_SIZE 4096

    volatile int stop_threads = 0;

    void *busy_wait_thread(void *arg) {
        while (!stop_threads) {
            __asm__ volatile ("nop");
        }
        return NULL;
    }

    long long get_usec() {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000000LL + tv.tv_usec;
    }

    int main() {
        pthread_t threads[NUM_THREADS];
        char *addr;
        int i, r;
        addr = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE
		| MAP_ANONYMOUS, -1, 0);

        if (addr == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }

        for (i = 0; i < NUM_THREADS; i++) {
            if (pthread_create(&threads[i], NULL, busy_wait_thread, NULL))
                exit(1);
        }

        printf("Running benchmark: %d runs, %d ops each, %d background\n"
               "threads\n", NUM_RUNS, NUM_OPS, NUM_THREADS);

        for (r = 0; r < NUM_RUNS; r++) {
            long long start, end;
            start = get_usec();
            for (i = 0; i < NUM_OPS; i++) {
                addr[0] = 1;
                if (madvise(addr, PAGE_SIZE, MADV_DONTNEED)) {
                    perror("madvise");
                    exit(1);
                }
            }
            end = get_usec();
            double duration = (double)(end - start);
            double avg_lat = duration / NUM_OPS;
            printf("Run %d: Total time %.2f us, Avg latency %.4f us/op\n",
            r + 1, duration, avg_lat);
        }
        stop_threads = 1;
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        munmap(addr, PAGE_SIZE);
        return 0;
    }

                   base     on-stack-aligned   on-stack-not-aligned
                   ------   ----------------   --------------------
avg (usec/op)      2.5278        2.5261              2.5508
stddev             0.0007        0.0027              0.0023

The benchmark results show that the average latency difference between the
baseline (base) and the properly aligned stack variable (on-stack-aligned)
is within the standard deviation (stddev). This indicates that the
variations are testing noise and that reverting to a stack variable with
proper alignment causes no performance regression compared to the per-CPU
implementation. The unaligned version (on-stack-not-aligned) shows a minor
performance drop. This demonstrates that we can improve real-time behavior
without sacrificing throughput.

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Suggested-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 arch/x86/include/asm/tlbflush.h |  8 +++-
 arch/x86/mm/tlb.c               | 72 +++++++++------------------------
 2 files changed, 27 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 0545fe75c3fa..f4e4505d4ece 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -211,6 +211,12 @@ extern u16 invlpgb_count_max;
 
 extern void initialize_tlbstate_and_flush(void);
 
+#if SMP_CACHE_BYTES > 64
+#define FLUSH_TLB_INFO_ALIGN 64
+#else
+#define FLUSH_TLB_INFO_ALIGN SMP_CACHE_BYTES
+#endif
+
 /*
  * TLB flushing:
  *
@@ -249,7 +255,7 @@ struct flush_tlb_info {
 	u8			stride_shift;
 	u8			freed_tables;
 	u8			trim_cpumask;
-};
+} __aligned(FLUSH_TLB_INFO_ALIGN);
 
 void flush_tlb_local(void);
 void flush_tlb_one_user(unsigned long addr);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index af43d177087e..cfc3a72477f5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1373,28 +1373,12 @@ void flush_tlb_multi(const struct cpumask *cpumask,
  */
 unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);
-
-#ifdef CONFIG_DEBUG_VM
-static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
-#endif
-
-static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
-			unsigned long start, unsigned long end,
-			unsigned int stride_shift, bool freed_tables,
-			u64 new_tlb_gen)
+static void get_flush_tlb_info(struct flush_tlb_info *info,
+			       struct mm_struct *mm,
+			       unsigned long start, unsigned long end,
+			       unsigned int stride_shift, bool freed_tables,
+			       u64 new_tlb_gen)
 {
-	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
-
-#ifdef CONFIG_DEBUG_VM
-	/*
-	 * Ensure that the following code is non-reentrant and flush_tlb_info
-	 * is not overwritten. This means no TLB flushing is initiated by
-	 * interrupt handlers and machine-check exception handlers.
-	 */
-	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
-#endif
-
 	/*
 	 * If the number of flushes is so large that a full flush
 	 * would be faster, do a full flush.
@@ -1412,32 +1396,22 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	info->new_tlb_gen	= new_tlb_gen;
 	info->initiating_cpu	= smp_processor_id();
 	info->trim_cpumask	= 0;
-
-	return info;
-}
-
-static void put_flush_tlb_info(void)
-{
-#ifdef CONFIG_DEBUG_VM
-	/* Complete reentrancy prevention checks */
-	barrier();
-	this_cpu_dec(flush_tlb_info_idx);
-#endif
 }
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info _info;
+	struct flush_tlb_info *info = &_info;
 	int cpu = get_cpu();
 	u64 new_tlb_gen;
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
 
-	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
-				  new_tlb_gen);
+	get_flush_tlb_info(&_info, mm, start, end, stride_shift, freed_tables,
+			   new_tlb_gen);
 
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
@@ -1457,7 +1431,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		local_irq_enable();
 	}
 
-	put_flush_tlb_info();
 	put_cpu();
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
@@ -1527,19 +1500,16 @@ static void kernel_tlb_flush_range(struct flush_tlb_info *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info info;
 
 	guard(preempt)();
+	get_flush_tlb_info(&info, NULL, start, end, PAGE_SHIFT, false,
+			   TLB_GENERATION_INVALID);
 
-	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
-				  TLB_GENERATION_INVALID);
-
-	if (info->end == TLB_FLUSH_ALL)
-		kernel_tlb_flush_all(info);
+	if (info.end == TLB_FLUSH_ALL)
+		kernel_tlb_flush_all(&info);
 	else
-		kernel_tlb_flush_range(info);
-
-	put_flush_tlb_info();
+		kernel_tlb_flush_range(&info);
 }
 
 /*
@@ -1707,12 +1677,11 @@ EXPORT_SYMBOL_FOR_KVM(__flush_tlb_all);
 
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
-	struct flush_tlb_info *info;
+	struct flush_tlb_info info;
 
 	int cpu = get_cpu();
-
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
-				  TLB_GENERATION_INVALID);
+	get_flush_tlb_info(&info, NULL, 0, TLB_FLUSH_ALL, 0, false,
+			   TLB_GENERATION_INVALID);
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
 	 * a local TLB flush is needed. Optimize this use-case by calling
@@ -1722,17 +1691,16 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		invlpgb_flush_all_nonglobals();
 		batch->unmapped_pages = false;
 	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
-		flush_tlb_multi(&batch->cpumask, info);
+		flush_tlb_multi(&batch->cpumask, &info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func(info);
+		flush_tlb_func(&info);
 		local_irq_enable();
 	}
 
 	cpumask_clear(&batch->cpumask);
 
-	put_flush_tlb_info();
 	put_cpu();
 }
 
-- 
2.20.1


* [PATCH v5 11/12] x86/mm: Enable preemption during native_flush_tlb_multi
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (9 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 10/12] x86/mm: Move flush_tlb_info back to the stack Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  2026-04-22  6:40 ` [PATCH v5 12/12] x86/mm: Enable preemption during flush_tlb_kernel_range Chuyi Zhou
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

native_flush_tlb_multi() may be called frequently by flush_tlb_mm_range()
and arch_tlbbatch_flush() in production environments. When pages are
reclaimed or a process exits, native_flush_tlb_multi() sends IPIs to
remote CPUs and waits for all remote CPUs to complete their local TLB
flushes. The overall latency may reach tens of milliseconds due to a large
number of remote CPUs and other factors (such as interrupts being
disabled). Since flush_tlb_mm_range() and arch_tlbbatch_flush() always
disable preemption, this may increase scheduling latency for other threads
on the current CPU.

The previous patch converted flush_tlb_info from a per-cpu variable to an
on-stack variable. Additionally, it is no longer necessary to explicitly
disable preemption before calling smp_call*(), since those functions
internally handle the preemption logic. Now it is safe to enable
preemption during native_flush_tlb_multi().

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 arch/x86/kernel/kvm.c | 4 +++-
 arch/x86/mm/tlb.c     | 9 +++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 3bc062363814..4f7f4c1149b9 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -668,8 +668,10 @@ static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 	u8 state;
 	int cpu;
 	struct kvm_steal_time *src;
-	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_cpu_mask);
+	struct cpumask *flushmask;
 
+	guard(preempt)();
+	flushmask = this_cpu_cpumask_var_ptr(__pv_cpu_mask);
 	cpumask_copy(flushmask, cpumask);
 	/*
 	 * We have to call flush only on online vCPUs. And
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index cfc3a72477f5..58c6f3d2f993 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1421,9 +1421,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	if (mm_global_asid(mm)) {
 		broadcast_tlb_flush(info);
 	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+		put_cpu();
 		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
 		consider_global_asid(mm);
+		goto invalidate;
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
@@ -1432,6 +1434,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	}
 
 	put_cpu();
+invalidate:
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
@@ -1691,7 +1694,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		invlpgb_flush_all_nonglobals();
 		batch->unmapped_pages = false;
 	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+		put_cpu();
 		flush_tlb_multi(&batch->cpumask, &info);
+		goto clear;
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
@@ -1699,9 +1704,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
-	cpumask_clear(&batch->cpumask);
-
 	put_cpu();
+clear:
+	cpumask_clear(&batch->cpumask);
 }
 
 /*
-- 
2.20.1


* [PATCH v5 12/12] x86/mm: Enable preemption during flush_tlb_kernel_range
  2026-04-22  6:40 [PATCH v5 00/12] Allow preemption during IPI completion waiting to improve real-time performance Chuyi Zhou
                   ` (10 preceding siblings ...)
  2026-04-22  6:40 ` [PATCH v5 11/12] x86/mm: Enable preemption during native_flush_tlb_multi Chuyi Zhou
@ 2026-04-22  6:40 ` Chuyi Zhou
  11 siblings, 0 replies; 14+ messages in thread
From: Chuyi Zhou @ 2026-04-22  6:40 UTC (permalink / raw)
  To: tglx, mingo, luto, peterz, paulmck, muchun.song, bp, dave.hansen,
	pbonzini, bigeasy, clrkwllms, rostedt, nadav.amit
  Cc: linux-kernel, Chuyi Zhou

flush_tlb_kernel_range() is invoked when kernel memory mappings change.
On x86 platforms without the INVLPGB feature enabled, we need to send IPIs
to every online CPU and synchronously wait for them to complete
do_kernel_range_flush(). This process can be time-consuming due to factors
such as a large number of CPUs or other issues (like interrupts being
disabled). flush_tlb_kernel_range() always disables preemption, which may
affect the scheduling latency of other tasks on the current CPU.

The previous patch converted flush_tlb_info from a per-cpu variable to an
on-stack variable. Additionally, it is no longer necessary to explicitly
disable preemption before calling smp_call*(), since those functions
internally handle the preemption logic. Now it is safe to enable
preemption during flush_tlb_kernel_range(). Also use
raw_smp_processor_id() in get_flush_tlb_info() to avoid warnings from
check_preemption_disabled().

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
---
 arch/x86/mm/tlb.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 58c6f3d2f993..c37cc9845abc 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1394,7 +1394,7 @@ static void get_flush_tlb_info(struct flush_tlb_info *info,
 	info->stride_shift	= stride_shift;
 	info->freed_tables	= freed_tables;
 	info->new_tlb_gen	= new_tlb_gen;
-	info->initiating_cpu	= smp_processor_id();
+	info->initiating_cpu	= raw_smp_processor_id();
 	info->trim_cpumask	= 0;
 }
 
@@ -1461,6 +1461,8 @@ static void invlpgb_kernel_range_flush(struct flush_tlb_info *info)
 {
 	unsigned long addr, nr;
 
+	guard(preempt)();
+
 	for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
 		nr = (info->end - addr) >> PAGE_SHIFT;
 
@@ -1505,7 +1507,6 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	struct flush_tlb_info info;
 
-	guard(preempt)();
 	get_flush_tlb_info(&info, NULL, start, end, PAGE_SHIFT, false,
 			   TLB_GENERATION_INVALID);
 
-- 
2.20.1


* Re: [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once
  2026-04-22  6:40 ` [PATCH v5 05/12] smp: Alloc percpu csd data in smpcfd_prepare_cpu() only once Chuyi Zhou
@ 2026-04-23  7:29   ` Muchun Song
  0 siblings, 0 replies; 14+ messages in thread
From: Muchun Song @ 2026-04-23  7:29 UTC (permalink / raw)
  To: Chuyi Zhou
  Cc: tglx, mingo, luto, peterz, paulmck, bp, dave.hansen, pbonzini,
	bigeasy, clrkwllms, rostedt, nadav.amit, linux-kernel



> On Apr 22, 2026, at 14:40, Chuyi Zhou <zhouchuyi@bytedance.com> wrote:
> 
> A later patch will enable preemption during csd_lock_wait() in
> smp_call_function_many_cond(), which may lead to accessing cfd->csd data
> that has already been freed by smpcfd_dead_cpu().
> 
> One way to fix the above issue is to use the RCU mechanism to protect the
> csd data and wait for all read-side critical sections to exit before
> freeing the memory in smpcfd_dead_cpu(), but this could delay CPU
> shutdown. This patch chooses a simpler approach: allocate the percpu csd
> on the CPU-up (prepare) side only once and skip freeing the csd memory in
> smpcfd_dead_cpu().
> 
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>

Acked-by: Muchun Song <muchun.song@linux.dev>

Thanks.

