* [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration
@ 2025-07-14 13:30 Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 1/7] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
` (6 more replies)
0 siblings, 7 replies; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU-intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
Except if it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)
The resulting check is sketched right below.
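A minimal sketch of that check (the function name is illustrative; it
mirrors the tmigr_is_isolated() helper added by patch 7, minus its
boot-time gate; the timekeeper exception is checked separately, at
SMP-call time):

#include <linux/cpuset.h>
#include <linux/sched/isolation.h>

/*
 * Sketch only: a CPU is excluded from the hierarchy as isolated when
 * it is domain isolated (isolcpus= or an isolated cpuset) but not
 * nohz_full. The nohz timekeeper exception is handled separately.
 */
static bool cpu_excluded_as_isolated(int cpu)
{
	/* nohz_full CPUs stay in the hierarchy (always considered idle) */
	if (!housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
		return false;
	/* domain isolated via isolcpus= or via an isolated cpuset */
	return !housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
	       cpuset_cpu_is_isolated(cpu);
}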
All online CPUs are added to the hierarchy during early boot; isolated
CPUs are removed during late boot, if configured on the command line,
or whenever the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers in turn; skipping only the pulling of remote
timers would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. the IRQ pinning sketched below).
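A hedged illustration of such a mitigation (the IRQ number is a
placeholder; the bitmask pins to housekeeping CPU 0):

# Hypothetical mitigation: keep device IRQs (and the timers their
# handlers arm) on housekeeping CPU 0; '42' is a placeholder IRQ.
echo 1 > /proc/irq/default_smp_affinity
echo 1 > /proc/irq/42/smp_affinity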
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the lowest-numbered CPU in each timer migration hierarchy (here 1
and 65) appears always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
Before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
After the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
The first 4 patches are preparatory work: they change the concept of
online/offline to available/unavailable, keep track of available CPUs
in a separate cpumask, clean up the setting/clearing functions and
rename a function in cpuset code.
Patches 5 and 6 adapt the isolation and cpuset code to prevent domain
isolated and nohz_full CPUs from covering all CPUs, which would leave
no housekeeping one. Such a configuration would cause problems with the
changes introduced in this series because no CPU would remain to handle
global timers.
Patch 7 extends the unavailable status to domain isolated CPUs, which
is the main contribution of the series.
Changes since v7:
* Move tmigr_available_cpumask out of tmc lock and specify conditions.
* Initialise tmigr isolation regardless of the state of isolcpus.
* Move the tick CPU check into the condition for running the SMP call.
* Fix descriptions.
Changes since v6 [1]:
* Prevent isolation checks from running during early boot
* Prevent double (de)activation while setting cpus (un)available
* Use synchronous smp calls from the isolation path
* General cleanup
Changes since v5:
* Remove fallback if no housekeeping is left by isolcpus and nohz_full
* Adjust condition not to activate CPUs in the migration hierarchy
* Always force the nohz tick CPU active in the hierarchy
Changes since v4 [2]:
* use on_each_cpu_mask() with changes on isolated CPUs to avoid races
* keep nohz_full CPUs included in the timer migration hierarchy
* prevent domain isolated and nohz_full to cover all CPUs
Changes since v3:
* add parameter to function documentation
* split into multiple straightforward patches
Changes since v2:
* improve comments about handling CPUs isolated at boot
* minor cleanup
Changes since v1 [3]:
* split into smaller patches
* use available mask instead of unavailable
* simplification and cleanup
[1] - https://lore.kernel.org/lkml/20250530142031.215594-1-gmonaco@redhat.com
[2] - https://lore.kernel.org/lkml/20250506091534.42117-7-gmonaco@redhat.com
[3] - https://lore.kernel.org/lkml/20250410065446.57304-2-gmonaco@redhat.com
Gabriele Monaco (7):
timers: Rename tmigr 'online' bit to 'available'
timers: Add the available mask in timer migration
timers: Use scoped_guard when setting/clearing the tmigr available
flag
cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to
update_exclusion_cpumasks()
sched/isolation: Force housekeeping if isolcpus and nohz_full don't
leave any
cgroup/cpuset: Fail if isolated and nohz_full don't leave any
housekeeping
timers: Exclude isolated cpus from timer migration
include/linux/timer.h | 9 ++
include/trace/events/timer_migration.h | 4 +-
kernel/cgroup/cpuset.c | 71 +++++++++++-
kernel/sched/isolation.c | 12 ++
kernel/time/timer_migration.c | 153 +++++++++++++++++++++----
kernel/time/timer_migration.h | 2 +-
6 files changed, 217 insertions(+), 34 deletions(-)
base-commit: 347e9f5043c89695b01e66b3ed111755afcf1911
--
2.50.1
* [PATCH v8 1/7] timers: Rename tmigr 'online' bit to 'available'
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 2/7] timers: Add the available mask in timer migration Gabriele Monaco
` (5 subsequent siblings)
6 siblings, 0 replies; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration hierarchy excludes offline CPUs via
tmigr_is_not_available(), which essentially checks the CPU's online
bit.
Rename the online bit to available, along with all references in
function names and tracepoints, to generalise the concept of available
CPUs.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/trace/events/timer_migration.h | 4 ++--
kernel/time/timer_migration.c | 22 +++++++++++-----------
kernel/time/timer_migration.h | 2 +-
3 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 47db5eaf2f9ab..61171b13c687c 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -173,14 +173,14 @@ DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_active,
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_online,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_available,
TP_PROTO(struct tmigr_cpu *tmc),
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_offline,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_unavailable,
TP_PROTO(struct tmigr_cpu *tmc),
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2f6330831f084..2c2c8810b8137 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -427,7 +427,7 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
{
- return !(tmc->tmgroup && tmc->online);
+ return !(tmc->tmgroup && tmc->available);
}
/*
@@ -926,7 +926,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* updated the event takes care when hierarchy is completely
* idle. Otherwise the migrator does it as the event is enqueued.
*/
- if (!tmc->online || tmc->remote || tmc->cpuevt.ignore ||
+ if (!tmc->available || tmc->remote || tmc->cpuevt.ignore ||
now < tmc->cpuevt.nextevt.expires) {
raw_spin_unlock_irq(&tmc->lock);
return;
@@ -973,7 +973,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* (See also section "Required event and timerqueue update after a
* remote expiry" in the documentation at the top)
*/
- if (!tmc->online || !tmc->idle) {
+ if (!tmc->available || !tmc->idle) {
timer_unlock_remote_bases(cpu);
goto unlock;
}
@@ -1435,19 +1435,19 @@ static long tmigr_trigger_active(void *unused)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
- WARN_ON_ONCE(!tmc->online || tmc->idle);
+ WARN_ON_ONCE(!tmc->available || tmc->idle);
return 0;
}
-static int tmigr_cpu_offline(unsigned int cpu)
+static int tmigr_clear_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
int migrator;
u64 firstexp;
raw_spin_lock_irq(&tmc->lock);
- tmc->online = false;
+ tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
/*
@@ -1455,7 +1455,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
* offline; Therefore nextevt value is set to KTIME_MAX
*/
firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
- trace_tmigr_cpu_offline(tmc);
+ trace_tmigr_cpu_unavailable(tmc);
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
@@ -1466,7 +1466,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
return 0;
}
-static int tmigr_cpu_online(unsigned int cpu)
+static int tmigr_set_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
@@ -1475,11 +1475,11 @@ static int tmigr_cpu_online(unsigned int cpu)
return -EINVAL;
raw_spin_lock_irq(&tmc->lock);
- trace_tmigr_cpu_online(tmc);
+ trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
__tmigr_cpu_activate(tmc);
- tmc->online = true;
+ tmc->available = true;
raw_spin_unlock_irq(&tmc->lock);
return 0;
}
@@ -1850,7 +1850,7 @@ static int __init tmigr_init(void)
goto err;
ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_cpu_online, tmigr_cpu_offline);
+ tmigr_set_cpu_available, tmigr_clear_cpu_available);
if (ret)
goto err;
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index ae19f70f8170f..70879cde6fdd0 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -97,7 +97,7 @@ struct tmigr_group {
*/
struct tmigr_cpu {
raw_spinlock_t lock;
- bool online;
+ bool available;
bool idle;
bool remote;
struct tmigr_group *tmgroup;
--
2.50.1
* [PATCH v8 2/7] timers: Add the available mask in timer migration
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 1/7] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-24 10:20 ` Frederic Weisbecker
2025-07-14 13:30 ` [PATCH v8 3/7] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
` (4 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Keep track of the CPUs available for timer migration in a cpumask. This
prepares the ground to generalise the concept of unavailable CPUs.
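As background for the allocation below, a minimal sketch of the
cpumask_var_t pattern this relies on (the names are illustrative):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/init.h>

/*
 * Sketch only: with CONFIG_CPUMASK_OFFSTACK the mask is heap-allocated
 * and zalloc_cpumask_var() can fail; without it, the mask is a plain
 * bitmap and allocation always succeeds.
 */
static cpumask_var_t example_mask;

static int __init example_init(void)
{
	if (!zalloc_cpumask_var(&example_mask, GFP_KERNEL))
		return -ENOMEM;
	cpumask_set_cpu(0, example_mask);	/* mark CPU 0 available */
	return 0;
}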
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2c2c8810b8137..b4623ac3c2830 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -422,6 +422,12 @@ static unsigned int tmigr_crossnode_level __read_mostly;
static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
+/*
+ * CPUs available for timer migration.
+ * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ */
+static cpumask_var_t tmigr_available_cpumask;
+
#define TMIGR_NONE 0xFF
#define BIT_CNT 8
@@ -1446,6 +1452,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
int migrator;
u64 firstexp;
+ cpumask_clear_cpu(cpu, tmigr_available_cpumask);
raw_spin_lock_irq(&tmc->lock);
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1459,7 +1466,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
- migrator = cpumask_any_but(cpu_online_mask, cpu);
+ migrator = cpumask_any(tmigr_available_cpumask);
work_on_cpu(migrator, tmigr_trigger_active, NULL);
}
@@ -1474,6 +1481,7 @@ static int tmigr_set_cpu_available(unsigned int cpu)
if (WARN_ON_ONCE(!tmc->tmgroup))
return -EINVAL;
+ cpumask_set_cpu(cpu, tmigr_available_cpumask);
raw_spin_lock_irq(&tmc->lock);
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
@@ -1801,6 +1809,11 @@ static int __init tmigr_init(void)
if (ncpus == 1)
return 0;
+ if (!zalloc_cpumask_var(&tmigr_available_cpumask, GFP_KERNEL)) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
/*
* Calculate the required hierarchy levels. Unfortunately there is no
* reliable information available, unless all possible CPUs have been
--
2.50.1
* [PATCH v8 3/7] timers: Use scoped_guard when setting/clearing the tmigr available flag
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 1/7] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 2/7] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-24 10:29 ` Frederic Weisbecker
2025-07-14 13:30 ` [PATCH v8 4/7] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
` (3 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Clean up tmigr_clear_cpu_available() and tmigr_set_cpu_available() to
prepare for easier checks on the available flag.
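For readers unfamiliar with the guard infrastructure, a minimal sketch
of what scoped_guard() buys here (the function is illustrative; it
assumes the <linux/cleanup.h> guard support for raw spinlocks):

#include <linux/cleanup.h>
#include <linux/spinlock.h>
#include "timer_migration.h"	/* struct tmigr_cpu */

/*
 * Sketch only: scoped_guard() acquires the lock on entry to the braced
 * scope and releases it on every exit path, so the early returns added
 * by a later patch cannot leak the lock.
 */
static int example_mark_available(struct tmigr_cpu *tmc)
{
	scoped_guard(raw_spinlock_irq, &tmc->lock) {
		if (tmc->available)
			return 0;	/* the guard drops the lock here too */
		tmc->available = true;
	}
	return 0;
}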
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index b4623ac3c2830..878fd3af40ecb 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1453,17 +1453,17 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
u64 firstexp;
cpumask_clear_cpu(cpu, tmigr_available_cpumask);
- raw_spin_lock_irq(&tmc->lock);
- tmc->available = false;
- WRITE_ONCE(tmc->wakeup, KTIME_MAX);
+ scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ tmc->available = false;
+ WRITE_ONCE(tmc->wakeup, KTIME_MAX);
- /*
- * CPU has to handle the local events on his own, when on the way to
- * offline; Therefore nextevt value is set to KTIME_MAX
- */
- firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
- trace_tmigr_cpu_unavailable(tmc);
- raw_spin_unlock_irq(&tmc->lock);
+ /*
+ * CPU has to handle the local events on his own, when on the way to
+ * offline; Therefore nextevt value is set to KTIME_MAX
+ */
+ firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
+ trace_tmigr_cpu_unavailable(tmc);
+ }
if (firstexp != KTIME_MAX) {
migrator = cpumask_any(tmigr_available_cpumask);
@@ -1482,13 +1482,13 @@ static int tmigr_set_cpu_available(unsigned int cpu)
return -EINVAL;
cpumask_set_cpu(cpu, tmigr_available_cpumask);
- raw_spin_lock_irq(&tmc->lock);
- trace_tmigr_cpu_available(tmc);
- tmc->idle = timer_base_is_idle();
- if (!tmc->idle)
- __tmigr_cpu_activate(tmc);
- tmc->available = true;
- raw_spin_unlock_irq(&tmc->lock);
+ scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ trace_tmigr_cpu_available(tmc);
+ tmc->idle = timer_base_is_idle();
+ if (!tmc->idle)
+ __tmigr_cpu_activate(tmc);
+ tmc->available = true;
+ }
return 0;
}
--
2.50.1
* [PATCH v8 4/7] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (2 preceding siblings ...)
2025-07-14 13:30 ` [PATCH v8 3/7] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-24 10:33 ` Frederic Weisbecker
2025-07-14 13:30 ` [PATCH v8 5/7] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
update_unbound_workqueue_cpumask() updates unbound workqueue settings
when there's a change in isolated CPUs, but the same spot can serve
other subsystems that require updates when isolated CPUs change.
Generalise the name to update_exclusion_cpumasks() to prepare for
functions unrelated to workqueues being called in that spot.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3bc4301466f33..6e3f44ffaa219 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1339,7 +1339,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1470,7 +1470,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
list_add(&cs->remote_sibling, &remote_children);
cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
cs->prs_err = 0;
@@ -1511,7 +1511,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_exclusive_cpumask(cs, NULL, NULL);
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
/*
@@ -1580,7 +1580,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
if (xcpus)
cpumask_copy(cs->exclusive_cpus, xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if (adding || deleting)
cpuset_force_rebuild();
@@ -1943,7 +1943,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive_flag(cs, new_prs);
@@ -2968,7 +2968,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
else if (isolcpus_updated)
isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
/* Force update if switching back to member & update effective_xcpus */
update_cpumasks_hier(cs, &tmpmask, !new_prs);
--
2.50.1
* [PATCH v8 5/7] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (3 preceding siblings ...)
2025-07-14 13:30 ` [PATCH v8 4/7] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 6/7] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-07-14 13:30 ` [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
6 siblings, 0 replies; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolcpus and nohz_full in such a way that
no housekeeping CPU is left (i.e. no CPU that is neither domain
isolated nor nohz_full). This can be a problem for other subsystems
(e.g. the timer wheel migration).
Prevent this configuration by invalidating the last setting in case the
union of isolcpus and nohz_full covers all CPUs, as in the hypothetical
example below.
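A hypothetical illustration on a 4-CPU system (CPU numbers made up;
whichever of the two parameters is parsed last is the one invalidated):

# The union of the two masks covers all four CPUs, so no housekeeping
# CPU remains and the setting parsed last is ignored with a warning.
isolcpus=0,1 nohz_full=2,3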
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/sched/isolation.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 93b038d48900a..0019d941de683 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -165,6 +165,18 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
}
}
+ /* Check in combination with the previously set cpumask */
+ type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
+ first_cpu = cpumask_first_and_and(cpu_present_mask,
+ housekeeping_staging,
+ housekeeping.cpumasks[type]);
+ if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
+ pr_warn("Housekeeping: must include one present CPU neither "
+ "in nohz_full= nor in isolcpus=, ignoring setting %s\n",
+ str);
+ goto free_housekeeping_staging;
+ }
+
iter_flags = flags & ~housekeeping.flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
--
2.50.1
* [PATCH v8 6/7] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (4 preceding siblings ...)
2025-07-14 13:30 ` [PATCH v8 5/7] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-24 13:01 ` Frederic Weisbecker
2025-07-14 13:30 ` [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
6 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolated CPUs via cpuset and nohz_full in
such a way that no housekeeping CPU is left (i.e. no CPU that is
neither domain isolated nor nohz_full). This can be a problem for other
subsystems (e.g. the timer wheel migration).
Prevent this configuration by rejecting any assignment that would cause
the union of domain isolated CPUs and nohz_full to cover all CPUs, as
in the hypothetical example below.
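A hypothetical cgroup v2 illustration (paths and CPU numbers are made
up; assumes a 4-CPU system booted with nohz_full=2,3):

mkdir /sys/fs/cgroup/rt
echo 0-1 > /sys/fs/cgroup/rt/cpuset.cpus
# Isolating CPUs 0-1 would leave no housekeeping CPU (2-3 are
# nohz_full), so the request is now rejected with PERR_HKEEPING.
echo isolated > /sys/fs/cgroup/rt/cpuset.cpus.partition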
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 56 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 6e3f44ffaa219..a946d85ce954a 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1275,6 +1275,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
}
+/*
+ * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
+ * @prs: new or old partition_root_state
+ * @parent: parent cpuset
+ * Return: true if isolated_cpus needs modification, false otherwise
+ */
+static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
+{
+ if (!parent)
+ parent = &top_cpuset;
+ return prs != parent->partition_root_state;
+}
+
/*
* partition_xcpus_add - Add new exclusive CPUs to partition
* @new_prs: new partition_root_state
@@ -1339,6 +1352,35 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
+/*
+ * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
+ * @new_cpus: cpu mask for cpus that are going to be isolated
+ * Return: true if there is conflict, false otherwise
+ *
+ * If nohz_full is enabled and we have isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ */
+static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
+{
+ cpumask_var_t full_hk_cpus;
+ int res = false;
+
+ if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
+ return false;
+
+ if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
+ return true;
+
+ cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
+ if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
+ res = true;
+
+ free_cpumask_var(full_hk_cpus);
+ return res;
+}
+
static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1464,6 +1506,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
return PERR_INVCPUS;
+ if (isolated_cpus_should_update(new_prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->new_cpus))
+ return PERR_HKEEPING;
spin_lock_irq(&callback_lock);
isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
@@ -1563,6 +1608,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
cs->prs_err = PERR_NOCPUS;
+ else if (isolated_cpus_should_update(prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->addmask))
+ cs->prs_err = PERR_HKEEPING;
if (cs->prs_err)
goto invalidate;
}
@@ -1914,6 +1962,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
return err;
}
+ if (deleting && isolated_cpus_should_update(new_prs, parent) &&
+ isolcpus_nohz_conflict(tmp->delmask)) {
+ cs->prs_err = PERR_HKEEPING;
+ return PERR_HKEEPING;
+ }
+
/*
* Change the parent's effective_cpus & effective_xcpus (top cpuset
* only).
@@ -2934,6 +2988,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Need to update isolated_cpus.
*/
isolcpus_updated = true;
+ if (isolcpus_nohz_conflict(cs->effective_xcpus))
+ err = PERR_HKEEPING;
} else {
/*
* Switching back to member is always allowed even if it
--
2.50.1
* [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration
2025-07-14 13:30 [PATCH v8 0/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (5 preceding siblings ...)
2025-07-14 13:30 ` [PATCH v8 6/7] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-07-14 13:30 ` Gabriele Monaco
2025-07-24 23:05 ` Frederic Weisbecker
6 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-14 13:30 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU-intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
Except if it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)
All online CPUs are added to the hierarchy during early boot; isolated
CPUs are removed during late boot, if configured on the command line,
or whenever the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers in turn; skipping only the pulling of remote
timers would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the lowest-numbered CPU in each timer migration hierarchy (here 1
and 65) appears always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
Before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
After the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/timer.h | 9 ++++
kernel/cgroup/cpuset.c | 3 ++
kernel/time/timer_migration.c | 90 +++++++++++++++++++++++++++++++++++
3 files changed, 102 insertions(+)
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 0414d9e6b4fcd..62e1cea711257 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
#define timers_dead_cpu NULL
#endif
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ return 0;
+}
+#endif
+
#endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a946d85ce954a..ff5b66abd0474 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1392,6 +1392,9 @@ static void update_exclusion_cpumasks(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+
+ ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 878fd3af40ecb..c07cc9a2b209d 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
#include <linux/spinlock.h>
#include <linux/timerqueue.h>
#include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>
#include "timer_migration.h"
#include "tick-internal.h"
@@ -428,6 +429,9 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
*/
static cpumask_var_t tmigr_available_cpumask;
+/* Enabled during late initcall */
+static bool tmigr_exclude_isolated __read_mostly;
+
#define TMIGR_NONE 0xFF
#define BIT_CNT 8
@@ -436,6 +440,24 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
return !(tmc->tmgroup && tmc->available);
}
+/*
+ * Returns true if @cpu should be excluded from the hierarchy as isolated.
+ * Domain isolated CPUs don't participate in timer migration, nohz_full
+ * CPUs are still part of the hierarchy but are always considered idle.
+ * This behaviour depends on the value of tmigr_exclude_isolated, which is
+ * normally disabled during early boot.
+ * This check is necessary, for instance, to prevent offline isolated CPU from
+ * being incorrectly marked as available once getting back online.
+ */
+static inline bool tmigr_is_isolated(int cpu)
+{
+ if (!tmigr_exclude_isolated)
+ return false;
+ return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
+ cpuset_cpu_is_isolated(cpu)) &&
+ housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+}
+
/*
* Returns true, when @childmask corresponds to the group migrator or when the
* group is not active - so no migrator is set.
@@ -1454,6 +1476,8 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
cpumask_clear_cpu(cpu, tmigr_available_cpumask);
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (!tmc->available)
+ return 0;
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1481,8 +1505,12 @@ static int tmigr_set_cpu_available(unsigned int cpu)
if (WARN_ON_ONCE(!tmc->tmgroup))
return -EINVAL;
+ if (tmigr_is_isolated(cpu))
+ return 0;
cpumask_set_cpu(cpu, tmigr_available_cpumask);
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (tmc->available)
+ return 0;
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
@@ -1492,6 +1520,67 @@ static int tmigr_set_cpu_available(unsigned int cpu)
return 0;
}
+static bool tmigr_should_isolate_cpu(int cpu, void *ignored)
+{
+ /*
+ * The tick CPU can be marked as isolated by the cpuset code, however
+ * we cannot mark it as unavailable to avoid having no global migrator
+ * for the nohz_full CPUs.
+ */
+ return tick_nohz_cpu_hotpluggable(cpu);
+}
+
+static void tmigr_cpu_isolate(void *ignored)
+{
+ tmigr_clear_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate(void *ignored)
+{
+ tmigr_set_cpu_available(smp_processor_id());
+}
+
+int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ cpumask_var_t cpumask;
+
+ lockdep_assert_cpus_held();
+
+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+ return -ENOMEM;
+
+ cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
+ cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+ on_each_cpu_cond_mask(tmigr_should_isolate_cpu, tmigr_cpu_isolate, NULL,
+ 1, cpumask);
+
+ cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
+ cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
+ on_each_cpu_mask(cpumask, tmigr_cpu_unisolate, NULL, 1);
+
+ free_cpumask_var(cpumask);
+ return 0;
+}
+
+static int __init tmigr_init_isolation(void)
+{
+ cpumask_var_t cpumask;
+
+ tmigr_exclude_isolated = true;
+ if (!housekeeping_enabled(HK_TYPE_DOMAIN))
+ return 0;
+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+ return -ENOMEM;
+ cpumask_andnot(cpumask, tmigr_available_cpumask,
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+ on_each_cpu_cond_mask(tmigr_should_isolate_cpu, tmigr_cpu_isolate, NULL,
+ 1, cpumask);
+
+ free_cpumask_var(cpumask);
+ return 0;
+}
+
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
int node)
{
@@ -1874,3 +1963,4 @@ static int __init tmigr_init(void)
return ret;
}
early_initcall(tmigr_init);
+late_initcall(tmigr_init_isolation);
--
2.50.1
* Re: [PATCH v8 2/7] timers: Add the available mask in timer migration
2025-07-14 13:30 ` [PATCH v8 2/7] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-07-24 10:20 ` Frederic Weisbecker
0 siblings, 0 replies; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-24 10:20 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Mon, Jul 14, 2025 at 03:30:53PM +0200, Gabriele Monaco wrote:
> Keep track of the CPUs available for timer migration in a cpumask. This
> prepares the ground to generalise the concept of unavailable CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v8 3/7] timers: Use scoped_guard when setting/clearing the tmigr available flag
2025-07-14 13:30 ` [PATCH v8 3/7] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
@ 2025-07-24 10:29 ` Frederic Weisbecker
0 siblings, 0 replies; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-24 10:29 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Mon, Jul 14, 2025 at 03:30:54PM +0200, Gabriele Monaco wrote:
> Cleanup tmigr_clear_cpu_available() and tmigr_set_cpu_available() to
> prepare for easier checks on the available flag.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v8 4/7] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
2025-07-14 13:30 ` [PATCH v8 4/7] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-07-24 10:33 ` Frederic Weisbecker
0 siblings, 0 replies; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-24 10:33 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Mon, Jul 14, 2025 at 03:30:55PM +0200, Gabriele Monaco wrote:
> update_unbound_workqueue_cpumask() updates unbound workqueues settings
> when there's a change in isolated CPUs, but it can be used for other
> subsystems requiring updated when isolated CPUs change.
>
> Generalise the name to update_exclusion_cpumasks() to prepare for other
> functions unrelated to workqueues to be called in that spot.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v8 6/7] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-07-14 13:30 ` [PATCH v8 6/7] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-07-24 13:01 ` Frederic Weisbecker
0 siblings, 0 replies; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-24 13:01 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Mon, Jul 14, 2025 at 03:30:57PM +0200, Gabriele Monaco wrote:
> Currently the user can set up isolated cpus via cpuset and nohz_full in
> such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
> domain isolated nor nohz full). This can be a problem for other
> subsystems (e.g. the timer wheel imgration).
>
> Prevent this configuration by blocking any assignation that would cause
> the union of domain isolated cpus and nohz_full to covers all CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration
2025-07-14 13:30 ` [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-07-24 23:05 ` Frederic Weisbecker
2025-07-25 6:42 ` Gabriele Monaco
0 siblings, 1 reply; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-24 23:05 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Mon, Jul 14, 2025 at 03:30:58PM +0200, Gabriele Monaco wrote:
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index 878fd3af40ecb..c07cc9a2b209d 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -10,6 +10,7 @@
> #include <linux/spinlock.h>
> #include <linux/timerqueue.h>
> #include <trace/events/ipi.h>
> +#include <linux/sched/isolation.h>
>
> #include "timer_migration.h"
> #include "tick-internal.h"
> @@ -428,6 +429,9 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
> */
> static cpumask_var_t tmigr_available_cpumask;
>
> +/* Enabled during late initcall */
> +static bool tmigr_exclude_isolated __read_mostly;
This variable is still annoying.
> +
> #define TMIGR_NONE 0xFF
> #define BIT_CNT 8
[...]
> +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + cpumask_var_t cpumask;
> +
> + lockdep_assert_cpus_held();
> +
> + if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
> + return -ENOMEM;
> +
> + cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
> + cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> + on_each_cpu_cond_mask(tmigr_should_isolate_cpu, tmigr_cpu_isolate, NULL,
> + 1, cpumask);
> +
> + cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
> + cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
> + on_each_cpu_mask(cpumask, tmigr_cpu_unisolate, NULL, 1);
> +
> + free_cpumask_var(cpumask);
> + return 0;
> +}
> +
> +static int __init tmigr_init_isolation(void)
> +{
> + cpumask_var_t cpumask;
> +
> + tmigr_exclude_isolated = true;
> + if (!housekeeping_enabled(HK_TYPE_DOMAIN))
> + return 0;
> + if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
> + return -ENOMEM;
> + cpumask_andnot(cpumask, tmigr_available_cpumask,
> + housekeeping_cpumask(HK_TYPE_DOMAIN));
> + cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> + on_each_cpu_cond_mask(tmigr_should_isolate_cpu, tmigr_cpu_isolate, NULL,
> + 1, cpumask);
And this is basically repeating the same logic as before but in reverse.
Here is a proposal: register the online/offline callbacks later, on
late_initcall(). This solves two problems:
1) The online/offline callbacks are called for the first time in the right
place. You don't need that tmigr_exclude_isolated anymore.
2) You don't need to make the on_each_cpu_cond_mask() call anymore in
tmigr_init_isolation(). In fact you don't need that function. The
online/offline callbacks already take care of everything.
Here is a patch you can use (only built tested):
commit ad21e35e05865e2d37a60bf5d77b0d6fa22a54ee
Author: Frederic Weisbecker <frederic@kernel.org>
Date: Fri Jul 25 00:06:20 2025 +0200
timers/migration: Postpone online/offline callbacks registration to late initcall
During the early boot process, the default clocksource used for
timekeeping is the jiffies. Better clocksources can only be selected
once clocksource_done_booting() is called as an fs initcall.
NOHZ can only be enabled after that stage, making global timer migration
irrelevant up to that point.
Therefore, don't bother with trashing the cache within that tree from
the SMP bootup until NOHZ even matters.
Make the CPUs available to the tree on late initcall, after the right
clocksource had a chance to be selected. This will also simplify the
handling of domain isolated CPUs on further patches.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2f6330831f08..f730107d948d 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1484,6 +1484,17 @@ static int tmigr_cpu_online(unsigned int cpu)
return 0;
}
+/*
+ * NOHZ can only be enabled after clocksource_done_booting(). Don't
+ * bother trashing the cache in the tree before.
+ */
+static int __init tmigr_late_init(void)
+{
+ return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
+ tmigr_cpu_online, tmigr_cpu_offline);
+}
+late_initcall(tmigr_late_init);
+
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
int node)
{
@@ -1846,18 +1857,9 @@ static int __init tmigr_init(void)
ret = cpuhp_setup_state(CPUHP_TMIGR_PREPARE, "tmigr:prepare",
tmigr_cpu_prepare, NULL);
- if (ret)
- goto err;
-
- ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_cpu_online, tmigr_cpu_offline);
- if (ret)
- goto err;
-
- return 0;
-
err:
- pr_err("Timer migration setup failed\n");
+ if (ret)
+ pr_err("Timer migration setup failed\n");
return ret;
}
early_initcall(tmigr_init);
* Re: [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration
2025-07-24 23:05 ` Frederic Weisbecker
@ 2025-07-25 6:42 ` Gabriele Monaco
2025-07-25 10:55 ` Frederic Weisbecker
0 siblings, 1 reply; 15+ messages in thread
From: Gabriele Monaco @ 2025-07-25 6:42 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Fri, 2025-07-25 at 01:05 +0200, Frederic Weisbecker wrote:
>
> And this is basically repeating the same logic as before but in
> reverse.
>
> Here is a proposal: register the online/offline callbacks later, on
> late_initcall(). This solves two problems:
>
> 1) The online/offline callbacks are called for the first time in the
> right
> place. You don't need that tmigr_exclude_isolated anymore.
>
> 2) You don't need to make the on_each_cpu_cond_mask() call anymore in
> tmigr_init_isolation(). In fact you don't need that function. The
> online/offline callbacks already take care of everything.
>
Yeah, that's much neater, thanks!
I'm going to try it and update the patch.
> Here is a patch you can use (only built tested):
>
> commit ad21e35e05865e2d37a60bf5d77b0d6fa22a54ee
> Author: Frederic Weisbecker <frederic@kernel.org>
> Date: Fri Jul 25 00:06:20 2025 +0200
>
> timers/migration: Postpone online/offline callbacks registration
> to late initcall
> During the early boot process, the default clocksource used for
> timekeeping is the jiffies. Better clocksources can only be
> selected once clocksource_done_booting() is called as an fs initcall.
>
> NOHZ can only be enabled after that stage, making global timer
> migration irrelevant up to that point.
>
> Therefore, don't bother with trashing the cache within that tree
> from the SMP bootup until NOHZ even matters.
>
> Make the CPUs available to the tree on late initcall, after the
> right clocksource had a chance to be selected. This will also
> simplify the handling of domain isolated CPUs on further patches.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>
I assume it's cleaner if I squash it in 7/7 and add a
Co-developed-by: Frederic Weisbecker <frederic@kernel.org>
and/or
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Do you agree?
Thanks for the review and help,
Gabriele
> diff --git a/kernel/time/timer_migration.c
> b/kernel/time/timer_migration.c
> index 2f6330831f08..f730107d948d 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -1484,6 +1484,17 @@ static int tmigr_cpu_online(unsigned int cpu)
> return 0;
> }
>
> +/*
> + * NOHZ can only be enabled after clocksource_done_booting(). Don't
> + * bother trashing the cache in the tree before.
> + */
> +static int __init tmigr_late_init(void)
> +{
> + return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE,
> "tmigr:online",
> + tmigr_cpu_online,
> tmigr_cpu_offline);
> +}
> +late_initcall(tmigr_late_init);
> +
> static void tmigr_init_group(struct tmigr_group *group, unsigned int
> lvl,
> int node)
> {
> @@ -1846,18 +1857,9 @@ static int __init tmigr_init(void)
>
> ret = cpuhp_setup_state(CPUHP_TMIGR_PREPARE,
> "tmigr:prepare",
> tmigr_cpu_prepare, NULL);
> - if (ret)
> - goto err;
> -
> - ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE,
> "tmigr:online",
> - tmigr_cpu_online,
> tmigr_cpu_offline);
> - if (ret)
> - goto err;
> -
> - return 0;
> -
> err:
> - pr_err("Timer migration setup failed\n");
> + if (ret)
> + pr_err("Timer migration setup failed\n");
> return ret;
> }
> early_initcall(tmigr_init);
* Re: [PATCH v8 7/7] timers: Exclude isolated cpus from timer migration
2025-07-25 6:42 ` Gabriele Monaco
@ 2025-07-25 10:55 ` Frederic Weisbecker
0 siblings, 0 replies; 15+ messages in thread
From: Frederic Weisbecker @ 2025-07-25 10:55 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long
On Fri, Jul 25, 2025 at 08:42:19AM +0200, Gabriele Monaco wrote:
> On Fri, 2025-07-25 at 01:05 +0200, Frederic Weisbecker wrote:
> I assume it's cleaner if I squash it in 7/7 and add a
> Co-developed-by: Frederic Weisbecker <frederic@kernel.org>
> and/or
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>
> Do you agree?
I would prefer to keep the patch standalone because it's already a
logical change that has its own motivation. It's also invasive and
could potentially cause regression (or improvement) so we want to be
able to bisect to that.
Thanks!
--
Frederic Weisbecker
SUSE Labs