* [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration
@ 2025-10-20 11:27 Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
` (9 more replies)
0 siblings, 10 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is, however, undesired
when CPU-intensive workloads run on isolated cores, as the algorithm
would move timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
unless it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (which must stay available to handle global timers)
CPUs are added to the hierarchy during late boot, excluding isolated
ones; the hierarchy is also adapted when the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers itself; skipping only the pulling of remote timers
would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPUs with the lowest count in each timer migration hierarchy
(here 1 and 65) appear always active and continuously pull global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
The same behaviour was observed with rtla-osnoise-top on a machine with
as few as 20 cores / 40 threads, with isolcpus set to 1-9,11-39.
The first 5 patches are preparatory work to change the concept of
online/offline to available/unavailable, keep track of those in a
separate cpumask, clean up the setting/clearing functions and change a
function name in cpuset code.
Patches 6 and 7 adapt isolation and cpuset to prevent domain isolated
and nohz_full CPUs from covering all CPUs, leaving no housekeeping one.
Such a configuration would be a problem for the changes introduced in
this series because no CPU would remain to handle global timers.
Patch 9 extends the unavailable status to domain isolated CPUs, which
is the main contribution of the series.
This series is equivalent to v13 but rebased on v6.18-rc2.
Changes since v12:
* Pick and adapt patch by Yury Norov to initialise cpumasks
* Reorganise accesses to tmigr_available_cpumask to avoid races
Changes since v11:
* Rename isolcpus_nohz_conflict() to isolated_cpus_can_update()
* Move tick_nohz_cpu_hotpluggable() check to tmigr_is_isolated()
* Use workqueues in tmigr_isolated_exclude_cpumask() to avoid sleeping
while atomic
* Add cpumask initialiser to safely use cpumask cleanup helpers
Changes since v10:
* Simplify housekeeping conflict condition
* Reword commit (Frederic Weisbecker)
Changes since v9:
* Fix total housekeeping enforcement to focus only on nohz and domain
* Avoid out of bound access in the housekeeping array if no flag is set
* Consider isolated_cpus while checking for nohz conflicts in cpuset
* Improve comment about why nohz CPUs are not excluded by tmigr
Changes since v8 [1]:
* Postpone hotplug registration to late initcall (Frederic Weisbecker)
* Move main activation logic in _tmigr_set_cpu_available() and call it
after checking for isolation on hotplug and cpusets changes
* Call _tmigr_set_cpu_available directly to force enable tick CPU if
required (this saves checking for that on every hotplug change).
Changes since v7:
* Move tmigr_available_cpumask out of tmc lock and specify conditions.
* Initialise tmigr isolation despite the state of isolcpus.
* Move tick CPU check to condition to run SMP call.
* Fix descriptions.
Changes since v6 [2]:
* Prevent isolation checks from running during early boot
* Prevent double (de)activation while setting cpus (un)available
* Use synchronous smp calls from the isolation path
* General cleanup
Changes since v5:
* Remove fallback if no housekeeping is left by isolcpus and nohz_full
* Adjust condition not to activate CPUs in the migration hierarchy
* Always force the nohz tick CPU active in the hierarchy
Changes since v4 [3]:
* use on_each_cpu_mask() with changes on isolated CPUs to avoid races
* keep nohz_full CPUs included in the timer migration hierarchy
* prevent domain isolated and nohz_full to cover all CPUs
Changes since v3:
* add parameter to function documentation
* split into multiple straightforward patches
Changes since v2:
* improve comments about handling CPUs isolated at boot
* minor cleanup
Changes since v1 [4]:
* split into smaller patches
* use available mask instead of unavailable
* simplification and cleanup
[1] - https://lore.kernel.org/lkml/20250714133050.193108-9-gmonaco@redhat.com
[2] - https://lore.kernel.org/lkml/20250530142031.215594-1-gmonaco@redhat.com
[3] - https://lore.kernel.org/lkml/20250506091534.42117-7-gmonaco@redhat.com
[4] - https://lore.kernel.org/lkml/20250410065446.57304-2-gmonaco@redhat.com
Frederic Weisbecker (1):
timers/migration: Postpone online/offline callbacks registration to
late initcall
Gabriele Monaco (7):
timers: Rename tmigr 'online' bit to 'available'
timers: Add the available mask in timer migration
timers: Use scoped_guard when setting/clearing the tmigr available
flag
cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to
update_exclusion_cpumasks()
sched/isolation: Force housekeeping if isolcpus and nohz_full don't
leave any
cgroup/cpuset: Fail if isolated and nohz_full don't leave any
housekeeping
timers: Exclude isolated cpus from timer migration
Yury Norov (1):
cpumask: Add initialiser to use cleanup helpers
include/linux/cpumask.h | 2 +
include/linux/timer.h | 9 ++
include/trace/events/timer_migration.h | 4 +-
kernel/cgroup/cpuset.c | 78 +++++++++-
kernel/sched/isolation.c | 23 +++
kernel/time/timer_migration.c | 205 ++++++++++++++++++++-----
kernel/time/timer_migration.h | 2 +-
7 files changed, 278 insertions(+), 45 deletions(-)
base-commit: 211ddde0823f1442e4ad052a2f30f050145ccada
--
2.51.0
^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-30 14:07   ` Frederic Weisbecker
  2025-10-20 11:27 ` [RESEND PATCH v13 2/9] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
  ` (8 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

From: Frederic Weisbecker <frederic@kernel.org>

During the early boot process, the default clocksource used for
timekeeping is the jiffies. Better clocksources can only be selected
once clocksource_done_booting() is called as an fs initcall.

NOHZ can only be enabled after that stage, making global timer migration
irrelevant up to that point.

The tree remains inactive before NOHZ is enabled anyway. Therefore it
makes sense to enable each CPU in the tree only once that is set up.

Make the CPUs available to the tree on late initcall, after the right
clocksource had a chance to be selected. This will also simplify the
handling of domain isolated CPUs in further patches.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/time/timer_migration.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index c0c54dc5314c..891891794b92 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1481,6 +1481,16 @@ static int tmigr_cpu_online(unsigned int cpu)
 	return 0;
 }
 
+/*
+ * NOHZ can only be enabled after clocksource_done_booting(). Don't
+ * bother trashing the cache in the tree before.
+ */
+static int __init tmigr_late_init(void)
+{
+	return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
+				 tmigr_cpu_online, tmigr_cpu_offline);
+}
+
 static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
 			     int node)
 {
@@ -1843,18 +1853,10 @@ static int __init tmigr_init(void)
 
 	ret = cpuhp_setup_state(CPUHP_TMIGR_PREPARE, "tmigr:prepare",
 				tmigr_cpu_prepare, NULL);
-	if (ret)
-		goto err;
-
-	ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
-				tmigr_cpu_online, tmigr_cpu_offline);
-	if (ret)
-		goto err;
-
-	return 0;
-err:
-	pr_err("Timer migration setup failed\n");
+	if (ret)
+		pr_err("Timer migration setup failed\n");
 	return ret;
 }
 early_initcall(tmigr_init);
+late_initcall(tmigr_late_init);
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall
  2025-10-20 11:27 ` [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
@ 2025-10-30 14:07   ` Frederic Weisbecker
  0 siblings, 0 replies; 21+ messages in thread
From: Frederic Weisbecker @ 2025-10-30 14:07 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner, Waiman Long

On Mon, Oct 20, 2025 at 01:27:54PM +0200, Gabriele Monaco wrote:
> From: Frederic Weisbecker <frederic@kernel.org>
> 
> During the early boot process, the default clocksource used for
> timekeeping is the jiffies. Better clocksources can only be selected
> once clocksource_done_booting() is called as an fs initcall.
> 
> NOHZ can only be enabled after that stage, making global timer migration
> irrelevant up to that point.
> 
> The tree remains inactive before NOHZ is enabled anyway. Therefore it
> makes sense to enable each CPU in the tree only once that is set up.
> 
> Make the CPUs available to the tree on late initcall, after the right
> clocksource had a chance to be selected. This will also simplify the
> handling of domain isolated CPUs in further patches.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
>  kernel/time/timer_migration.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index c0c54dc5314c..891891794b92 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -1481,6 +1481,16 @@ static int tmigr_cpu_online(unsigned int cpu)
>  	return 0;
>  }
>  
> +/*
> + * NOHZ can only be enabled after clocksource_done_booting(). Don't
> + * bother trashing the cache in the tree before.
> + */
> +static int __init tmigr_late_init(void)
> +{
> +	return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
> +				 tmigr_cpu_online, tmigr_cpu_offline);
> +}

I just worked on a fix in the timer migration code and this made me
realize my suggestion was plain wrong.

The CPU doing the prepare work for the target must be online because we
must guarantee that the old top level is active since we unconditionally
propagate its active state to the new top. And if we do that late call
to tmigr_cpu_online (after most CPUs have booted) then we break that
guarantee.

So I fear we can't do that and we must go back to your previous idea
which consisted in sending IPIs to apply isolation on late stage.

Sorry about the late brain :-s

-- 
Frederic Weisbecker
SUSE Labs

^ permalink raw reply	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 2/9] timers: Rename tmigr 'online' bit to 'available'
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 3/9] timers: Add the available mask in timer migration Gabriele Monaco
  ` (7 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

The timer migration hierarchy excludes offline CPUs via the
tmigr_is_not_available function, which is essentially checking the
online bit for the CPU.

Rename the online bit to available, together with all references in
function names and tracepoints, to generalise the concept of available
CPUs.

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/trace/events/timer_migration.h |  4 ++--
 kernel/time/timer_migration.c          | 22 +++++++++++-----------
 kernel/time/timer_migration.h          |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 47db5eaf2f9a..61171b13c687 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -173,14 +173,14 @@ DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_active,
 	TP_ARGS(tmc)
 );
 
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_online,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_available,
 
 	TP_PROTO(struct tmigr_cpu *tmc),
 
 	TP_ARGS(tmc)
 );
 
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_offline,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_unavailable,
 
 	TP_PROTO(struct tmigr_cpu *tmc),
 
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 891891794b92..55b186fd146c 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -427,7 +427,7 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
 
 static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
 {
-	return !(tmc->tmgroup && tmc->online);
+	return !(tmc->tmgroup && tmc->available);
 }
 
 /*
@@ -926,7 +926,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
 	 * updated the event takes care when hierarchy is completely
 	 * idle. Otherwise the migrator does it as the event is enqueued.
 	 */
-	if (!tmc->online || tmc->remote || tmc->cpuevt.ignore ||
+	if (!tmc->available || tmc->remote || tmc->cpuevt.ignore ||
 	    now < tmc->cpuevt.nextevt.expires) {
 		raw_spin_unlock_irq(&tmc->lock);
 		return;
@@ -973,7 +973,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
 	 * (See also section "Required event and timerqueue update after a
 	 * remote expiry" in the documentation at the top)
 	 */
-	if (!tmc->online || !tmc->idle) {
+	if (!tmc->available || !tmc->idle) {
 		timer_unlock_remote_bases(cpu);
 		goto unlock;
 	}
@@ -1432,19 +1432,19 @@ static long tmigr_trigger_active(void *unused)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
 
-	WARN_ON_ONCE(!tmc->online || tmc->idle);
+	WARN_ON_ONCE(!tmc->available || tmc->idle);
 
 	return 0;
 }
 
-static int tmigr_cpu_offline(unsigned int cpu)
+static int tmigr_clear_cpu_available(unsigned int cpu)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
 	int migrator;
 	u64 firstexp;
 
 	raw_spin_lock_irq(&tmc->lock);
-	tmc->online = false;
+	tmc->available = false;
 	WRITE_ONCE(tmc->wakeup, KTIME_MAX);
 
 	/*
@@ -1452,7 +1452,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
 	 * offline; Therefore nextevt value is set to KTIME_MAX
 	 */
 	firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
-	trace_tmigr_cpu_offline(tmc);
+	trace_tmigr_cpu_unavailable(tmc);
 	raw_spin_unlock_irq(&tmc->lock);
 
 	if (firstexp != KTIME_MAX) {
@@ -1463,7 +1463,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
 	return 0;
 }
 
-static int tmigr_cpu_online(unsigned int cpu)
+static int tmigr_set_cpu_available(unsigned int cpu)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
 
@@ -1472,11 +1472,11 @@ static int tmigr_cpu_online(unsigned int cpu)
 		return -EINVAL;
 
 	raw_spin_lock_irq(&tmc->lock);
-	trace_tmigr_cpu_online(tmc);
+	trace_tmigr_cpu_available(tmc);
 	tmc->idle = timer_base_is_idle();
 	if (!tmc->idle)
 		__tmigr_cpu_activate(tmc);
-	tmc->online = true;
+	tmc->available = true;
 	raw_spin_unlock_irq(&tmc->lock);
 	return 0;
 }
@@ -1488,7 +1488,7 @@ static int tmigr_cpu_online(unsigned int cpu)
 static int __init tmigr_late_init(void)
 {
 	return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
-				 tmigr_cpu_online, tmigr_cpu_offline);
+				 tmigr_set_cpu_available, tmigr_clear_cpu_available);
 }
 
 static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index ae19f70f8170..70879cde6fdd 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -97,7 +97,7 @@ struct tmigr_group {
  */
 struct tmigr_cpu {
 	raw_spinlock_t		lock;
-	bool			online;
+	bool			available;
 	bool			idle;
 	bool			remote;
 	struct tmigr_group	*tmgroup;
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 3/9] timers: Add the available mask in timer migration
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 2/9] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 4/9] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
  ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

Keep track of the CPUs available for timer migration in a cpumask.
This prepares the ground to generalise the concept of unavailable CPUs.

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/time/timer_migration.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 55b186fd146c..865071ab5062 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -422,6 +422,12 @@ static unsigned int tmigr_crossnode_level __read_mostly;
 
 static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
 
+/*
+ * CPUs available for timer migration.
+ * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ */
+static cpumask_var_t tmigr_available_cpumask;
+
 #define TMIGR_NONE	0xFF
 #define BIT_CNT		8
 
@@ -1443,6 +1449,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
 	int migrator;
 	u64 firstexp;
 
+	cpumask_clear_cpu(cpu, tmigr_available_cpumask);
 	raw_spin_lock_irq(&tmc->lock);
 	tmc->available = false;
 	WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1456,7 +1463,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
 	raw_spin_unlock_irq(&tmc->lock);
 
 	if (firstexp != KTIME_MAX) {
-		migrator = cpumask_any_but(cpu_online_mask, cpu);
+		migrator = cpumask_any(tmigr_available_cpumask);
 		work_on_cpu(migrator, tmigr_trigger_active, NULL);
 	}
 
@@ -1471,6 +1478,7 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 	if (WARN_ON_ONCE(!tmc->tmgroup))
 		return -EINVAL;
 
+	cpumask_set_cpu(cpu, tmigr_available_cpumask);
 	raw_spin_lock_irq(&tmc->lock);
 	trace_tmigr_cpu_available(tmc);
 	tmc->idle = timer_base_is_idle();
@@ -1808,6 +1816,11 @@ static int __init tmigr_init(void)
 	if (ncpus == 1)
 		return 0;
 
+	if (!zalloc_cpumask_var(&tmigr_available_cpumask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
 	/*
 	 * Calculate the required hierarchy levels. Unfortunately there is no
 	 * reliable information available, unless all possible CPUs have been
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 4/9] timers: Use scoped_guard when setting/clearing the tmigr available flag
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  ` (2 preceding siblings ...)
  2025-10-20 11:27 ` [RESEND PATCH v13 3/9] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 5/9] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
  ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

Cleanup tmigr_clear_cpu_available() and tmigr_set_cpu_available() to
prepare for easier checks on the available flag.

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/time/timer_migration.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 865071ab5062..0a3a26e766d0 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1450,17 +1450,17 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
 	u64 firstexp;
 
 	cpumask_clear_cpu(cpu, tmigr_available_cpumask);
-	raw_spin_lock_irq(&tmc->lock);
-	tmc->available = false;
-	WRITE_ONCE(tmc->wakeup, KTIME_MAX);
+	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		tmc->available = false;
+		WRITE_ONCE(tmc->wakeup, KTIME_MAX);
 
-	/*
-	 * CPU has to handle the local events on his own, when on the way to
-	 * offline; Therefore nextevt value is set to KTIME_MAX
-	 */
-	firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
-	trace_tmigr_cpu_unavailable(tmc);
-	raw_spin_unlock_irq(&tmc->lock);
+		/*
+		 * CPU has to handle the local events on his own, when on the way to
+		 * offline; Therefore nextevt value is set to KTIME_MAX
+		 */
+		firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
+		trace_tmigr_cpu_unavailable(tmc);
+	}
 
 	if (firstexp != KTIME_MAX) {
 		migrator = cpumask_any(tmigr_available_cpumask);
@@ -1479,13 +1479,13 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 		return -EINVAL;
 
 	cpumask_set_cpu(cpu, tmigr_available_cpumask);
-	raw_spin_lock_irq(&tmc->lock);
-	trace_tmigr_cpu_available(tmc);
-	tmc->idle = timer_base_is_idle();
-	if (!tmc->idle)
-		__tmigr_cpu_activate(tmc);
-	tmc->available = true;
-	raw_spin_unlock_irq(&tmc->lock);
+	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		trace_tmigr_cpu_available(tmc);
+		tmc->idle = timer_base_is_idle();
+		if (!tmc->idle)
+			__tmigr_cpu_activate(tmc);
+		tmc->available = true;
+	}
 	return 0;
 }
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 5/9] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  ` (3 preceding siblings ...)
  2025-10-20 11:27 ` [RESEND PATCH v13 4/9] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-20 11:27 ` [RESEND PATCH v13 6/9] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
  ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

update_unbound_workqueue_cpumask() updates unbound workqueue settings
when there's a change in isolated CPUs, but it can be used by other
subsystems requiring updates when isolated CPUs change.

Generalise the name to update_exclusion_cpumasks() to prepare for other
functions unrelated to workqueues to be called in that spot.

Acked-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/cgroup/cpuset.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 52468d2c178a..217480e01be6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1391,7 +1391,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
 	return isolcpus_updated;
 }
 
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_exclusion_cpumasks(bool isolcpus_updated)
 {
 	int ret;
 
@@ -1555,7 +1555,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
 	list_add(&cs->remote_sibling, &remote_children);
 	cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	cpuset_force_rebuild();
 	cs->prs_err = 0;
 
@@ -1596,7 +1596,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
 	compute_excpus(cs, cs->effective_xcpus);
 	reset_partition_data(cs);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	cpuset_force_rebuild();
 
 	/*
@@ -1665,7 +1665,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
 	if (xcpus)
 		cpumask_copy(cs->exclusive_cpus, xcpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	if (adding || deleting)
 		cpuset_force_rebuild();
 
@@ -2023,7 +2023,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
 		WARN_ON_ONCE(parent->nr_subparts < 0);
 	}
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 
 	if ((old_prs != new_prs) && (cmd == partcmd_update))
 		update_partition_exclusive_flag(cs, new_prs);
@@ -3043,7 +3043,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	else if (isolcpus_updated)
 		isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 
 	/* Force update if switching back to member & update effective_xcpus */
 	update_cpumasks_hier(cs, &tmpmask, !new_prs);
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 6/9] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  ` (4 preceding siblings ...)
  2025-10-20 11:27 ` [RESEND PATCH v13 5/9] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-10-20 11:27 ` Gabriele Monaco
  2025-10-20 11:28 ` [RESEND PATCH v13 7/9] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
  ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:27 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

Currently the user can set up isolcpus and nohz_full in such a way that
leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
nor nohz full). This can be a problem for other subsystems (e.g. the
timer wheel migration).

Prevent this configuration by invalidating the last setting in case the
union of isolcpus (domain) and nohz_full covers all CPUs.

Reviewed-by: Waiman Long <longman@redhat.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/sched/isolation.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index a4cf17b1fab0..3ad0d6df6a0a 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -167,6 +167,29 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
 		}
 	}
 
+	/*
+	 * Check the combination of nohz_full and isolcpus=domain,
+	 * necessary to avoid problems with the timer migration
+	 * hierarchy. managed_irq is ignored by this check since it
+	 * isn't considered in the timer migration logic.
+	 */
+	iter_flags = housekeeping.flags & (HK_FLAG_KERNEL_NOISE | HK_FLAG_DOMAIN);
+	type = find_first_bit(&iter_flags, HK_TYPE_MAX);
+	/*
+	 * Pass the check if none of these flags were previously set or
+	 * are not in the current selection.
+	 */
+	iter_flags = flags & (HK_FLAG_KERNEL_NOISE | HK_FLAG_DOMAIN);
+	first_cpu = (type == HK_TYPE_MAX || !iter_flags) ? 0 :
+		    cpumask_first_and_and(cpu_present_mask,
+					  housekeeping_staging, housekeeping.cpumasks[type]);
+	if (first_cpu >= min(nr_cpu_ids, setup_max_cpus)) {
+		pr_warn("Housekeeping: must include one present CPU "
+			"neither in nohz_full= nor in isolcpus=domain, "
+			"ignoring setting %s\n", str);
+		goto free_housekeeping_staging;
+	}
+
 	iter_flags = flags & ~housekeeping.flags;
 
 	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [RESEND PATCH v13 7/9] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
  2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  ` (5 preceding siblings ...)
  2025-10-20 11:27 ` [RESEND PATCH v13 6/9] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-10-20 11:28 ` Gabriele Monaco
  2025-10-20 11:28 ` [RESEND PATCH v13 8/9] cpumask: Add initialiser to use cleanup helpers Gabriele Monaco
  ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Gabriele Monaco @ 2025-10-20 11:28 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco

Currently the user can set up isolated cpus via cpuset and nohz_full in
such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
domain isolated nor nohz full). This can be a problem for other
subsystems (e.g. the timer wheel migration).

Prevent this configuration by blocking any assignment that would cause
the union of domain isolated CPUs and nohz_full to cover all CPUs.

Acked-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/cgroup/cpuset.c | 63 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 217480e01be6..597a9b9c18c6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1327,6 +1327,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
 	cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
 }
 
+/*
+ * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
+ * @prs: new or old partition_root_state
+ * @parent: parent cpuset
+ * Return: true if isolated_cpus needs modification, false otherwise
+ */
+static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
+{
+	if (!parent)
+		parent = &top_cpuset;
+	return prs != parent->partition_root_state;
+}
+
 /*
  * partition_xcpus_add - Add new exclusive CPUs to partition
  * @new_prs: new partition_root_state
@@ -1391,6 +1404,42 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
 	return isolcpus_updated;
 }
 
+/*
+ * isolated_cpus_can_update - check for isolated & nohz_full conflicts
+ * @add_cpus: cpu mask for cpus that are going to be isolated
+ * @del_cpus: cpu mask for cpus that are no longer isolated, can be NULL
+ * Return: false if there is conflict, true otherwise
+ *
+ * If nohz_full is enabled and we have isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ */
+static bool isolated_cpus_can_update(struct cpumask *add_cpus,
+				     struct cpumask *del_cpus)
+{
+	cpumask_var_t full_hk_cpus;
+	int res = true;
+
+	if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
+		return true;
+
+	if (del_cpus && cpumask_weight_and(del_cpus,
+					   housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)))
+		return true;
+
+	if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
+		return false;
+
+	cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
+		    housekeeping_cpumask(HK_TYPE_DOMAIN));
+	cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
+	cpumask_and(full_hk_cpus, full_hk_cpus, cpu_active_mask);
+	if (!cpumask_weight_andnot(full_hk_cpus, add_cpus))
+		res = false;
+
+	free_cpumask_var(full_hk_cpus);
+	return res;
+}
+
 static void update_exclusion_cpumasks(bool isolcpus_updated)
 {
 	int ret;
@@ -1549,6 +1598,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
 	if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
 	    cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
 		return PERR_INVCPUS;
+	if (isolated_cpus_should_update(new_prs, NULL) &&
+	    !isolated_cpus_can_update(tmp->new_cpus, NULL))
+		return PERR_HKEEPING;
 
 	spin_lock_irq(&callback_lock);
 	isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
@@ -1648,6 +1700,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
 		else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
 			 cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
 			cs->prs_err = PERR_NOCPUS;
+		else if (isolated_cpus_should_update(prs, NULL) &&
+			 !isolated_cpus_can_update(tmp->addmask, tmp->delmask))
+			cs->prs_err = PERR_HKEEPING;
 		if (cs->prs_err)
 			goto invalidate;
 	}
@@ -1994,6 +2049,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
 			return err;
 	}
 
+	if (deleting && isolated_cpus_should_update(new_prs, parent) &&
+	    !isolated_cpus_can_update(tmp->delmask, tmp->addmask)) {
+		cs->prs_err = PERR_HKEEPING;
+		return PERR_HKEEPING;
+	}
+
/* * Change the parent's effective_cpus & effective_xcpus (top cpuset * only). @@ -3009,6 +3070,8 @@ static int update_prstate(struct cpuset *cs, int new_prs) * Need to update isolated_cpus. */ isolcpus_updated = true; + if (!isolated_cpus_can_update(cs->effective_xcpus, NULL)) + err = PERR_HKEEPING; } else { /* * Switching back to member is always allowed even if it -- 2.51.0 ^ permalink raw reply related [flat|nested] 21+ messages in thread
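The check implemented by isolated_cpus_can_update() boils down to set arithmetic: the update is allowed only if at least one online, domain-housekeeping, non-nohz_full, non-isolated CPU remains. A minimal userspace model in Python (illustrative only; the function name, example CPU layout, and masks are invented for this sketch, and the housekeeping_enabled() early-out is omitted):

```python
# Illustrative model (not kernel code) of the isolated_cpus_can_update()
# check: reject an update that would leave no "full housekeeping" CPU,
# i.e. no CPU that is online, domain-housekeeping, KERNEL_NOISE
# housekeeping (not nohz_full) and not already isolated.

def can_update(add_cpus, del_cpus, *, online, hk_noise, hk_domain, isolated):
    """Return False if isolating add_cpus would leave no housekeeping CPU."""
    if del_cpus & hk_noise:
        # A housekeeping-capable CPU is being released from isolation,
        # so at least one housekeeping CPU will remain afterwards.
        return True
    # CPUs currently counting as full housekeeping:
    full_hk = (hk_noise & hk_domain & online) - isolated
    # Reject if add_cpus would swallow all of them.
    return bool(full_hk - add_cpus)

cpus = set(range(8))
hk_noise = cpus - {1, 2}        # e.g. nohz_full=1,2
hk_domain = cpus                # no isolcpus= at boot
isolated = {3, 4}               # already isolated via cpuset

# Isolating CPUs 5-7 still leaves CPU 0 -> allowed.
print(can_update({5, 6, 7}, set(), online=cpus, hk_noise=hk_noise,
                 hk_domain=hk_domain, isolated=isolated))   # True
# Isolating CPU 0 as well would leave no housekeeping CPU -> rejected.
print(can_update({0, 5, 6, 7}, set(), online=cpus, hk_noise=hk_noise,
                 hk_domain=hk_domain, isolated=isolated))   # False
```

The del_cpus shortcut mirrors the kernel's cpumask_weight_and() test: if any CPU leaving isolation can do housekeeping, the configuration cannot end up empty.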
* [RESEND PATCH v13 8/9] cpumask: Add initialiser to use cleanup helpers 2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco ` (6 preceding siblings ...) 2025-10-20 11:28 ` [RESEND PATCH v13 7/9] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco @ 2025-10-20 11:28 ` Gabriele Monaco 2025-10-20 11:28 ` [RESEND PATCH v13 9/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco 2025-10-30 2:56 ` [RESEND PATCH v13 0/9] " Waiman Long 9 siblings, 0 replies; 21+ messages in thread From: Gabriele Monaco @ 2025-10-20 11:28 UTC (permalink / raw) To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker, Thomas Gleixner, Waiman Long Cc: Yury Norov, Gabriele Monaco From: Yury Norov <yury.norov@gmail.com> Now we can simplify code that allocates cpumasks for local needs. Automatic variables have to be initialized at declaration, or at least before any path that can return, so that the compiler doesn't call the associated destructor function on random stack contents. Because cpumask_var_t, depending on the CPUMASK_OFFSTACK config, is either a pointer or an array, a macro is needed for initialization. So define a CPUMASK_VAR_NULL macro, which initialises the struct cpumask pointer to NULL when CPUMASK_OFFSTACK is enabled, and is effectively a no-op when CPUMASK_OFFSTACK is disabled (the initialisation is optimised out with -O2). 
Signed-off-by: Yury Norov <yury.norov@gmail.com> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com> --- include/linux/cpumask.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index ff8f41ab7ce6..68be522449ec 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -1005,6 +1005,7 @@ static __always_inline unsigned int cpumask_size(void) #define this_cpu_cpumask_var_ptr(x) this_cpu_read(x) #define __cpumask_var_read_mostly __read_mostly +#define CPUMASK_VAR_NULL NULL bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node); @@ -1051,6 +1052,7 @@ static __always_inline bool cpumask_available(cpumask_var_t mask) #define this_cpu_cpumask_var_ptr(x) this_cpu_ptr(x) #define __cpumask_var_read_mostly +#define CPUMASK_VAR_NULL {} static __always_inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags) { -- 2.51.0 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* [RESEND PATCH v13 9/9] timers: Exclude isolated cpus from timer migration 2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco ` (7 preceding siblings ...) 2025-10-20 11:28 ` [RESEND PATCH v13 8/9] cpumask: Add initialiser to use cleanup helpers Gabriele Monaco @ 2025-10-20 11:28 ` Gabriele Monaco 2025-10-30 2:56 ` [RESEND PATCH v13 0/9] " Waiman Long 9 siblings, 0 replies; 21+ messages in thread From: Gabriele Monaco @ 2025-10-20 11:28 UTC (permalink / raw) To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker, Thomas Gleixner, Waiman Long Cc: Gabriele Monaco, John B. Wyatt IV, John B. Wyatt IV The timer migration mechanism allows active CPUs to pull timers from idle ones to improve the overall idle time. This is however undesired when CPU intensive workloads run on isolated cores, as the algorithm would move the timers from housekeeping to isolated cores, negatively affecting the isolation. Exclude isolated cores from the timer migration algorithm, extend the concept of unavailable cores, currently used for offline ones, to isolated ones: * A core is unavailable if isolated or offline; * A core is available if non isolated and online; A core is considered unavailable as isolated if it belongs to: * the isolcpus (domain) list * an isolated cpuset Except if it is: * in the nohz_full list (already idle for the hierarchy) * the nohz timekeeper core (must be available to handle global timers) CPUs are added to the hierarchy during late boot, excluding isolated ones, the hierarchy is also adapted when the cpuset isolation changes. Due to how the timer migration algorithm works, any CPU part of the hierarchy can have their global timers pulled by remote CPUs and have to pull remote timers, only skipping pulling remote timers would break the logic. For this reason, prevent isolated CPUs from pulling remote global timers, but also the other way around: any global timer started on an isolated CPU will run there. 
This does not break the concept of isolation (global timers don't come from outside the CPU) and, if considered inappropriate, can usually be mitigated with other isolation techniques (e.g. IRQ pinning). This effect was noticed on a 128-core machine running oslat on the isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, and the CPU with the lowest count in a timer migration hierarchy (here 1 and 65) appears as always active and continuously pulls global timers from the housekeeping CPUs. This ends up moving driver work (e.g. delayed work) to isolated CPUs and causes latency spikes: before the change: # oslat -c 1-31,33-63,65-95,97-127 -D 62s ... Maximum: 1203 10 3 4 ... 5 (us) after the change: # oslat -c 1-31,33-63,65-95,97-127 -D 62s ... Maximum: 10 4 3 4 3 ... 5 (us) The same behaviour was observed on a machine with as few as 20 cores / 40 threads with isolcpus set to: 1-9,11-39 with rtla-osnoise-top. Tested-by: John B. Wyatt IV <jwyatt@redhat.com> Tested-by: John B. Wyatt IV <sageofredondo@gmail.com> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com> --- include/linux/timer.h | 9 +++ kernel/cgroup/cpuset.c | 3 + kernel/time/timer_migration.c | 128 ++++++++++++++++++++++++++++++++-- 3 files changed, 135 insertions(+), 5 deletions(-) diff --git a/include/linux/timer.h b/include/linux/timer.h index 0414d9e6b4fc..62e1cea71125 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu); #define timers_dead_cpu NULL #endif +#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON) +extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask); +#else +static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask) +{ + return 0; +} +#endif + #endif diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 597a9b9c18c6..ffc2f70f771f 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1451,6 +1451,9 @@ static void 
update_exclusion_cpumasks(bool isolcpus_updated) ret = workqueue_unbound_exclude_cpumask(isolated_cpus); WARN_ON_ONCE(ret < 0); + + ret = tmigr_isolated_exclude_cpumask(isolated_cpus); + WARN_ON_ONCE(ret < 0); } /** diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c index 0a3a26e766d0..9aa01f1e0ea4 100644 --- a/kernel/time/timer_migration.c +++ b/kernel/time/timer_migration.c @@ -10,6 +10,7 @@ #include <linux/spinlock.h> #include <linux/timerqueue.h> #include <trace/events/ipi.h> +#include <linux/sched/isolation.h> #include "timer_migration.h" #include "tick-internal.h" @@ -436,6 +437,29 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc) return !(tmc->tmgroup && tmc->available); } +/* + * Returns true if @cpu should be excluded from the hierarchy as isolated. + * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs + * are still part of the hierarchy but become idle (from a tick and timer + * migration perspective) when they stop their tick. This lets the timekeeping + * CPU handle their global timers. Marking also isolated CPUs as idle would be + * too costly, hence they are completely excluded from the hierarchy. + * This check is necessary, for instance, to prevent offline isolated CPUs from + * being incorrectly marked as available once getting back online. + * + * Additionally, the tick CPU can be isolated at boot, however + * we cannot mark it as unavailable to avoid having no global migrator + * for the nohz_full CPUs. This check is only necessary at boot time. + */ +static inline bool tmigr_is_isolated(int cpu) +{ + if (!tick_nohz_cpu_hotpluggable(cpu)) + return false; + return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) || + cpuset_cpu_is_isolated(cpu)) && + housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE); +} + /* * Returns true, when @childmask corresponds to the group migrator or when the * group is not active - so no migrator is set. 
@@ -1449,8 +1473,9 @@ static int tmigr_clear_cpu_available(unsigned int cpu) int migrator; u64 firstexp; - cpumask_clear_cpu(cpu, tmigr_available_cpumask); scoped_guard(raw_spinlock_irq, &tmc->lock) { + if (!tmc->available) + return 0; tmc->available = false; WRITE_ONCE(tmc->wakeup, KTIME_MAX); @@ -1463,11 +1488,11 @@ static int tmigr_clear_cpu_available(unsigned int cpu) } if (firstexp != KTIME_MAX) { - migrator = cpumask_any(tmigr_available_cpumask); + migrator = cpumask_any_but(tmigr_available_cpumask, cpu); work_on_cpu(migrator, tmigr_trigger_active, NULL); } - return 0; + return 1; } static int tmigr_set_cpu_available(unsigned int cpu) @@ -1478,14 +1503,107 @@ static int tmigr_set_cpu_available(unsigned int cpu) if (WARN_ON_ONCE(!tmc->tmgroup)) return -EINVAL; - cpumask_set_cpu(cpu, tmigr_available_cpumask); + if (tmigr_is_isolated(cpu)) + return 0; + scoped_guard(raw_spinlock_irq, &tmc->lock) { + if (tmc->available) + return 0; trace_tmigr_cpu_available(tmc); tmc->idle = timer_base_is_idle(); if (!tmc->idle) __tmigr_cpu_activate(tmc); tmc->available = true; } + return 1; +} + +static int tmigr_online_cpu(unsigned int cpu) +{ + if (tmigr_set_cpu_available(cpu) > 0) + cpumask_set_cpu(cpu, tmigr_available_cpumask); + return 0; +} + +static int tmigr_offline_cpu(unsigned int cpu) +{ + if (tmigr_clear_cpu_available(cpu) > 0) + cpumask_clear_cpu(cpu, tmigr_available_cpumask); + return 0; +} + +static void tmigr_cpu_isolate(struct work_struct *ignored) +{ + tmigr_clear_cpu_available(smp_processor_id()); +} + +static void tmigr_cpu_unisolate(struct work_struct *ignored) +{ + tmigr_set_cpu_available(smp_processor_id()); +} + +/** + * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy + * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy + * + * This function can be called from cpuset code to provide the new set of + * isolated CPUs that should be excluded from the hierarchy. 
+ * Online CPUs not present in exclude_cpumask but already excluded are brought + * back to the hierarchy. + * Functions to isolate/unisolate need to be called locally and can sleep. + */ +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask) +{ + struct work_struct __percpu *works __free(free_percpu) = + alloc_percpu(struct work_struct); + cpumask_var_t cpumask_unisol __free(free_cpumask_var) = CPUMASK_VAR_NULL; + cpumask_var_t cpumask_isol __free(free_cpumask_var) = CPUMASK_VAR_NULL; + int cpu; + + lockdep_assert_cpus_held(); + + if (!alloc_cpumask_var(&cpumask_isol, GFP_KERNEL)) + return -ENOMEM; + if (!alloc_cpumask_var(&cpumask_unisol, GFP_KERNEL)) + return -ENOMEM; + if (!works) + return -ENOMEM; + + cpumask_andnot(cpumask_unisol, cpu_online_mask, exclude_cpumask); + cpumask_andnot(cpumask_unisol, cpumask_unisol, tmigr_available_cpumask); + /* Set up the mask earlier to avoid races with the migrator CPU */ + cpumask_or(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_unisol); + for_each_cpu(cpu, cpumask_unisol) { + struct work_struct *work = per_cpu_ptr(works, cpu); + + INIT_WORK(work, tmigr_cpu_unisolate); + schedule_work_on(cpu, work); + } + + cpumask_and(cpumask_isol, exclude_cpumask, tmigr_available_cpumask); + cpumask_and(cpumask_isol, cpumask_isol, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)); + /* + * Handle this here and not in the cpuset code because exclude_cpumask + * might include also the tick CPU if included in isolcpus. 
+ */ + for_each_cpu(cpu, cpumask_isol) { + if (!tick_nohz_cpu_hotpluggable(cpu)) { + cpumask_clear_cpu(cpu, cpumask_isol); + break; + } + } + /* Set up the mask earlier to avoid races with the migrator CPU */ + cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol); + for_each_cpu(cpu, cpumask_isol) { + struct work_struct *work = per_cpu_ptr(works, cpu); + + INIT_WORK(work, tmigr_cpu_isolate); + schedule_work_on(cpu, work); + } + + for_each_cpu_or(cpu, cpumask_isol, cpumask_unisol) + flush_work(per_cpu_ptr(works, cpu)); + return 0; } @@ -1496,7 +1614,7 @@ static int tmigr_set_cpu_available(unsigned int cpu) static int __init tmigr_late_init(void) { return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online", - tmigr_set_cpu_available, tmigr_clear_cpu_available); + tmigr_online_cpu, tmigr_offline_cpu); } static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl, -- 2.51.0 ^ permalink raw reply related [flat|nested] 21+ messages in thread
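The mask algebra in tmigr_isolated_exclude_cpumask() can be summarised with a small set-based model (plain Python, illustrative only; the function name and CPU layout are invented, and the per-CPU tick check is reduced to a single discard of an assumed tick CPU):

```python
# Set-based model (not kernel code) of the mask algebra in
# tmigr_isolated_exclude_cpumask():
#   unisol: online CPUs no longer excluded but not yet in the hierarchy
#   isol:   currently available CPUs that are newly excluded, except
#           nohz_full CPUs and the non-hotpluggable tick CPU

def split_exclude(exclude, *, online, available, hk_noise, tick_cpu):
    unisol = (online - exclude) - available
    isol = (exclude & available) & hk_noise
    isol.discard(tick_cpu)      # keep a global migrator for nohz_full CPUs
    return unisol, isol

online = set(range(8))
available = {0, 1, 2, 3}        # current tmigr_available_cpumask
hk_noise = set(range(8))        # no nohz_full configured
unisol, isol = split_exclude({2, 3}, online=online, available=available,
                             hk_noise=hk_noise, tick_cpu=0)
print(unisol)   # CPUs 4-7 are brought (back) into the hierarchy
print(isol)     # CPUs 2 and 3 leave it
```

In the kernel the two masks are applied to tmigr_available_cpumask up front (before the per-CPU work items run) precisely to avoid races with the migrator CPU selection, as the comments in the patch note.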
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco ` (8 preceding siblings ...) 2025-10-20 11:28 ` [RESEND PATCH v13 9/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco @ 2025-10-30 2:56 ` Waiman Long 2025-10-30 14:12 ` Frederic Weisbecker 9 siblings, 1 reply; 21+ messages in thread From: Waiman Long @ 2025-10-30 2:56 UTC (permalink / raw) To: Thomas Gleixner Cc: linux-kernel, Gabriele Monaco, Frederic Weisbecker, Anna-Maria Behnsen On 10/20/25 7:27 AM, Gabriele Monaco wrote: > The timer migration mechanism allows active CPUs to pull timers from > idle ones to improve the overall idle time. This is however undesired > when CPU intensive workloads run on isolated cores, as the algorithm > would move the timers from housekeeping to isolated cores, negatively > affecting the isolation. > > Exclude isolated cores from the timer migration algorithm, extend the > concept of unavailable cores, currently used for offline ones, to > isolated ones: > * A core is unavailable if isolated or offline; > * A core is available if non isolated and online; > > A core is considered unavailable as isolated if it belongs to: > * the isolcpus (domain) list > * an isolated cpuset > Except if it is: > * in the nohz_full list (already idle for the hierarchy) > * the nohz timekeeper core (must be available to handle global timers) > > CPUs are added to the hierarchy during late boot, excluding isolated > ones, the hierarchy is also adapted when the cpuset isolation changes. > > Due to how the timer migration algorithm works, any CPU part of the > hierarchy can have their global timers pulled by remote CPUs and have to > pull remote timers, only skipping pulling remote timers would break the > logic. 
> For this reason, prevent isolated CPUs from pulling remote global > timers, but also the other way around: any global timer started on an > isolated CPU will run there. This does not break the concept of > isolation (global timers don't come from outside the CPU) and, if > considered inappropriate, can usually be mitigated with other isolation > techniques (e.g. IRQ pinning). > > This effect was noticed on a 128 cores machine running oslat on the > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, > and the CPU with lowest count in a timer migration hierarchy (here 1 > and 65) appears as always active and continuously pulls global timers, > from the housekeeping CPUs. This ends up moving driver work (e.g. > delayed work) to isolated CPUs and causes latency spikes: > > before the change: > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > ... > Maximum: 1203 10 3 4 ... 5 (us) > > after the change: > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > ... > Maximum: 10 4 3 4 3 ... 5 (us) > > The same behaviour was observed on a machine with as few as 20 cores / > 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. > > The first 5 patches are preparatory work to change the concept of > online/offline to available/unavailable, keep track of those in a > separate cpumask cleanup the setting/clearing functions and change a > function name in cpuset code. > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and > nohz_full from covering all CPUs not leaving any housekeeping one. This > can lead to problems with the changes introduced in this series because > no CPU would remain to handle global timers. > > Patch 9 extends the unavailable status to domain isolated CPUs, which > is the main contribution of the series. > > This series is equivalent to v13 but rebased on v6.18-rc2. Thomas, This patch series has undergone multiple rounds of review. Do you think it is good enough to be merged into tip? 
It does contain some cpuset code, but most of the changes are in the timer code. So I think it is better to go through the tip tree. It does have some minor conflicts with the current for-6.19 branch of the cgroup tree, but it can be easily resolved during merge. What do you think? Thanks, Longman ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-30 2:56 ` [RESEND PATCH v13 0/9] " Waiman Long @ 2025-10-30 14:12 ` Frederic Weisbecker [not found] ` <5457560d-f48a-4a99-8756-51b1017a6aab@redhat.com> 0 siblings, 1 reply; 21+ messages in thread From: Frederic Weisbecker @ 2025-10-30 14:12 UTC (permalink / raw) To: Waiman Long Cc: Thomas Gleixner, linux-kernel, Gabriele Monaco, Anna-Maria Behnsen Hi Waiman, Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : > On 10/20/25 7:27 AM, Gabriele Monaco wrote: > > The timer migration mechanism allows active CPUs to pull timers from > > idle ones to improve the overall idle time. This is however undesired > > when CPU intensive workloads run on isolated cores, as the algorithm > > would move the timers from housekeeping to isolated cores, negatively > > affecting the isolation. > > > > Exclude isolated cores from the timer migration algorithm, extend the > > concept of unavailable cores, currently used for offline ones, to > > isolated ones: > > * A core is unavailable if isolated or offline; > > * A core is available if non isolated and online; > > > > A core is considered unavailable as isolated if it belongs to: > > * the isolcpus (domain) list > > * an isolated cpuset > > Except if it is: > > * in the nohz_full list (already idle for the hierarchy) > > * the nohz timekeeper core (must be available to handle global timers) > > > > CPUs are added to the hierarchy during late boot, excluding isolated > > ones, the hierarchy is also adapted when the cpuset isolation changes. > > > > Due to how the timer migration algorithm works, any CPU part of the > > hierarchy can have their global timers pulled by remote CPUs and have to > > pull remote timers, only skipping pulling remote timers would break the > > logic. 
> > For this reason, prevent isolated CPUs from pulling remote global > > timers, but also the other way around: any global timer started on an > > isolated CPU will run there. This does not break the concept of > > isolation (global timers don't come from outside the CPU) and, if > > considered inappropriate, can usually be mitigated with other isolation > > techniques (e.g. IRQ pinning). > > > > This effect was noticed on a 128 cores machine running oslat on the > > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, > > and the CPU with lowest count in a timer migration hierarchy (here 1 > > and 65) appears as always active and continuously pulls global timers, > > from the housekeeping CPUs. This ends up moving driver work (e.g. > > delayed work) to isolated CPUs and causes latency spikes: > > > > before the change: > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > ... > > Maximum: 1203 10 3 4 ... 5 (us) > > > > after the change: > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > ... > > Maximum: 10 4 3 4 3 ... 5 (us) > > > > The same behaviour was observed on a machine with as few as 20 cores / > > 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. > > > > The first 5 patches are preparatory work to change the concept of > > online/offline to available/unavailable, keep track of those in a > > separate cpumask cleanup the setting/clearing functions and change a > > function name in cpuset code. > > > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and > > nohz_full from covering all CPUs not leaving any housekeeping one. This > > can lead to problems with the changes introduced in this series because > > no CPU would remain to handle global timers. > > > > Patch 9 extends the unavailable status to domain isolated CPUs, which > > is the main contribution of the series. > > > > This series is equivalent to v13 but rebased on v6.18-rc2. 
> > Thomas, > > This patch series have undergone multiple round of reviews. Do you think it > is good enough to be merged into tip? > > It does contain some cpuset code, but most of the changes are in the timer > code. So I think it is better to go through the tip tree. It does have some > minor conflicts with the current for-6.19 branch of the cgroup tree, but it > can be easily resolved during merge. > > What do you think? Just wait a little, I realize I made a buggy suggestion to Gabriele and a detail needs to be fixed. My bad... -- Frederic Weisbecker SUSE Labs ^ permalink raw reply [flat|nested] 21+ messages in thread
[parent not found: <5457560d-f48a-4a99-8756-51b1017a6aab@redhat.com>]
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration [not found] ` <5457560d-f48a-4a99-8756-51b1017a6aab@redhat.com> @ 2025-10-30 16:09 ` Gabriele Monaco 2025-10-30 16:37 ` Waiman Long 2025-10-30 17:08 ` Frederic Weisbecker 1 sibling, 1 reply; 21+ messages in thread From: Gabriele Monaco @ 2025-10-30 16:09 UTC (permalink / raw) To: Waiman Long, Frederic Weisbecker Cc: Thomas Gleixner, linux-kernel, Anna-Maria Behnsen On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: > > On 10/30/25 10:12 AM, Frederic Weisbecker wrote: > > > > > > Hi Waiman, > > > > Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : > > > > > > > > On 10/20/25 7:27 AM, Gabriele Monaco wrote: > > > > > > > > > > > The timer migration mechanism allows active CPUs to pull timers from > > > > idle ones to improve the overall idle time. This is however undesired > > > > when CPU intensive workloads run on isolated cores, as the algorithm > > > > would move the timers from housekeeping to isolated cores, negatively > > > > affecting the isolation. > > > > > > > > Exclude isolated cores from the timer migration algorithm, extend the > > > > concept of unavailable cores, currently used for offline ones, to > > > > isolated ones: > > > > * A core is unavailable if isolated or offline; > > > > * A core is available if non isolated and online; > > > > > > > > A core is considered unavailable as isolated if it belongs to: > > > > * the isolcpus (domain) list > > > > * an isolated cpuset > > > > Except if it is: > > > > * in the nohz_full list (already idle for the hierarchy) > > > > * the nohz timekeeper core (must be available to handle global timers) > > > > > > > > CPUs are added to the hierarchy during late boot, excluding isolated > > > > ones, the hierarchy is also adapted when the cpuset isolation changes. 
> > > > > > > > Due to how the timer migration algorithm works, any CPU part of the > > > > hierarchy can have their global timers pulled by remote CPUs and have to > > > > pull remote timers, only skipping pulling remote timers would break the > > > > logic. > > > > For this reason, prevent isolated CPUs from pulling remote global > > > > timers, but also the other way around: any global timer started on an > > > > isolated CPU will run there. This does not break the concept of > > > > isolation (global timers don't come from outside the CPU) and, if > > > > considered inappropriate, can usually be mitigated with other isolation > > > > techniques (e.g. IRQ pinning). > > > > > > > > This effect was noticed on a 128 cores machine running oslat on the > > > > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, > > > > and the CPU with lowest count in a timer migration hierarchy (here 1 > > > > and 65) appears as always active and continuously pulls global timers, > > > > from the housekeeping CPUs. This ends up moving driver work (e.g. > > > > delayed work) to isolated CPUs and causes latency spikes: > > > > > > > > before the change: > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > ... > > > > Maximum: 1203 10 3 4 ... 5 (us) > > > > > > > > after the change: > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > ... > > > > Maximum: 10 4 3 4 3 ... 5 (us) > > > > > > > > The same behaviour was observed on a machine with as few as 20 cores / > > > > 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. > > > > > > > > The first 5 patches are preparatory work to change the concept of > > > > online/offline to available/unavailable, keep track of those in a > > > > separate cpumask cleanup the setting/clearing functions and change a > > > > function name in cpuset code. 
> > > > > > > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and > > > > nohz_full from covering all CPUs not leaving any housekeeping one. This > > > > can lead to problems with the changes introduced in this series because > > > > no CPU would remain to handle global timers. > > > > > > > > Patch 9 extends the unavailable status to domain isolated CPUs, which > > > > is the main contribution of the series. > > > > > > > > This series is equivalent to v13 but rebased on v6.18-rc2. > > > > > > > > > > Thomas, > > > > > > This patch series have undergone multiple round of reviews. Do you think > > > it > > > is good enough to be merged into tip? > > > > > > It does contain some cpuset code, but most of the changes are in the timer > > > code. So I think it is better to go through the tip tree. It does have > > > some > > > minor conflicts with the current for-6.19 branch of the cgroup tree, but > > > it > > > can be easily resolved during merge. > > > > > > What do you think? > > > > > > > Just wait a little, I realize I made a buggy suggestion to Gabriele and > > a detail needs to be fixed. > > > > My bad... > > > > OK, I thought you were OK with the timer changes. I guess Gabriele will have > to send out a new version to address your finding. Sure, I'm going to have a look at this next week and send a V14. Thanks, Gabriele ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-30 16:09 ` Gabriele Monaco @ 2025-10-30 16:37 ` Waiman Long 2025-10-30 17:10 ` Frederic Weisbecker 0 siblings, 1 reply; 21+ messages in thread From: Waiman Long @ 2025-10-30 16:37 UTC (permalink / raw) To: Gabriele Monaco, Waiman Long, Frederic Weisbecker Cc: Thomas Gleixner, linux-kernel, Anna-Maria Behnsen On 10/30/25 12:09 PM, Gabriele Monaco wrote: > > On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: >> >> On 10/30/25 10:12 AM, Frederic Weisbecker wrote: >> >> >>> >>> Hi Waiman, >>> >>> Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : >>> >>>> >>>> On 10/20/25 7:27 AM, Gabriele Monaco wrote: >>>> >>>>> >>>>> The timer migration mechanism allows active CPUs to pull timers from >>>>> idle ones to improve the overall idle time. This is however undesired >>>>> when CPU intensive workloads run on isolated cores, as the algorithm >>>>> would move the timers from housekeeping to isolated cores, negatively >>>>> affecting the isolation. >>>>> >>>>> Exclude isolated cores from the timer migration algorithm, extend the >>>>> concept of unavailable cores, currently used for offline ones, to >>>>> isolated ones: >>>>> * A core is unavailable if isolated or offline; >>>>> * A core is available if non isolated and online; >>>>> >>>>> A core is considered unavailable as isolated if it belongs to: >>>>> * the isolcpus (domain) list >>>>> * an isolated cpuset >>>>> Except if it is: >>>>> * in the nohz_full list (already idle for the hierarchy) >>>>> * the nohz timekeeper core (must be available to handle global timers) >>>>> >>>>> CPUs are added to the hierarchy during late boot, excluding isolated >>>>> ones, the hierarchy is also adapted when the cpuset isolation changes. 
>>>>> >>>>> Due to how the timer migration algorithm works, any CPU part of the >>>>> hierarchy can have their global timers pulled by remote CPUs and have to >>>>> pull remote timers, only skipping pulling remote timers would break the >>>>> logic. >>>>> For this reason, prevent isolated CPUs from pulling remote global >>>>> timers, but also the other way around: any global timer started on an >>>>> isolated CPU will run there. This does not break the concept of >>>>> isolation (global timers don't come from outside the CPU) and, if >>>>> considered inappropriate, can usually be mitigated with other isolation >>>>> techniques (e.g. IRQ pinning). >>>>> >>>>> This effect was noticed on a 128 cores machine running oslat on the >>>>> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, >>>>> and the CPU with lowest count in a timer migration hierarchy (here 1 >>>>> and 65) appears as always active and continuously pulls global timers, >>>>> from the housekeeping CPUs. This ends up moving driver work (e.g. >>>>> delayed work) to isolated CPUs and causes latency spikes: >>>>> >>>>> before the change: >>>>> >>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>> ... >>>>> Maximum: 1203 10 3 4 ... 5 (us) >>>>> >>>>> after the change: >>>>> >>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>> ... >>>>> Maximum: 10 4 3 4 3 ... 5 (us) >>>>> >>>>> The same behaviour was observed on a machine with as few as 20 cores / >>>>> 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. >>>>> >>>>> The first 5 patches are preparatory work to change the concept of >>>>> online/offline to available/unavailable, keep track of those in a >>>>> separate cpumask cleanup the setting/clearing functions and change a >>>>> function name in cpuset code. >>>>> >>>>> Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and >>>>> nohz_full from covering all CPUs not leaving any housekeeping one. 
This >>>>> can lead to problems with the changes introduced in this series because >>>>> no CPU would remain to handle global timers. >>>>> >>>>> Patch 9 extends the unavailable status to domain isolated CPUs, which >>>>> is the main contribution of the series. >>>>> >>>>> This series is equivalent to v13 but rebased on v6.18-rc2. >>>>> >>>> >>>> Thomas, >>>> >>>> This patch series have undergone multiple round of reviews. Do you think >>>> it >>>> is good enough to be merged into tip? >>>> >>>> It does contain some cpuset code, but most of the changes are in the timer >>>> code. So I think it is better to go through the tip tree. It does have >>>> some >>>> minor conflicts with the current for-6.19 branch of the cgroup tree, but >>>> it >>>> can be easily resolved during merge. >>>> >>>> What do you think? >>>> >>> >>> Just wait a little, I realize I made a buggy suggestion to Gabriele and >>> a detail needs to be fixed. >>> >>> My bad... >>> >> >> OK, I thought you were OK with the timer changes. I guess Gabriele will have >> to send out a new version to address your finding. > Sure, I'm going to have a look at this next week and send a V14. I am going to extract out your 2 cpuset patches and send them to the cgroup mailing list separately. So you don't need to include them in your next version. Cheers, Longman ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-30 16:37 ` Waiman Long @ 2025-10-30 17:10 ` Frederic Weisbecker 2025-10-30 17:57 ` Waiman Long 0 siblings, 1 reply; 21+ messages in thread From: Frederic Weisbecker @ 2025-10-30 17:10 UTC (permalink / raw) To: Waiman Long Cc: Gabriele Monaco, Thomas Gleixner, linux-kernel, Anna-Maria Behnsen Le Thu, Oct 30, 2025 at 12:37:08PM -0400, Waiman Long a écrit : > On 10/30/25 12:09 PM, Gabriele Monaco wrote: > > > > On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: > > > On 10/30/25 10:12 AM, Frederic Weisbecker wrote: > > > > Hi Waiman, > > > > > > > > Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : > > > > > On 10/20/25 7:27 AM, Gabriele Monaco wrote: > > > > > > The timer migration mechanism allows active CPUs to pull timers from > > > > > > idle ones to improve the overall idle time. This is however undesired > > > > > > when CPU intensive workloads run on isolated cores, as the algorithm > > > > > > would move the timers from housekeeping to isolated cores, negatively > > > > > > affecting the isolation. > > > > > > > > > > > > Exclude isolated cores from the timer migration algorithm, extend the > > > > > > concept of unavailable cores, currently used for offline ones, to > > > > > > isolated ones: > > > > > > * A core is unavailable if isolated or offline; > > > > > > * A core is available if non isolated and online; > > > > > > > > > > > > A core is considered unavailable as isolated if it belongs to: > > > > > > * the isolcpus (domain) list > > > > > > * an isolated cpuset > > > > > > Except if it is: > > > > > > * in the nohz_full list (already idle for the hierarchy) > > > > > > * the nohz timekeeper core (must be available to handle global timers) > > > > > > > > > > > > CPUs are added to the hierarchy during late boot, excluding isolated > > > > > > ones, the hierarchy is also adapted when the cpuset isolation changes. 
> > > > > > > > > > > > Due to how the timer migration algorithm works, any CPU part of the > > > > > > hierarchy can have their global timers pulled by remote CPUs and have to > > > > > > pull remote timers, only skipping pulling remote timers would break the > > > > > > logic. > > > > > > For this reason, prevent isolated CPUs from pulling remote global > > > > > > timers, but also the other way around: any global timer started on an > > > > > > isolated CPU will run there. This does not break the concept of > > > > > > isolation (global timers don't come from outside the CPU) and, if > > > > > > considered inappropriate, can usually be mitigated with other isolation > > > > > > techniques (e.g. IRQ pinning). > > > > > > > > > > > > This effect was noticed on a 128 cores machine running oslat on the > > > > > > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, > > > > > > and the CPU with lowest count in a timer migration hierarchy (here 1 > > > > > > and 65) appears as always active and continuously pulls global timers, > > > > > > from the housekeeping CPUs. This ends up moving driver work (e.g. > > > > > > delayed work) to isolated CPUs and causes latency spikes: > > > > > > > > > > > > before the change: > > > > > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > > > ... > > > > > > Maximum: 1203 10 3 4 ... 5 (us) > > > > > > > > > > > > after the change: > > > > > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > > > ... > > > > > > Maximum: 10 4 3 4 3 ... 5 (us) > > > > > > > > > > > > The same behaviour was observed on a machine with as few as 20 cores / > > > > > > 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. 
> > > > > > > > > > > > The first 5 patches are preparatory work to change the concept of > > > > > > online/offline to available/unavailable, keep track of those in a > > > > > > separate cpumask cleanup the setting/clearing functions and change a > > > > > > function name in cpuset code. > > > > > > > > > > > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and > > > > > > nohz_full from covering all CPUs not leaving any housekeeping one. This > > > > > > can lead to problems with the changes introduced in this series because > > > > > > no CPU would remain to handle global timers. > > > > > > > > > > > > Patch 9 extends the unavailable status to domain isolated CPUs, which > > > > > > is the main contribution of the series. > > > > > > > > > > > > This series is equivalent to v13 but rebased on v6.18-rc2. > > > > > Thomas, > > > > > > > > > > This patch series have undergone multiple round of reviews. Do you think > > > > > it > > > > > is good enough to be merged into tip? > > > > > > > > > > It does contain some cpuset code, but most of the changes are in the timer > > > > > code. So I think it is better to go through the tip tree. It does have > > > > > some > > > > > minor conflicts with the current for-6.19 branch of the cgroup tree, but > > > > > it > > > > > can be easily resolved during merge. > > > > > > > > > > What do you think? > > > > Just wait a little, I realize I made a buggy suggestion to Gabriele and > > > > a detail needs to be fixed. > > > > > > > > My bad... > > > OK, I thought you were OK with the timer changes. I guess Gabriele will have > > > to send out a new version to address your finding. > > Sure, I'm going to have a look at this next week and send a V14. > > I am going to extract out your 2 cpuset patches and send them to the cgroup > mailing list separately. So you don't need to include them in your next > version. 
I'm not sure this will help if you apply those to an external tree if the plan is to apply the whole to the timer tree. Or we'll create a dependency issue... > Cheers, > Longman > -- Frederic Weisbecker SUSE Labs ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-30 17:10 ` Frederic Weisbecker @ 2025-10-30 17:57 ` Waiman Long 2025-10-31 13:48 ` Frederic Weisbecker 0 siblings, 1 reply; 21+ messages in thread From: Waiman Long @ 2025-10-30 17:57 UTC (permalink / raw) To: Frederic Weisbecker, Waiman Long Cc: Gabriele Monaco, Thomas Gleixner, linux-kernel, Anna-Maria Behnsen On 10/30/25 1:10 PM, Frederic Weisbecker wrote: > Le Thu, Oct 30, 2025 at 12:37:08PM -0400, Waiman Long a écrit : >> On 10/30/25 12:09 PM, Gabriele Monaco wrote: >>> On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: >>>> On 10/30/25 10:12 AM, Frederic Weisbecker wrote: >>>>> Hi Waiman, >>>>> >>>>> Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : >>>>>> On 10/20/25 7:27 AM, Gabriele Monaco wrote: >>>>>>> The timer migration mechanism allows active CPUs to pull timers from >>>>>>> idle ones to improve the overall idle time. This is however undesired >>>>>>> when CPU intensive workloads run on isolated cores, as the algorithm >>>>>>> would move the timers from housekeeping to isolated cores, negatively >>>>>>> affecting the isolation. >>>>>>> >>>>>>> Exclude isolated cores from the timer migration algorithm, extend the >>>>>>> concept of unavailable cores, currently used for offline ones, to >>>>>>> isolated ones: >>>>>>> * A core is unavailable if isolated or offline; >>>>>>> * A core is available if non isolated and online; >>>>>>> >>>>>>> A core is considered unavailable as isolated if it belongs to: >>>>>>> * the isolcpus (domain) list >>>>>>> * an isolated cpuset >>>>>>> Except if it is: >>>>>>> * in the nohz_full list (already idle for the hierarchy) >>>>>>> * the nohz timekeeper core (must be available to handle global timers) >>>>>>> >>>>>>> CPUs are added to the hierarchy during late boot, excluding isolated >>>>>>> ones, the hierarchy is also adapted when the cpuset isolation changes. 
>>>>>>> >>>>>>> Due to how the timer migration algorithm works, any CPU part of the >>>>>>> hierarchy can have their global timers pulled by remote CPUs and have to >>>>>>> pull remote timers, only skipping pulling remote timers would break the >>>>>>> logic. >>>>>>> For this reason, prevent isolated CPUs from pulling remote global >>>>>>> timers, but also the other way around: any global timer started on an >>>>>>> isolated CPU will run there. This does not break the concept of >>>>>>> isolation (global timers don't come from outside the CPU) and, if >>>>>>> considered inappropriate, can usually be mitigated with other isolation >>>>>>> techniques (e.g. IRQ pinning). >>>>>>> >>>>>>> This effect was noticed on a 128 cores machine running oslat on the >>>>>>> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, >>>>>>> and the CPU with lowest count in a timer migration hierarchy (here 1 >>>>>>> and 65) appears as always active and continuously pulls global timers, >>>>>>> from the housekeeping CPUs. This ends up moving driver work (e.g. >>>>>>> delayed work) to isolated CPUs and causes latency spikes: >>>>>>> >>>>>>> before the change: >>>>>>> >>>>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>>>> ... >>>>>>> Maximum: 1203 10 3 4 ... 5 (us) >>>>>>> >>>>>>> after the change: >>>>>>> >>>>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>>>> ... >>>>>>> Maximum: 10 4 3 4 3 ... 5 (us) >>>>>>> >>>>>>> The same behaviour was observed on a machine with as few as 20 cores / >>>>>>> 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. >>>>>>> >>>>>>> The first 5 patches are preparatory work to change the concept of >>>>>>> online/offline to available/unavailable, keep track of those in a >>>>>>> separate cpumask cleanup the setting/clearing functions and change a >>>>>>> function name in cpuset code. 
>>>>>>> >>>>>>> Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and >>>>>>> nohz_full from covering all CPUs not leaving any housekeeping one. This >>>>>>> can lead to problems with the changes introduced in this series because >>>>>>> no CPU would remain to handle global timers. >>>>>>> >>>>>>> Patch 9 extends the unavailable status to domain isolated CPUs, which >>>>>>> is the main contribution of the series. >>>>>>> >>>>>>> This series is equivalent to v13 but rebased on v6.18-rc2. >>>>>> Thomas, >>>>>> >>>>>> This patch series have undergone multiple round of reviews. Do you think >>>>>> it >>>>>> is good enough to be merged into tip? >>>>>> >>>>>> It does contain some cpuset code, but most of the changes are in the timer >>>>>> code. So I think it is better to go through the tip tree. It does have >>>>>> some >>>>>> minor conflicts with the current for-6.19 branch of the cgroup tree, but >>>>>> it >>>>>> can be easily resolved during merge. >>>>>> >>>>>> What do you think? >>>>> Just wait a little, I realize I made a buggy suggestion to Gabriele and >>>>> a detail needs to be fixed. >>>>> >>>>> My bad... >>>> OK, I thought you were OK with the timer changes. I guess Gabriele will have >>>> to send out a new version to address your finding. >>> Sure, I'm going to have a look at this next week and send a V14. >> I am going to extract out your 2 cpuset patches and send them to the cgroup >> mailing list separately. So you don't need to include them in your next >> version. > I'm not sure this will help if you apply those to an external tree if the > plan is to apply the whole to the timer tree. Or we'll create a dependency > issue... These 2 cpuset patches are actually independent of the timer related changes. The purpose of these two patches is to prevent the cpuset code from adding isolated CPUs in such a way that all the nohz_full HK CPUs become domain-isolated. This is a corner case that normal users won't try to do. 
The patches are just an insurance policy to ensure that users can't do that. This is complementary to the sched/isolation patch that limits what CPUs can be put to the isolcpus and nohz_full boot parameters. All these patches are independent of the timer related changes, though you can say that the solution will only be complete if all the pieces are in place. There is another set of pending cpuset patches from Chen Ridong that does some restructuring of the cpuset code and will likely have some conflicts with these 2 patches. So I would like to settle the cpuset changes to avoid future conflicts. Cheers, Longman ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-30 17:57 ` Waiman Long @ 2025-10-31 13:48 ` Frederic Weisbecker 2025-10-31 14:03 ` Gabriele Monaco 2025-10-31 16:14 ` Waiman Long 0 siblings, 2 replies; 21+ messages in thread From: Frederic Weisbecker @ 2025-10-31 13:48 UTC (permalink / raw) To: Waiman Long Cc: Gabriele Monaco, Thomas Gleixner, linux-kernel, Anna-Maria Behnsen Le Thu, Oct 30, 2025 at 01:57:50PM -0400, Waiman Long a écrit : > On 10/30/25 1:10 PM, Frederic Weisbecker wrote: > > Le Thu, Oct 30, 2025 at 12:37:08PM -0400, Waiman Long a écrit : > > > On 10/30/25 12:09 PM, Gabriele Monaco wrote: > > > > On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: > > > > > On 10/30/25 10:12 AM, Frederic Weisbecker wrote: > > > > > > Hi Waiman, > > > > > > > > > > > > Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : > > > > > > > On 10/20/25 7:27 AM, Gabriele Monaco wrote: > > > > > > > > The timer migration mechanism allows active CPUs to pull timers from > > > > > > > > idle ones to improve the overall idle time. This is however undesired > > > > > > > > when CPU intensive workloads run on isolated cores, as the algorithm > > > > > > > > would move the timers from housekeeping to isolated cores, negatively > > > > > > > > affecting the isolation. 
> > > > > > > > > > > > > > > > Exclude isolated cores from the timer migration algorithm, extend the > > > > > > > > concept of unavailable cores, currently used for offline ones, to > > > > > > > > isolated ones: > > > > > > > > * A core is unavailable if isolated or offline; > > > > > > > > * A core is available if non isolated and online; > > > > > > > > > > > > > > > > A core is considered unavailable as isolated if it belongs to: > > > > > > > > * the isolcpus (domain) list > > > > > > > > * an isolated cpuset > > > > > > > > Except if it is: > > > > > > > > * in the nohz_full list (already idle for the hierarchy) > > > > > > > > * the nohz timekeeper core (must be available to handle global timers) > > > > > > > > > > > > > > > > CPUs are added to the hierarchy during late boot, excluding isolated > > > > > > > > ones, the hierarchy is also adapted when the cpuset isolation changes. > > > > > > > > > > > > > > > > Due to how the timer migration algorithm works, any CPU part of the > > > > > > > > hierarchy can have their global timers pulled by remote CPUs and have to > > > > > > > > pull remote timers, only skipping pulling remote timers would break the > > > > > > > > logic. > > > > > > > > For this reason, prevent isolated CPUs from pulling remote global > > > > > > > > timers, but also the other way around: any global timer started on an > > > > > > > > isolated CPU will run there. This does not break the concept of > > > > > > > > isolation (global timers don't come from outside the CPU) and, if > > > > > > > > considered inappropriate, can usually be mitigated with other isolation > > > > > > > > techniques (e.g. IRQ pinning). > > > > > > > > > > > > > > > > This effect was noticed on a 128 cores machine running oslat on the > > > > > > > > isolated cores (1-31,33-63,65-95,97-127). 
The tool monopolises CPUs, > > > > > > > > and the CPU with lowest count in a timer migration hierarchy (here 1 > > > > > > > > and 65) appears as always active and continuously pulls global timers, > > > > > > > > from the housekeeping CPUs. This ends up moving driver work (e.g. > > > > > > > > delayed work) to isolated CPUs and causes latency spikes: > > > > > > > > > > > > > > > > before the change: > > > > > > > > > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > > > > > ... > > > > > > > > Maximum: 1203 10 3 4 ... 5 (us) > > > > > > > > > > > > > > > > after the change: > > > > > > > > > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > > > > > ... > > > > > > > > Maximum: 10 4 3 4 3 ... 5 (us) > > > > > > > > > > > > > > > > The same behaviour was observed on a machine with as few as 20 cores / > > > > > > > > 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. > > > > > > > > > > > > > > > > The first 5 patches are preparatory work to change the concept of > > > > > > > > online/offline to available/unavailable, keep track of those in a > > > > > > > > separate cpumask cleanup the setting/clearing functions and change a > > > > > > > > function name in cpuset code. > > > > > > > > > > > > > > > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and > > > > > > > > nohz_full from covering all CPUs not leaving any housekeeping one. This > > > > > > > > can lead to problems with the changes introduced in this series because > > > > > > > > no CPU would remain to handle global timers. > > > > > > > > > > > > > > > > Patch 9 extends the unavailable status to domain isolated CPUs, which > > > > > > > > is the main contribution of the series. > > > > > > > > > > > > > > > > This series is equivalent to v13 but rebased on v6.18-rc2. > > > > > > > Thomas, > > > > > > > > > > > > > > This patch series have undergone multiple round of reviews. 
Do you think > > > > > > > it > > > > > > > is good enough to be merged into tip? > > > > > > > > > > > > > > It does contain some cpuset code, but most of the changes are in the timer > > > > > > > code. So I think it is better to go through the tip tree. It does have > > > > > > > some > > > > > > > minor conflicts with the current for-6.19 branch of the cgroup tree, but > > > > > > > it > > > > > > > can be easily resolved during merge. > > > > > > > > > > > > > > What do you think? > > > > > > Just wait a little, I realize I made a buggy suggestion to Gabriele and > > > > > > a detail needs to be fixed. > > > > > > > > > > > > My bad... > > > > > OK, I thought you were OK with the timer changes. I guess Gabriele will have > > > > > to send out a new version to address your finding. > > > > Sure, I'm going to have a look at this next week and send a V14. > > > I am going to extract out your 2 cpuset patches and send them to the cgroup > > > mailing list separately. So you don't need to include them in your next > > > version. > > I'm not sure this will help if you apply those to an external tree if the > > plan is to apply the whole to the timer tree. Or we'll create a dependency > > issue... > > These 2 cpuset patches are actually independent of the timer related > changes. The purpose of these two patches are to prevent the cpuset code > from adding isolated CPUs in such a way that all the nohz_full HK CPUs > become domain-isolated. This is a corner case that normal users won't try to > do. The patches are just an insurance policy to ensure that users can't do > that. This is complementary to the sched/isolation patch that limits what > CPUs can be put to the isolcpus and nohz_full boot parameters. All these > patches are independent of the timer related changes, though you can say > that the solution will only be complete if all the pieces are in place. 
Right but there will be a conflict if the timer patches don't have the rename of update_unbound_workqueue_cpumask(). > There are another set of pending cpuset patches from Chen Ridong that does > some restructuring of the cpuset code that will likely have some conflicts > with these 2 patches. So I would like to settle the cpuset changes to avoid > future conflicts. Ok so it looks like there will be conflicts eventually during the merge window. In that case it makes sense to take Gabriele's cpuset patches but he'll need to rebase the rest on top of the timer tree. Thanks. > > Cheers, > Longman > -- Frederic Weisbecker SUSE Labs ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-31 13:48 ` Frederic Weisbecker @ 2025-10-31 14:03 ` Gabriele Monaco 2025-10-31 16:14 ` Waiman Long 1 sibling, 0 replies; 21+ messages in thread From: Gabriele Monaco @ 2025-10-31 14:03 UTC (permalink / raw) To: Frederic Weisbecker, Waiman Long Cc: Thomas Gleixner, linux-kernel, Anna-Maria Behnsen On Fri, 2025-10-31 at 14:48 +0100, Frederic Weisbecker wrote: > Le Thu, Oct 30, 2025 at 01:57:50PM -0400, Waiman Long a écrit : > > These 2 cpuset patches are actually independent of the timer related > > changes. The purpose of these two patches are to prevent the cpuset code > > from adding isolated CPUs in such a way that all the nohz_full HK CPUs > > become domain-isolated. This is a corner case that normal users won't try to > > do. The patches are just an insurance policy to ensure that users can't do > > that. This is complementary to the sched/isolation patch that limits what > > CPUs can be put to the isolcpus and nohz_full boot parameters. All these > > patches are independent of the timer related changes, though you can say > > that the solution will only be complete if all the pieces are in place. > > Right but there will be a conflict if the timer patches don't have > the rename of update_unbound_workqueue_cpumask(). > Waiman, are you referring to [1]? Since that is an RFC, couldn't you just take in those patches before merging [1] and adapt just that one directly in the cpuset tree? I guess git should figure it out if we keep my cpuset patches in both trees (or at least 5/9), as long as the conflicts come in later commits. Different story is if you already took some conflicting patches in, then I can look into rebasing as Frederic suggests. 
Thanks, Gabriele [1] - https://lore.kernel.org/lkml/20251025064844.495525-8-chenridong@huaweicloud.com > > There are another set of pending cpuset patches from Chen Ridong that does > > some restructuring of the cpuset code that will likely have some conflicts > > with these 2 patches. So I would like to settle the cpuset changes to avoid > > future conflicts. > > Ok so it looks like there will be conflicts eventually during the merge > window. In that case it makes sense to take Gabriel cpuset patches but > he'll need to rebase the rest on top of the timer tree. > > Thanks. > > > > > Cheers, > > Longman > > ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration 2025-10-31 13:48 ` Frederic Weisbecker 2025-10-31 14:03 ` Gabriele Monaco @ 2025-10-31 16:14 ` Waiman Long 1 sibling, 0 replies; 21+ messages in thread From: Waiman Long @ 2025-10-31 16:14 UTC (permalink / raw) To: Frederic Weisbecker, Waiman Long Cc: Gabriele Monaco, Thomas Gleixner, linux-kernel, Anna-Maria Behnsen On 10/31/25 9:48 AM, Frederic Weisbecker wrote: > Le Thu, Oct 30, 2025 at 01:57:50PM -0400, Waiman Long a écrit : >> On 10/30/25 1:10 PM, Frederic Weisbecker wrote: >>> Le Thu, Oct 30, 2025 at 12:37:08PM -0400, Waiman Long a écrit : >>>> On 10/30/25 12:09 PM, Gabriele Monaco wrote: >>>>> On Thu, 2025-10-30 at 11:37 -0400, Waiman Long wrote: >>>>>> On 10/30/25 10:12 AM, Frederic Weisbecker wrote: >>>>>>> Hi Waiman, >>>>>>> >>>>>>> Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : >>>>>>>> On 10/20/25 7:27 AM, Gabriele Monaco wrote: >>>>>>>>> The timer migration mechanism allows active CPUs to pull timers from >>>>>>>>> idle ones to improve the overall idle time. This is however undesired >>>>>>>>> when CPU intensive workloads run on isolated cores, as the algorithm >>>>>>>>> would move the timers from housekeeping to isolated cores, negatively >>>>>>>>> affecting the isolation. 
>>>>>>>>> >>>>>>>>> Exclude isolated cores from the timer migration algorithm, extend the >>>>>>>>> concept of unavailable cores, currently used for offline ones, to >>>>>>>>> isolated ones: >>>>>>>>> * A core is unavailable if isolated or offline; >>>>>>>>> * A core is available if non isolated and online; >>>>>>>>> >>>>>>>>> A core is considered unavailable as isolated if it belongs to: >>>>>>>>> * the isolcpus (domain) list >>>>>>>>> * an isolated cpuset >>>>>>>>> Except if it is: >>>>>>>>> * in the nohz_full list (already idle for the hierarchy) >>>>>>>>> * the nohz timekeeper core (must be available to handle global timers) >>>>>>>>> >>>>>>>>> CPUs are added to the hierarchy during late boot, excluding isolated >>>>>>>>> ones, the hierarchy is also adapted when the cpuset isolation changes. >>>>>>>>> >>>>>>>>> Due to how the timer migration algorithm works, any CPU part of the >>>>>>>>> hierarchy can have their global timers pulled by remote CPUs and have to >>>>>>>>> pull remote timers, only skipping pulling remote timers would break the >>>>>>>>> logic. >>>>>>>>> For this reason, prevent isolated CPUs from pulling remote global >>>>>>>>> timers, but also the other way around: any global timer started on an >>>>>>>>> isolated CPU will run there. This does not break the concept of >>>>>>>>> isolation (global timers don't come from outside the CPU) and, if >>>>>>>>> considered inappropriate, can usually be mitigated with other isolation >>>>>>>>> techniques (e.g. IRQ pinning). >>>>>>>>> >>>>>>>>> This effect was noticed on a 128 cores machine running oslat on the >>>>>>>>> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, >>>>>>>>> and the CPU with lowest count in a timer migration hierarchy (here 1 >>>>>>>>> and 65) appears as always active and continuously pulls global timers, >>>>>>>>> from the housekeeping CPUs. This ends up moving driver work (e.g. 
>>>>>>>>> delayed work) to isolated CPUs and causes latency spikes: >>>>>>>>> >>>>>>>>> before the change: >>>>>>>>> >>>>>>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>>>>>> ... >>>>>>>>> Maximum: 1203 10 3 4 ... 5 (us) >>>>>>>>> >>>>>>>>> after the change: >>>>>>>>> >>>>>>>>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>>>>>>>> ... >>>>>>>>> Maximum: 10 4 3 4 3 ... 5 (us) >>>>>>>>> >>>>>>>>> The same behaviour was observed on a machine with as few as 20 cores / >>>>>>>>> 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top. >>>>>>>>> >>>>>>>>> The first 5 patches are preparatory work to change the concept of >>>>>>>>> online/offline to available/unavailable, keep track of those in a >>>>>>>>> separate cpumask cleanup the setting/clearing functions and change a >>>>>>>>> function name in cpuset code. >>>>>>>>> >>>>>>>>> Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and >>>>>>>>> nohz_full from covering all CPUs not leaving any housekeeping one. This >>>>>>>>> can lead to problems with the changes introduced in this series because >>>>>>>>> no CPU would remain to handle global timers. >>>>>>>>> >>>>>>>>> Patch 9 extends the unavailable status to domain isolated CPUs, which >>>>>>>>> is the main contribution of the series. >>>>>>>>> >>>>>>>>> This series is equivalent to v13 but rebased on v6.18-rc2. >>>>>>>> Thomas, >>>>>>>> >>>>>>>> This patch series have undergone multiple round of reviews. Do you think >>>>>>>> it >>>>>>>> is good enough to be merged into tip? >>>>>>>> >>>>>>>> It does contain some cpuset code, but most of the changes are in the timer >>>>>>>> code. So I think it is better to go through the tip tree. It does have >>>>>>>> some >>>>>>>> minor conflicts with the current for-6.19 branch of the cgroup tree, but >>>>>>>> it >>>>>>>> can be easily resolved during merge. >>>>>>>> >>>>>>>> What do you think? 
>>>>>>> Just wait a little, I realize I made a buggy suggestion to Gabriele and >>>>>>> a detail needs to be fixed. >>>>>>> >>>>>>> My bad... >>>>>> OK, I thought you were OK with the timer changes. I guess Gabriele will have >>>>>> to send out a new version to address your finding. >>>>> Sure, I'm going to have a look at this next week and send a V14. >>>> I am going to extract out your 2 cpuset patches and send them to the cgroup >>>> mailing list separately. So you don't need to include them in your next >>>> version. >>> I'm not sure this will help if you apply those to an external tree if the >>> plan is to apply the whole to the timer tree. Or we'll create a dependency >>> issue... >> These 2 cpuset patches are actually independent of the timer related >> changes. The purpose of these two patches are to prevent the cpuset code >> from adding isolated CPUs in such a way that all the nohz_full HK CPUs >> become domain-isolated. This is a corner case that normal users won't try to >> do. The patches are just an insurance policy to ensure that users can't do >> that. This is complementary to the sched/isolation patch that limits what >> CPUs can be put to the isolcpus and nohz_full boot parameters. All these >> patches are independent of the timer related changes, though you can say >> that the solution will only be complete if all the pieces are in place. > Right but there will be a conflict if the timer patches don't have > the rename of update_unbound_workqueue_cpumask(). Yes, I missed the fact that patch 9 does have a cpuset.c hunk. Yes, it will have a dependency on the prior cpuset patches. >> There are another set of pending cpuset patches from Chen Ridong that does >> some restructuring of the cpuset code that will likely have some conflicts >> with these 2 patches. So I would like to settle the cpuset changes to avoid >> future conflicts. > Ok so it looks like there will be conflicts eventually during the merge > window. 
In that case it makes sense to take Gabriel cpuset patches but > he'll need to rebase the rest on top of the timer tree. It depends on whether the patch set can be merged in time into tip for the next v6.19 merge window. Cheers, Longman > > Thanks. > >> Cheers, >> Longman >> ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration [not found] ` <5457560d-f48a-4a99-8756-51b1017a6aab@redhat.com> 2025-10-30 16:09 ` Gabriele Monaco @ 2025-10-30 17:08 ` Frederic Weisbecker 1 sibling, 0 replies; 21+ messages in thread From: Frederic Weisbecker @ 2025-10-30 17:08 UTC (permalink / raw) To: Waiman Long Cc: Thomas Gleixner, linux-kernel, Gabriele Monaco, Anna-Maria Behnsen Le Thu, Oct 30, 2025 at 11:37:36AM -0400, Waiman Long a écrit : > On 10/30/25 10:12 AM, Frederic Weisbecker wrote: > > Hi Waiman, > > > > Le Wed, Oct 29, 2025 at 10:56:06PM -0400, Waiman Long a écrit : > > > On 10/20/25 7:27 AM, Gabriele Monaco wrote: > > > > The timer migration mechanism allows active CPUs to pull timers from > > > > idle ones to improve the overall idle time. This is however undesired > > > > when CPU intensive workloads run on isolated cores, as the algorithm > > > > would move the timers from housekeeping to isolated cores, negatively > > > > affecting the isolation. > > > > > > > > Exclude isolated cores from the timer migration algorithm, extend the > > > > concept of unavailable cores, currently used for offline ones, to > > > > isolated ones: > > > > * A core is unavailable if isolated or offline; > > > > * A core is available if non isolated and online; > > > > > > > > A core is considered unavailable as isolated if it belongs to: > > > > * the isolcpus (domain) list > > > > * an isolated cpuset > > > > Except if it is: > > > > * in the nohz_full list (already idle for the hierarchy) > > > > * the nohz timekeeper core (must be available to handle global timers) > > > > > > > > CPUs are added to the hierarchy during late boot, excluding isolated > > > > ones, the hierarchy is also adapted when the cpuset isolation changes. 
> > > >
> > > > Due to how the timer migration algorithm works, any CPU part of the
> > > > hierarchy can have their global timers pulled by remote CPUs and have to
> > > > pull remote timers; only skipping pulling remote timers would break the
> > > > logic.
> > > > For this reason, prevent isolated CPUs from pulling remote global
> > > > timers, but also the other way around: any global timer started on an
> > > > isolated CPU will run there. This does not break the concept of
> > > > isolation (global timers don't come from outside the CPU) and, if
> > > > considered inappropriate, can usually be mitigated with other isolation
> > > > techniques (e.g. IRQ pinning).
> > > >
> > > > This effect was noticed on a 128 core machine running oslat on the
> > > > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
> > > > and the CPU with lowest count in a timer migration hierarchy (here 1
> > > > and 65) appears as always active and continuously pulls global timers
> > > > from the housekeeping CPUs. This ends up moving driver work (e.g.
> > > > delayed work) to isolated CPUs and causes latency spikes:
> > > >
> > > > before the change:
> > > >
> > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> > > > ...
> > > > Maximum: 1203 10 3 4 ... 5 (us)
> > > >
> > > > after the change:
> > > >
> > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> > > > ...
> > > > Maximum: 10 4 3 4 3 ... 5 (us)
> > > >
> > > > The same behaviour was observed on a machine with as few as 20 cores /
> > > > 40 threads with isolcpus set to 1-9,11-39, using rtla-osnoise-top.
> > > >
> > > > The first 5 patches are preparatory work to change the concept of
> > > > online/offline to available/unavailable, keep track of those in a
> > > > separate cpumask, clean up the setting/clearing functions and change a
> > > > function name in cpuset code.
> > > >
> > > > Patch 6 and 7 adapt isolation and cpuset to prevent domain isolated and
> > > > nohz_full from covering all CPUs, not leaving any housekeeping one. This
> > > > can lead to problems with the changes introduced in this series because
> > > > no CPU would remain to handle global timers.
> > > >
> > > > Patch 9 extends the unavailable status to domain isolated CPUs, which
> > > > is the main contribution of the series.
> > > >
> > > > This series is equivalent to v13 but rebased on v6.18-rc2.
> > > Thomas,
> > >
> > > This patch series has undergone multiple rounds of reviews. Do you think it
> > > is good enough to be merged into tip?
> > >
> > > It does contain some cpuset code, but most of the changes are in the timer
> > > code. So I think it is better to go through the tip tree. It does have some
> > > minor conflicts with the current for-6.19 branch of the cgroup tree, but it
> > > can be easily resolved during merge.
> > >
> > > What do you think?
> > Just wait a little, I realize I made a buggy suggestion to Gabriele and
> > a detail needs to be fixed.
> >
> > My bad...
> OK, I thought you were OK with the timer changes.

I was ok until... just a few days ago, and I should have written about it
right away but you know, being wrong is a process that takes time :o)

> I guess Gabriele will have to send out a new version to address your finding.

Right.

Thanks!

-- 
Frederic Weisbecker
SUSE Labs

^ permalink raw reply	[flat|nested] 21+ messages in thread
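The availability rule quoted in the cover letter (unavailable if offline, or if domain/cpuset isolated, except for nohz_full CPUs and the nohz timekeeper) can be sketched as a simple predicate. This is a hypothetical userspace illustration only, using plain 64-bit masks in place of the kernel's cpumasks; the struct and function names (`cpu_state`, `tmigr_cpu_available`, `cpu_in`) are invented for the example and are not the actual kernel API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the various kernel cpumasks involved. */
struct cpu_state {
    uint64_t online;      /* online CPUs */
    uint64_t isolcpus;    /* isolcpus= (domain) boot parameter */
    uint64_t cpuset_isol; /* CPUs in isolated cpusets */
    uint64_t nohz_full;   /* nohz_full CPUs (already idle for the hierarchy) */
    int      tick_cpu;    /* nohz timekeeper, must stay available */
};

static bool cpu_in(uint64_t mask, int cpu)
{
    return (mask >> cpu) & 1;
}

/*
 * A CPU is available to the timer migration hierarchy if it is online
 * and not isolated, where the timekeeper and nohz_full CPUs are exempt
 * from the isolation check per the rules above.
 */
static bool tmigr_cpu_available(const struct cpu_state *s, int cpu)
{
    if (!cpu_in(s->online, cpu))
        return false;               /* offline: unavailable */
    if (cpu == s->tick_cpu)
        return true;                /* timekeeper handles global timers */
    if (cpu_in(s->nohz_full, cpu))
        return true;                /* already idle for the hierarchy */
    if (cpu_in(s->isolcpus, cpu) || cpu_in(s->cpuset_isol, cpu))
        return false;               /* domain or cpuset isolated */
    return true;
}
```

For example, with CPUs 0-7 online, CPUs 1-3 in isolcpus, CPU 3 in nohz_full and CPU 2 acting as timekeeper, only CPU 1 ends up unavailable among the isolated set.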
end of thread, other threads:[~2025-10-31 16:14 UTC | newest]
Thread overview: 21+ messages
2025-10-20 11:27 [RESEND PATCH v13 0/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 1/9] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
2025-10-30 14:07 ` Frederic Weisbecker
2025-10-20 11:27 ` [RESEND PATCH v13 2/9] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 3/9] timers: Add the available mask in timer migration Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 4/9] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 5/9] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
2025-10-20 11:27 ` [RESEND PATCH v13 6/9] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
2025-10-20 11:28 ` [RESEND PATCH v13 7/9] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-10-20 11:28 ` [RESEND PATCH v13 8/9] cpumask: Add initialiser to use cleanup helpers Gabriele Monaco
2025-10-20 11:28 ` [RESEND PATCH v13 9/9] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-10-30 2:56 ` [RESEND PATCH v13 0/9] " Waiman Long
2025-10-30 14:12 ` Frederic Weisbecker
[not found] ` <5457560d-f48a-4a99-8756-51b1017a6aab@redhat.com>
2025-10-30 16:09 ` Gabriele Monaco
2025-10-30 16:37 ` Waiman Long
2025-10-30 17:10 ` Frederic Weisbecker
2025-10-30 17:57 ` Waiman Long
2025-10-31 13:48 ` Frederic Weisbecker
2025-10-31 14:03 ` Gabriele Monaco
2025-10-31 16:14 ` Waiman Long
2025-10-30 17:08 ` Frederic Weisbecker