* [PATCH v2 0/3] timers: Exclude isolated cpus from timer migration
@ 2025-04-15 10:25 Gabriele Monaco
2025-04-15 10:25 ` [PATCH v2 1/3] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Gabriele Monaco @ 2025-04-15 10:25 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is, however, undesired
when CPU-intensive workloads run on isolated cores, as the algorithm
would move timers from housekeeping to isolated cores, negatively
affecting the isolation.
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the lowest-numbered CPU in each timer migration hierarchy group
(here 1 and 65) appears always active and continuously pulls global
timers from the housekeeping CPUs. This ends up moving driver work
(e.g. delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if neither isolated nor offline.
A core is considered isolated if:
* it is in the isolcpus list
* it is in the nohz_full list
* it is in an isolated cpuset
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers itself; skipping only the pulling of remote
timers would break the logic.
For this reason, we prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
The first two patches are preparatory work: they change the concept of
online/offline to available/unavailable and track available CPUs in a
separate cpumask.
The third patch extends the unavailable status to isolated CPUs, which
is the main contribution of the series.
Changes since v1 [1]:
* split into smaller patches
* use available mask instead of unavailable
* simplification and cleanup
[1] - https://lore.kernel.org/lkml/20250410065446.57304-2-gmonaco@redhat.com
Gabriele Monaco (3):
timers: Rename tmigr 'online' bit to 'available'
timers: Add the available mask in timer migration
  timers: Exclude isolated cpus from timer migration
 include/linux/timer.h                  |  6 +++
 include/trace/events/timer_migration.h |  4 +-
 kernel/cgroup/cpuset.c                 | 14 ++++---
 kernel/time/tick-internal.h            |  1 +
 kernel/time/timer.c                    | 10 +++++
 kernel/time/timer_migration.c          | 58 +++++++++++++++++++-------
 kernel/time/timer_migration.h          |  2 +-
 7 files changed, 71 insertions(+), 24 deletions(-)
base-commit: 834a4a689699090a406d1662b03affa8b155d025
--
2.49.0
^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/3] timers: Rename tmigr 'online' bit to 'available'
  2025-04-15 10:25 [PATCH v2 0/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-04-15 10:25 ` Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 2/3] timers: Add the available mask in timer migration Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2 siblings, 0 replies; 8+ messages in thread
From: Gabriele Monaco @ 2025-04-15 10:25 UTC (permalink / raw)
  To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
  Cc: Gabriele Monaco

The timer migration hierarchy excludes offline CPUs via the
tmigr_is_not_available function, which is essentially checking the
online bit for the CPU.

Rename the online bit to available and all references in function names
and tracepoint to generalise the concept of available CPUs.

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/trace/events/timer_migration.h |  4 ++--
 kernel/time/timer_migration.c          | 22 +++++++++++-----------
 kernel/time/timer_migration.h          |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 47db5eaf2f9ab..61171b13c687c 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -173,14 +173,14 @@ DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_active,
 	TP_ARGS(tmc)
 );

-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_online,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_available,

 	TP_PROTO(struct tmigr_cpu *tmc),

 	TP_ARGS(tmc)
 );

-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_offline,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_unavailable,

 	TP_PROTO(struct tmigr_cpu *tmc),

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2f6330831f084..7efd897c79599 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -427,7 +427,7 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);

 static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
 {
-	return !(tmc->tmgroup && tmc->online);
+	return !(tmc->tmgroup && tmc->available);
 }

 /*
@@ -926,7 +926,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
 	 * updated the event takes care when hierarchy is completely
 	 * idle. Otherwise the migrator does it as the event is enqueued.
 	 */
-	if (!tmc->online || tmc->remote || tmc->cpuevt.ignore ||
+	if (!tmc->available || tmc->remote || tmc->cpuevt.ignore ||
 	    now < tmc->cpuevt.nextevt.expires) {
 		raw_spin_unlock_irq(&tmc->lock);
 		return;
@@ -973,7 +973,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
 	 * (See also section "Required event and timerqueue update after a
 	 * remote expiry" in the documentation at the top)
 	 */
-	if (!tmc->online || !tmc->idle) {
+	if (!tmc->available || !tmc->idle) {
 		timer_unlock_remote_bases(cpu);
 		goto unlock;
 	}
@@ -1435,19 +1435,19 @@ static long tmigr_trigger_active(void *unused)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);

-	WARN_ON_ONCE(!tmc->online || tmc->idle);
+	WARN_ON_ONCE(!tmc->available || tmc->idle);

 	return 0;
 }

-static int tmigr_cpu_offline(unsigned int cpu)
+static int tmigr_cpu_unavailable(unsigned int cpu)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
 	int migrator;
 	u64 firstexp;

 	raw_spin_lock_irq(&tmc->lock);
-	tmc->online = false;
+	tmc->available = false;
 	WRITE_ONCE(tmc->wakeup, KTIME_MAX);

 	/*
@@ -1455,7 +1455,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
 	 * offline; Therefore nextevt value is set to KTIME_MAX
 	 */
 	firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
-	trace_tmigr_cpu_offline(tmc);
+	trace_tmigr_cpu_unavailable(tmc);
 	raw_spin_unlock_irq(&tmc->lock);

 	if (firstexp != KTIME_MAX) {
@@ -1466,7 +1466,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
 	return 0;
 }

-static int tmigr_cpu_online(unsigned int cpu)
+static int tmigr_cpu_available(unsigned int cpu)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);

@@ -1475,11 +1475,11 @@ static int tmigr_cpu_online(unsigned int cpu)
 		return -EINVAL;

 	raw_spin_lock_irq(&tmc->lock);
-	trace_tmigr_cpu_online(tmc);
+	trace_tmigr_cpu_available(tmc);
 	tmc->idle = timer_base_is_idle();
 	if (!tmc->idle)
 		__tmigr_cpu_activate(tmc);
-	tmc->online = true;
+	tmc->available = true;
 	raw_spin_unlock_irq(&tmc->lock);
 	return 0;
 }
@@ -1850,7 +1850,7 @@ static int __init tmigr_init(void)
 		goto err;

 	ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
-				tmigr_cpu_online, tmigr_cpu_offline);
+				tmigr_cpu_available, tmigr_cpu_unavailable);
 	if (ret)
 		goto err;

diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index ae19f70f8170f..70879cde6fdd0 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -97,7 +97,7 @@ struct tmigr_group {
 	 */
 struct tmigr_cpu {
 	raw_spinlock_t		lock;
-	bool			online;
+	bool			available;
 	bool			idle;
 	bool			remote;
 	struct tmigr_group	*tmgroup;
--
2.49.0
* [PATCH v2 2/3] timers: Add the available mask in timer migration
  2025-04-15 10:25 [PATCH v2 0/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 1/3] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
@ 2025-04-15 10:25 ` Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2 siblings, 0 replies; 8+ messages in thread
From: Gabriele Monaco @ 2025-04-15 10:25 UTC (permalink / raw)
  To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
  Cc: Gabriele Monaco

Keep track of the CPUs available for timer migration in a cpumask. This
prepares the ground to generalise the concept of unavailable CPUs.

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 kernel/time/timer_migration.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 7efd897c79599..1fae38fbac8c2 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -422,6 +422,9 @@ static unsigned int tmigr_crossnode_level __read_mostly;

 static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);

+/* CPUs available for migration */
+static cpumask_var_t tmigr_available_cpumask;
+
 #define TMIGR_NONE	0xFF
 #define BIT_CNT		8

@@ -1449,6 +1452,7 @@ static int tmigr_cpu_unavailable(unsigned int cpu)
 	raw_spin_lock_irq(&tmc->lock);
 	tmc->available = false;
 	WRITE_ONCE(tmc->wakeup, KTIME_MAX);
+	cpumask_clear_cpu(cpu, tmigr_available_cpumask);

 	/*
 	 * CPU has to handle the local events on his own, when on the way to
@@ -1459,7 +1463,7 @@ static int tmigr_cpu_unavailable(unsigned int cpu)
 	raw_spin_unlock_irq(&tmc->lock);

 	if (firstexp != KTIME_MAX) {
-		migrator = cpumask_any_but(cpu_online_mask, cpu);
+		migrator = cpumask_any(tmigr_available_cpumask);
 		work_on_cpu(migrator, tmigr_trigger_active, NULL);
 	}

@@ -1480,6 +1484,7 @@ static int tmigr_cpu_available(unsigned int cpu)
 	if (!tmc->idle)
 		__tmigr_cpu_activate(tmc);
 	tmc->available = true;
+	cpumask_set_cpu(cpu, tmigr_available_cpumask);
 	raw_spin_unlock_irq(&tmc->lock);
 	return 0;
 }
@@ -1801,6 +1806,11 @@ static int __init tmigr_init(void)
 	if (ncpus == 1)
 		return 0;

+	if (!zalloc_cpumask_var(&tmigr_available_cpumask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
 	/*
 	 * Calculate the required hierarchy levels. Unfortunately there is no
 	 * reliable information available, unless all possible CPUs have been
--
2.49.0
* [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration
  2025-04-15 10:25 [PATCH v2 0/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 1/3] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
  2025-04-15 10:25 ` [PATCH v2 2/3] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-04-15 10:25 ` Gabriele Monaco
  2025-04-15 15:30   ` Waiman Long
  2 siblings, 1 reply; 8+ messages in thread
From: Gabriele Monaco @ 2025-04-15 10:25 UTC (permalink / raw)
  To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
  Cc: Gabriele Monaco

The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.

This effect was noticed on a 128 cores machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with lowest count in a timer migration hierarchy (here 1
and 65) appears as always active and continuously pulls global timers,
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:

before the change:

 # oslat -c 1-31,33-63,65-95,97-127 -D 62s
 ...
 Maximum: 1203 10 3 4 ... 5 (us)

after the change:

 # oslat -c 1-31,33-63,65-95,97-127 -D 62s
 ...
 Maximum: 10 4 3 4 3 ... 5 (us)

Exclude isolated cores from the timer migration algorithm, extend the
concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if isolated and offline;

A core is considered unavailable as idle if:
* is in the isolcpus list
* is in the nohz_full list
* is in an isolated cpuset

Due to how the timer migration algorithm works, any CPU part of the
hierarchy can have their global timers pulled by remote CPUs and have to
pull remote timers, only skipping pulling remote timers would break the
logic.
For this reason, we prevents isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/timer.h         |  6 ++++++
 kernel/cgroup/cpuset.c        | 14 ++++++++------
 kernel/time/tick-internal.h   |  1 +
 kernel/time/timer.c           | 10 ++++++++++
 kernel/time/timer_migration.c | 24 +++++++++++++++++++++---
 5 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/include/linux/timer.h b/include/linux/timer.h
index 10596d7c3a346..4722e075d9843 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -190,4 +190,10 @@ int timers_dead_cpu(unsigned int cpu);
 #define timers_dead_cpu NULL
 #endif

+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern void tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask);
+#else
+static inline void tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask) { }
+#endif
+
 #endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 306b604300914..866b4b8188118 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1323,7 +1323,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
 	return isolcpus_updated;
 }

-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_exclusion_cpumasks(bool isolcpus_updated)
 {
 	int ret;

@@ -1334,6 +1334,8 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)

 	ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
+
+	tmigr_isolated_exclude_cpumask(isolated_cpus);
 }

 /**
@@ -1454,7 +1456,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
 	list_add(&cs->remote_sibling, &remote_children);
 	cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	cpuset_force_rebuild();
 	cs->prs_err = 0;

@@ -1495,7 +1497,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
 	compute_effective_exclusive_cpumask(cs, NULL, NULL);
 	reset_partition_data(cs);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	cpuset_force_rebuild();

 	/*
@@ -1563,7 +1565,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
 	if (xcpus)
 		cpumask_copy(cs->exclusive_cpus, xcpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);
 	if (adding || deleting)
 		cpuset_force_rebuild();

@@ -1906,7 +1908,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
 		WARN_ON_ONCE(parent->nr_subparts < 0);
 	}
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);

 	if ((old_prs != new_prs) && (cmd == partcmd_update))
 		update_partition_exclusive_flag(cs, new_prs);
@@ -2931,7 +2933,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	else if (isolcpus_updated)
 		isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
 	spin_unlock_irq(&callback_lock);
-	update_unbound_workqueue_cpumask(isolcpus_updated);
+	update_exclusion_cpumasks(isolcpus_updated);

 	/* Force update if switching back to member & update effective_xcpus */
 	update_cpumasks_hier(cs, &tmpmask, !new_prs);
diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
index faac36de35b9e..75580f7c69c64 100644
--- a/kernel/time/tick-internal.h
+++ b/kernel/time/tick-internal.h
@@ -167,6 +167,7 @@ extern void fetch_next_timer_interrupt_remote(unsigned long basej, u64 basem,
 extern void timer_lock_remote_bases(unsigned int cpu);
 extern void timer_unlock_remote_bases(unsigned int cpu);
 extern bool timer_base_is_idle(void);
+extern bool timer_base_remote_is_idle(unsigned int cpu);
 extern void timer_expire_remote(unsigned int cpu);
 # endif
 #else /* CONFIG_NO_HZ_COMMON */
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 4d915c0a263c3..f04960091eba9 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2162,6 +2162,16 @@ bool timer_base_is_idle(void)
 	return __this_cpu_read(timer_bases[BASE_LOCAL].is_idle);
 }

+/**
+ * timer_base_remote_is_idle() - Return whether timer base is set idle for cpu
+ *
+ * Returns value of local timer base is_idle value for remote cpu.
+ */
+bool timer_base_remote_is_idle(unsigned int cpu)
+{
+	return per_cpu(timer_bases[BASE_LOCAL].is_idle, cpu);
+}
+
 static void __run_timer_base(struct timer_base *base);

 /**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 1fae38fbac8c2..6fe6ca798e98d 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
 #include <linux/spinlock.h>
 #include <linux/timerqueue.h>
 #include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>

 #include "timer_migration.h"
 #include "tick-internal.h"
@@ -1445,7 +1446,7 @@ static long tmigr_trigger_active(void *unused)

 static int tmigr_cpu_unavailable(unsigned int cpu)
 {
-	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
+	struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu);
 	int migrator;
 	u64 firstexp;

@@ -1472,15 +1473,18 @@ static int tmigr_cpu_unavailable(unsigned int cpu)

 static int tmigr_cpu_available(unsigned int cpu)
 {
-	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
+	struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu);

 	/* Check whether CPU data was successfully initialized */
 	if (WARN_ON_ONCE(!tmc->tmgroup))
 		return -EINVAL;

+	/* Isolated CPUs don't participate in timer migration */
+	if (cpu_is_isolated(cpu))
+		return 0;
 	raw_spin_lock_irq(&tmc->lock);
 	trace_tmigr_cpu_available(tmc);
-	tmc->idle = timer_base_is_idle();
+	tmc->idle = timer_base_remote_is_idle(cpu);
 	if (!tmc->idle)
 		__tmigr_cpu_activate(tmc);
 	tmc->available = true;
@@ -1489,6 +1493,20 @@ static int tmigr_cpu_available(unsigned int cpu)
 	return 0;
 }

+void tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask)
+{
+	int cpu;
+
+	lockdep_assert_cpus_held();
+
+	for_each_cpu_and(cpu, exclude_cpumask, tmigr_available_cpumask)
+		tmigr_cpu_unavailable(cpu);
+
+	for_each_cpu_andnot(cpu, cpu_online_mask, exclude_cpumask)
+		if (!cpumask_test_cpu(cpu, tmigr_available_cpumask))
+			tmigr_cpu_available(cpu);
+}
+
 static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
 			     int node)
 {
--
2.49.0
* Re: [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration
  2025-04-15 10:25 ` [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-04-15 15:30   ` Waiman Long
  2025-04-15 15:49     ` Gabriele Monaco
  0 siblings, 1 reply; 8+ messages in thread
From: Waiman Long @ 2025-04-15 15:30 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel, Frederic Weisbecker, Thomas Gleixner

On 4/15/25 6:25 AM, Gabriele Monaco wrote:
> The timer migration mechanism allows active CPUs to pull timers from
> idle ones to improve the overall idle time. This is however undesired
> when CPU intensive workloads run on isolated cores, as the algorithm
> would move the timers from housekeeping to isolated cores, negatively
> affecting the isolation.
>
> [...]
>
> Exclude isolated cores from the timer migration algorithm, extend the
> concept of unavailable cores, currently used for offline ones, to
> isolated ones:
> * A core is unavailable if isolated or offline;
> * A core is available if isolated and offline;

I think you mean "A core is available if NOT isolated and NOT offline".
Right?

> [...]
> +	/* Isolated CPUs don't participate in timer migration */
> +	if (cpu_is_isolated(cpu))
> +		return 0;

There are two main sets of isolated CPUs used by cpu_is_isolated() -
boot-time isolated CPUs via the "isolcpus" and "nohz_full" boot command
line options, and runtime isolated CPUs via cpuset isolated partitions.

The check for runtime isolated CPUs is redundant here as those CPUs
won't be passed to tmigr_cpu_available(). So this call is effectively
removing the boot-time isolated CPUs from the available cpumask,
especially during the boot-up process. Maybe you can add some comment
about this behavioral change.

> [...]

So far, I haven't seen any major issue with this patch series.

Cheers,
Longman
* Re: [PATCH v2 3/3] timers: Exclude isolated cpus from timer migration
  2025-04-15 15:30   ` Waiman Long
@ 2025-04-15 15:49     ` Gabriele Monaco
  2025-04-16  2:24       ` Waiman Long
  0 siblings, 1 reply; 8+ messages in thread
From: Gabriele Monaco @ 2025-04-15 15:49 UTC (permalink / raw)
  To: Waiman Long, linux-kernel, Frederic Weisbecker, Thomas Gleixner

On Tue, 2025-04-15 at 11:30 -0400, Waiman Long wrote:
> On 4/15/25 6:25 AM, Gabriele Monaco wrote:
> > [...]
> > Exclude isolated cores from the timer migration algorithm, extend
> > the concept of unavailable cores, currently used for offline ones,
> > to isolated ones:
> > * A core is unavailable if isolated or offline;
> > * A core is available if isolated and offline;
>
> I think you mean "A core is available if NOT isolated and NOT
> offline". Right?

Yes, of course... My bad. Thanks for spotting.

> [...]
> > + */ > > +bool timer_base_remote_is_idle(unsigned int cpu) > > +{ > > + return per_cpu(timer_bases[BASE_LOCAL].is_idle, cpu); > > +} > > + > > static void __run_timer_base(struct timer_base *base); > > > > /** > > diff --git a/kernel/time/timer_migration.c > > b/kernel/time/timer_migration.c > > index 1fae38fbac8c2..6fe6ca798e98d 100644 > > --- a/kernel/time/timer_migration.c > > +++ b/kernel/time/timer_migration.c > > @@ -10,6 +10,7 @@ > > #include <linux/spinlock.h> > > #include <linux/timerqueue.h> > > #include <trace/events/ipi.h> > > +#include <linux/sched/isolation.h> > > > > #include "timer_migration.h" > > #include "tick-internal.h" > > @@ -1445,7 +1446,7 @@ static long tmigr_trigger_active(void > > *unused) > > > > static int tmigr_cpu_unavailable(unsigned int cpu) > > { > > - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > > + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); > > int migrator; > > u64 firstexp; > > > > @@ -1472,15 +1473,18 @@ static int tmigr_cpu_unavailable(unsigned > > int cpu) > > > > static int tmigr_cpu_available(unsigned int cpu) > > { > > - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > > + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); > > > > /* Check whether CPU data was successfully initialized */ > > if (WARN_ON_ONCE(!tmc->tmgroup)) > > return -EINVAL; > > > > + /* Isolated CPUs don't participate in timer migration */ > > + if (cpu_is_isolated(cpu)) > > + return 0; > > There are two main sets of isolated CPUs used by cpu_is_isolated() - > boot-time isolated CPUs via "isolcpus" and "nohz_full" boot command > time > options and runtime isolated CPUs via cpuset isolated partitions. The > check for runtime isolated CPUs is redundant here as those CPUs won't > be > passed to tmigr_cpu_available(). Since tmigr_cpu_available is shared between isolated and offline CPUs, I added this check also to make sure bringing an isolated CPU back online won't make it available for tmigr. 
> So this call is effectively removing > the boot time isolated CPUs away from the available cpumask > especially > during the boot up process. Maybe you can add some comment about this > behavioral change. > Do you mean I should make clear that the check in tmigr_cpu_available is especially meaningful at boot time (i.e. when CPUs are first brought online)? Yeah, I probably should, good point. I had that kind of comment in v1 while allocating the mask and removed it while changing a few things. I'm going to make that comment more verbose to clarify when exactly it's needed. > > > raw_spin_lock_irq(&tmc->lock); > > trace_tmigr_cpu_available(tmc); > > - tmc->idle = timer_base_is_idle(); > > + tmc->idle = timer_base_remote_is_idle(cpu); > > if (!tmc->idle) > > __tmigr_cpu_activate(tmc); > > tmc->available = true; > > @@ -1489,6 +1493,20 @@ static int tmigr_cpu_available(unsigned int > > cpu) > > return 0; > > } > > > > +void tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask) > > +{ > > + int cpu; > > + > > + lockdep_assert_cpus_held(); > > + > > + for_each_cpu_and(cpu, exclude_cpumask, > > tmigr_available_cpumask) > > + tmigr_cpu_unavailable(cpu); > > + > > + for_each_cpu_andnot(cpu, cpu_online_mask, exclude_cpumask) > > + if (!cpumask_test_cpu(cpu, > > tmigr_available_cpumask)) > > + tmigr_cpu_available(cpu); > > +} > > + > > static void tmigr_init_group(struct tmigr_group *group, unsigned > > int lvl, > > int node) > > { > > So far, I haven't seen any major issue with this patch series. > Thanks for the review! Cheers, Gabriele ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 3/3] timers: Exclude isolated cpus from timer migation 2025-04-15 15:49 ` Gabriele Monaco @ 2025-04-16 2:24 ` Waiman Long 2025-04-16 6:23 ` Gabriele Monaco 0 siblings, 1 reply; 8+ messages in thread From: Waiman Long @ 2025-04-16 2:24 UTC (permalink / raw) To: Gabriele Monaco, Waiman Long, linux-kernel, Frederic Weisbecker, Thomas Gleixner On 4/15/25 11:49 AM, Gabriele Monaco wrote: > > On Tue, 2025-04-15 at 11:30 -0400, Waiman Long wrote: >> On 4/15/25 6:25 AM, Gabriele Monaco wrote: >>> The timer migration mechanism allows active CPUs to pull timers >>> from >>> idle ones to improve the overall idle time. This is however >>> undesired >>> when CPU intensive workloads run on isolated cores, as the >>> algorithm >>> would move the timers from housekeeping to isolated cores, >>> negatively >>> affecting the isolation. >>> >>> This effect was noticed on a 128 cores machine running oslat on the >>> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises >>> CPUs, >>> and the CPU with lowest count in a timer migration hierarchy (here >>> 1 >>> and 65) appears as always active and continuously pulls global >>> timers, >>> from the housekeeping CPUs. This ends up moving driver work (e.g. >>> delayed work) to isolated CPUs and causes latency spikes: >>> >>> before the change: >>> >>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>> ... >>> Maximum: 1203 10 3 4 ... 5 (us) >>> >>> after the change: >>> >>> # oslat -c 1-31,33-63,65-95,97-127 -D 62s >>> ... >>> Maximum: 10 4 3 4 3 ... 5 (us) >>> >>> Exclude isolated cores from the timer migration algorithm, extend >>> the >>> concept of unavailable cores, currently used for offline ones, to >>> isolated ones: >>> * A core is unavailable if isolated or offline; >>> * A core is available if isolated and offline; >> I think you mean "A core is available if NOT isolated and NOT >> offline". >> Right? > Yes, of course.. My bad. Thanks for spotting. 
> >>> A core is considered unavailable as idle if: What do you mean by "unavailable as idle"? An idle CPU is different from an unavailable CPU, I think. >>> * is in the isolcpus list >>> * is in the nohz_full list >>> * is in an isolated cpuset >>> >>> Due to how the timer migration algorithm works, any CPU part of the >>> hierarchy can have their global timers pulled by remote CPUs and >>> have to >>> pull remote timers, only skipping pulling remote timers would break >>> the >>> logic. >>> For this reason, we prevents isolated CPUs from pulling remote >>> global >>> timers, but also the other way around: any global timer started on >>> an >>> isolated CPU will run there. This does not break the concept of >>> isolation (global timers don't come from outside the CPU) and, if >>> considered inappropriate, can usually be mitigated with other >>> isolation >>> techniques (e.g. IRQ pinning). BTW, I am not that familiar with the timer migration code. Does marking an isolated CPU as unavailable (previously offline) make the above behavior happen? Now unavailable CPUs include the isolated CPUs. We may need to look at some of the online (now available) flag checks within the timer migration code to make sure that they are still doing the right thing. 
>>> diff --git a/kernel/time/timer_migration.c >>> b/kernel/time/timer_migration.c >>> index 1fae38fbac8c2..6fe6ca798e98d 100644 >>> --- a/kernel/time/timer_migration.c >>> +++ b/kernel/time/timer_migration.c >>> @@ -10,6 +10,7 @@ >>> #include <linux/spinlock.h> >>> #include <linux/timerqueue.h> >>> #include <trace/events/ipi.h> >>> +#include <linux/sched/isolation.h> >>> #include "timer_migration.h" >>> #include "tick-internal.h" >>> @@ -1445,7 +1446,7 @@ static long tmigr_trigger_active(void >>> *unused) >>> static int tmigr_cpu_unavailable(unsigned int cpu) >>> { >>> - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); >>> + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); >>> int migrator; >>> u64 firstexp; >>> @@ -1472,15 +1473,18 @@ static int tmigr_cpu_unavailable(unsigned >>> int cpu) >>> static int tmigr_cpu_available(unsigned int cpu) >>> { >>> - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); >>> + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); >>> /* Check whether CPU data was successfully initialized */ >>> if (WARN_ON_ONCE(!tmc->tmgroup)) >>> return -EINVAL; >>> + /* Isolated CPUs don't participate in timer migration */ >>> + if (cpu_is_isolated(cpu)) >>> + return 0; >> There are two main sets of isolated CPUs used by cpu_is_isolated() - >> boot-time isolated CPUs via "isolcpus" and "nohz_full" boot command >> time >> options and runtime isolated CPUs via cpuset isolated partitions. The >> check for runtime isolated CPUs is redundant here as those CPUs won't >> be >> passed to tmigr_cpu_available(). > Since tmigr_cpu_available is shared between isolated and offline CPUs, > I added this check also to make sure bringing an isolated CPU back > online won't make it available for tmigr. Good point, so the check is indeed needed. > >> So this call is effectively removing >> the boot time isolated CPUs away from the available cpumask >> especially >> during the boot up process. Maybe you can add some comment about this >> behavioral change. 
>> > Do you mean I should make clear that the check in tmigr_cpu_available > is especially meaningful at boot time (i.e. when CPUs are first brought > online)? The current tmigr code doesn't look at boot-time isolated CPUs. The cpu_is_isolated() check skips those boot-time isolated CPUs from the mask. I think this should be noted. > > Yeah, I probably should, good point. I had that kind of comment in v1 > while allocating the mask and removed it while changing a few things. > > I'm going to make that comment more verbose to clarify when exactly > it's needed. Thanks, Longman ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 3/3] timers: Exclude isolated cpus from timer migation 2025-04-16 2:24 ` Waiman Long @ 2025-04-16 6:23 ` Gabriele Monaco 0 siblings, 0 replies; 8+ messages in thread From: Gabriele Monaco @ 2025-04-16 6:23 UTC (permalink / raw) To: Waiman Long, linux-kernel, Frederic Weisbecker, Thomas Gleixner On Tue, 2025-04-15 at 22:24 -0400, Waiman Long wrote: > On 4/15/25 11:49 AM, Gabriele Monaco wrote: > > > > On Tue, 2025-04-15 at 11:30 -0400, Waiman Long wrote: > > > On 4/15/25 6:25 AM, Gabriele Monaco wrote: > > > > The timer migration mechanism allows active CPUs to pull timers > > > > from > > > > idle ones to improve the overall idle time. This is however > > > > undesired > > > > when CPU intensive workloads run on isolated cores, as the > > > > algorithm > > > > would move the timers from housekeeping to isolated cores, > > > > negatively > > > > affecting the isolation. > > > > > > > > This effect was noticed on a 128 cores machine running oslat on > > > > the > > > > isolated cores (1-31,33-63,65-95,97-127). The tool monopolises > > > > CPUs, > > > > and the CPU with lowest count in a timer migration hierarchy > > > > (here > > > > 1 > > > > and 65) appears as always active and continuously pulls global > > > > timers, > > > > from the housekeeping CPUs. This ends up moving driver work > > > > (e.g. > > > > delayed work) to isolated CPUs and causes latency spikes: > > > > > > > > before the change: > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > ... > > > > Maximum: 1203 10 3 4 ... 5 (us) > > > > > > > > after the change: > > > > > > > > # oslat -c 1-31,33-63,65-95,97-127 -D 62s > > > > ... > > > > Maximum: 10 4 3 4 3 ... 
5 (us) > > > > > > > > Exclude isolated cores from the timer migration algorithm, > > > > extend > > > > the > > > > concept of unavailable cores, currently used for offline ones, > > > > to > > > > isolated ones: > > > > * A core is unavailable if isolated or offline; > > > > * A core is available if isolated and offline; > > > I think you mean "A core is available if NOT isolated and NOT > > > offline". > > > Right? > > Yes, of course.. My bad. Thanks for spotting. > > > > > > A core is considered unavailable as idle if: > What do you mean by "unavailable as idle"? An idle CPU is different > from > an unvailable CPU, I think. Here I mean unavailable for tmigr, see below for why I got that term. If you find it misleading I could look for a better term to represent the concept. > > > > * is in the isolcpus list > > > > * is in the nohz_full list > > > > * is in an isolated cpuset > > > > > > > > Due to how the timer migration algorithm works, any CPU part of > > > > the > > > > hierarchy can have their global timers pulled by remote CPUs > > > > and > > > > have to > > > > pull remote timers, only skipping pulling remote timers would > > > > break > > > > the > > > > logic. > > > > For this reason, we prevents isolated CPUs from pulling remote > > > > global > > > > timers, but also the other way around: any global timer started > > > > on > > > > an > > > > isolated CPU will run there. This does not break the concept of > > > > isolation (global timers don't come from outside the CPU) and, > > > > if > > > > considered inappropriate, can usually be mitigated with other > > > > isolation > > > > techniques (e.g. IRQ pinning). > > BTW, I am not that familiar with the timer migration code. Does > marking > an isolated CPU as unavailable (previously offline) make the above > behavior happen? > > Now unavailable CPUs include the isolated CPUs. 
We may need to look > at > some of the online (now available) flag check within the timer > migration > code to make sure that they are still doing the right thing. > The original tmigr code excludes offline CPUs from the hierarchy, those clearly won't have timers and everything works. This series changes that concept to unavailable (I took the name from tmigr_is_not_available, which used to check if a CPU was online and initialised for tmigr) to also include isolated CPUs. A CPU is then unavailable because it's offline, isolated or both, this effectively prevents it from joining the hierarchy. One noticeable difference is that isolated CPUs can still have global timers, the fact they don't participate in the hierarchy would isolate also those (which won't migrate). This is what I kind of missed in v1 but acknowledging that, everything seems to work. That's as far as I could see, of course, reviewers might find some corner cases I didn't consider. > > > > > diff --git a/kernel/time/timer_migration.c > > > > b/kernel/time/timer_migration.c > > > > index 1fae38fbac8c2..6fe6ca798e98d 100644 > > > > --- a/kernel/time/timer_migration.c > > > > +++ b/kernel/time/timer_migration.c > > > > @@ -10,6 +10,7 @@ > > > > #include <linux/spinlock.h> > > > > #include <linux/timerqueue.h> > > > > #include <trace/events/ipi.h> > > > > +#include <linux/sched/isolation.h> > > > > #include "timer_migration.h" > > > > #include "tick-internal.h" > > > > @@ -1445,7 +1446,7 @@ static long tmigr_trigger_active(void > > > > *unused) > > > > static int tmigr_cpu_unavailable(unsigned int cpu) > > > > { > > > > - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > > > > + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); > > > > int migrator; > > > > u64 firstexp; > > > > @@ -1472,15 +1473,18 @@ static int > > > > tmigr_cpu_unavailable(unsigned > > > > int cpu) > > > > static int tmigr_cpu_available(unsigned int cpu) > > > > { > > > > - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > > 
> > + struct tmigr_cpu *tmc = per_cpu_ptr(&tmigr_cpu, cpu); > > > > /* Check whether CPU data was successfully > > > > initialized */ > > > > if (WARN_ON_ONCE(!tmc->tmgroup)) > > > > return -EINVAL; > > > > + /* Isolated CPUs don't participate in timer migration > > > > */ > > > > + if (cpu_is_isolated(cpu)) > > > > + return 0; > > > There are two main sets of isolated CPUs used by > > > cpu_is_isolated() - > > > boot-time isolated CPUs via "isolcpus" and "nohz_full" boot > > > command > > > time > > > options and runtime isolated CPUs via cpuset isolated partitions. > > > The > > > check for runtime isolated CPUs is redundant here as those CPUs > > > won't > > > be > > > passed to tmigr_cpu_available(). > > Since tmigr_cpu_available is shared between isolated and offline > > CPUs, > > I added this check also to make sure bringing an isolated CPU back > > online won't make it available for tmigr. > Good point, so the check is indeed needed. > > > > > > So this call is effectively removing > > > the boot time isolated CPUs away from the available cpumask > > > especially > > > during the boot up process. Maybe you can add some comment about > > > this > > > behavioral change. > > > > > Do you mean I should make clear that the check in > > tmigr_cpu_available > > is especially meaningful at boot time (i.e. when CPUs are first > > brought > > online)? > > The current timgr code doesn't look at boot time isolated CPUs. The > cpu_is_isolated() check skips those boot time isolated CPUs from the > mask. I think this should be noted. > Right, will add a note about that. Thanks again, Gabriele ^ permalink raw reply [flat|nested] 8+ messages in thread