* [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration
@ 2025-07-30 13:11 Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 1/8] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
` (7 more replies)
0 siblings, 8 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
Except if it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)
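In code, these criteria reduce to the following predicate, as introduced
in patch 8 (the timekeeper exception is enforced separately, via
tick_nohz_cpu_hotpluggable()):

static inline bool tmigr_is_isolated(int cpu)
{
	/*
	 * Domain isolated at boot (isolcpus=) or through an isolated
	 * cpuset, but only if the CPU is a KERNEL_NOISE housekeeper:
	 * nohz_full CPUs are never excluded here.
	 */
	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
		cpuset_cpu_is_isolated(cpu)) &&
	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
}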
CPUs are added to the hierarchy during late boot, excluding isolated
ones; the hierarchy is also adapted when the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers itself; skipping only the pulling of remote timers
would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
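For example, a device IRQ whose handler ends up queueing work on an
isolated CPU can be pinned to the housekeeping CPUs via the standard
affinity interface (hypothetical IRQ number; the CPU list matches the
housekeeping CPUs of the machine below):

  # echo 0,32,64,96 > /proc/irq/123/smp_affinity_list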
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with the lowest count in a timer migration hierarchy (here 1
and 65) appears as always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
The first 5 patches are preparatory work to change the concept of
online/offline to available/unavailable, keep track of those in a
separate cpumask, clean up the setting/clearing functions and change a
function name in cpuset code.
Patches 6 and 7 adapt isolation and cpuset to prevent domain isolated
and nohz_full CPUs from covering all CPUs and leaving no housekeeping
one. Such a configuration would be a problem with the changes introduced
in this series because no CPU would remain to handle global timers.
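For example, on a hypothetical 4-CPU machine, a command line like:

  isolcpus=0,1 nohz_full=2,3

leaves no housekeeping CPU: with patch 6 the setting parsed last is
invalidated with a warning. The equivalent situation created via cpuset
isolated partitions is rejected by patch 7 with the new PERR_HKEEPING
error.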
Patch 8 extends the unavailable status to domain isolated CPUs, which
is the main contribution of the series.
Changes since v8 [1]:
* Postpone hotplug registration to late initcall (Frederic Weisbecker)
* Move main activation logic in _tmigr_set_cpu_available() and call it
after checking for isolation on hotplug and cpusets changes
* Call _tmigr_set_cpu_available directly to force enable tick CPU if
required (this saves checking for that on every hotplug change).
Changes since v7:
* Move tmigr_available_cpumask out of tmc lock and specify conditions.
* Initialise tmigr isolation despite the state of isolcpus.
* Move tick CPU check to condition to run SMP call.
* Fix descriptions.
Changes since v6 [2]:
* Prevent isolation checks from running during early boot
* Prevent double (de)activation while setting cpus (un)available
* Use synchronous smp calls from the isolation path
* General cleanup
Changes since v5:
* Remove fallback if no housekeeping is left by isolcpus and nohz_full
* Adjust condition not to activate CPUs in the migration hierarchy
* Always force the nohz tick CPU active in the hierarchy
Changes since v4 [3]:
* use on_each_cpu_mask() with changes on isolated CPUs to avoid races
* keep nohz_full CPUs included in the timer migration hierarchy
* prevent domain isolated and nohz_full to cover all CPUs
Changes since v3:
* add parameter to function documentation
* split into multiple straightforward patches
Changes since v2:
* improve comments about handling CPUs isolated at boot
* minor cleanup
Changes since v1 [4]:
* split into smaller patches
* use available mask instead of unavailable
* simplification and cleanup
[1] - https://lore.kernel.org/lkml/20250714133050.193108-9-gmonaco@redhat.com
[2] - https://lore.kernel.org/lkml/20250530142031.215594-1-gmonaco@redhat.com
[3] - https://lore.kernel.org/lkml/20250506091534.42117-7-gmonaco@redhat.com
[4] - https://lore.kernel.org/lkml/20250410065446.57304-2-gmonaco@redhat.com
Frederic Weisbecker (1):
timers/migration: Postpone online/offline callbacks registration to
late initcall
Gabriele Monaco (7):
timers: Rename tmigr 'online' bit to 'available'
timers: Add the available mask in timer migration
timers: Use scoped_guard when setting/clearing the tmigr available
flag
cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to
update_exclusion_cpumasks()
sched/isolation: Force housekeeping if isolcpus and nohz_full don't
leave any
cgroup/cpuset: Fail if isolated and nohz_full don't leave any
housekeeping
timers: Exclude isolated cpus from timer migration
include/linux/timer.h | 9 ++
include/trace/events/timer_migration.h | 4 +-
kernel/cgroup/cpuset.c | 71 +++++++++-
kernel/sched/isolation.c | 12 ++
kernel/time/timer_migration.c | 179 ++++++++++++++++++++-----
kernel/time/timer_migration.h | 2 +-
6 files changed, 233 insertions(+), 44 deletions(-)
base-commit: 038d61fd642278bab63ee8ef722c50d10ab01e8f
--
2.50.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v9 1/8] timers/migration: Postpone online/offline callbacks registration to late initcall
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 2/8] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
` (6 subsequent siblings)
7 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
From: Frederic Weisbecker <frederic@kernel.org>
During the early boot process, the default clocksource used for
timekeeping is jiffies. Better clocksources can only be selected
once clocksource_done_booting() is called as an fs initcall.
NOHZ can only be enabled after that stage, making global timer migration
irrelevant up to that point.
Therefore, don't bother with thrashing the cache within that tree from
the SMP bootup until NOHZ even matters.
Make the CPUs available to the tree on late initcall (cpuhp_setup_state()
invokes the startup callback on every CPU already online), after the
right clocksource has had a chance to be selected. This will also
simplify the handling of domain isolated CPUs in further patches.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2f6330831f08..72987f0d101b 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1484,6 +1484,16 @@ static int tmigr_cpu_online(unsigned int cpu)
return 0;
}
+/*
+ * NOHZ can only be enabled after clocksource_done_booting(). Don't
+ * bother thrashing the cache in the tree before.
+ */
+static int __init tmigr_late_init(void)
+{
+ return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
+ tmigr_cpu_online, tmigr_cpu_offline);
+}
+
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
int node)
{
@@ -1846,18 +1856,10 @@ static int __init tmigr_init(void)
ret = cpuhp_setup_state(CPUHP_TMIGR_PREPARE, "tmigr:prepare",
tmigr_cpu_prepare, NULL);
- if (ret)
- goto err;
-
- ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_cpu_online, tmigr_cpu_offline);
- if (ret)
- goto err;
-
- return 0;
-
err:
- pr_err("Timer migration setup failed\n");
+ if (ret)
+ pr_err("Timer migration setup failed\n");
return ret;
}
early_initcall(tmigr_init);
+late_initcall(tmigr_late_init);
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 2/8] timers: Rename tmigr 'online' bit to 'available'
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 1/8] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 3/8] timers: Add the available mask in timer migration Gabriele Monaco
` (5 subsequent siblings)
7 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration hierarchy excludes offline CPUs via the
tmigr_is_not_available() function, which essentially checks the online
bit for the CPU.
Rename the online bit to available, along with all references in
function names and tracepoints, to generalise the concept of available
CPUs.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/trace/events/timer_migration.h | 4 ++--
kernel/time/timer_migration.c | 22 +++++++++++-----------
kernel/time/timer_migration.h | 2 +-
3 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 47db5eaf2f9a..61171b13c687 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -173,14 +173,14 @@ DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_active,
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_online,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_available,
TP_PROTO(struct tmigr_cpu *tmc),
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_offline,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_unavailable,
TP_PROTO(struct tmigr_cpu *tmc),
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 72987f0d101b..75fce6b8b642 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -427,7 +427,7 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
{
- return !(tmc->tmgroup && tmc->online);
+ return !(tmc->tmgroup && tmc->available);
}
/*
@@ -926,7 +926,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* updated the event takes care when hierarchy is completely
* idle. Otherwise the migrator does it as the event is enqueued.
*/
- if (!tmc->online || tmc->remote || tmc->cpuevt.ignore ||
+ if (!tmc->available || tmc->remote || tmc->cpuevt.ignore ||
now < tmc->cpuevt.nextevt.expires) {
raw_spin_unlock_irq(&tmc->lock);
return;
@@ -973,7 +973,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* (See also section "Required event and timerqueue update after a
* remote expiry" in the documentation at the top)
*/
- if (!tmc->online || !tmc->idle) {
+ if (!tmc->available || !tmc->idle) {
timer_unlock_remote_bases(cpu);
goto unlock;
}
@@ -1435,19 +1435,19 @@ static long tmigr_trigger_active(void *unused)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
- WARN_ON_ONCE(!tmc->online || tmc->idle);
+ WARN_ON_ONCE(!tmc->available || tmc->idle);
return 0;
}
-static int tmigr_cpu_offline(unsigned int cpu)
+static int tmigr_clear_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
int migrator;
u64 firstexp;
raw_spin_lock_irq(&tmc->lock);
- tmc->online = false;
+ tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
/*
@@ -1455,7 +1455,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
* offline; Therefore nextevt value is set to KTIME_MAX
*/
firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
- trace_tmigr_cpu_offline(tmc);
+ trace_tmigr_cpu_unavailable(tmc);
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
@@ -1466,7 +1466,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
return 0;
}
-static int tmigr_cpu_online(unsigned int cpu)
+static int tmigr_set_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
@@ -1475,11 +1475,11 @@ static int tmigr_cpu_online(unsigned int cpu)
return -EINVAL;
raw_spin_lock_irq(&tmc->lock);
- trace_tmigr_cpu_online(tmc);
+ trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
__tmigr_cpu_activate(tmc);
- tmc->online = true;
+ tmc->available = true;
raw_spin_unlock_irq(&tmc->lock);
return 0;
}
@@ -1491,7 +1491,7 @@ static int tmigr_cpu_online(unsigned int cpu)
static int __init tmigr_late_init(void)
{
return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_cpu_online, tmigr_cpu_offline);
+ tmigr_set_cpu_available, tmigr_clear_cpu_available);
}
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index ae19f70f8170..70879cde6fdd 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -97,7 +97,7 @@ struct tmigr_group {
*/
struct tmigr_cpu {
raw_spinlock_t lock;
- bool online;
+ bool available;
bool idle;
bool remote;
struct tmigr_group *tmgroup;
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 3/8] timers: Add the available mask in timer migration
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 1/8] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 2/8] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 4/8] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
` (4 subsequent siblings)
7 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Keep track of the CPUs available for timer migration in a cpumask. This
prepares the ground to generalise the concept of unavailable CPUs.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 75fce6b8b642..57abdef7d0f8 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -422,6 +422,12 @@ static unsigned int tmigr_crossnode_level __read_mostly;
static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
+/*
+ * CPUs available for timer migration.
+ * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ */
+static cpumask_var_t tmigr_available_cpumask;
+
#define TMIGR_NONE 0xFF
#define BIT_CNT 8
@@ -1446,6 +1452,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
int migrator;
u64 firstexp;
+ cpumask_clear_cpu(cpu, tmigr_available_cpumask);
raw_spin_lock_irq(&tmc->lock);
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1459,7 +1466,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
- migrator = cpumask_any_but(cpu_online_mask, cpu);
+ migrator = cpumask_any(tmigr_available_cpumask);
work_on_cpu(migrator, tmigr_trigger_active, NULL);
}
@@ -1474,6 +1481,7 @@ static int tmigr_set_cpu_available(unsigned int cpu)
if (WARN_ON_ONCE(!tmc->tmgroup))
return -EINVAL;
+ cpumask_set_cpu(cpu, tmigr_available_cpumask);
raw_spin_lock_irq(&tmc->lock);
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
@@ -1811,6 +1819,11 @@ static int __init tmigr_init(void)
if (ncpus == 1)
return 0;
+ if (!zalloc_cpumask_var(&tmigr_available_cpumask, GFP_KERNEL)) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
/*
* Calculate the required hierarchy levels. Unfortunately there is no
* reliable information available, unless all possible CPUs have been
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 4/8] timers: Use scoped_guard when setting/clearing the tmigr available flag
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (2 preceding siblings ...)
2025-07-30 13:11 ` [PATCH v9 3/8] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
` (3 subsequent siblings)
7 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Clean up tmigr_clear_cpu_available() and tmigr_set_cpu_available() to
prepare for easier checks on the available flag.
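scoped_guard() (from <linux/cleanup.h>) holds the lock for the duration
of the braced scope and drops it on any exit path, including early
returns. A minimal sketch of the pattern this enables (the early-return
check itself is only added in a later patch):

	scoped_guard(raw_spinlock_irq, &tmc->lock) {
		if (tmc->available)
			return 0;	/* lock released automatically */
		tmc->available = true;
	}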
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 57abdef7d0f8..36e7f784ec60 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1453,17 +1453,17 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
u64 firstexp;
cpumask_clear_cpu(cpu, tmigr_available_cpumask);
- raw_spin_lock_irq(&tmc->lock);
- tmc->available = false;
- WRITE_ONCE(tmc->wakeup, KTIME_MAX);
+ scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ tmc->available = false;
+ WRITE_ONCE(tmc->wakeup, KTIME_MAX);
- /*
- * CPU has to handle the local events on his own, when on the way to
- * offline; Therefore nextevt value is set to KTIME_MAX
- */
- firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
- trace_tmigr_cpu_unavailable(tmc);
- raw_spin_unlock_irq(&tmc->lock);
+ /*
+ * CPU has to handle the local events on his own, when on the way to
+ * offline; Therefore nextevt value is set to KTIME_MAX
+ */
+ firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
+ trace_tmigr_cpu_unavailable(tmc);
+ }
if (firstexp != KTIME_MAX) {
migrator = cpumask_any(tmigr_available_cpumask);
@@ -1482,13 +1482,13 @@ static int tmigr_set_cpu_available(unsigned int cpu)
return -EINVAL;
cpumask_set_cpu(cpu, tmigr_available_cpumask);
- raw_spin_lock_irq(&tmc->lock);
- trace_tmigr_cpu_available(tmc);
- tmc->idle = timer_base_is_idle();
- if (!tmc->idle)
- __tmigr_cpu_activate(tmc);
- tmc->available = true;
- raw_spin_unlock_irq(&tmc->lock);
+ scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ trace_tmigr_cpu_available(tmc);
+ tmc->idle = timer_base_is_idle();
+ if (!tmc->idle)
+ __tmigr_cpu_activate(tmc);
+ tmc->available = true;
+ }
return 0;
}
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (3 preceding siblings ...)
2025-07-30 13:11 ` [PATCH v9 4/8] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-31 14:42 ` Waiman Long
2025-07-30 13:11 ` [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
` (2 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
update_unbound_workqueue_cpumask() updates unbound workqueue settings
when there's a change in isolated CPUs, but it can be used for other
subsystems requiring updates when isolated CPUs change.
Generalise the name to update_exclusion_cpumasks() to prepare for other
functions unrelated to workqueues to be called in that spot.
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3bc4301466f3..6e3f44ffaa21 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1339,7 +1339,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1470,7 +1470,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
list_add(&cs->remote_sibling, &remote_children);
cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
cs->prs_err = 0;
@@ -1511,7 +1511,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_exclusive_cpumask(cs, NULL, NULL);
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
/*
@@ -1580,7 +1580,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
if (xcpus)
cpumask_copy(cs->exclusive_cpus, xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if (adding || deleting)
cpuset_force_rebuild();
@@ -1943,7 +1943,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive_flag(cs, new_prs);
@@ -2968,7 +2968,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
else if (isolcpus_updated)
isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
/* Force update if switching back to member & update effective_xcpus */
update_cpumasks_hier(cs, &tmpmask, !new_prs);
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (4 preceding siblings ...)
2025-07-30 13:11 ` [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-31 15:09 ` Waiman Long
2025-07-30 13:11 ` [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
7 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolcpus and nohz_full in such a way that
leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
nor nohz full). This can be a problem for other subsystems (e.g. the
timer wheel migration).
Prevent this configuration by invalidating the last setting in case the
union of isolcpus and nohz_full covers all CPUs.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/sched/isolation.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 93b038d48900..0019d941de68 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -165,6 +165,18 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
}
}
+ /* Check in combination with the previously set cpumask */
+ type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
+ first_cpu = cpumask_first_and_and(cpu_present_mask,
+ housekeeping_staging,
+ housekeeping.cpumasks[type]);
+ if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
+ pr_warn("Housekeeping: must include one present CPU neither "
+ "in nohz_full= nor in isolcpus=, ignoring setting %s\n",
+ str);
+ goto free_housekeeping_staging;
+ }
+
iter_flags = flags & ~housekeeping.flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (5 preceding siblings ...)
2025-07-30 13:11 ` [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-31 15:39 ` Waiman Long
2025-07-30 13:11 ` [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
7 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolated cpus via cpuset and nohz_full in
such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
domain isolated nor nohz full). This can be a problem for other
subsystems (e.g. the timer wheel migration).
Prevent this configuration by blocking any assignment that would cause
the union of domain isolated CPUs and nohz_full to cover all CPUs.
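For instance (hypothetical cgroup v2 path), turning a partition isolated
when its CPUs plus nohz_full would cover all remaining housekeeping CPUs
now invalidates the partition with the new PERR_HKEEPING reason, along
the lines of:

  # echo isolated > /sys/fs/cgroup/part1/cpuset.cpus.partition
  # cat /sys/fs/cgroup/part1/cpuset.cpus.partition
  isolated invalid (...)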
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 56 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 6e3f44ffaa21..a946d85ce954 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1275,6 +1275,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
}
+/*
+ * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
+ * @prs: new or old partition_root_state
+ * @parent: parent cpuset
+ * Return: true if isolated_cpus needs modification, false otherwise
+ */
+static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
+{
+ if (!parent)
+ parent = &top_cpuset;
+ return prs != parent->partition_root_state;
+}
+
/*
* partition_xcpus_add - Add new exclusive CPUs to partition
* @new_prs: new partition_root_state
@@ -1339,6 +1352,35 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
+/*
+ * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
+ * @new_cpus: cpu mask for cpus that are going to be isolated
+ * Return: true if there is conflict, false otherwise
+ *
+ * If nohz_full is enabled and we have isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ */
+static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
+{
+ cpumask_var_t full_hk_cpus;
+ int res = false;
+
+ if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
+ return false;
+
+ if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
+ return true;
+
+ cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
+ if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
+ res = true;
+
+ free_cpumask_var(full_hk_cpus);
+ return res;
+}
+
static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1464,6 +1506,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
return PERR_INVCPUS;
+ if (isolated_cpus_should_update(new_prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->new_cpus))
+ return PERR_HKEEPING;
spin_lock_irq(&callback_lock);
isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
@@ -1563,6 +1608,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
cs->prs_err = PERR_NOCPUS;
+ else if (isolated_cpus_should_update(prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->addmask))
+ cs->prs_err = PERR_HKEEPING;
if (cs->prs_err)
goto invalidate;
}
@@ -1914,6 +1962,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
return err;
}
+ if (deleting && isolated_cpus_should_update(new_prs, parent) &&
+ isolcpus_nohz_conflict(tmp->delmask)) {
+ cs->prs_err = PERR_HKEEPING;
+ return PERR_HKEEPING;
+ }
+
/*
* Change the parent's effective_cpus & effective_xcpus (top cpuset
* only).
@@ -2934,6 +2988,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Need to update isolated_cpus.
*/
isolcpus_updated = true;
+ if (isolcpus_nohz_conflict(cs->effective_xcpus))
+ err = PERR_HKEEPING;
} else {
/*
* Switching back to member is always allowed even if it
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (6 preceding siblings ...)
2025-07-30 13:11 ` [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-07-30 13:11 ` Gabriele Monaco
2025-07-31 18:25 ` Waiman Long
7 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-07-30 13:11 UTC (permalink / raw)
To: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
Except if it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)
CPUs are added to the hierarchy during late boot, excluding isolated
ones; the hierarchy is also adapted when the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers itself; skipping only the pulling of remote timers
would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with the lowest count in a timer migration hierarchy (here 1
and 65) appears as always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/timer.h | 9 +++
kernel/cgroup/cpuset.c | 3 +
kernel/time/timer_migration.c | 100 +++++++++++++++++++++++++++++++++-
3 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 0414d9e6b4fc..62e1cea71125 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
#define timers_dead_cpu NULL
#endif
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ return 0;
+}
+#endif
+
#endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a946d85ce954..ff5b66abd047 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1392,6 +1392,9 @@ static void update_exclusion_cpumasks(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+
+ ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 36e7f784ec60..5e66147fce11 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
#include <linux/spinlock.h>
#include <linux/timerqueue.h>
#include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>
#include "timer_migration.h"
#include "tick-internal.h"
@@ -436,6 +437,20 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
return !(tmc->tmgroup && tmc->available);
}
+/*
+ * Returns true if @cpu should be excluded from the hierarchy as isolated.
+ * Domain isolated CPUs don't participate in timer migration; nohz_full
+ * CPUs are still part of the hierarchy but are always considered idle.
+ * This check is necessary, for instance, to prevent an offline isolated
+ * CPU from being incorrectly marked as available once it gets back online.
+ */
+static inline bool tmigr_is_isolated(int cpu)
+{
+ return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
+ cpuset_cpu_is_isolated(cpu)) &&
+ housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+}
+
/*
* Returns true, when @childmask corresponds to the group migrator or when the
* group is not active - so no migrator is set.
@@ -1454,6 +1469,8 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
cpumask_clear_cpu(cpu, tmigr_available_cpumask);
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (!tmc->available)
+ return 0;
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1473,7 +1490,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
return 0;
}
-static int tmigr_set_cpu_available(unsigned int cpu)
+static inline int _tmigr_set_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
@@ -1483,6 +1500,8 @@ static int tmigr_set_cpu_available(unsigned int cpu)
cpumask_set_cpu(cpu, tmigr_available_cpumask);
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (tmc->available)
+ return 0;
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
@@ -1492,14 +1511,89 @@ static int tmigr_set_cpu_available(unsigned int cpu)
return 0;
}
+static int tmigr_set_cpu_available(unsigned int cpu)
+{
+ if (tmigr_is_isolated(cpu))
+ return 0;
+ return _tmigr_set_cpu_available(cpu);
+}
+
+static bool tmigr_should_isolate_cpu(int cpu, void *ignored)
+{
+ /*
+ * The tick CPU can be marked as isolated by the cpuset code, however
+ * we cannot mark it as unavailable to avoid having no global migrator
+ * for the nohz_full CPUs.
+ */
+ return tick_nohz_cpu_hotpluggable(cpu);
+}
+
+static void tmigr_cpu_isolate(void *ignored)
+{
+ tmigr_clear_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate(void *ignored)
+{
+ tmigr_set_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate_force(void *ignored)
+{
+ /*
+ * Required at boot to restore the tick CPU if nohz_full is available.
+ * Hotplug handlers don't check for tick CPUs during runtime.
+ */
+ _tmigr_set_cpu_available(smp_processor_id());
+}
+
+int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ cpumask_var_t cpumask;
+
+ lockdep_assert_cpus_held();
+
+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+ return -ENOMEM;
+
+ cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
+ cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+ on_each_cpu_cond_mask(tmigr_should_isolate_cpu, tmigr_cpu_isolate, NULL,
+ 1, cpumask);
+
+ cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
+ cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
+ on_each_cpu_mask(cpumask, tmigr_cpu_unisolate, NULL, 1);
+
+ free_cpumask_var(cpumask);
+ return 0;
+}
+
/*
* NOHZ can only be enabled after clocksource_done_booting(). Don't
* bother trashing the cache in the tree before.
*/
static int __init tmigr_late_init(void)
{
- return cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_set_cpu_available, tmigr_clear_cpu_available);
+ int cpu, ret;
+
+ ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
+ tmigr_set_cpu_available, tmigr_clear_cpu_available);
+ if (ret)
+ return ret;
+ /*
+ * The tick CPU may not be marked as available by the above call; this
+ * can occur only at boot, as hotplug handlers are not called on the
+ * tick CPU. Force it enabled here.
+ */
+ for_each_possible_cpu(cpu) {
+ if (!tick_nohz_cpu_hotpluggable(cpu)) {
+ ret = smp_call_function_single(
+ cpu, tmigr_cpu_unisolate_force, NULL, 1);
+ break;
+ }
+ }
+ return ret;
}
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
--
2.50.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
2025-07-30 13:11 ` [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-07-31 14:42 ` Waiman Long
0 siblings, 0 replies; 18+ messages in thread
From: Waiman Long @ 2025-07-31 14:42 UTC (permalink / raw)
To: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Frederic Weisbecker, Thomas Gleixner
On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> update_unbound_workqueue_cpumask() updates unbound workqueue settings
> when there's a change in isolated CPUs, but it can be used for other
> subsystems requiring updates when isolated CPUs change.
>
> Generalise the name to update_exclusion_cpumasks() to prepare for other
> functions unrelated to workqueues to be called in that spot.
>
> Acked-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> kernel/cgroup/cpuset.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 3bc4301466f3..6e3f44ffaa21 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1339,7 +1339,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> return isolcpus_updated;
> }
>
> -static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
> +static void update_exclusion_cpumasks(bool isolcpus_updated)
> {
> int ret;
>
> @@ -1470,7 +1470,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
> list_add(&cs->remote_sibling, &remote_children);
> cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
> spin_unlock_irq(&callback_lock);
> - update_unbound_workqueue_cpumask(isolcpus_updated);
> + update_exclusion_cpumasks(isolcpus_updated);
> cpuset_force_rebuild();
> cs->prs_err = 0;
>
> @@ -1511,7 +1511,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
> compute_effective_exclusive_cpumask(cs, NULL, NULL);
> reset_partition_data(cs);
> spin_unlock_irq(&callback_lock);
> - update_unbound_workqueue_cpumask(isolcpus_updated);
> + update_exclusion_cpumasks(isolcpus_updated);
> cpuset_force_rebuild();
>
> /*
> @@ -1580,7 +1580,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
> if (xcpus)
> cpumask_copy(cs->exclusive_cpus, xcpus);
> spin_unlock_irq(&callback_lock);
> - update_unbound_workqueue_cpumask(isolcpus_updated);
> + update_exclusion_cpumasks(isolcpus_updated);
> if (adding || deleting)
> cpuset_force_rebuild();
>
> @@ -1943,7 +1943,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
> WARN_ON_ONCE(parent->nr_subparts < 0);
> }
> spin_unlock_irq(&callback_lock);
> - update_unbound_workqueue_cpumask(isolcpus_updated);
> + update_exclusion_cpumasks(isolcpus_updated);
>
> if ((old_prs != new_prs) && (cmd == partcmd_update))
> update_partition_exclusive_flag(cs, new_prs);
> @@ -2968,7 +2968,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> else if (isolcpus_updated)
> isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
> spin_unlock_irq(&callback_lock);
> - update_unbound_workqueue_cpumask(isolcpus_updated);
> + update_exclusion_cpumasks(isolcpus_updated);
>
> /* Force update if switching back to member & update effective_xcpus */
> update_cpumasks_hier(cs, &tmpmask, !new_prs);
Acked-by: Waiman Long <longman@redhat.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-07-30 13:11 ` [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-07-31 15:09 ` Waiman Long
2025-08-01 14:46 ` Gabriele Monaco
0 siblings, 1 reply; 18+ messages in thread
From: Waiman Long @ 2025-07-31 15:09 UTC (permalink / raw)
To: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Frederic Weisbecker, Thomas Gleixner
On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> Currently the user can set up isolcpus and nohz_full in such a way that
> leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
> nor nohz full). This can be a problem for other subsystems (e.g. the
> timer wheel migration).
>
> Prevent this configuration by invalidating the last setting in case the
> union of isolcpus and nohz_full covers all CPUs.
>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> kernel/sched/isolation.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 93b038d48900..0019d941de68 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -165,6 +165,18 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> }
> }
>
> + /* Check in combination with the previously set cpumask */
> + type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
> + first_cpu = cpumask_first_and_and(cpu_present_mask,
> + housekeeping_staging,
> + housekeeping.cpumasks[type]);
> + if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
> + pr_warn("Housekeeping: must include one present CPU neither "
> + "in nohz_full= nor in isolcpus=, ignoring setting %s\n",
> + str);
> + goto free_housekeeping_staging;
> + }
> +
> iter_flags = flags & ~housekeeping.flags;
>
> for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
I do have a question about this check. Currently isolcpus=domain is bit
0, managed_irq is bit 1 and nohz_full is bit 2. Suppose managed_irq comes
first, followed by nohz_full and then isolcpus=domain. By the time
isolcpus=domain is being set, you are comparing its cpumask with that of
managed_irq, not nohz_full.
Perhaps you can reuse the non_housekeeping_mask for doing the check, e.g.
	cpumask_and(non_housekeeping_mask, cpu_present_mask,
		    housekeeping_staging);
	iter_flags = housekeeping.flags & ~flags;
	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
		cpumask_and(non_housekeeping_mask, non_housekeeping_mask,
			    housekeeping.cpumasks[type]);
	if (cpumask_empty(non_housekeeping_mask)) {
		pr_warn(...
Regards,
Longman
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-07-30 13:11 ` [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-07-31 15:39 ` Waiman Long
2025-08-01 16:03 ` Gabriele Monaco
0 siblings, 1 reply; 18+ messages in thread
From: Waiman Long @ 2025-07-31 15:39 UTC (permalink / raw)
To: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Frederic Weisbecker, Thomas Gleixner
On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> Currently the user can set up isolated cpus via cpuset and nohz_full in
> such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
> domain isolated nor nohz full). This can be a problem for other
> subsystems (e.g. the timer wheel migration).
>
> Prevent this configuration by blocking any assignment that would cause
> the union of domain isolated CPUs and nohz_full to cover all CPUs.
>
> Acked-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> kernel/cgroup/cpuset.c | 56 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 56 insertions(+)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 6e3f44ffaa21..a946d85ce954 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1275,6 +1275,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
> cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
> }
>
> +/*
> + * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
> + * @prs: new or old partition_root_state
> + * @parent: parent cpuset
> + * Return: true if isolated_cpus needs modification, false otherwise
> + */
> +static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
> +{
> + if (!parent)
> + parent = &top_cpuset;
> + return prs != parent->partition_root_state;
> +}
> +
> /*
> * partition_xcpus_add - Add new exclusive CPUs to partition
> * @new_prs: new partition_root_state
> @@ -1339,6 +1352,35 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> return isolcpus_updated;
> }
>
> +/*
> + * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
> + * @new_cpus: cpu mask for cpus that are going to be isolated
> + * Return: true if there is conflict, false otherwise
> + *
> + * If nohz_full is enabled and we have isolated CPUs, their combination must
> + * still leave housekeeping CPUs.
> + */
> +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> +{
> + cpumask_var_t full_hk_cpus;
> + int res = false;
> +
> + if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
> + return false;
> +
> + if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
> + return true;
> +
> + cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
> + housekeeping_cpumask(HK_TYPE_DOMAIN));
> + cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
> + if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> + res = true;
> +
> + free_cpumask_var(full_hk_cpus);
> + return res;
> +}
First of all, isolated_cpus currently includes those CPUs excluded by the
boot time isolcpus=domain setting, but it also includes new isolated CPUs
created and used by cpuset isolated partitions. Your current
isolcpus_nohz_conflict() does not check isolated_cpus, which I think is
incomplete.
Cheers,
Longman
> +
> static void update_exclusion_cpumasks(bool isolcpus_updated)
> {
> int ret;
> @@ -1464,6 +1506,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
> if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
> return PERR_INVCPUS;
> + if (isolated_cpus_should_update(new_prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->new_cpus))
> + return PERR_HKEEPING;
>
> spin_lock_irq(&callback_lock);
> isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
> @@ -1563,6 +1608,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
> else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
> cs->prs_err = PERR_NOCPUS;
> + else if (isolated_cpus_should_update(prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->addmask))
> + cs->prs_err = PERR_HKEEPING;
> if (cs->prs_err)
> goto invalidate;
> }
> @@ -1914,6 +1962,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
> return err;
> }
>
> + if (deleting && isolated_cpus_should_update(new_prs, parent) &&
> + isolcpus_nohz_conflict(tmp->delmask)) {
> + cs->prs_err = PERR_HKEEPING;
> + return PERR_HKEEPING;
> + }
> +
> /*
> * Change the parent's effective_cpus & effective_xcpus (top cpuset
> * only).
> @@ -2934,6 +2988,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> * Need to update isolated_cpus.
> */
> isolcpus_updated = true;
> + if (isolcpus_nohz_conflict(cs->effective_xcpus))
> + err = PERR_HKEEPING;
> } else {
> /*
> * Switching back to member is always allowed even if it
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration
2025-07-30 13:11 ` [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-07-31 18:25 ` Waiman Long
2025-08-01 13:07 ` Frederic Weisbecker
0 siblings, 1 reply; 18+ messages in thread
From: Waiman Long @ 2025-07-31 18:25 UTC (permalink / raw)
To: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Frederic Weisbecker, Thomas Gleixner
On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> The timer migration mechanism allows active CPUs to pull timers from
> idle ones to improve the overall idle time. This is however undesired
> when CPU intensive workloads run on isolated cores, as the algorithm
> would move the timers from housekeeping to isolated cores, negatively
> affecting the isolation.
>
> Exclude isolated cores from the timer migration algorithm by extending
> the concept of unavailable cores, currently used for offline ones, to
> isolated ones:
> * A core is unavailable if isolated or offline;
> * A core is available if non-isolated and online;
>
> A core is considered unavailable as isolated if it belongs to:
> * the isolcpus (domain) list
> * an isolated cpuset
> Except if it is:
> * in the nohz_full list (already idle for the hierarchy)
For the nohz_full list here, do you mean nohz_full housekeeping or
non-housekeeping list?
> * the nohz timekeeper core (must be available to handle global timers)
>
> CPUs are added to the hierarchy during late boot, excluding isolated
> ones; the hierarchy is also adapted when the cpuset isolation changes.
>
> Due to how the timer migration algorithm works, any CPU that is part of
> the hierarchy can have its global timers pulled by remote CPUs and has
> to pull remote timers itself; skipping only the pulling of remote timers
> would break the logic.
> For this reason, prevent isolated CPUs from pulling remote global
> timers, but also the other way around: any global timer started on an
> isolated CPU will run there. This does not break the concept of
> isolation (global timers don't come from outside the CPU) and, if
> considered inappropriate, can usually be mitigated with other isolation
> techniques (e.g. IRQ pinning).
>
> This effect was noticed on a 128-core machine running oslat on the
> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
> and the CPU with the lowest count in a timer migration hierarchy (here 1
> and 65) appears as always active and continuously pulls global timers
> from the housekeeping CPUs. This ends up moving driver work (e.g.
> delayed work) to isolated CPUs and causes latency spikes:
>
> before the change:
>
> # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> ...
> Maximum: 1203 10 3 4 ... 5 (us)
>
> after the change:
>
> # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> ...
> Maximum: 10 4 3 4 3 ... 5 (us)
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/timer.h | 9 +++
> kernel/cgroup/cpuset.c | 3 +
> kernel/time/timer_migration.c | 100 +++++++++++++++++++++++++++++++++-
> 3 files changed, 109 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/timer.h b/include/linux/timer.h
> index 0414d9e6b4fc..62e1cea71125 100644
> --- a/include/linux/timer.h
> +++ b/include/linux/timer.h
> @@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
> #define timers_dead_cpu NULL
> #endif
>
> +#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
> +extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
> +#else
> +static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + return 0;
> +}
> +#endif
> +
> #endif
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index a946d85ce954..ff5b66abd047 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1392,6 +1392,9 @@ static void update_exclusion_cpumasks(bool isolcpus_updated)
>
> ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
> WARN_ON_ONCE(ret < 0);
> +
> + ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
> + WARN_ON_ONCE(ret < 0);
> }
>
> /**
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index 36e7f784ec60..5e66147fce11 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -10,6 +10,7 @@
> #include <linux/spinlock.h>
> #include <linux/timerqueue.h>
> #include <trace/events/ipi.h>
> +#include <linux/sched/isolation.h>
>
> #include "timer_migration.h"
> #include "tick-internal.h"
> @@ -436,6 +437,20 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
> return !(tmc->tmgroup && tmc->available);
> }
>
> +/*
> + * Returns true if @cpu should be excluded from the hierarchy as isolated.
> + * Domain isolated CPUs don't participate in timer migration; nohz_full
> + * CPUs are still part of the hierarchy but are always considered idle.
> + * This check is necessary, for instance, to prevent an offline isolated
> + * CPU from being incorrectly marked as available once it gets back online.
> + */
> +static inline bool tmigr_is_isolated(int cpu)
> +{
> + return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
> + cpuset_cpu_is_isolated(cpu)) &&
> + housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
> +}
Does that mean a CPU in the nohz_full non-housekeeping list is always
considered not isolated WRT timer migration and hence will be made
available for timer migration purpose?
Cheers,
Longman
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration
2025-07-31 18:25 ` Waiman Long
@ 2025-08-01 13:07 ` Frederic Weisbecker
2025-08-01 19:15 ` Waiman Long
0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2025-08-01 13:07 UTC (permalink / raw)
To: Waiman Long
Cc: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Thomas Gleixner
Le Thu, Jul 31, 2025 at 02:25:30PM -0400, Waiman Long a écrit :
> On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> > The timer migration mechanism allows active CPUs to pull timers from
> > idle ones to improve the overall idle time. This is however undesired
> > when CPU intensive workloads run on isolated cores, as the algorithm
> > would move the timers from housekeeping to isolated cores, negatively
> > affecting the isolation.
> >
> > Exclude isolated cores from the timer migration algorithm by extending
> > the concept of unavailable cores, currently used for offline ones, to
> > isolated ones:
> > * A core is unavailable if isolated or offline;
> > * A core is available if non-isolated and online;
> >
> > A core is considered unavailable as isolated if it belongs to:
> > * the isolcpus (domain) list
> > * an isolated cpuset
> > Except if it is:
> > * in the nohz_full list (already idle for the hierarchy)
> For the nohz_full list here, do you mean nohz_full housekeeping or
> non-housekeeping list?
nohz_full.
> > @@ -436,6 +437,20 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
> >  	return !(tmc->tmgroup && tmc->available);
> >  }
> >  
> > +/*
> > + * Returns true if @cpu should be excluded from the hierarchy as isolated.
> > + * Domain isolated CPUs don't participate in timer migration, nohz_full
> > + * CPUs are still part of the hierarchy but are always considered idle.
> > + * This check is necessary, for instance, to prevent an offline isolated CPU
> > + * from being incorrectly marked as available once it comes back online.
> > + */
> > +static inline bool tmigr_is_isolated(int cpu)
> > +{
> > +	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
> > +		cpuset_cpu_is_isolated(cpu)) &&
> > +	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
> > +}
>
> Does that mean a CPU in the nohz_full non-housekeeping list is always
> considered not isolated WRT timer migration and hence will be made available
> for timer migration purpose?
Exactly, because nohz_full CPUs become idle (from a tick and timer migration
POV) when they stop their tick. And since they are idle, their global timers
are handled by the timekeeping CPU.

This is much better than making the CPU unavailable, as this patchset does
for domain isolated CPUs, because unavailable CPUs must still handle their
own global timers. Unfortunately we can't just fake them as idle too, like
we do with nohz_full CPUs, because that would mean walking the whole timer
migration tree every time a timer is queued or modified. That would be too
costly.
Indeed that should be commented somewhere in this function.
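Something along these lines, for instance (only a sketch, wording to be
refined):

	/*
	 * Returns true if @cpu should be excluded from the hierarchy as
	 * isolated. Domain isolated CPUs don't participate in timer
	 * migration. nohz_full CPUs are still part of the hierarchy but
	 * are always considered idle: once they stop their tick, their
	 * global timers are handled by the timekeeping CPU. Unavailable
	 * (isolated) CPUs instead must still handle their own global
	 * timers, because faking them as idle would require walking the
	 * whole timer migration tree whenever a timer is queued or
	 * modified, which would be too costly.
	 * The check is also necessary to prevent an offline isolated CPU
	 * from being incorrectly marked as available once it comes back
	 * online.
	 */
	static inline bool tmigr_is_isolated(int cpu)
	{
		return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
			cpuset_cpu_is_isolated(cpu)) &&
		       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
	}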
Thanks.
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-07-31 15:09 ` Waiman Long
@ 2025-08-01 14:46 ` Gabriele Monaco
2025-08-01 18:04 ` Waiman Long
0 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-08-01 14:46 UTC (permalink / raw)
To: Waiman Long, Frederic Weisbecker
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner
On Thu, 2025-07-31 at 11:09 -0400, Waiman Long wrote:
>
> On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> > Currently the user can set up isolcpus and nohz_full in such a way that
> > leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
> > nor nohz full). This can be a problem for other subsystems (e.g. the
> > timer wheel migration).
> >
> > Prevent this configuration by invalidating the last setting in case the
> > union of isolcpus and nohz_full covers all CPUs.
> >
> > Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
> > Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> > ---
> > kernel/sched/isolation.c | 12 ++++++++++++
> > 1 file changed, 12 insertions(+)
> >
> > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> > index 93b038d48900..0019d941de68 100644
> > --- a/kernel/sched/isolation.c
> > +++ b/kernel/sched/isolation.c
> > @@ -165,6 +165,18 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> >  		}
> >  	}
> >  
> > +	/* Check in combination with the previously set cpumask */
> > +	type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
> > +	first_cpu = cpumask_first_and_and(cpu_present_mask,
> > +					  housekeeping_staging,
> > +					  housekeeping.cpumasks[type]);
> > +	if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
> > +		pr_warn("Housekeeping: must include one present CPU neither "
> > +			"in nohz_full= nor in isolcpus=, ignoring setting %s\n",
> > +			str);
> > +		goto free_housekeeping_staging;
> > +	}
> > +
> >  	iter_flags = flags & ~housekeeping.flags;
> >  
> >  	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
>
> I do have a question about this check. Currently isolcpus=domain is
> bit 0, managed_irq is bit 1 and nohz_full is bit 2. Suppose managed_irq
> comes first, followed by nohz_full and then isolcpus=domain. By the
> time isolcpus=domain is being set, you are comparing its cpumask with
> that of managed_irq, not nohz_full.
>
> Perhaps you can reuse the non_housekeeping_mask for doing the check,
> e.g.
>
> 	cpumask_and(non_housekeeping_mask, cpu_present_mask,
> 		    housekeeping_staging);
> 	iter_flags = housekeeping.flags & ~flags;
> 	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
> 		cpumask_and(non_housekeeping_mask, non_housekeeping_mask,
> 			    housekeeping.cpumasks[type]);
> 	if (cpumask_empty(non_housekeeping_mask)) {
> 		pr_warn(...
Mmh, right, I didn't think passing different masks in isolcpus was possible.
You mean something like this, right?
isolcpus=managed_irq,0-4 nohz_full=8-15 isolcpus=domain,0-7
Which doesn't block the nohz_full setting because the first mask
(managed_irq) still leaves free CPUs.
Right now we block assignments like
isolcpus=managed_irq,0-7 nohz_full=8-15
and
isolcpus=managed_irq,0-7 -a isolcpus=domain,8-15
although this series doesn't really have problems with it.
Shouldn't we just ignore these cases and only count domain + nohz_full?
The solution you propose checks all housekeeping types, so it would also
prevent the (safe?) assignments above, right?
We could just check against the previously set domain/nohz_full masks
and leave the other flags alone, couldn't we?
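For instance something like this untested sketch, building on your
snippet but restricted to the two types we care about (hk_check is just
a name I made up here):

	/*
	 * Untested sketch: only domain and nohz_full housekeeping take
	 * part in the check, managed_irq (and any other type) is left
	 * alone.
	 */
	unsigned long hk_check = BIT(HK_TYPE_DOMAIN) |
				 BIT(HK_TYPE_KERNEL_NOISE);

	if (flags & hk_check) {
		cpumask_and(non_housekeeping_mask, cpu_present_mask,
			    housekeeping_staging);
		iter_flags = housekeeping.flags & ~flags & hk_check;
		for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
			cpumask_and(non_housekeeping_mask,
				    non_housekeeping_mask,
				    housekeeping.cpumasks[type]);
		/* No CPU housekeeping for both types left: bail out */
		if (cpumask_empty(non_housekeeping_mask)) {
			pr_warn("Housekeeping: must include one present CPU neither "
				"in nohz_full= nor in isolcpus=, ignoring setting %s\n",
				str);
			goto free_housekeeping_staging;
		}
	}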
Thanks,
Gabriele
* Re: [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-07-31 15:39 ` Waiman Long
@ 2025-08-01 16:03 ` Gabriele Monaco
0 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-08-01 16:03 UTC (permalink / raw)
To: Waiman Long
Cc: linux-kernel, Anna-Maria Behnsen, Frederic Weisbecker,
Thomas Gleixner
On Thu, 2025-07-31 at 11:39 -0400, Waiman Long wrote:
>
> On 7/30/25 9:11 AM, Gabriele Monaco wrote:
> > +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> > +{
> > +	cpumask_var_t full_hk_cpus;
> > +	int res = false;
> > +
> > +	if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
> > +		return false;
> > +
> > +	if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
> > +		return true;
> > +
> > +	cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
> > +		    housekeeping_cpumask(HK_TYPE_DOMAIN));
> > +	cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
> > +	if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> > +		res = true;
> > +
> > +	free_cpumask_var(full_hk_cpus);
> > +	return res;
> > +}
>
> First of all, isolated_cpus currently includes the CPUs excluded by the
> boot-time isolcpus=domain setting, but it also includes new isolated
> CPUs used by cpuset isolated partitions. Your current
> isolcpus_nohz_conflict() does not check isolated_cpus, which I think
> makes it incomplete.
Right, good point! Thanks for the review.
Somehow it was working fine with cpuset+isolcpus, but doesn't work if I
have multiple cpusets.
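Something like this should cover it (untested, folding the isolated_cpus
mask you mentioned into the check):

	cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
		    housekeeping_cpumask(HK_TYPE_DOMAIN));
	/* Untested: also drop CPUs already isolated by cpuset partitions */
	cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
	cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
	if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
		res = true;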
Thanks,
Gabriele
* Re: [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-08-01 14:46 ` Gabriele Monaco
@ 2025-08-01 18:04 ` Waiman Long
0 siblings, 0 replies; 18+ messages in thread
From: Waiman Long @ 2025-08-01 18:04 UTC (permalink / raw)
To: Gabriele Monaco, Waiman Long, Frederic Weisbecker
Cc: linux-kernel, Anna-Maria Behnsen, Thomas Gleixner
On 8/1/25 10:46 AM, Gabriele Monaco wrote:
>
> On Thu, 2025-07-31 at 11:09 -0400, Waiman Long wrote:
>> On 7/30/25 9:11 AM, Gabriele Monaco wrote:
>>> Currently the user can set up isolcpus and nohz_full in such a way that
>>> leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
>>> nor nohz full). This can be a problem for other subsystems (e.g. the
>>> timer wheel migration).
>>>
>>> Prevent this configuration by invalidating the last setting in case the
>>> union of isolcpus and nohz_full covers all CPUs.
>>>
>>> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
>>> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
>>> ---
>>> kernel/sched/isolation.c | 12 ++++++++++++
>>> 1 file changed, 12 insertions(+)
>>>
>>> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
>>> index 93b038d48900..0019d941de68 100644
>>> --- a/kernel/sched/isolation.c
>>> +++ b/kernel/sched/isolation.c
>>> @@ -165,6 +165,18 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
>>>  		}
>>>  	}
>>>  
>>> +	/* Check in combination with the previously set cpumask */
>>> +	type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
>>> +	first_cpu = cpumask_first_and_and(cpu_present_mask,
>>> +					  housekeeping_staging,
>>> +					  housekeeping.cpumasks[type]);
>>> +	if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
>>> +		pr_warn("Housekeeping: must include one present CPU neither "
>>> +			"in nohz_full= nor in isolcpus=, ignoring setting %s\n",
>>> +			str);
>>> +		goto free_housekeeping_staging;
>>> +	}
>>> +
>>>  	iter_flags = flags & ~housekeeping.flags;
>>>  
>>>  	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
>> I do have a question about this check. Currently isolcpus=domain is
>> bit 0, managed_irq is bit 1 and nohz_full is bit 2. Suppose managed_irq
>> comes first, followed by nohz_full and then isolcpus=domain. By the
>> time isolcpus=domain is being set, you are comparing its cpumask with
>> that of managed_irq, not nohz_full.
>>
>> Perhaps you can reuse the non_housekeeping_mask for doing the check,
>> e.g.
>>
>> 	cpumask_and(non_housekeeping_mask, cpu_present_mask,
>> 		    housekeeping_staging);
>> 	iter_flags = housekeeping.flags & ~flags;
>> 	for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
>> 		cpumask_and(non_housekeeping_mask, non_housekeeping_mask,
>> 			    housekeeping.cpumasks[type]);
>> 	if (cpumask_empty(non_housekeeping_mask)) {
>> 		pr_warn(...
> Mmh, right, I didn't think passing different masks in isolcpus was possible.
>
> You mean something like this, right?
>
> isolcpus=managed_irq,0-4 nohz_full=8-15 isolcpus=domain,0-7
>
> Which doesn't block the nohz_full setting because the first mask
> (managed_irq) still leaves free CPUs.
Yes, that is what I am talking about.
>
> Right now we block assignments like
>
> isolcpus=managed_irq,0-7 nohz_full=8-15
>
> and
>
> isolcpus=managed_irq,0-7 -a isolcpus=domain,8-15
>
> although this series doesn't really have problems with it.
> Shouldn't we just ignore these cases and only count domain + nohz_full?
You could, but you have to explicitly exclude managed_irq in your logic.
>
> The solution you propose checks all housekeeping types, so it would also
> prevent the (safe?) assignments above, right?
>
> We could just check against the previously set domain/nohz_full masks
> and leave the other flags alone, couldn't we?
You will have to modify your logic and make it explicit that managed_irq
is deliberately ignored.
Cheers,
Longman
* Re: [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration
2025-08-01 13:07 ` Frederic Weisbecker
@ 2025-08-01 19:15 ` Waiman Long
0 siblings, 0 replies; 18+ messages in thread
From: Waiman Long @ 2025-08-01 19:15 UTC (permalink / raw)
To: Frederic Weisbecker, Waiman Long
Cc: Gabriele Monaco, linux-kernel, Anna-Maria Behnsen,
Thomas Gleixner
On 8/1/25 9:07 AM, Frederic Weisbecker wrote:
> On Thu, Jul 31, 2025 at 02:25:30PM -0400, Waiman Long wrote:
>> On 7/30/25 9:11 AM, Gabriele Monaco wrote:
>>> The timer migration mechanism allows active CPUs to pull timers from
>>> idle ones to improve the overall idle time. This is however undesired
>>> when CPU intensive workloads run on isolated cores, as the algorithm
>>> would move the timers from housekeeping to isolated cores, negatively
>>> affecting the isolation.
>>>
>>> Exclude isolated cores from the timer migration algorithm, extend the
>>> concept of unavailable cores, currently used for offline ones, to
>>> isolated ones:
>>> * A core is unavailable if isolated or offline;
>>> * A core is available if non isolated and online;
>>>
>>> A core is considered unavailable as isolated if it belongs to:
>>> * the isolcpus (domain) list
>>> * an isolated cpuset
>>> Except if it is:
>>> * in the nohz_full list (already idle for the hierarchy)
>> For the nohz_full list here, do you mean nohz_full housekeeping or
>> non-housekeeping list?
> nohz_full.
>
>>> @@ -436,6 +437,20 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
>>>  	return !(tmc->tmgroup && tmc->available);
>>>  }
>>>  
>>> +/*
>>> + * Returns true if @cpu should be excluded from the hierarchy as isolated.
>>> + * Domain isolated CPUs don't participate in timer migration, nohz_full
>>> + * CPUs are still part of the hierarchy but are always considered idle.
>>> + * This check is necessary, for instance, to prevent an offline isolated CPU
>>> + * from being incorrectly marked as available once it comes back online.
>>> + */
>>> +static inline bool tmigr_is_isolated(int cpu)
>>> +{
>>> +	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
>>> +		cpuset_cpu_is_isolated(cpu)) &&
>>> +	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
>>> +}
>> Does that mean a CPU in the nohz_full non-housekeeping list is always
>> considered not isolated WRT timer migration and hence will be made available
>> for timer migration purpose?
> Exactly, because nohz_full CPUs become idle (from a tick and timer migration
> POV) when they stop their tick. And since they are idle, their global timers
> are handled by the timekeeping CPU.
>
> This is much better than making the CPU unavailable, as this patchset does
> for domain isolated CPUs, because unavailable CPUs must still handle their
> own global timers. Unfortunately we can't just fake them as idle too, like
> we do with nohz_full CPUs, because that would mean walking the whole timer
> migration tree every time a timer is queued or modified. That would be too
> costly.
>
> Indeed that should be commented somewhere in this function.
Thanks for the clarification. Yes, I agree that we should document this
better in the code, as it is not obvious why we are doing that.
Cheers,
Longman
end of thread, other threads: [~2025-08-01 19:15 UTC | newest]
Thread overview: 18+ messages
2025-07-30 13:11 [PATCH v9 0/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 1/8] timers/migration: Postpone online/offline callbacks registration to late initcall Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 2/8] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 3/8] timers: Add the available mask in timer migration Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 4/8] timers: Use scoped_guard when setting/clearing the tmigr available flag Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 5/8] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
2025-07-31 14:42 ` Waiman Long
2025-07-30 13:11 ` [PATCH v9 6/8] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
2025-07-31 15:09 ` Waiman Long
2025-08-01 14:46 ` Gabriele Monaco
2025-08-01 18:04 ` Waiman Long
2025-07-30 13:11 ` [PATCH v9 7/8] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-07-31 15:39 ` Waiman Long
2025-08-01 16:03 ` Gabriele Monaco
2025-07-30 13:11 ` [PATCH v9 8/8] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-07-31 18:25 ` Waiman Long
2025-08-01 13:07 ` Frederic Weisbecker
2025-08-01 19:15 ` Waiman Long