* [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration
@ 2025-05-08 14:53 Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 1/6] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
` (5 more replies)
0 siblings, 6 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with the lowest number in a timer migration hierarchy (here
1 and 65) appears always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if neither isolated nor offline.
A core is considered isolated, hence unavailable, if:
* it is in the isolcpus list
* it is in the nohz_full list
* it is in an isolated cpuset
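For reference, the isolcpus and cpuset cases above map onto existing
kernel helpers (linux/sched/isolation.h and linux/cpuset.h); a minimal
sketch of such a check, where the wrapper name is made up for
illustration and the real check is the one added in patch 6:
static bool cpu_is_timer_isolated(unsigned int cpu)
{
	/* Sketch: domain isolated at boot or part of an isolated cpuset */
	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	       cpuset_cpu_is_isolated(cpu);
}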
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers as well; skipping only the remote pulling would
break the logic.
For this reason, we prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
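Another option on the driver side (not part of this series) is to queue
the deferred work that arms such timers explicitly on a housekeeping
CPU, so that the timers are started there as well; a minimal sketch,
assuming a hypothetical, already initialised delayed work item:
	/* my_dwork is hypothetical; pick any non domain-isolated CPU */
	queue_delayed_work_on(housekeeping_any_cpu(HK_TYPE_DOMAIN),
			      system_wq, &my_dwork, HZ);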
The first 3 patches are preparatory work: they change the concept of
online/offline to available/unavailable, keep track of the available
CPUs in a separate cpumask and rename a function in cpuset code.
Patches 4 and 5 adapt isolation and cpuset to prevent domain isolated
and nohz_full CPUs from covering all CPUs and leaving no housekeeping
one. Such a configuration would be a problem with the changes introduced
in this series, because no CPU would remain to handle global timers.
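For example (illustration only): on a 16-CPU machine booted with
isolcpus=1-15 nohz_full=0, no housekeeping CPU would be left; patch 4
falls back to the boot CPU in that case, while patch 5 rejects the
equivalent situation when it is created at runtime through an isolated
cpuset partition.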
Patch 6 extends the unavailable status to domain isolated CPUs, which
is the main contribution of the series.
Changes since v4 [1]:
* use on_each_cpu_mask() with changes on isolated CPUs to avoid races
* keep nohz_full CPUs included in the timer migration hierarchy
* prevent domain isolated and nohz_full from covering all CPUs
Changes since v3:
* add parameter to function documentation
* split into multiple straightforward patches
Changes since v2:
* improve comments about handling CPUs isolated at boot
* minor cleanup
Changes since v1 [2]:
* split into smaller patches
* use available mask instead of unavailable
* simplification and cleanup
[1] - https://lore.kernel.org/lkml/20250506091534.42117-7-gmonaco@redhat.com
[2] - https://lore.kernel.org/lkml/20250410065446.57304-2-gmonaco@redhat.com
Gabriele Monaco (6):
timers: Rename tmigr 'online' bit to 'available'
timers: Add the available mask in timer migration
cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to
update_exclusion_cpumasks()
sched/isolation: Force housekeeping if isolcpus and nohz_full don't
leave any
cgroup/cpuset: Fail if isolated and nohz_full don't leave any
housekeeping
timers: Exclude isolated cpus from timer migration
include/linux/tick.h | 2 +
include/linux/timer.h | 9 +++
include/trace/events/timer_migration.h | 4 +-
kernel/cgroup/cpuset.c | 82 +++++++++++++++++++++++---
kernel/sched/isolation.c | 20 +++++++
kernel/time/tick-sched.c | 7 +++
kernel/time/timer_migration.c | 77 ++++++++++++++++++++----
kernel/time/timer_migration.h | 2 +-
8 files changed, 180 insertions(+), 23 deletions(-)
base-commit: d76bb1ebb5587f66b0f8b8099bfbb44722bc08b3
--
2.49.0
* [PATCH v5 1/6] timers: Rename tmigr 'online' bit to 'available'
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 2/6] timers: Add the available mask in timer migration Gabriele Monaco
` (4 subsequent siblings)
5 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration hierarchy excludes offline CPUs via the
tmigr_is_not_available() function, which essentially checks the online
bit of the CPU.
Rename the online bit to available, along with all references in
function names and tracepoints, to generalise the concept of available
CPUs.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/trace/events/timer_migration.h | 4 ++--
kernel/time/timer_migration.c | 22 +++++++++++-----------
kernel/time/timer_migration.h | 2 +-
3 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 47db5eaf2f9a..61171b13c687 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -173,14 +173,14 @@ DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_active,
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_online,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_available,
TP_PROTO(struct tmigr_cpu *tmc),
TP_ARGS(tmc)
);
-DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_offline,
+DEFINE_EVENT(tmigr_cpugroup, tmigr_cpu_unavailable,
TP_PROTO(struct tmigr_cpu *tmc),
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 2f6330831f08..7efd897c7959 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -427,7 +427,7 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
{
- return !(tmc->tmgroup && tmc->online);
+ return !(tmc->tmgroup && tmc->available);
}
/*
@@ -926,7 +926,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* updated the event takes care when hierarchy is completely
* idle. Otherwise the migrator does it as the event is enqueued.
*/
- if (!tmc->online || tmc->remote || tmc->cpuevt.ignore ||
+ if (!tmc->available || tmc->remote || tmc->cpuevt.ignore ||
now < tmc->cpuevt.nextevt.expires) {
raw_spin_unlock_irq(&tmc->lock);
return;
@@ -973,7 +973,7 @@ static void tmigr_handle_remote_cpu(unsigned int cpu, u64 now,
* (See also section "Required event and timerqueue update after a
* remote expiry" in the documentation at the top)
*/
- if (!tmc->online || !tmc->idle) {
+ if (!tmc->available || !tmc->idle) {
timer_unlock_remote_bases(cpu);
goto unlock;
}
@@ -1435,19 +1435,19 @@ static long tmigr_trigger_active(void *unused)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
- WARN_ON_ONCE(!tmc->online || tmc->idle);
+ WARN_ON_ONCE(!tmc->available || tmc->idle);
return 0;
}
-static int tmigr_cpu_offline(unsigned int cpu)
+static int tmigr_cpu_unavailable(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
int migrator;
u64 firstexp;
raw_spin_lock_irq(&tmc->lock);
- tmc->online = false;
+ tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
/*
@@ -1455,7 +1455,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
* offline; Therefore nextevt value is set to KTIME_MAX
*/
firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX);
- trace_tmigr_cpu_offline(tmc);
+ trace_tmigr_cpu_unavailable(tmc);
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
@@ -1466,7 +1466,7 @@ static int tmigr_cpu_offline(unsigned int cpu)
return 0;
}
-static int tmigr_cpu_online(unsigned int cpu)
+static int tmigr_cpu_available(unsigned int cpu)
{
struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
@@ -1475,11 +1475,11 @@ static int tmigr_cpu_online(unsigned int cpu)
return -EINVAL;
raw_spin_lock_irq(&tmc->lock);
- trace_tmigr_cpu_online(tmc);
+ trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
__tmigr_cpu_activate(tmc);
- tmc->online = true;
+ tmc->available = true;
raw_spin_unlock_irq(&tmc->lock);
return 0;
}
@@ -1850,7 +1850,7 @@ static int __init tmigr_init(void)
goto err;
ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_cpu_online, tmigr_cpu_offline);
+ tmigr_cpu_available, tmigr_cpu_unavailable);
if (ret)
goto err;
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index ae19f70f8170..70879cde6fdd 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -97,7 +97,7 @@ struct tmigr_group {
*/
struct tmigr_cpu {
raw_spinlock_t lock;
- bool online;
+ bool available;
bool idle;
bool remote;
struct tmigr_group *tmgroup;
--
2.49.0
* [PATCH v5 2/6] timers: Add the available mask in timer migration
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 1/6] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 3/6] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
` (3 subsequent siblings)
5 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Keep track of the CPUs available for timer migration in a cpumask. This
prepares the ground to generalise the concept of unavailable CPUs.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/time/timer_migration.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 7efd897c7959..25439f961ccf 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -422,6 +422,9 @@ static unsigned int tmigr_crossnode_level __read_mostly;
static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
+/* CPUs available for timer migration */
+static cpumask_var_t tmigr_available_cpumask;
+
#define TMIGR_NONE 0xFF
#define BIT_CNT 8
@@ -1449,6 +1452,7 @@ static int tmigr_cpu_unavailable(unsigned int cpu)
raw_spin_lock_irq(&tmc->lock);
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
+ cpumask_clear_cpu(cpu, tmigr_available_cpumask);
/*
* CPU has to handle the local events on his own, when on the way to
@@ -1459,7 +1463,7 @@ static int tmigr_cpu_unavailable(unsigned int cpu)
raw_spin_unlock_irq(&tmc->lock);
if (firstexp != KTIME_MAX) {
- migrator = cpumask_any_but(cpu_online_mask, cpu);
+ migrator = cpumask_any(tmigr_available_cpumask);
work_on_cpu(migrator, tmigr_trigger_active, NULL);
}
@@ -1480,6 +1484,7 @@ static int tmigr_cpu_available(unsigned int cpu)
if (!tmc->idle)
__tmigr_cpu_activate(tmc);
tmc->available = true;
+ cpumask_set_cpu(cpu, tmigr_available_cpumask);
raw_spin_unlock_irq(&tmc->lock);
return 0;
}
@@ -1801,6 +1806,11 @@ static int __init tmigr_init(void)
if (ncpus == 1)
return 0;
+ if (!zalloc_cpumask_var(&tmigr_available_cpumask, GFP_KERNEL)) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
/*
* Calculate the required hierarchy levels. Unfortunately there is no
* reliable information available, unless all possible CPUs have been
--
2.49.0
* [PATCH v5 3/6] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks()
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 1/6] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 2/6] timers: Add the available mask in timer migration Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
` (2 subsequent siblings)
5 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
update_unbound_workqueue_cpumask() updates unbound workqueue settings
when there's a change in isolated CPUs, but it can also serve other
subsystems requiring updates when isolated CPUs change.
Generalise the name to update_exclusion_cpumasks() to prepare for other
functions unrelated to workqueues to be called in that spot.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 306b60430091..95316d39c282 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1323,7 +1323,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1454,7 +1454,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
list_add(&cs->remote_sibling, &remote_children);
cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
cs->prs_err = 0;
@@ -1495,7 +1495,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_exclusive_cpumask(cs, NULL, NULL);
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
cpuset_force_rebuild();
/*
@@ -1563,7 +1563,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
if (xcpus)
cpumask_copy(cs->exclusive_cpus, xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if (adding || deleting)
cpuset_force_rebuild();
@@ -1906,7 +1906,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive_flag(cs, new_prs);
@@ -2931,7 +2931,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
else if (isolcpus_updated)
isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_exclusion_cpumasks(isolcpus_updated);
/* Force update if switching back to member & update effective_xcpus */
update_cpumasks_hier(cs, &tmpmask, !new_prs);
--
2.49.0
* [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (2 preceding siblings ...)
2025-05-08 14:53 ` [PATCH v5 3/6] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-20 10:17 ` Frederic Weisbecker
2025-05-20 12:02 ` Frederic Weisbecker
2025-05-08 14:53 ` [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 6/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
5 siblings, 2 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolcpus and nohz_full in such a way that
no housekeeping CPU is left (i.e. no CPU that is neither domain isolated
nor nohz full). This can be a problem for other subsystems (e.g. the
timer wheel migration).
Prevent this configuration by setting the boot CPU as housekeeping if
the union of isolcpus and nohz_full covers all CPUs, in a similar
fashion to what already happens if either of them covers all CPUs.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/tick.h | 2 ++
kernel/sched/isolation.c | 20 ++++++++++++++++++++
kernel/time/tick-sched.c | 7 +++++++
3 files changed, 29 insertions(+)
diff --git a/include/linux/tick.h b/include/linux/tick.h
index b8ddc8e631a3..0b32c0bd3512 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -278,6 +278,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
extern void tick_nohz_full_kick_cpu(int cpu);
extern void __tick_nohz_task_switch(void);
extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
+extern void __init tick_nohz_full_clear_cpu(unsigned int cpu);
#else
static inline bool tick_nohz_full_enabled(void) { return false; }
static inline bool tick_nohz_full_cpu(int cpu) { return false; }
@@ -304,6 +305,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
static inline void tick_nohz_full_kick_cpu(int cpu) { }
static inline void __tick_nohz_task_switch(void) { }
static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
+static inline void tick_nohz_full_clear_cpu(unsigned int cpu) { }
#endif
static inline void tick_nohz_task_switch(void)
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 81bc8b329ef1..27b65b401534 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -165,6 +165,26 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
}
}
+ /* Check in combination with the previously set cpumask */
+ type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
+ first_cpu = cpumask_first_and_and(cpu_present_mask,
+ housekeeping_staging,
+ housekeeping.cpumasks[type]);
+ if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
+ pr_warn("Housekeeping: must include one present CPU neither "
+ "in nohz_full= nor in isolcpus=, using boot CPU:%d\n",
+ smp_processor_id());
+ for_each_set_bit(type, &housekeeping.flags, HK_TYPE_MAX)
+ __cpumask_set_cpu(smp_processor_id(),
+ housekeeping.cpumasks[type]);
+ __cpumask_set_cpu(smp_processor_id(), housekeeping_staging);
+ __cpumask_clear_cpu(smp_processor_id(), non_housekeeping_mask);
+ tick_nohz_full_clear_cpu(smp_processor_id());
+
+ if (cpumask_empty(non_housekeeping_mask))
+ goto free_housekeeping_staging;
+ }
+
iter_flags = flags & ~housekeeping.flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index c527b421c865..2969ed13d1f4 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -604,6 +604,13 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask)
tick_nohz_full_running = true;
}
+/* Called if boot-time nohz CPU list changes during initialisation. */
+void __init tick_nohz_full_clear_cpu(unsigned int cpu)
+{
+ if (tick_nohz_full_running)
+ cpumask_clear_cpu(cpu, tick_nohz_full_mask);
+}
+
bool tick_nohz_cpu_hotpluggable(unsigned int cpu)
{
/*
--
2.49.0
* [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (3 preceding siblings ...)
2025-05-08 14:53 ` [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-20 13:39 ` Gabriele Monaco
2025-05-20 14:28 ` Frederic Weisbecker
2025-05-08 14:53 ` [PATCH v5 6/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
5 siblings, 2 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
Currently the user can set up isolated CPUs via cpuset and nohz_full in
such a way that no housekeeping CPU is left (i.e. no CPU that is neither
domain isolated nor nohz full). This can be a problem for other
subsystems (e.g. the timer wheel migration).
Prevent this configuration by rejecting any assignment that would cause
the union of domain isolated CPUs and nohz_full to cover all CPUs.
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
kernel/cgroup/cpuset.c | 67 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 2 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 95316d39c282..2f1df6f5b988 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -80,6 +80,12 @@ static cpumask_var_t subpartitions_cpus;
*/
static cpumask_var_t isolated_cpus;
+/*
+ * Housekeeping CPUs for both HK_TYPE_DOMAIN and HK_TYPE_KERNEL_NOISE
+ */
+static cpumask_var_t full_hk_cpus;
+static bool have_boot_nohz_full;
+
/*
* Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
*/
@@ -1253,10 +1259,26 @@ static void reset_partition_data(struct cpuset *cs)
static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus)
{
WARN_ON_ONCE(old_prs == new_prs);
- if (new_prs == PRS_ISOLATED)
+ if (new_prs == PRS_ISOLATED) {
cpumask_or(isolated_cpus, isolated_cpus, xcpus);
- else
+ cpumask_andnot(full_hk_cpus, full_hk_cpus, xcpus);
+ } else {
cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
+ cpumask_or(full_hk_cpus, full_hk_cpus, xcpus);
+ }
+}
+
+/*
+ * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
+ * @prs: new or old partition_root_state
+ * @parent: parent cpuset
+ * Return: true if isolated_cpus needs modification, false otherwise
+ */
+static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
+{
+ if (!parent)
+ parent = &top_cpuset;
+ return prs != parent->partition_root_state;
}
/*
@@ -1323,6 +1345,25 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
+/*
+ * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
+ * @new_cpus: cpu mask
+ * Return: true if there is conflict, false otherwise
+ *
+ * If nohz_full is enabled and we have isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ */
+static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
+{
+ if (!have_boot_nohz_full)
+ return false;
+
+ if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
+ return true;
+
+ return false;
+}
+
static void update_exclusion_cpumasks(bool isolcpus_updated)
{
int ret;
@@ -1448,6 +1489,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
cpumask_intersects(tmp->new_cpus, subpartitions_cpus) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
return PERR_INVCPUS;
+ if (isolated_cpus_should_update(new_prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->new_cpus))
+ return PERR_HKEEPING;
spin_lock_irq(&callback_lock);
isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
@@ -1546,6 +1590,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
cs->prs_err = PERR_NOCPUS;
+ else if (isolated_cpus_should_update(prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->addmask))
+ cs->prs_err = PERR_HKEEPING;
if (cs->prs_err)
goto invalidate;
}
@@ -1877,6 +1924,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
return err;
}
+ if (deleting && isolated_cpus_should_update(new_prs, parent) &&
+ isolcpus_nohz_conflict(tmp->delmask)) {
+ cs->prs_err = PERR_HKEEPING;
+ return PERR_HKEEPING;
+ }
+
/*
* Change the parent's effective_cpus & effective_xcpus (top cpuset
* only).
@@ -2897,6 +2950,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Need to update isolated_cpus.
*/
isolcpus_updated = true;
+ if (isolcpus_nohz_conflict(cs->effective_xcpus))
+ err = PERR_HKEEPING;
} else {
/*
* Switching back to member is always allowed even if it
@@ -3715,6 +3770,7 @@ int __init cpuset_init(void)
BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL));
BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL));
BUG_ON(!zalloc_cpumask_var(&isolated_cpus, GFP_KERNEL));
+ BUG_ON(!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL));
cpumask_setall(top_cpuset.cpus_allowed);
nodes_setall(top_cpuset.mems_allowed);
@@ -3722,17 +3778,24 @@ int __init cpuset_init(void)
cpumask_setall(top_cpuset.effective_xcpus);
cpumask_setall(top_cpuset.exclusive_cpus);
nodes_setall(top_cpuset.effective_mems);
+ cpumask_copy(full_hk_cpus, cpu_present_mask);
fmeter_init(&top_cpuset.fmeter);
INIT_LIST_HEAD(&remote_children);
BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
+ have_boot_nohz_full = housekeeping_enabled(HK_TYPE_KERNEL_NOISE);
+ if (have_boot_nohz_full)
+ cpumask_and(full_hk_cpus, cpu_possible_mask,
+ housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+
have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
if (have_boot_isolcpus) {
BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
+ cpumask_and(full_hk_cpus, full_hk_cpus, boot_hk_cpus);
}
return 0;
--
2.49.0
* [PATCH v5 6/6] timers: Exclude isolated cpus from timer migration
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migration Gabriele Monaco
` (4 preceding siblings ...)
2025-05-08 14:53 ` [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-05-08 14:53 ` Gabriele Monaco
2025-05-20 14:43 ` Frederic Weisbecker
5 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-08 14:53 UTC (permalink / raw)
To: linux-kernel, Frederic Weisbecker, Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco
The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with the lowest number in a timer migration hierarchy (here
1 and 65) appears always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if neither isolated nor offline.
A core is considered isolated, hence unavailable, if:
* it is in the isolcpus list
* it is in the nohz_full list
* it is in an isolated cpuset
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers as well; skipping only the remote pulling would
break the logic.
For this reason, we prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/timer.h | 9 ++++++++
kernel/cgroup/cpuset.c | 3 +++
kernel/time/timer_migration.c | 43 +++++++++++++++++++++++++++++++++++
3 files changed, 55 insertions(+)
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 10596d7c3a34..a8b683d9ce25 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -190,4 +190,13 @@ int timers_dead_cpu(unsigned int cpu);
#define timers_dead_cpu NULL
#endif
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask)
+{
+ return 0;
+}
+#endif
+
#endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 2f1df6f5b988..6e36e333d8b1 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1375,6 +1375,9 @@ static void update_exclusion_cpumasks(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+
+ ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 25439f961ccf..fb27e929e2cf 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
#include <linux/spinlock.h>
#include <linux/timerqueue.h>
#include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>
#include "timer_migration.h"
#include "tick-internal.h"
@@ -1478,6 +1479,16 @@ static int tmigr_cpu_available(unsigned int cpu)
if (WARN_ON_ONCE(!tmc->tmgroup))
return -EINVAL;
+ /*
+ * Domain isolated CPUs don't participate in timer migration.
+ * Checking here guarantees that CPUs isolated at boot (e.g. isolcpus)
+ * are not marked as available when they first become online.
+ * During runtime, any offline isolated CPU is also not incorrectly
+ * marked as available once it gets back online.
+ */
+ if (!housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
+ cpuset_cpu_is_isolated(cpu))
+ return 0;
raw_spin_lock_irq(&tmc->lock);
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
@@ -1489,6 +1500,38 @@ static int tmigr_cpu_available(unsigned int cpu)
return 0;
}
+static void tmigr_remote_cpu_unavailable(void *ignored)
+{
+ tmigr_cpu_unavailable(smp_processor_id());
+}
+
+static void tmigr_remote_cpu_available(void *ignored)
+{
+ tmigr_cpu_available(smp_processor_id());
+}
+
+int tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask)
+{
+ cpumask_var_t cpumask;
+ int ret = 0;
+
+ lockdep_assert_cpus_held();
+
+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+ return -ENOMEM;
+
+ cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
+ cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_TICK));
+ on_each_cpu_mask(cpumask, tmigr_remote_cpu_unavailable, NULL, 0);
+
+ cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
+ cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
+ on_each_cpu_mask(cpumask, tmigr_remote_cpu_available, NULL, 0);
+
+ free_cpumask_var(cpumask);
+ return ret;
+}
+
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
int node)
{
--
2.49.0
* Re: [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-08 14:53 ` [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
@ 2025-05-20 10:17 ` Frederic Weisbecker
2025-05-20 11:17 ` Gabriele Monaco
2025-05-20 12:02 ` Frederic Weisbecker
1 sibling, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2025-05-20 10:17 UTC (permalink / raw)
To: Gabriele Monaco; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Thu, May 08, 2025 at 04:53:24PM +0200, Gabriele Monaco wrote:
> Currently the user can set up isolcpus and nohz_full in such a way that
> leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
> nor nohz full). This can be a problem for other subsystems (e.g. the
> timer wheel imgration).
>
> Prevent this configuration by setting the boot CPU as housekeeping if
> the union of isolcpus and nohz_full covers all CPUs. In a similar
> fashion as it already happens if either of them covers all CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/tick.h | 2 ++
> kernel/sched/isolation.c | 20 ++++++++++++++++++++
> kernel/time/tick-sched.c | 7 +++++++
> 3 files changed, 29 insertions(+)
>
> diff --git a/include/linux/tick.h b/include/linux/tick.h
> index b8ddc8e631a3..0b32c0bd3512 100644
> --- a/include/linux/tick.h
> +++ b/include/linux/tick.h
> @@ -278,6 +278,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
> extern void tick_nohz_full_kick_cpu(int cpu);
> extern void __tick_nohz_task_switch(void);
> extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
> +extern void __init tick_nohz_full_clear_cpu(unsigned int cpu);
> #else
> static inline bool tick_nohz_full_enabled(void) { return false; }
> static inline bool tick_nohz_full_cpu(int cpu) { return false; }
> @@ -304,6 +305,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
> static inline void tick_nohz_full_kick_cpu(int cpu) { }
> static inline void __tick_nohz_task_switch(void) { }
> static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
> +static inline void tick_nohz_full_clear_cpu(unsigned int cpu) { }
> #endif
>
> static inline void tick_nohz_task_switch(void)
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 81bc8b329ef1..27b65b401534 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -165,6 +165,26 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> }
> }
>
> + /* Check in combination with the previously set cpumask */
> + type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
> + first_cpu = cpumask_first_and_and(cpu_present_mask,
> + housekeeping_staging,
> + housekeeping.cpumasks[type]);
> + if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
> + pr_warn("Housekeeping: must include one present CPU neither "
> + "in nohz_full= nor in isolcpus=, using boot CPU:%d\n",
> + smp_processor_id());
> + for_each_set_bit(type, &housekeeping.flags, HK_TYPE_MAX)
> + __cpumask_set_cpu(smp_processor_id(),
> + housekeeping.cpumasks[type]);
> + __cpumask_set_cpu(smp_processor_id(), housekeeping_staging);
> + __cpumask_clear_cpu(smp_processor_id(), non_housekeeping_mask);
> + tick_nohz_full_clear_cpu(smp_processor_id());
> +
> + if (cpumask_empty(non_housekeeping_mask))
> + goto free_housekeeping_staging;
> + }
> +
Looking again at that, how is it possible to set different CPUs in
isolcpus= and nohz_full=?
enum hk_type type;
unsigned long iter_flags = flags & housekeeping.flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
if (!cpumask_equal(housekeeping_staging,
housekeeping.cpumasks[type])) {
pr_warn("Housekeeping: nohz_full= must match isolcpus=\n");
goto free_housekeeping_staging;
}
}
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-20 10:17 ` Frederic Weisbecker
@ 2025-05-20 11:17 ` Gabriele Monaco
2025-05-20 11:57 ` Frederic Weisbecker
0 siblings, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-20 11:17 UTC (permalink / raw)
To: Frederic Weisbecker; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Tue, 2025-05-20 at 12:17 +0200, Frederic Weisbecker wrote:
> Le Thu, May 08, 2025 at 04:53:24PM +0200, Gabriele Monaco a écrit :
> > Currently the user can set up isolcpus and nohz_full in such a way
> > that
> > leaves no housekeeping CPU (i.e. no CPU that is neither domain
> > isolated
> > nor nohz full). This can be a problem for other subsystems (e.g.
> > the
> > timer wheel imgration).
> >
> > Prevent this configuration by setting the boot CPU as housekeeping
> > if
> > the union of isolcpus and nohz_full covers all CPUs. In a similar
> > fashion as it already happens if either of them covers all CPUs.
> >
> > Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> > ---
> > include/linux/tick.h | 2 ++
> > kernel/sched/isolation.c | 20 ++++++++++++++++++++
> > kernel/time/tick-sched.c | 7 +++++++
> > 3 files changed, 29 insertions(+)
> >
> > diff --git a/include/linux/tick.h b/include/linux/tick.h
> > index b8ddc8e631a3..0b32c0bd3512 100644
> > --- a/include/linux/tick.h
> > +++ b/include/linux/tick.h
> > @@ -278,6 +278,7 @@ static inline void tick_dep_clear_signal(struct
> > signal_struct *signal,
> > extern void tick_nohz_full_kick_cpu(int cpu);
> > extern void __tick_nohz_task_switch(void);
> > extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
> > +extern void __init tick_nohz_full_clear_cpu(unsigned int cpu);
> > #else
> > static inline bool tick_nohz_full_enabled(void) { return false; }
> > static inline bool tick_nohz_full_cpu(int cpu) { return false; }
> > @@ -304,6 +305,7 @@ static inline void tick_dep_clear_signal(struct
> > signal_struct *signal,
> > static inline void tick_nohz_full_kick_cpu(int cpu) { }
> > static inline void __tick_nohz_task_switch(void) { }
> > static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
> > +static inline void tick_nohz_full_clear_cpu(unsigned int cpu) { }
> > #endif
> >
> > static inline void tick_nohz_task_switch(void)
> > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> > index 81bc8b329ef1..27b65b401534 100644
> > --- a/kernel/sched/isolation.c
> > +++ b/kernel/sched/isolation.c
> > @@ -165,6 +165,26 @@ static int __init housekeeping_setup(char
> > *str, unsigned long flags)
> > }
> > }
> >
> > + /* Check in combination with the previously set
> > cpumask */
> > + type = find_first_bit(&housekeeping.flags,
> > HK_TYPE_MAX);
> > + first_cpu =
> > cpumask_first_and_and(cpu_present_mask,
> > +
> > housekeeping_staging,
> > +
> > housekeeping.cpumasks[type]);
> > + if (first_cpu >= nr_cpu_ids || first_cpu >=
> > setup_max_cpus) {
> > + pr_warn("Housekeeping: must include one
> > present CPU neither "
> > + "in nohz_full= nor in isolcpus=,
> > using boot CPU:%d\n",
> > + smp_processor_id());
> > + for_each_set_bit(type,
> > &housekeeping.flags, HK_TYPE_MAX)
> > + __cpumask_set_cpu(smp_processor_id
> > (),
> > +
> > housekeeping.cpumasks[type]);
> > + __cpumask_set_cpu(smp_processor_id(),
> > housekeeping_staging);
> > + __cpumask_clear_cpu(smp_processor_id(),
> > non_housekeeping_mask);
> > + tick_nohz_full_clear_cpu(smp_processor_id(
> > ));
> > +
> > + if (cpumask_empty(non_housekeeping_mask))
> > + goto free_housekeeping_staging;
> > + }
> > +
>
> Looking again at that, how is it possible to set a different CPU
> between
> isolcpus= and nohz_full= ?
>
> enum hk_type type;
> unsigned long iter_flags = flags &
> housekeeping.flags;
>
> for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
> if (!cpumask_equal(housekeeping_staging,
>
> housekeeping.cpumasks[type])) {
> pr_warn("Housekeeping: nohz_full=
> must match isolcpus=\n");
> goto free_housekeeping_staging;
> }
> }
The isolcpus parameter can be used like:
1. isolcpus=1,2,3
2. isolcpus=domain,1,2,3
3. isolcpus=nohz,1,2,3
4. isolcpus=domain,nohz,1,2,3
...
1 and 2 are equivalent (i.e. if no mode is specified, that's domain
isolation), 3 is equivalent to nohz_full=1,2,3 and 4 is equivalent to
1-2 in combination with nohz_full=1,2,3.
Now, the code takes into account that there are 2 arguments that can
isolate (isolcpus and nohz_full) and that they can be passed in any
order; that specific code guards against the two passing inconsistent
masks, e.g.:
isolcpus=nohz,0-4 nohz_full=5-8
Strictly speaking it guards against any other possible inconsistency,
but I believe that's the only one actually achievable.
Again, nothing forbids e.g.
isolcpus=domain,0-4 nohz_full=5-8
since they're different isolation flags and that's allowed (not sure if
it really should be though).
* Re: [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-20 11:17 ` Gabriele Monaco
@ 2025-05-20 11:57 ` Frederic Weisbecker
0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2025-05-20 11:57 UTC (permalink / raw)
To: Gabriele Monaco; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Tue, May 20, 2025 at 01:17:20PM +0200, Gabriele Monaco wrote:
> The isolcpus parameter can be used like:
> 1. isolcpus=1,2,3
> 2. isolcpus=domain,1,2,3
> 3. isolcpus=nohz,1,2,3
> 4. isolcpus=domain,nohz,1,2,3
> ...
>
> 1 and 2 are equivalent (e.g. if no mode is specified, that's domain
> isolation), 3 is equivalent to nohz_full=1,2,3 and 4 is equivalent to
> 1-2 in combination with nohz_full=1,2,3
>
> Now, the code takes into account that there are 2 arguments that can
> isolate (isolcpus and domain) and can be passed in any order, that
> specific code guards against those two passing inconsistent maps, e.g.:
>
> isolcpus=nohz,0-4 nohz_full=5-8
>
> Strictly speaking it's guarding for any other possible inconsistency
> but I believe that's the only one actually achievable.
>
> Again, nothing forbids e.g.
>
> isolcpus=domain,0-4 nohz_full=5-8
>
> since they're different isolation flags and that's allowed (not sure if
> it really should be though).
Duh, yes, it only refuses if the flags are common and the masks are
different.
I seem to remember you already explained that to me last time and I already
slapped my forehead. Prepare for me to ask the same question one more time
in one week ;-)
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-08 14:53 ` [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
2025-05-20 10:17 ` Frederic Weisbecker
@ 2025-05-20 12:02 ` Frederic Weisbecker
2025-05-20 12:28 ` Gabriele Monaco
1 sibling, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2025-05-20 12:02 UTC (permalink / raw)
To: Gabriele Monaco; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Thu, May 08, 2025 at 04:53:24PM +0200, Gabriele Monaco wrote:
> Currently the user can set up isolcpus and nohz_full in such a way that
> leaves no housekeeping CPU (i.e. no CPU that is neither domain isolated
> nor nohz full). This can be a problem for other subsystems (e.g. the
> timer wheel imgration).
>
> Prevent this configuration by setting the boot CPU as housekeeping if
> the union of isolcpus and nohz_full covers all CPUs. In a similar
> fashion as it already happens if either of them covers all CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/tick.h | 2 ++
> kernel/sched/isolation.c | 20 ++++++++++++++++++++
> kernel/time/tick-sched.c | 7 +++++++
> 3 files changed, 29 insertions(+)
>
> diff --git a/include/linux/tick.h b/include/linux/tick.h
> index b8ddc8e631a3..0b32c0bd3512 100644
> --- a/include/linux/tick.h
> +++ b/include/linux/tick.h
> @@ -278,6 +278,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
> extern void tick_nohz_full_kick_cpu(int cpu);
> extern void __tick_nohz_task_switch(void);
> extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
> +extern void __init tick_nohz_full_clear_cpu(unsigned int cpu);
> #else
> static inline bool tick_nohz_full_enabled(void) { return false; }
> static inline bool tick_nohz_full_cpu(int cpu) { return false; }
> @@ -304,6 +305,7 @@ static inline void tick_dep_clear_signal(struct signal_struct *signal,
> static inline void tick_nohz_full_kick_cpu(int cpu) { }
> static inline void __tick_nohz_task_switch(void) { }
> static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
> +static inline void tick_nohz_full_clear_cpu(unsigned int cpu) { }
> #endif
>
> static inline void tick_nohz_task_switch(void)
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 81bc8b329ef1..27b65b401534 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -165,6 +165,26 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> }
> }
>
> + /* Check in combination with the previously set cpumask */
> + type = find_first_bit(&housekeeping.flags, HK_TYPE_MAX);
> + first_cpu = cpumask_first_and_and(cpu_present_mask,
> + housekeeping_staging,
> + housekeeping.cpumasks[type]);
> + if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
> + pr_warn("Housekeeping: must include one present CPU neither "
> + "in nohz_full= nor in isolcpus=, using boot CPU:%d\n",
> + smp_processor_id());
I wouldn't even bother recovering:
pr_warn("Housekeeping: must include one present CPU neither in nohz_full= nor in
isolcpus=\n ignoring setting %lx", flags);
goto free_housekeeping_staging;
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any
2025-05-20 12:02 ` Frederic Weisbecker
@ 2025-05-20 12:28 ` Gabriele Monaco
0 siblings, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-20 12:28 UTC (permalink / raw)
To: Frederic Weisbecker; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Tue, 2025-05-20 at 14:02 +0200, Frederic Weisbecker wrote:
> Le Thu, May 08, 2025 at 04:53:24PM +0200, Gabriele Monaco a écrit :
> > Currently the user can set up isolcpus and nohz_full in such a way
> > that
> > leaves no housekeeping CPU (i.e. no CPU that is neither domain
> > isolated
> > nor nohz full). This can be a problem for other subsystems (e.g.
> > the
> > timer wheel imgration).
> >
> > Prevent this configuration by setting the boot CPU as housekeeping
> > if
> > the union of isolcpus and nohz_full covers all CPUs. In a similar
> > fashion as it already happens if either of them covers all CPUs.
> >
> > Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> > ---
> > include/linux/tick.h | 2 ++
> > kernel/sched/isolation.c | 20 ++++++++++++++++++++
> > kernel/time/tick-sched.c | 7 +++++++
> > 3 files changed, 29 insertions(+)
> >
> > diff --git a/include/linux/tick.h b/include/linux/tick.h
> > index b8ddc8e631a3..0b32c0bd3512 100644
> > --- a/include/linux/tick.h
> > +++ b/include/linux/tick.h
> > @@ -278,6 +278,7 @@ static inline void tick_dep_clear_signal(struct
> > signal_struct *signal,
> > extern void tick_nohz_full_kick_cpu(int cpu);
> > extern void __tick_nohz_task_switch(void);
> > extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
> > +extern void __init tick_nohz_full_clear_cpu(unsigned int cpu);
> > #else
> > static inline bool tick_nohz_full_enabled(void) { return false; }
> > static inline bool tick_nohz_full_cpu(int cpu) { return false; }
> > @@ -304,6 +305,7 @@ static inline void tick_dep_clear_signal(struct
> > signal_struct *signal,
> > static inline void tick_nohz_full_kick_cpu(int cpu) { }
> > static inline void __tick_nohz_task_switch(void) { }
> > static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
> > +static inline void tick_nohz_full_clear_cpu(unsigned int cpu) { }
> > #endif
> >
> > static inline void tick_nohz_task_switch(void)
> > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> > index 81bc8b329ef1..27b65b401534 100644
> > --- a/kernel/sched/isolation.c
> > +++ b/kernel/sched/isolation.c
> > @@ -165,6 +165,26 @@ static int __init housekeeping_setup(char
> > *str, unsigned long flags)
> > }
> > }
> >
> > + /* Check in combination with the previously set
> > cpumask */
> > + type = find_first_bit(&housekeeping.flags,
> > HK_TYPE_MAX);
> > + first_cpu =
> > cpumask_first_and_and(cpu_present_mask,
> > +
> > housekeeping_staging,
> > +
> > housekeeping.cpumasks[type]);
> > + if (first_cpu >= nr_cpu_ids || first_cpu >=
> > setup_max_cpus) {
> > + pr_warn("Housekeeping: must include one
> > present CPU neither "
> > + "in nohz_full= nor in isolcpus=,
> > using boot CPU:%d\n",
> > + smp_processor_id());
>
> I wouldn't even bother recovering:
>
> pr_warn("Housekeeping: must include one present CPU neither in
> nohz_full= nor in
> isolcpus=\n ignoring setting %lx", flags);
>
> goto free_housekeeping_staging;
Yeah good point, that would simplify things with the tick CPU.
Thanks,
Gabriele
* Re: [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-08 14:53 ` [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
@ 2025-05-20 13:39 ` Gabriele Monaco
2025-05-20 14:28 ` Frederic Weisbecker
1 sibling, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-20 13:39 UTC (permalink / raw)
To: Waiman Long; +Cc: linux-kernel, Frederic Weisbecker, Thomas Gleixner
On Thu, 2025-05-08 at 16:53 +0200, Gabriele Monaco wrote:
> Currently the user can set up isolated cpus via cpuset and nohz_full
> in
> such a way that leaves no housekeeping CPU (i.e. no CPU that is
> neither
> domain isolated nor nohz full). This can be a problem for other
> subsystems (e.g. the timer wheel imgration).
>
> Prevent this configuration by blocking any assignation that would
> cause
> the union of domain isolated cpus and nohz_full to covers all CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
Waiman, while testing this patch I got a few doubts about how errors
should be reported in cpusets and about the general behaviour when
cpuset is combined with boot-time isolation.
This is the behaviour introduced by the current patch:
* Assume we boot a 16-core machine with nohz_full=8-15
* We configure an isolated cgroup with 0-9 exclusive CPUs
* The partition file complains with:
  isolated invalid (partition config conflicts with housekeeping setup)
  nproc: 16 (ok)
* We now set the same cgroup with 0-6 isolated CPUs
  The partition is marked as isolated (expected)
  nproc: 9 (ok)
* We set the CPUs back to 0-9
  I'd expect an error somewhere but the partition is still isolated
  nproc: 9 (ok?)
Checking with nproc shows 7-9 are not isolated (but this is not visible
in the effective exclusive CPUs, which still shows 0-9).
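For reference, the first scenario above roughly corresponds to the
following cgroup v2 steps (mount point and cgroup name are assumptions
for illustration):
# cd /sys/fs/cgroup
# echo +cpuset > cgroup.subtree_control
# mkdir test
# echo 0-9 > test/cpuset.cpus
# echo 0-9 > test/cpuset.cpus.exclusive
# echo isolated > test/cpuset.cpus.partition
# cat test/cpuset.cpus.partition
isolated invalid (partition config conflicts with housekeeping setup)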
Now this behaviour seems incorrect to me, but is consistent with the
other flavour of PERR_HKEEPING (already upstream):
* set isolcpus=8-15
  nproc: 8
* set 5-9 as isolated
  isolated invalid (as above)
  nproc: 8
* set 5-7
  isolated
  nproc: 13 (?!)
* set back 5-9
  still isolated
  nproc: 16 (?!)
Here nproc reports isolcpus as no longer isolated, which I find even
more confusing.
Now my questions: is it alright not to report errors when we fail to
isolate some CPUs but can allocate them as exclusive in the cpuset?
Can cpuset really undo some effects of isolcpus or is that a glitch?
Thanks,
Gabriele
> kernel/cgroup/cpuset.c | 67
> ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 65 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 95316d39c282..2f1df6f5b988 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -80,6 +80,12 @@ static cpumask_var_t subpartitions_cpus;
> */
> static cpumask_var_t isolated_cpus;
>
> +/*
> + * Housekeeping CPUs for both HK_TYPE_DOMAIN and
> HK_TYPE_KERNEL_NOISE
> + */
> +static cpumask_var_t full_hk_cpus;
> +static bool have_boot_nohz_full;
> +
> /*
> * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
> */
> @@ -1253,10 +1259,26 @@ static void reset_partition_data(struct
> cpuset *cs)
> static void isolated_cpus_update(int old_prs, int new_prs, struct
> cpumask *xcpus)
> {
> WARN_ON_ONCE(old_prs == new_prs);
> - if (new_prs == PRS_ISOLATED)
> + if (new_prs == PRS_ISOLATED) {
> cpumask_or(isolated_cpus, isolated_cpus, xcpus);
> - else
> + cpumask_andnot(full_hk_cpus, full_hk_cpus, xcpus);
> + } else {
> cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
> + cpumask_or(full_hk_cpus, full_hk_cpus, xcpus);
> + }
> +}
> +
> +/*
> + * isolated_cpus_should_update - Returns if the isolated_cpus mask
> needs update
> + * @prs: new or old partition_root_state
> + * @parent: parent cpuset
> + * Return: true if isolated_cpus needs modification, false otherwise
> + */
> +static bool isolated_cpus_should_update(int prs, struct cpuset
> *parent)
> +{
> + if (!parent)
> + parent = &top_cpuset;
> + return prs != parent->partition_root_state;
> }
>
> /*
> @@ -1323,6 +1345,25 @@ static bool partition_xcpus_del(int old_prs,
> struct cpuset *parent,
> return isolcpus_updated;
> }
>
> +/*
> + * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
> + * @new_cpus: cpu mask
> + * Return: true if there is conflict, false otherwise
> + *
> + * If nohz_full is enabled and we have isolated CPUs, their
> combination must
> + * still leave housekeeping CPUs.
> + */
> +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> +{
> + if (!have_boot_nohz_full)
> + return false;
> +
> + if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> + return true;
> +
> + return false;
> +}
> +
> static void update_exclusion_cpumasks(bool isolcpus_updated)
> {
> int ret;
> @@ -1448,6 +1489,9 @@ static int remote_partition_enable(struct
> cpuset *cs, int new_prs,
> cpumask_intersects(tmp->new_cpus, subpartitions_cpus) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
> return PERR_INVCPUS;
> + if (isolated_cpus_should_update(new_prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->new_cpus))
> + return PERR_HKEEPING;
>
> spin_lock_irq(&callback_lock);
> isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp-
> >new_cpus);
> @@ -1546,6 +1590,9 @@ static void remote_cpus_update(struct cpuset
> *cs, struct cpumask *xcpus,
> else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
> cs->prs_err = PERR_NOCPUS;
> + else if (isolated_cpus_should_update(prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->addmask))
> + cs->prs_err = PERR_HKEEPING;
> if (cs->prs_err)
> goto invalidate;
> }
> @@ -1877,6 +1924,12 @@ static int
> update_parent_effective_cpumask(struct cpuset *cs, int cmd,
> return err;
> }
>
> + if (deleting && isolated_cpus_should_update(new_prs, parent) &&
> + isolcpus_nohz_conflict(tmp->delmask)) {
> + cs->prs_err = PERR_HKEEPING;
> + return PERR_HKEEPING;
> + }
> +
> /*
> * Change the parent's effective_cpus & effective_xcpus (top cpuset
> * only).
> @@ -2897,6 +2950,8 @@ static int update_prstate(struct cpuset *cs,
> int new_prs)
> * Need to update isolated_cpus.
> */
> isolcpus_updated = true;
> + if (isolcpus_nohz_conflict(cs->effective_xcpus))
> + err = PERR_HKEEPING;
> } else {
> /*
> * Switching back to member is always allowed even if it
> @@ -3715,6 +3770,7 @@ int __init cpuset_init(void)
> BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL));
> BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL));
> BUG_ON(!zalloc_cpumask_var(&isolated_cpus, GFP_KERNEL));
> + BUG_ON(!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL));
>
> cpumask_setall(top_cpuset.cpus_allowed);
> nodes_setall(top_cpuset.mems_allowed);
> @@ -3722,17 +3778,24 @@ int __init cpuset_init(void)
> cpumask_setall(top_cpuset.effective_xcpus);
> cpumask_setall(top_cpuset.exclusive_cpus);
> nodes_setall(top_cpuset.effective_mems);
> + cpumask_copy(full_hk_cpus, cpu_present_mask);
>
> fmeter_init(&top_cpuset.fmeter);
> INIT_LIST_HEAD(&remote_children);
>
> BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
>
> + have_boot_nohz_full = housekeeping_enabled(HK_TYPE_KERNEL_NOISE);
> + if (have_boot_nohz_full)
> + cpumask_and(full_hk_cpus, cpu_possible_mask,
> + housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> +
> have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
> if (have_boot_isolcpus) {
> BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
> cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
> cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
> + cpumask_and(full_hk_cpus, full_hk_cpus, boot_hk_cpus);
> }
>
> return 0;
* Re: [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-08 14:53 ` [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-05-20 13:39 ` Gabriele Monaco
@ 2025-05-20 14:28 ` Frederic Weisbecker
2025-05-20 15:24 ` Gabriele Monaco
2025-05-23 11:15 ` Gabriele Monaco
1 sibling, 2 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2025-05-20 14:28 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Thomas Gleixner, Waiman Long, Anna-Maria Behnsen
(Please keep Anna-Maria Cc'ed)
On Thu, May 08, 2025 at 04:53:25PM +0200, Gabriele Monaco wrote:
> Currently the user can set up isolated cpus via cpuset and nohz_full in
> such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
> domain isolated nor nohz full). This can be a problem for other
> subsystems (e.g. the timer wheel migration).
>
> Prevent this configuration by blocking any assignment that would cause
> the union of domain isolated CPUs and nohz_full to cover all CPUs.
>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> kernel/cgroup/cpuset.c | 67 ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 65 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 95316d39c282..2f1df6f5b988 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -80,6 +80,12 @@ static cpumask_var_t subpartitions_cpus;
> */
> static cpumask_var_t isolated_cpus;
>
> +/*
> + * Housekeeping CPUs for both HK_TYPE_DOMAIN and HK_TYPE_KERNEL_NOISE
> + */
> +static cpumask_var_t full_hk_cpus;
> +static bool have_boot_nohz_full;
Do you really need to maintain those copies?
> +
> /*
> * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
> */
> @@ -1253,10 +1259,26 @@ static void reset_partition_data(struct cpuset *cs)
> static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus)
> {
> WARN_ON_ONCE(old_prs == new_prs);
> - if (new_prs == PRS_ISOLATED)
> + if (new_prs == PRS_ISOLATED) {
> cpumask_or(isolated_cpus, isolated_cpus, xcpus);
> - else
> + cpumask_andnot(full_hk_cpus, full_hk_cpus, xcpus);
> + } else {
> cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
> + cpumask_or(full_hk_cpus, full_hk_cpus, xcpus);
> + }
> +}
> +
> +/*
> + * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
> + * @prs: new or old partition_root_state
> + * @parent: parent cpuset
> + * Return: true if isolated_cpus needs modification, false otherwise
> + */
> +static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
> +{
> + if (!parent)
> + parent = &top_cpuset;
> + return prs != parent->partition_root_state;
> }
>
> /*
> @@ -1323,6 +1345,25 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> return isolcpus_updated;
> }
>
> +/*
> + * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
> + * @new_cpus: cpu mask
The description lacks an explanation of the role of this cpumask.
> + * Return: true if there is conflict, false otherwise
> + *
> + * If nohz_full is enabled and we have isolated CPUs, their combination must
> + * still leave housekeeping CPUs.
> + */
> +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> +{
> + if (!have_boot_nohz_full)
> + return false;
> +
> + if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> + return true;
Do we also need to make sure that in this weight there is an online CPU?
Can you allocate a temporary mask here and do:
cpumask_var_t full_hk_cpus;
int ret;

if (!zalloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
	return true;

cpumask_copy(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
cpumask_and(full_hk_cpus, full_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
	ret = true;
else
	ret = false;

free_cpumask_var(full_hk_cpus);
return ret;
I also realize something: what makes sure that we don't offline the last
non-isolated CPU?
I just did a small test:
# cd /sys/fs/cgroup/
# echo +cpuset > cgroup.subtree_control
# cat cpuset.cpus.effective
0-7
# mkdir test
# cd test
# echo +cpuset > cgroup.subtree_control
# echo 0-6 > cpuset.cpus
# echo isolated > cpuset.cpus.partition
# cat ../cpuset.cpus.effective
7
# echo 0 > /sys/devices/system/cpu/cpu7/online
[ 4590.864066] ------------[ cut here ]------------
[ 4590.866469] WARNING: CPU: 7 PID: 50 at kernel/cgroup/cpuset.c:1906 update_parent_effective_cpumask+0x770/0x8c0
[ 4590.870023] Modules linked in:
[ 4590.871058] CPU: 7 UID: 0 PID: 50 Comm: cpuhp/7 Not tainted 6.15.0-rc2-g996d9d202383 #10 PREEMPT(voluntary)
[ 4590.873588] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-2-gc13ff2cd-prebuilt.qemu.org 04/01/2014
[ 4590.875689] RIP: 0010:update_parent_effective_cpumask+0x770/0x8c0
[ 4590.876858] Code: 06 48 8b 0c 24 ba 05 00 00 00 48 23 85 f8 00 00 00 41 0f 95 c6 48 89 01 41 8b 84 24 34 01 00 00 45 0f b6 f6 e9 90 fe ff ff 90 <0f> 0b 90e
[ 4590.880010] RSP: 0018:ffffa4ce001ebd40 EFLAGS: 00010086
[ 4590.880963] RAX: 00000000ffffffff RBX: 0000000000000000 RCX: 0000000000000001
[ 4590.882342] RDX: 000000000000007f RSI: 0000000000000000 RDI: 0000000000000002
[ 4590.883683] RBP: ffffffffbdf52f00 R08: 0000000000000000 R09: 0000000000000000
[ 4590.885071] R10: ffffa223062d2388 R11: 0000000000000000 R12: ffffa223062d2200
[ 4590.886604] R13: 0000000000000002 R14: 0000000000000001 R15: 0000000000000004
[ 4590.888309] FS: 0000000000000000(0000) GS:ffffa223bc4d6000(0000) knlGS:0000000000000000
[ 4590.890183] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4590.891385] CR2: 000055ab80ada170 CR3: 00000001084ac000 CR4: 00000000000006f0
[ 4590.892901] DR0: ffffffffbc8c8420 DR1: 0000000000000000 DR2: 0000000000000000
[ 4590.894341] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[ 4590.895765] Call Trace:
[ 4590.896400] <TASK>
[ 4590.896938] cpuset_update_active_cpus+0x680/0x730
[ 4590.897979] ? kvm_sched_clock_read+0x11/0x20
[ 4590.898916] ? sched_clock+0x10/0x30
[ 4590.899785] sched_cpu_deactivate+0x148/0x170
[ 4590.900812] ? __pfx_sched_cpu_deactivate+0x10/0x10
[ 4590.901925] cpuhp_invoke_callback+0x10e/0x480
[ 4590.902920] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 4590.903928] cpuhp_thread_fun+0xd7/0x160
[ 4590.904818] smpboot_thread_fn+0xee/0x220
[ 4590.905716] kthread+0xf6/0x1f0
[ 4590.906471] ? __pfx_kthread+0x10/0x10
[ 4590.907297] ret_from_fork+0x2f/0x50
[ 4590.908110] ? __pfx_kthread+0x10/0x10
[ 4590.908917] ret_from_fork_asm+0x1a/0x30
[ 4590.909833] </TASK>
[ 4590.910465] ---[ end trace 0000000000000000 ]---
[ 4590.916786] smpboot: CPU 7 is now offline
Apparently you can't trigger the same with isolcpus=0-6, for some reason.
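One way to plug that hole (just a sketch, with a hypothetical cpuhp callback
name, reusing the full_hk_cpus mask introduced in this patch) would be to
mirror what tick_nohz_cpu_down() does for the timekeeper and veto the offline:

static int cpuset_full_hk_cpu_down(unsigned int cpu)
{
	int ret = 0;

	/*
	 * Sketch only, not part of this series: refuse to offline the
	 * last online CPU that is housekeeping for both HK_TYPE_DOMAIN
	 * and HK_TYPE_KERNEL_NOISE and not in an isolated partition,
	 * i.e. the last online CPU left in full_hk_cpus.
	 */
	spin_lock_irq(&callback_lock);
	if (cpumask_weight_and(full_hk_cpus, cpu_online_mask) == 1 &&
	    cpumask_test_cpu(cpu, full_hk_cpus))
		ret = -EBUSY;
	spin_unlock_irq(&callback_lock);

	return ret;
}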
One last thing: nohz_full makes sure that we never offline the timekeeper
(see tick_nohz_cpu_down()). The timekeeper also never shuts down its tick
and therefore never goes idle from the tmigr perspective. This way, when a
nohz_full CPU shuts down its tick, its global timers are guaranteed to be
handled by the timekeeper as a last resort, because it is the last global
migrator, always alive.
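For reference, that guarantee comes from the cpuhp callback in
kernel/time/tick-sched.c, which looks roughly like this (paraphrased, the
exact code may differ between kernel versions):

static int tick_nohz_cpu_down(unsigned int cpu)
{
	/*
	 * The tick_do_timer_cpu CPU handles housekeeping duty (unbound
	 * timers, workqueues, timekeeping, ...) on behalf of full
	 * dynticks CPUs. It must remain online when nohz full is enabled.
	 */
	if (tick_nohz_full_running && tick_do_timer_cpu == cpu)
		return -EBUSY;
	return 0;
}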
But if the timekeeper is domain isolated (excluded from HK_TYPE_DOMAIN
housekeeping) or isolated by cpuset, it will drop out of the tmigr
hierarchy, breaking the guarantee of a live global migrator for nohz_full.
That one is a bit more tricky to solve. The easiest is to forbid the timekeeper
from ever being made unavailable. It is also possible to migrate the timekeeping duty
to another common housekeeper.
We probably need to do the latter...
Thanks.
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v5 6/6] timers: Exclude isolated cpus from timer migation
2025-05-08 14:53 ` [PATCH v5 6/6] timers: Exclude isolated cpus from timer migation Gabriele Monaco
@ 2025-05-20 14:43 ` Frederic Weisbecker
0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2025-05-20 14:43 UTC (permalink / raw)
To: Gabriele Monaco; +Cc: linux-kernel, Thomas Gleixner, Waiman Long
On Thu, May 08, 2025 at 04:53:26PM +0200, Gabriele Monaco wrote:
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index 25439f961ccf..fb27e929e2cf 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -10,6 +10,7 @@
> #include <linux/spinlock.h>
> #include <linux/timerqueue.h>
> #include <trace/events/ipi.h>
> +#include <linux/sched/isolation.h>
>
> #include "timer_migration.h"
> #include "tick-internal.h"
> @@ -1478,6 +1479,16 @@ static int tmigr_cpu_available(unsigned int cpu)
> if (WARN_ON_ONCE(!tmc->tmgroup))
> return -EINVAL;
>
> + /*
> + * Domain isolated CPUs don't participate in timer migration.
> + * Checking here guarantees that CPUs isolated at boot (e.g. isolcpus)
> + * are not marked as available when they first become online.
> + * During runtime, any offline isolated CPU is also not incorrectly
> + * marked as available once it gets back online.
> + */
> + if (!housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
> + cpuset_cpu_is_isolated(cpu))
if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE) &&
    (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) || cpuset_cpu_is_isolated(cpu)))
Because nohz_full= must be part of the hierarchy.
> + return 0;
> raw_spin_lock_irq(&tmc->lock);
> trace_tmigr_cpu_available(tmc);
> tmc->idle = timer_base_is_idle();
> @@ -1489,6 +1500,38 @@ static int tmigr_cpu_available(unsigned int cpu)
> return 0;
> }
>
> +static void tmigr_remote_cpu_unavailable(void *ignored)
> +{
> + tmigr_cpu_unavailable(smp_processor_id());
> +}
> +
> +static void tmigr_remote_cpu_available(void *ignored)
> +{
> + tmigr_cpu_available(smp_processor_id());
> +}
> +
> +int tmigr_isolated_exclude_cpumask(cpumask_var_t exclude_cpumask)
> +{
> + cpumask_var_t cpumask;
> + int ret = 0;
> +
> + lockdep_assert_cpus_held();
> +
> + if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
> + return -ENOMEM;
> +
> + cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
> + cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_TICK));
Good, but please use HK_TYPE_KERNEL_NOISE; I need to finish that rename at some
point.
Thanks.
> + on_each_cpu_mask(cpumask, tmigr_remote_cpu_unavailable, NULL, 0);
> +
> + cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
> + cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
> + on_each_cpu_mask(cpumask, tmigr_remote_cpu_available, NULL, 0);
> +
> + free_cpumask_var(cpumask);
> + return ret;
> +}
> +
> static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
> int node)
> {
> --
> 2.49.0
>
--
Frederic Weisbecker
SUSE Labs
* Re: [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-20 14:28 ` Frederic Weisbecker
@ 2025-05-20 15:24 ` Gabriele Monaco
2025-05-23 11:15 ` Gabriele Monaco
1 sibling, 0 replies; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-20 15:24 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: linux-kernel, Thomas Gleixner, Waiman Long, Anna-Maria Behnsen
On Tue, 2025-05-20 at 16:28 +0200, Frederic Weisbecker wrote:
> (Please keep Anna-Maria Cc'ed)
>
> On Thu, May 08, 2025 at 04:53:25PM +0200, Gabriele Monaco wrote:
> > Currently the user can set up isolated cpus via cpuset and
> > nohz_full in
> > such a way that leaves no housekeeping CPU (i.e. no CPU that is
> > neither
> > domain isolated nor nohz full). This can be a problem for other
> > subsystems (e.g. the timer wheel migration).
> >
> > Prevent this configuration by blocking any assignment that would cause
> > the union of domain isolated CPUs and nohz_full to cover all CPUs.
> >
> > Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> > ---
> > kernel/cgroup/cpuset.c | 67
> > ++++++++++++++++++++++++++++++++++++++++--
> > 1 file changed, 65 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 95316d39c282..2f1df6f5b988 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -80,6 +80,12 @@ static cpumask_var_t subpartitions_cpus;
> > */
> > static cpumask_var_t isolated_cpus;
> >
> > +/*
> > + * Housekeeping CPUs for both HK_TYPE_DOMAIN and
> > HK_TYPE_KERNEL_NOISE
> > + */
> > +static cpumask_var_t full_hk_cpus;
> > +static bool have_boot_nohz_full;
>
> Do you really need to maintain those copies?
>
Yeah, good point. I wanted to avoid allocating temporary masks, but it's
probably better than maintaining those copies.
> > +
> > /*
> > * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
> > */
> > @@ -1253,10 +1259,26 @@ static void reset_partition_data(struct
> > cpuset *cs)
> > static void isolated_cpus_update(int old_prs, int new_prs, struct
> > cpumask *xcpus)
> > {
> > WARN_ON_ONCE(old_prs == new_prs);
> > - if (new_prs == PRS_ISOLATED)
> > + if (new_prs == PRS_ISOLATED) {
> > cpumask_or(isolated_cpus, isolated_cpus, xcpus);
> > - else
> > + cpumask_andnot(full_hk_cpus, full_hk_cpus, xcpus);
> > + } else {
> > cpumask_andnot(isolated_cpus, isolated_cpus,
> > xcpus);
> > + cpumask_or(full_hk_cpus, full_hk_cpus, xcpus);
> > + }
> > +}
> > +
> > +/*
> > + * isolated_cpus_should_update - Returns if the isolated_cpus mask
> > needs update
> > + * @prs: new or old partition_root_state
> > + * @parent: parent cpuset
> > + * Return: true if isolated_cpus needs modification, false
> > otherwise
> > + */
> > +static bool isolated_cpus_should_update(int prs, struct cpuset
> > *parent)
> > +{
> > + if (!parent)
> > + parent = &top_cpuset;
> > + return prs != parent->partition_root_state;
> > }
> >
> > /*
> > @@ -1323,6 +1345,25 @@ static bool partition_xcpus_del(int old_prs,
> > struct cpuset *parent,
> > return isolcpus_updated;
> > }
> >
> > +/*
> > + * isolcpus_nohz_conflict - check for isolated & nohz_full
> > conflicts
> > + * @new_cpus: cpu mask
>
> The description lacks explanation about the role of this cpu mask.
>
Mmh yeah, that was a copy-paste from prstate_housekeeping_conflict, but
I agree, I should describe it better here at least.
> > + * Return: true if there is conflict, false otherwise
> > + *
> > + * If nohz_full is enabled and we have isolated CPUs, their
> > combination must
> > + * still leave housekeeping CPUs.
> > + */
> > +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> > +{
> > + if (!have_boot_nohz_full)
> > + return false;
> > +
> > + if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> > + return true;
>
> Do we also need to make sure that in this weight there is an online
> CPU?
>
> Can you allocate a temporary mask here and do:
>
> cpumask_var_t full_hk_cpus;
> int ret;
>
> if (!zalloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
> return true;
>
> cpumask_copy(full_hk_cpus,
> housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> cpumask_and(full_hk_cpus, full_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
> cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
> if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> ret = true;
> else
> ret = false;
>
> free_cpumask_var(full_hk_cpus);
>
Yeah, that looks safer. I'll go with a mask.
> I also realize something, what makes sure that we don't offline the
> last
> non isolated?
>
Mmh, I guess we need to enforce that too then. I remember that under some
conditions the system was preventing me from doing that, but I need to
play a bit more to understand what's going on...
> I just did a small test:
>
> # cd /sys/fs/cgroup/
> # echo +cpuset > cgroup.subtree_control
> # cat cpuset.cpus.effective
> 0-7
> # mkdir test
> # cd test
> # echo +cpuset > cgroup.subtree_control
> # echo 0-6 > cpuset.cpus
> # echo isolated > cpuset.cpus.partition
> # cat ../cpuset.cpus.effective
> 7
> # echo 0 > /sys/devices/system/cpu/cpu7/online
> [ 4590.864066] ------------[ cut here ]------------
> [ 4590.866469] WARNING: CPU: 7 PID: 50 at kernel/cgroup/cpuset.c:1906
> update_parent_effective_cpumask+0x770/0x8c0
> [ 4590.870023] Modules linked in:
> [ 4590.871058] CPU: 7 UID: 0 PID: 50 Comm: cpuhp/7 Not tainted
> 6.15.0-rc2-g996d9d202383 #10 PREEMPT(voluntary)
> [ 4590.873588] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
> BIOS rel-1.16.3-2-gc13ff2cd-prebuilt.qemu.org 04/01/2014
> [ 4590.875689] RIP: 0010:update_parent_effective_cpumask+0x770/0x8c0
> [ 4590.876858] Code: 06 48 8b 0c 24 ba 05 00 00 00 48 23 85 f8 00 00
> 00 41 0f 95 c6 48 89 01 41 8b 84 24 34 01 00 00 45 0f b6 f6 e9 90 fe
> ff ff 90 <0f> 0b 90e
> [ 4590.880010] RSP: 0018:ffffa4ce001ebd40 EFLAGS: 00010086
> [ 4590.880963] RAX: 00000000ffffffff RBX: 0000000000000000 RCX:
> 0000000000000001
> [ 4590.882342] RDX: 000000000000007f RSI: 0000000000000000 RDI:
> 0000000000000002
> [ 4590.883683] RBP: ffffffffbdf52f00 R08: 0000000000000000 R09:
> 0000000000000000
> [ 4590.885071] R10: ffffa223062d2388 R11: 0000000000000000 R12:
> ffffa223062d2200
> [ 4590.886604] R13: 0000000000000002 R14: 0000000000000001 R15:
> 0000000000000004
> [ 4590.888309] FS: 0000000000000000(0000) GS:ffffa223bc4d6000(0000)
> knlGS:0000000000000000
> [ 4590.890183] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 4590.891385] CR2: 000055ab80ada170 CR3: 00000001084ac000 CR4:
> 00000000000006f0
> [ 4590.892901] DR0: ffffffffbc8c8420 DR1: 0000000000000000 DR2:
> 0000000000000000
> [ 4590.894341] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000600
> [ 4590.895765] Call Trace:
> [ 4590.896400] <TASK>
> [ 4590.896938] cpuset_update_active_cpus+0x680/0x730
> [ 4590.897979] ? kvm_sched_clock_read+0x11/0x20
> [ 4590.898916] ? sched_clock+0x10/0x30
> [ 4590.899785] sched_cpu_deactivate+0x148/0x170
> [ 4590.900812] ? __pfx_sched_cpu_deactivate+0x10/0x10
> [ 4590.901925] cpuhp_invoke_callback+0x10e/0x480
> [ 4590.902920] ? __pfx_smpboot_thread_fn+0x10/0x10
> [ 4590.903928] cpuhp_thread_fun+0xd7/0x160
> [ 4590.904818] smpboot_thread_fn+0xee/0x220
> [ 4590.905716] kthread+0xf6/0x1f0
> [ 4590.906471] ? __pfx_kthread+0x10/0x10
> [ 4590.907297] ret_from_fork+0x2f/0x50
> [ 4590.908110] ? __pfx_kthread+0x10/0x10
> [ 4590.908917] ret_from_fork_asm+0x1a/0x30
> [ 4590.909833] </TASK>
> [ 4590.910465] ---[ end trace 0000000000000000 ]---
> [ 4590.916786] smpboot: CPU 7 is now offline
>
> Apparently you can't trigger the same with isolcpus=0-6, for some
> reason.
>
> One last thing, nohz_full makes sure that we never offline the
> timekeeper
> (see tick_nohz_cpu_down()). The timekeeper also never shuts down its
> tick
> and therefore never go idle, from tmigr perspective, this way when a
> nohz_full
> CPU shuts down its tick, it makes sure that its global timers are
> handled by
> the timekeeper in last resort, because it's the last global migrator,
> always
> alive.
>
> But if the timekeeper is HK_TYPE_DOMAIN, or isolated by cpuset, it
> will go out
> of the tmigr hierarchy, breaking the guarantee to have a live global
> migrator
> for nohz_full.
>
> That one is a bit more tricky to solve. The easiest is to forbid the
> timekeeper
> from ever being made unavailable. It is also possible to migrate the
> timekeeping duty
> to another common housekeeper.
>
Mmh, if I get it correctly, this would mean we need to:
1. make sure the timekeeper is not in isolcpus at boot
2. change timekeeper in case it is included in an isolated cpuset
3. avoid picking domain isolated CPUs when the timekeeper gets offline
All those would be meaningful only if nohz_full is active, right? Am I
missing any corner case? Are any of those already happening?
Thanks,
Gabriele
* Re: [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-20 14:28 ` Frederic Weisbecker
2025-05-20 15:24 ` Gabriele Monaco
@ 2025-05-23 11:15 ` Gabriele Monaco
2025-06-24 14:11 ` Frederic Weisbecker
1 sibling, 1 reply; 18+ messages in thread
From: Gabriele Monaco @ 2025-05-23 11:15 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: linux-kernel, Thomas Gleixner, Waiman Long, Anna-Maria Behnsen
On Tue, 2025-05-20 at 16:28 +0200, Frederic Weisbecker wrote:
>
> Apparently you can't trigger the same with isolcpus=0-6, for some
> reason.
>
> One last thing, nohz_full makes sure that we never offline the
> timekeeper
> (see tick_nohz_cpu_down()). The timekeeper also never shuts down its
> tick
> and therefore never go idle, from tmigr perspective, this way when a
> nohz_full
> CPU shuts down its tick, it makes sure that its global timers are
> handled by
> the timekeeper in last resort, because it's the last global migrator,
> always
> alive.
>
> But if the timekeeper is HK_TYPE_DOMAIN, or isolated by cpuset, it
> will go out
> of the tmigr hierarchy, breaking the guarantee to have a live global
> migrator
> for nohz_full.
>
> That one is a bit more tricky to solve. The easiest is to forbid the
> timekeeper
> from ever being made unavailable. It is also possible to migrate the
> timekeeping duty
> to another common housekeeper.
>
> We probably need to do the latter...
I'm thinking about this again: is it really worth the extra complexity?
The tick CPU is already set to the boot CPU, and if the user requests it
as nohz_full, that's not accepted.
In my understanding, this typically happens on CPU0, which is somewhat
special and is advised to stay as housekeeping anyway. As far as I
understand, when nohz_full is enabled, the tick CPU cannot change.
That said, I'd reconsider force-keeping the tick CPU in the hierarchy,
whether or not we isolate it, when nohz_full is active (i.e. what you
mentioned as the /easy/ way).
We'd not prevent domain isolation (as the user requested), but allow a
bit more noise just on that CPU for the sake of keeping things simple
while not falling into dangerous corner cases.
If that's still a problem for the user, they are probably better off
either selecting a different mask or setting nohz_full consistently
(I'm still wondering how common a scenario this is).
Am I missing something here?
Thanks,
Gabriele
* Re: [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-05-23 11:15 ` Gabriele Monaco
@ 2025-06-24 14:11 ` Frederic Weisbecker
0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2025-06-24 14:11 UTC (permalink / raw)
To: Gabriele Monaco
Cc: linux-kernel, Thomas Gleixner, Waiman Long, Anna-Maria Behnsen
(Sorry for the delay, I forgot to reply to that one)
On Fri, May 23, 2025 at 01:15:44PM +0200, Gabriele Monaco wrote:
> On Tue, 2025-05-20 at 16:28 +0200, Frederic Weisbecker wrote:
> >
> > Apparently you can't trigger the same with isolcpus=0-6, for some
> > reason.
> >
> > One last thing, nohz_full makes sure that we never offline the
> > timekeeper
> > (see tick_nohz_cpu_down()). The timekeeper also never shuts down its
> > tick
> > and therefore never go idle, from tmigr perspective, this way when a
> > nohz_full
> > CPU shuts down its tick, it makes sure that its global timers are
> > handled by
> > the timekeeper in last resort, because it's the last global migrator,
> > always
> > alive.
> >
> > But if the timekeeper is HK_TYPE_DOMAIN, or isolated by cpuset, it
> > will go out
> > of the tmigr hierarchy, breaking the guarantee to have a live global
> > migrator
> > for nohz_full.
> >
> > That one is a bit more tricky to solve. The easiest is to forbid the
> > timekeeper
> > from ever being made unavailable. It is also possible to migrate the
> > timekeeping duty
> > to another common housekeeper.
> >
> > We probably need to do the latter...
>
> I'm thinking about this again, is it really worth the extra complexity?
>
> The tick CPU is already set as the boot CPU and if the user requests it
> as nohz_full, that's not accepted.
Actually that's possible, unfortunately...
> In my understanding, this typically happens on CPU0 and this CPU is
> kinda special and is advised to stay as housekeeping. As far as I
> understand, when nohz_full is enabled, the tick CPU cannot change.
It can change, though fortunately only during early boot.
>
> Said that, I'd reconsider force keeping the tick CPU in the hierarchy
> no matter if we isolate it or not when nohz_full is active (e.g. what
> you mentioned as the /easy/ way).
> We'd not prevent domain isolation (as the user requested), but allow a
> bit more noise just on that CPU for the sake of keeping things simple
> while not falling into dangerous corner cases.
> If that's still a problem for the user, they are probably better off
> either selecting a different mask or setting nohz_full consistently
> (I'm still wondering how common a scenario this is).
>
> Am I missing something here?
Agreed, forcing the tick CPU to stay in the hierarchy when nohz_full is
enabled is the easiest.
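A minimal sketch of that, on top of the check in patch 6 and the condition
suggested earlier in the thread; it assumes tick_do_timer_cpu (declared in
tick-internal.h, already included by timer_migration.c) and
tick_nohz_full_enabled() are usable there, so this is an illustration rather
than the final patch:

	/*
	 * Sketch: the nohz_full timekeeper must never drop out of the
	 * hierarchy, even if it is domain isolated or sits in an
	 * isolated cpuset, so there is always a live global migrator.
	 */
	if (!(tick_nohz_full_enabled() && cpu == tick_do_timer_cpu) &&
	    housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE) &&
	    (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
	     cpuset_cpu_is_isolated(cpu)))
		return 0;

The same exemption would probably be needed in tmigr_isolated_exclude_cpumask(),
so that the exclude mask never contains the tick CPU.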
Thanks.
>
> Thanks,
> Gabriele
>
--
Frederic Weisbecker
SUSE Labs
end of thread [~2025-06-24 14:11 UTC | newest]
Thread overview: 18+ messages:
2025-05-08 14:53 [PATCH v5 0/6] timers: Exclude isolated cpus from timer migation Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 1/6] timers: Rename tmigr 'online' bit to 'available' Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 2/6] timers: Add the available mask in timer migration Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 3/6] cgroup/cpuset: Rename update_unbound_workqueue_cpumask() to update_exclusion_cpumasks() Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 4/6] sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any Gabriele Monaco
2025-05-20 10:17 ` Frederic Weisbecker
2025-05-20 11:17 ` Gabriele Monaco
2025-05-20 11:57 ` Frederic Weisbecker
2025-05-20 12:02 ` Frederic Weisbecker
2025-05-20 12:28 ` Gabriele Monaco
2025-05-08 14:53 ` [PATCH v5 5/6] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Gabriele Monaco
2025-05-20 13:39 ` Gabriele Monaco
2025-05-20 14:28 ` Frederic Weisbecker
2025-05-20 15:24 ` Gabriele Monaco
2025-05-23 11:15 ` Gabriele Monaco
2025-06-24 14:11 ` Frederic Weisbecker
2025-05-08 14:53 ` [PATCH v5 6/6] timers: Exclude isolated cpus from timer migation Gabriele Monaco
2025-05-20 14:43 ` Frederic Weisbecker