* [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity
@ 2025-08-29 15:47 Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
` (33 more replies)
0 siblings, 34 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Tejun Heo, Michal Hocko, Marco Crivellari,
Thomas Gleixner, Peter Zijlstra, Waiman Long
Hi,
The kthread code was recently enhanced to provide an infrastructure
that manages the preferred affinity of unbound kthreads (node or custom
cpumask) against housekeeping constraints and CPU hotplug events.
One crucial piece is missing though: cpuset. When an isolated partition
is created, deleted, or has its CPUs updated, all the unbound kthreads
in the top cpuset are affined to _all_ the non-isolated CPUs, possibly
breaking their preferred affinity along the way.
Solve this by delegating the kthread affinity update from cpuset to the
relevant consolidated code in kthread, so that preferred affinities are
honoured.
The dispatch of the new cpumasks to workqueues and kthreads is performed
by housekeeping, as per Tejun's nice suggestion.
As a welcome side effect, HK_TYPE_DOMAIN then integrates both the set
from isolcpus= and cpuset isolated partitions. Housekeeping cpumasks are
now modifiable, with dedicated synchronization. This is a big step
toward also making nohz_full= mutable through cpuset in the future.
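For illustration, a minimal reader-side sketch of the resulting pattern
(hypothetical caller; housekeeping_cpu() and HK_TYPE_DOMAIN are the real
APIs touched by this series): the mask test and the work enqueue sit in
one RCU read-side section, so an updater can flush stale works after
synchronize_rcu():
	rcu_read_lock();
	if (housekeeping_cpu(cpu, HK_TYPE_DOMAIN))	/* reads the RCU-protected mask */
		schedule_work_on(cpu, &work);		/* enqueue covered by the same section */
	rcu_read_unlock();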
Changes since v1:
- Drop the housekeeping lock and use RCU to synchronize housekeeping
against cpuset changes.
- Add housekeeping documentation
- Simplify CPU hotplug handling
- Collect ack from Shakeel Butt
- Handle sched/arm64's task fallback cpumask move to HK_TYPE_DOMAIN
- Fix genirq kthreads affinity
- Add missing kernel doc
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
kthread/core-v2
HEAD: 092784f7df0aa6415c91ae5edc1c1a72603b5c50
Thanks,
Frederic
---
Frederic Weisbecker (32):
sched/isolation: Remove housekeeping static key
PCI: Protect against concurrent change of housekeeping cpumask
cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug"
memcg: Prepare to protect against concurrent isolated cpuset change
mm: vmstat: Prepare to protect against concurrent isolated cpuset change
sched/isolation: Save boot defined domain flags
cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
driver core: cpu: Convert /sys/devices/system/cpu/isolated to use HK_TYPE_DOMAIN_BOOT
net: Keep ignoring isolated cpuset change
block: Protect against concurrent isolated cpuset change
cpu: Provide lockdep check for CPU hotplug lock write-held
cpuset: Provide lockdep check for cpuset lock held
sched/isolation: Convert housekeeping cpumasks to rcu pointers
cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
sched/isolation: Flush memcg workqueues on cpuset isolated partition change
sched/isolation: Flush vmstat workqueues on cpuset isolated partition change
cpuset: Propagate cpuset isolation update to workqueue through housekeeping
cpuset: Remove cpuset_cpu_is_isolated()
sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
PCI: Remove superfluous HK_TYPE_WQ check
kthread: Refine naming of affinity related fields
kthread: Include unbound kthreads in the managed affinity list
kthread: Include kthreadd to the managed affinity list
kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management
sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN
sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
kthread: Honour kthreads preferred affinity after cpuset changes
kthread: Comment on the purpose and placement of kthread_affine_node() call
kthread: Add API to update preferred affinity on kthread runtime
kthread: Document kthread_affine_preferred()
genirq: Correctly handle preferred kthreads affinity
doc: Add housekeeping documentation
Gabriele Monaco (1):
cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
Documentation/cpu_isolation/housekeeping.rst | 111 +++++++++++++++
arch/arm64/kernel/cpufeature.c | 18 ++-
block/blk-mq.c | 6 +-
drivers/base/cpu.c | 2 +-
drivers/pci/pci-driver.c | 50 ++++---
include/linux/cpu.h | 4 +
include/linux/cpuhplock.h | 1 +
include/linux/cpuset.h | 8 +-
include/linux/kthread.h | 2 +
include/linux/memcontrol.h | 4 +
include/linux/mmu_context.h | 2 +-
include/linux/percpu-rwsem.h | 1 +
include/linux/sched/isolation.h | 30 +++--
include/linux/vmstat.h | 2 +
include/linux/workqueue.h | 2 +-
init/Kconfig | 1 +
kernel/cgroup/cpuset.c | 131 +++++++++++++-----
kernel/cpu.c | 42 +++---
kernel/irq/manage.c | 47 ++++---
kernel/kthread.c | 195 +++++++++++++++++++--------
kernel/sched/isolation.c | 185 ++++++++++++++++---------
kernel/sched/sched.h | 4 +
kernel/workqueue.c | 2 +-
mm/memcontrol.c | 25 +++-
mm/vmstat.c | 15 ++-
net/core/net-sysfs.c | 2 +-
26 files changed, 639 insertions(+), 253 deletions(-)
* [PATCH 01/33] sched/isolation: Remove housekeeping static key
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 21:34 ` Waiman Long
2025-09-01 10:26 ` Peter Zijlstra
2025-08-29 15:47 ` [PATCH 02/33] PCI: Protect against concurrent change of housekeeping cpumask Frederic Weisbecker
` (32 subsequent siblings)
33 siblings, 2 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Ingo Molnar, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
The housekeeping static key in its current use is mostly irrelevant.
Most of the time, a housekeeping function call has already been issued
before the static branch gets a chance to be evaluated, defeating the
purpose of that call optimization.
housekeeping_cpu() is the sole correct user, testing the static branch
before the actual slow-path function call. But it is seldom used in
fast-paths.
Finally, the static key prevents correct synchronization against
dynamic updates of the housekeeping cpumasks through cpuset.
Get away with a simple flag test instead.
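As an illustration, a sketch of a fast-path caller (hypothetical, not
part of this patch): the check now costs a flags load and a bit test
instead of a patched branch:
	/* hypothetical fast-path caller */
	if (!housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
		return;	/* CPU is isolated from kernel noise, skip */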
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/sched/isolation.h | 25 +++++----
kernel/sched/isolation.c | 90 ++++++++++++++-------------------
2 files changed, 55 insertions(+), 60 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index d8501f4709b5..f98ba0d71c52 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -25,12 +25,22 @@ enum hk_type {
};
#ifdef CONFIG_CPU_ISOLATION
-DECLARE_STATIC_KEY_FALSE(housekeeping_overridden);
+extern unsigned long housekeeping_flags;
+
extern int housekeeping_any_cpu(enum hk_type type);
extern const struct cpumask *housekeeping_cpumask(enum hk_type type);
extern bool housekeeping_enabled(enum hk_type type);
extern void housekeeping_affine(struct task_struct *t, enum hk_type type);
extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
+
+static inline bool housekeeping_cpu(int cpu, enum hk_type type)
+{
+ if (housekeeping_flags & BIT(type))
+ return housekeeping_test_cpu(cpu, type);
+ else
+ return true;
+}
+
extern void __init housekeeping_init(void);
#else
@@ -58,17 +68,14 @@ static inline bool housekeeping_test_cpu(int cpu, enum hk_type type)
return true;
}
+static inline bool housekeeping_cpu(int cpu, enum hk_type type)
+{
+ return true;
+}
+
static inline void housekeeping_init(void) { }
#endif /* CONFIG_CPU_ISOLATION */
-static inline bool housekeeping_cpu(int cpu, enum hk_type type)
-{
-#ifdef CONFIG_CPU_ISOLATION
- if (static_branch_unlikely(&housekeeping_overridden))
- return housekeeping_test_cpu(cpu, type);
-#endif
- return true;
-}
static inline bool cpu_is_isolated(int cpu)
{
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index a4cf17b1fab0..2a6fc6fc46fb 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -16,19 +16,13 @@ enum hk_flags {
HK_FLAG_KERNEL_NOISE = BIT(HK_TYPE_KERNEL_NOISE),
};
-DEFINE_STATIC_KEY_FALSE(housekeeping_overridden);
-EXPORT_SYMBOL_GPL(housekeeping_overridden);
-
-struct housekeeping {
- cpumask_var_t cpumasks[HK_TYPE_MAX];
- unsigned long flags;
-};
-
-static struct housekeeping housekeeping;
+static cpumask_var_t housekeeping_cpumasks[HK_TYPE_MAX];
+unsigned long housekeeping_flags;
+EXPORT_SYMBOL_GPL(housekeeping_flags);
bool housekeeping_enabled(enum hk_type type)
{
- return !!(housekeeping.flags & BIT(type));
+ return !!(housekeeping_flags & BIT(type));
}
EXPORT_SYMBOL_GPL(housekeeping_enabled);
@@ -36,50 +30,46 @@ int housekeeping_any_cpu(enum hk_type type)
{
int cpu;
- if (static_branch_unlikely(&housekeeping_overridden)) {
- if (housekeeping.flags & BIT(type)) {
- cpu = sched_numa_find_closest(housekeeping.cpumasks[type], smp_processor_id());
- if (cpu < nr_cpu_ids)
- return cpu;
+ if (housekeeping_flags & BIT(type)) {
+ cpu = sched_numa_find_closest(housekeeping_cpumasks[type], smp_processor_id());
+ if (cpu < nr_cpu_ids)
+ return cpu;
- cpu = cpumask_any_and_distribute(housekeeping.cpumasks[type], cpu_online_mask);
- if (likely(cpu < nr_cpu_ids))
- return cpu;
- /*
- * Unless we have another problem this can only happen
- * at boot time before start_secondary() brings the 1st
- * housekeeping CPU up.
- */
- WARN_ON_ONCE(system_state == SYSTEM_RUNNING ||
- type != HK_TYPE_TIMER);
- }
+ cpu = cpumask_any_and_distribute(housekeeping_cpumasks[type], cpu_online_mask);
+ if (likely(cpu < nr_cpu_ids))
+ return cpu;
+ /*
+ * Unless we have another problem this can only happen
+ * at boot time before start_secondary() brings the 1st
+ * housekeeping CPU up.
+ */
+ WARN_ON_ONCE(system_state == SYSTEM_RUNNING ||
+ type != HK_TYPE_TIMER);
}
+
return smp_processor_id();
}
EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
const struct cpumask *housekeeping_cpumask(enum hk_type type)
{
- if (static_branch_unlikely(&housekeeping_overridden))
- if (housekeeping.flags & BIT(type))
- return housekeeping.cpumasks[type];
+ if (housekeeping_flags & BIT(type))
+ return housekeeping_cpumasks[type];
return cpu_possible_mask;
}
EXPORT_SYMBOL_GPL(housekeeping_cpumask);
void housekeeping_affine(struct task_struct *t, enum hk_type type)
{
- if (static_branch_unlikely(&housekeeping_overridden))
- if (housekeeping.flags & BIT(type))
- set_cpus_allowed_ptr(t, housekeeping.cpumasks[type]);
+ if (housekeeping_flags & BIT(type))
+ set_cpus_allowed_ptr(t, housekeeping_cpumasks[type]);
}
EXPORT_SYMBOL_GPL(housekeeping_affine);
bool housekeeping_test_cpu(int cpu, enum hk_type type)
{
- if (static_branch_unlikely(&housekeeping_overridden))
- if (housekeeping.flags & BIT(type))
- return cpumask_test_cpu(cpu, housekeeping.cpumasks[type]);
+ if (housekeeping_flags & BIT(type))
+ return cpumask_test_cpu(cpu, housekeeping_cpumasks[type]);
return true;
}
EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
@@ -88,17 +78,15 @@ void __init housekeeping_init(void)
{
enum hk_type type;
- if (!housekeeping.flags)
+ if (!housekeeping_flags)
return;
- static_branch_enable(&housekeeping_overridden);
-
- if (housekeeping.flags & HK_FLAG_KERNEL_NOISE)
+ if (housekeeping_flags & HK_FLAG_KERNEL_NOISE)
sched_tick_offload_init();
- for_each_set_bit(type, &housekeeping.flags, HK_TYPE_MAX) {
+ for_each_set_bit(type, &housekeeping_flags, HK_TYPE_MAX) {
/* We need at least one CPU to handle housekeeping work */
- WARN_ON_ONCE(cpumask_empty(housekeeping.cpumasks[type]));
+ WARN_ON_ONCE(cpumask_empty(housekeeping_cpumasks[type]));
}
}
@@ -106,8 +94,8 @@ static void __init housekeeping_setup_type(enum hk_type type,
cpumask_var_t housekeeping_staging)
{
- alloc_bootmem_cpumask_var(&housekeeping.cpumasks[type]);
- cpumask_copy(housekeeping.cpumasks[type],
+ alloc_bootmem_cpumask_var(&housekeeping_cpumasks[type]);
+ cpumask_copy(housekeeping_cpumasks[type],
housekeeping_staging);
}
@@ -117,7 +105,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
unsigned int first_cpu;
int err = 0;
- if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping.flags & HK_FLAG_KERNEL_NOISE)) {
+ if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping_flags & HK_FLAG_KERNEL_NOISE)) {
if (!IS_ENABLED(CONFIG_NO_HZ_FULL)) {
pr_warn("Housekeeping: nohz unsupported."
" Build with CONFIG_NO_HZ_FULL\n");
@@ -139,7 +127,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
__cpumask_set_cpu(smp_processor_id(), housekeeping_staging);
__cpumask_clear_cpu(smp_processor_id(), non_housekeeping_mask);
- if (!housekeeping.flags) {
+ if (!housekeeping_flags) {
pr_warn("Housekeeping: must include one present CPU, "
"using boot CPU:%d\n", smp_processor_id());
}
@@ -148,7 +136,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
if (cpumask_empty(non_housekeeping_mask))
goto free_housekeeping_staging;
- if (!housekeeping.flags) {
+ if (!housekeeping_flags) {
/* First setup call ("nohz_full=" or "isolcpus=") */
enum hk_type type;
@@ -157,26 +145,26 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
} else {
/* Second setup call ("nohz_full=" after "isolcpus=" or the reverse) */
enum hk_type type;
- unsigned long iter_flags = flags & housekeeping.flags;
+ unsigned long iter_flags = flags & housekeeping_flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
if (!cpumask_equal(housekeeping_staging,
- housekeeping.cpumasks[type])) {
+ housekeeping_cpumasks[type])) {
pr_warn("Housekeeping: nohz_full= must match isolcpus=\n");
goto free_housekeeping_staging;
}
}
- iter_flags = flags & ~housekeeping.flags;
+ iter_flags = flags & ~housekeeping_flags;
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
housekeeping_setup_type(type, housekeeping_staging);
}
- if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping.flags & HK_FLAG_KERNEL_NOISE))
+ if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping_flags & HK_FLAG_KERNEL_NOISE))
tick_nohz_full_setup(non_housekeeping_mask);
- housekeeping.flags |= flags;
+ housekeeping_flags |= flags;
err = 1;
free_housekeeping_staging:
--
2.51.0
* [PATCH 02/33] PCI: Protect against concurrent change of housekeeping cpumask
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 03/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
` (31 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Bjorn Helgaas, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, linux-pci
HK_TYPE_DOMAIN will soon integrate cpuset isolated partitions and
therefore be made modifiable at runtime. Synchronize against the
cpumask updates using RCU.
The RCU read-side section covers both the housekeeping CPU target
election for the PCI probe work and the work enqueue.
This way the housekeeping update side simply needs to flush the pending
related works after updating the housekeeping mask, in order to make
sure that no PCI work ever executes on an isolated CPU.
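Condensed shape of the pattern (sketch only; elect_probe_cpu() is a
hypothetical stand-in for the cpumask computation done below):
	rcu_read_lock();
	cpu = elect_probe_cpu(node);			/* reads HK_TYPE_DOMAIN under RCU */
	if (cpu < nr_cpu_ids)
		schedule_work_on(cpu, &arg.work);	/* enqueue in the same section */
	rcu_read_unlock();
	flush_work(&arg.work);				/* wait outside the RCU section */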
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
drivers/pci/pci-driver.c | 40 +++++++++++++++++++++++++++++++---------
1 file changed, 31 insertions(+), 9 deletions(-)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 63665240ae87..cf2b83004886 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -302,9 +302,8 @@ struct drv_dev_and_id {
const struct pci_device_id *id;
};
-static long local_pci_probe(void *_ddi)
+static int local_pci_probe(struct drv_dev_and_id *ddi)
{
- struct drv_dev_and_id *ddi = _ddi;
struct pci_dev *pci_dev = ddi->dev;
struct pci_driver *pci_drv = ddi->drv;
struct device *dev = &pci_dev->dev;
@@ -338,6 +337,19 @@ static long local_pci_probe(void *_ddi)
return 0;
}
+struct pci_probe_arg {
+ struct drv_dev_and_id *ddi;
+ struct work_struct work;
+ int ret;
+};
+
+static void local_pci_probe_callback(struct work_struct *work)
+{
+ struct pci_probe_arg *arg = container_of(work, struct pci_probe_arg, work);
+
+ arg->ret = local_pci_probe(arg->ddi);
+}
+
static bool pci_physfn_is_probed(struct pci_dev *dev)
{
#ifdef CONFIG_PCI_IOV
@@ -362,34 +374,44 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
dev->is_probed = 1;
cpu_hotplug_disable();
-
/*
* Prevent nesting work_on_cpu() for the case where a Virtual Function
* device is probed from work_on_cpu() of the Physical device.
*/
if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
pci_physfn_is_probed(dev)) {
- cpu = nr_cpu_ids;
+ error = local_pci_probe(&ddi);
} else {
cpumask_var_t wq_domain_mask;
+ struct pci_probe_arg arg = { .ddi = &ddi };
+
+ INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
error = -ENOMEM;
goto out;
}
+
+ rcu_read_lock();
cpumask_and(wq_domain_mask,
housekeeping_cpumask(HK_TYPE_WQ),
housekeeping_cpumask(HK_TYPE_DOMAIN));
cpu = cpumask_any_and(cpumask_of_node(node),
wq_domain_mask);
+ if (cpu < nr_cpu_ids) {
+ schedule_work_on(cpu, &arg.work);
+ rcu_read_unlock();
+ flush_work(&arg.work);
+ error = arg.ret;
+ } else {
+ rcu_read_unlock();
+ error = local_pci_probe(&ddi);
+ }
+
free_cpumask_var(wq_domain_mask);
+ destroy_work_on_stack(&arg.work);
}
-
- if (cpu < nr_cpu_ids)
- error = work_on_cpu(cpu, local_pci_probe, &ddi);
- else
- error = local_pci_probe(&ddi);
out:
dev->is_probed = 0;
cpu_hotplug_enable();
--
2.51.0
* [PATCH 03/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug"
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 02/33] PCI: Protect against concurrent change of housekeeping cpumask Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 04/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
` (30 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
1) The commit:
2b8272ff4a70 ("cpu/hotplug: Prevent self deadlock on CPU hot-unplug")
was added to fix an issue where the hotplug control task (BP) was
throttled between CPUHP_AP_IDLE_DEAD and CPUHP_HRTIMERS_PREPARE,
waiting in the hrtimer blind spot for the bandwidth callback queued on
the dead CPU.
2) Later on, the commit:
38685e2a0476 ("cpu/hotplug: Don't offline the last non-isolated CPU")
piggybacked on the target CPU selection of that workqueue-offloaded CPU
down process in order to prevent the last scheduler domain from being
destroyed.
3) Finally:
5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")
entirely removed the conditions for the race exposed and partially
fixed in 1). Offloading the CPU down process to a workqueue on another
CPU has therefore become unnecessary. But the last CPU belonging to the
scheduler domains must still remain online.
Therefore revert the now obsolete commit
2b8272ff4a70b866106ae13c36be7ecbef5d5da2 and move the housekeeping
check under the write-held cpu_hotplug_lock. Since HK_TYPE_DOMAIN will
include both isolcpus= and cpuset isolated partitions, the hotplug lock
will synchronize against concurrent cpuset partition updates.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/cpu.c | 37 +++++++++++--------------------------
1 file changed, 11 insertions(+), 26 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index db9f6c539b28..453a806af2ee 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1410,6 +1410,16 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
cpus_write_lock();
+ /*
+ * Keep at least one housekeeping cpu onlined to avoid generating
+ * an empty sched_domain span.
+ */
+ if (cpumask_any_and(cpu_online_mask,
+ housekeeping_cpumask(HK_TYPE_DOMAIN)) >= nr_cpu_ids) {
+ ret = -EBUSY;
+ goto out;
+ }
+
cpuhp_tasks_frozen = tasks_frozen;
prev_state = cpuhp_set_state(cpu, st, target);
@@ -1456,22 +1466,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
return ret;
}
-struct cpu_down_work {
- unsigned int cpu;
- enum cpuhp_state target;
-};
-
-static long __cpu_down_maps_locked(void *arg)
-{
- struct cpu_down_work *work = arg;
-
- return _cpu_down(work->cpu, 0, work->target);
-}
-
static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
{
- struct cpu_down_work work = { .cpu = cpu, .target = target, };
-
/*
* If the platform does not support hotplug, report it explicitly to
* differentiate it from a transient offlining failure.
@@ -1480,18 +1476,7 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
return -EOPNOTSUPP;
if (cpu_hotplug_disabled)
return -EBUSY;
-
- /*
- * Ensure that the control task does not run on the to be offlined
- * CPU to prevent a deadlock against cfs_b->period_timer.
- * Also keep at least one housekeeping cpu onlined to avoid generating
- * an empty sched_domain span.
- */
- for_each_cpu_and(cpu, cpu_online_mask, housekeeping_cpumask(HK_TYPE_DOMAIN)) {
- if (cpu != work.cpu)
- return work_on_cpu(cpu, __cpu_down_maps_locked, &work);
- }
- return -EBUSY;
+ return _cpu_down(cpu, 0, target);
}
static int cpu_down(unsigned int cpu, enum cpuhp_state target)
--
2.51.0
* [PATCH 04/33] memcg: Prepare to protect against concurrent isolated cpuset change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (2 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 03/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 05/33] mm: vmstat: " Frederic Weisbecker
` (29 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Johannes Weiner,
Marco Crivellari, Michal Hocko, Muchun Song, Peter Zijlstra,
Roman Gushchin, Shakeel Butt, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long
The HK_TYPE_DOMAIN housekeeping cpumask will soon be made modifiable at
runtime. In order to synchronize against the memcg workqueue and make
sure that no asynchronous draining is pending or executing on a newly
isolated CPU, target and queue a drain work under the same RCU critical
section.
Whenever housekeeping updates the HK_TYPE_DOMAIN cpumask, a memcg
workqueue flush will also be issued (in a further change) to make sure
that no work remains pending after a CPU has been made isolated.
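For reference, a sketch of the intended ordering on the housekeeping
update side (the RCU-published mask, memcg_wq and
mem_cgroup_flush_workqueue() only appear in later patches of this
series):
	rcu_assign_pointer(housekeeping_cpumasks[HK_TYPE_DOMAIN], new_mask);
	synchronize_rcu();		/* pre-existing readers (and their enqueues) are done */
	mem_cgroup_flush_workqueue();	/* drain works already queued to newly isolated CPUs */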
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
mm/memcontrol.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dd7fbed5a94..2649d6c09160 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1970,6 +1970,13 @@ static bool is_memcg_drain_needed(struct memcg_stock_pcp *stock,
return flush;
}
+static void schedule_drain_work(int cpu, struct work_struct *work)
+{
+ guard(rcu)();
+ if (!cpu_is_isolated(cpu))
+ schedule_work_on(cpu, work);
+}
+
/*
* Drains all per-CPU charge caches for given root_memcg resp. subtree
* of the hierarchy under it.
@@ -1999,8 +2006,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
&memcg_st->flags)) {
if (cpu == curcpu)
drain_local_memcg_stock(&memcg_st->work);
- else if (!cpu_is_isolated(cpu))
- schedule_work_on(cpu, &memcg_st->work);
+ else
+ schedule_drain_work(cpu, &memcg_st->work);
}
if (!test_bit(FLUSHING_CACHED_CHARGE, &obj_st->flags) &&
@@ -2009,8 +2016,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
&obj_st->flags)) {
if (cpu == curcpu)
drain_local_obj_stock(&obj_st->work);
- else if (!cpu_is_isolated(cpu))
- schedule_work_on(cpu, &obj_st->work);
+ else
+ schedule_drain_work(cpu, &obj_st->work);
}
}
migrate_enable();
--
2.51.0
* [PATCH 05/33] mm: vmstat: Prepare to protect against concurrent isolated cpuset change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (3 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 04/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 06/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
` (28 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, linux-mm
The HK_TYPE_DOMAIN housekeeping cpumask will soon be made modifiable at
runtime. In order to synchronize against the vmstat workqueue and make
sure that no asynchronous vmstat work is pending or executing on a
newly isolated CPU, target and queue a vmstat work under the same RCU
read-side critical section.
Whenever housekeeping updates the HK_TYPE_DOMAIN cpumask, a vmstat
workqueue flush will also be issued (in a further change) to make sure
that no work remains pending after a CPU has been made isolated.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
mm/vmstat.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 71cd1ceba191..b90325ee49d3 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2133,11 +2133,13 @@ static void vmstat_shepherd(struct work_struct *w)
* infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
* for all isolated CPUs to avoid interference with the isolated workload.
*/
- if (cpu_is_isolated(cpu))
- continue;
+ scoped_guard(rcu) {
+ if (cpu_is_isolated(cpu))
+ continue;
- if (!delayed_work_pending(dw) && need_update(cpu))
- queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+ if (!delayed_work_pending(dw) && need_update(cpu))
+ queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+ }
cond_resched();
}
--
2.51.0
* [PATCH 06/33] sched/isolation: Save boot defined domain flags
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (4 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 05/33] mm: vmstat: " Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 07/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
` (27 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Ingo Molnar, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
HK_TYPE_DOMAIN will soon integrate not only the boot-defined isolcpus=
CPUs but also cpuset isolated partitions.
Housekeeping still needs a way to record what was initially passed
through isolcpus= in order to keep these CPUs isolated after a cpuset
isolated partition containing some of them is modified or destroyed.
Create a new HK_TYPE_DOMAIN_BOOT to keep track of those.
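After this change, the two masks answer different questions (sketch):
	/* what was passed through isolcpus= at boot, immutable */
	housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT);
	/* isolcpus= plus, soon, cpuset isolated partitions; mutable */
	housekeeping_cpumask(HK_TYPE_DOMAIN);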
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/sched/isolation.h | 1 +
kernel/sched/isolation.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index f98ba0d71c52..9262378760b1 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -7,6 +7,7 @@
#include <linux/tick.h>
enum hk_type {
+ HK_TYPE_DOMAIN_BOOT,
HK_TYPE_DOMAIN,
HK_TYPE_MANAGED_IRQ,
HK_TYPE_KERNEL_NOISE,
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 2a6fc6fc46fb..fb414e28706d 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -11,6 +11,7 @@
#include "sched.h"
enum hk_flags {
+ HK_FLAG_DOMAIN_BOOT = BIT(HK_TYPE_DOMAIN_BOOT),
HK_FLAG_DOMAIN = BIT(HK_TYPE_DOMAIN),
HK_FLAG_MANAGED_IRQ = BIT(HK_TYPE_MANAGED_IRQ),
HK_FLAG_KERNEL_NOISE = BIT(HK_TYPE_KERNEL_NOISE),
@@ -204,7 +205,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
if (!strncmp(str, "domain,", 7)) {
str += 7;
- flags |= HK_FLAG_DOMAIN;
+ flags |= HK_FLAG_DOMAIN | HK_FLAG_DOMAIN_BOOT;
continue;
}
@@ -234,7 +235,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
/* Default behaviour for isolcpus without flags */
if (!flags)
- flags |= HK_FLAG_DOMAIN;
+ flags |= HK_FLAG_DOMAIN | HK_FLAG_DOMAIN_BOOT;
return housekeeping_setup(str, flags);
}
--
2.51.0
* [PATCH 07/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (5 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 06/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 08/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
` (26 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Johannes Weiner, Marco Crivellari,
Michal Hocko, Michal Koutny, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
boot_hk_cpus is an ad-hoc copy of HK_TYPE_DOMAIN_BOOT. Remove it and use
the official version.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/cgroup/cpuset.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 27adb04df675..b00d8e3c30ba 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -80,12 +80,6 @@ static cpumask_var_t subpartitions_cpus;
*/
static cpumask_var_t isolated_cpus;
-/*
- * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
- */
-static cpumask_var_t boot_hk_cpus;
-static bool have_boot_isolcpus;
-
/* List of remote partition root children */
static struct list_head remote_children;
@@ -1601,15 +1595,16 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
* @new_cpus: cpu mask
* Return: true if there is conflict, false otherwise
*
- * CPUs outside of boot_hk_cpus, if defined, can only be used in an
+ * CPUs outside of HK_TYPE_DOMAIN_BOOT, if defined, can only be used in an
* isolated partition.
*/
static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
{
- if (!have_boot_isolcpus)
+ if (!housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
return false;
- if ((prstate != PRS_ISOLATED) && !cpumask_subset(new_cpus, boot_hk_cpus))
+ if ((prstate != PRS_ISOLATED) &&
+ !cpumask_subset(new_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT)))
return true;
return false;
@@ -3764,12 +3759,9 @@ int __init cpuset_init(void)
BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
- have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
- if (have_boot_isolcpus) {
- BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
- cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
- cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
- }
+ if (housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
+ cpumask_andnot(isolated_cpus, cpu_possible_mask,
+ housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
return 0;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 08/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated to use HK_TYPE_DOMAIN_BOOT
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (6 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 07/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 09/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
` (25 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Danilo Krummrich, Greg Kroah-Hartman,
Marco Crivellari, Michal Hocko, Peter Zijlstra,
Rafael J . Wysocki, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
Make sure /sys/devices/system/cpu/isolated keeps printing only what was
passed through the isolcpus= parameter once HK_TYPE_DOMAIN also
integrates cpuset isolated partitions.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
drivers/base/cpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index efc575a00edd..f448e0b8e56d 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -291,7 +291,7 @@ static ssize_t print_cpus_isolated(struct device *dev,
return -ENOMEM;
cpumask_andnot(isolated, cpu_possible_mask,
- housekeeping_cpumask(HK_TYPE_DOMAIN));
+ housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
len = sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(isolated));
free_cpumask_var(isolated);
--
2.51.0
* [PATCH 09/33] net: Keep ignoring isolated cpuset change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (7 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 08/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 10/33] block: Protect against concurrent " Frederic Weisbecker
` (24 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, David S . Miller, Eric Dumazet,
Jakub Kicinski, Marco Crivellari, Michal Hocko, Paolo Abeni,
Peter Zijlstra, Simon Horman, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, netdev
The RPS cpumask can be overridden through sysfs/sysctl. The
boot-defined isolated CPUs are then excluded from that cpumask.
However HK_TYPE_DOMAIN will soon integrate cpuset isolated CPU updates,
and the RPS infrastructure needs more thought before it can propagate
such changes and synchronize against them.
Keep handling only what was passed through "isolcpus=" for now.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
net/core/net-sysfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index c28cd6665444..9b0081e444d6 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1022,7 +1022,7 @@ static int netdev_rx_queue_set_rps_mask(struct netdev_rx_queue *queue,
int rps_cpumask_housekeeping(struct cpumask *mask)
{
if (!cpumask_empty(mask)) {
- cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ));
if (cpumask_empty(mask))
return -EINVAL;
--
2.51.0
* [PATCH 10/33] block: Protect against concurrent isolated cpuset change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (8 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 09/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
` (23 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Jens Axboe, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long, linux-block
The block subsystem prevents its kworkers from running on isolated
CPUs, including those defined by cpuset isolated partitions. Since
HK_TYPE_DOMAIN will soon contain both and be subject to runtime
modifications, synchronize against housekeeping cpumask updates using
RCU.
For full support of cpuset changes, the block subsystem may need to
propagate changes to the isolated cpumask through the workqueue in the
future.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
block/blk-mq.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ba3a4b77f578..f2d1f2531fca 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4241,12 +4241,16 @@ static void blk_mq_map_swqueue(struct request_queue *q)
/*
* Rule out isolated CPUs from hctx->cpumask to avoid
- * running block kworker on isolated CPUs
+ * running block kworker on isolated CPUs.
+ * FIXME: cpuset should propagate further changes to isolated CPUs
+ * here.
*/
+ rcu_read_lock();
for_each_cpu(cpu, hctx->cpumask) {
if (cpu_is_isolated(cpu))
cpumask_clear_cpu(cpu, hctx->cpumask);
}
+ rcu_read_unlock();
/*
* Initialize batch roundrobin counts
--
2.51.0
* [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (9 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 10/33] block: Protect against concurrent " Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
` (22 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
cpuset modifies partitions, including isolated ones, while read-holding
the CPU hotplug lock.
This means that write-holding the CPU hotplug lock is sufficient to
synchronize against housekeeping cpumask changes.
Provide a lockdep check to validate that.
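A sketch of the intended use, as realized later in this series
(lockdep_is_cpuset_held() is introduced in a subsequent patch):
	mask = rcu_dereference_check(housekeeping_cpumasks[type],
				     lockdep_is_cpus_write_held() ||
				     lockdep_is_cpuset_held());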
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/cpuhplock.h | 1 +
include/linux/percpu-rwsem.h | 1 +
kernel/cpu.c | 5 +++++
3 files changed, 7 insertions(+)
diff --git a/include/linux/cpuhplock.h b/include/linux/cpuhplock.h
index f7aa20f62b87..286b3ab92e15 100644
--- a/include/linux/cpuhplock.h
+++ b/include/linux/cpuhplock.h
@@ -13,6 +13,7 @@
struct device;
extern int lockdep_is_cpus_held(void);
+extern int lockdep_is_cpus_write_held(void);
#ifdef CONFIG_HOTPLUG_CPU
void cpus_write_lock(void);
diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 288f5235649a..c8cb010d655e 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -161,6 +161,7 @@ extern void percpu_free_rwsem(struct percpu_rw_semaphore *);
__percpu_init_rwsem(sem, #sem, &rwsem_key); \
})
+#define percpu_rwsem_is_write_held(sem) lockdep_is_held_type(sem, 0)
#define percpu_rwsem_is_held(sem) lockdep_is_held(sem)
#define percpu_rwsem_assert_held(sem) lockdep_assert_held(sem)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 453a806af2ee..3b0443f7c486 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -534,6 +534,11 @@ int lockdep_is_cpus_held(void)
{
return percpu_rwsem_is_held(&cpu_hotplug_lock);
}
+
+int lockdep_is_cpus_write_held(void)
+{
+ return percpu_rwsem_is_write_held(&cpu_hotplug_lock);
+}
#endif
static void lockdep_acquire_cpus_lock(void)
--
2.51.0
* [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (10 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
` (21 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Johannes Weiner,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
cpuset modifies partitions, including isolated ones, while holding the
cpuset mutex.
This means that holding the cpuset mutex is sufficient to synchronize
against housekeeping cpumask changes.
Provide a lockdep check to validate that.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/cpuset.h | 2 ++
kernel/cgroup/cpuset.c | 7 +++++++
2 files changed, 9 insertions(+)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 2ddb256187b5..051d36fec578 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -18,6 +18,8 @@
#include <linux/mmu_context.h>
#include <linux/jump_label.h>
+extern bool lockdep_is_cpuset_held(void);
+
#ifdef CONFIG_CPUSETS
/*
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index b00d8e3c30ba..2d2fc74bc00c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -254,6 +254,13 @@ void cpuset_unlock(void)
mutex_unlock(&cpuset_mutex);
}
+#ifdef CONFIG_LOCKDEP
+bool lockdep_is_cpuset_held(void)
+{
+ return lockdep_is_held(&cpuset_mutex);
+}
+#endif
+
static DEFINE_SPINLOCK(callback_lock);
void cpuset_callback_lock_irq(void)
--
2.51.0
* [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (11 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
` (20 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Ingo Molnar, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
HK_TYPE_DOMAIN's cpumask will soon be made modifiable by cpuset. A
mechanism is then needed to synchronize those updates with the
housekeeping cpumask readers.
Turn the housekeeping cpumasks into RCU pointers. Once a housekeeping
cpumask is modified, the update side will wait for an RCU grace period
and propagate the change to interested subsystems when deemed
necessary.
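This enables the canonical RCU update pattern on the write side
(sketch; the actual updater arrives in a later patch):
	old = rcu_dereference_protected(housekeeping_cpumasks[type],
					lockdep_is_cpuset_held());
	rcu_assign_pointer(housekeeping_cpumasks[type], new);
	synchronize_rcu();	/* wait out all pre-existing readers */
	kfree(old);		/* now safe to free the old mask */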
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/sched/isolation.c | 52 ++++++++++++++++++++++++++--------------
kernel/sched/sched.h | 1 +
2 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index fb414e28706d..5ddb8dc5ca91 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -17,7 +17,7 @@ enum hk_flags {
HK_FLAG_KERNEL_NOISE = BIT(HK_TYPE_KERNEL_NOISE),
};
-static cpumask_var_t housekeeping_cpumasks[HK_TYPE_MAX];
+static struct cpumask __rcu *housekeeping_cpumasks[HK_TYPE_MAX];
unsigned long housekeeping_flags;
EXPORT_SYMBOL_GPL(housekeeping_flags);
@@ -27,16 +27,25 @@ bool housekeeping_enabled(enum hk_type type)
}
EXPORT_SYMBOL_GPL(housekeeping_enabled);
+const struct cpumask *housekeeping_cpumask(enum hk_type type)
+{
+ if (housekeeping_flags & BIT(type)) {
+ return rcu_dereference_check(housekeeping_cpumasks[type], 1);
+ }
+ return cpu_possible_mask;
+}
+EXPORT_SYMBOL_GPL(housekeeping_cpumask);
+
int housekeeping_any_cpu(enum hk_type type)
{
int cpu;
if (housekeeping_flags & BIT(type)) {
- cpu = sched_numa_find_closest(housekeeping_cpumasks[type], smp_processor_id());
+ cpu = sched_numa_find_closest(housekeeping_cpumask(type), smp_processor_id());
if (cpu < nr_cpu_ids)
return cpu;
- cpu = cpumask_any_and_distribute(housekeeping_cpumasks[type], cpu_online_mask);
+ cpu = cpumask_any_and_distribute(housekeeping_cpumask(type), cpu_online_mask);
if (likely(cpu < nr_cpu_ids))
return cpu;
/*
@@ -52,25 +61,17 @@ int housekeeping_any_cpu(enum hk_type type)
}
EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
-const struct cpumask *housekeeping_cpumask(enum hk_type type)
-{
- if (housekeeping_flags & BIT(type))
- return housekeeping_cpumasks[type];
- return cpu_possible_mask;
-}
-EXPORT_SYMBOL_GPL(housekeeping_cpumask);
-
void housekeeping_affine(struct task_struct *t, enum hk_type type)
{
if (housekeeping_flags & BIT(type))
- set_cpus_allowed_ptr(t, housekeeping_cpumasks[type]);
+ set_cpus_allowed_ptr(t, housekeeping_cpumask(type));
}
EXPORT_SYMBOL_GPL(housekeeping_affine);
bool housekeeping_test_cpu(int cpu, enum hk_type type)
{
if (housekeeping_flags & BIT(type))
- return cpumask_test_cpu(cpu, housekeeping_cpumasks[type]);
+ return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
return true;
}
EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
@@ -85,9 +86,23 @@ void __init housekeeping_init(void)
if (housekeeping_flags & HK_FLAG_KERNEL_NOISE)
sched_tick_offload_init();
+ /*
+ * Realloc with a proper allocator so that any cpumask update
+ * can indifferently free the old version with kfree().
+ */
for_each_set_bit(type, &housekeeping_flags, HK_TYPE_MAX) {
+ struct cpumask *omask, *nmask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+ if (WARN_ON_ONCE(!nmask))
+ return;
+
+ omask = rcu_dereference(housekeeping_cpumasks[type]);
+
/* We need at least one CPU to handle housekeeping work */
- WARN_ON_ONCE(cpumask_empty(housekeeping_cpumasks[type]));
+ WARN_ON_ONCE(cpumask_empty(omask));
+ cpumask_copy(nmask, omask);
+ RCU_INIT_POINTER(housekeeping_cpumasks[type], nmask);
+ memblock_free(omask, cpumask_size());
}
}
@@ -95,9 +110,10 @@ static void __init housekeeping_setup_type(enum hk_type type,
cpumask_var_t housekeeping_staging)
{
- alloc_bootmem_cpumask_var(&housekeeping_cpumasks[type]);
- cpumask_copy(housekeeping_cpumasks[type],
- housekeeping_staging);
+ struct cpumask *mask = memblock_alloc_or_panic(cpumask_size(), SMP_CACHE_BYTES);
+
+ cpumask_copy(mask, housekeeping_staging);
+ RCU_INIT_POINTER(housekeeping_cpumasks[type], mask);
}
static int __init housekeeping_setup(char *str, unsigned long flags)
@@ -150,7 +166,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
if (!cpumask_equal(housekeeping_staging,
- housekeeping_cpumasks[type])) {
+ housekeeping_cpumask(type))) {
pr_warn("Housekeeping: nohz_full= must match isolcpus=\n");
goto free_housekeeping_staging;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index be9745d104f7..0b1a233dcabf 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -42,6 +42,7 @@
#include <linux/ktime_api.h>
#include <linux/lockdep_api.h>
#include <linux/lockdep.h>
+#include <linux/memblock.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/module.h>
--
2.51.0
* [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (12 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-09-01 0:40 ` Waiman Long
2025-08-29 15:47 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
` (19 subsequent siblings)
33 siblings, 1 reply; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Ingo Molnar,
Johannes Weiner, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
Until now, HK_TYPE_DOMAIN only included the boot-defined isolated CPUs
passed through the isolcpus= option. Users interested in also knowing
the runtime-defined isolated CPUs through cpuset must use different
APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
That approach has several drawbacks:
1) Most interested subsystems want to know about all isolated CPUs, not
just those defined at boot time.
2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized
with concurrent cpuset changes.
3) Further cpuset modifications are not propagated to subsystems.
Solve 1) and 2) by centralizing all isolated CPUs within the
HK_TYPE_DOMAIN housekeeping cpumask.
Subsystems can rely on RCU to synchronize against concurrent changes.
The propagation mentioned in 3) will be handled in further patches.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/sched/isolation.h | 4 +-
kernel/cgroup/cpuset.c | 2 +
kernel/sched/isolation.c | 65 ++++++++++++++++++++++++++++++---
kernel/sched/sched.h | 1 +
4 files changed, 65 insertions(+), 7 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 9262378760b1..199d0fc4646f 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -36,12 +36,13 @@ extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
static inline bool housekeeping_cpu(int cpu, enum hk_type type)
{
- if (housekeeping_flags & BIT(type))
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
return housekeeping_test_cpu(cpu, type);
else
return true;
}
+extern int housekeeping_update(struct cpumask *mask, enum hk_type type);
extern void __init housekeeping_init(void);
#else
@@ -74,6 +75,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
return true;
}
+static inline int housekeeping_update(struct cpumask *mask, enum hk_type type) { return 0; }
static inline void housekeeping_init(void) { }
#endif /* CONFIG_CPU_ISOLATION */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 2d2fc74bc00c..4f2bc68332a7 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1351,6 +1351,8 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+ ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 5ddb8dc5ca91..48f3b6b20604 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -23,16 +23,39 @@ EXPORT_SYMBOL_GPL(housekeeping_flags);
bool housekeeping_enabled(enum hk_type type)
{
- return !!(housekeeping_flags & BIT(type));
+ return !!(READ_ONCE(housekeeping_flags) & BIT(type));
}
EXPORT_SYMBOL_GPL(housekeeping_enabled);
+static bool housekeeping_dereference_check(enum hk_type type)
+{
+ if (type == HK_TYPE_DOMAIN) {
+ if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
+ return true;
+ if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
+ return true;
+
+ return false;
+ }
+
+ return true;
+}
+
+static inline struct cpumask *__housekeeping_cpumask(enum hk_type type)
+{
+ return rcu_dereference_check(housekeeping_cpumasks[type],
+ housekeeping_dereference_check(type));
+}
+
const struct cpumask *housekeeping_cpumask(enum hk_type type)
{
- if (housekeeping_flags & BIT(type)) {
- return rcu_dereference_check(housekeeping_cpumasks[type], 1);
- }
- return cpu_possible_mask;
+ const struct cpumask *mask = NULL;
+
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
+ mask = __housekeeping_cpumask(type);
+ if (!mask)
+ mask = cpu_possible_mask;
+ return mask;
}
EXPORT_SYMBOL_GPL(housekeeping_cpumask);
@@ -70,12 +93,42 @@ EXPORT_SYMBOL_GPL(housekeeping_affine);
bool housekeeping_test_cpu(int cpu, enum hk_type type)
{
- if (housekeeping_flags & BIT(type))
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
return true;
}
EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
+int housekeeping_update(struct cpumask *mask, enum hk_type type)
+{
+ struct cpumask *trial, *old = NULL;
+
+ if (type != HK_TYPE_DOMAIN)
+ return -ENOTSUPP;
+
+ trial = kmalloc(sizeof(*trial), GFP_KERNEL);
+ if (!trial)
+ return -ENOMEM;
+
+ cpumask_andnot(trial, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT), mask);
+ if (!cpumask_intersects(trial, cpu_online_mask)) {
+ kfree(trial);
+ return -EINVAL;
+ }
+
+ if (housekeeping_flags & BIT(type))
+ old = __housekeeping_cpumask(type);
+ else
+ WRITE_ONCE(housekeeping_flags, housekeeping_flags | BIT(type));
+ rcu_assign_pointer(housekeeping_cpumasks[type], trial);
+
+ synchronize_rcu();
+
+ kfree(old);
+
+ return 0;
+}
+
void __init housekeeping_init(void)
{
enum hk_type type;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0b1a233dcabf..d3512138027b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -30,6 +30,7 @@
#include <linux/context_tracking.h>
#include <linux/cpufreq.h>
#include <linux/cpumask_api.h>
+#include <linux/cpuset.h>
#include <linux/ctype.h>
#include <linux/file.h>
#include <linux/fs_api.h>
--
2.51.0
* [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (13 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
` (18 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Ingo Molnar, Johannes Weiner,
Marco Crivellari, Michal Hocko, Muchun Song, Peter Zijlstra,
Roman Gushchin, Shakeel Butt, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, cgroups, linux-mm
The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime.
In order to synchronize against the memcg workqueue and make sure that
no asynchronous draining is still pending or executing on a newly
isolated CPU, the housekeeping subsystem must flush the memcg
workqueues.
However the memcg works can't easily be flushed since they are queued
to the main per-CPU workqueue pool.
Solve this by creating a memcg-specific workqueue and by providing and
using the appropriate flushing API.
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/memcontrol.h | 4 ++++
kernel/sched/isolation.c | 2 ++
kernel/sched/sched.h | 1 +
mm/memcontrol.c | 12 +++++++++++-
4 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 785173aa0739..8b23ff000473 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1048,6 +1048,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
return id;
}
+void mem_cgroup_flush_workqueue(void);
+
extern int mem_cgroup_init(void);
#else /* CONFIG_MEMCG */
@@ -1453,6 +1455,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
return 0;
}
+static inline void mem_cgroup_flush_workqueue(void) { }
+
static inline int mem_cgroup_init(void) { return 0; }
#endif /* CONFIG_MEMCG */
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 48f3b6b20604..e85f402b103a 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -124,6 +124,8 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
synchronize_rcu();
+ mem_cgroup_flush_workqueue();
+
kfree(old);
return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d3512138027b..1dad1ac7fc61 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
#include <linux/lockdep_api.h>
#include <linux/lockdep.h>
#include <linux/memblock.h>
+#include <linux/memcontrol.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/module.h>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2649d6c09160..1aa2dfa32ccd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -95,6 +95,8 @@ static bool cgroup_memory_nokmem __ro_after_init;
/* BPF memory accounting disabled? */
static bool cgroup_memory_nobpf __ro_after_init;
+static struct workqueue_struct *memcg_wq __ro_after_init;
+
static struct kmem_cache *memcg_cachep;
static struct kmem_cache *memcg_pn_cachep;
@@ -1974,7 +1976,7 @@ static void schedule_drain_work(int cpu, struct work_struct *work)
{
guard(rcu)();
if (!cpu_is_isolated(cpu))
- schedule_work_on(cpu, work);
+ queue_work_on(cpu, memcg_wq, work);
}
/*
@@ -5071,6 +5073,11 @@ void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
refill_stock(memcg, nr_pages);
}
+void mem_cgroup_flush_workqueue(void)
+{
+ flush_workqueue(memcg_wq);
+}
+
static int __init cgroup_memory(char *s)
{
char *token;
@@ -5113,6 +5120,9 @@ int __init mem_cgroup_init(void)
cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
memcg_hotplug_cpu_dead);
+ memcg_wq = alloc_workqueue("memcg", 0, 0);
+ WARN_ON(!memcg_wq);
+
for_each_possible_cpu(cpu) {
INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
drain_local_memcg_stock);
--
2.51.0
* [PATCH 16/33] sched/isolation: Flush vmstat workqueues on cpuset isolated partition change
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (14 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
` (17 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Ingo Molnar, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, linux-mm
The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime.
In order to make sure that no asynchronous vmstat work is still pending
or executing on a newly made isolated CPU, the housekeeping subsystem
must flush the vmstat workqueues.
This involves flushing the whole mm_percpu_wq workqueue, which is
shared with the LRU drain work, a welcome side effect.
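For reference, the resulting ordering in housekeeping_update(), sketched
from this patch and the previous one:

    rcu_assign_pointer(housekeeping_cpumasks[type], trial);
    synchronize_rcu();            /* readers now see the new mask */
    mem_cgroup_flush_workqueue(); /* no memcg drain work left behind */
    vmstat_flush_workqueue();     /* no vmstat work left behind */
    kfree(old);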
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/vmstat.h | 2 ++
kernel/sched/isolation.c | 1 +
kernel/sched/sched.h | 1 +
mm/vmstat.c | 5 +++++
4 files changed, 9 insertions(+)
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c287998908bf..a81aa5635b47 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -303,6 +303,7 @@ int calculate_pressure_threshold(struct zone *zone);
int calculate_normal_threshold(struct zone *zone);
void set_pgdat_percpu_threshold(pg_data_t *pgdat,
int (*calculate_pressure)(struct zone *));
+void vmstat_flush_workqueue(void);
#else /* CONFIG_SMP */
/*
@@ -403,6 +404,7 @@ static inline void __dec_node_page_state(struct page *page,
static inline void refresh_zone_stat_thresholds(void) { }
static inline void cpu_vm_stats_fold(int cpu) { }
static inline void quiet_vmstat(void) { }
+static inline void vmstat_flush_workqueue(void) { }
static inline void drain_zonestat(struct zone *zone,
struct per_cpu_zonestat *pzstats) { }
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index e85f402b103a..86ce39aa1e9f 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -125,6 +125,7 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
synchronize_rcu();
mem_cgroup_flush_workqueue();
+ vmstat_flush_workqueue();
kfree(old);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1dad1ac7fc61..2d4de083200a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -68,6 +68,7 @@
#include <linux/types.h>
#include <linux/u64_stats_sync_api.h>
#include <linux/uaccess.h>
+#include <linux/vmstat.h>
#include <linux/wait_api.h>
#include <linux/wait_bit.h>
#include <linux/workqueue_api.h>
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b90325ee49d3..69412b61fe1b 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2113,6 +2113,11 @@ static void vmstat_shepherd(struct work_struct *w);
static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
+void vmstat_flush_workqueue(void)
+{
+ flush_workqueue(mm_percpu_wq);
+}
+
static void vmstat_shepherd(struct work_struct *w)
{
int cpu;
--
2.51.0
* [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (15 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-09-01 2:51 ` Waiman Long
2025-08-29 15:47 ` [PATCH 18/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
` (16 subsequent siblings)
33 siblings, 1 reply; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Ingo Molnar,
Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long, cgroups
Until now, cpuset would propagate isolated partition changes to
workqueues so that unbound workers get properly reaffined.
Since housekeeping now centralizes, synchronizes and propagates
isolation cpumask changes, perform that work from the housekeeping
subsystem instead, for consolidation and consistency purposes.
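The resulting update path looks roughly like this (names taken from the
hunks below):

    cpuset: update_housekeeping_cpumask()
      housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN)
        rcu_assign_pointer() + synchronize_rcu()
        mem_cgroup_flush_workqueue()
        vmstat_flush_workqueue()
        workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type))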
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/workqueue.h | 2 +-
init/Kconfig | 1 +
kernel/cgroup/cpuset.c | 14 ++++++--------
kernel/sched/isolation.c | 4 +++-
kernel/workqueue.c | 2 +-
5 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 45d5dd470ff6..19fee865ce2a 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -588,7 +588,7 @@ struct workqueue_attrs *alloc_workqueue_attrs_noprof(void);
void free_workqueue_attrs(struct workqueue_attrs *attrs);
int apply_workqueue_attrs(struct workqueue_struct *wq,
const struct workqueue_attrs *attrs);
-extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);
+extern int workqueue_unbound_exclude_cpumask(const struct cpumask *cpumask);
extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work);
diff --git a/init/Kconfig b/init/Kconfig
index 836320251219..af05cf89db12 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1230,6 +1230,7 @@ config CPUSETS
bool "Cpuset controller"
depends on SMP
select UNION_FIND
+ select CPU_ISOLATION
help
This option will let you create and manage CPUSETs which
allow dynamically partitioning a system into sets of CPUs and
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4f2bc68332a7..eb8d01d23af6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1340,7 +1340,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_housekeeping_cpumask(bool isolcpus_updated)
{
int ret;
@@ -1349,8 +1349,6 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
if (!isolcpus_updated)
return;
- ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
- WARN_ON_ONCE(ret < 0);
ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
WARN_ON_ONCE(ret < 0);
}
@@ -1473,7 +1471,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
list_add(&cs->remote_sibling, &remote_children);
cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
cpuset_force_rebuild();
cs->prs_err = 0;
@@ -1514,7 +1512,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_exclusive_cpumask(cs, NULL, NULL);
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
cpuset_force_rebuild();
/*
@@ -1583,7 +1581,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
if (xcpus)
cpumask_copy(cs->exclusive_cpus, xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
if (adding || deleting)
cpuset_force_rebuild();
@@ -1947,7 +1945,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive_flag(cs, new_prs);
@@ -2972,7 +2970,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
else if (isolcpus_updated)
isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
/* Force update if switching back to member & update effective_xcpus */
update_cpumasks_hier(cs, &tmpmask, !new_prs);
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 86ce39aa1e9f..5baf1621a56e 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -102,6 +102,7 @@ EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
int housekeeping_update(struct cpumask *mask, enum hk_type type)
{
struct cpumask *trial, *old = NULL;
+ int err;
if (type != HK_TYPE_DOMAIN)
return -ENOTSUPP;
@@ -126,10 +127,11 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
mem_cgroup_flush_workqueue();
vmstat_flush_workqueue();
+ err = workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type));
kfree(old);
- return 0;
+ return err;
}
void __init housekeeping_init(void)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c6b79b3675c3..63dcc1d8b317 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6921,7 +6921,7 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
* This function can be called from cpuset code to provide a set of isolated
* CPUs that should be excluded from wq_unbound_cpumask.
*/
-int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
+int workqueue_unbound_exclude_cpumask(const struct cpumask *exclude_cpumask)
{
cpumask_var_t cpumask;
int ret = 0;
--
2.51.0
* [PATCH 18/33] cpuset: Remove cpuset_cpu_is_isolated()
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (16 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
@ 2025-08-29 15:47 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
` (15 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:47 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Johannes Weiner,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
The set of cpuset isolated CPUs is now included in the HK_TYPE_DOMAIN
housekeeping cpumask. There is no use case left that is interested in
checking only what is isolated by cpuset and not by the isolcpus=
kernel boot parameter.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/cpuset.h | 6 ------
include/linux/sched/isolation.h | 3 +--
kernel/cgroup/cpuset.c | 12 ------------
3 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 051d36fec578..a10775a4f702 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -78,7 +78,6 @@ extern void cpuset_lock(void);
extern void cpuset_unlock(void);
extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
-extern bool cpuset_cpu_is_isolated(int cpu);
extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
#define cpuset_current_mems_allowed (current->mems_allowed)
void cpuset_init_current_mems_allowed(void);
@@ -208,11 +207,6 @@ static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
return false;
}
-static inline bool cpuset_cpu_is_isolated(int cpu)
-{
- return false;
-}
-
static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
{
return node_possible_map;
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 199d0fc4646f..c02923ed4cbe 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -83,8 +83,7 @@ static inline void housekeeping_init(void) { }
static inline bool cpu_is_isolated(int cpu)
{
return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
- !housekeeping_test_cpu(cpu, HK_TYPE_TICK) ||
- cpuset_cpu_is_isolated(cpu);
+ !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}
#endif /* _LINUX_SCHED_ISOLATION_H */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index eb8d01d23af6..df1dfacf5f9d 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -29,7 +29,6 @@
#include <linux/mempolicy.h>
#include <linux/mm.h>
#include <linux/memory.h>
-#include <linux/export.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/deadline.h>
@@ -1353,17 +1352,6 @@ static void update_housekeeping_cpumask(bool isolcpus_updated)
WARN_ON_ONCE(ret < 0);
}
-/**
- * cpuset_cpu_is_isolated - Check if the given CPU is isolated
- * @cpu: the CPU number to be checked
- * Return: true if CPU is used in an isolated partition, false otherwise
- */
-bool cpuset_cpu_is_isolated(int cpu)
-{
- return cpumask_test_cpu(cpu, isolated_cpus);
-}
-EXPORT_SYMBOL_GPL(cpuset_cpu_is_isolated);
-
/*
* compute_effective_exclusive_cpumask - compute effective exclusive CPUs
* @cs: cpuset
--
2.51.0
* [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (17 preceding siblings ...)
2025-08-29 15:47 ` [PATCH 18/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-09-02 14:28 ` Waiman Long
2025-08-29 15:48 ` [PATCH 20/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
` (14 subsequent siblings)
33 siblings, 1 reply; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Ingo Molnar, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
It doesn't make sense to use nohz_full without also isolating the
related CPUs from the domain topology, either through the use of
isolcpus= or cpuset isolated partitions.
And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.
This means that the HK_TYPE_KERNEL_NOISE housekeeping cpumask (of which
HK_TYPE_TICK is only an alias) is always a superset of the
HK_TYPE_DOMAIN one.
Therefore if a CPU is not in the HK_TYPE_KERNEL_NOISE cpumask, it can't
be in the HK_TYPE_DOMAIN cpumask either. Testing the latter is then
enough.
Simplify cpu_is_isolated() accordingly.
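Spelled out, with hk(x) denoting the housekeeping cpumask of type x:

    hk(DOMAIN) subset-of hk(KERNEL_NOISE)
    => cpu not in hk(KERNEL_NOISE) implies cpu not in hk(DOMAIN)
    => !test(cpu, HK_TYPE_DOMAIN) || !test(cpu, HK_TYPE_TICK)
       reduces to !test(cpu, HK_TYPE_DOMAIN)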
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/sched/isolation.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index c02923ed4cbe..8d6d26d3fdf5 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -82,8 +82,7 @@ static inline void housekeeping_init(void) { }
static inline bool cpu_is_isolated(int cpu)
{
- return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
- !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
+ return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN);
}
#endif /* _LINUX_SCHED_ISOLATION_H */
--
2.51.0
* [PATCH 20/33] PCI: Remove superfluous HK_TYPE_WQ check
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (18 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 21/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
` (13 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Bjorn Helgaas, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, linux-pci
It doesn't make sense to use nohz_full without also isolating the
related CPUs from the domain topology, either through the use of
isolcpus= or cpuset isolated partitions.
And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.
This means that the HK_TYPE_KERNEL_NOISE housekeeping cpumask (of which
HK_TYPE_WQ is only an alias) is always a superset of the HK_TYPE_DOMAIN
one.
Therefore:
HK_TYPE_KERNEL_NOISE & HK_TYPE_DOMAIN = HK_TYPE_DOMAIN
Simplify the PCI probe target CPU selection accordingly.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
drivers/pci/pci-driver.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index cf2b83004886..326112ec516e 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -382,23 +382,14 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
pci_physfn_is_probed(dev)) {
error = local_pci_probe(&ddi);
} else {
- cpumask_var_t wq_domain_mask;
struct pci_probe_arg arg = { .ddi = &ddi };
INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
- if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
- error = -ENOMEM;
- goto out;
- }
-
rcu_read_lock();
- cpumask_and(wq_domain_mask,
- housekeeping_cpumask(HK_TYPE_WQ),
- housekeeping_cpumask(HK_TYPE_DOMAIN));
-
cpu = cpumask_any_and(cpumask_of_node(node),
- wq_domain_mask);
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+
if (cpu < nr_cpu_ids) {
schedule_work_on(cpu, &arg.work);
rcu_read_unlock();
@@ -409,10 +400,9 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
error = local_pci_probe(&ddi);
}
- free_cpumask_var(wq_domain_mask);
destroy_work_on_stack(&arg.work);
}
-out:
+
dev->is_probed = 0;
cpu_hotplug_enable();
return error;
--
2.51.0
* [PATCH 21/33] kthread: Refine naming of affinity related fields
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (19 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 20/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 22/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
` (12 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
The kthreads preferred affinity related fields use "hotplug" as the base
of their naming because the affinity management initially only dealt
with CPU hotplug.
The scope of this role is now going to broaden and also cover cpuset
isolated partition updates.
Switch the naming accordingly.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 31b072e8d427..c4dd967e9e9c 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -35,8 +35,8 @@ static DEFINE_SPINLOCK(kthread_create_lock);
static LIST_HEAD(kthread_create_list);
struct task_struct *kthreadd_task;
-static LIST_HEAD(kthreads_hotplug);
-static DEFINE_MUTEX(kthreads_hotplug_lock);
+static LIST_HEAD(kthread_affinity_list);
+static DEFINE_MUTEX(kthread_affinity_lock);
struct kthread_create_info
{
@@ -69,7 +69,7 @@ struct kthread {
/* To store the full name if task comm is truncated. */
char *full_name;
struct task_struct *task;
- struct list_head hotplug_node;
+ struct list_head affinity_node;
struct cpumask *preferred_affinity;
};
@@ -128,7 +128,7 @@ bool set_kthread_struct(struct task_struct *p)
init_completion(&kthread->exited);
init_completion(&kthread->parked);
- INIT_LIST_HEAD(&kthread->hotplug_node);
+ INIT_LIST_HEAD(&kthread->affinity_node);
p->vfork_done = &kthread->exited;
kthread->task = p;
@@ -323,10 +323,10 @@ void __noreturn kthread_exit(long result)
{
struct kthread *kthread = to_kthread(current);
kthread->result = result;
- if (!list_empty(&kthread->hotplug_node)) {
- mutex_lock(&kthreads_hotplug_lock);
- list_del(&kthread->hotplug_node);
- mutex_unlock(&kthreads_hotplug_lock);
+ if (!list_empty(&kthread->affinity_node)) {
+ mutex_lock(&kthread_affinity_lock);
+ list_del(&kthread->affinity_node);
+ mutex_unlock(&kthread_affinity_lock);
if (kthread->preferred_affinity) {
kfree(kthread->preferred_affinity);
@@ -390,9 +390,9 @@ static void kthread_affine_node(void)
return;
}
- mutex_lock(&kthreads_hotplug_lock);
- WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
- list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+ mutex_lock(&kthread_affinity_lock);
+ WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+ list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
/*
* The node cpumask is racy when read from kthread() but:
* - a racing CPU going down will either fail on the subsequent
@@ -402,7 +402,7 @@ static void kthread_affine_node(void)
*/
kthread_fetch_affinity(kthread, affinity);
set_cpus_allowed_ptr(current, affinity);
- mutex_unlock(&kthreads_hotplug_lock);
+ mutex_unlock(&kthread_affinity_lock);
free_cpumask_var(affinity);
}
@@ -876,10 +876,10 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
goto out;
}
- mutex_lock(&kthreads_hotplug_lock);
+ mutex_lock(&kthread_affinity_lock);
cpumask_copy(kthread->preferred_affinity, mask);
- WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
- list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+ WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+ list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
kthread_fetch_affinity(kthread, affinity);
/* It's safe because the task is inactive. */
@@ -887,7 +887,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
do_set_cpus_allowed(p, affinity);
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
- mutex_unlock(&kthreads_hotplug_lock);
+ mutex_unlock(&kthread_affinity_lock);
out:
free_cpumask_var(affinity);
@@ -908,9 +908,9 @@ static int kthreads_online_cpu(unsigned int cpu)
struct kthread *k;
int ret;
- guard(mutex)(&kthreads_hotplug_lock);
+ guard(mutex)(&kthread_affinity_lock);
- if (list_empty(&kthreads_hotplug))
+ if (list_empty(&kthread_affinity_list))
return 0;
if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
@@ -918,7 +918,7 @@ static int kthreads_online_cpu(unsigned int cpu)
ret = 0;
- list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
+ list_for_each_entry(k, &kthread_affinity_list, affinity_node) {
if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
kthread_is_per_cpu(k->task))) {
ret = -EINVAL;
--
2.51.0
* [PATCH 22/33] kthread: Include unbound kthreads in the managed affinity list
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (20 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 21/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 23/33] kthread: Include kthreadd to " Frederic Weisbecker
` (11 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
The managed affinity list currently contains only the unbound kthreads
that have affinity preferences. Unbound kthreads that are globally
affine by default are kept off the list because their affinity is
automatically managed by the scheduler (through the fallback
housekeeping mask) and by cpuset.
However in order to preserve the preferred affinity of kthreads, cpuset
will delegate the isolated partition update propagation to the
housekeeping and kthread code.
Prepare for that by including all unbound kthreads in the managed
affinity list.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 59 ++++++++++++++++++++++++------------------------
1 file changed, 30 insertions(+), 29 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index c4dd967e9e9c..cba3d297f267 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -365,9 +365,10 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
if (kthread->preferred_affinity) {
pref = kthread->preferred_affinity;
} else {
- if (WARN_ON_ONCE(kthread->node == NUMA_NO_NODE))
- return;
- pref = cpumask_of_node(kthread->node);
+ if (kthread->node == NUMA_NO_NODE)
+ pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
+ else
+ pref = cpumask_of_node(kthread->node);
}
cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
@@ -380,32 +381,29 @@ static void kthread_affine_node(void)
struct kthread *kthread = to_kthread(current);
cpumask_var_t affinity;
- WARN_ON_ONCE(kthread_is_per_cpu(current));
+ if (WARN_ON_ONCE(kthread_is_per_cpu(current)))
+ return;
- if (kthread->node == NUMA_NO_NODE) {
- housekeeping_affine(current, HK_TYPE_KTHREAD);
- } else {
- if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
- WARN_ON_ONCE(1);
- return;
- }
-
- mutex_lock(&kthread_affinity_lock);
- WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
- list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
- /*
- * The node cpumask is racy when read from kthread() but:
- * - a racing CPU going down will either fail on the subsequent
- * call to set_cpus_allowed_ptr() or be migrated to housekeepers
- * afterwards by the scheduler.
- * - a racing CPU going up will be handled by kthreads_online_cpu()
- */
- kthread_fetch_affinity(kthread, affinity);
- set_cpus_allowed_ptr(current, affinity);
- mutex_unlock(&kthread_affinity_lock);
-
- free_cpumask_var(affinity);
+ if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
+ WARN_ON_ONCE(1);
+ return;
}
+
+ mutex_lock(&kthread_affinity_lock);
+ WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+ list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
+ /*
+ * The node cpumask is racy when read from kthread() but:
+ * - a racing CPU going down will either fail on the subsequent
+ * call to set_cpus_allowed_ptr() or be migrated to housekeepers
+ * afterwards by the scheduler.
+ * - a racing CPU going up will be handled by kthreads_online_cpu()
+ */
+ kthread_fetch_affinity(kthread, affinity);
+ set_cpus_allowed_ptr(current, affinity);
+ mutex_unlock(&kthread_affinity_lock);
+
+ free_cpumask_var(affinity);
}
static int kthread(void *_create)
@@ -924,8 +922,11 @@ static int kthreads_online_cpu(unsigned int cpu)
ret = -EINVAL;
continue;
}
- kthread_fetch_affinity(k, affinity);
- set_cpus_allowed_ptr(k->task, affinity);
+
+ if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+ kthread_fetch_affinity(k, affinity);
+ set_cpus_allowed_ptr(k->task, affinity);
+ }
}
free_cpumask_var(affinity);
--
2.51.0
* [PATCH 23/33] kthread: Include kthreadd to the managed affinity list
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (21 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 22/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 24/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
` (10 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
The unbound kthreads affinity management performed by cpuset is going to
be moved into the kthread core code for consolidation purposes.
Treat kthreadd just like any other kthread.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index cba3d297f267..cb0be05d6091 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -820,12 +820,13 @@ int kthreadd(void *unused)
/* Setup a clean context for our children to inherit. */
set_task_comm(tsk, comm);
ignore_signals(tsk);
- set_cpus_allowed_ptr(tsk, housekeeping_cpumask(HK_TYPE_KTHREAD));
set_mems_allowed(node_states[N_MEMORY]);
current->flags |= PF_NOFREEZE;
cgroup_init_kthreadd();
+ kthread_affine_node();
+
for (;;) {
set_current_state(TASK_INTERRUPTIBLE);
if (list_empty(&kthread_create_list))
--
2.51.0
* [PATCH 24/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (22 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 23/33] kthread: Include kthreadd to " Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 25/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
` (9 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
Unbound kthreads want to run neither on nohz_full CPUs nor on domain
isolated CPUs. And since nohz_full implies domain isolation, checking
the latter is enough to cover both.
Therefore base the kthreads preferred affinity management on
HK_TYPE_DOMAIN to keep them off domain isolated CPUs.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index cb0be05d6091..8d0c8c4c7e46 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -362,18 +362,20 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
{
const struct cpumask *pref;
+ guard(rcu)();
+
if (kthread->preferred_affinity) {
pref = kthread->preferred_affinity;
} else {
if (kthread->node == NUMA_NO_NODE)
- pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
+ pref = housekeeping_cpumask(HK_TYPE_DOMAIN);
else
pref = cpumask_of_node(kthread->node);
}
- cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
+ cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_DOMAIN));
if (cpumask_empty(cpumask))
- cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_KTHREAD));
+ cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_DOMAIN));
}
static void kthread_affine_node(void)
--
2.51.0
* [PATCH 25/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (23 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 24/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Frederic Weisbecker
` (8 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Catalin Marinas, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, Will Deacon, linux-arm-kernel
Tasks that have all their allowed CPUs offline don't want their affinity
to fall back on either nohz_full CPUs or domain isolated CPUs. And
since nohz_full implies domain isolation, checking the latter is enough
to cover both.
Therefore exclude domain isolated CPUs from the fallback task affinity.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/mmu_context.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index ac01dc4eb2ce..ed3dd0f3fe19 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -24,7 +24,7 @@ static inline void leave_mm(void) { }
#ifndef task_cpu_possible_mask
# define task_cpu_possible_mask(p) cpu_possible_mask
# define task_cpu_possible(cpu, p) true
-# define task_cpu_fallback_mask(p) housekeeping_cpumask(HK_TYPE_TICK)
+# define task_cpu_fallback_mask(p) housekeeping_cpumask(HK_TYPE_DOMAIN)
#else
# define task_cpu_possible(cpu, p) cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
#endif
--
2.51.0
* [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (24 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 25/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-09-02 15:44 ` Waiman Long
2025-08-29 15:48 ` [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
` (7 subsequent siblings)
33 siblings, 1 reply; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Gabriele Monaco, Johannes Weiner, Marco Crivellari, Michal Hocko,
Michal Koutný, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Waiman Long, cgroups, Frederic Weisbecker
From: Gabriele Monaco <gmonaco@redhat.com>
Currently the user can set up isolated CPUs via cpuset and nohz_full in
such a way that no housekeeping CPU is left (i.e. no CPU that is
neither domain isolated nor nohz_full). This can be a problem for other
subsystems (e.g. the timer wheel migration).
Prevent such configurations by rejecting any assignment that would
cause the union of domain isolated CPUs and nohz_full CPUs to cover all
CPUs.
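For example, on a hypothetical 4-CPU machine booted with nohz_full=2-3
(paths and error reporting are illustrative):

    # CPUs 2-3 are nohz_full; isolating CPUs 0-1 would leave no CPU
    # that is neither domain isolated nor nohz_full
    echo 0-1 > /sys/fs/cgroup/part1/cpuset.cpus
    echo isolated > /sys/fs/cgroup/part1/cpuset.cpus.partition
    # the isolated switch is rejected with PERR_HKEEPING, the reason
    # being reported through cpuset.cpus.partition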
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/cgroup/cpuset.c | 57 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index df1dfacf5f9d..8260dd699fd8 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1275,6 +1275,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
}
+/*
+ * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
+ * @prs: new or old partition_root_state
+ * @parent: parent cpuset
+ * Return: true if isolated_cpus needs modification, false otherwise
+ */
+static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
+{
+ if (!parent)
+ parent = &top_cpuset;
+ return prs != parent->partition_root_state;
+}
+
/*
* partition_xcpus_add - Add new exclusive CPUs to partition
* @new_prs: new partition_root_state
@@ -1339,6 +1352,36 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
+/*
+ * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
+ * @new_cpus: cpu mask for cpus that are going to be isolated
+ * Return: true if there is conflict, false otherwise
+ *
+ * If nohz_full is enabled and we have isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ */
+static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
+{
+ cpumask_var_t full_hk_cpus;
+ int res = false;
+
+ if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
+ return false;
+
+ if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
+ return true;
+
+ cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
+ cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
+ if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
+ res = true;
+
+ free_cpumask_var(full_hk_cpus);
+ return res;
+}
+
static void update_housekeeping_cpumask(bool isolcpus_updated)
{
int ret;
@@ -1453,6 +1496,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
return PERR_INVCPUS;
+ if (isolated_cpus_should_update(new_prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->new_cpus))
+ return PERR_HKEEPING;
spin_lock_irq(&callback_lock);
isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
@@ -1552,6 +1598,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
cs->prs_err = PERR_NOCPUS;
+ else if (isolated_cpus_should_update(prs, NULL) &&
+ isolcpus_nohz_conflict(tmp->addmask))
+ cs->prs_err = PERR_HKEEPING;
if (cs->prs_err)
goto invalidate;
}
@@ -1904,6 +1953,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
return err;
}
+ if (deleting && isolated_cpus_should_update(new_prs, parent) &&
+ isolcpus_nohz_conflict(tmp->delmask)) {
+ cs->prs_err = PERR_HKEEPING;
+ return PERR_HKEEPING;
+ }
+
/*
* Change the parent's effective_cpus & effective_xcpus (top cpuset
* only).
@@ -2924,6 +2979,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Need to update isolated_cpus.
*/
isolcpus_updated = true;
+ if (isolcpus_nohz_conflict(cs->effective_xcpus))
+ err = PERR_HKEEPING;
} else {
/*
* Switching back to member is always allowed even if it
--
2.51.0
* [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (25 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-09-02 16:43 ` Waiman Long
2025-08-29 15:48 ` [PATCH 28/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
` (6 subsequent siblings)
33 siblings, 1 reply; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Catalin Marinas, Marco Crivellari,
Michal Hocko, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
Waiman Long, Will Deacon, linux-arm-kernel
When none of the allowed CPUs of a task are online, it gets migrated
to the fallback cpumask, which is currently all the non nohz_full CPUs.
However just like nohz_full CPUs, domain isolated CPUs don't want to be
disturbed by tasks that have lost their CPU affinities.
And since nohz_full relies on domain isolation to work correctly, the
housekeeping mask of domain isolated CPUs is always a subset of the
housekeeping mask of nohz_full CPUs (there can be CPUs that are domain
isolated but not nohz_full, OTOH there can't be nohz_full CPUs that are
not domain isolated):
HK_TYPE_DOMAIN & HK_TYPE_KERNEL_NOISE == HK_TYPE_DOMAIN
Therefore use HK_TYPE_DOMAIN as the appropriate fallback target for
tasks. And since this cpumask can now be modified at runtime, make sure
that the 32-bit EL0 capable CPUs of mismatched ARM64 systems are not
isolated by cpuset.
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 18 ++++++++++++---
include/linux/cpu.h | 4 ++++
kernel/cgroup/cpuset.c | 40 +++++++++++++++++++++++-----------
3 files changed, 46 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..38046489d2ea 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1653,6 +1653,18 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
return feature_matches(val, entry);
}
+/*
+ * CPUs supporting 32-bit EL0 can't be isolated because tasks may be
+ * arbitrarily affine to them, defeating the purpose of isolation.
+ */
+bool arch_isolated_cpus_can_update(struct cpumask *new_cpus)
+{
+ if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
+ return !cpumask_intersects(cpu_32bit_el0_mask, new_cpus);
+ else
+ return true;
+}
+
const struct cpumask *system_32bit_el0_cpumask(void)
{
if (!system_supports_32bit_el0())
@@ -1666,7 +1678,7 @@ const struct cpumask *system_32bit_el0_cpumask(void)
const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
{
- return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
+ return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_DOMAIN));
}
static int __init parse_32bit_el0_param(char *str)
@@ -3963,8 +3975,8 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
bool cpu_32bit = false;
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
- if (!housekeeping_cpu(cpu, HK_TYPE_TICK))
- pr_info("Treating adaptive-ticks CPU %u as 64-bit only\n", cpu);
+ if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN))
+ pr_info("Treating domain isolated CPU %u as 64-bit only\n", cpu);
else
cpu_32bit = true;
}
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index b91b993f58ee..8bb239080534 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -228,4 +228,8 @@ static inline bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
#define smt_mitigations SMT_MITIGATIONS_OFF
#endif
+struct cpumask;
+
+bool arch_isolated_cpus_can_update(struct cpumask *new_cpus);
+
#endif /* _LINUX_CPU_H_ */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8260dd699fd8..cf99ea844c1d 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1352,33 +1352,47 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
+bool __weak arch_isolated_cpus_can_update(struct cpumask *new_cpus)
+{
+ return true;
+}
+
/*
- * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
+ * isolated_cpus_can_update - check for conflicts against housekeeping and
+ * CPU capabilities.
* @new_cpus: cpu mask for cpus that are going to be isolated
- * Return: true if there is conflict, false otherwise
+ * Return: true if there is no conflict, false otherwise
*
- * If nohz_full is enabled and we have isolated CPUs, their combination must
- * still leave housekeeping CPUs.
+ * Check for conflicts:
+ * - If nohz_full is enabled and there are isolated CPUs, their combination must
+ * still leave housekeeping CPUs.
+ * - Architecture has CPU capabilities incompatible with being isolated
*/
-static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
+static bool isolated_cpus_can_update(struct cpumask *new_cpus)
{
cpumask_var_t full_hk_cpus;
- int res = false;
+ bool res;
+
+ if (!arch_isolated_cpus_can_update(new_cpus))
+ return false;
if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
- return false;
+ return true;
if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
- return true;
+ return false;
+
+ res = true;
cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
housekeeping_cpumask(HK_TYPE_DOMAIN));
cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
- res = true;
+ res = false;
free_cpumask_var(full_hk_cpus);
+
return res;
}
@@ -1497,7 +1511,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
return PERR_INVCPUS;
if (isolated_cpus_should_update(new_prs, NULL) &&
- isolcpus_nohz_conflict(tmp->new_cpus))
+ !isolated_cpus_can_update(tmp->new_cpus))
return PERR_HKEEPING;
spin_lock_irq(&callback_lock);
@@ -1599,7 +1613,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
cs->prs_err = PERR_NOCPUS;
else if (isolated_cpus_should_update(prs, NULL) &&
- isolcpus_nohz_conflict(tmp->addmask))
+ !isolated_cpus_can_update(tmp->addmask))
cs->prs_err = PERR_HKEEPING;
if (cs->prs_err)
goto invalidate;
@@ -1954,7 +1968,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
}
if (deleting && isolated_cpus_should_update(new_prs, parent) &&
- isolcpus_nohz_conflict(tmp->delmask)) {
+ !isolated_cpus_can_update(tmp->delmask)) {
cs->prs_err = PERR_HKEEPING;
return PERR_HKEEPING;
}
@@ -2979,7 +2993,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Need to update isolated_cpus.
*/
isolcpus_updated = true;
- if (isolcpus_nohz_conflict(cs->effective_xcpus))
+ if (!isolated_cpus_can_update(cs->effective_xcpus))
err = PERR_HKEEPING;
} else {
/*
--
2.51.0
* [PATCH 28/33] kthread: Honour kthreads preferred affinity after cpuset changes
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (26 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 29/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
` (5 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Ingo Molnar, Johannes Weiner,
Marco Crivellari, Michal Hocko, Michal Koutný,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long, cgroups
When cpuset isolated partitions get updated, unbound kthreads get
indiscriminately affined to all non-isolated CPUs, regardless of their
individual affinity preferences.
For example kswapd is a per-node kthread that prefers to be affine to
the node it refers to. Whenever an isolated partition is created,
updated or deleted, kswapd's node affinity is going to be broken if any
CPU in the related node is not isolated, because kswapd will be affined
globally.
Fix this by letting the consolidated kthread managed affinity code do
the affinity update on behalf of cpuset.
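The resulting propagation, roughly (names from the hunks below):

    cpuset partition update
      housekeeping_update(mask, HK_TYPE_DOMAIN)
        kthreads_update_housekeeping()
          kthreads_update_affinity(force = true)
            for each kthread on kthread_affinity_list:
              kthread_fetch_affinity() /* node/preferred mask restricted
                                          to HK_TYPE_DOMAIN */
              set_cpus_allowed_ptr()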
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/kthread.h | 1 +
kernel/cgroup/cpuset.c | 5 ++---
kernel/kthread.c | 38 +++++++++++++++++++++++++++++---------
kernel/sched/isolation.c | 2 ++
4 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8d27403888ce..c92c1149ee6e 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -100,6 +100,7 @@ void kthread_unpark(struct task_struct *k);
void kthread_parkme(void);
void kthread_exit(long result) __noreturn;
void kthread_complete_and_exit(struct completion *, long) __noreturn;
+int kthreads_update_housekeeping(void);
int kthreadd(void *unused);
extern struct task_struct *kthreadd_task;
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index cf99ea844c1d..e76711fa7d34 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1130,11 +1130,10 @@ void cpuset_update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
if (top_cs) {
/*
+ * PF_KTHREAD tasks are handled by housekeeping.
* PF_NO_SETAFFINITY tasks are ignored.
- * All per cpu kthreads should have PF_NO_SETAFFINITY
- * flag set, see kthread_set_per_cpu().
*/
- if (task->flags & PF_NO_SETAFFINITY)
+ if (task->flags & (PF_KTHREAD | PF_NO_SETAFFINITY))
continue;
cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
} else {
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 8d0c8c4c7e46..4d3cc04e5e8b 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -896,14 +896,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
}
EXPORT_SYMBOL_GPL(kthread_affine_preferred);
-/*
- * Re-affine kthreads according to their preferences
- * and the newly online CPU. The CPU down part is handled
- * by select_fallback_rq() which default re-affines to
- * housekeepers from other nodes in case the preferred
- * affinity doesn't apply anymore.
- */
-static int kthreads_online_cpu(unsigned int cpu)
+static int kthreads_update_affinity(bool force)
{
cpumask_var_t affinity;
struct kthread *k;
@@ -926,7 +919,7 @@ static int kthreads_online_cpu(unsigned int cpu)
continue;
}
- if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+ if (force || k->preferred_affinity || k->node != NUMA_NO_NODE) {
kthread_fetch_affinity(k, affinity);
set_cpus_allowed_ptr(k->task, affinity);
}
@@ -937,6 +930,33 @@ static int kthreads_online_cpu(unsigned int cpu)
return ret;
}
+/**
+ * kthreads_update_housekeeping - Update kthreads affinity on cpuset change
+ *
+ * When cpuset changes a partition type to/from "isolated" or updates related
+ * cpumasks, propagate the housekeeping cpumask change to preferred kthreads
+ * affinity.
+ *
+ * Returns 0 if successful, -ENOMEM if temporary mask couldn't
+ * be allocated or -EINVAL in case of internal error.
+ */
+int kthreads_update_housekeeping(void)
+{
+ return kthreads_update_affinity(true);
+}
+
+/*
+ * Re-affine kthreads according to their preferences
+ * and the newly online CPU. The CPU down part is handled
+ * by select_fallback_rq() which default re-affines to
+ * housekeepers from other nodes in case the preferred
+ * affinity doesn't apply anymore.
+ */
+static int kthreads_online_cpu(unsigned int cpu)
+{
+ return kthreads_update_affinity(false);
+}
+
static int kthreads_init(void)
{
return cpuhp_setup_state(CPUHP_AP_KTHREADS_ONLINE, "kthreads:online",
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 5baf1621a56e..51392eb9b221 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -128,6 +128,8 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
mem_cgroup_flush_workqueue();
vmstat_flush_workqueue();
err = workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type));
+ WARN_ON_ONCE(err < 0);
+ err = kthreads_update_housekeeping();
kfree(old);
--
2.51.0
* [PATCH 29/33] kthread: Comment on the purpose and placement of kthread_affine_node() call
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (27 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 28/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 30/33] kthread: Add API to update preferred affinity on kthread runtime Frederic Weisbecker
` (4 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long
It may not be obvious why kthread_affine_node() is called after the
first wake-up instead of before the kthread creation completes.
The reason is that kthread_affine_node() applies a default affinity
behaviour that only takes place if no affinity preference has already
been passed by the kthread creation call site.
Add a comment to clarify that.
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 4d3cc04e5e8b..d36bdfbd004e 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -453,6 +453,10 @@ static int kthread(void *_create)
self->started = 1;
+ /*
+ * Apply default node affinity if no call to kthread_bind[_mask]() nor
+ * kthread_affine_preferred() was issued before the first wake-up.
+ */
if (!(current->flags & PF_NO_SETAFFINITY) && !self->preferred_affinity)
kthread_affine_node();
--
2.51.0
* [PATCH 30/33] kthread: Add API to update preferred affinity on kthread runtime
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (28 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 29/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 31/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
` (3 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
Kthreads can apply for a preferred affinity upon creation but they have
no means to update that preferred affinity after the first wake-up.
Indeed kthread_affine_preferred() is optimized by assuming the kthread
is sleeping while the allowed cpumask is applied.
Therefore introduce a new API to further update the preferred affinity.
It will be used by IRQ kthreads.
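A minimal caller sketch (the kthread and the mask are illustrative, not
taken from this series):

    /* 'tsk' must have been set up with kthread_affine_preferred() */
    int err = kthread_affine_preferred_update(tsk, new_mask);

    if (err)
        pr_warn("preferred affinity update failed: %d\n", err);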
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/kthread.h | 1 +
kernel/kthread.c | 55 +++++++++++++++++++++++++++++++++++------
2 files changed, 48 insertions(+), 8 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c92c1149ee6e..a06cae7f2c55 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -86,6 +86,7 @@ void free_kthread_struct(struct task_struct *k);
void kthread_bind(struct task_struct *k, unsigned int cpu);
void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask);
+int kthread_affine_preferred_update(struct task_struct *p, const struct cpumask *mask);
int kthread_stop(struct task_struct *k);
int kthread_stop_put(struct task_struct *k);
bool kthread_should_stop(void);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index d36bdfbd004e..f3397cf7542a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -322,17 +322,16 @@ EXPORT_SYMBOL_GPL(kthread_parkme);
void __noreturn kthread_exit(long result)
{
struct kthread *kthread = to_kthread(current);
+ struct cpumask *to_free = NULL;
kthread->result = result;
- if (!list_empty(&kthread->affinity_node)) {
- mutex_lock(&kthread_affinity_lock);
- list_del(&kthread->affinity_node);
- mutex_unlock(&kthread_affinity_lock);
- if (kthread->preferred_affinity) {
- kfree(kthread->preferred_affinity);
- kthread->preferred_affinity = NULL;
- }
+ scoped_guard(mutex, &kthread_affinity_lock) {
+ if (!list_empty(&kthread->affinity_node))
+ list_del_init(&kthread->affinity_node);
+ to_free = kthread->preferred_affinity;
+ kthread->preferred_affinity = NULL;
}
+ kfree(to_free);
do_exit(0);
}
EXPORT_SYMBOL(kthread_exit);
@@ -900,6 +899,46 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
}
EXPORT_SYMBOL_GPL(kthread_affine_preferred);
+/**
+ * kthread_affine_preferred_update - update a kthread's preferred affinity
+ * @p: thread created by kthread_create().
+ * @mask: new mask of CPUs (might not be online, must be possible) for @p
+ * to run on.
+ *
+ * Update the preferred affinity cpumask that was previously set with
+ * kthread_affine_preferred(). This can be called either before or after
+ * the first wakeup of the kthread.
+ *
+ * Returns 0 if the affinity has been applied.
+ */
+int kthread_affine_preferred_update(struct task_struct *p,
+ const struct cpumask *mask)
+{
+ struct kthread *kthread = to_kthread(p);
+ cpumask_var_t affinity;
+ int ret = 0;
+
+ if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
+ return -ENOMEM;
+
+ scoped_guard(mutex, &kthread_affinity_lock) {
+ if (WARN_ON_ONCE(!kthread->preferred_affinity ||
+ list_empty(&kthread->affinity_node))) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ cpumask_copy(kthread->preferred_affinity, mask);
+ kthread_fetch_affinity(kthread, affinity);
+ set_cpus_allowed_ptr(p, affinity);
+ }
+out:
+ free_cpumask_var(affinity);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(kthread_affine_preferred_update);
+
static int kthreads_update_affinity(bool force)
{
cpumask_var_t affinity;
--
2.51.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 31/33] kthread: Document kthread_affine_preferred()
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (29 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 30/33] kthread: Add API to update preferred affinity on kthread runtime Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [RFC PATCH 32/33] genirq: Correctly handle preferred kthreads affinity Frederic Weisbecker
` (2 subsequent siblings)
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
The documentation of this new API has been overlooked during its
introduction. Fill the gap.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/kthread.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index f3397cf7542a..b989aeaa441a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -857,6 +857,18 @@ int kthreadd(void *unused)
return 0;
}
+/**
+ * kthread_affine_preferred - Define a kthread's preferred affinity
+ * @p: thread created by kthread_create().
+ * @mask: preferred mask of CPUs (might not be online, must be possible) for @p
+ * to run on.
+ *
+ * Similar to kthread_bind_mask() except that the affinity is not a requirement
+ * but rather a preference that can be constrained by CPU isolation or CPU hotplug.
+ * Must be called before the first wakeup of the kthread.
+ *
+ * Returns 0 if the affinity has been applied.
+ */
int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
{
struct kthread *kthread = to_kthread(p);
--
2.51.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [RFC PATCH 32/33] genirq: Correctly handle preferred kthreads affinity
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (30 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 31/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
2025-09-02 19:12 ` [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Waiman Long
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
[CHECKME: Do some IRQ threads have strong affinity requirements? In
which case they should use kthread_bind()...]
The affinity of IRQ threads is applied through a direct call to the
scheduler. As a result this affinity may not be carried correctly across
hotplug events, cpuset isolated partition updates, or against
housekeeping constraints.
For example, simply creating a cpuset isolated partition will overwrite
the affinity of all IRQ threads to the non-isolated CPUs.
To prevent that, use the appropriate kthread affinity APIs, which take
care of the preferred affinity during these kinds of events.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/irq/manage.c | 47 +++++++++++++++++++++++++++------------------
1 file changed, 28 insertions(+), 19 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c94837382037..d96f6675c888 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -176,15 +176,15 @@ bool irq_can_set_affinity_usr(unsigned int irq)
}
/**
- * irq_set_thread_affinity - Notify irq threads to adjust affinity
+ * irq_thread_notify_affinity - Notify irq threads to adjust affinity
* @desc: irq descriptor which has affinity changed
*
* Just set IRQTF_AFFINITY and delegate the affinity setting to the
- * interrupt thread itself. We can not call set_cpus_allowed_ptr() here as
- * we hold desc->lock and this code can be called from hard interrupt
+ * interrupt thread itself. We can not call kthread_affine_preferred_update()
+ * here as we hold desc->lock and this code can be called from hard interrupt
* context.
*/
-static void irq_set_thread_affinity(struct irq_desc *desc)
+static void irq_thread_notify_affinity(struct irq_desc *desc)
{
struct irqaction *action;
@@ -283,7 +283,7 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
fallthrough;
case IRQ_SET_MASK_OK_NOCOPY:
irq_validate_effective_affinity(data);
- irq_set_thread_affinity(desc);
+ irq_thread_notify_affinity(desc);
ret = 0;
}
@@ -1032,11 +1032,26 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a
}
if (valid)
- set_cpus_allowed_ptr(current, mask);
+ kthread_affine_preferred_update(current, mask);
free_cpumask_var(mask);
}
+
+static inline void irq_thread_set_affinity(struct task_struct *t,
+ struct irq_desc *desc)
+{
+ const struct cpumask *mask;
+
+ if (cpumask_available(desc->irq_common_data.affinity))
+ mask = irq_data_get_effective_affinity_mask(&desc->irq_data);
+ else
+ mask = cpu_possible_mask;
+
+ kthread_affine_preferred(t, mask);
+}
#else
static inline void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { }
+static inline void irq_thread_set_affinity(struct task_struct *t,
+ struct irq_desc *desc) { }
#endif
static int irq_wait_for_interrupt(struct irq_desc *desc,
@@ -1384,7 +1399,8 @@ static void irq_nmi_teardown(struct irq_desc *desc)
}
static int
-setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
+setup_irq_thread(struct irqaction *new, struct irq_desc *desc,
+ unsigned int irq, bool secondary)
{
struct task_struct *t;
@@ -1405,16 +1421,9 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
* references an already freed task_struct.
*/
new->thread = get_task_struct(t);
- /*
- * Tell the thread to set its affinity. This is
- * important for shared interrupt handlers as we do
- * not invoke setup_affinity() for the secondary
- * handlers as everything is already set up. Even for
- * interrupts marked with IRQF_NO_BALANCE this is
- * correct as we want the thread to move to the cpu(s)
- * on which the requesting code placed the interrupt.
- */
- set_bit(IRQTF_AFFINITY, &new->thread_flags);
+
+ irq_thread_set_affinity(t, desc);
+
return 0;
}
@@ -1486,11 +1495,11 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
* thread.
*/
if (new->thread_fn && !nested) {
- ret = setup_irq_thread(new, irq, false);
+ ret = setup_irq_thread(new, desc, irq, false);
if (ret)
goto out_mput;
if (new->secondary) {
- ret = setup_irq_thread(new->secondary, irq, true);
+ ret = setup_irq_thread(new->secondary, desc, irq, true);
if (ret)
goto out_thread;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 33/33] doc: Add housekeeping documentation
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (31 preceding siblings ...)
2025-08-29 15:48 ` [RFC PATCH 32/33] genirq: Correctly handle preferred kthreads affinity Frederic Weisbecker
@ 2025-08-29 15:48 ` Frederic Weisbecker
2025-09-02 19:12 ` [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Waiman Long
33 siblings, 0 replies; 43+ messages in thread
From: Frederic Weisbecker @ 2025-08-29 15:48 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Waiman Long
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
Documentation/cpu_isolation/housekeeping.rst | 111 +++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 Documentation/cpu_isolation/housekeeping.rst
diff --git a/Documentation/cpu_isolation/housekeeping.rst b/Documentation/cpu_isolation/housekeeping.rst
new file mode 100644
index 000000000000..e5417302774c
--- /dev/null
+++ b/Documentation/cpu_isolation/housekeeping.rst
@@ -0,0 +1,111 @@
+======================================
+Housekeeping
+======================================
+
+
+CPU isolation moves away kernel work that may otherwise run on any CPU.
+The purpose of its related features is to reduce the OS jitter that some
+extreme workloads can't tolerate, such as some DPDK use cases.
+
+The kernel work moved away by CPU isolation is commonly described as
+"housekeeping" because it includes ground work that performs cleanups,
+statistics maintenance and actions relying on them, memory release,
+various deferrals etc...
+
+Sometimes housekeeping is just some unbound work (unbound workqueues,
+unbound timers, ...) that gets easily assigned to non-isolated CPUs.
+But sometimes housekeeping is tied to a specific CPU and requires
+elaborate tricks to be offloaded to non-isolated CPUs (RCU_NOCB, remote
+scheduler tick, etc...).
+
+Thus, a housekeeping CPU can be considered the reverse of an isolated
+CPU. It is simply a CPU that can execute housekeeping work. There must
+always be at least one online housekeeping CPU at any time. The CPUs that
+are not isolated are automatically assigned as housekeeping.
+
+Housekeeping is currently divided into four features described
+by the ``enum hk_type type``:
+
+1. HK_TYPE_DOMAIN matches the work moved away by scheduler domain
+ isolation performed through ``isolcpus=domain`` boot parameter or
+ isolated cpuset partitions in cgroup v2. This includes scheduler
+ load balancing, unbound workqueues and timers.
+
+2. HK_TYPE_KERNEL_NOISE matches the work moved away by tick isolation
+ performed through ``nohz_full=`` or ``isolcpus=nohz`` boot
+ parameters. This includes remote scheduler tick, vmstat and lockup
+ watchdog.
+
+3. HK_TYPE_MANAGED_IRQ matches the IRQ handlers moved away by managed
+ IRQ isolation performed through ``isolcpus=managed_irq``.
+
+4. HK_TYPE_DOMAIN_BOOT matches the work moved away by scheduler domain
+ isolation performed through ``isolcpus=domain`` only. It is similar
+ to HK_TYPE_DOMAIN except it ignores the isolation performed by
+ cpusets.
+
+
+Housekeeping cpumasks
+=================================
+
+Housekeeping cpumasks include the CPUs that can execute the work moved
+away by the matching isolation feature. These cpumasks are returned by
+the following function::
+
+ const struct cpumask *housekeeping_cpumask(enum hk_type type)
+
+By default, if neither ``nohz_full=``, nor ``isolcpus``, nor cpuset's
+isolated partitions are used, which covers most use cases, this function
+returns ``cpu_possible_mask``.
+
+Otherwise the function returns the complement of the CPUs isolated by
+the matching feature. For example:
+
+With ``isolcpus=domain,7``, the following will return a mask with all
+possible CPUs except CPU 7::
+
+ housekeeping_cpumask(HK_TYPE_DOMAIN)
+
+Similarly, with ``nohz_full=5,6``, the following will return a mask with
+all possible CPUs except CPUs 5 and 6::
+
+ housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)
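+
+A single CPU can also be tested against a housekeeping cpumask with
+``housekeeping_test_cpu()``. A minimal sketch, assuming ``cpu`` is a
+possible CPU number::
+
+	if (housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN))
+		pr_info("CPU %d can run domain housekeeping work\n", cpu);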
+
+
+Synchronization against cpusets
+=================================
+
+Cpuset can modify the HK_TYPE_DOMAIN housekeeping cpumask while creating,
+modifying or deleting an isolated partition.
+
+Users of the HK_TYPE_DOMAIN cpumask must then synchronize properly
+against cpuset in order to ensure that:
+
+1. The cpumask snapshot stays coherent.
+
+2. No housekeeping work is queued on a newly made isolated CPU.
+
+3. Pending housekeeping work that was queued to a non-isolated
+   CPU which has just been made isolated through cpuset must be
+   flushed before the related created/modified isolated partition
+   is made available to userspace.
+
+This synchronization is maintained by an RCU-based scheme. The cpuset update
+side waits for an RCU grace period after updating the HK_TYPE_DOMAIN
+cpumask and before flushing pending work. On the read side, care must be
+taken to perform both the housekeeping target election and the work enqueue
+within the same RCU read-side critical section.
+
+A typical layout example would look like this on the update side
+(``housekeeping_update()``)::
+
+ rcu_assign_pointer(housekeeping_cpumasks[type], trial);
+ synchronize_rcu();
+ flush_workqueue(example_workqueue);
+
+And then on the read side::
+
+ rcu_read_lock();
+ cpu = housekeeping_any_cpu(HK_TYPE_DOMAIN);
+ queue_work_on(cpu, example_workqueue, work);
+ rcu_read_unlock();
--
2.51.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* Re: [PATCH 01/33] sched/isolation: Remove housekeeping static key
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
@ 2025-08-29 21:34 ` Waiman Long
2025-09-01 10:26 ` Peter Zijlstra
1 sibling, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-08-29 21:34 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Ingo Molnar, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka
On 8/29/25 11:47 AM, Frederic Weisbecker wrote:
> The housekeeping static key in its current use is mostly irrelevant.
> Most of the time, a housekeeping function call had already been issued
> before the static call got a chance to be evaluated, defeating the
> purpose of the call optimization.
>
> housekeeping_cpu() is the sole correct user performing the static call
> before the actual slow-path function call. But it's seldom used in
> fast-path.
>
> Finally, the static key prevents synchronizing correctly against
> dynamic updates of the housekeeping cpumasks through cpusets.
>
> Get away with a simple flag test instead.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> include/linux/sched/isolation.h | 25 +++++----
> kernel/sched/isolation.c | 90 ++++++++++++++-------------------
> 2 files changed, 55 insertions(+), 60 deletions(-)
>
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index d8501f4709b5..f98ba0d71c52 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -25,12 +25,22 @@ enum hk_type {
> };
>
> #ifdef CONFIG_CPU_ISOLATION
> -DECLARE_STATIC_KEY_FALSE(housekeeping_overridden);
> +extern unsigned long housekeeping_flags;
> +
> extern int housekeeping_any_cpu(enum hk_type type);
> extern const struct cpumask *housekeeping_cpumask(enum hk_type type);
> extern bool housekeeping_enabled(enum hk_type type);
> extern void housekeeping_affine(struct task_struct *t, enum hk_type type);
> extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
> +
> +static inline bool housekeeping_cpu(int cpu, enum hk_type type)
> +{
> + if (housekeeping_flags & BIT(type))
> + return housekeeping_test_cpu(cpu, type);
> + else
> + return true;
> +}
> +
> extern void __init housekeeping_init(void);
>
> #else
> @@ -58,17 +68,14 @@ static inline bool housekeeping_test_cpu(int cpu, enum hk_type type)
> return true;
> }
>
> +static inline bool housekeeping_cpu(int cpu, enum hk_type type)
> +{
> + return true;
> +}
> +
> static inline void housekeeping_init(void) { }
> #endif /* CONFIG_CPU_ISOLATION */
>
> -static inline bool housekeeping_cpu(int cpu, enum hk_type type)
> -{
> -#ifdef CONFIG_CPU_ISOLATION
> - if (static_branch_unlikely(&housekeeping_overridden))
> - return housekeeping_test_cpu(cpu, type);
> -#endif
> - return true;
> -}
>
> static inline bool cpu_is_isolated(int cpu)
> {
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index a4cf17b1fab0..2a6fc6fc46fb 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -16,19 +16,13 @@ enum hk_flags {
> HK_FLAG_KERNEL_NOISE = BIT(HK_TYPE_KERNEL_NOISE),
> };
>
> -DEFINE_STATIC_KEY_FALSE(housekeeping_overridden);
> -EXPORT_SYMBOL_GPL(housekeeping_overridden);
> -
> -struct housekeeping {
> - cpumask_var_t cpumasks[HK_TYPE_MAX];
> - unsigned long flags;
> -};
> -
> -static struct housekeeping housekeeping;
> +static cpumask_var_t housekeeping_cpumasks[HK_TYPE_MAX];
> +unsigned long housekeeping_flags;
Should we add the __read_mostly attribute to housekeeping_flags to
prevent a possible false cacheline sharing problem?
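I.e. just a sketch of the suggested annotation:

	unsigned long housekeeping_flags __read_mostly;
	EXPORT_SYMBOL_GPL(housekeeping_flags);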
Other than that, LGTM
Cheers,
Longman
> +EXPORT_SYMBOL_GPL(housekeeping_flags);
>
> bool housekeeping_enabled(enum hk_type type)
> {
> - return !!(housekeeping.flags & BIT(type));
> + return !!(housekeeping_flags & BIT(type));
> }
> EXPORT_SYMBOL_GPL(housekeeping_enabled);
>
> @@ -36,50 +30,46 @@ int housekeeping_any_cpu(enum hk_type type)
> {
> int cpu;
>
> - if (static_branch_unlikely(&housekeeping_overridden)) {
> - if (housekeeping.flags & BIT(type)) {
> - cpu = sched_numa_find_closest(housekeeping.cpumasks[type], smp_processor_id());
> - if (cpu < nr_cpu_ids)
> - return cpu;
> + if (housekeeping_flags & BIT(type)) {
> + cpu = sched_numa_find_closest(housekeeping_cpumasks[type], smp_processor_id());
> + if (cpu < nr_cpu_ids)
> + return cpu;
>
> - cpu = cpumask_any_and_distribute(housekeeping.cpumasks[type], cpu_online_mask);
> - if (likely(cpu < nr_cpu_ids))
> - return cpu;
> - /*
> - * Unless we have another problem this can only happen
> - * at boot time before start_secondary() brings the 1st
> - * housekeeping CPU up.
> - */
> - WARN_ON_ONCE(system_state == SYSTEM_RUNNING ||
> - type != HK_TYPE_TIMER);
> - }
> + cpu = cpumask_any_and_distribute(housekeeping_cpumasks[type], cpu_online_mask);
> + if (likely(cpu < nr_cpu_ids))
> + return cpu;
> + /*
> + * Unless we have another problem this can only happen
> + * at boot time before start_secondary() brings the 1st
> + * housekeeping CPU up.
> + */
> + WARN_ON_ONCE(system_state == SYSTEM_RUNNING ||
> + type != HK_TYPE_TIMER);
> }
> +
> return smp_processor_id();
> }
> EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
>
> const struct cpumask *housekeeping_cpumask(enum hk_type type)
> {
> - if (static_branch_unlikely(&housekeeping_overridden))
> - if (housekeeping.flags & BIT(type))
> - return housekeeping.cpumasks[type];
> + if (housekeeping_flags & BIT(type))
> + return housekeeping_cpumasks[type];
> return cpu_possible_mask;
> }
> EXPORT_SYMBOL_GPL(housekeeping_cpumask);
>
> void housekeeping_affine(struct task_struct *t, enum hk_type type)
> {
> - if (static_branch_unlikely(&housekeeping_overridden))
> - if (housekeeping.flags & BIT(type))
> - set_cpus_allowed_ptr(t, housekeeping.cpumasks[type]);
> + if (housekeeping_flags & BIT(type))
> + set_cpus_allowed_ptr(t, housekeeping_cpumasks[type]);
> }
> EXPORT_SYMBOL_GPL(housekeeping_affine);
>
> bool housekeeping_test_cpu(int cpu, enum hk_type type)
> {
> - if (static_branch_unlikely(&housekeeping_overridden))
> - if (housekeeping.flags & BIT(type))
> - return cpumask_test_cpu(cpu, housekeeping.cpumasks[type]);
> + if (housekeeping_flags & BIT(type))
> + return cpumask_test_cpu(cpu, housekeeping_cpumasks[type]);
> return true;
> }
> EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
> @@ -88,17 +78,15 @@ void __init housekeeping_init(void)
> {
> enum hk_type type;
>
> - if (!housekeeping.flags)
> + if (!housekeeping_flags)
> return;
>
> - static_branch_enable(&housekeeping_overridden);
> -
> - if (housekeeping.flags & HK_FLAG_KERNEL_NOISE)
> + if (housekeeping_flags & HK_FLAG_KERNEL_NOISE)
> sched_tick_offload_init();
>
> - for_each_set_bit(type, &housekeeping.flags, HK_TYPE_MAX) {
> + for_each_set_bit(type, &housekeeping_flags, HK_TYPE_MAX) {
> /* We need at least one CPU to handle housekeeping work */
> - WARN_ON_ONCE(cpumask_empty(housekeeping.cpumasks[type]));
> + WARN_ON_ONCE(cpumask_empty(housekeeping_cpumasks[type]));
> }
> }
>
> @@ -106,8 +94,8 @@ static void __init housekeeping_setup_type(enum hk_type type,
> cpumask_var_t housekeeping_staging)
> {
>
> - alloc_bootmem_cpumask_var(&housekeeping.cpumasks[type]);
> - cpumask_copy(housekeeping.cpumasks[type],
> + alloc_bootmem_cpumask_var(&housekeeping_cpumasks[type]);
> + cpumask_copy(housekeeping_cpumasks[type],
> housekeeping_staging);
> }
>
> @@ -117,7 +105,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> unsigned int first_cpu;
> int err = 0;
>
> - if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping.flags & HK_FLAG_KERNEL_NOISE)) {
> + if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping_flags & HK_FLAG_KERNEL_NOISE)) {
> if (!IS_ENABLED(CONFIG_NO_HZ_FULL)) {
> pr_warn("Housekeeping: nohz unsupported."
> " Build with CONFIG_NO_HZ_FULL\n");
> @@ -139,7 +127,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) {
> __cpumask_set_cpu(smp_processor_id(), housekeeping_staging);
> __cpumask_clear_cpu(smp_processor_id(), non_housekeeping_mask);
> - if (!housekeeping.flags) {
> + if (!housekeeping_flags) {
> pr_warn("Housekeeping: must include one present CPU, "
> "using boot CPU:%d\n", smp_processor_id());
> }
> @@ -148,7 +136,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> if (cpumask_empty(non_housekeeping_mask))
> goto free_housekeeping_staging;
>
> - if (!housekeeping.flags) {
> + if (!housekeeping_flags) {
> /* First setup call ("nohz_full=" or "isolcpus=") */
> enum hk_type type;
>
> @@ -157,26 +145,26 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> } else {
> /* Second setup call ("nohz_full=" after "isolcpus=" or the reverse) */
> enum hk_type type;
> - unsigned long iter_flags = flags & housekeeping.flags;
> + unsigned long iter_flags = flags & housekeeping_flags;
>
> for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
> if (!cpumask_equal(housekeeping_staging,
> - housekeeping.cpumasks[type])) {
> + housekeeping_cpumasks[type])) {
> pr_warn("Housekeeping: nohz_full= must match isolcpus=\n");
> goto free_housekeeping_staging;
> }
> }
>
> - iter_flags = flags & ~housekeeping.flags;
> + iter_flags = flags & ~housekeeping_flags;
>
> for_each_set_bit(type, &iter_flags, HK_TYPE_MAX)
> housekeeping_setup_type(type, housekeeping_staging);
> }
>
> - if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping.flags & HK_FLAG_KERNEL_NOISE))
> + if ((flags & HK_FLAG_KERNEL_NOISE) && !(housekeeping_flags & HK_FLAG_KERNEL_NOISE))
> tick_nohz_full_setup(non_housekeeping_mask);
>
> - housekeeping.flags |= flags;
> + housekeeping_flags |= flags;
> err = 1;
>
> free_housekeeping_staging:
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
2025-08-29 15:47 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
@ 2025-09-01 0:40 ` Waiman Long
0 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-01 0:40 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Michal Koutný, Ingo Molnar, Johannes Weiner,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, cgroups
On 8/29/25 11:47 AM, Frederic Weisbecker wrote:
> Until now, HK_TYPE_DOMAIN used to only include the boot defined isolated
> CPUs passed through the isolcpus= boot option. Users interested in also
> knowing the runtime defined isolated CPUs through cpuset must use
> different APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
>
> There are many drawbacks to that approach:
>
> 1) Most interested subsystems want to know about all isolated CPUs, not
> just those defined at boot time.
>
> 2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized with
> concurrent cpuset changes.
>
> 3) Further cpuset modifications are not propagated to subsystems
>
> Solve 1) and 2) and centralize all isolated CPUs within the
> HK_TYPE_DOMAIN housekeeping cpumask.
>
> Subsystems can rely on RCU to synchronize against concurrent changes.
>
> The propagation mentioned in 3) will be handled in further patches.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> include/linux/sched/isolation.h | 4 +-
> kernel/cgroup/cpuset.c | 2 +
> kernel/sched/isolation.c | 65 ++++++++++++++++++++++++++++++---
> kernel/sched/sched.h | 1 +
> 4 files changed, 65 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 5ddb8dc5ca91..48f3b6b20604 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -23,16 +23,39 @@ EXPORT_SYMBOL_GPL(housekeeping_flags);
>
> bool housekeeping_enabled(enum hk_type type)
> {
> - return !!(housekeeping_flags & BIT(type));
> + return !!(READ_ONCE(housekeeping_flags) & BIT(type));
> }
> EXPORT_SYMBOL_GPL(housekeeping_enabled);
>
> +static bool housekeeping_dereference_check(enum hk_type type)
> +{
> + if (type == HK_TYPE_DOMAIN) {
> + if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
> + return true;
> + if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
> + return true;
> +
> + return false;
> + }
> +
> + return true;
> +}
Both lockdep_is_cpuset_held() and lockdep_is_cpus_write_held() may be
defined only if CONFIG_LOCKDEP is set. However, this function is
currently referenced by __housekeeping_cpumask() via RCU_LOCKDEP_WARN().
So it is not invoked if CONFIG_LOCKDEP is not set. You are assuming that
a static function that is never referenced is not compiled into the
object file. Should we bracket it with "#ifdef CONFIG_LOCKDEP" just to
make this clear?
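Something like this, as a sketch of the suggested bracketing (body taken
verbatim from the hunk above):

	#ifdef CONFIG_LOCKDEP
	static bool housekeeping_dereference_check(enum hk_type type)
	{
		if (type == HK_TYPE_DOMAIN) {
			if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
				return true;
			if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
				return true;

			return false;
		}

		return true;
	}
	#endif /* CONFIG_LOCKDEP */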
> +
> +static inline struct cpumask *__housekeeping_cpumask(enum hk_type type)
> +{
> + return rcu_dereference_check(housekeeping_cpumasks[type],
> + housekeeping_dereference_check(type));
> +}
> +
> const struct cpumask *housekeeping_cpumask(enum hk_type type)
> {
> - if (housekeeping_flags & BIT(type)) {
> - return rcu_dereference_check(housekeeping_cpumasks[type], 1);
> - }
> - return cpu_possible_mask;
> + const struct cpumask *mask = NULL;
> +
> + if (READ_ONCE(housekeeping_flags) & BIT(type))
> + mask = __housekeeping_cpumask(type);
> + if (!mask)
> + mask = cpu_possible_mask;
> + return mask;
> }
> EXPORT_SYMBOL_GPL(housekeeping_cpumask);
>
> @@ -70,12 +93,42 @@ EXPORT_SYMBOL_GPL(housekeeping_affine);
>
> bool housekeeping_test_cpu(int cpu, enum hk_type type)
> {
> - if (housekeeping_flags & BIT(type))
> + if (READ_ONCE(housekeeping_flags) & BIT(type))
> return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
> return true;
> }
> EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
>
> +int housekeeping_update(struct cpumask *mask, enum hk_type type)
> +{
> + struct cpumask *trial, *old = NULL;
> +
> + if (type != HK_TYPE_DOMAIN)
> + return -ENOTSUPP;
> +
> + trial = kmalloc(sizeof(*trial), GFP_KERNEL);
> + if (!trial)
> + return -ENOMEM;
> +
> + cpumask_andnot(trial, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT), mask);
> + if (!cpumask_intersects(trial, cpu_online_mask)) {
> + kfree(trial);
> + return -EINVAL;
> + }
> +
> + if (housekeeping_flags & BIT(type))
> + old = __housekeeping_cpumask(type);
> + else
> + WRITE_ONCE(housekeeping_flags, housekeeping_flags | BIT(type));
Should we use READ_ONCE() to retrieve the current housekeeping_flags
value?
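I.e., as a sketch:

	WRITE_ONCE(housekeeping_flags, READ_ONCE(housekeeping_flags) | BIT(type));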
Cheers,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
2025-08-29 15:47 ` [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
@ 2025-09-01 2:51 ` Waiman Long
0 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-01 2:51 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Michal Koutný, Ingo Molnar, Johannes Weiner, Lai Jiangshan,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, cgroups
On 8/29/25 11:47 AM, Frederic Weisbecker wrote:
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -102,6 +102,7 @@ EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
> int housekeeping_update(struct cpumask *mask, enum hk_type type)
> {
> struct cpumask *trial, *old = NULL;
> + int err;
>
> if (type != HK_TYPE_DOMAIN)
> return -ENOTSUPP;
> @@ -126,10 +127,11 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
>
> mem_cgroup_flush_workqueue();
> vmstat_flush_workqueue();
> + err = workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type));
>
> kfree(old);
>
> - return 0;
> + return err;
> }
Actually workqueue_unbound_exclude_cpumask() expects a cpumask of all
the CPUs that have been isolated. IOW, all the CPUs that are not in
housekeeping_cpumask(HK_TYPE_DOMAIN). So we should either do the
inversion here, or rename the function to, e.g.,
workqueue_unbound_cpumask_update() and make the change there.
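For the first option, a sketch of the inversion (the 'exclude' temporary
is hypothetical, allocation failure handling kept minimal):

	cpumask_var_t exclude;

	if (!alloc_cpumask_var(&exclude, GFP_KERNEL))
		return -ENOMEM;
	cpumask_andnot(exclude, cpu_possible_mask, housekeeping_cpumask(type));
	err = workqueue_unbound_exclude_cpumask(exclude);
	free_cpumask_var(exclude);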
Cheers,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 01/33] sched/isolation: Remove housekeeping static key
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
2025-08-29 21:34 ` Waiman Long
@ 2025-09-01 10:26 ` Peter Zijlstra
1 sibling, 0 replies; 43+ messages in thread
From: Peter Zijlstra @ 2025-09-01 10:26 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: LKML, Ingo Molnar, Marco Crivellari, Michal Hocko, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long
On Fri, Aug 29, 2025 at 05:47:42PM +0200, Frederic Weisbecker wrote:
> +static inline bool housekeeping_cpu(int cpu, enum hk_type type)
> +{
> + if (housekeeping_flags & BIT(type))
> + return housekeeping_test_cpu(cpu, type);
> + else
> + return true;
> +}
That 'else' is superfluous.
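I.e., keeping only the early return:

	static inline bool housekeeping_cpu(int cpu, enum hk_type type)
	{
		if (housekeeping_flags & BIT(type))
			return housekeeping_test_cpu(cpu, type);

		return true;
	}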
> -static inline bool housekeeping_cpu(int cpu, enum hk_type type)
> -{
> -#ifdef CONFIG_CPU_ISOLATION
> - if (static_branch_unlikely(&housekeeping_overridden))
> - return housekeeping_test_cpu(cpu, type);
> -#endif
> - return true;
> -}
>
> static inline bool cpu_is_isolated(int cpu)
> {
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index a4cf17b1fab0..2a6fc6fc46fb 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -16,19 +16,13 @@ enum hk_flags {
> HK_FLAG_KERNEL_NOISE = BIT(HK_TYPE_KERNEL_NOISE),
> };
>
> -DEFINE_STATIC_KEY_FALSE(housekeeping_overridden);
> -EXPORT_SYMBOL_GPL(housekeeping_overridden);
> -
> -struct housekeeping {
> - cpumask_var_t cpumasks[HK_TYPE_MAX];
> - unsigned long flags;
> -};
> -
> -static struct housekeeping housekeeping;
> +static cpumask_var_t housekeeping_cpumasks[HK_TYPE_MAX];
> +unsigned long housekeeping_flags;
> +EXPORT_SYMBOL_GPL(housekeeping_flags);
I don't particularly like exporting variables. It means modules can
actually change the value and things like that.
And while an exported static_key can be changed by modules, that's
fixable.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
2025-08-29 15:48 ` [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
@ 2025-09-02 14:28 ` Waiman Long
2025-09-02 15:48 ` Waiman Long
0 siblings, 1 reply; 43+ messages in thread
From: Waiman Long @ 2025-09-02 14:28 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Ingo Molnar, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka
On 8/29/25 11:48 AM, Frederic Weisbecker wrote:
> It doesn't make sense to use nohz_full without also isolating the
> related CPUs from the domain topology, either through the use of
> isolcpus= or cpuset isolated partitions.
>
> And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.
>
> This means that HK_TYPE_KERNEL_NOISE (of which HK_TYPE_TICK is only an
> alias) is always a superset of HK_TYPE_DOMAIN.
That may not be true. Users can still set "isolcpus=" and "nohz_full="
with disjoint sets of CPUs, whether or not cpuset is used for additional
isolated CPUs.
Cheers,
Longman
>
> Therefore if a CPU is not HK_TYPE_KERNEL_NOISE, it can't be
> HK_TYPE_DOMAIN either. Testing the latter is then enough.
>
> Simplify cpu_is_isolated() accordingly.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> include/linux/sched/isolation.h | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index c02923ed4cbe..8d6d26d3fdf5 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -82,8 +82,7 @@ static inline void housekeeping_init(void) { }
>
> static inline bool cpu_is_isolated(int cpu)
> {
> - return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
> - !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
> + return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN);
> }
>
> #endif /* _LINUX_SCHED_ISOLATION_H */
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping
2025-08-29 15:48 ` [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Frederic Weisbecker
@ 2025-09-02 15:44 ` Waiman Long
0 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-02 15:44 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Gabriele Monaco, Johannes Weiner, Marco Crivellari, Michal Hocko,
Michal Koutný, Peter Zijlstra, Tejun Heo, Thomas Gleixner,
cgroups
On 8/29/25 11:48 AM, Frederic Weisbecker wrote:
> From: Gabriele Monaco <gmonaco@redhat.com>
>
> Currently the user can set up isolated cpus via cpuset and nohz_full in
> such a way that leaves no housekeeping CPU (i.e. no CPU that is neither
> domain isolated nor nohz full). This can be a problem for other
> subsystems (e.g. the timer wheel migration).
>
> Prevent this configuration by blocking any assignment that would cause
> the union of domain isolated CPUs and nohz_full to cover all CPUs.
>
> Acked-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> kernel/cgroup/cpuset.c | 57 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 57 insertions(+)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index df1dfacf5f9d..8260dd699fd8 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1275,6 +1275,19 @@ static void isolated_cpus_update(int old_prs, int new_prs, struct cpumask *xcpus
> cpumask_andnot(isolated_cpus, isolated_cpus, xcpus);
> }
>
> +/*
> + * isolated_cpus_should_update - Returns if the isolated_cpus mask needs update
> + * @prs: new or old partition_root_state
> + * @parent: parent cpuset
> + * Return: true if isolated_cpus needs modification, false otherwise
> + */
> +static bool isolated_cpus_should_update(int prs, struct cpuset *parent)
> +{
> + if (!parent)
> + parent = &top_cpuset;
> + return prs != parent->partition_root_state;
> +}
> +
> /*
> * partition_xcpus_add - Add new exclusive CPUs to partition
> * @new_prs: new partition_root_state
> @@ -1339,6 +1352,36 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> return isolcpus_updated;
> }
>
> +/*
> + * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
> + * @new_cpus: cpu mask for cpus that are going to be isolated
> + * Return: true if there is conflict, false otherwise
> + *
> + * If nohz_full is enabled and we have isolated CPUs, their combination must
> + * still leave housekeeping CPUs.
> + */
> +static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> +{
> + cpumask_var_t full_hk_cpus;
> + int res = false;
> +
> + if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
> + return false;
> +
> + if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
> + return true;
> +
> + cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
> + housekeeping_cpumask(HK_TYPE_DOMAIN));
> + cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
> + cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
> + if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
> + res = true;
> +
> + free_cpumask_var(full_hk_cpus);
> + return res;
> +}
> +
> static void update_housekeeping_cpumask(bool isolcpus_updated)
> {
> int ret;
> @@ -1453,6 +1496,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
> if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
> return PERR_INVCPUS;
> + if (isolated_cpus_should_update(new_prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->new_cpus))
> + return PERR_HKEEPING;
>
> spin_lock_irq(&callback_lock);
> isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
> @@ -1552,6 +1598,9 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
> else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
> cs->prs_err = PERR_NOCPUS;
> + else if (isolated_cpus_should_update(prs, NULL) &&
> + isolcpus_nohz_conflict(tmp->addmask))
> + cs->prs_err = PERR_HKEEPING;
> if (cs->prs_err)
> goto invalidate;
> }
> @@ -1904,6 +1953,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
> return err;
> }
>
> + if (deleting && isolated_cpus_should_update(new_prs, parent) &&
> + isolcpus_nohz_conflict(tmp->delmask)) {
> + cs->prs_err = PERR_HKEEPING;
> + return PERR_HKEEPING;
> + }
> +
> /*
> * Change the parent's effective_cpus & effective_xcpus (top cpuset
> * only).
> @@ -2924,6 +2979,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> * Need to update isolated_cpus.
> */
> isolcpus_updated = true;
> + if (isolcpus_nohz_conflict(cs->effective_xcpus))
> + err = PERR_HKEEPING;
> } else {
> /*
> * Switching back to member is always allowed even if it
In both remote_cpus_update() and update_parent_effective_cpumask(), some
new CPUs can be added to the isolation list while other CPUs can be
removed from it. So isolcpus_nohz_conflict() should include both sets in
its analysis to avoid false positives. Essentially, if the CPUs removed
from the isolated_cpus intersect with the nohz_full housekeeping mask,
there is no conflict.
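E.g. a sketch of the idea, where the hypothetical 'del_cpus' mask holds
the CPUs leaving the isolated set and 'tmp' is another temporary cpumask:

	cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
		    housekeeping_cpumask(HK_TYPE_DOMAIN));
	cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
	/* CPUs leaving isolation become housekeeping again, nohz_full permitting */
	cpumask_and(tmp, del_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
	cpumask_or(full_hk_cpus, full_hk_cpus, tmp);
	cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
	if (!cpumask_weight_andnot(full_hk_cpus, new_cpus))
		res = true;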
Cheers,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
2025-09-02 14:28 ` Waiman Long
@ 2025-09-02 15:48 ` Waiman Long
0 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-02 15:48 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Ingo Molnar, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka
On 9/2/25 10:28 AM, Waiman Long wrote:
>
> On 8/29/25 11:48 AM, Frederic Weisbecker wrote:
>> It doesn't make sense to use nohz_full without also isolating the
>> related CPUs from the domain topology, either through the use of
>> isolcpus= or cpuset isolated partitions.
>>
>> And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.
>>
>> This means that HK_TYPE_KERNEL_NOISE (of which HK_TYPE_TICK is only an
>> alias) is always a superset of HK_TYPE_DOMAIN.
>
> That may not be true. Users can still set "isolcpus=" and "nohz_full="
> with disjoint sets of CPUs, whether or not cpuset is used for
> additional isolated CPUs.
Instead of "is always a superset", I would prefer to use "should always
be a superset", as it is a recommendation that users can still violate.
Cheers,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
2025-08-29 15:48 ` [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
@ 2025-09-02 16:43 ` Waiman Long
0 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-02 16:43 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Catalin Marinas, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Will Deacon, linux-arm-kernel
On 8/29/25 11:48 AM, Frederic Weisbecker wrote:
> When none of the allowed CPUs of a task are online, it gets migrated
> to the fallback cpumask which is all the non nohz_full CPUs.
>
> However just like nohz_full CPUs, domain isolated CPUs don't want to be
> disturbed by tasks that have lost their CPU affinities.
>
> And since nohz_full relies on domain isolation to work correctly, the
> housekeeping mask of domain isolated CPUs is always a subset of the
> housekeeping mask of nohz_full CPUs (there can be CPUs that are domain
> isolated but not nohz_full, OTOH there can't be nohz_full CPUs that are
> not domain isolated):
>
> HK_TYPE_DOMAIN & HK_TYPE_KERNEL_NOISE == HK_TYPE_DOMAIN
>
> Therefore use HK_TYPE_DOMAIN as the appropriate fallback target for
> tasks and since this cpumask can be modified at runtime, make sure
> that 32-bit support CPUs on ARM64 mismatched systems are not isolated
> by cpusets.
>
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> arch/arm64/kernel/cpufeature.c | 18 ++++++++++++---
> include/linux/cpu.h | 4 ++++
> kernel/cgroup/cpuset.c | 40 +++++++++++++++++++++++-----------
> 3 files changed, 46 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..38046489d2ea 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1653,6 +1653,18 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
> return feature_matches(val, entry);
> }
>
> +/*
> + * 32-bit support CPUs can't be isolated because tasks may be
> + * arbitrarily affine to them, defeating the purpose of isolation.
> + */
> +bool arch_isolated_cpus_can_update(struct cpumask *new_cpus)
> +{
> + if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
> + return !cpumask_intersects(cpu_32bit_el0_mask, new_cpus);
> + else
> + return true;
> +}
> +
> const struct cpumask *system_32bit_el0_cpumask(void)
> {
> if (!system_supports_32bit_el0())
> @@ -1666,7 +1678,7 @@ const struct cpumask *system_32bit_el0_cpumask(void)
>
> const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
> {
> - return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
> + return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_DOMAIN));
> }
>
> static int __init parse_32bit_el0_param(char *str)
> @@ -3963,8 +3975,8 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
> bool cpu_32bit = false;
>
> if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
> - if (!housekeeping_cpu(cpu, HK_TYPE_TICK))
> - pr_info("Treating adaptive-ticks CPU %u as 64-bit only\n", cpu);
> + if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN))
> + pr_info("Treating domain isolated CPU %u as 64-bit only\n", cpu);
> else
> cpu_32bit = true;
> }
> diff --git a/include/linux/cpu.h b/include/linux/cpu.h
> index b91b993f58ee..8bb239080534 100644
> --- a/include/linux/cpu.h
> +++ b/include/linux/cpu.h
> @@ -228,4 +228,8 @@ static inline bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
> #define smt_mitigations SMT_MITIGATIONS_OFF
> #endif
>
> +struct cpumask;
> +
> +bool arch_isolated_cpus_can_update(struct cpumask *new_cpus);
> +
> #endif /* _LINUX_CPU_H_ */
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 8260dd699fd8..cf99ea844c1d 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1352,33 +1352,47 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> return isolcpus_updated;
> }
>
> +bool __weak arch_isolated_cpus_can_update(struct cpumask *new_cpus)
> +{
> + return true;
> +}
> +
> /*
> - * isolcpus_nohz_conflict - check for isolated & nohz_full conflicts
> + * isolated_cpus_can_update - check for conflicts against housekeeping and
> + * CPUs capabilities.
> * @new_cpus: cpu mask for cpus that are going to be isolated
> - * Return: true if there is conflict, false otherwise
> + * Return: true if there is no conflict, false otherwise
> *
> - * If nohz_full is enabled and we have isolated CPUs, their combination must
> - * still leave housekeeping CPUs.
> + * Check for conflicts:
> + * - If nohz_full is enabled and there are isolated CPUs, their combination must
> + * still leave housekeeping CPUs.
> + * - Architecture has CPU capabilities incompatible with being isolated
> */
> -static bool isolcpus_nohz_conflict(struct cpumask *new_cpus)
> +static bool isolated_cpus_can_update(struct cpumask *new_cpus)
> {
> cpumask_var_t full_hk_cpus;
> - int res = false;
> + bool res;
> +
> + if (!arch_isolated_cpus_can_update(new_cpus))
> + return false;
>
> if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
> - return false;
> + return true;
>
> if (!alloc_cpumask_var(&full_hk_cpus, GFP_KERNEL))
> - return true;
> + return false;
> +
> + res = true;
>
> cpumask_and(full_hk_cpus, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE),
> housekeeping_cpumask(HK_TYPE_DOMAIN));
> cpumask_andnot(full_hk_cpus, full_hk_cpus, isolated_cpus);
> cpumask_and(full_hk_cpus, full_hk_cpus, cpu_online_mask);
We should construct the new cpumask by adding new CPUs and removing old
ones from the existing isolated_cpus and pass it to
arch_isolated_cpus_can_update() for the checking to be correct.
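E.g. a sketch, with hypothetical 'add_cpus'/'del_cpus' masks holding the
CPUs entering/leaving isolation:

	cpumask_var_t trial;
	bool res;

	if (!alloc_cpumask_var(&trial, GFP_KERNEL))
		return false;
	cpumask_or(trial, isolated_cpus, add_cpus);
	cpumask_andnot(trial, trial, del_cpus);
	res = arch_isolated_cpus_can_update(trial);
	free_cpumask_var(trial);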
Cheers,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
` (32 preceding siblings ...)
2025-08-29 15:48 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
@ 2025-09-02 19:12 ` Waiman Long
33 siblings, 0 replies; 43+ messages in thread
From: Waiman Long @ 2025-09-02 19:12 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Tejun Heo, Michal Hocko, Marco Crivellari, Thomas Gleixner,
Peter Zijlstra
On 8/29/25 11:47 AM, Frederic Weisbecker wrote:
> Hi,
>
> The kthread code was enhanced lately to provide an infrastructure which
> manages the preferred affinity of unbound kthreads (node or custom
> cpumask) against housekeeping constraints and CPU hotplug events.
>
> One crucial missing piece is cpuset: when an isolated partition is
> created, deleted, or its CPUs updated, all the unbound kthreads in the
> top cpuset are affine to _all_ the non-isolated CPUs, possibly breaking
> their preferred affinity along the way
>
> Solve this with performing the kthreads affinity update from cpuset to
> the kthreads consolidated relevant code instead so that preferred
> affinities are honoured.
>
> The dispatch of the new cpumasks to workqueues and kthreads is performed
> by housekeeping, as per the nice Tejun's suggestion.
>
> As a welcome side effect, HK_TYPE_DOMAIN then integrates both the set
> from isolcpus= and cpuset isolated partitions. Housekeeping cpumasks are
> now modifyable with specific synchronization. A big step toward making
> nohz_full= also mutable through cpuset in the future.
>
> Changes since v1:
>
> - Drop the housekeeping lock and use RCU to synchronize housekeeping
> against cpuset changes.
>
> - Add housekeeping documentation
>
> - Simplify CPU hotplug handling
>
> - Collect ack from Shakeel Butt
>
> - Handle sched/arm64's task fallback cpumask move to HK_TYPE_DOMAIN
>
> - Fix genirq kthreads affinity
>
> - Add missing kernel doc
>
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> kthread/core-v2
>
> HEAD: 092784f7df0aa6415c91ae5edc1c1a72603b5c50
> Thanks,
> Frederic
I have finally finished the review of this long patch series. I like
your current approach and I will adapt my RFC patch series to be based
on yours. However, I do have comments and am looking forward to your
response.
Thanks,
Longman
^ permalink raw reply [flat|nested] 43+ messages in thread
end of thread, newest: 2025-09-02 19:12 UTC
Thread overview: 43+ messages
2025-08-29 15:47 [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 01/33] sched/isolation: Remove housekeeping static key Frederic Weisbecker
2025-08-29 21:34 ` Waiman Long
2025-09-01 10:26 ` Peter Zijlstra
2025-08-29 15:47 ` [PATCH 02/33] PCI: Protect against concurrent change of housekeeping cpumask Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 03/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 04/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 05/33] mm: vmstat: " Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 06/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 07/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 08/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 09/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 10/33] block: Protect against concurrent " Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
2025-09-01 0:40 ` Waiman Long
2025-08-29 15:47 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
2025-08-29 15:47 ` [PATCH 17/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
2025-09-01 2:51 ` Waiman Long
2025-08-29 15:47 ` [PATCH 18/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 19/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
2025-09-02 14:28 ` Waiman Long
2025-09-02 15:48 ` Waiman Long
2025-08-29 15:48 ` [PATCH 20/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 21/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 22/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 23/33] kthread: Include kthreadd to " Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 24/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 25/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 26/33] cgroup/cpuset: Fail if isolated and nohz_full don't leave any housekeeping Frederic Weisbecker
2025-09-02 15:44 ` Waiman Long
2025-08-29 15:48 ` [PATCH 27/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
2025-09-02 16:43 ` Waiman Long
2025-08-29 15:48 ` [PATCH 28/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 29/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 30/33] kthread: Add API to update preferred affinity on kthread runtime Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 31/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
2025-08-29 15:48 ` [RFC PATCH 32/33] genirq: Correctly handle preferred kthreads affinity Frederic Weisbecker
2025-08-29 15:48 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
2025-09-02 19:12 ` [PATCH 00/33 v2] cpuset/isolation: Honour kthreads preferred affinity Waiman Long