* [PATCH 08/27] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
[not found] <20250620152308.27492-1-frederic@kernel.org>
@ 2025-06-20 15:22 ` Frederic Weisbecker
2025-06-20 15:22 ` [PATCH 13/27] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:22 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Johannes Weiner, Marco Crivellari,
Michal Hocko, Michal Koutny, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
boot_hk_cpus is an ad-hoc copy of HK_TYPE_DOMAIN_BOOT. Remove it and use
the official version.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/cgroup/cpuset.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3bc4301466f3..aae8a739d48d 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -80,12 +80,6 @@ static cpumask_var_t subpartitions_cpus;
*/
static cpumask_var_t isolated_cpus;
-/*
- * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
- */
-static cpumask_var_t boot_hk_cpus;
-static bool have_boot_isolcpus;
-
/* List of remote partition root children */
static struct list_head remote_children;
@@ -1601,15 +1595,16 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
* @new_cpus: cpu mask
* Return: true if there is conflict, false otherwise
*
- * CPUs outside of boot_hk_cpus, if defined, can only be used in an
+ * CPUs outside of HK_TYPE_DOMAIN_BOOT, if defined, can only be used in an
* isolated partition.
*/
static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
{
- if (!have_boot_isolcpus)
+ if (!housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
return false;
- if ((prstate != PRS_ISOLATED) && !cpumask_subset(new_cpus, boot_hk_cpus))
+ if ((prstate != PRS_ISOLATED) &&
+ !cpumask_subset(new_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT)))
return true;
return false;
@@ -3766,12 +3761,9 @@ int __init cpuset_init(void)
BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
- have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
- if (have_boot_isolcpus) {
- BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
- cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
- cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
- }
+ if (housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
+ cpumask_andnot(isolated_cpus, cpu_possible_mask,
+ housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
return 0;
}
--
2.48.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH 13/27] cpuset: Provide lockdep check for cpuset lock held
[not found] <20250620152308.27492-1-frederic@kernel.org>
2025-06-20 15:22 ` [PATCH 08/27] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
@ 2025-06-20 15:22 ` Frederic Weisbecker
2025-06-20 15:22 ` [PATCH 15/27] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:22 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Johannes Weiner,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
cpuset modifies partitions, including isolated ones, while holding the
cpuset mutex.
This means that holding the cpuset mutex is sufficient to synchronize
against housekeeping cpumask changes.
Provide a lockdep check to validate that.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/cpuset.h | 2 ++
kernel/cgroup/cpuset.c | 7 +++++++
2 files changed, 9 insertions(+)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 2ddb256187b5..051d36fec578 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -18,6 +18,8 @@
#include <linux/mmu_context.h>
#include <linux/jump_label.h>
+extern bool lockdep_is_cpuset_held(void);
+
#ifdef CONFIG_CPUSETS
/*
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index aae8a739d48d..8221b6a7da46 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -254,6 +254,13 @@ void cpuset_unlock(void)
mutex_unlock(&cpuset_mutex);
}
+#ifdef CONFIG_LOCKDEP
+bool lockdep_is_cpuset_held(void)
+{
+ return lockdep_is_held(&cpuset_mutex);
+}
+#endif
+
static DEFINE_SPINLOCK(callback_lock);
void cpuset_callback_lock_irq(void)
--
2.48.1
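The pattern above, where a subsystem exports a "is my lock held?" predicate so that another subsystem can accept it as a legitimate context for accessing shared state, can be sketched in userspace as follows. This is a toy model, not kernel code: plain booleans stand in for lockdep, and all names are illustrative.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for lockdep_is_held() on the two relevant locks. */
static bool cpuset_mutex_held;
static bool housekeeping_lock_held;

/* Models the lockdep_is_cpuset_held() helper added by this patch. */
static bool toy_lockdep_is_cpuset_held(void)
{
	return cpuset_mutex_held;
}

/*
 * Models the dereference check used by a later patch in the series:
 * holding either the housekeeping lock or the cpuset mutex makes the
 * access to the housekeeping cpumask legitimate.
 */
static bool toy_deref_check(void)
{
	if (housekeeping_lock_held)
		return true;
	if (toy_lockdep_is_cpuset_held())
		return true;
	return false;
}
```

The point of exporting the predicate (rather than the mutex itself) is that the housekeeping code never needs to know about `cpuset_mutex`; it only asks whether the caller is in a context that cpuset guarantees is serialized.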
* [PATCH 15/27] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
[not found] <20250620152308.27492-1-frederic@kernel.org>
2025-06-20 15:22 ` [PATCH 08/27] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
2025-06-20 15:22 ` [PATCH 13/27] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
@ 2025-06-20 15:22 ` Frederic Weisbecker
2025-06-20 15:22 ` [PATCH 16/27] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:22 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Ingo Molnar,
Johannes Weiner, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
Until now, HK_TYPE_DOMAIN only included the boot-time isolated CPUs
passed through the isolcpus= boot option. Users interested in also
knowing the isolated CPUs defined at runtime through cpuset must use
different APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
There are several drawbacks to that approach:
1) Most interested subsystems want to know about all isolated CPUs, not
just those defined at boot time.
2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized with
concurrent cpuset changes.
3) Further cpuset modifications are not propagated to subsystems.
Solve 1) and 2) by centralizing all isolated CPUs within the
HK_TYPE_DOMAIN housekeeping cpumask, protected by the housekeeping lock.
Subsystems can rely on the housekeeping lock or RCU to synchronize
against concurrent changes.
The propagation mentioned in 3) will be handled in further patches.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/sched/isolation.h | 5 ++-
kernel/cgroup/cpuset.c | 2 +
kernel/sched/isolation.c | 71 ++++++++++++++++++++++++++++++---
kernel/sched/sched.h | 1 +
4 files changed, 72 insertions(+), 7 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 731506d312d2..f1b309f18511 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -36,7 +36,7 @@ extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
static inline bool housekeeping_cpu(int cpu, enum hk_type type)
{
- if (housekeeping_flags & BIT(type))
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
return housekeeping_test_cpu(cpu, type);
else
return true;
@@ -45,6 +45,8 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
extern void housekeeping_lock(void);
extern void housekeeping_unlock(void);
+extern int housekeeping_update(struct cpumask *mask, enum hk_type type);
+
extern void __init housekeeping_init(void);
#else
@@ -79,6 +81,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
static inline void housekeeping_lock(void) { }
static inline void housekeeping_unlock(void) { }
+static inline int housekeeping_update(struct cpumask *mask, enum hk_type type) { return 0; }
static inline void housekeeping_init(void) { }
#endif /* CONFIG_CPU_ISOLATION */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8221b6a7da46..5f169a56f06c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1351,6 +1351,8 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+ ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 75505668dcb9..7814d60be87e 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -23,7 +23,7 @@ DEFINE_STATIC_PERCPU_RWSEM(housekeeping_pcpu_lock);
bool housekeeping_enabled(enum hk_type type)
{
- return !!(housekeeping_flags & BIT(type));
+ return !!(READ_ONCE(housekeeping_flags) & BIT(type));
}
EXPORT_SYMBOL_GPL(housekeeping_enabled);
@@ -37,12 +37,39 @@ void housekeeping_unlock(void)
percpu_up_read(&housekeeping_pcpu_lock);
}
+static bool housekeeping_dereference_check(enum hk_type type)
+{
+ if (type == HK_TYPE_DOMAIN) {
+ if (system_state == SYSTEM_BOOTING)
+ return true;
+ if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
+ return true;
+ if (percpu_rwsem_is_held(&housekeeping_pcpu_lock))
+ return true;
+ if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
+ return true;
+
+ return false;
+ }
+
+ return true;
+}
+
+static inline struct cpumask *__housekeeping_cpumask(enum hk_type type)
+{
+ return rcu_dereference_check(housekeeping_cpumasks[type],
+ housekeeping_dereference_check(type));
+}
+
const struct cpumask *housekeeping_cpumask(enum hk_type type)
{
- if (housekeeping_flags & BIT(type)) {
- return rcu_dereference_check(housekeeping_cpumasks[type], 1);
- }
- return cpu_possible_mask;
+ const struct cpumask *mask = NULL;
+
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
+ mask = __housekeeping_cpumask(type);
+ if (!mask)
+ mask = cpu_possible_mask;
+ return mask;
}
EXPORT_SYMBOL_GPL(housekeeping_cpumask);
@@ -80,12 +107,44 @@ EXPORT_SYMBOL_GPL(housekeeping_affine);
bool housekeeping_test_cpu(int cpu, enum hk_type type)
{
- if (housekeeping_flags & BIT(type))
+ if (READ_ONCE(housekeeping_flags) & BIT(type))
return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
return true;
}
EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
+int housekeeping_update(struct cpumask *mask, enum hk_type type)
+{
+ struct cpumask *trial, *old = NULL;
+
+ if (type != HK_TYPE_DOMAIN)
+ return -ENOTSUPP;
+
+ trial = kmalloc(sizeof(*trial), GFP_KERNEL);
+ if (!trial)
+ return -ENOMEM;
+
+ cpumask_andnot(trial, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT), mask);
+ if (!cpumask_intersects(trial, cpu_online_mask)) {
+ kfree(trial);
+ return -EINVAL;
+ }
+
+ percpu_down_write(&housekeeping_pcpu_lock);
+ if (housekeeping_flags & BIT(type))
+ old = __housekeeping_cpumask(type);
+ else
+ WRITE_ONCE(housekeeping_flags, housekeeping_flags | BIT(type));
+ rcu_assign_pointer(housekeeping_cpumasks[type], trial);
+ percpu_up_write(&housekeeping_pcpu_lock);
+
+ synchronize_rcu();
+
+ kfree(old);
+
+ return 0;
+}
+
void __init housekeeping_init(void)
{
enum hk_type type;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 407e7f5ad929..04094567cad4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -30,6 +30,7 @@
#include <linux/context_tracking.h>
#include <linux/cpufreq.h>
#include <linux/cpumask_api.h>
+#include <linux/cpuset.h>
#include <linux/ctype.h>
#include <linux/file.h>
#include <linux/fs_api.h>
--
2.48.1
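The housekeeping_update() scheme above (allocate a trial copy, validate it, publish it under the writer lock, free the old copy after a grace period) can be sketched in userspace as follows. This is a toy model, not kernel code: a plain `unsigned long` stands in for `struct cpumask`, and the percpu rwsem and RCU grace period are elided into comments. All names are illustrative.

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned long mask_t;          /* toy stand-in for struct cpumask */

static mask_t *hk_mask;                /* published pointer (RCU in the kernel) */
static const mask_t online_mask = 0xf; /* toy cpu_online_mask: CPUs 0-3 */

static int toy_housekeeping_update(mask_t isolated)
{
	mask_t *trial, *old;

	trial = malloc(sizeof(*trial));
	if (!trial)
		return -1;

	/*
	 * Housekeeping keeps the CPUs not in the isolated set. Reject an
	 * update that would leave no online housekeeping CPU, mirroring
	 * the -EINVAL check in the patch.
	 */
	*trial = online_mask & ~isolated;
	if (!*trial) {
		free(trial);
		return -1;
	}

	old = hk_mask;          /* percpu_down_write() elided */
	hk_mask = trial;        /* rcu_assign_pointer() in the kernel */
	                        /* percpu_up_write() elided */
	free(old);              /* only after synchronize_rcu() in the kernel */
	return 0;
}
```

The key property is that readers always see either the complete old mask or the complete new one, never a partially updated mask, because the switch is a single pointer publication and the old copy outlives any reader that could still hold it.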
* [PATCH 16/27] sched/isolation: Flush memcg workqueues on cpuset isolated partition change
[not found] <20250620152308.27492-1-frederic@kernel.org>
` (2 preceding siblings ...)
2025-06-20 15:22 ` [PATCH 15/27] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
@ 2025-06-20 15:22 ` Frederic Weisbecker
2025-06-20 19:30 ` Shakeel Butt
2025-06-20 15:22 ` [PATCH 18/27] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
` (2 subsequent siblings)
6 siblings, 1 reply; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:22 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Ingo Molnar, Johannes Weiner,
Marco Crivellari, Michal Hocko, Michal Hocko, Muchun Song,
Peter Zijlstra, Roman Gushchin, Shakeel Butt, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups, linux-mm
The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime. In
order to make sure that no asynchronous memcg draining is still pending
or executing on a newly isolated CPU, the housekeeping subsystem must
flush the memcg workqueues.
However the memcg work items can't be flushed easily since they are
queued to the main per-CPU workqueue pool.
Solve this by creating a memcg specific workqueue and providing, and
using, the appropriate flushing API.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/memcontrol.h | 4 ++++
kernel/sched/isolation.c | 2 ++
kernel/sched/sched.h | 1 +
mm/memcontrol.c | 12 +++++++++++-
4 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 87b6688f124a..ef5036c6bf04 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1046,6 +1046,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
return id;
}
+void mem_cgroup_flush_workqueue(void);
+
extern int mem_cgroup_init(void);
#else /* CONFIG_MEMCG */
@@ -1451,6 +1453,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
return 0;
}
+static inline void mem_cgroup_flush_workqueue(void) { }
+
static inline int mem_cgroup_init(void) { return 0; }
#endif /* CONFIG_MEMCG */
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 7814d60be87e..6fb0c7956516 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -140,6 +140,8 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
synchronize_rcu();
+ mem_cgroup_flush_workqueue();
+
kfree(old);
return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 04094567cad4..53107c021fe9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
#include <linux/lockdep_api.h>
#include <linux/lockdep.h>
#include <linux/memblock.h>
+#include <linux/memcontrol.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/module.h>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 29d44af6c426..928b90cdb5ba 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -96,6 +96,8 @@ static bool cgroup_memory_nokmem __ro_after_init;
/* BPF memory accounting disabled? */
static bool cgroup_memory_nobpf __ro_after_init;
+static struct workqueue_struct *memcg_wq __ro_after_init;
+
static struct kmem_cache *memcg_cachep;
static struct kmem_cache *memcg_pn_cachep;
@@ -1979,7 +1981,7 @@ static void schedule_drain_work(int cpu, struct work_struct *work)
{
housekeeping_lock();
if (!cpu_is_isolated(cpu))
- schedule_work_on(cpu, work);
+ queue_work_on(cpu, memcg_wq, work);
housekeeping_unlock();
}
@@ -5140,6 +5142,11 @@ void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
refill_stock(memcg, nr_pages);
}
+void mem_cgroup_flush_workqueue(void)
+{
+ flush_workqueue(memcg_wq);
+}
+
static int __init cgroup_memory(char *s)
{
char *token;
@@ -5182,6 +5189,9 @@ int __init mem_cgroup_init(void)
cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
memcg_hotplug_cpu_dead);
+ memcg_wq = alloc_workqueue("memcg", 0, 0);
+ WARN_ON(!memcg_wq);
+
for_each_possible_cpu(cpu) {
INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
drain_local_memcg_stock);
--
2.48.1
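The reason a dedicated workqueue helps is that flushing is a per-queue operation: a queue holding only memcg drain work can be drained to completion without waiting on unrelated system work. The contract can be sketched with a tiny single-threaded queue; this is a toy model, not the kernel workqueue implementation, and all names are illustrative.

```c
#include <assert.h>

#define WQ_CAP 8

/* Toy workqueue: a ring of pending function pointers. */
struct toy_wq {
	void (*fn[WQ_CAP])(void);
	int head, tail;
};

static int drained;

/* Models queue_work_on(): records the work as pending. */
static void toy_queue_work(struct toy_wq *wq, void (*fn)(void))
{
	wq->fn[wq->tail++ % WQ_CAP] = fn;
}

/*
 * Models flush_workqueue(): runs every pending item before returning,
 * so the caller knows no drain work is still pending or executing.
 */
static void toy_flush_workqueue(struct toy_wq *wq)
{
	while (wq->head != wq->tail)
		wq->fn[wq->head++ % WQ_CAP]();
}

/* Stand-in for drain_local_memcg_stock(). */
static void toy_drain_local_stock(void)
{
	drained++;
}
```

Had the drain work been queued to a shared queue, the same flush would have to wait for every unrelated pending item as well, which is why the patch gives memcg its own pool.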
* [PATCH 18/27] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
[not found] <20250620152308.27492-1-frederic@kernel.org>
` (3 preceding siblings ...)
2025-06-20 15:22 ` [PATCH 16/27] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
@ 2025-06-20 15:22 ` Frederic Weisbecker
2025-06-20 15:23 ` [PATCH 19/27] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
2025-06-20 15:23 ` [PATCH 26/27] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:22 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Ingo Molnar,
Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
Peter Zijlstra, Tejun Heo, Thomas Gleixner, Vlastimil Babka,
Waiman Long, cgroups
Until now, cpuset would propagate isolated partition changes to
workqueues so that unbound workers get properly reaffined.
Since housekeeping now centralizes, synchronizes and propagates isolation
cpumask changes, perform that work from the housekeeping subsystem
instead, for consolidation and consistency.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/workqueue.h | 2 +-
init/Kconfig | 1 +
kernel/cgroup/cpuset.c | 14 ++++++--------
kernel/sched/isolation.c | 4 +++-
kernel/workqueue.c | 2 +-
5 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 6e30f275da77..8a32c594bba1 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -581,7 +581,7 @@ struct workqueue_attrs *alloc_workqueue_attrs(void);
void free_workqueue_attrs(struct workqueue_attrs *attrs);
int apply_workqueue_attrs(struct workqueue_struct *wq,
const struct workqueue_attrs *attrs);
-extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);
+extern int workqueue_unbound_exclude_cpumask(const struct cpumask *cpumask);
extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work);
diff --git a/init/Kconfig b/init/Kconfig
index af4c2f085455..b7cbb6e01e8d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1205,6 +1205,7 @@ config CPUSETS
bool "Cpuset controller"
depends on SMP
select UNION_FIND
+ select CPU_ISOLATION
help
This option will let you create and manage CPUSETs which
allow dynamically partitioning a system into sets of CPUs and
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 5f169a56f06c..98b1ea0ad336 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1340,7 +1340,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
return isolcpus_updated;
}
-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+static void update_housekeeping_cpumask(bool isolcpus_updated)
{
int ret;
@@ -1349,8 +1349,6 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
if (!isolcpus_updated)
return;
- ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
- WARN_ON_ONCE(ret < 0);
ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
WARN_ON_ONCE(ret < 0);
}
@@ -1473,7 +1471,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
list_add(&cs->remote_sibling, &remote_children);
cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
cpuset_force_rebuild();
cs->prs_err = 0;
@@ -1514,7 +1512,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_exclusive_cpumask(cs, NULL, NULL);
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
cpuset_force_rebuild();
/*
@@ -1583,7 +1581,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
if (xcpus)
cpumask_copy(cs->exclusive_cpus, xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
if (adding || deleting)
cpuset_force_rebuild();
@@ -1947,7 +1945,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive_flag(cs, new_prs);
@@ -2972,7 +2970,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
else if (isolcpus_updated)
isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_housekeeping_cpumask(isolcpus_updated);
/* Force update if switching back to member & update effective_xcpus */
update_cpumasks_hier(cs, &tmpmask, !new_prs);
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 0119685796be..e4e4fcd4cb2c 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -116,6 +116,7 @@ EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
int housekeeping_update(struct cpumask *mask, enum hk_type type)
{
struct cpumask *trial, *old = NULL;
+ int err;
if (type != HK_TYPE_DOMAIN)
return -ENOTSUPP;
@@ -142,10 +143,11 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
mem_cgroup_flush_workqueue();
vmstat_flush_workqueue();
+ err = workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type));
kfree(old);
- return 0;
+ return err;
}
void __init housekeeping_init(void)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 97f37b5bae66..e55fcf980c5d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6948,7 +6948,7 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
* This function can be called from cpuset code to provide a set of isolated
* CPUs that should be excluded from wq_unbound_cpumask.
*/
-int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
+int workqueue_unbound_exclude_cpumask(const struct cpumask *exclude_cpumask)
{
cpumask_var_t cpumask;
int ret = 0;
--
2.48.1
* [PATCH 19/27] cpuset: Remove cpuset_cpu_is_isolated()
[not found] <20250620152308.27492-1-frederic@kernel.org>
` (4 preceding siblings ...)
2025-06-20 15:22 ` [PATCH 18/27] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
@ 2025-06-20 15:23 ` Frederic Weisbecker
2025-06-20 15:23 ` [PATCH 26/27] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:23 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Johannes Weiner,
Marco Crivellari, Michal Hocko, Peter Zijlstra, Tejun Heo,
Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
The set of cpuset isolated CPUs is now included in the HK_TYPE_DOMAIN
housekeeping cpumask. No use case is left that is interested in checking
only what is isolated by cpuset and not by the isolcpus= kernel boot
parameter.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/cpuset.h | 6 ------
include/linux/sched/isolation.h | 3 +--
kernel/cgroup/cpuset.c | 12 ------------
3 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 051d36fec578..a10775a4f702 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -78,7 +78,6 @@ extern void cpuset_lock(void);
extern void cpuset_unlock(void);
extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
-extern bool cpuset_cpu_is_isolated(int cpu);
extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
#define cpuset_current_mems_allowed (current->mems_allowed)
void cpuset_init_current_mems_allowed(void);
@@ -208,11 +207,6 @@ static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
return false;
}
-static inline bool cpuset_cpu_is_isolated(int cpu)
-{
- return false;
-}
-
static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
{
return node_possible_map;
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index f1b309f18511..9f039dfb5739 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -89,8 +89,7 @@ static inline void housekeeping_init(void) { }
static inline bool cpu_is_isolated(int cpu)
{
return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
- !housekeeping_test_cpu(cpu, HK_TYPE_TICK) ||
- cpuset_cpu_is_isolated(cpu);
+ !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}
DEFINE_LOCK_GUARD_0(housekeeping, housekeeping_lock(), housekeeping_unlock())
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 98b1ea0ad336..db80e72681ed 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -29,7 +29,6 @@
#include <linux/mempolicy.h>
#include <linux/mm.h>
#include <linux/memory.h>
-#include <linux/export.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/deadline.h>
@@ -1353,17 +1352,6 @@ static void update_housekeeping_cpumask(bool isolcpus_updated)
WARN_ON_ONCE(ret < 0);
}
-/**
- * cpuset_cpu_is_isolated - Check if the given CPU is isolated
- * @cpu: the CPU number to be checked
- * Return: true if CPU is used in an isolated partition, false otherwise
- */
-bool cpuset_cpu_is_isolated(int cpu)
-{
- return cpumask_test_cpu(cpu, isolated_cpus);
-}
-EXPORT_SYMBOL_GPL(cpuset_cpu_is_isolated);
-
/*
* compute_effective_exclusive_cpumask - compute effective exclusive CPUs
* @cs: cpuset
--
2.48.1
* [PATCH 26/27] kthread: Honour kthreads preferred affinity after cpuset changes
[not found] <20250620152308.27492-1-frederic@kernel.org>
` (5 preceding siblings ...)
2025-06-20 15:23 ` [PATCH 19/27] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
@ 2025-06-20 15:23 ` Frederic Weisbecker
6 siblings, 0 replies; 8+ messages in thread
From: Frederic Weisbecker @ 2025-06-20 15:23 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Ingo Molnar,
Johannes Weiner, Marco Crivellari, Michal Hocko, Peter Zijlstra,
Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long, cgroups
When cpuset isolated partitions get updated, unbound kthreads are
indiscriminately affined to all non-isolated CPUs, regardless of their
individual affinity preferences.
For example, kswapd is a per-node kthread that prefers to be affine to
the node it refers to. Whenever an isolated partition is created,
updated or deleted, kswapd's node affinity is going to be broken if any
CPU in the related node is not isolated, because kswapd will be affined
globally.
Fix this by letting the consolidated kthread affinity management code
perform the affinity update on behalf of cpuset.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
include/linux/kthread.h | 1 +
kernel/cgroup/cpuset.c | 5 ++---
kernel/kthread.c | 38 +++++++++++++++++++++++++++++---------
kernel/sched/isolation.c | 2 ++
4 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8d27403888ce..c92c1149ee6e 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -100,6 +100,7 @@ void kthread_unpark(struct task_struct *k);
void kthread_parkme(void);
void kthread_exit(long result) __noreturn;
void kthread_complete_and_exit(struct completion *, long) __noreturn;
+int kthreads_update_housekeeping(void);
int kthreadd(void *unused);
extern struct task_struct *kthreadd_task;
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index db80e72681ed..99ee187d941b 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1130,11 +1130,10 @@ void cpuset_update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
if (top_cs) {
/*
+ * PF_KTHREAD tasks are handled by housekeeping.
* PF_NO_SETAFFINITY tasks are ignored.
- * All per cpu kthreads should have PF_NO_SETAFFINITY
- * flag set, see kthread_set_per_cpu().
*/
- if (task->flags & PF_NO_SETAFFINITY)
+ if (task->flags & (PF_KTHREAD | PF_NO_SETAFFINITY))
continue;
cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
} else {
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 42cd6e119335..8c1268c2cee9 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -896,14 +896,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
return ret;
}
-/*
- * Re-affine kthreads according to their preferences
- * and the newly online CPU. The CPU down part is handled
- * by select_fallback_rq() which default re-affines to
- * housekeepers from other nodes in case the preferred
- * affinity doesn't apply anymore.
- */
-static int kthreads_online_cpu(unsigned int cpu)
+static int kthreads_update_affinity(bool force)
{
cpumask_var_t affinity;
struct kthread *k;
@@ -926,7 +919,7 @@ static int kthreads_online_cpu(unsigned int cpu)
continue;
}
- if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+ if (force || k->preferred_affinity || k->node != NUMA_NO_NODE) {
kthread_fetch_affinity(k, affinity);
set_cpus_allowed_ptr(k->task, affinity);
}
@@ -937,6 +930,33 @@ static int kthreads_online_cpu(unsigned int cpu)
return ret;
}
+/**
+ * kthreads_update_housekeeping - Update kthreads affinity on cpuset change
+ *
+ * When cpuset changes a partition type to/from "isolated" or updates related
+ * cpumasks, propagate the housekeeping cpumask change to preferred kthreads
+ * affinity.
+ *
+ * Returns 0 if successful, -ENOMEM if temporary mask couldn't
+ * be allocated or -EINVAL in case of internal error.
+ */
+int kthreads_update_housekeeping(void)
+{
+ return kthreads_update_affinity(true);
+}
+
+/*
+ * Re-affine kthreads according to their preferences
+ * and the newly online CPU. The CPU down part is handled
+ * by select_fallback_rq() which default re-affines to
+ * housekeepers from other nodes in case the preferred
+ * affinity doesn't apply anymore.
+ */
+static int kthreads_online_cpu(unsigned int cpu)
+{
+ return kthreads_update_affinity(false);
+}
+
static int kthreads_init(void)
{
return cpuhp_setup_state(CPUHP_AP_KTHREADS_ONLINE, "kthreads:online",
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index e4e4fcd4cb2c..2750b80a5511 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -144,6 +144,8 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
mem_cgroup_flush_workqueue();
vmstat_flush_workqueue();
err = workqueue_unbound_exclude_cpumask(housekeeping_cpumask(type));
+ WARN_ON_ONCE(err < 0);
+ err = kthreads_update_housekeeping();
kfree(old);
--
2.48.1
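The affinity rule the patch preserves, where a kthread's effective cpumask is its preferred mask intersected with the housekeeping mask, falling back to the housekeeping mask when the intersection is empty, can be sketched as follows. This is a toy model of the behavior of kthread_fetch_affinity(), not the kernel implementation; the names are illustrative.

```c
#include <assert.h>

typedef unsigned long mask_t;  /* toy stand-in for struct cpumask */

/*
 * Effective affinity for a kthread with a preferred mask (e.g. kswapd's
 * node mask): keep the preferred CPUs that are still housekeeping, and
 * only fall back to all housekeeping CPUs if none remain.
 */
static mask_t toy_effective_affinity(mask_t preferred, mask_t housekeeping)
{
	mask_t m = preferred & housekeeping;

	return m ? m : housekeeping;
}
```

This is what the patch restores: instead of cpuset blindly affining every kthread to all non-isolated CPUs, the kthread code recomputes each thread's mask from its own preference, so kswapd stays on its node whenever any CPU of that node remains housekeeping.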
* Re: [PATCH 16/27] sched/isolation: Flush memcg workqueues on cpuset isolated partition change
2025-06-20 15:22 ` [PATCH 16/27] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
@ 2025-06-20 19:30 ` Shakeel Butt
0 siblings, 0 replies; 8+ messages in thread
From: Shakeel Butt @ 2025-06-20 19:30 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: LKML, Andrew Morton, Ingo Molnar, Johannes Weiner,
Marco Crivellari, Michal Hocko, Michal Hocko, Muchun Song,
Peter Zijlstra, Roman Gushchin, Tejun Heo, Thomas Gleixner,
Vlastimil Babka, Waiman Long, cgroups, linux-mm
On Fri, Jun 20, 2025 at 05:22:57PM +0200, Frederic Weisbecker wrote:
> The HK_TYPE_DOMAIN housekeeping cpumask is now modifyable at runtime. In
> order to synchronize against memcg workqueue to make sure that no
> asynchronous draining is still pending or executing on a newly made
> isolated CPU, the housekeeping susbsystem must flush the memcg
> workqueues.
>
> However the memcg workqueues can't be flushed easily since they are
> queued to the main per-CPU workqueue pool.
>
> Solve this with creating a memcg specific pool and provide and use the
> appropriate flushing API.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>