netdev.vger.kernel.org archive mirror
* [PATCH 08/33] net: Keep ignoring isolated cpuset change
  2025-10-13 20:31 [PATCH 00/33 v3] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
@ 2025-10-13 20:31 ` Frederic Weisbecker
  0 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-10-13 20:31 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The RPS cpumask can be overridden through sysfs/sysctl. The boot-defined
isolated CPUs are then excluded from that cpumask.

However HK_TYPE_DOMAIN will soon integrate cpuset isolated CPU updates,
and the RPS infrastructure needs more thought to be able to propagate
such changes and synchronize against them.

Keep handling only what was passed through "isolcpus=" for now.
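
For context, roughly what the sysfs store path does with the user
supplied mask before handing it to netdev_rx_queue_set_rps_mask() (a
condensed sketch, error unwinding elided):

  cpumask_var_t mask;
  int err;

  if (!alloc_cpumask_var(&mask, GFP_KERNEL))
          return -ENOMEM;

  /* Parse the user supplied RPS mask from the sysfs buffer */
  err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits);
  if (!err)
          /*
           * Silently drop CPUs outside of the boot-time housekeeping
           * set; fail with -EINVAL if nothing usable remains.
           */
          err = rps_cpumask_housekeeping(mask);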

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 net/core/net-sysfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index ca878525ad7c..07624b682b08 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1022,7 +1022,7 @@ static int netdev_rx_queue_set_rps_mask(struct netdev_rx_queue *queue,
 int rps_cpumask_housekeeping(struct cpumask *mask)
 {
 	if (!cpumask_empty(mask)) {
-		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
 		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ));
 		if (cpumask_empty(mask))
 			return -EINVAL;
-- 
2.51.0



* [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity
@ 2025-12-24 13:44 Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
                   ` (32 more replies)
  0 siblings, 33 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Gabriele Monaco, Chen Ridong, Michal Koutny,
	linux-arm-kernel, linux-block, Ingo Molnar, David S . Miller,
	Greg Kroah-Hartman, Michal Hocko, Roman Gushchin, Peter Zijlstra,
	Bjorn Helgaas, Catalin Marinas, Phil Auld, Andrew Morton,
	Paolo Abeni, Rafael J . Wysocki, Will Deacon, Thomas Gleixner,
	Lai Jiangshan, Waiman Long, Vlastimil Babka, Eric Dumazet,
	Jakub Kicinski, Muchun Song, netdev, Danilo Krummrich,
	Johannes Weiner, linux-mm, Jens Axboe, Marco Crivellari,
	Tejun Heo, Shakeel Butt, Simon Horman, cgroups, linux-pci

Hi,

The kthread code was recently enhanced to provide an infrastructure
which manages the preferred affinity of unbound kthreads (node or custom
cpumask) against housekeeping constraints and CPU hotplug events.

One crucial missing piece is cpuset: when an isolated partition is
created, deleted, or its CPUs updated, all the unbound kthreads in the
top cpuset are affined to _all_ the non-isolated CPUs, possibly breaking
their preferred affinity along the way.

Solve this by performing the kthreads affinity update from the
consolidated relevant kthread code instead of from cpuset, so that
preferred affinities are honoured.

The dispatch of the new cpumasks to workqueues and kthreads is performed
by housekeeping, as per Tejun's nice suggestion.

As a welcome side effect, HK_TYPE_DOMAIN then integrates both the set
from isolcpus= and cpuset isolated partitions. Housekeeping cpumasks are
now modifiable with specific synchronization. This is a big step toward
making nohz_full= also mutable through cpuset in the future.

Changes since v4:

* Add more tags

* Rebase on v6.19-rc2 with latest cpuset changes

* Accommodate timers migration isolation

* Rename housekeeping_update() parameter from mask to isol_mask (Chen Ridong)

* Link housekeeping documentation to core-api

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	kthread/core-v5

HEAD: 3c0ee047f05f361f215521424f5e789dfffcafc1

Merry Christmas,
	Frederic
---

Frederic Weisbecker (33):
      PCI: Prepare to protect against concurrent isolated cpuset change
      cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug"
      memcg: Prepare to protect against concurrent isolated cpuset change
      mm: vmstat: Prepare to protect against concurrent isolated cpuset change
      sched/isolation: Save boot defined domain flags
      cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
      driver core: cpu: Convert /sys/devices/system/cpu/isolated to use HK_TYPE_DOMAIN_BOOT
      net: Keep ignoring isolated cpuset change
      block: Protect against concurrent isolated cpuset change
      timers/migration: Prevent from lockdep false positive warning
      cpu: Provide lockdep check for CPU hotplug lock write-held
      cpuset: Provide lockdep check for cpuset lock held
      sched/isolation: Convert housekeeping cpumasks to rcu pointers
      cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
      sched/isolation: Flush memcg workqueues on cpuset isolated partition change
      sched/isolation: Flush vmstat workqueues on cpuset isolated partition change
      PCI: Flush PCI probe workqueue on cpuset isolated partition change
      cpuset: Propagate cpuset isolation update to workqueue through housekeeping
      cpuset: Propagate cpuset isolation update to timers through housekeeping
      timers/migration: Remove superfluous cpuset isolation test
      cpuset: Remove cpuset_cpu_is_isolated()
      sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
      PCI: Remove superfluous HK_TYPE_WQ check
      kthread: Refine naming of affinity related fields
      kthread: Include unbound kthreads in the managed affinity list
      kthread: Include kthreadd to the managed affinity list
      kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management
      sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN
      sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
      kthread: Honour kthreads preferred affinity after cpuset changes
      kthread: Comment on the purpose and placement of kthread_affine_node() call
      kthread: Document kthread_affine_preferred()
      doc: Add housekeeping documentation

 Documentation/core-api/housekeeping.rst | 111 ++++++++++++++++++++++
 Documentation/core-api/index.rst        |   1 +
 arch/arm64/kernel/cpufeature.c          |  18 +++-
 block/blk-mq.c                          |   6 +-
 drivers/base/cpu.c                      |   2 +-
 drivers/pci/pci-driver.c                |  71 ++++++++++----
 include/linux/cpu.h                     |   4 +
 include/linux/cpuhplock.h               |   1 +
 include/linux/cpuset.h                  |   8 +-
 include/linux/kthread.h                 |   1 +
 include/linux/memcontrol.h              |   4 +
 include/linux/mmu_context.h             |   2 +-
 include/linux/pci.h                     |   3 +
 include/linux/percpu-rwsem.h            |   1 +
 include/linux/sched/isolation.h         |  16 +++-
 include/linux/vmstat.h                  |   2 +
 include/linux/workqueue.h               |   2 +-
 init/Kconfig                            |   1 +
 kernel/cgroup/cpuset.c                  |  68 +++++++-------
 kernel/cpu.c                            |  42 ++++-----
 kernel/kthread.c                        | 160 +++++++++++++++++++++-----------
 kernel/sched/isolation.c                | 144 +++++++++++++++++++++++-----
 kernel/sched/sched.h                    |   4 +
 kernel/time/timer_migration.c           |  25 +++--
 kernel/workqueue.c                      |  17 ++--
 mm/memcontrol.c                         |  25 ++++-
 mm/vmstat.c                             |  15 ++-
 net/core/net-sysfs.c                    |   2 +-
 28 files changed, 554 insertions(+), 202 deletions(-)


* [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-29  3:23   ` Zhang Qiao
  2025-12-24 13:44 ` [PATCH 02/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

HK_TYPE_DOMAIN will soon integrate cpuset isolated partitions and
therefore be made modifiable at runtime. Synchronize against the cpumask
update using RCU.

The RCU read side section covers both the housekeeping CPU target
election for the PCI probe work and the work enqueue.

This way the housekeeping update side will simply need to flush the
related pending work items after updating the housekeeping mask, in
order to make sure that no PCI work ever executes on an isolated CPU.
This part will be handled in a subsequent patch.
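
For reference, a condensed sketch of the update side pairing this
prepares for (names and exact flush target approximate, per the later
patch):

  /* Housekeeping update side */
  rcu_assign_pointer(housekeeping.cpumasks[HK_TYPE_DOMAIN], new_mask);
  synchronize_rcu();              /* every new reader sees new_mask */
  flush_workqueue(system_wq);     /* drain works queued to stale targets */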

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 drivers/pci/pci-driver.c | 47 ++++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 7c2d9d596258..786d6ce40999 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -302,9 +302,8 @@ struct drv_dev_and_id {
 	const struct pci_device_id *id;
 };
 
-static long local_pci_probe(void *_ddi)
+static int local_pci_probe(struct drv_dev_and_id *ddi)
 {
-	struct drv_dev_and_id *ddi = _ddi;
 	struct pci_dev *pci_dev = ddi->dev;
 	struct pci_driver *pci_drv = ddi->drv;
 	struct device *dev = &pci_dev->dev;
@@ -338,6 +337,19 @@ static long local_pci_probe(void *_ddi)
 	return 0;
 }
 
+struct pci_probe_arg {
+	struct drv_dev_and_id *ddi;
+	struct work_struct work;
+	int ret;
+};
+
+static void local_pci_probe_callback(struct work_struct *work)
+{
+	struct pci_probe_arg *arg = container_of(work, struct pci_probe_arg, work);
+
+	arg->ret = local_pci_probe(arg->ddi);
+}
+
 static bool pci_physfn_is_probed(struct pci_dev *dev)
 {
 #ifdef CONFIG_PCI_IOV
@@ -362,34 +374,51 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	dev->is_probed = 1;
 
 	cpu_hotplug_disable();
-
 	/*
 	 * Prevent nesting work_on_cpu() for the case where a Virtual Function
 	 * device is probed from work_on_cpu() of the Physical device.
 	 */
 	if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
 	    pci_physfn_is_probed(dev)) {
-		cpu = nr_cpu_ids;
+		error = local_pci_probe(&ddi);
 	} else {
 		cpumask_var_t wq_domain_mask;
+		struct pci_probe_arg arg = { .ddi = &ddi };
+
+		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
 
 		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
 			error = -ENOMEM;
 			goto out;
 		}
+
+		/*
+		 * The target election and the enqueue of the work must be within
+		 * the same RCU read side section so that when the workqueue pool
+		 * is flushed after a housekeeping cpumask update, further readers
+		 * are guaranteed to queue the probing work to the appropriate
+		 * targets.
+		 */
+		rcu_read_lock();
 		cpumask_and(wq_domain_mask,
 			    housekeeping_cpumask(HK_TYPE_WQ),
 			    housekeeping_cpumask(HK_TYPE_DOMAIN));
 
 		cpu = cpumask_any_and(cpumask_of_node(node),
 				      wq_domain_mask);
+		if (cpu < nr_cpu_ids) {
+			schedule_work_on(cpu, &arg.work);
+			rcu_read_unlock();
+			flush_work(&arg.work);
+			error = arg.ret;
+		} else {
+			rcu_read_unlock();
+			error = local_pci_probe(&ddi);
+		}
+
 		free_cpumask_var(wq_domain_mask);
+		destroy_work_on_stack(&arg.work);
 	}
-
-	if (cpu < nr_cpu_ids)
-		error = work_on_cpu(cpu, local_pci_probe, &ddi);
-	else
-		error = local_pci_probe(&ddi);
 out:
 	dev->is_probed = 0;
 	cpu_hotplug_enable();
-- 
2.51.1



* [PATCH 02/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug"
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
                   ` (30 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

1) The commit:

	2b8272ff4a70 ("cpu/hotplug: Prevent self deadlock on CPU hot-unplug")

was added to fix an issue where the hotplug control task (BP) was
throttled between CPUHP_AP_IDLE_DEAD and CPUHP_HRTIMERS_PREPARE waiting
in the hrtimer blindspot for the bandwidth callback queued in the dead
CPU.

2) Later on, the commit:

	38685e2a0476 ("cpu/hotplug: Don't offline the last non-isolated CPU")

hooked into the target selection for the workqueue-offloaded CPU down
process to prevent offlining the last CPU attached to scheduler domains.

3) Finally:

	5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")

entirely removed the conditions for the race exposed and partially fixed
in 1). Offloading the CPU down process to a workqueue on another CPU
then becomes unnecessary. But the last CPU belonging to scheduler
domains must still remain online.

Therefore revert the now obsolete commit
2b8272ff4a70b866106ae13c36be7ecbef5d5da2 and move the housekeeping check
under the write-held cpu_hotplug_lock. Since HK_TYPE_DOMAIN will include
both isolcpus= CPUs and cpuset isolated partitions, the hotplug lock
will synchronize against concurrent cpuset partition updates.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/cpu.c | 37 +++++++++++--------------------------
 1 file changed, 11 insertions(+), 26 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 8df2d773fe3b..40b8496f47c5 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1410,6 +1410,16 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
 
 	cpus_write_lock();
 
+	/*
+	 * Keep at least one housekeeping cpu onlined to avoid generating
+	 * an empty sched_domain span.
+	 */
+	if (cpumask_any_and(cpu_online_mask,
+			    housekeeping_cpumask(HK_TYPE_DOMAIN)) >= nr_cpu_ids) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	cpuhp_tasks_frozen = tasks_frozen;
 
 	prev_state = cpuhp_set_state(cpu, st, target);
@@ -1456,22 +1466,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
 	return ret;
 }
 
-struct cpu_down_work {
-	unsigned int		cpu;
-	enum cpuhp_state	target;
-};
-
-static long __cpu_down_maps_locked(void *arg)
-{
-	struct cpu_down_work *work = arg;
-
-	return _cpu_down(work->cpu, 0, work->target);
-}
-
 static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
 {
-	struct cpu_down_work work = { .cpu = cpu, .target = target, };
-
 	/*
 	 * If the platform does not support hotplug, report it explicitly to
 	 * differentiate it from a transient offlining failure.
@@ -1480,18 +1476,7 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
 		return -EOPNOTSUPP;
 	if (cpu_hotplug_disabled)
 		return -EBUSY;
-
-	/*
-	 * Ensure that the control task does not run on the to be offlined
-	 * CPU to prevent a deadlock against cfs_b->period_timer.
-	 * Also keep at least one housekeeping cpu onlined to avoid generating
-	 * an empty sched_domain span.
-	 */
-	for_each_cpu_and(cpu, cpu_online_mask, housekeeping_cpumask(HK_TYPE_DOMAIN)) {
-		if (cpu != work.cpu)
-			return work_on_cpu(cpu, __cpu_down_maps_locked, &work);
-	}
-	return -EBUSY;
+	return _cpu_down(cpu, 0, target);
 }
 
 static int cpu_down(unsigned int cpu, enum cpuhp_state target)
-- 
2.51.1



* [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 02/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-26 23:56   ` Tejun Heo
  2025-12-24 13:44 ` [PATCH 04/33] mm: vmstat: " Frederic Weisbecker
                   ` (29 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The HK_TYPE_DOMAIN housekeeping cpumask will soon be made modifiable at
runtime. In order to synchronize against the memcg workqueue and make
sure that no asynchronous draining is pending or executing on a newly
isolated CPU, elect the target and queue the drain work within the same
RCU critical section.

Whenever housekeeping updates the HK_TYPE_DOMAIN cpumask, a memcg
workqueue flush will also be issued in a further change to make sure
that no work remains pending after a CPU has been made isolated.
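
Note that guard(rcu)() holds the RCU read lock until the end of the
scope, so the new helper is equivalent to the open coded:

  static void schedule_drain_work(int cpu, struct work_struct *work)
  {
          rcu_read_lock();
          if (!cpu_is_isolated(cpu))
                  schedule_work_on(cpu, work);
          rcu_read_unlock();
  }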

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 mm/memcontrol.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index be810c1fbfc3..c3c473c3dfca 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2003,6 +2003,13 @@ static bool is_memcg_drain_needed(struct memcg_stock_pcp *stock,
 	return flush;
 }
 
+static void schedule_drain_work(int cpu, struct work_struct *work)
+{
+	guard(rcu)();
+	if (!cpu_is_isolated(cpu))
+		schedule_work_on(cpu, work);
+}
+
 /*
  * Drains all per-CPU charge caches for given root_memcg resp. subtree
  * of the hierarchy under it.
@@ -2032,8 +2039,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
 				      &memcg_st->flags)) {
 			if (cpu == curcpu)
 				drain_local_memcg_stock(&memcg_st->work);
-			else if (!cpu_is_isolated(cpu))
-				schedule_work_on(cpu, &memcg_st->work);
+			else
+				schedule_drain_work(cpu, &memcg_st->work);
 		}
 
 		if (!test_bit(FLUSHING_CACHED_CHARGE, &obj_st->flags) &&
@@ -2042,8 +2049,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
 				      &obj_st->flags)) {
 			if (cpu == curcpu)
 				drain_local_obj_stock(&obj_st->work);
-			else if (!cpu_is_isolated(cpu))
-				schedule_work_on(cpu, &obj_st->work);
+			else
+				schedule_drain_work(cpu, &obj_st->work);
 		}
 	}
 	migrate_enable();
-- 
2.51.1



* [PATCH 04/33] mm: vmstat: Prepare to protect against concurrent isolated cpuset change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 05/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
                   ` (28 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The HK_TYPE_DOMAIN housekeeping cpumask will soon be made modifiable at
runtime. In order to synchronize against the vmstat workqueue and make
sure that no asynchronous vmstat work is pending or executing on a newly
isolated CPU, elect the target and queue the vmstat work within the same
RCU read side critical section.

Whenever housekeeping updates the HK_TYPE_DOMAIN cpumask, a vmstat
workqueue flush will also be issued in a further change to make sure
that no work remains pending after a CPU has been made isolated.
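
Note that scoped_guard() is a loop construct, so the "continue" for an
isolated CPU leaves the guarded scope (dropping the RCU read lock) and
falls through to cond_resched(), instead of directly restarting the
for_each_online_cpu() iteration. Rough expansion of the guarded block:

  rcu_read_lock();
  if (cpu_is_isolated(cpu))
          goto unlock;    /* what "continue" turns into */
  if (!delayed_work_pending(dw) && need_update(cpu))
          queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
  unlock:
  rcu_read_unlock();
  cond_resched();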

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 mm/vmstat.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 65de88cdf40e..ed19c0d42de6 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2144,11 +2144,13 @@ static void vmstat_shepherd(struct work_struct *w)
 		 * infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
 		 * for all isolated CPUs to avoid interference with the isolated workload.
 		 */
-		if (cpu_is_isolated(cpu))
-			continue;
+		scoped_guard(rcu) {
+			if (cpu_is_isolated(cpu))
+				continue;
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
-			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+			if (!delayed_work_pending(dw) && need_update(cpu))
+				queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+		}
 
 		cond_resched();
 	}
-- 
2.51.1



* [PATCH 05/33] sched/isolation: Save boot defined domain flags
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 04/33] mm: vmstat: " Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-25 22:27   ` Waiman Long
  2025-12-24 13:44 ` [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
                   ` (27 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

HK_TYPE_DOMAIN will soon integrate not only boot-defined isolcpus= CPUs
but also cpuset isolated partitions.

Housekeeping still needs a way to record what was initially passed
to isolcpus= in order to keep those CPUs isolated after a cpuset
isolated partition containing some of them is modified or destroyed.

Create a new HK_TYPE_DOMAIN_BOOT to keep track of those.
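
The resulting invariant, once HK_TYPE_DOMAIN becomes writable later in
the series, is that the runtime mask may only shrink with respect to the
boot one; as a sketch:

  /* HK_TYPE_DOMAIN always remains a subset of HK_TYPE_DOMAIN_BOOT */
  WARN_ON_ONCE(!cpumask_subset(housekeeping_cpumask(HK_TYPE_DOMAIN),
                               housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT)));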

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
---
 include/linux/sched/isolation.h | 4 ++++
 kernel/sched/isolation.c        | 5 +++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index d8501f4709b5..109a2149e21a 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -7,8 +7,12 @@
 #include <linux/tick.h>
 
 enum hk_type {
+	/* Revert of boot-time isolcpus= argument */
+	HK_TYPE_DOMAIN_BOOT,
 	HK_TYPE_DOMAIN,
+	/* Revert of boot-time isolcpus=managed_irq argument */
 	HK_TYPE_MANAGED_IRQ,
+	/* Revert of boot-time nohz_full= or isolcpus=nohz arguments */
 	HK_TYPE_KERNEL_NOISE,
 	HK_TYPE_MAX,
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 3ad0d6df6a0a..11a623fa6320 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -11,6 +11,7 @@
 #include "sched.h"
 
 enum hk_flags {
+	HK_FLAG_DOMAIN_BOOT	= BIT(HK_TYPE_DOMAIN_BOOT),
 	HK_FLAG_DOMAIN		= BIT(HK_TYPE_DOMAIN),
 	HK_FLAG_MANAGED_IRQ	= BIT(HK_TYPE_MANAGED_IRQ),
 	HK_FLAG_KERNEL_NOISE	= BIT(HK_TYPE_KERNEL_NOISE),
@@ -239,7 +240,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
 
 		if (!strncmp(str, "domain,", 7)) {
 			str += 7;
-			flags |= HK_FLAG_DOMAIN;
+			flags |= HK_FLAG_DOMAIN | HK_FLAG_DOMAIN_BOOT;
 			continue;
 		}
 
@@ -269,7 +270,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
 
 	/* Default behaviour for isolcpus without flags */
 	if (!flags)
-		flags |= HK_FLAG_DOMAIN;
+		flags |= HK_FLAG_DOMAIN | HK_FLAG_DOMAIN_BOOT;
 
 	return housekeeping_setup(str, flags);
 }
-- 
2.51.1



* [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (4 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 05/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-25 22:31   ` Waiman Long
  2025-12-24 13:44 ` [PATCH 07/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
                   ` (26 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

boot_hk_cpus is an ad-hoc copy of HK_TYPE_DOMAIN_BOOT. Remove it and use
the official version.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/cgroup/cpuset.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 6e6eb09b8db6..3afa72f8d579 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -88,12 +88,6 @@ static cpumask_var_t	isolated_cpus;
  */
 static bool isolated_cpus_updating;
 
-/*
- * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
- */
-static cpumask_var_t	boot_hk_cpus;
-static bool		have_boot_isolcpus;
-
 /*
  * A flag to force sched domain rebuild at the end of an operation.
  * It can be set in
@@ -1453,15 +1447,16 @@ static bool isolated_cpus_can_update(struct cpumask *add_cpus,
  * @new_cpus: cpu mask
  * Return: true if there is conflict, false otherwise
  *
- * CPUs outside of boot_hk_cpus, if defined, can only be used in an
+ * CPUs outside of HK_TYPE_DOMAIN_BOOT, if defined, can only be used in an
  * isolated partition.
  */
 static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 {
-	if (!have_boot_isolcpus)
+	if (!housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
 		return false;
 
-	if ((prstate != PRS_ISOLATED) && !cpumask_subset(new_cpus, boot_hk_cpus))
+	if ((prstate != PRS_ISOLATED) &&
+	    !cpumask_subset(new_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT)))
 		return true;
 
 	return false;
@@ -3892,12 +3887,9 @@ int __init cpuset_init(void)
 
 	BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
 
-	have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
-	if (have_boot_isolcpus) {
-		BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
-		cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
-		cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
-	}
+	if (housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
+		cpumask_andnot(isolated_cpus, cpu_possible_mask,
+			       housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
 
 	return 0;
 }
-- 
2.51.1



* [PATCH 07/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated to use HK_TYPE_DOMAIN_BOOT
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (5 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 08/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
                   ` (25 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Make sure /sys/devices/system/cpu/isolated only prints what was passed
through the isolcpus= parameter before HK_TYPE_DOMAIN also integrates
cpuset isolated partitions.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 drivers/base/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index c6c57b6f61c6..3e3fa031e605 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -291,7 +291,7 @@ static ssize_t print_cpus_isolated(struct device *dev,
 		return -ENOMEM;
 
 	cpumask_andnot(isolated, cpu_possible_mask,
-		       housekeeping_cpumask(HK_TYPE_DOMAIN));
+		       housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
 	len = sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(isolated));
 
 	free_cpumask_var(isolated);
-- 
2.51.1



* [PATCH 08/33] net: Keep ignoring isolated cpuset change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (6 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 07/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 09/33] block: Protect against concurrent " Frederic Weisbecker
                   ` (24 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The RPS cpumask can be overridden through sysfs/sysctl. The boot-defined
isolated CPUs are then excluded from that cpumask.

However HK_TYPE_DOMAIN will soon integrate cpuset isolated CPU updates,
and the RPS infrastructure needs more thought to be able to propagate
such changes and synchronize against them.

Keep handling only what was passed through "isolcpus=" for now.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 net/core/net-sysfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index ca878525ad7c..07624b682b08 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1022,7 +1022,7 @@ static int netdev_rx_queue_set_rps_mask(struct netdev_rx_queue *queue,
 int rps_cpumask_housekeeping(struct cpumask *mask)
 {
 	if (!cpumask_empty(mask)) {
-		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
 		cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ));
 		if (cpumask_empty(mask))
 			return -EINVAL;
-- 
2.51.1



* [PATCH 09/33] block: Protect against concurrent isolated cpuset change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (7 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 08/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-30  0:37   ` Jens Axboe
  2025-12-24 13:44 ` [PATCH 10/33] timers/migration: Prevent from lockdep false positive warning Frederic Weisbecker
                   ` (23 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The block subsystem prevents its workqueue from running on isolated
CPUs, including those defined by cpuset isolated partitions. Since
HK_TYPE_DOMAIN will soon contain both and be subject to runtime
modifications, synchronize against housekeeping using the relevant lock.

For full support of cpuset changes, the block subsystem may need to
propagate changes to the isolated cpumask through the workqueue in the
future.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 block/blk-mq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1978eef95dca..0037af1216f3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4257,12 +4257,16 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 
 		/*
 		 * Rule out isolated CPUs from hctx->cpumask to avoid
-		 * running block kworker on isolated CPUs
+		 * running block kworker on isolated CPUs.
+		 * FIXME: cpuset should propagate further changes to isolated CPUs
+		 * here.
 		 */
+		rcu_read_lock();
 		for_each_cpu(cpu, hctx->cpumask) {
 			if (cpu_is_isolated(cpu))
 				cpumask_clear_cpu(cpu, hctx->cpumask);
 		}
+		rcu_read_unlock();
 
 		/*
 		 * Initialize batch roundrobin counts
-- 
2.51.1



* [PATCH 10/33] timers/migration: Prevent from lockdep false positive warning
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (8 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 09/33] block: Protect against concurrent " Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
                   ` (22 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Testing housekeeping_cpu() will soon require that either the RCU "lock"
or the cpuset mutex is held.

When CPUs get isolated through cpuset, the change is propagated to timer
migration such that the isolated CPUs are also removed from the
migration tree. However that propagation is done using a workqueue whose
work tests whether the target is actually isolated before proceeding.

Lockdep doesn't know that the workqueue caller holds the cpuset mutex
while waiting for the work, which makes the housekeeping cpumask read
safe.

Shut down the future warning by removing this test from the work
handler. Beyond the hotplug path it is unnecessary: the work is already
targeted at the CPUs being un-isolated.
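
A condensed sketch of the pattern lockdep can't see (the work symbol
name is made up):

  /* cpuset writer, cpuset_mutex held across the flush: */
  schedule_work_on(cpu, &unisolate_work);
  flush_work(&unisolate_work);

  /*
   * Worker: tmigr_cpu_unisolate() -> housekeeping_cpu(). The read is
   * safe because the writer waits for the work while holding
   * cpuset_mutex, but from the worker's context lockdep sees no
   * cpuset_mutex held.
   */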

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 18dda1aa782d..3879575a4975 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1497,7 +1497,7 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
 	return 0;
 }
 
-static int tmigr_set_cpu_available(unsigned int cpu)
+static int __tmigr_set_cpu_available(unsigned int cpu)
 {
 	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
 
@@ -1505,9 +1505,6 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 	if (WARN_ON_ONCE(!tmc->tmgroup))
 		return -EINVAL;
 
-	if (tmigr_is_isolated(cpu))
-		return 0;
-
 	guard(mutex)(&tmigr_available_mutex);
 
 	cpumask_set_cpu(cpu, tmigr_available_cpumask);
@@ -1523,6 +1520,14 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 	return 0;
 }
 
+static int tmigr_set_cpu_available(unsigned int cpu)
+{
+	if (tmigr_is_isolated(cpu))
+		return 0;
+
+	return __tmigr_set_cpu_available(cpu);
+}
+
 static void tmigr_cpu_isolate(struct work_struct *ignored)
 {
 	tmigr_clear_cpu_available(smp_processor_id());
@@ -1530,7 +1535,12 @@ static void tmigr_cpu_isolate(struct work_struct *ignored)
 
 static void tmigr_cpu_unisolate(struct work_struct *ignored)
 {
-	tmigr_set_cpu_available(smp_processor_id());
+	/*
+	 * Don't call tmigr_is_isolated() ->housekeeping_cpu() directly because
+	 * the cpuset mutex is correctly held by the workqueue caller but lockdep
+	 * doesn't know that.
+	 */
+	__tmigr_set_cpu_available(smp_processor_id());
 }
 
 /**
-- 
2.51.1



* [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (9 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 10/33] timers/migration: Prevent from lockdep false positive warning Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:44 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
                   ` (21 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

cpuset modifies partitions, including isolated ones, while read-holding
the CPU hotplug lock.

This means that write-holding the CPU hotplug lock is sufficient to
synchronize against housekeeping cpumask changes.

Provide a lockdep check to validate that.
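
A later patch in this series uses it as part of an RCU dereference
condition, along these lines:

  mask = rcu_dereference_check(housekeeping.cpumasks[type],
                               lockdep_is_cpus_write_held());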

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/cpuhplock.h    | 1 +
 include/linux/percpu-rwsem.h | 1 +
 kernel/cpu.c                 | 5 +++++
 3 files changed, 7 insertions(+)

diff --git a/include/linux/cpuhplock.h b/include/linux/cpuhplock.h
index f7aa20f62b87..286b3ab92e15 100644
--- a/include/linux/cpuhplock.h
+++ b/include/linux/cpuhplock.h
@@ -13,6 +13,7 @@
 struct device;
 
 extern int lockdep_is_cpus_held(void);
+extern int lockdep_is_cpus_write_held(void);
 
 #ifdef CONFIG_HOTPLUG_CPU
 void cpus_write_lock(void);
diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 288f5235649a..c8cb010d655e 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -161,6 +161,7 @@ extern void percpu_free_rwsem(struct percpu_rw_semaphore *);
 	__percpu_init_rwsem(sem, #sem, &rwsem_key);		\
 })
 
+#define percpu_rwsem_is_write_held(sem)	lockdep_is_held_type(sem, 0)
 #define percpu_rwsem_is_held(sem)	lockdep_is_held(sem)
 #define percpu_rwsem_assert_held(sem)	lockdep_assert_held(sem)
 
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 40b8496f47c5..01968a5c4a16 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -534,6 +534,11 @@ int lockdep_is_cpus_held(void)
 {
 	return percpu_rwsem_is_held(&cpu_hotplug_lock);
 }
+
+int lockdep_is_cpus_write_held(void)
+{
+	return percpu_rwsem_is_write_held(&cpu_hotplug_lock);
+}
 #endif
 
 static void lockdep_acquire_cpus_lock(void)
-- 
2.51.1



* [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (10 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
@ 2025-12-24 13:44 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
                   ` (20 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:44 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

cpuset modifies partitions, including isolated ones, while holding the
cpuset mutex.

This means that holding the cpuset mutex is sufficient to synchronize
against housekeeping cpumask changes.

Provide a lockdep check to validate that.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/cpuset.h | 2 ++
 kernel/cgroup/cpuset.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index a98d3330385c..1c49ffd2ca9b 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -18,6 +18,8 @@
 #include <linux/mmu_context.h>
 #include <linux/jump_label.h>
 
+extern bool lockdep_is_cpuset_held(void);
+
 #ifdef CONFIG_CPUSETS
 
 /*
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3afa72f8d579..5e2e3514c22e 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -283,6 +283,13 @@ void cpuset_full_unlock(void)
 	cpus_read_unlock();
 }
 
+#ifdef CONFIG_LOCKDEP
+bool lockdep_is_cpuset_held(void)
+{
+	return lockdep_is_held(&cpuset_mutex);
+}
+#endif
+
 static DEFINE_SPINLOCK(callback_lock);
 
 void cpuset_callback_lock_irq(void)
-- 
2.51.1



* [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (11 preceding siblings ...)
  2025-12-24 13:44 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
                   ` (19 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

HK_TYPE_DOMAIN's cpumask will soon be made modifiable by cpuset.
A mechanism is then needed to synchronize the updates with the
housekeeping cpumask readers.

Turn the housekeeping cpumasks into RCU pointers. Once a housekeeping
cpumask is modified, the update side will wait for an RCU grace period
and propagate the change to the interested subsystems when deemed
necessary.
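
The resulting reader/updater pattern, sketched:

  /* Reader */
  rcu_read_lock();
  mask = housekeeping_cpumask(HK_TYPE_DOMAIN);
  cpu = cpumask_any_and(mask, cpu_online_mask);
  rcu_read_unlock();

  /* Updater (runtime update introduced in a later patch) */
  rcu_assign_pointer(housekeeping.cpumasks[type], new);
  synchronize_rcu();
  kfree(old);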

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/sched/isolation.c | 58 +++++++++++++++++++++++++---------------
 kernel/sched/sched.h     |  1 +
 2 files changed, 37 insertions(+), 22 deletions(-)

diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 11a623fa6320..83be49ec2b06 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -21,7 +21,7 @@ DEFINE_STATIC_KEY_FALSE(housekeeping_overridden);
 EXPORT_SYMBOL_GPL(housekeeping_overridden);
 
 struct housekeeping {
-	cpumask_var_t cpumasks[HK_TYPE_MAX];
+	struct cpumask __rcu *cpumasks[HK_TYPE_MAX];
 	unsigned long flags;
 };
 
@@ -33,17 +33,28 @@ bool housekeeping_enabled(enum hk_type type)
 }
 EXPORT_SYMBOL_GPL(housekeeping_enabled);
 
+const struct cpumask *housekeeping_cpumask(enum hk_type type)
+{
+	if (static_branch_unlikely(&housekeeping_overridden)) {
+		if (housekeeping.flags & BIT(type)) {
+			return rcu_dereference_check(housekeeping.cpumasks[type], 1);
+		}
+	}
+	return cpu_possible_mask;
+}
+EXPORT_SYMBOL_GPL(housekeeping_cpumask);
+
 int housekeeping_any_cpu(enum hk_type type)
 {
 	int cpu;
 
 	if (static_branch_unlikely(&housekeeping_overridden)) {
 		if (housekeeping.flags & BIT(type)) {
-			cpu = sched_numa_find_closest(housekeeping.cpumasks[type], smp_processor_id());
+			cpu = sched_numa_find_closest(housekeeping_cpumask(type), smp_processor_id());
 			if (cpu < nr_cpu_ids)
 				return cpu;
 
-			cpu = cpumask_any_and_distribute(housekeeping.cpumasks[type], cpu_online_mask);
+			cpu = cpumask_any_and_distribute(housekeeping_cpumask(type), cpu_online_mask);
 			if (likely(cpu < nr_cpu_ids))
 				return cpu;
 			/*
@@ -59,28 +70,18 @@ int housekeeping_any_cpu(enum hk_type type)
 }
 EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
 
-const struct cpumask *housekeeping_cpumask(enum hk_type type)
-{
-	if (static_branch_unlikely(&housekeeping_overridden))
-		if (housekeeping.flags & BIT(type))
-			return housekeeping.cpumasks[type];
-	return cpu_possible_mask;
-}
-EXPORT_SYMBOL_GPL(housekeeping_cpumask);
-
 void housekeeping_affine(struct task_struct *t, enum hk_type type)
 {
 	if (static_branch_unlikely(&housekeeping_overridden))
 		if (housekeeping.flags & BIT(type))
-			set_cpus_allowed_ptr(t, housekeeping.cpumasks[type]);
+			set_cpus_allowed_ptr(t, housekeeping_cpumask(type));
 }
 EXPORT_SYMBOL_GPL(housekeeping_affine);
 
 bool housekeeping_test_cpu(int cpu, enum hk_type type)
 {
-	if (static_branch_unlikely(&housekeeping_overridden))
-		if (housekeeping.flags & BIT(type))
-			return cpumask_test_cpu(cpu, housekeeping.cpumasks[type]);
+	if (static_branch_unlikely(&housekeeping_overridden) && housekeeping.flags & BIT(type))
+		return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
 	return true;
 }
 EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
@@ -96,20 +97,33 @@ void __init housekeeping_init(void)
 
 	if (housekeeping.flags & HK_FLAG_KERNEL_NOISE)
 		sched_tick_offload_init();
-
+	/*
+	 * Realloc with a proper allocator so that any cpumask update
+	 * can indifferently free the old version with kfree().
+	 */
 	for_each_set_bit(type, &housekeeping.flags, HK_TYPE_MAX) {
+		struct cpumask *omask, *nmask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (WARN_ON_ONCE(!nmask))
+			return;
+
+		omask = rcu_dereference(housekeeping.cpumasks[type]);
+
 		/* We need at least one CPU to handle housekeeping work */
-		WARN_ON_ONCE(cpumask_empty(housekeeping.cpumasks[type]));
+		WARN_ON_ONCE(cpumask_empty(omask));
+		cpumask_copy(nmask, omask);
+		RCU_INIT_POINTER(housekeeping.cpumasks[type], nmask);
+		memblock_free(omask, cpumask_size());
 	}
 }
 
 static void __init housekeeping_setup_type(enum hk_type type,
 					   cpumask_var_t housekeeping_staging)
 {
+	struct cpumask *mask = memblock_alloc_or_panic(cpumask_size(), SMP_CACHE_BYTES);
 
-	alloc_bootmem_cpumask_var(&housekeeping.cpumasks[type]);
-	cpumask_copy(housekeeping.cpumasks[type],
-		     housekeeping_staging);
+	cpumask_copy(mask, housekeeping_staging);
+	RCU_INIT_POINTER(housekeeping.cpumasks[type], mask);
 }
 
 static int __init housekeeping_setup(char *str, unsigned long flags)
@@ -162,7 +176,7 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
 
 		for_each_set_bit(type, &iter_flags, HK_TYPE_MAX) {
 			if (!cpumask_equal(housekeeping_staging,
-					   housekeeping.cpumasks[type])) {
+					   housekeeping_cpumask(type))) {
 				pr_warn("Housekeeping: nohz_full= must match isolcpus=\n");
 				goto free_housekeeping_staging;
 			}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d30cca6870f5..475bdab3b8db 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -42,6 +42,7 @@
 #include <linux/ktime_api.h>
 #include <linux/lockdep_api.h>
 #include <linux/lockdep.h>
+#include <linux/memblock.h>
 #include <linux/minmax.h>
 #include <linux/mm.h>
 #include <linux/module.h>
-- 
2.51.1



* [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (12 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26  2:24   ` Waiman Long
  2025-12-26  8:08   ` Chen Ridong
  2025-12-24 13:45 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
                   ` (18 subsequent siblings)
  32 siblings, 2 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Until now, HK_TYPE_DOMAIN only accounted for the boot-defined isolated
CPUs passed through the isolcpus= option. Users also interested in the
runtime-defined isolated CPUs from cpuset must use different APIs:
cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...

There are many drawbacks to that approach:

1) Most interested subsystems want to know about all isolated CPUs, not
  just those defined at boot time.

2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized with
  concurrent cpuset changes.

3) Further cpuset modifications are not propagated to subsystems.

Solve 1) and 2) by centralizing all isolated CPUs within the
HK_TYPE_DOMAIN housekeeping cpumask.

Subsystems can rely on RCU to synchronize against concurrent changes.

The propagation mentioned in 3) will be handled in further patches.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/sched/isolation.h |  7 +++
 kernel/cgroup/cpuset.c          |  3 ++
 kernel/sched/isolation.c        | 76 ++++++++++++++++++++++++++++++---
 kernel/sched/sched.h            |  1 +
 4 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 109a2149e21a..6842a1ba4d13 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -9,6 +9,11 @@
 enum hk_type {
 	/* Revert of boot-time isolcpus= argument */
 	HK_TYPE_DOMAIN_BOOT,
+	/*
+	 * Same as HK_TYPE_DOMAIN_BOOT but also includes the
+	 * revert of cpuset isolated partitions. As such it
+	 * is always a subset of HK_TYPE_DOMAIN_BOOT.
+	 */
 	HK_TYPE_DOMAIN,
 	/* Revert of boot-time isolcpus=managed_irq argument */
 	HK_TYPE_MANAGED_IRQ,
@@ -35,6 +40,7 @@ extern const struct cpumask *housekeeping_cpumask(enum hk_type type);
 extern bool housekeeping_enabled(enum hk_type type);
 extern void housekeeping_affine(struct task_struct *t, enum hk_type type);
 extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
+extern int housekeeping_update(struct cpumask *isol_mask, enum hk_type type);
 extern void __init housekeeping_init(void);
 
 #else
@@ -62,6 +68,7 @@ static inline bool housekeeping_test_cpu(int cpu, enum hk_type type)
 	return true;
 }
 
+static inline int housekeeping_update(struct cpumask *isol_mask, enum hk_type type) { return 0; }
 static inline void housekeeping_init(void) { }
 #endif /* CONFIG_CPU_ISOLATION */
 
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 5e2e3514c22e..e13e32491ebf 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1490,6 +1490,9 @@ static void update_isolation_cpumasks(void)
 	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
 
+	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
+	WARN_ON_ONCE(ret < 0);
+
 	isolated_cpus_updating = false;
 }
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 83be49ec2b06..a124f1119f2e 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -29,18 +29,48 @@ static struct housekeeping housekeeping;
 
 bool housekeeping_enabled(enum hk_type type)
 {
-	return !!(housekeeping.flags & BIT(type));
+	return !!(READ_ONCE(housekeeping.flags) & BIT(type));
 }
 EXPORT_SYMBOL_GPL(housekeeping_enabled);
 
+static bool housekeeping_dereference_check(enum hk_type type)
+{
+	if (IS_ENABLED(CONFIG_LOCKDEP) && type == HK_TYPE_DOMAIN) {
+		/* Cpuset isn't even writable yet? */
+		if (system_state <= SYSTEM_SCHEDULING)
+			return true;
+
+		/* CPU hotplug write locked, so cpuset partition can't be overwritten */
+		if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
+			return true;
+
+		/* Cpuset lock held, partitions not writable */
+		if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
+			return true;
+
+		return false;
+	}
+
+	return true;
+}
+
+static inline struct cpumask *housekeeping_cpumask_dereference(enum hk_type type)
+{
+	return rcu_dereference_all_check(housekeeping.cpumasks[type],
+					 housekeeping_dereference_check(type));
+}
+
 const struct cpumask *housekeeping_cpumask(enum hk_type type)
 {
+	const struct cpumask *mask = NULL;
+
 	if (static_branch_unlikely(&housekeeping_overridden)) {
-		if (housekeeping.flags & BIT(type)) {
-			return rcu_dereference_check(housekeeping.cpumasks[type], 1);
-		}
+		if (READ_ONCE(housekeeping.flags) & BIT(type))
+			mask = housekeeping_cpumask_dereference(type);
 	}
-	return cpu_possible_mask;
+	if (!mask)
+		mask = cpu_possible_mask;
+	return mask;
 }
 EXPORT_SYMBOL_GPL(housekeeping_cpumask);
 
@@ -80,12 +110,46 @@ EXPORT_SYMBOL_GPL(housekeeping_affine);
 
 bool housekeeping_test_cpu(int cpu, enum hk_type type)
 {
-	if (static_branch_unlikely(&housekeeping_overridden) && housekeeping.flags & BIT(type))
+	if (static_branch_unlikely(&housekeeping_overridden) &&
+	    READ_ONCE(housekeeping.flags) & BIT(type))
 		return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
 	return true;
 }
 EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
 
+int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
+{
+	struct cpumask *trial, *old = NULL;
+
+	if (type != HK_TYPE_DOMAIN)
+		return -ENOTSUPP;
+
+	trial = kmalloc(cpumask_size(), GFP_KERNEL);
+	if (!trial)
+		return -ENOMEM;
+
+	cpumask_andnot(trial, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT), isol_mask);
+	if (!cpumask_intersects(trial, cpu_online_mask)) {
+		kfree(trial);
+		return -EINVAL;
+	}
+
+	if (!housekeeping.flags)
+		static_branch_enable(&housekeeping_overridden);
+
+	if (housekeeping.flags & BIT(type))
+		old = housekeeping_cpumask_dereference(type);
+	else
+		WRITE_ONCE(housekeeping.flags, housekeeping.flags | BIT(type));
+	rcu_assign_pointer(housekeeping.cpumasks[type], trial);
+
+	synchronize_rcu();
+
+	kfree(old);
+
+	return 0;
+}
+
 void __init housekeeping_init(void)
 {
 	enum hk_type type;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 475bdab3b8db..653e898a996a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -30,6 +30,7 @@
 #include <linux/context_tracking.h>
 #include <linux/cpufreq.h>
 #include <linux/cpumask_api.h>
+#include <linux/cpuset.h>
 #include <linux/ctype.h>
 #include <linux/file.h>
 #include <linux/fs_api.h>
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (13 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
                   ` (17 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime. In
order to make sure that no asynchronous memcg draining work is still
pending or executing on a newly made isolated CPU, the housekeeping
subsystem must flush the memcg workqueues.

However the memcg works can't be flushed easily since they are
queued to the main per-CPU workqueue pool.

Solve this by creating a memcg specific workqueue and by providing and
using the appropriate flushing API.
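
The resulting ordering against the updater then looks like this
(simplified sketch, using names from this series):

	/* housekeeping_update() */
	rcu_assign_pointer(housekeeping.cpumasks[HK_TYPE_DOMAIN], trial);
	synchronize_rcu();              /* readers of the old mask are done */
	mem_cgroup_flush_workqueue();   /* flush works they may have queued */

Any drain work racing with the update is thus either queued to a CPU
that is still housekeeping, or flushed before the isolation takes
effect.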

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/memcontrol.h |  4 ++++
 kernel/sched/isolation.c   |  2 ++
 kernel/sched/sched.h       |  1 +
 mm/memcontrol.c            | 12 +++++++++++-
 4 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0651865a4564..5b004b95648b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1037,6 +1037,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 	return id;
 }
 
+void mem_cgroup_flush_workqueue(void);
+
 extern int mem_cgroup_init(void);
 #else /* CONFIG_MEMCG */
 
@@ -1436,6 +1438,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 	return 0;
 }
 
+static inline void mem_cgroup_flush_workqueue(void) { }
+
 static inline int mem_cgroup_init(void) { return 0; }
 #endif /* CONFIG_MEMCG */
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index a124f1119f2e..bb6c36d2b97b 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -145,6 +145,8 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 
 	synchronize_rcu();
 
+	mem_cgroup_flush_workqueue();
+
 	kfree(old);
 
 	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 653e898a996a..65dfa48e54b7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
 #include <linux/lockdep_api.h>
 #include <linux/lockdep.h>
 #include <linux/memblock.h>
+#include <linux/memcontrol.h>
 #include <linux/minmax.h>
 #include <linux/mm.h>
 #include <linux/module.h>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c3c473c3dfca..89bf93cf97b9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -96,6 +96,8 @@ static bool cgroup_memory_nokmem __ro_after_init;
 /* BPF memory accounting disabled? */
 static bool cgroup_memory_nobpf __ro_after_init;
 
+static struct workqueue_struct *memcg_wq __ro_after_init;
+
 static struct kmem_cache *memcg_cachep;
 static struct kmem_cache *memcg_pn_cachep;
 
@@ -2007,7 +2009,7 @@ static void schedule_drain_work(int cpu, struct work_struct *work)
 {
 	guard(rcu)();
 	if (!cpu_is_isolated(cpu))
-		schedule_work_on(cpu, work);
+		queue_work_on(cpu, memcg_wq, work);
 }
 
 /*
@@ -5119,6 +5121,11 @@ void mem_cgroup_sk_uncharge(const struct sock *sk, unsigned int nr_pages)
 	refill_stock(memcg, nr_pages);
 }
 
+void mem_cgroup_flush_workqueue(void)
+{
+	flush_workqueue(memcg_wq);
+}
+
 static int __init cgroup_memory(char *s)
 {
 	char *token;
@@ -5161,6 +5168,9 @@ int __init mem_cgroup_init(void)
 	cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
 				  memcg_hotplug_cpu_dead);
 
+	memcg_wq = alloc_workqueue("memcg", WQ_PERCPU, 0);
+	WARN_ON(!memcg_wq);
+
 	for_each_possible_cpu(cpu) {
 		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
 			  drain_local_memcg_stock);
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 16/33] sched/isolation: Flush vmstat workqueues on cpuset isolated partition change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (14 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 17/33] PCI: Flush PCI probe workqueue " Frederic Weisbecker
                   ` (16 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime.
In order to make sure that no asynchronous vmstat work is still
pending or executing on a newly made isolated CPU, the housekeeping
subsystem must flush the vmstat workqueues.

This involves flushing the whole mm_percpu_wq workqueue, which is also
shared with the LRU drain works, a welcome side effect.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/vmstat.h   | 2 ++
 kernel/sched/isolation.c | 1 +
 kernel/sched/sched.h     | 1 +
 mm/vmstat.c              | 5 +++++
 4 files changed, 9 insertions(+)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 3398a345bda8..1909b945b3ea 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -303,6 +303,7 @@ int calculate_pressure_threshold(struct zone *zone);
 int calculate_normal_threshold(struct zone *zone);
 void set_pgdat_percpu_threshold(pg_data_t *pgdat,
 				int (*calculate_pressure)(struct zone *));
+void vmstat_flush_workqueue(void);
 #else /* CONFIG_SMP */
 
 /*
@@ -403,6 +404,7 @@ static inline void __dec_node_page_state(struct page *page,
 static inline void refresh_zone_stat_thresholds(void) { }
 static inline void cpu_vm_stats_fold(int cpu) { }
 static inline void quiet_vmstat(void) { }
+static inline void vmstat_flush_workqueue(void) { }
 
 static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_zonestat *pzstats) { }
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index bb6c36d2b97b..8aac3c9f7c7f 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -146,6 +146,7 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 	synchronize_rcu();
 
 	mem_cgroup_flush_workqueue();
+	vmstat_flush_workqueue();
 
 	kfree(old);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 65dfa48e54b7..2d0c408fca0b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -68,6 +68,7 @@
 #include <linux/types.h>
 #include <linux/u64_stats_sync_api.h>
 #include <linux/uaccess.h>
+#include <linux/vmstat.h>
 #include <linux/wait_api.h>
 #include <linux/wait_bit.h>
 #include <linux/workqueue_api.h>
diff --git a/mm/vmstat.c b/mm/vmstat.c
index ed19c0d42de6..d6e814c82952 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2124,6 +2124,11 @@ static void vmstat_shepherd(struct work_struct *w);
 
 static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
 
+void vmstat_flush_workqueue(void)
+{
+	flush_workqueue(mm_percpu_wq);
+}
+
 static void vmstat_shepherd(struct work_struct *w)
 {
 	int cpu;
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 17/33] PCI: Flush PCI probe workqueue on cpuset isolated partition change
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (15 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26  8:48   ` Chen Ridong
  2025-12-24 13:45 ` [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
                   ` (15 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime. In
order to synchronize against PCI probe works and make sure that no
asynchronous probing is still pending or executing on a newly isolated
CPU, the housekeeping subsystem must flush the PCI probe works.

However the PCI probe works can't be flushed easily since they are
queued to the main per-CPU workqueue pool.

Solve this by creating a PCI probe specific workqueue and by providing
and using the appropriate flushing API.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 drivers/pci/pci-driver.c | 17 ++++++++++++++++-
 include/linux/pci.h      |  3 +++
 kernel/sched/isolation.c |  2 ++
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 786d6ce40999..d87f781e5ce9 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -337,6 +337,8 @@ static int local_pci_probe(struct drv_dev_and_id *ddi)
 	return 0;
 }
 
+static struct workqueue_struct *pci_probe_wq;
+
 struct pci_probe_arg {
 	struct drv_dev_and_id *ddi;
 	struct work_struct work;
@@ -407,7 +409,11 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 		cpu = cpumask_any_and(cpumask_of_node(node),
 				      wq_domain_mask);
 		if (cpu < nr_cpu_ids) {
-			schedule_work_on(cpu, &arg.work);
+			struct workqueue_struct *wq = pci_probe_wq;
+
+			if (WARN_ON_ONCE(!wq))
+				wq = system_percpu_wq;
+			queue_work_on(cpu, wq, &arg.work);
 			rcu_read_unlock();
 			flush_work(&arg.work);
 			error = arg.ret;
@@ -425,6 +431,11 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	return error;
 }
 
+void pci_probe_flush_workqueue(void)
+{
+	flush_workqueue(pci_probe_wq);
+}
+
 /**
  * __pci_device_probe - check if a driver wants to claim a specific PCI device
  * @drv: driver to call to check if it wants the PCI device
@@ -1762,6 +1773,10 @@ static int __init pci_driver_init(void)
 {
 	int ret;
 
+	pci_probe_wq = alloc_workqueue("sync_wq", WQ_PERCPU, 0);
+	if (!pci_probe_wq)
+		return -ENOMEM;
+
 	ret = bus_register(&pci_bus_type);
 	if (ret)
 		return ret;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 864775651c6f..f14f467e50de 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1206,6 +1206,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
 				    struct pci_ops *ops, void *sysdata,
 				    struct list_head *resources);
 int pci_host_probe(struct pci_host_bridge *bridge);
+void pci_probe_flush_workqueue(void);
 int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
 int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
 void pci_bus_release_busn_res(struct pci_bus *b);
@@ -2079,6 +2080,8 @@ static inline int pci_has_flag(int flag) { return 0; }
 _PCI_NOP_ALL(read, *)
 _PCI_NOP_ALL(write,)
 
+static inline void pci_probe_flush_workqueue(void) { }
+
 static inline struct pci_dev *pci_get_device(unsigned int vendor,
 					     unsigned int device,
 					     struct pci_dev *from)
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 8aac3c9f7c7f..7dbe037ea8df 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -8,6 +8,7 @@
  *
  */
 #include <linux/sched/isolation.h>
+#include <linux/pci.h>
 #include "sched.h"
 
 enum hk_flags {
@@ -145,6 +146,7 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 
 	synchronize_rcu();
 
+	pci_probe_flush_workqueue();
 	mem_cgroup_flush_workqueue();
 	vmstat_flush_workqueue();
 
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (16 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 17/33] PCI: Flush PCI probe workqueue " Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 20:31   ` Waiman Long
  2025-12-27  0:18   ` Tejun Heo
  2025-12-24 13:45 ` [PATCH 19/33] cpuset: Propagate cpuset isolation update to timers " Frederic Weisbecker
                   ` (14 subsequent siblings)
  32 siblings, 2 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Until now, cpuset would propagate isolated partition changes to
workqueues so that unbound workers get properly reaffined.

Since housekeeping now centralizes, synchronizes and propagates isolation
cpumask changes, perform the work from that subsystem for consolidation
and consistency purposes.

For simplification purposes, the target function is adapted to take the
new housekeeping mask instead of the isolated mask.
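
As an illustration of the new semantics (hypothetical example with 8
CPUs): with wq_requested_unbound_cpumask spanning 0-7 and a cpuset
isolated partition made of CPUs 4-7, the housekeeping mask passed here
is 0-3, thus:

	cpumask_and(cpumask, wq_requested_unbound_cpumask, hk);      /* 0-3 */
	cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask, hk);  /* 4-7 */

matching the two updates performed in
workqueue_unbound_housekeeping_update() below.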

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/workqueue.h |  2 +-
 init/Kconfig              |  1 +
 kernel/cgroup/cpuset.c    |  9 +++------
 kernel/sched/isolation.c  |  4 +++-
 kernel/workqueue.c        | 17 ++++++++++-------
 5 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index dabc351cc127..a4749f56398f 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -588,7 +588,7 @@ struct workqueue_attrs *alloc_workqueue_attrs_noprof(void);
 void free_workqueue_attrs(struct workqueue_attrs *attrs);
 int apply_workqueue_attrs(struct workqueue_struct *wq,
 			  const struct workqueue_attrs *attrs);
-extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);
+extern int workqueue_unbound_housekeeping_update(const struct cpumask *hk);
 
 extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
 			struct work_struct *work);
diff --git a/init/Kconfig b/init/Kconfig
index fa79feb8fe57..518830fb812f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1254,6 +1254,7 @@ config CPUSETS
 	bool "Cpuset controller"
 	depends on SMP
 	select UNION_FIND
+	select CPU_ISOLATION
 	help
 	  This option will let you create and manage CPUSETs which
 	  allow dynamically partitioning a system into sets of CPUs and
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index e13e32491ebf..a492d23dd622 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1484,15 +1484,12 @@ static void update_isolation_cpumasks(void)
 
 	lockdep_assert_cpus_held();
 
-	ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
-	WARN_ON_ONCE(ret < 0);
-
-	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
-	WARN_ON_ONCE(ret < 0);
-
 	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
 	WARN_ON_ONCE(ret < 0);
 
+	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+	WARN_ON_ONCE(ret < 0);
+
 	isolated_cpus_updating = false;
 }
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 7dbe037ea8df..d224bca299ed 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -121,6 +121,7 @@ EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
 int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 {
 	struct cpumask *trial, *old = NULL;
+	int err;
 
 	if (type != HK_TYPE_DOMAIN)
 		return -ENOTSUPP;
@@ -149,10 +150,11 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 	pci_probe_flush_workqueue();
 	mem_cgroup_flush_workqueue();
 	vmstat_flush_workqueue();
+	err = workqueue_unbound_housekeeping_update(housekeeping_cpumask(type));
 
 	kfree(old);
 
-	return 0;
+	return err;
 }
 
 void __init housekeeping_init(void)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 253311af47c6..eb5660013222 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6959,13 +6959,16 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
 }
 
 /**
- * workqueue_unbound_exclude_cpumask - Exclude given CPUs from unbound cpumask
- * @exclude_cpumask: the cpumask to be excluded from wq_unbound_cpumask
+ * workqueue_unbound_housekeeping_update - Propagate housekeeping cpumask update
+ * @hk: the new housekeeping cpumask
  *
- * This function can be called from cpuset code to provide a set of isolated
- * CPUs that should be excluded from wq_unbound_cpumask.
+ * Update the unbound workqueue cpumask on top of the new housekeeping cpumask such
+ * that the effective unbound affinity is the intersection of the new housekeeping
+ * with the requested affinity set via nohz_full=/isolcpus= or sysfs.
+ *
+ * Return: 0 on success and -errno on failure.
  */
-int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
+int workqueue_unbound_housekeeping_update(const struct cpumask *hk)
 {
 	cpumask_var_t cpumask;
 	int ret = 0;
@@ -6981,14 +6984,14 @@ int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
 	 * (HK_TYPE_WQ ∩ HK_TYPE_DOMAIN) house keeping mask and rewritten
 	 * by any subsequent write to workqueue/cpumask sysfs file.
 	 */
-	if (!cpumask_andnot(cpumask, wq_requested_unbound_cpumask, exclude_cpumask))
+	if (!cpumask_and(cpumask, wq_requested_unbound_cpumask, hk))
 		cpumask_copy(cpumask, wq_requested_unbound_cpumask);
 	if (!cpumask_equal(cpumask, wq_unbound_cpumask))
 		ret = workqueue_apply_unbound_cpumask(cpumask);
 
 	/* Save the current isolated cpumask & export it via sysfs */
 	if (!ret)
-		cpumask_copy(wq_isolated_cpumask, exclude_cpumask);
+		cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask, hk);
 
 	mutex_unlock(&wq_pool_mutex);
 	free_cpumask_var(cpumask);
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 19/33] cpuset: Propagate cpuset isolation update to timers through housekeeping
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (17 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test Frederic Weisbecker
                   ` (13 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Until now, cpuset would propagate isolated partition changes to
timer migration so that unbound timers don't get migrated to isolated
CPUs.

Since housekeeping now centralizes, synchronizes and propagates isolation
cpumask changes, perform the work from that subsystem for consolidation
and consistency purposes.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/cgroup/cpuset.c   | 3 ---
 kernel/sched/isolation.c | 5 +++++
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a492d23dd622..25ac6c98113c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1487,9 +1487,6 @@ static void update_isolation_cpumasks(void)
 	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
 	WARN_ON_ONCE(ret < 0);
 
-	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
-	WARN_ON_ONCE(ret < 0);
-
 	isolated_cpus_updating = false;
 }
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index d224bca299ed..84a257d05918 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -150,7 +150,12 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 	pci_probe_flush_workqueue();
 	mem_cgroup_flush_workqueue();
 	vmstat_flush_workqueue();
+
 	err = workqueue_unbound_housekeeping_update(housekeeping_cpumask(type));
+	WARN_ON_ONCE(err < 0);
+
+	err = tmigr_isolated_exclude_cpumask(isol_mask);
+	WARN_ON_ONCE(err < 0);
 
 	kfree(old);
 
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (18 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 19/33] cpuset: Propagate cpuset isolation update to timers " Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 20:45   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
                   ` (12 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Cpuset isolated partitions are now included in the HK_TYPE_DOMAIN
housekeeping cpumask. Separately testing whether a CPU is part of an
isolated partition is therefore redundant.

Remove the superfluous test.
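
In boolean terms: cpuset isolated CPUs are now cleared from the
HK_TYPE_DOMAIN housekeeping mask, so cpuset_cpu_is_isolated(cpu) implies
!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) and the disjunct is absorbed:

	(!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) || cpuset_cpu_is_isolated(cpu))
		== !housekeeping_cpu(cpu, HK_TYPE_DOMAIN)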

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 3879575a4975..6da9cd562b20 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -466,9 +466,8 @@ static inline bool tmigr_is_isolated(int cpu)
 {
 	if (!static_branch_unlikely(&tmigr_exclude_isolated))
 		return false;
-	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
-		cpuset_cpu_is_isolated(cpu)) &&
-	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) &&
+		housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE));
 }
 
 /*
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated()
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (19 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 20:48   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
                   ` (11 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The set of cpuset isolated CPUs is now included in the HK_TYPE_DOMAIN
housekeeping cpumask. There is no use case left that is interested in
checking only what is isolated by cpuset and not by the isolcpus= kernel
boot parameter.
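
Remaining interested callers can rely on the housekeeping API instead
(sketch):

	/* before */
	if (cpuset_cpu_is_isolated(cpu))
		...
	/* after: cpuset isolated CPUs are folded into HK_TYPE_DOMAIN */
	if (!housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN))
		...

or simply use cpu_is_isolated() for the combined check.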

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/cpuset.h          |  6 ------
 include/linux/sched/isolation.h |  4 +---
 kernel/cgroup/cpuset.c          | 12 ------------
 3 files changed, 1 insertion(+), 21 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 1c49ffd2ca9b..a4aa2f1767d0 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -79,7 +79,6 @@ extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed_locked(struct task_struct *p, struct cpumask *mask);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
-extern bool cpuset_cpu_is_isolated(int cpu);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 #define cpuset_current_mems_allowed (current->mems_allowed)
 void cpuset_init_current_mems_allowed(void);
@@ -215,11 +214,6 @@ static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
 	return false;
 }
 
-static inline bool cpuset_cpu_is_isolated(int cpu)
-{
-	return false;
-}
-
 static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
 {
 	return node_possible_map;
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 6842a1ba4d13..19905adbb705 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -2,7 +2,6 @@
 #define _LINUX_SCHED_ISOLATION_H
 
 #include <linux/cpumask.h>
-#include <linux/cpuset.h>
 #include <linux/init.h>
 #include <linux/tick.h>
 
@@ -84,8 +83,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
 static inline bool cpu_is_isolated(int cpu)
 {
 	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
-	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK) ||
-	       cpuset_cpu_is_isolated(cpu);
+	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
 }
 
 #endif /* _LINUX_SCHED_ISOLATION_H */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 25ac6c98113c..cd6119c02beb 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -29,7 +29,6 @@
 #include <linux/mempolicy.h>
 #include <linux/mm.h>
 #include <linux/memory.h>
-#include <linux/export.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
 #include <linux/sched/deadline.h>
@@ -1490,17 +1489,6 @@ static void update_isolation_cpumasks(void)
 	isolated_cpus_updating = false;
 }
 
-/**
- * cpuset_cpu_is_isolated - Check if the given CPU is isolated
- * @cpu: the CPU number to be checked
- * Return: true if CPU is used in an isolated partition, false otherwise
- */
-bool cpuset_cpu_is_isolated(int cpu)
-{
-	return cpumask_test_cpu(cpu, isolated_cpus);
-}
-EXPORT_SYMBOL_GPL(cpuset_cpu_is_isolated);
-
 /**
  * rm_siblings_excl_cpus - Remove exclusive CPUs that are used by sibling cpusets
  * @parent: Parent cpuset containing all siblings
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (20 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 21:26   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 23/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
                   ` (10 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

It doesn't make sense to use nohz_full without also isolating the
related CPUs from the domain topology, either through the use of
isolcpus= or cpuset isolated partitions.

And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.

This means that the CPUs isolated by HK_TYPE_KERNEL_NOISE (of which
HK_TYPE_TICK is only an alias) should always be a subset of those
isolated by HK_TYPE_DOMAIN. In housekeeping mask terms, HK_TYPE_DOMAIN
should always be a subset of HK_TYPE_KERNEL_NOISE.

Therefore a CPU that is not HK_TYPE_KERNEL_NOISE housekeeping can't be
HK_TYPE_DOMAIN housekeeping either. Testing the latter is then enough.

Simplify cpu_is_isolated() accordingly.
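
With the HK_TYPE_DOMAIN housekeeping mask being a subset of the
HK_TYPE_TICK one, the removed test is absorbed:

	!housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	!housekeeping_test_cpu(cpu, HK_TYPE_TICK)
		== !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN)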

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/sched/isolation.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 19905adbb705..cbb1d30f699a 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -82,8 +82,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
 
 static inline bool cpu_is_isolated(int cpu)
 {
-	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
-	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
+	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN);
 }
 
 #endif /* _LINUX_SCHED_ISOLATION_H */
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 23/33] PCI: Remove superfluous HK_TYPE_WQ check
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (21 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 24/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

It doesn't make sense to use nohz_full without also isolating the
related CPUs from the domain topology, either through the use of
isolcpus= or cpuset isolated partitions.

And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.

This means that the CPUs isolated by HK_TYPE_KERNEL_NOISE (of which
HK_TYPE_WQ is only an alias) should always be a subset of those
isolated by HK_TYPE_DOMAIN. In housekeeping mask terms, HK_TYPE_DOMAIN
should always be a subset of HK_TYPE_KERNEL_NOISE.

Therefore sane configurations verify:

	HK_TYPE_KERNEL_NOISE & HK_TYPE_DOMAIN == HK_TYPE_DOMAIN

making the explicit intersection with HK_TYPE_WQ superfluous.

Simplify the PCI probe target election accordingly.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 drivers/pci/pci-driver.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index d87f781e5ce9..a9590601835a 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -384,16 +384,9 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	    pci_physfn_is_probed(dev)) {
 		error = local_pci_probe(&ddi);
 	} else {
-		cpumask_var_t wq_domain_mask;
 		struct pci_probe_arg arg = { .ddi = &ddi };
 
 		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
-
-		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
-			error = -ENOMEM;
-			goto out;
-		}
-
 		/*
 		 * The target election and the enqueue of the work must be within
 		 * the same RCU read side section so that when the workqueue pool
@@ -402,12 +395,9 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 		 * targets.
 		 */
 		rcu_read_lock();
-		cpumask_and(wq_domain_mask,
-			    housekeeping_cpumask(HK_TYPE_WQ),
-			    housekeeping_cpumask(HK_TYPE_DOMAIN));
-
 		cpu = cpumask_any_and(cpumask_of_node(node),
-				      wq_domain_mask);
+				      housekeeping_cpumask(HK_TYPE_DOMAIN));
+
 		if (cpu < nr_cpu_ids) {
 			struct workqueue_struct *wq = pci_probe_wq;
 
@@ -422,10 +412,9 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 			error = local_pci_probe(&ddi);
 		}
 
-		free_cpumask_var(wq_domain_mask);
 		destroy_work_on_stack(&arg.work);
 	}
-out:
+
 	dev->is_probed = 0;
 	cpu_hotplug_enable();
 	return error;
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 24/33] kthread: Refine naming of affinity related fields
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (22 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 23/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 21:37   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
                   ` (8 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The kthread preferred affinity related fields use "hotplug" as the base
of their naming because the affinity management was initially only meant
to deal with CPU hotplug.

The scope of this role is now going to broaden and also deal with
cpuset isolated partition updates.

Switch the naming accordingly.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 99a3808d086f..f1e4f1f35cae 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -35,8 +35,8 @@ static DEFINE_SPINLOCK(kthread_create_lock);
 static LIST_HEAD(kthread_create_list);
 struct task_struct *kthreadd_task;
 
-static LIST_HEAD(kthreads_hotplug);
-static DEFINE_MUTEX(kthreads_hotplug_lock);
+static LIST_HEAD(kthread_affinity_list);
+static DEFINE_MUTEX(kthread_affinity_lock);
 
 struct kthread_create_info
 {
@@ -69,7 +69,7 @@ struct kthread {
 	/* To store the full name if task comm is truncated. */
 	char *full_name;
 	struct task_struct *task;
-	struct list_head hotplug_node;
+	struct list_head affinity_node;
 	struct cpumask *preferred_affinity;
 };
 
@@ -128,7 +128,7 @@ bool set_kthread_struct(struct task_struct *p)
 
 	init_completion(&kthread->exited);
 	init_completion(&kthread->parked);
-	INIT_LIST_HEAD(&kthread->hotplug_node);
+	INIT_LIST_HEAD(&kthread->affinity_node);
 	p->vfork_done = &kthread->exited;
 
 	kthread->task = p;
@@ -323,10 +323,10 @@ void __noreturn kthread_exit(long result)
 {
 	struct kthread *kthread = to_kthread(current);
 	kthread->result = result;
-	if (!list_empty(&kthread->hotplug_node)) {
-		mutex_lock(&kthreads_hotplug_lock);
-		list_del(&kthread->hotplug_node);
-		mutex_unlock(&kthreads_hotplug_lock);
+	if (!list_empty(&kthread->affinity_node)) {
+		mutex_lock(&kthread_affinity_lock);
+		list_del(&kthread->affinity_node);
+		mutex_unlock(&kthread_affinity_lock);
 
 		if (kthread->preferred_affinity) {
 			kfree(kthread->preferred_affinity);
@@ -390,9 +390,9 @@ static void kthread_affine_node(void)
 			return;
 		}
 
-		mutex_lock(&kthreads_hotplug_lock);
-		WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
-		list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+		mutex_lock(&kthread_affinity_lock);
+		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
 		/*
 		 * The node cpumask is racy when read from kthread() but:
 		 * - a racing CPU going down will either fail on the subsequent
@@ -402,7 +402,7 @@ static void kthread_affine_node(void)
 		 */
 		kthread_fetch_affinity(kthread, affinity);
 		set_cpus_allowed_ptr(current, affinity);
-		mutex_unlock(&kthreads_hotplug_lock);
+		mutex_unlock(&kthread_affinity_lock);
 
 		free_cpumask_var(affinity);
 	}
@@ -873,16 +873,16 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
 		goto out;
 	}
 
-	mutex_lock(&kthreads_hotplug_lock);
+	mutex_lock(&kthread_affinity_lock);
 	cpumask_copy(kthread->preferred_affinity, mask);
-	WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
-	list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
 	kthread_fetch_affinity(kthread, affinity);
 
 	scoped_guard (raw_spinlock_irqsave, &p->pi_lock)
 		set_cpus_allowed_force(p, affinity);
 
-	mutex_unlock(&kthreads_hotplug_lock);
+	mutex_unlock(&kthread_affinity_lock);
 out:
 	free_cpumask_var(affinity);
 
@@ -903,9 +903,9 @@ static int kthreads_online_cpu(unsigned int cpu)
 	struct kthread *k;
 	int ret;
 
-	guard(mutex)(&kthreads_hotplug_lock);
+	guard(mutex)(&kthread_affinity_lock);
 
-	if (list_empty(&kthreads_hotplug))
+	if (list_empty(&kthread_affinity_list))
 		return 0;
 
 	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
@@ -913,7 +913,7 @@ static int kthreads_online_cpu(unsigned int cpu)
 
 	ret = 0;
 
-	list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
+	list_for_each_entry(k, &kthread_affinity_list, affinity_node) {
 		if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
 				 kthread_is_per_cpu(k->task))) {
 			ret = -EINVAL;
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (23 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 24/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 22:11   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 26/33] kthread: Include kthreadd to " Frederic Weisbecker
                   ` (7 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The managed affinity list currently contains only unbound kthreads that
have affinity preferences. Unbound kthreads that are globally affine by
default stay off the list because their affinity is automatically managed
by the scheduler (through the fallback housekeeping mask) and by cpuset.

However in order to preserve the preferred affinity of kthreads, cpuset
will delegate the isolated partition update propagation to the
housekeeping and kthread code.

Prepare for that by including all unbound kthreads in the managed
affinity list.
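
The purpose of the list is to let an affinity update walk all unbound
kthreads and reapply their preferred affinity, the same way
kthreads_online_cpu() does (sketch, mirroring the hunk below):

	guard(mutex)(&kthread_affinity_lock);
	list_for_each_entry(k, &kthread_affinity_list, affinity_node) {
		kthread_fetch_affinity(k, affinity);
		set_cpus_allowed_ptr(k->task, affinity);
	}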

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 70 ++++++++++++++++++++++++++++--------------------
 1 file changed, 41 insertions(+), 29 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index f1e4f1f35cae..51c0908d3d02 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -365,9 +365,10 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
 	if (kthread->preferred_affinity) {
 		pref = kthread->preferred_affinity;
 	} else {
-		if (WARN_ON_ONCE(kthread->node == NUMA_NO_NODE))
-			return;
-		pref = cpumask_of_node(kthread->node);
+		if (kthread->node == NUMA_NO_NODE)
+			pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
+		else
+			pref = cpumask_of_node(kthread->node);
 	}
 
 	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
@@ -380,32 +381,29 @@ static void kthread_affine_node(void)
 	struct kthread *kthread = to_kthread(current);
 	cpumask_var_t affinity;
 
-	WARN_ON_ONCE(kthread_is_per_cpu(current));
+	if (WARN_ON_ONCE(kthread_is_per_cpu(current)))
+		return;
 
-	if (kthread->node == NUMA_NO_NODE) {
-		housekeeping_affine(current, HK_TYPE_KTHREAD);
-	} else {
-		if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
-			WARN_ON_ONCE(1);
-			return;
-		}
-
-		mutex_lock(&kthread_affinity_lock);
-		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
-		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
-		/*
-		 * The node cpumask is racy when read from kthread() but:
-		 * - a racing CPU going down will either fail on the subsequent
-		 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
-		 *   afterwards by the scheduler.
-		 * - a racing CPU going up will be handled by kthreads_online_cpu()
-		 */
-		kthread_fetch_affinity(kthread, affinity);
-		set_cpus_allowed_ptr(current, affinity);
-		mutex_unlock(&kthread_affinity_lock);
-
-		free_cpumask_var(affinity);
+	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
+		WARN_ON_ONCE(1);
+		return;
 	}
+
+	mutex_lock(&kthread_affinity_lock);
+	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
+	/*
+	 * The node cpumask is racy when read from kthread() but:
+	 * - a racing CPU going down will either fail on the subsequent
+	 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
+	 *   afterwards by the scheduler.
+	 * - a racing CPU going up will be handled by kthreads_online_cpu()
+	 */
+	kthread_fetch_affinity(kthread, affinity);
+	set_cpus_allowed_ptr(current, affinity);
+	mutex_unlock(&kthread_affinity_lock);
+
+	free_cpumask_var(affinity);
 }
 
 static int kthread(void *_create)
@@ -919,8 +917,22 @@ static int kthreads_online_cpu(unsigned int cpu)
 			ret = -EINVAL;
 			continue;
 		}
-		kthread_fetch_affinity(k, affinity);
-		set_cpus_allowed_ptr(k->task, affinity);
+
+		/*
+		 * Unbound kthreads without preferred affinity are already affine
+		 * to housekeeping, whether those CPUs are online or not. So no need
+		 * to handle newly online CPUs for them.
+		 *
+		 * But kthreads with a preferred affinity or node are different:
+		 * if none of their preferred CPUs are online and part of
+		 * housekeeping at the same time, they must be affine to housekeeping.
+		 * But as soon as one of their preferred CPU becomes online, they must
+		 * be affine to them.
+		 */
+		if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+			kthread_fetch_affinity(k, affinity);
+			set_cpus_allowed_ptr(k->task, affinity);
+		}
 	}
 
 	free_cpumask_var(affinity);
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 26/33] kthread: Include kthreadd to the managed affinity list
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (24 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 22:13   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
                   ` (6 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The unbound kthreads affinity management performed by cpuset is going to
be imported into the kthread core code for consolidation purposes.

Treat kthreadd just like any other kthread.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 51c0908d3d02..85ccf5bb17c9 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -818,12 +818,13 @@ int kthreadd(void *unused)
 	/* Setup a clean context for our children to inherit. */
 	set_task_comm(tsk, comm);
 	ignore_signals(tsk);
-	set_cpus_allowed_ptr(tsk, housekeeping_cpumask(HK_TYPE_KTHREAD));
 	set_mems_allowed(node_states[N_MEMORY]);
 
 	current->flags |= PF_NOFREEZE;
 	cgroup_init_kthreadd();
 
+	kthread_affine_node();
+
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);
 		if (list_empty(&kthread_create_list))
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (25 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 26/33] kthread: Include kthreadd to " Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 22:16   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
                   ` (5 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Unbound kthreads want to run neither on nohz_full CPUs nor on domain
isolated CPUs. And since nohz_full implies domain isolation, checking
the latter is enough to verify both.

Therefore keep unbound kthreads off domain isolated CPUs.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 85ccf5bb17c9..968fa5868d21 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -362,18 +362,20 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
 {
 	const struct cpumask *pref;
 
+	guard(rcu)();
+
 	if (kthread->preferred_affinity) {
 		pref = kthread->preferred_affinity;
 	} else {
 		if (kthread->node == NUMA_NO_NODE)
-			pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
+			pref = housekeeping_cpumask(HK_TYPE_DOMAIN);
 		else
 			pref = cpumask_of_node(kthread->node);
 	}
 
-	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
+	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_DOMAIN));
 	if (cpumask_empty(cpumask))
-		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_KTHREAD));
+		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_DOMAIN));
 }
 
 static void kthread_affine_node(void)
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (26 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 23:08   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 29/33] sched/arm64: Move fallback task " Frederic Weisbecker
                   ` (4 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Tasks that have all their allowed CPUs offline don't want their affinity
to fall back on either nohz_full CPUs or domain isolated CPUs. And
since nohz_full implies domain isolation, checking the latter is enough
to verify both.

Therefore exclude domain isolated CPUs from the fallback task affinity.
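
For context, task_cpu_fallback_mask() is what select_fallback_rq() uses
as the last resort affinity once all other options are exhausted
(sketch):

	/* kernel/sched/core.c: select_fallback_rq() */
	do_set_cpus_allowed(p, task_cpu_fallback_mask(p));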

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/mmu_context.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index ac01dc4eb2ce..ed3dd0f3fe19 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -24,7 +24,7 @@ static inline void leave_mm(void) { }
 #ifndef task_cpu_possible_mask
 # define task_cpu_possible_mask(p)	cpu_possible_mask
 # define task_cpu_possible(cpu, p)	true
-# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_TICK)
+# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_DOMAIN)
 #else
 # define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
 #endif
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 29/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (27 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 23:46   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
                   ` (3 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

When none of the allowed CPUs of a task are online, it gets migrated
to the fallback cpumask, which is currently all the non-nohz_full CPUs.

However just like nohz_full CPUs, domain isolated CPUs don't want to be
disturbed by tasks that have lost their CPU affinities.

And since nohz_full relies on domain isolation to work correctly, the
housekeeping mask of domain isolated CPUs should always be a subset of
the housekeeping mask of nohz_full CPUs (there can be CPUs that are
domain isolated but not nohz_full, OTOH there shouldn't be nohz_full
CPUs that are not domain isolated):

	HK_TYPE_DOMAIN & HK_TYPE_KERNEL_NOISE == HK_TYPE_DOMAIN
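
For example (illustrative): on an 8-CPU system with isolcpus=domain,5-7
and nohz_full=7, housekeeping(HK_TYPE_DOMAIN) is 0-4 while
housekeeping(HK_TYPE_KERNEL_NOISE) is 0-6, and indeed:

	0-4 & 0-6 == 0-4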

Therefore use HK_TYPE_DOMAIN as the appropriate fallback target for
tasks. And since this cpumask can now be modified at runtime, make sure
that 32-bit capable CPUs on mismatched ARM64 systems can not be isolated
by cpusets.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 18 +++++++++++++++---
 include/linux/cpu.h            |  4 ++++
 kernel/cgroup/cpuset.c         | 17 ++++++++++++++---
 3 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef9..70b0e45e299a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1656,6 +1656,18 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
 	return feature_matches(val, entry);
 }
 
+/*
+ * CPUs supporting 32-bit EL0 can't be isolated because tasks may be
+ * arbitrarily affine to them, defeating the purpose of isolation.
+ */
+bool arch_isolated_cpus_can_update(struct cpumask *new_cpus)
+{
+	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
+		return !cpumask_intersects(cpu_32bit_el0_mask, new_cpus);
+	else
+		return true;
+}
+
 const struct cpumask *system_32bit_el0_cpumask(void)
 {
 	if (!system_supports_32bit_el0())
@@ -1669,7 +1681,7 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 
 const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
 {
-	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
+	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_DOMAIN));
 }
 
 static int __init parse_32bit_el0_param(char *str)
@@ -3987,8 +3999,8 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
 	bool cpu_32bit = false;
 
 	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-		if (!housekeeping_cpu(cpu, HK_TYPE_TICK))
-			pr_info("Treating adaptive-ticks CPU %u as 64-bit only\n", cpu);
+		if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN))
+			pr_info("Treating domain isolated CPU %u as 64-bit only\n", cpu);
 		else
 			cpu_32bit = true;
 	}
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 487b3bf2e1ea..0b48af25ab5c 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -229,4 +229,8 @@ static inline bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
 #define smt_mitigations SMT_MITIGATIONS_OFF
 #endif
 
+struct cpumask;
+
+bool arch_isolated_cpus_can_update(struct cpumask *new_cpus);
+
 #endif /* _LINUX_CPU_H_ */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index cd6119c02beb..1cc83a3c25f6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1408,14 +1408,22 @@ static void partition_xcpus_del(int old_prs, struct cpuset *parent,
 	cpumask_or(parent->effective_cpus, parent->effective_cpus, xcpus);
 }
 
+bool __weak arch_isolated_cpus_can_update(struct cpumask *new_cpus)
+{
+	return true;
+}
+
 /*
- * isolated_cpus_can_update - check for isolated & nohz_full conflicts
+ * isolated_cpus_can_update - check for conflicts against housekeeping and
+ *                            CPU capabilities.
  * @add_cpus: cpu mask for cpus that are going to be isolated
  * @del_cpus: cpu mask for cpus that are no longer isolated, can be NULL
  * Return: false if there is conflict, true otherwise
  *
- * If nohz_full is enabled and we have isolated CPUs, their combination must
- * still leave housekeeping CPUs.
+ * Check for conflicts:
+ * - If nohz_full is enabled and there are isolated CPUs, their combination must
+ *   still leave housekeeping CPUs.
+ * - The architecture must not have CPU capabilities incompatible with isolation
  *
  * TBD: Should consider merging this function into
  *      prstate_housekeeping_conflict().
@@ -1426,6 +1434,9 @@ static bool isolated_cpus_can_update(struct cpumask *add_cpus,
 	cpumask_var_t full_hk_cpus;
 	int res = true;
 
+	if (!arch_isolated_cpus_can_update(add_cpus))
+		return false;
+
 	if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
 		return true;
 
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (28 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 29/33] sched/arm64: Move fallback task " Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-26 23:59   ` Waiman Long
  2025-12-24 13:45 ` [PATCH 31/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
                   ` (2 subsequent siblings)
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

When cpuset isolated partitions get updated, unbound kthreads get
indiscriminately affined to all non-isolated CPUs, regardless of their
individual affinity preferences.

For example kswapd is a per-node kthread that prefers to be affine to
the node it serves. Whenever an isolated partition is created, updated
or deleted, kswapd's node affinity gets broken: even when some CPUs of
its node remain non-isolated, kswapd ends up affine to all non-isolated
CPUs globally instead of just those of its node.
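
For example (illustrative): assume node 1 spans CPUs 4-7 and an isolated
partition is created with CPUs 6-7. kswapd1 should remain affine to 4-5,
but the blind cpuset update would instead spread it over every
non-isolated CPU of the machine, losing the node preference.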

Fix this by letting the consolidated kthread affinity management code
perform the affinity update on behalf of cpuset.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/kthread.h  |  1 +
 kernel/cgroup/cpuset.c   |  5 ++---
 kernel/kthread.c         | 41 ++++++++++++++++++++++++++++++----------
 kernel/sched/isolation.c |  3 +++
 4 files changed, 37 insertions(+), 13 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8d27403888ce..c92c1149ee6e 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -100,6 +100,7 @@ void kthread_unpark(struct task_struct *k);
 void kthread_parkme(void);
 void kthread_exit(long result) __noreturn;
 void kthread_complete_and_exit(struct completion *, long) __noreturn;
+int kthreads_update_housekeeping(void);
 
 int kthreadd(void *unused);
 extern struct task_struct *kthreadd_task;
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 1cc83a3c25f6..c8cfaf5cd4a1 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1208,11 +1208,10 @@ void cpuset_update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
 
 		if (top_cs) {
 			/*
+			 * PF_KTHREAD tasks are handled by housekeeping.
 			 * PF_NO_SETAFFINITY tasks are ignored.
-			 * All per cpu kthreads should have PF_NO_SETAFFINITY
-			 * flag set, see kthread_set_per_cpu().
 			 */
-			if (task->flags & PF_NO_SETAFFINITY)
+			if (task->flags & (PF_KTHREAD | PF_NO_SETAFFINITY))
 				continue;
 			cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
 		} else {
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 968fa5868d21..03008154249c 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -891,14 +891,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
 }
 EXPORT_SYMBOL_GPL(kthread_affine_preferred);
 
-/*
- * Re-affine kthreads according to their preferences
- * and the newly online CPU. The CPU down part is handled
- * by select_fallback_rq() which default re-affines to
- * housekeepers from other nodes in case the preferred
- * affinity doesn't apply anymore.
- */
-static int kthreads_online_cpu(unsigned int cpu)
+static int kthreads_update_affinity(bool force)
 {
 	cpumask_var_t affinity;
 	struct kthread *k;
@@ -924,7 +917,8 @@ static int kthreads_online_cpu(unsigned int cpu)
 		/*
 		 * Unbound kthreads without preferred affinity are already affine
 		 * to housekeeping, whether those CPUs are online or not. So no need
-		 * to handle newly online CPUs for them.
+		 * to handle newly online CPUs for them. However housekeeping changes
+		 * have to be applied.
 		 *
 		 * But kthreads with a preferred affinity or node are different:
 		 * if none of their preferred CPUs are online and part of
@@ -932,7 +926,7 @@ static int kthreads_online_cpu(unsigned int cpu)
 		 * But as soon as one of their preferred CPU becomes online, they must
 		 * be affine to them.
 		 */
-		if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+		if (force || k->preferred_affinity || k->node != NUMA_NO_NODE) {
 			kthread_fetch_affinity(k, affinity);
 			set_cpus_allowed_ptr(k->task, affinity);
 		}
@@ -943,6 +937,33 @@ static int kthreads_online_cpu(unsigned int cpu)
 	return ret;
 }
 
+/**
+ * kthreads_update_housekeeping - Update kthreads affinity on cpuset change
+ *
+ * When cpuset changes a partition type to/from "isolated" or updates related
+ * cpumasks, propagate the housekeeping cpumask change to the kthreads'
+ * preferred affinity.
+ *
+ * Returns 0 if successful, -ENOMEM if the temporary mask couldn't
+ * be allocated, or -EINVAL in case of internal error.
+ */
+int kthreads_update_housekeeping(void)
+{
+	return kthreads_update_affinity(true);
+}
+
+/*
+ * Re-affine kthreads according to their preferences
+ * and the newly online CPU. The CPU down part is handled
+ * by select_fallback_rq() which default re-affines to
+ * housekeepers from other nodes in case the preferred
+ * affinity doesn't apply anymore.
+ */
+static int kthreads_online_cpu(unsigned int cpu)
+{
+	return kthreads_update_affinity(false);
+}
+
 static int kthreads_init(void)
 {
 	return cpuhp_setup_state(CPUHP_AP_KTHREADS_ONLINE, "kthreads:online",
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 84a257d05918..c499474866b8 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -157,6 +157,9 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
 	err = tmigr_isolated_exclude_cpumask(isol_mask);
 	WARN_ON_ONCE(err < 0);
 
+	err = kthreads_update_housekeeping();
+	WARN_ON_ONCE(err < 0);
+
 	kfree(old);
 
 	return err;
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 31/33] kthread: Comment on the purpose and placement of kthread_affine_node() call
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (29 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 32/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

It may not appear obvious why kthread_affine_node() is called after the
first wake-up rather than upon kthread creation completion.

The reason is that kthread_affine_node() applies a default affinity
behaviour that only takes place if no affinity preference has already
been passed by the kthread creation call site.
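
Concretely (an illustrative timeline, not code from this patch): the
creator calls kthread_create(), may then optionally call
kthread_bind[_mask]() or kthread_affine_preferred(), and finally
wake_up_process(). Only upon that first wake-up can kthread() know
whether a preference was set, hence whether the node default applies.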

Add a comment to clarify that.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 03008154249c..51f419139dea 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -453,6 +453,10 @@ static int kthread(void *_create)
 
 	self->started = 1;
 
+	/*
+	 * Apply default node affinity if no call to kthread_bind[_mask]() nor
+	 * kthread_affine_preferred() was issued before the first wake-up.
+	 */
 	if (!(current->flags & PF_NO_SETAFFINITY) && !self->preferred_affinity)
 		kthread_affine_node();
 
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 32/33] kthread: Document kthread_affine_preferred()
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (30 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 31/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-24 13:45 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
  32 siblings, 0 replies; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

The documentation of this new API was overlooked during its
introduction. Fill the gap.
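
For reference, a typical (illustrative) usage, with a hypothetical
example_fn payload and a preference for the CPUs of node 0:

	struct task_struct *t;

	t = kthread_create(example_fn, NULL, "example");
	if (!IS_ERR(t)) {
		kthread_affine_preferred(t, cpumask_of_node(0));
		wake_up_process(t);
	}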

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/kthread.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 51f419139dea..c50f4c0eabfe 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -856,6 +856,18 @@ int kthreadd(void *unused)
 	return 0;
 }
 
+/**
+ * kthread_affine_preferred - Define a kthread's preferred affinity
+ * @p: thread created by kthread_create().
+ * @mask: preferred mask of CPUs (might not be online, must be possible) for @p
+ *        to run on.
+ *
+ * Similar to kthread_bind_mask() except that the affinity is not a requirement
+ * but rather a preference that can be constrained by CPU isolation or CPU hotplug.
+ * Must be called before the first wakeup of the kthread.
+ *
+ * Returns 0 if the affinity has been applied.
+ */
 int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
 {
 	struct kthread *kthread = to_kthread(p);
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH 33/33] doc: Add housekeeping documentation
  2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
                   ` (31 preceding siblings ...)
  2025-12-24 13:45 ` [PATCH 32/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
@ 2025-12-24 13:45 ` Frederic Weisbecker
  2025-12-27  0:39   ` Waiman Long
  32 siblings, 1 reply; 58+ messages in thread
From: Frederic Weisbecker @ 2025-12-24 13:45 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton,
	Bjorn Helgaas, Catalin Marinas, Chen Ridong, Danilo Krummrich,
	David S . Miller, Eric Dumazet, Gabriele Monaco,
	Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 Documentation/core-api/housekeeping.rst | 111 ++++++++++++++++++++++++
 Documentation/core-api/index.rst        |   1 +
 2 files changed, 112 insertions(+)
 create mode 100644 Documentation/core-api/housekeeping.rst

diff --git a/Documentation/core-api/housekeeping.rst b/Documentation/core-api/housekeeping.rst
new file mode 100644
index 000000000000..e5417302774c
--- /dev/null
+++ b/Documentation/core-api/housekeeping.rst
@@ -0,0 +1,111 @@
+======================================
+Housekeeping
+======================================
+
+
+CPU isolation moves away kernel work that may otherwise run on any CPU.
+The purpose of its related features is to reduce the OS jitter that some
+extreme workloads can't tolerate, such as some DPDK use cases.
+
+The kernel work moved away by CPU isolation is commonly described as
+"housekeeping" because it includes ground work that performs cleanups,
+statistics maintenance and actions relying on them, memory release,
+various deferrals, etc.
+
+Sometimes housekeeping is just some unbound work (unbound workqueues,
+unbound timers, ...) that gets easily assigned to non-isolated CPUs.
+But sometimes housekeeping is tied to a specific CPU and requires
+elaborate tricks to be offloaded to non-isolated CPUs (RCU_NOCB, remote
+scheduler tick, etc...).
+
+Thus, a housekeeping CPU can be considered as the reverse of an isolated
+CPU: it is simply a CPU that can execute housekeeping work. There must
+be at least one online housekeeping CPU at any time. The CPUs that
+are not isolated are automatically assigned as housekeeping.
+
+Housekeeping is currently divided into four features described
+by the ``enum hk_type type``:
+
+1.	HK_TYPE_DOMAIN matches the work moved away by scheduler domain
+	isolation performed through ``isolcpus=domain`` boot parameter or
+	isolated cpuset partitions in cgroup v2. This includes scheduler
+	load balancing, unbound workqueues and timers.
+
+2.	HK_TYPE_KERNEL_NOISE matches the work moved away by tick isolation
+	performed through ``nohz_full=`` or ``isolcpus=nohz`` boot
+	parameters. This includes remote scheduler tick, vmstat and lockup
+	watchdog.
+
+3.	HK_TYPE_MANAGED_IRQ matches the IRQ handlers moved away by managed
+	IRQ isolation performed through ``isolcpus=managed_irq``.
+
+4.	HK_TYPE_DOMAIN_BOOT matches the work moved away by scheduler domain
+	isolation performed through ``isolcpus=domain`` only. It is similar
+	to HK_TYPE_DOMAIN except it ignores the isolation performed by
+	cpusets.
+
+
+Housekeeping cpumasks
+=================================
+
+Housekeeping cpumasks include the CPUs that can execute the work moved
+away by the matching isolation feature. These cpumasks are returned by
+the following function::
+
+	const struct cpumask *housekeeping_cpumask(enum hk_type type)
+
+By default, if neither ``nohz_full=``, nor ``isolcpus``, nor cpuset's
+isolated partitions are used, which covers most use cases, this function
+returns cpu_possible_mask.
+
+Otherwise the function returns the cpumask complement of the isolation
+feature. For example:
+
+With ``isolcpus=domain,7`` the following will return a mask with all possible
+CPUs except 7::
+
+	housekeeping_cpumask(HK_TYPE_DOMAIN)
+
+Similarly with ``nohz_full=5,6`` the following will return a mask with all
+possible CPUs except 5,6::
+
+	housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)
+
+
+Synchronization against cpusets
+=================================
+
+Cpuset can modify the HK_TYPE_DOMAIN housekeeping cpumask while creating,
+modifying or deleting an isolated partition.
+
+The users of the HK_TYPE_DOMAIN cpumask must then synchronize properly
+against cpuset in order to guarantee that:
+
+1.	The cpumask snapshot stays coherent.
+
+2.	No housekeeping work is queued on a newly made isolated CPU.
+
+3.	Pending housekeeping work that was queued to a non-isolated
+	CPU which just turned isolated through cpuset must be flushed
+	before the related created/modified isolated partition is made
+	available to userspace.
+
+This synchronization is maintained by an RCU-based scheme. The cpuset update
+side waits for an RCU grace period after updating the HK_TYPE_DOMAIN
+cpumask and before flushing pending works. On the read side, care must be
+taken to gather the housekeeping target election and the work enqueue within
+the same RCU read side critical section.
+
+A typical layout example would look like this on the update side
+(``housekeeping_update()``)::
+
+	rcu_assign_pointer(housekeeping_cpumasks[type], trial);
+	synchronize_rcu();
+	flush_workqueue(example_workqueue);
+
+And then on the read side::
+
+	rcu_read_lock();
+	cpu = housekeeping_any_cpu(HK_TYPE_DOMAIN);
+	queue_work_on(cpu, example_workqueue, work);
+	rcu_read_unlock();
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 5eb0fbbbc323..79fe7735692e 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -25,6 +25,7 @@ it.
    symbol-namespaces
    asm-annotations
    real-time/index
+   housekeeping
 
 Data structures and low-level utilities
 =======================================
-- 
2.51.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH 05/33] sched/isolation: Save boot defined domain flags
  2025-12-24 13:44 ` [PATCH 05/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
@ 2025-12-25 22:27   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-25 22:27 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:44 AM, Frederic Weisbecker wrote:
> HK_TYPE_DOMAIN will soon integrate not only boot defined isolcpus= CPUs
> but also cpuset isolated partitions.
>
> Housekeeping still needs a way to record what was initially passed
> to isolcpus= in order to keep these CPUs isolated after a cpuset
> isolated partition is modified or destroyed while containing some of
> them.
>
> Create a new HK_TYPE_DOMAIN_BOOT to keep track of those.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Reviewed-by: Phil Auld <pauld@redhat.com>
> ---
>   include/linux/sched/isolation.h | 4 ++++
>   kernel/sched/isolation.c        | 5 +++--
>   2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index d8501f4709b5..109a2149e21a 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -7,8 +7,12 @@
>   #include <linux/tick.h>
>   
>   enum hk_type {
> +	/* Revert of boot-time isolcpus= argument */
> +	HK_TYPE_DOMAIN_BOOT,
>   	HK_TYPE_DOMAIN,
> +	/* Revert of boot-time isolcpus=managed_irq argument */
>   	HK_TYPE_MANAGED_IRQ,
> +	/* Revert of boot-time nohz_full= or isolcpus=nohz arguments */
>   	HK_TYPE_KERNEL_NOISE,
>   	HK_TYPE_MAX,
>   

"Revert" is a verb. The term "Revert of" sound strange to me. I think 
using "Inverse of" will sound better.

Cheers,
Longman


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT
  2025-12-24 13:44 ` [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
@ 2025-12-25 22:31   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-25 22:31 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:44 AM, Frederic Weisbecker wrote:
> boot_hk_cpus is an ad-hoc copy of HK_TYPE_DOMAIN_BOOT. Remove it and use
> the official version.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Reviewed-by: Phil Auld <pauld@redhat.com>
> Reviewed-by: Chen Ridong <chenridong@huawei.com>
> ---
>   kernel/cgroup/cpuset.c | 22 +++++++---------------
>   1 file changed, 7 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 6e6eb09b8db6..3afa72f8d579 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -88,12 +88,6 @@ static cpumask_var_t	isolated_cpus;
>    */
>   static bool isolated_cpus_updating;
>   
> -/*
> - * Housekeeping (HK_TYPE_DOMAIN) CPUs at boot
> - */
> -static cpumask_var_t	boot_hk_cpus;
> -static bool		have_boot_isolcpus;
> -
>   /*
>    * A flag to force sched domain rebuild at the end of an operation.
>    * It can be set in
> @@ -1453,15 +1447,16 @@ static bool isolated_cpus_can_update(struct cpumask *add_cpus,
>    * @new_cpus: cpu mask
>    * Return: true if there is conflict, false otherwise
>    *
> - * CPUs outside of boot_hk_cpus, if defined, can only be used in an
> + * CPUs outside of HK_TYPE_DOMAIN_BOOT, if defined, can only be used in an
>    * isolated partition.
>    */
>   static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
>   {
> -	if (!have_boot_isolcpus)
> +	if (!housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
>   		return false;
>   
> -	if ((prstate != PRS_ISOLATED) && !cpumask_subset(new_cpus, boot_hk_cpus))
> +	if ((prstate != PRS_ISOLATED) &&
> +	    !cpumask_subset(new_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT)))
>   		return true;
>   
>   	return false;
> @@ -3892,12 +3887,9 @@ int __init cpuset_init(void)
>   
>   	BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
>   
> -	have_boot_isolcpus = housekeeping_enabled(HK_TYPE_DOMAIN);
> -	if (have_boot_isolcpus) {
> -		BUG_ON(!alloc_cpumask_var(&boot_hk_cpus, GFP_KERNEL));
> -		cpumask_copy(boot_hk_cpus, housekeeping_cpumask(HK_TYPE_DOMAIN));
> -		cpumask_andnot(isolated_cpus, cpu_possible_mask, boot_hk_cpus);
> -	}
> +	if (housekeeping_enabled(HK_TYPE_DOMAIN_BOOT))
> +		cpumask_andnot(isolated_cpus, cpu_possible_mask,
> +			       housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));
>   
>   	return 0;
>   }
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
  2025-12-24 13:45 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
@ 2025-12-26  2:24   ` Waiman Long
  2025-12-26  3:20     ` Waiman Long
  2025-12-26  8:08   ` Chen Ridong
  1 sibling, 1 reply; 58+ messages in thread
From: Waiman Long @ 2025-12-26  2:24 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Until now, HK_TYPE_DOMAIN used to only include boot defined isolated
> CPUs passed through isolcpus= boot option. Users interested in also
> knowing the runtime defined isolated CPUs through cpuset must use
> different APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
>
> There are many drawbacks to that approach:
>
> 1) Most interested subsystems want to know about all isolated CPUs, not
>    just those defined on boot time.
>
> 2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized with
>    concurrent cpuset changes.
>
> 3) Further cpuset modifications are not propagated to subsystems
>
> Solve 1) and 2) and centralize all isolated CPUs within the
> HK_TYPE_DOMAIN housekeeping cpumask.
>
> Subsystems can rely on RCU to synchronize against concurrent changes.
>
> The propagation mentioned in 3) will be handled in further patches.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/sched/isolation.h |  7 +++
>   kernel/cgroup/cpuset.c          |  3 ++
>   kernel/sched/isolation.c        | 76 ++++++++++++++++++++++++++++++---
>   kernel/sched/sched.h            |  1 +
>   4 files changed, 81 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index 109a2149e21a..6842a1ba4d13 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -9,6 +9,11 @@
>   enum hk_type {
>   	/* Revert of boot-time isolcpus= argument */
>   	HK_TYPE_DOMAIN_BOOT,
> +	/*
> +	 * Same as HK_TYPE_DOMAIN_BOOT but also includes the
> +	 * revert of cpuset isolated partitions. As such it
> +	 * is always a subset of HK_TYPE_DOMAIN_BOOT.
> +	 */
>   	HK_TYPE_DOMAIN,
>   	/* Revert of boot-time isolcpus=managed_irq argument */
>   	HK_TYPE_MANAGED_IRQ,
> @@ -35,6 +40,7 @@ extern const struct cpumask *housekeeping_cpumask(enum hk_type type);
>   extern bool housekeeping_enabled(enum hk_type type);
>   extern void housekeeping_affine(struct task_struct *t, enum hk_type type);
>   extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
> +extern int housekeeping_update(struct cpumask *isol_mask, enum hk_type type);
>   extern void __init housekeeping_init(void);
>   
>   #else
> @@ -62,6 +68,7 @@ static inline bool housekeeping_test_cpu(int cpu, enum hk_type type)
>   	return true;
>   }
>   
> +static inline int housekeeping_update(struct cpumask *isol_mask, enum hk_type type) { return 0; }
>   static inline void housekeeping_init(void) { }
>   #endif /* CONFIG_CPU_ISOLATION */
>   
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 5e2e3514c22e..e13e32491ebf 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1490,6 +1490,9 @@ static void update_isolation_cpumasks(void)
>   	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
>   	WARN_ON_ONCE(ret < 0);
>   
> +	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
> +	WARN_ON_ONCE(ret < 0);
> +
>   	isolated_cpus_updating = false;
>   }
>   
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 83be49ec2b06..a124f1119f2e 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -29,18 +29,48 @@ static struct housekeeping housekeeping;
>   
>   bool housekeeping_enabled(enum hk_type type)
>   {
> -	return !!(housekeeping.flags & BIT(type));
> +	return !!(READ_ONCE(housekeeping.flags) & BIT(type));
>   }
>   EXPORT_SYMBOL_GPL(housekeeping_enabled);
>   
> +static bool housekeeping_dereference_check(enum hk_type type)
> +{
> +	if (IS_ENABLED(CONFIG_LOCKDEP) && type == HK_TYPE_DOMAIN) {

To be more correct, we should use IS_ENABLED(CONFIG_PROVE_LOCKING) as 
this is the real kconfig that enables most of the lockdep checking. 
PROVE_LOCKING selects LOCKDEP but not vice versa. So for some weird 
configs that set LOCKDEP but not PROVE_LOCKING, it can cause compilation
problems.

Other than that, the rest looks good to me.

Cheers,
Longman


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
  2025-12-26  2:24   ` Waiman Long
@ 2025-12-26  3:20     ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26  3:20 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/25/25 9:24 PM, Waiman Long wrote:
> On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
>> Until now, HK_TYPE_DOMAIN used to only include boot defined isolated
>> CPUs passed through isolcpus= boot option. Users interested in also
>> knowing the runtime defined isolated CPUs through cpuset must use
>> different APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
>>
>> There are many drawbacks to that approach:
>>
>> 1) Most interested subsystems want to know about all isolated CPUs, not
>>    just those defined on boot time.
>>
>> 2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized 
>> with
>>    concurrent cpuset changes.
>>
>> 3) Further cpuset modifications are not propagated to subsystems
>>
>> Solve 1) and 2) and centralize all isolated CPUs within the
>> HK_TYPE_DOMAIN housekeeping cpumask.
>>
>> Subsystems can rely on RCU to synchronize against concurrent changes.
>>
>> The propagation mentioned in 3) will be handled in further patches.
>>
>> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>> ---
>>   include/linux/sched/isolation.h |  7 +++
>>   kernel/cgroup/cpuset.c          |  3 ++
>>   kernel/sched/isolation.c        | 76 ++++++++++++++++++++++++++++++---
>>   kernel/sched/sched.h            |  1 +
>>   4 files changed, 81 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/sched/isolation.h 
>> b/include/linux/sched/isolation.h
>> index 109a2149e21a..6842a1ba4d13 100644
>> --- a/include/linux/sched/isolation.h
>> +++ b/include/linux/sched/isolation.h
>> @@ -9,6 +9,11 @@
>>   enum hk_type {
>>       /* Revert of boot-time isolcpus= argument */
>>       HK_TYPE_DOMAIN_BOOT,
>> +    /*
>> +     * Same as HK_TYPE_DOMAIN_BOOT but also includes the
>> +     * revert of cpuset isolated partitions. As such it
>> +     * is always a subset of HK_TYPE_DOMAIN_BOOT.
>> +     */
>>       HK_TYPE_DOMAIN,
>>       /* Revert of boot-time isolcpus=managed_irq argument */
>>       HK_TYPE_MANAGED_IRQ,
>> @@ -35,6 +40,7 @@ extern const struct cpumask 
>> *housekeeping_cpumask(enum hk_type type);
>>   extern bool housekeeping_enabled(enum hk_type type);
>>   extern void housekeeping_affine(struct task_struct *t, enum hk_type 
>> type);
>>   extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
>> +extern int housekeeping_update(struct cpumask *isol_mask, enum 
>> hk_type type);
>>   extern void __init housekeeping_init(void);
>>     #else
>> @@ -62,6 +68,7 @@ static inline bool housekeeping_test_cpu(int cpu, 
>> enum hk_type type)
>>       return true;
>>   }
>>   +static inline int housekeeping_update(struct cpumask *isol_mask, 
>> enum hk_type type) { return 0; }
>>   static inline void housekeeping_init(void) { }
>>   #endif /* CONFIG_CPU_ISOLATION */
>>   diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 5e2e3514c22e..e13e32491ebf 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -1490,6 +1490,9 @@ static void update_isolation_cpumasks(void)
>>       ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
>>       WARN_ON_ONCE(ret < 0);
>>   +    ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
>> +    WARN_ON_ONCE(ret < 0);
>> +
>>       isolated_cpus_updating = false;
>>   }
>>   diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
>> index 83be49ec2b06..a124f1119f2e 100644
>> --- a/kernel/sched/isolation.c
>> +++ b/kernel/sched/isolation.c
>> @@ -29,18 +29,48 @@ static struct housekeeping housekeeping;
>>     bool housekeeping_enabled(enum hk_type type)
>>   {
>> -    return !!(housekeeping.flags & BIT(type));
>> +    return !!(READ_ONCE(housekeeping.flags) & BIT(type));
>>   }
>>   EXPORT_SYMBOL_GPL(housekeeping_enabled);
>>   +static bool housekeeping_dereference_check(enum hk_type type)
>> +{
>> +    if (IS_ENABLED(CONFIG_LOCKDEP) && type == HK_TYPE_DOMAIN) {
>
> To be more correct, we should use IS_ENABLED(CONFIG_PROVE_LOCKING) as 
> this is the real kconfig that enables most of the lockdep checking. 
> PROVE_LOCKING selects LOCKDEP but not vice versa. So for some weird 
> configs that set LOCKDEP but not PROVE_LOCKING, it can cause 
> compilation problem. 

I think I got confused too. The various lockdep* helpers should be
defined when CONFIG_LOCKDEP is enabled even if they may not do anything 
useful. So using IS_ENABLED(CONFIG_LOCKDEP) should be fine. Sorry for 
the noise.

Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset
  2025-12-24 13:45 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
  2025-12-26  2:24   ` Waiman Long
@ 2025-12-26  8:08   ` Chen Ridong
  1 sibling, 0 replies; 58+ messages in thread
From: Chen Ridong @ 2025-12-26  8:08 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev



On 2025/12/24 21:45, Frederic Weisbecker wrote:
> Until now, HK_TYPE_DOMAIN used to only include boot defined isolated
> CPUs passed through isolcpus= boot option. Users interested in also
> knowing the runtime defined isolated CPUs through cpuset must use
> different APIs: cpuset_cpu_is_isolated(), cpu_is_isolated(), etc...
> 
> There are many drawbacks to that approach:
> 
> 1) Most interested subsystems want to know about all isolated CPUs, not
>   just those defined on boot time.
> 
> 2) cpuset_cpu_is_isolated() / cpu_is_isolated() are not synchronized with
>   concurrent cpuset changes.
> 
> 3) Further cpuset modifications are not propagated to subsystems
> 
> Solve 1) and 2) and centralize all isolated CPUs within the
> HK_TYPE_DOMAIN housekeeping cpumask.
> 
> Subsystems can rely on RCU to synchronize against concurrent changes.
> 
> The propagation mentioned in 3) will be handled in further patches.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  include/linux/sched/isolation.h |  7 +++
>  kernel/cgroup/cpuset.c          |  3 ++
>  kernel/sched/isolation.c        | 76 ++++++++++++++++++++++++++++++---
>  kernel/sched/sched.h            |  1 +
>  4 files changed, 81 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index 109a2149e21a..6842a1ba4d13 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -9,6 +9,11 @@
>  enum hk_type {
>  	/* Revert of boot-time isolcpus= argument */
>  	HK_TYPE_DOMAIN_BOOT,
> +	/*
> +	 * Same as HK_TYPE_DOMAIN_BOOT but also includes the
> +	 * revert of cpuset isolated partitions. As such it
> +	 * is always a subset of HK_TYPE_DOMAIN_BOOT.
> +	 */
>  	HK_TYPE_DOMAIN,
>  	/* Revert of boot-time isolcpus=managed_irq argument */
>  	HK_TYPE_MANAGED_IRQ,
> @@ -35,6 +40,7 @@ extern const struct cpumask *housekeeping_cpumask(enum hk_type type);
>  extern bool housekeeping_enabled(enum hk_type type);
>  extern void housekeeping_affine(struct task_struct *t, enum hk_type type);
>  extern bool housekeeping_test_cpu(int cpu, enum hk_type type);
> +extern int housekeeping_update(struct cpumask *isol_mask, enum hk_type type);
>  extern void __init housekeeping_init(void);
>  
>  #else
> @@ -62,6 +68,7 @@ static inline bool housekeeping_test_cpu(int cpu, enum hk_type type)
>  	return true;
>  }
>  
> +static inline int housekeeping_update(struct cpumask *isol_mask, enum hk_type type) { return 0; }
>  static inline void housekeeping_init(void) { }
>  #endif /* CONFIG_CPU_ISOLATION */
>  
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 5e2e3514c22e..e13e32491ebf 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1490,6 +1490,9 @@ static void update_isolation_cpumasks(void)
>  	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
>  	WARN_ON_ONCE(ret < 0);
>  
> +	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
> +	WARN_ON_ONCE(ret < 0);
> +
>  	isolated_cpus_updating = false;
>  }
>  
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 83be49ec2b06..a124f1119f2e 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -29,18 +29,48 @@ static struct housekeeping housekeeping;
>  
>  bool housekeeping_enabled(enum hk_type type)
>  {
> -	return !!(housekeeping.flags & BIT(type));
> +	return !!(READ_ONCE(housekeeping.flags) & BIT(type));
>  }
>  EXPORT_SYMBOL_GPL(housekeeping_enabled);
>  
> +static bool housekeeping_dereference_check(enum hk_type type)
> +{
> +	if (IS_ENABLED(CONFIG_LOCKDEP) && type == HK_TYPE_DOMAIN) {
> +		/* Cpuset isn't even writable yet? */
> +		if (system_state <= SYSTEM_SCHEDULING)
> +			return true;
> +
> +		/* CPU hotplug write locked, so cpuset partition can't be overwritten */
> +		if (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_write_held())
> +			return true;
> +
> +		/* Cpuset lock held, partitions not writable */
> +		if (IS_ENABLED(CONFIG_CPUSETS) && lockdep_is_cpuset_held())
> +			return true;
> +
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static inline struct cpumask *housekeeping_cpumask_dereference(enum hk_type type)
> +{
> +	return rcu_dereference_all_check(housekeeping.cpumasks[type],
> +					 housekeeping_dereference_check(type));
> +}
> +
>  const struct cpumask *housekeeping_cpumask(enum hk_type type)
>  {
> +	const struct cpumask *mask = NULL;
> +
>  	if (static_branch_unlikely(&housekeeping_overridden)) {
> -		if (housekeeping.flags & BIT(type)) {
> -			return rcu_dereference_check(housekeeping.cpumasks[type], 1);
> -		}
> +		if (READ_ONCE(housekeeping.flags) & BIT(type))
> +			mask = housekeeping_cpumask_dereference(type);
>  	}
> -	return cpu_possible_mask;
> +	if (!mask)
> +		mask = cpu_possible_mask;
> +	return mask;
>  }
>  EXPORT_SYMBOL_GPL(housekeeping_cpumask);
>  
> @@ -80,12 +110,46 @@ EXPORT_SYMBOL_GPL(housekeeping_affine);
>  
>  bool housekeeping_test_cpu(int cpu, enum hk_type type)
>  {
> -	if (static_branch_unlikely(&housekeeping_overridden) && housekeeping.flags & BIT(type))
> +	if (static_branch_unlikely(&housekeeping_overridden) &&
> +	    READ_ONCE(housekeeping.flags) & BIT(type))
>  		return cpumask_test_cpu(cpu, housekeeping_cpumask(type));
>  	return true;
>  }
>  EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
>  
> +int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
> +{
> +	struct cpumask *trial, *old = NULL;
> +
> +	if (type != HK_TYPE_DOMAIN)
> +		return -ENOTSUPP;
> +

Nit:

The current if statement indicates that we only support modifying the cpumask for HK_TYPE_DOMAIN,
which makes the type argument seem unnecessary. This seems to be designed for better scalability.
However, when a new type needs to be supported in the future, this statement would have to be
removed. Also, the use of cpumask_andnot below is not a general operation.

Anyway, looks good to me.

> +	trial = kmalloc(cpumask_size(), GFP_KERNEL);
> +	if (!trial)
> +		return -ENOMEM;
> +
> +	cpumask_andnot(trial, housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT), isol_mask);
> +	if (!cpumask_intersects(trial, cpu_online_mask)) {
> +		kfree(trial);
> +		return -EINVAL;
> +	}
> +
> +	if (!housekeeping.flags)
> +		static_branch_enable(&housekeeping_overridden);
> +
> +	if (housekeeping.flags & BIT(type))
> +		old = housekeeping_cpumask_dereference(type);
> +	else
> +		WRITE_ONCE(housekeeping.flags, housekeeping.flags | BIT(type));
> +	rcu_assign_pointer(housekeeping.cpumasks[type], trial);
> +
> +	synchronize_rcu();
> +
> +	kfree(old);
> +
> +	return 0;
> +}
> +
>  void __init housekeeping_init(void)
>  {
>  	enum hk_type type;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 475bdab3b8db..653e898a996a 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -30,6 +30,7 @@
>  #include <linux/context_tracking.h>
>  #include <linux/cpufreq.h>
>  #include <linux/cpumask_api.h>
> +#include <linux/cpuset.h>
>  #include <linux/ctype.h>
>  #include <linux/file.h>
>  #include <linux/fs_api.h>

Reviewed-by: Chen Ridong <chenridong@huawei.com>

-- 
Best regards,
Ridong


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 17/33] PCI: Flush PCI probe workqueue on cpuset isolated partition change
  2025-12-24 13:45 ` [PATCH 17/33] PCI: Flush PCI probe workqueue " Frederic Weisbecker
@ 2025-12-26  8:48   ` Chen Ridong
  0 siblings, 0 replies; 58+ messages in thread
From: Chen Ridong @ 2025-12-26  8:48 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev



On 2025/12/24 21:45, Frederic Weisbecker wrote:
> The HK_TYPE_DOMAIN housekeeping cpumask is now modifiable at runtime. In
> order to synchronize against PCI probe works and make sure that no
> asynchronous probing is still pending or executing on a newly isolated
> CPU, the housekeeping subsystem must flush the PCI probe works.
> 
> However the PCI probe works can't be flushed easily since they are
> queued to the main per-CPU workqueue pool.
> 
> Solve this with creating a PCI probe-specific pool and provide and use
> the appropriate flushing API.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  drivers/pci/pci-driver.c | 17 ++++++++++++++++-
>  include/linux/pci.h      |  3 +++
>  kernel/sched/isolation.c |  2 ++
>  3 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index 786d6ce40999..d87f781e5ce9 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -337,6 +337,8 @@ static int local_pci_probe(struct drv_dev_and_id *ddi)
>  	return 0;
>  }
>  
> +static struct workqueue_struct *pci_probe_wq;
> +
>  struct pci_probe_arg {
>  	struct drv_dev_and_id *ddi;
>  	struct work_struct work;
> @@ -407,7 +409,11 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
>  		cpu = cpumask_any_and(cpumask_of_node(node),
>  				      wq_domain_mask);
>  		if (cpu < nr_cpu_ids) {
> -			schedule_work_on(cpu, &arg.work);
> +			struct workqueue_struct *wq = pci_probe_wq;
> +
> +			if (WARN_ON_ONCE(!wq))
> +				wq = system_percpu_wq;
> +			queue_work_on(cpu, wq, &arg.work);
>  			rcu_read_unlock();
>  			flush_work(&arg.work);
>  			error = arg.ret;
> @@ -425,6 +431,11 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
>  	return error;
>  }
>  
> +void pci_probe_flush_workqueue(void)
> +{
> +	flush_workqueue(pci_probe_wq);
> +}
> +
>  /**
>   * __pci_device_probe - check if a driver wants to claim a specific PCI device
>   * @drv: driver to call to check if it wants the PCI device
> @@ -1762,6 +1773,10 @@ static int __init pci_driver_init(void)
>  {
>  	int ret;
>  
> +	pci_probe_wq = alloc_workqueue("sync_wq", WQ_PERCPU, 0);
> +	if (!pci_probe_wq)
> +		return -ENOMEM;
> +
>  	ret = bus_register(&pci_bus_type);
>  	if (ret)
>  		return ret;
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 864775651c6f..f14f467e50de 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1206,6 +1206,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
>  				    struct pci_ops *ops, void *sysdata,
>  				    struct list_head *resources);
>  int pci_host_probe(struct pci_host_bridge *bridge);
> +void pci_probe_flush_workqueue(void);
>  int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
>  int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
>  void pci_bus_release_busn_res(struct pci_bus *b);
> @@ -2079,6 +2080,8 @@ static inline int pci_has_flag(int flag) { return 0; }
>  _PCI_NOP_ALL(read, *)
>  _PCI_NOP_ALL(write,)
>  
> +static inline void pci_probe_flush_workqueue(void) { }
> +
>  static inline struct pci_dev *pci_get_device(unsigned int vendor,
>  					     unsigned int device,
>  					     struct pci_dev *from)
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 8aac3c9f7c7f..7dbe037ea8df 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -8,6 +8,7 @@
>   *
>   */
>  #include <linux/sched/isolation.h>
> +#include <linux/pci.h>
>  #include "sched.h"
>  
>  enum hk_flags {
> @@ -145,6 +146,7 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
>  
>  	synchronize_rcu();
>  
> +	pci_probe_flush_workqueue();
>  	mem_cgroup_flush_workqueue();
>  	vmstat_flush_workqueue();
>  

I am concerned that this flush work may slow down writes to the cpuset interface. I am not sure how
significant the impact will be.

I'm concerned about potential deadlock risks. While preliminary investigation hasn't uncovered any
issues, we must ensure that the cpu write lock is not held during the work
(writing the cpuset interface needs the cpu read lock).

-- 
Best regards,
Ridong


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
  2025-12-24 13:45 ` [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
@ 2025-12-26 20:31   ` Waiman Long
  2025-12-27  0:18   ` Tejun Heo
  1 sibling, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 20:31 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Until now, cpuset would propagate isolated partition changes to
> workqueues so that unbound workers get properly reaffined.
>
> Since housekeeping now centralizes, synchronize and propagates isolation
> cpumask changes, perform the work from that subsystem for consolidation
> and consistency purposes.
>
> For simplification purpose, the target function is adapted to take the
> new housekeeping mask instead of the isolated mask.
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/workqueue.h |  2 +-
>   init/Kconfig              |  1 +
>   kernel/cgroup/cpuset.c    |  9 +++------
>   kernel/sched/isolation.c  |  4 +++-
>   kernel/workqueue.c        | 17 ++++++++++-------
>   5 files changed, 18 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> index dabc351cc127..a4749f56398f 100644
> --- a/include/linux/workqueue.h
> +++ b/include/linux/workqueue.h
> @@ -588,7 +588,7 @@ struct workqueue_attrs *alloc_workqueue_attrs_noprof(void);
>   void free_workqueue_attrs(struct workqueue_attrs *attrs);
>   int apply_workqueue_attrs(struct workqueue_struct *wq,
>   			  const struct workqueue_attrs *attrs);
> -extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);
> +extern int workqueue_unbound_housekeeping_update(const struct cpumask *hk);
>   
>   extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
>   			struct work_struct *work);
> diff --git a/init/Kconfig b/init/Kconfig
> index fa79feb8fe57..518830fb812f 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1254,6 +1254,7 @@ config CPUSETS
>   	bool "Cpuset controller"
>   	depends on SMP
>   	select UNION_FIND
> +	select CPU_ISOLATION
>   	help
>   	  This option will let you create and manage CPUSETs which
>   	  allow dynamically partitioning a system into sets of CPUs and
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index e13e32491ebf..a492d23dd622 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1484,15 +1484,12 @@ static void update_isolation_cpumasks(void)
>   
>   	lockdep_assert_cpus_held();
>   
> -	ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
> -	WARN_ON_ONCE(ret < 0);
> -
> -	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
> -	WARN_ON_ONCE(ret < 0);
> -
>   	ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
>   	WARN_ON_ONCE(ret < 0);
>   
> +	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
> +	WARN_ON_ONCE(ret < 0);
> +
>   	isolated_cpus_updating = false;
>   }
>   
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 7dbe037ea8df..d224bca299ed 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -121,6 +121,7 @@ EXPORT_SYMBOL_GPL(housekeeping_test_cpu);
>   int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
>   {
>   	struct cpumask *trial, *old = NULL;
> +	int err;
>   
>   	if (type != HK_TYPE_DOMAIN)
>   		return -ENOTSUPP;
> @@ -149,10 +150,11 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
>   	pci_probe_flush_workqueue();
>   	mem_cgroup_flush_workqueue();
>   	vmstat_flush_workqueue();
> +	err = workqueue_unbound_housekeeping_update(housekeeping_cpumask(type));
>   
>   	kfree(old);
>   
> -	return 0;
> +	return err;
>   }
>   
>   void __init housekeeping_init(void)
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 253311af47c6..eb5660013222 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -6959,13 +6959,16 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
>   }
>   
>   /**
> - * workqueue_unbound_exclude_cpumask - Exclude given CPUs from unbound cpumask
> - * @exclude_cpumask: the cpumask to be excluded from wq_unbound_cpumask
> + * workqueue_unbound_housekeeping_update - Propagate housekeeping cpumask update
> + * @hk: the new housekeeping cpumask
>    *
> - * This function can be called from cpuset code to provide a set of isolated
> - * CPUs that should be excluded from wq_unbound_cpumask.
> + * Update the unbound workqueue cpumask on top of the new housekeeping cpumask such
> + * that the effective unbound affinity is the intersection of the new housekeeping
> + * with the requested affinity set via nohz_full=/isolcpus= or sysfs.
> + *
> + * Return: 0 on success and -errno on failure.
>    */
> -int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
> +int workqueue_unbound_housekeeping_update(const struct cpumask *hk)
>   {
>   	cpumask_var_t cpumask;
>   	int ret = 0;
> @@ -6981,14 +6984,14 @@ int workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)
>   	 * (HK_TYPE_WQ ∩ HK_TYPE_DOMAIN) house keeping mask and rewritten
>   	 * by any subsequent write to workqueue/cpumask sysfs file.
>   	 */
> -	if (!cpumask_andnot(cpumask, wq_requested_unbound_cpumask, exclude_cpumask))
> +	if (!cpumask_and(cpumask, wq_requested_unbound_cpumask, hk))
>   		cpumask_copy(cpumask, wq_requested_unbound_cpumask);
>   	if (!cpumask_equal(cpumask, wq_unbound_cpumask))
>   		ret = workqueue_apply_unbound_cpumask(cpumask);
>   
>   	/* Save the current isolated cpumask & export it via sysfs */
>   	if (!ret)
> -		cpumask_copy(wq_isolated_cpumask, exclude_cpumask);
> +		cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask, hk);
>   
>   	mutex_unlock(&wq_pool_mutex);
>   	free_cpumask_var(cpumask);
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread
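
A minimal userspace sketch of the cpumask arithmetic above, with plain
uint64_t bitmasks standing in for struct cpumask and all names hypothetical,
models the intersection-with-fallback that
workqueue_unbound_housekeeping_update() performs:

/* Userspace model; bitmasks stand in for cpumasks on a hypothetical 64-CPU box. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t mask_t;	/* one bit per CPU */

/* Mirrors the cpumask_and()-with-fallback logic: intersect the requested
 * unbound mask with the housekeeping mask; if nothing survives, keep the
 * requested mask as-is, just like the quoted kernel code. */
static mask_t unbound_effective(mask_t requested, mask_t hk)
{
	mask_t m = requested & hk;
	return m ? m : requested;
}

int main(void)
{
	mask_t possible  = 0xff;	/* CPUs 0-7 */
	mask_t requested = 0xf0;	/* sysfs-requested unbound CPUs 4-7 */
	mask_t hk        = 0x0f;	/* CPUs 0-3 housekeeping, 4-7 isolated */

	/* Empty intersection: fall back to the requested mask. */
	assert(unbound_effective(requested, hk) == requested);

	/* wq_isolated_cpumask becomes possible & ~hk after the change. */
	printf("isolated: %#llx\n", (unsigned long long)(possible & ~hk));
	return 0;
}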

* Re: [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test
  2025-12-24 13:45 ` [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test Frederic Weisbecker
@ 2025-12-26 20:45   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 20:45 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Cpuset isolated partitions are now included in HK_TYPE_DOMAIN. Testing
> whether a CPU is part of an isolated partition alone is now useless.
>
> Remove the superfluous test.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   kernel/time/timer_migration.c | 5 ++---
>   1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index 3879575a4975..6da9cd562b20 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -466,9 +466,8 @@ static inline bool tmigr_is_isolated(int cpu)
>   {
>   	if (!static_branch_unlikely(&tmigr_exclude_isolated))
>   		return false;
> -	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
> -		cpuset_cpu_is_isolated(cpu)) &&
> -	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
> +	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) &&
> +		housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE));
>   }
>   
>   /*
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread
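
The simplification relies on the new invariant that a cpuset-isolated CPU is
never HK_TYPE_DOMAIN housekeeping. A tiny standalone C check, a model rather
than kernel code, confirms that the old and new predicates agree whenever the
invariant holds:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* Invariant after the series: cpuset-isolated implies !hk_domain. */
	for (int hk_domain = 0; hk_domain <= 1; hk_domain++) {
		for (int isolated = 0; isolated <= 1; isolated++) {
			for (int hk_noise = 0; hk_noise <= 1; hk_noise++) {
				if (isolated && hk_domain)
					continue; /* violates the invariant */
				bool old = (!hk_domain || isolated) && hk_noise;
				bool new = !hk_domain && hk_noise;
				assert(old == new);
			}
		}
	}
	puts("old and new tmigr_is_isolated() predicates agree");
	return 0;
}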

* Re: [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated()
  2025-12-24 13:45 ` [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
@ 2025-12-26 20:48   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 20:48 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> The set of cpuset isolated CPUs is now included in HK_TYPE_DOMAIN
> housekeeping cpumask. There is no use case left interested in just
> checking what is isolated by cpuset and not by the isolcpus= kernel
> boot parameter.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/cpuset.h          |  6 ------
>   include/linux/sched/isolation.h |  4 +---
>   kernel/cgroup/cpuset.c          | 12 ------------
>   3 files changed, 1 insertion(+), 21 deletions(-)
>
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 1c49ffd2ca9b..a4aa2f1767d0 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -79,7 +79,6 @@ extern void cpuset_unlock(void);
>   extern void cpuset_cpus_allowed_locked(struct task_struct *p, struct cpumask *mask);
>   extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
>   extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
> -extern bool cpuset_cpu_is_isolated(int cpu);
>   extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
>   #define cpuset_current_mems_allowed (current->mems_allowed)
>   void cpuset_init_current_mems_allowed(void);
> @@ -215,11 +214,6 @@ static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
>   	return false;
>   }
>   
> -static inline bool cpuset_cpu_is_isolated(int cpu)
> -{
> -	return false;
> -}
> -
>   static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
>   {
>   	return node_possible_map;
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index 6842a1ba4d13..19905adbb705 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -2,7 +2,6 @@
>   #define _LINUX_SCHED_ISOLATION_H
>   
>   #include <linux/cpumask.h>
> -#include <linux/cpuset.h>
>   #include <linux/init.h>
>   #include <linux/tick.h>
>   
> @@ -84,8 +83,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
>   static inline bool cpu_is_isolated(int cpu)
>   {
>   	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
> -	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK) ||
> -	       cpuset_cpu_is_isolated(cpu);
> +	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
>   }
>   
>   #endif /* _LINUX_SCHED_ISOLATION_H */
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 25ac6c98113c..cd6119c02beb 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -29,7 +29,6 @@
>   #include <linux/mempolicy.h>
>   #include <linux/mm.h>
>   #include <linux/memory.h>
> -#include <linux/export.h>
>   #include <linux/rcupdate.h>
>   #include <linux/sched.h>
>   #include <linux/sched/deadline.h>
> @@ -1490,17 +1489,6 @@ static void update_isolation_cpumasks(void)
>   	isolated_cpus_updating = false;
>   }
>   
> -/**
> - * cpuset_cpu_is_isolated - Check if the given CPU is isolated
> - * @cpu: the CPU number to be checked
> - * Return: true if CPU is used in an isolated partition, false otherwise
> - */
> -bool cpuset_cpu_is_isolated(int cpu)
> -{
> -	return cpumask_test_cpu(cpu, isolated_cpus);
> -}
> -EXPORT_SYMBOL_GPL(cpuset_cpu_is_isolated);
> -
>   /**
>    * rm_siblings_excl_cpus - Remove exclusive CPUs that are used by sibling cpusets
>    * @parent: Parent cpuset containing all siblings
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated()
  2025-12-24 13:45 ` [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
@ 2025-12-26 21:26   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 21:26 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> It doesn't make sense to use nohz_full without also isolating the
> related CPUs from the domain topology, either through the use of
> isolcpus= or cpuset isolated partitions.
>
> And now HK_TYPE_DOMAIN includes all kinds of domain isolated CPUs.
>
> This means that the set of CPUs isolated by HK_TYPE_KERNEL_NOISE (of
> which HK_TYPE_TICK is only an alias) should always be a subset of the
> set isolated by HK_TYPE_DOMAIN.
>
> Therefore if a CPU is not HK_TYPE_DOMAIN isolated, it can't be
> HK_TYPE_KERNEL_NOISE isolated either. Testing the former is then enough.
>
> Simplify cpu_is_isolated() accordingly.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/sched/isolation.h | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
> index 19905adbb705..cbb1d30f699a 100644
> --- a/include/linux/sched/isolation.h
> +++ b/include/linux/sched/isolation.h
> @@ -82,8 +82,7 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
>   
>   static inline bool cpu_is_isolated(int cpu)
>   {
> -	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
> -	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
> +	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN);
>   }
>   
>   #endif /* _LINUX_SCHED_ISOLATION_H */
Acked-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 24/33] kthread: Refine naming of affinity related fields
  2025-12-24 13:45 ` [PATCH 24/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
@ 2025-12-26 21:37   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 21:37 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> The kthreads preferred affinity related fields use "hotplug" as the base
> of their naming because the affinity management was initially deemed to
> deal with CPU hotplug.
>
> The scope of this role is going to broaden now and also deal with
> cpuset isolated partition updates.
>
> Switch the naming accordingly.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   kernel/kthread.c | 38 +++++++++++++++++++-------------------
>   1 file changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 99a3808d086f..f1e4f1f35cae 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -35,8 +35,8 @@ static DEFINE_SPINLOCK(kthread_create_lock);
>   static LIST_HEAD(kthread_create_list);
>   struct task_struct *kthreadd_task;
>   
> -static LIST_HEAD(kthreads_hotplug);
> -static DEFINE_MUTEX(kthreads_hotplug_lock);
> +static LIST_HEAD(kthread_affinity_list);
> +static DEFINE_MUTEX(kthread_affinity_lock);
>   
>   struct kthread_create_info
>   {
> @@ -69,7 +69,7 @@ struct kthread {
>   	/* To store the full name if task comm is truncated. */
>   	char *full_name;
>   	struct task_struct *task;
> -	struct list_head hotplug_node;
> +	struct list_head affinity_node;
>   	struct cpumask *preferred_affinity;
>   };
>   
> @@ -128,7 +128,7 @@ bool set_kthread_struct(struct task_struct *p)
>   
>   	init_completion(&kthread->exited);
>   	init_completion(&kthread->parked);
> -	INIT_LIST_HEAD(&kthread->hotplug_node);
> +	INIT_LIST_HEAD(&kthread->affinity_node);
>   	p->vfork_done = &kthread->exited;
>   
>   	kthread->task = p;
> @@ -323,10 +323,10 @@ void __noreturn kthread_exit(long result)
>   {
>   	struct kthread *kthread = to_kthread(current);
>   	kthread->result = result;
> -	if (!list_empty(&kthread->hotplug_node)) {
> -		mutex_lock(&kthreads_hotplug_lock);
> -		list_del(&kthread->hotplug_node);
> -		mutex_unlock(&kthreads_hotplug_lock);
> +	if (!list_empty(&kthread->affinity_node)) {
> +		mutex_lock(&kthread_affinity_lock);
> +		list_del(&kthread->affinity_node);
> +		mutex_unlock(&kthread_affinity_lock);
>   
>   		if (kthread->preferred_affinity) {
>   			kfree(kthread->preferred_affinity);
> @@ -390,9 +390,9 @@ static void kthread_affine_node(void)
>   			return;
>   		}
>   
> -		mutex_lock(&kthreads_hotplug_lock);
> -		WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
> -		list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
> +		mutex_lock(&kthread_affinity_lock);
> +		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
> +		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
>   		/*
>   		 * The node cpumask is racy when read from kthread() but:
>   		 * - a racing CPU going down will either fail on the subsequent
> @@ -402,7 +402,7 @@ static void kthread_affine_node(void)
>   		 */
>   		kthread_fetch_affinity(kthread, affinity);
>   		set_cpus_allowed_ptr(current, affinity);
> -		mutex_unlock(&kthreads_hotplug_lock);
> +		mutex_unlock(&kthread_affinity_lock);
>   
>   		free_cpumask_var(affinity);
>   	}
> @@ -873,16 +873,16 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
>   		goto out;
>   	}
>   
> -	mutex_lock(&kthreads_hotplug_lock);
> +	mutex_lock(&kthread_affinity_lock);
>   	cpumask_copy(kthread->preferred_affinity, mask);
> -	WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
> -	list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
> +	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
> +	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
>   	kthread_fetch_affinity(kthread, affinity);
>   
>   	scoped_guard (raw_spinlock_irqsave, &p->pi_lock)
>   		set_cpus_allowed_force(p, affinity);
>   
> -	mutex_unlock(&kthreads_hotplug_lock);
> +	mutex_unlock(&kthread_affinity_lock);
>   out:
>   	free_cpumask_var(affinity);
>   
> @@ -903,9 +903,9 @@ static int kthreads_online_cpu(unsigned int cpu)
>   	struct kthread *k;
>   	int ret;
>   
> -	guard(mutex)(&kthreads_hotplug_lock);
> +	guard(mutex)(&kthread_affinity_lock);
>   
> -	if (list_empty(&kthreads_hotplug))
> +	if (list_empty(&kthread_affinity_list))
>   		return 0;
>   
>   	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
> @@ -913,7 +913,7 @@ static int kthreads_online_cpu(unsigned int cpu)
>   
>   	ret = 0;
>   
> -	list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
> +	list_for_each_entry(k, &kthread_affinity_list, affinity_node) {
>   		if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
>   				 kthread_is_per_cpu(k->task))) {
>   			ret = -EINVAL;
Acked-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list
  2025-12-24 13:45 ` [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
@ 2025-12-26 22:11   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 22:11 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> The managed affinity list currently contains only unbound kthreads that
> have affinity preferences. Unbound kthreads globally affine by default
> are outside of the list because their affinity is automatically managed
> by the scheduler (through the fallback housekeeping mask) and by cpuset.
>
> However in order to preserve the preferred affinity of kthreads, cpuset
> will delegate the isolated partition update propagation to the
> housekeeping and kthread code.
>
> Prepare for that by including all unbound kthreads in the managed
> affinity list.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   kernel/kthread.c | 70 ++++++++++++++++++++++++++++--------------------
>   1 file changed, 41 insertions(+), 29 deletions(-)
>
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index f1e4f1f35cae..51c0908d3d02 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -365,9 +365,10 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
>   	if (kthread->preferred_affinity) {
>   		pref = kthread->preferred_affinity;
>   	} else {
> -		if (WARN_ON_ONCE(kthread->node == NUMA_NO_NODE))
> -			return;
> -		pref = cpumask_of_node(kthread->node);
> +		if (kthread->node == NUMA_NO_NODE)
> +			pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
> +		else
> +			pref = cpumask_of_node(kthread->node);
>   	}
>   
>   	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
> @@ -380,32 +381,29 @@ static void kthread_affine_node(void)
>   	struct kthread *kthread = to_kthread(current);
>   	cpumask_var_t affinity;
>   
> -	WARN_ON_ONCE(kthread_is_per_cpu(current));
> +	if (WARN_ON_ONCE(kthread_is_per_cpu(current)))
> +		return;
>   
> -	if (kthread->node == NUMA_NO_NODE) {
> -		housekeeping_affine(current, HK_TYPE_KTHREAD);
> -	} else {
> -		if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
> -			WARN_ON_ONCE(1);
> -			return;
> -		}
> -
> -		mutex_lock(&kthread_affinity_lock);
> -		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
> -		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
> -		/*
> -		 * The node cpumask is racy when read from kthread() but:
> -		 * - a racing CPU going down will either fail on the subsequent
> -		 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
> -		 *   afterwards by the scheduler.
> -		 * - a racing CPU going up will be handled by kthreads_online_cpu()
> -		 */
> -		kthread_fetch_affinity(kthread, affinity);
> -		set_cpus_allowed_ptr(current, affinity);
> -		mutex_unlock(&kthread_affinity_lock);
> -
> -		free_cpumask_var(affinity);
> +	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
> +		WARN_ON_ONCE(1);
> +		return;
>   	}
> +
> +	mutex_lock(&kthread_affinity_lock);
> +	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
> +	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
> +	/*
> +	 * The node cpumask is racy when read from kthread() but:
> +	 * - a racing CPU going down will either fail on the subsequent
> +	 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
> +	 *   afterwards by the scheduler.
> +	 * - a racing CPU going up will be handled by kthreads_online_cpu()
> +	 */
> +	kthread_fetch_affinity(kthread, affinity);
> +	set_cpus_allowed_ptr(current, affinity);
> +	mutex_unlock(&kthread_affinity_lock);
> +
> +	free_cpumask_var(affinity);
>   }
>   
>   static int kthread(void *_create)
> @@ -919,8 +917,22 @@ static int kthreads_online_cpu(unsigned int cpu)
>   			ret = -EINVAL;
>   			continue;
>   		}
> -		kthread_fetch_affinity(k, affinity);
> -		set_cpus_allowed_ptr(k->task, affinity);
> +
> +		/*
> +		 * Unbound kthreads without preferred affinity are already affine
> +		 * to housekeeping, whether those CPUs are online or not. So no need
> +		 * to handle newly online CPUs for them.
> +		 *
> +		 * But kthreads with a preferred affinity or node are different:
> +		 * if none of their preferred CPUs are online and part of
> +		 * housekeeping at the same time, they must be affine to housekeeping.
> +		 * But as soon as one of their preferred CPUs becomes online, they must
> +		 * be affine to them.
> +		 */
> +		if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
> +			kthread_fetch_affinity(k, affinity);
> +			set_cpus_allowed_ptr(k->task, affinity);
> +		}
>   	}
>   
>   	free_cpumask_var(affinity);
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread
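
The resolution order in kthread_fetch_affinity(), explicit preference first,
then NUMA node, then housekeeping, plus the final clamp to housekeeping, can
be modeled in a few lines of userspace C. The bitmask values and helper names
below are illustrative only:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t mask_t;

#define NUMA_NO_NODE (-1)

/* Model of kthread_fetch_affinity(): pick the preferred set, clamp it to
 * housekeeping, and fall back to housekeeping when the result is empty. */
static mask_t fetch_affinity(mask_t preferred, int node, mask_t node_mask,
			     mask_t hk)
{
	mask_t pref;

	if (preferred)
		pref = preferred;	/* kthread_affine_preferred() */
	else if (node != NUMA_NO_NODE)
		pref = node_mask;	/* per-node kthread, e.g. kswapd */
	else
		pref = hk;		/* plain unbound kthread */

	return (pref & hk) ? (pref & hk) : hk;
}

int main(void)
{
	mask_t hk = 0x0f;		/* CPUs 0-3 are housekeeping */

	/* Node 1 (CPUs 4-7) fully isolated: fall back to housekeeping. */
	printf("%#llx\n", (unsigned long long)fetch_affinity(0, 1, 0xf0, hk));

	/* Node 0 (CPUs 0-3) intersects housekeeping: stay on the node. */
	printf("%#llx\n", (unsigned long long)fetch_affinity(0, 0, 0x0f, hk));
	return 0;
}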

* Re: [PATCH 26/33] kthread: Include kthreadd to the managed affinity list
  2025-12-24 13:45 ` [PATCH 26/33] kthread: Include kthreadd to " Frederic Weisbecker
@ 2025-12-26 22:13   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 22:13 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> The unbound kthreads affinity management performed by cpuset is going to
> be imported into the kthread core code for consolidation purposes.
>
> Treat kthreadd just like any other kthread.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   kernel/kthread.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 51c0908d3d02..85ccf5bb17c9 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -818,12 +818,13 @@ int kthreadd(void *unused)
>   	/* Setup a clean context for our children to inherit. */
>   	set_task_comm(tsk, comm);
>   	ignore_signals(tsk);
> -	set_cpus_allowed_ptr(tsk, housekeeping_cpumask(HK_TYPE_KTHREAD));
>   	set_mems_allowed(node_states[N_MEMORY]);
>   
>   	current->flags |= PF_NOFREEZE;
>   	cgroup_init_kthreadd();
>   
> +	kthread_affine_node();
> +
>   	for (;;) {
>   		set_current_state(TASK_INTERRUPTIBLE);
>   		if (list_empty(&kthread_create_list))
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management
  2025-12-24 13:45 ` [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
@ 2025-12-26 22:16   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 22:16 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Unbound kthreads want to run neither on nohz_full CPUs nor on domain
> isolated CPUs. And since nohz_full implies domain isolation, checking
> the latter is enough to verify both.
>
> Therefore exclude kthreads from domain isolation.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   kernel/kthread.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 85ccf5bb17c9..968fa5868d21 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -362,18 +362,20 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
>   {
>   	const struct cpumask *pref;
>   
> +	guard(rcu)();
> +
>   	if (kthread->preferred_affinity) {
>   		pref = kthread->preferred_affinity;
>   	} else {
>   		if (kthread->node == NUMA_NO_NODE)
> -			pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
> +			pref = housekeeping_cpumask(HK_TYPE_DOMAIN);
>   		else
>   			pref = cpumask_of_node(kthread->node);
>   	}
>   
> -	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
> +	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_DOMAIN));
>   	if (cpumask_empty(cpumask))
> -		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_KTHREAD));
> +		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_DOMAIN));
>   }
>   
>   static void kthread_affine_node(void)
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN
  2025-12-24 13:45 ` [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
@ 2025-12-26 23:08   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 23:08 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Tasks that have all their allowed CPUs offline don't want their affinity
> to fall back on either nohz_full CPUs or on domain isolated CPUs. And
> since nohz_full implies domain isolation, checking the latter is enough
> to verify both.
>
> Therefore exclude domain isolation from fallback task affinity.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/mmu_context.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
> index ac01dc4eb2ce..ed3dd0f3fe19 100644
> --- a/include/linux/mmu_context.h
> +++ b/include/linux/mmu_context.h
> @@ -24,7 +24,7 @@ static inline void leave_mm(void) { }
>   #ifndef task_cpu_possible_mask
>   # define task_cpu_possible_mask(p)	cpu_possible_mask
>   # define task_cpu_possible(cpu, p)	true
> -# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_TICK)
> +# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_DOMAIN)
>   #else
>   # define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
>   #endif
Acked-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 29/33] sched/arm64: Move fallback task cpumask to HK_TYPE_DOMAIN
  2025-12-24 13:45 ` [PATCH 29/33] sched/arm64: Move fallback task " Frederic Weisbecker
@ 2025-12-26 23:46   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 23:46 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> When none of the allowed CPUs of a task are online, it gets migrated
> to the fallback cpumask which is all the non nohz_full CPUs.
>
> However just like nohz_full CPUs, domain isolated CPUs don't want to be
> disturbed by tasks that have lost their CPU affinities.
>
> And since nohz_full relies on domain isolation to work correctly, the
> set of domain isolated CPUs should always be a superset of the set of
> nohz_full CPUs (there can be CPUs that are domain isolated but not
> nohz_full, OTOH there shouldn't be nohz_full CPUs that are not domain
> isolated):
>
> 	HK_TYPE_DOMAIN | HK_TYPE_KERNEL_NOISE == HK_TYPE_DOMAIN
>
> Therefore use HK_TYPE_DOMAIN as the appropriate fallback target for
> tasks and since this cpumask can be modified at runtime, make sure
> that 32-bit EL0 capable CPUs on ARM64 mismatched systems are not
> isolated by cpusets.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   arch/arm64/kernel/cpufeature.c | 18 +++++++++++++++---
>   include/linux/cpu.h            |  4 ++++
>   kernel/cgroup/cpuset.c         | 17 ++++++++++++++---
>   3 files changed, 33 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index c840a93b9ef9..70b0e45e299a 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1656,6 +1656,18 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
>   	return feature_matches(val, entry);
>   }
>   
> +/*
> + * CPUs that support 32-bit EL0 can't be isolated because tasks may be
> + * arbitrarily affined to them, defeating the purpose of isolation.
> + */
> +bool arch_isolated_cpus_can_update(struct cpumask *new_cpus)
> +{
> +	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
> +		return !cpumask_intersects(cpu_32bit_el0_mask, new_cpus);
> +	else
> +		return true;
> +}
> +
>   const struct cpumask *system_32bit_el0_cpumask(void)
>   {
>   	if (!system_supports_32bit_el0())
> @@ -1669,7 +1681,7 @@ const struct cpumask *system_32bit_el0_cpumask(void)
>   
>   const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
>   {
> -	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
> +	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_DOMAIN));
>   }
>   
>   static int __init parse_32bit_el0_param(char *str)
> @@ -3987,8 +3999,8 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
>   	bool cpu_32bit = false;
>   
>   	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
> -		if (!housekeeping_cpu(cpu, HK_TYPE_TICK))
> -			pr_info("Treating adaptive-ticks CPU %u as 64-bit only\n", cpu);
> +		if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN))
> +			pr_info("Treating domain isolated CPU %u as 64-bit only\n", cpu);
>   		else
>   			cpu_32bit = true;
>   	}
> diff --git a/include/linux/cpu.h b/include/linux/cpu.h
> index 487b3bf2e1ea..0b48af25ab5c 100644
> --- a/include/linux/cpu.h
> +++ b/include/linux/cpu.h
> @@ -229,4 +229,8 @@ static inline bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
>   #define smt_mitigations SMT_MITIGATIONS_OFF
>   #endif
>   
> +struct cpumask;
> +
> +bool arch_isolated_cpus_can_update(struct cpumask *new_cpus);
> +
>   #endif /* _LINUX_CPU_H_ */
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index cd6119c02beb..1cc83a3c25f6 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1408,14 +1408,22 @@ static void partition_xcpus_del(int old_prs, struct cpuset *parent,
>   	cpumask_or(parent->effective_cpus, parent->effective_cpus, xcpus);
>   }
>   
> +bool __weak arch_isolated_cpus_can_update(struct cpumask *new_cpus)
> +{
> +	return true;
> +}
> +
>   /*
> - * isolated_cpus_can_update - check for isolated & nohz_full conflicts
> + * isolated_cpus_can_update - check for conflicts against housekeeping and
> + *                            CPU capabilities.
>    * @add_cpus: cpu mask for cpus that are going to be isolated
>    * @del_cpus: cpu mask for cpus that are no longer isolated, can be NULL
>    * Return: false if there is conflict, true otherwise
>    *
> - * If nohz_full is enabled and we have isolated CPUs, their combination must
> - * still leave housekeeping CPUs.
> + * Check for conflicts:
> + * - If nohz_full is enabled and there are isolated CPUs, their combination must
> + *   still leave housekeeping CPUs.
> + * - Architecture has CPU capabilities incompatible with being isolated
>    *
>    * TBD: Should consider merging this function into
>    *      prstate_housekeeping_conflict().
> @@ -1426,6 +1434,9 @@ static bool isolated_cpus_can_update(struct cpumask *add_cpus,
>   	cpumask_var_t full_hk_cpus;
>   	int res = true;
>   
> +	if (!arch_isolated_cpus_can_update(add_cpus))
> +		return false;
> +
>   	if (!housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
>   		return true;
>   
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread
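
The arch hook follows the kernel's usual __weak pattern: a generic default
that an architecture overrides by providing a strong definition of the same
symbol, as the arm64 hunk above does. A minimal standalone illustration of
the pattern, using GCC/Clang attribute syntax and hypothetical types:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_bits_t;	/* stand-in for struct cpumask */

/* Generic fallback: silently replaced at link time when another
 * translation unit provides a strong definition of the same symbol. */
__attribute__((weak)) bool arch_isolated_cpus_can_update(cpumask_bits_t *new_cpus)
{
	(void)new_cpus;
	return true;	/* no arch constraint by default */
}

int main(void)
{
	cpumask_bits_t m = 0x80;
	printf("can update: %d\n", arch_isolated_cpus_can_update(&m));
	return 0;
}

Linking in a second file that defines a strong arch_isolated_cpus_can_update()
takes over without any registration step, which is exactly how the arm64
version displaces the cpuset default above.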

* Re: [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change
  2025-12-24 13:44 ` [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
@ 2025-12-26 23:56   ` Tejun Heo
  0 siblings, 0 replies; 58+ messages in thread
From: Tejun Heo @ 2025-12-26 23:56 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Michal Koutný, Andrew Morton, Bjorn Helgaas,
	Catalin Marinas, Chen Ridong, Danilo Krummrich, David S . Miller,
	Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar,
	Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan,
	Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni,
	Peter Zijlstra, Phil Auld, Rafael J . Wysocki, Roman Gushchin,
	Shakeel Butt, Simon Horman, Thomas Gleixner, Vlastimil Babka,
	Waiman Long, Will Deacon, cgroups, linux-arm-kernel, linux-block,
	linux-mm, linux-pci, netdev

On Wed, Dec 24, 2025 at 02:44:50PM +0100, Frederic Weisbecker wrote:
> +static void schedule_drain_work(int cpu, struct work_struct *work)
> +{
> +	guard(rcu)();
> +	if (!cpu_is_isolated(cpu))
> +		schedule_work_on(cpu, work);
> +}

As the sync rules aren't trivial, it'd be nice to have some comment
explaining what's going on.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 58+ messages in thread
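
A sketch of the kind of comment being requested, modeled on the RCU scheme
that the housekeeping documentation in patch 33 describes; the wording below
is illustrative, not from the series:

static void schedule_drain_work(int cpu, struct work_struct *work)
{
	/*
	 * The isolation check and the enqueue must sit in one RCU read
	 * side section: the cpuset update side publishes the new
	 * HK_TYPE_DOMAIN mask, waits for a grace period and then flushes
	 * pending works. Any reader that observed the old mask has thus
	 * either queued its work before the flush or will observe the new
	 * mask on the next call, so no drain work can land on a newly
	 * isolated CPU.
	 */
	guard(rcu)();
	if (!cpu_is_isolated(cpu))
		schedule_work_on(cpu, work);
}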

* Re: [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes
  2025-12-24 13:45 ` [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
@ 2025-12-26 23:59   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-26 23:59 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> When cpuset isolated partitions get updated, unbound kthreads get
> indifferently affined to all non-isolated CPUs, regardless of their
> individual affinity preferences.
>
> For example kswapd is a per-node kthread that prefers to be affine to
> the node it refers to. Whenever an isolated partition is created,
> updated or deleted, kswapd's node affinity is going to be broken if any
> CPU in the related node is not isolated because kswapd will be affine
> globally.
>
> Fix this by letting the consolidated kthread managed affinity code do
> the affinity update on behalf of cpuset.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   include/linux/kthread.h  |  1 +
>   kernel/cgroup/cpuset.c   |  5 ++---
>   kernel/kthread.c         | 41 ++++++++++++++++++++++++++++++----------
>   kernel/sched/isolation.c |  3 +++
>   4 files changed, 37 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/kthread.h b/include/linux/kthread.h
> index 8d27403888ce..c92c1149ee6e 100644
> --- a/include/linux/kthread.h
> +++ b/include/linux/kthread.h
> @@ -100,6 +100,7 @@ void kthread_unpark(struct task_struct *k);
>   void kthread_parkme(void);
>   void kthread_exit(long result) __noreturn;
>   void kthread_complete_and_exit(struct completion *, long) __noreturn;
> +int kthreads_update_housekeeping(void);
>   
>   int kthreadd(void *unused);
>   extern struct task_struct *kthreadd_task;
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 1cc83a3c25f6..c8cfaf5cd4a1 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1208,11 +1208,10 @@ void cpuset_update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
>   
>   		if (top_cs) {
>   			/*
> +			 * PF_KTHREAD tasks are handled by housekeeping.
>   			 * PF_NO_SETAFFINITY tasks are ignored.
> -			 * All per cpu kthreads should have PF_NO_SETAFFINITY
> -			 * flag set, see kthread_set_per_cpu().
>   			 */
> -			if (task->flags & PF_NO_SETAFFINITY)
> +			if (task->flags & (PF_KTHREAD | PF_NO_SETAFFINITY))
>   				continue;
>   			cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
>   		} else {
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 968fa5868d21..03008154249c 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -891,14 +891,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
>   }
>   EXPORT_SYMBOL_GPL(kthread_affine_preferred);
>   
> -/*
> - * Re-affine kthreads according to their preferences
> - * and the newly online CPU. The CPU down part is handled
> - * by select_fallback_rq() which default re-affines to
> - * housekeepers from other nodes in case the preferred
> - * affinity doesn't apply anymore.
> - */
> -static int kthreads_online_cpu(unsigned int cpu)
> +static int kthreads_update_affinity(bool force)
>   {
>   	cpumask_var_t affinity;
>   	struct kthread *k;
> @@ -924,7 +917,8 @@ static int kthreads_online_cpu(unsigned int cpu)
>   		/*
>   		 * Unbound kthreads without preferred affinity are already affine
>   		 * to housekeeping, whether those CPUs are online or not. So no need
> -		 * to handle newly online CPUs for them.
> +		 * to handle newly online CPUs for them. However housekeeping changes
> +		 * have to be applied.
>   		 *
>   		 * But kthreads with a preferred affinity or node are different:
>   		 * if none of their preferred CPUs are online and part of
> @@ -932,7 +926,7 @@ static int kthreads_online_cpu(unsigned int cpu)
>   		 * But as soon as one of their preferred CPU becomes online, they must
>   		 * be affine to them.
>   		 */
> -		if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
> +		if (force || k->preferred_affinity || k->node != NUMA_NO_NODE) {
>   			kthread_fetch_affinity(k, affinity);
>   			set_cpus_allowed_ptr(k->task, affinity);
>   		}
> @@ -943,6 +937,33 @@ static int kthreads_online_cpu(unsigned int cpu)
>   	return ret;
>   }
>   
> +/**
> + * kthreads_update_housekeeping - Update kthreads affinity on cpuset change
> + *
> + * When cpuset changes a partition type to/from "isolated" or updates related
> + * cpumasks, propagate the housekeeping cpumask change to preferred kthreads
> + * affinity.
> + *
> + * Returns 0 if successful, -ENOMEM if temporary mask couldn't
> + * be allocated or -EINVAL in case of internal error.
> + */
> +int kthreads_update_housekeeping(void)
> +{
> +	return kthreads_update_affinity(true);
> +}
> +
> +/*
> + * Re-affine kthreads according to their preferences
> + * and the newly online CPU. The CPU down part is handled
> + * by select_fallback_rq() which default re-affines to
> + * housekeepers from other nodes in case the preferred
> + * affinity doesn't apply anymore.
> + */
> +static int kthreads_online_cpu(unsigned int cpu)
> +{
> +	return kthreads_update_affinity(false);
> +}
> +
>   static int kthreads_init(void)
>   {
>   	return cpuhp_setup_state(CPUHP_AP_KTHREADS_ONLINE, "kthreads:online",
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 84a257d05918..c499474866b8 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -157,6 +157,9 @@ int housekeeping_update(struct cpumask *isol_mask, enum hk_type type)
>   	err = tmigr_isolated_exclude_cpumask(isol_mask);
>   	WARN_ON_ONCE(err < 0);
>   
> +	err = kthreads_update_housekeeping();
> +	WARN_ON_ONCE(err < 0);
> +
>   	kfree(old);
>   
>   	return err;
Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread
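
The force flag is the only behavioural difference between the hotplug path
and the housekeeping update path. A small standalone C model, with
hypothetical kthread descriptors, shows which kthreads get reaffined in each
case:

#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)

struct kt { const char *name; bool preferred; int node; };

/* Mirrors the predicate in kthreads_update_affinity(): hotplug
 * (force=false) only touches kthreads with a preference or a node,
 * housekeeping updates (force=true) reaffine every managed kthread. */
static bool needs_update(const struct kt *k, bool force)
{
	return force || k->preferred || k->node != NUMA_NO_NODE;
}

int main(void)
{
	const struct kt kts[] = {
		{ "kswapd0 (node 0)",   false, 0 },
		{ "rcu_gp (preferred)", true,  NUMA_NO_NODE },
		{ "plain unbound",      false, NUMA_NO_NODE },
	};

	for (int force = 0; force <= 1; force++)
		for (unsigned int i = 0; i < 3; i++)
			printf("force=%d %-20s -> %s\n", force, kts[i].name,
			       needs_update(&kts[i], force) ? "reaffine" : "skip");
	return 0;
}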

* Re: [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping
  2025-12-24 13:45 ` [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
  2025-12-26 20:31   ` Waiman Long
@ 2025-12-27  0:18   ` Tejun Heo
  1 sibling, 0 replies; 58+ messages in thread
From: Tejun Heo @ 2025-12-27  0:18 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Michal Koutný, Andrew Morton, Bjorn Helgaas,
	Catalin Marinas, Chen Ridong, Danilo Krummrich, David S . Miller,
	Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar,
	Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan,
	Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni,
	Peter Zijlstra, Phil Auld, Rafael J . Wysocki, Roman Gushchin,
	Shakeel Butt, Simon Horman, Thomas Gleixner, Vlastimil Babka,
	Waiman Long, Will Deacon, cgroups, linux-arm-kernel, linux-block,
	linux-mm, linux-pci, netdev

On Wed, Dec 24, 2025 at 02:45:05PM +0100, Frederic Weisbecker wrote:
> Until now, cpuset would propagate isolated partition changes to
> workqueues so that unbound workers get properly reaffined.
> 
> > Since housekeeping now centralizes, synchronizes and propagates isolation
> cpumask changes, perform the work from that subsystem for consolidation
> and consistency purposes.
> 
> > For simplification purposes, the target function is adapted to take the
> new housekeeping mask instead of the isolated mask.
> 
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 33/33] doc: Add housekeeping documentation
  2025-12-24 13:45 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
@ 2025-12-27  0:39   ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-27  0:39 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/24/25 8:45 AM, Frederic Weisbecker wrote:
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>   Documentation/core-api/housekeeping.rst | 111 ++++++++++++++++++++++++
>   Documentation/core-api/index.rst        |   1 +
>   2 files changed, 112 insertions(+)
>   create mode 100644 Documentation/core-api/housekeeping.rst
>
> diff --git a/Documentation/core-api/housekeeping.rst b/Documentation/core-api/housekeeping.rst
> new file mode 100644
> index 000000000000..e5417302774c
> --- /dev/null
> +++ b/Documentation/core-api/housekeeping.rst
> @@ -0,0 +1,111 @@
> +======================================
> +Housekeeping
> +======================================
> +
> +
> +CPU Isolation moves away kernel work that may otherwise run on any CPU.
> +The purpose of its related features is to reduce the OS jitter that some
> +extreme workloads can't stand, such as in some DPDK usecases.
Nit: "usecases" => "use cases"
> +
> +The kernel work moved away by CPU isolation is commonly described as
> +"housekeeping" because it includes ground work that performs cleanups,
> +statistics maintainance and actions relying on them, memory release,
> +various deferrals etc...
> +
> +Sometimes housekeeping is just some unbound work (unbound workqueues,
> +unbound timers, ...) that gets easily assigned to non-isolated CPUs.
> +But sometimes housekeeping is tied to a specific CPU and requires
> +elaborate tricks to be offloaded to non-isolated CPUs (RCU_NOCB, remote
> +scheduler tick, etc.).
> +
> +Thus, a housekeeping CPU can be considered as the reverse of an isolated
> +CPU. It is simply a CPU that can execute housekeeping work. There must
> +always be at least one online housekeeping CPU at any time. The CPUs that
> +are not	isolated are automatically assigned as housekeeping.
Nit: extra white spaces between "not" and "isolated".
> +
> +Housekeeping is currently divided into four features described
> +by the ``enum hk_type type``:
> +
> +1.	HK_TYPE_DOMAIN matches the work moved away by scheduler domain
> +	isolation performed through ``isolcpus=domain`` boot parameter or
> +	isolated cpuset partitions in cgroup v2. This includes scheduler
> +	load balancing, unbound workqueues and timers.
> +
> +2.	HK_TYPE_KERNEL_NOISE matches the work moved away by tick isolation
> +	performed through ``nohz_full=`` or ``isolcpus=nohz`` boot
> +	parameters. This includes remote scheduler tick, vmstat and lockup
> +	watchdog.
> +
> +3.	HK_TYPE_MANAGED_IRQ matches the IRQ handlers moved away by managed
> +	IRQ isolation performed through ``isolcpus=managed_irq``.
> +
> +4.	HK_TYPE_DOMAIN_BOOT matches the work moved away by scheduler domain
> +	isolation performed through ``isolcpus=domain`` only. It is similar
> +	to HK_TYPE_DOMAIN except it ignores the isolation performed by
> +	cpusets.
> +
> +
> +Housekeeping cpumasks
> +=================================
> +
> +Housekeeping cpumasks include the CPUs that can execute the work moved
> +away by the matching isolation feature. These cpumasks are returned by
> +the following function::
> +
> +	const struct cpumask *housekeeping_cpumask(enum hk_type type)
> +
> +By default, if neither ``nohz_full=``, nor ``isolcpus``, nor cpuset's
> +isolated partitions are used, which covers most usecases, this function
> +returns the cpu_possible_mask.
> +
> +Otherwise the function returns the cpumask complement of the isolation
> +feature. For example:
> +
> +With isolcpus=domain,7 the following will return a mask with all possible
> +CPUs except 7::
> +
> +	housekeeping_cpumask(HK_TYPE_DOMAIN)
> +
> +Similarly with nohz_full=5,6 the following will return a mask with all
> +possible CPUs except 5,6::
> +
> +	housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)
> +
> +
> +Synchronization against cpusets
> +=================================
> +
> +Cpuset can modify the HK_TYPE_DOMAIN housekeeping cpumask while creating,
> +modifying or deleting an isolated partition.
> +
> +The users of HK_TYPE_DOMAIN cpumask must then make sure to synchronize
> +properly against cpuset in order to make sure that:
> +
> +1.	The cpumask snapshot stays coherent.
> +
> +2.	No housekeeping work is queued on a newly made isolated CPU.
> +
> +3.	Pending housekeeping work that was queued to a non-isolated
> +	CPU which just turned isolated through cpuset must be flushed
> +	before the related created/modified isolated partition is made
> +	available to userspace.
> +
> +This synchronization is maintained by an RCU based scheme. The cpuset update
> +side waits for an RCU grace period after updating the HK_TYPE_DOMAIN
> +cpumask and before flushing pending works. On the read side, care must be
> +taken to gather the housekeeping target election and the work enqueue within
> +the same RCU read side critical section.
> +
> +A typical layout example would look like this on the update side
> +(``housekeeping_update()``)::
> +
> +	rcu_assign_pointer(housekeeping_cpumasks[type], trial);
> +	synchronize_rcu();
> +	flush_workqueue(example_workqueue);
> +
> +And then on the read side::
> +
> +	rcu_read_lock();
> +	cpu = housekeeping_any_cpu(HK_TYPE_DOMAIN);
> +	queue_work_on(cpu, example_workqueue, work);
> +	rcu_read_unlock();
> diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
> index 5eb0fbbbc323..79fe7735692e 100644
> --- a/Documentation/core-api/index.rst
> +++ b/Documentation/core-api/index.rst
> @@ -25,6 +25,7 @@ it.
>      symbol-namespaces
>      asm-annotations
>      real-time/index
> +   housekeeping.rst
>   
>   Data structures and low-level utilities
>   =======================================
Acked-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change
  2025-12-24 13:44 ` [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
@ 2025-12-29  3:23   ` Zhang Qiao
  2025-12-29  3:53     ` Waiman Long
  0 siblings, 1 reply; 58+ messages in thread
From: Zhang Qiao @ 2025-12-29  3:23 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

Hi, Weisbecker,

On 2025/12/24 21:44, Frederic Weisbecker wrote:
> HK_TYPE_DOMAIN will soon integrate cpuset isolated partitions and
> therefore be made modifiable at runtime. Synchronize against the cpumask
> update using RCU.
> 
> The RCU locked section includes both the housekeeping CPU target
> election for the PCI probe work and the work enqueue.
> 
> This way the housekeeping update side will simply need to flush the
> pending related works after updating the housekeeping mask in order to
> make sure that no PCI work ever executes on an isolated CPU. This part
> will be handled in a subsequent patch.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  drivers/pci/pci-driver.c | 47 ++++++++++++++++++++++++++++++++--------
>  1 file changed, 38 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index 7c2d9d596258..786d6ce40999 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -302,9 +302,8 @@ struct drv_dev_and_id {
>  	const struct pci_device_id *id;
>  };
>  
> -static long local_pci_probe(void *_ddi)
> +static int local_pci_probe(struct drv_dev_and_id *ddi)
>  {
> -	struct drv_dev_and_id *ddi = _ddi;
>  	struct pci_dev *pci_dev = ddi->dev;
>  	struct pci_driver *pci_drv = ddi->drv;
>  	struct device *dev = &pci_dev->dev;
> @@ -338,6 +337,19 @@ static long local_pci_probe(void *_ddi)
>  	return 0;
>  }
>  
> +struct pci_probe_arg {
> +	struct drv_dev_and_id *ddi;
> +	struct work_struct work;
> +	int ret;
> +};
> +
> +static void local_pci_probe_callback(struct work_struct *work)
> +{
> +	struct pci_probe_arg *arg = container_of(work, struct pci_probe_arg, work);
> +
> +	arg->ret = local_pci_probe(arg->ddi);
> +}
> +
>  static bool pci_physfn_is_probed(struct pci_dev *dev)
>  {
>  #ifdef CONFIG_PCI_IOV
> @@ -362,34 +374,51 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
>  	dev->is_probed = 1;
>  
>  	cpu_hotplug_disable();
> -
>  	/*
>  	 * Prevent nesting work_on_cpu() for the case where a Virtual Function
>  	 * device is probed from work_on_cpu() of the Physical device.
>  	 */
>  	if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
>  	    pci_physfn_is_probed(dev)) {
> -		cpu = nr_cpu_ids;
> +		error = local_pci_probe(&ddi);
>  	} else {
>  		cpumask_var_t wq_domain_mask;
> +		struct pci_probe_arg arg = { .ddi = &ddi };
> +
> +		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
>  
>  		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
>  			error = -ENOMEM;

If we return from here, arg.work will not be destroyed.



>  			goto out;
>  		}
> +
> +		/*
> +		 * The target election and the enqueue of the work must be within
> +		 * the same RCU read side section so that when the workqueue pool
> +		 * is flushed after a housekeeping cpumask update, further readers
> +		 * are guaranteed to queue the probing work to the appropriate
> +		 * targets.
> +		 */
> +		rcu_read_lock();
>  		cpumask_and(wq_domain_mask,
>  			    housekeeping_cpumask(HK_TYPE_WQ),
>  			    housekeeping_cpumask(HK_TYPE_DOMAIN));
>  
>  		cpu = cpumask_any_and(cpumask_of_node(node),
>  				      wq_domain_mask);
> +		if (cpu < nr_cpu_ids) {
> +			schedule_work_on(cpu, &arg.work);
> +			rcu_read_unlock();
> +			flush_work(&arg.work);
> +			error = arg.ret;
> +		} else {
> +			rcu_read_unlock();
> +			error = local_pci_probe(&ddi);
> +		}
> +
>  		free_cpumask_var(wq_domain_mask);
> +		destroy_work_on_stack(&arg.work);
>  	}
> -
> -	if (cpu < nr_cpu_ids)
> -		error = work_on_cpu(cpu, local_pci_probe, &ddi);
> -	else
> -		error = local_pci_probe(&ddi);
>  out:
>  	dev->is_probed = 0;
>  	cpu_hotplug_enable();
> 

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change
  2025-12-29  3:23   ` Zhang Qiao
@ 2025-12-29  3:53     ` Waiman Long
  0 siblings, 0 replies; 58+ messages in thread
From: Waiman Long @ 2025-12-29  3:53 UTC (permalink / raw)
  To: Zhang Qiao, Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Will Deacon, cgroups,
	linux-arm-kernel, linux-block, linux-mm, linux-pci, netdev

On 12/28/25 10:23 PM, Zhang Qiao wrote:
> Hi, Weisbecker,
>
> On 2025/12/24 21:44, Frederic Weisbecker wrote:
>> HK_TYPE_DOMAIN will soon integrate cpuset isolated partitions and
>> therefore be made modifiable at runtime. Synchronize against the cpumask
>> update using RCU.
>>
>> The RCU read-side section covers both the housekeeping CPU target
>> election for the PCI probe work and the work enqueue.
>>
>> This way, the housekeeping update side only needs to flush the related
>> pending work items after updating the housekeeping mask in order to
>> make sure that no PCI work ever executes on an isolated CPU. This part
>> will be handled in a subsequent patch (a conceptual sketch follows the
>> quoted diff below).
>>
>> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>> ---
>>   drivers/pci/pci-driver.c | 47 ++++++++++++++++++++++++++++++++--------
>>   1 file changed, 38 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
>> index 7c2d9d596258..786d6ce40999 100644
>> --- a/drivers/pci/pci-driver.c
>> +++ b/drivers/pci/pci-driver.c
>> @@ -302,9 +302,8 @@ struct drv_dev_and_id {
>>   	const struct pci_device_id *id;
>>   };
>>   
>> -static long local_pci_probe(void *_ddi)
>> +static int local_pci_probe(struct drv_dev_and_id *ddi)
>>   {
>> -	struct drv_dev_and_id *ddi = _ddi;
>>   	struct pci_dev *pci_dev = ddi->dev;
>>   	struct pci_driver *pci_drv = ddi->drv;
>>   	struct device *dev = &pci_dev->dev;
>> @@ -338,6 +337,19 @@ static long local_pci_probe(void *_ddi)
>>   	return 0;
>>   }
>>   
>> +struct pci_probe_arg {
>> +	struct drv_dev_and_id *ddi;
>> +	struct work_struct work;
>> +	int ret;
>> +};
>> +
>> +static void local_pci_probe_callback(struct work_struct *work)
>> +{
>> +	struct pci_probe_arg *arg = container_of(work, struct pci_probe_arg, work);
>> +
>> +	arg->ret = local_pci_probe(arg->ddi);
>> +}
>> +
>>   static bool pci_physfn_is_probed(struct pci_dev *dev)
>>   {
>>   #ifdef CONFIG_PCI_IOV
>> @@ -362,34 +374,51 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
>>   	dev->is_probed = 1;
>>   
>>   	cpu_hotplug_disable();
>> -
>>   	/*
>>   	 * Prevent nesting work_on_cpu() for the case where a Virtual Function
>>   	 * device is probed from work_on_cpu() of the Physical device.
>>   	 */
>>   	if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
>>   	    pci_physfn_is_probed(dev)) {
>> -		cpu = nr_cpu_ids;
>> +		error = local_pci_probe(&ddi);
>>   	} else {
>>   		cpumask_var_t wq_domain_mask;
>> +		struct pci_probe_arg arg = { .ddi = &ddi };
>> +
>> +		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
>>   
>>   		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
>>   			error = -ENOMEM;
> If we return from here, arg.work is never destroyed: the early goto
> skips destroy_work_on_stack(), leaving a stale debug object behind when
> CONFIG_DEBUG_OBJECTS_WORK is enabled.
>
>
Right. INIT_WORK_ONSTACK() should be called after successful 
cpumask_var_t allocation.
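
IOW, something like the below (untested sketch against the hunk quoted
above), moving the on-stack work init below the allocation so that the
error path never leaves an initialized work behind:

	} else {
		cpumask_var_t wq_domain_mask;
		struct pci_probe_arg arg = { .ddi = &ddi };

		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
			error = -ENOMEM;
			goto out;
		}

		/*
		 * Safe now: every later exit from this branch passes
		 * through destroy_work_on_stack().
		 */
		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
		...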

Cheers,
Longman

>>   			goto out;
>>   		}
>> +
>> +		/*
>> +		 * The target election and the enqueue of the work must be within
>> +		 * the same RCU read side section so that when the workqueue pool
>> +		 * is flushed after a housekeeping cpumask update, further readers
>> +		 * are guaranteed to queue the probing work to the appropriate
>> +		 * targets.
>> +		 */
>> +		rcu_read_lock();
>>   		cpumask_and(wq_domain_mask,
>>   			    housekeeping_cpumask(HK_TYPE_WQ),
>>   			    housekeeping_cpumask(HK_TYPE_DOMAIN));
>>   
>>   		cpu = cpumask_any_and(cpumask_of_node(node),
>>   				      wq_domain_mask);
>> +		if (cpu < nr_cpu_ids) {
>> +			schedule_work_on(cpu, &arg.work);
>> +			rcu_read_unlock();
>> +			flush_work(&arg.work);
>> +			error = arg.ret;
>> +		} else {
>> +			rcu_read_unlock();
>> +			error = local_pci_probe(&ddi);
>> +		}
>> +
>>   		free_cpumask_var(wq_domain_mask);
>> +		destroy_work_on_stack(&arg.work);
>>   	}
>> -
>> -	if (cpu < nr_cpu_ids)
>> -		error = work_on_cpu(cpu, local_pci_probe, &ddi);
>> -	else
>> -		error = local_pci_probe(&ddi);
>>   out:
>>   	dev->is_probed = 0;
>>   	cpu_hotplug_enable();
>>
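
For reference, the "subsequent patch" mentioned in the changelog above is
PATCH 17/33 ("PCI: Flush PCI probe workqueue on cpuset isolated partition
change"). Conceptually, the update side then pairs with the read side like
this (hypothetical sketch only; hk_domain_mask and pci_flush_probe_work()
are invented names for illustration):

	/*
	 * Publish the new HK_TYPE_DOMAIN mask, wait for in-flight
	 * readers that may have elected a CPU from the old mask and
	 * queued a probe work there, then flush those works. Readers
	 * starting afterwards are guaranteed to see the new mask.
	 */
	rcu_assign_pointer(hk_domain_mask, new_mask);
	synchronize_rcu();
	pci_flush_probe_work();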


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/33] block: Protect against concurrent isolated cpuset change
  2025-12-24 13:44 ` [PATCH 09/33] block: Protect against concurrent " Frederic Weisbecker
@ 2025-12-30  0:37   ` Jens Axboe
  0 siblings, 0 replies; 58+ messages in thread
From: Jens Axboe @ 2025-12-30  0:37 UTC (permalink / raw)
  To: Frederic Weisbecker, LKML
  Cc: Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas,
	Chen Ridong, Danilo Krummrich, David S . Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko,
	Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J . Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups, linux-arm-kernel, linux-block, linux-mm,
	linux-pci, netdev

On 12/24/25 6:44 AM, Frederic Weisbecker wrote:
> The block subsystem prevents its workqueue from running on isolated
> CPUs, including those defined by cpuset isolated partitions. Since
> HK_TYPE_DOMAIN will soon contain both sets and be subject to runtime
> modifications, synchronize against housekeeping using the relevant lock.
> 
> For full support of cpuset changes, the block subsystem may need to
> propagate changes to the isolated cpumask through the workqueue in the
> future.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  block/blk-mq.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 1978eef95dca..0037af1216f3 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4257,12 +4257,16 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>  
>  		/*
>  		 * Rule out isolated CPUs from hctx->cpumask to avoid
> -		 * running block kworker on isolated CPUs
> +		 * running block kworker on isolated CPUs.
> +		 * FIXME: cpuset should propagate further changes to isolated CPUs
> +		 * here.
>  		 */
> +		rcu_read_lock();
>  		for_each_cpu(cpu, hctx->cpumask) {
>  			if (cpu_is_isolated(cpu))
>  				cpumask_clear_cpu(cpu, hctx->cpumask);
>  		}
> +		rcu_read_unlock();

Want me to just take this one separately and get it out of your hair?
Doesn't seem to have any dependencies.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 58+ messages in thread


Thread overview: 58+ messages
2025-12-24 13:44 [PATCH 00/33 v5] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
2025-12-29  3:23   ` Zhang Qiao
2025-12-29  3:53     ` Waiman Long
2025-12-24 13:44 ` [PATCH 02/33] cpu: Revert "cpu/hotplug: Prevent self deadlock on CPU hot-unplug" Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change Frederic Weisbecker
2025-12-26 23:56   ` Tejun Heo
2025-12-24 13:44 ` [PATCH 04/33] mm: vmstat: " Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 05/33] sched/isolation: Save boot defined domain flags Frederic Weisbecker
2025-12-25 22:27   ` Waiman Long
2025-12-24 13:44 ` [PATCH 06/33] cpuset: Convert boot_hk_cpus to use HK_TYPE_DOMAIN_BOOT Frederic Weisbecker
2025-12-25 22:31   ` Waiman Long
2025-12-24 13:44 ` [PATCH 07/33] driver core: cpu: Convert /sys/devices/system/cpu/isolated " Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 08/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 09/33] block: Protect against concurrent " Frederic Weisbecker
2025-12-30  0:37   ` Jens Axboe
2025-12-24 13:44 ` [PATCH 10/33] timers/migration: Prevent from lockdep false positive warning Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 11/33] cpu: Provide lockdep check for CPU hotplug lock write-held Frederic Weisbecker
2025-12-24 13:44 ` [PATCH 12/33] cpuset: Provide lockdep check for cpuset lock held Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 13/33] sched/isolation: Convert housekeeping cpumasks to rcu pointers Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 14/33] cpuset: Update HK_TYPE_DOMAIN cpumask from cpuset Frederic Weisbecker
2025-12-26  2:24   ` Waiman Long
2025-12-26  3:20     ` Waiman Long
2025-12-26  8:08   ` Chen Ridong
2025-12-24 13:45 ` [PATCH 15/33] sched/isolation: Flush memcg workqueues on cpuset isolated partition change Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 16/33] sched/isolation: Flush vmstat " Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 17/33] PCI: Flush PCI probe workqueue " Frederic Weisbecker
2025-12-26  8:48   ` Chen Ridong
2025-12-24 13:45 ` [PATCH 18/33] cpuset: Propagate cpuset isolation update to workqueue through housekeeping Frederic Weisbecker
2025-12-26 20:31   ` Waiman Long
2025-12-27  0:18   ` Tejun Heo
2025-12-24 13:45 ` [PATCH 19/33] cpuset: Propagate cpuset isolation update to timers " Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 20/33] timers/migration: Remove superfluous cpuset isolation test Frederic Weisbecker
2025-12-26 20:45   ` Waiman Long
2025-12-24 13:45 ` [PATCH 21/33] cpuset: Remove cpuset_cpu_is_isolated() Frederic Weisbecker
2025-12-26 20:48   ` Waiman Long
2025-12-24 13:45 ` [PATCH 22/33] sched/isolation: Remove HK_TYPE_TICK test from cpu_is_isolated() Frederic Weisbecker
2025-12-26 21:26   ` Waiman Long
2025-12-24 13:45 ` [PATCH 23/33] PCI: Remove superfluous HK_TYPE_WQ check Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 24/33] kthread: Refine naming of affinity related fields Frederic Weisbecker
2025-12-26 21:37   ` Waiman Long
2025-12-24 13:45 ` [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list Frederic Weisbecker
2025-12-26 22:11   ` Waiman Long
2025-12-24 13:45 ` [PATCH 26/33] kthread: Include kthreadd to " Frederic Weisbecker
2025-12-26 22:13   ` Waiman Long
2025-12-24 13:45 ` [PATCH 27/33] kthread: Rely on HK_TYPE_DOMAIN for preferred affinity management Frederic Weisbecker
2025-12-26 22:16   ` Waiman Long
2025-12-24 13:45 ` [PATCH 28/33] sched: Switch the fallback task allowed cpumask to HK_TYPE_DOMAIN Frederic Weisbecker
2025-12-26 23:08   ` Waiman Long
2025-12-24 13:45 ` [PATCH 29/33] sched/arm64: Move fallback task " Frederic Weisbecker
2025-12-26 23:46   ` Waiman Long
2025-12-24 13:45 ` [PATCH 30/33] kthread: Honour kthreads preferred affinity after cpuset changes Frederic Weisbecker
2025-12-26 23:59   ` Waiman Long
2025-12-24 13:45 ` [PATCH 31/33] kthread: Comment on the purpose and placement of kthread_affine_node() call Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 32/33] kthread: Document kthread_affine_preferred() Frederic Weisbecker
2025-12-24 13:45 ` [PATCH 33/33] doc: Add housekeeping documentation Frederic Weisbecker
2025-12-27  0:39   ` Waiman Long
  -- strict thread matches above, loose matches on Subject: below --
2025-10-13 20:31 [PATCH 00/33 v3] cpuset/isolation: Honour kthreads preferred affinity Frederic Weisbecker
2025-10-13 20:31 ` [PATCH 08/33] net: Keep ignoring isolated cpuset change Frederic Weisbecker
