From: Waiman Long <longman@redhat.com>
To: "Chen Ridong" <chenridong@huaweicloud.com>,
"Tejun Heo" <tj@kernel.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Aaron Tomlin <atomlin@atomlin.com>,
Juri Lelli <juri.lelli@redhat.com>,
Waiman Long <longman@redhat.com>
Subject: [PATCH cgroup/for-next 2/4] cgroup/cpuset: Expand the scope of cpuset_can_attach_check()
Date: Thu, 14 May 2026 13:02:38 -0400
Message-ID: <20260514170240.575156-3-longman@redhat.com>
In-Reply-To: <20260514170240.575156-1-longman@redhat.com>
Expand the scope of cpuset_can_attach_check() by moving the setting of
the setsched flag into it, via the new @oldcs and @psetsched arguments.
As cpuset_can_attach_check() is also called from cpuset_can_fork(), that
caller passes NULL for both new arguments.

While at it, expose the source/destination cpuset cpu and memory
comparison results in the new attach_cpus_updated and attach_mems_updated
static flags so that cpuset_attach() can use them directly without
redoing the same computations.

No functional change is expected.
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 69 +++++++++++++++++++++++++-----------------
1 file changed, 41 insertions(+), 28 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 9de3c907436f..68392cf6429b 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2964,19 +2964,55 @@ static int update_prstate(struct cpuset *cs, int new_prs)
return 0;
}
+/*
+ * cpuset_can_attach() and cpuset_attach() specific internal data
+ */
static struct cpuset *cpuset_attach_old_cs;
+static bool attach_cpus_updated;
+static bool attach_mems_updated;
/*
* Check to see if a cpuset can accept a new task
* For v1, cpus_allowed and mems_allowed can't be empty.
* For v2, effective_cpus can't be empty.
* Note that in v1, effective_cpus = cpus_allowed.
+ *
+ * Also set the boolean flag passed in by @psetsched depending on if
+ * security_task_setscheduler() call is needed and @oldcs is not NULL.
*/
-static int cpuset_can_attach_check(struct cpuset *cs)
+static int cpuset_can_attach_check(struct cpuset *cs, struct cpuset *oldcs,
+ bool *psetsched)
{
if (cpumask_empty(cs->effective_cpus) ||
(!is_in_v2_mode() && nodes_empty(cs->mems_allowed)))
return -ENOSPC;
+
+ if (!oldcs)
+ return 0;
+
+ /*
+ * Update attach specific data
+ */
+ attach_cpus_updated = !cpumask_equal(cs->effective_cpus, oldcs->effective_cpus);
+ attach_mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems);
+
+ /*
+ * Skip rights over task setsched check in v2 when nothing changes,
+ * migration permission derives from hierarchy ownership in
+ * cgroup_procs_write_permission().
+ */
+ *psetsched = !cpuset_v2() || attach_cpus_updated || attach_mems_updated;
+
+ /*
+ * A v1 cpuset with tasks will have no CPU left only when CPU hotplug
+ * brings the last online CPU offline as users are not allowed to empty
+ * cpuset.cpus when there are active tasks inside. When that happens,
+ * we should allow tasks to migrate out without security check to make
+ * sure they will be able to run after migration.
+ */
+ if (!is_in_v2_mode() && cpumask_empty(oldcs->effective_cpus))
+ *psetsched = false;
+
return 0;
}
@@ -3023,29 +3059,10 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
mutex_lock(&cpuset_mutex);
/* Check to see if task is allowed in the cpuset */
- ret = cpuset_can_attach_check(cs);
+ ret = cpuset_can_attach_check(cs, oldcs, &setsched_check);
if (ret)
goto out_unlock;
- /*
- * Skip rights over task setsched check in v2 when nothing changes,
- * migration permission derives from hierarchy ownership in
- * cgroup_procs_write_permission()).
- */
- setsched_check = !cpuset_v2() ||
- !cpumask_equal(cs->effective_cpus, oldcs->effective_cpus) ||
- !nodes_equal(cs->effective_mems, oldcs->effective_mems);
-
- /*
- * A v1 cpuset with tasks will have no CPU left only when CPU hotplug
- * brings the last online CPU offline as users are not allowed to empty
- * cpuset.cpus when there are active tasks inside. When that happens,
- * we should allow tasks to migrate out without security check to make
- * sure they will be able to run after migration.
- */
- if (!is_in_v2_mode() && cpumask_empty(oldcs->effective_cpus))
- setsched_check = false;
-
cgroup_taskset_for_each(task, css, tset) {
ret = task_can_attach(task);
if (ret)
@@ -3139,7 +3156,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
struct cgroup_subsys_state *css;
struct cpuset *cs;
struct cpuset *oldcs = cpuset_attach_old_cs;
- bool cpus_updated, mems_updated;
bool queue_task_work = false;
cgroup_taskset_first(tset, &css);
@@ -3147,9 +3163,6 @@ static void cpuset_attach(struct cgroup_taskset *tset)
lockdep_assert_cpus_held(); /* see cgroup_attach_lock() */
mutex_lock(&cpuset_mutex);
- cpus_updated = !cpumask_equal(cs->effective_cpus,
- oldcs->effective_cpus);
- mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems);
/*
* In the default hierarchy, enabling cpuset in the child cgroups
@@ -3157,7 +3170,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
* in effective cpus and mems. In that case, we can optimize out
* by skipping the task iteration and update.
*/
- if (cpuset_v2() && !cpus_updated && !mems_updated) {
+ if (cpuset_v2() && !attach_cpus_updated && !attach_mems_updated) {
cpuset_attach_nodemask_to = cs->effective_mems;
goto out;
}
@@ -3174,7 +3187,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
* not set.
*/
cpuset_attach_nodemask_to = cs->effective_mems;
- if (!is_memory_migrate(cs) && !mems_updated)
+ if (!is_memory_migrate(cs) && !attach_mems_updated)
goto out;
cgroup_taskset_for_each_leader(leader, css, tset) {
@@ -3589,7 +3602,7 @@ static int cpuset_can_fork(struct task_struct *task, struct css_set *cset)
mutex_lock(&cpuset_mutex);
/* Check to see if task is allowed in the cpuset */
- ret = cpuset_can_attach_check(cs);
+ ret = cpuset_can_attach_check(cs, NULL, NULL);
if (ret)
goto out_unlock;
--
2.54.0
2026-05-14 17:02 [PATCH cgroup/for-next 0/4] cgroup/cpuset: Support multiple source/destination cpusets for cpuset_*attach() Waiman Long
2026-05-14 17:02 ` [PATCH cgroup/for-next 1/4] cgroup/cpuset: Add an alloc_dl_bw() helper Waiman Long
2026-05-14 17:02 ` Waiman Long [this message]
2026-05-14 17:02 ` [PATCH cgroup/for-next 3/4] cgroup/cpuset: Optimize cpuset_attach_task() Waiman Long
2026-05-14 17:02 ` [PATCH cgroup/for-next 4/4] cgroup/cpuset: Support multiple source/destination cpusets for cpuset_*attach() Waiman Long
2026-05-14 21:46 ` [PATCH cgroup/for-next 0/4] " Tejun Heo