From: Waiman Long <longman@redhat.com>
To: "Chen Ridong" <chenridong@huaweicloud.com>,
"Tejun Heo" <tj@kernel.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Ingo Molnar" <mingo@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Juri Lelli" <juri.lelli@redhat.com>,
"Vincent Guittot" <vincent.guittot@linaro.org>,
"Dietmar Eggemann" <dietmar.eggemann@arm.com>,
"Steven Rostedt" <rostedt@goodmis.org>,
"Ben Segall" <bsegall@google.com>, "Mel Gorman" <mgorman@suse.de>,
"Valentin Schneider" <vschneid@redhat.com>,
"K Prateek Nayak" <kprateek.nayak@amd.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
Aaron Tomlin <atomlin@atomlin.com>,
Waiman Long <longman@redhat.com>
Subject: [PATCH cgroup/for-next v2 4/5] cgroup/cpuset: Move mpol_rebind_mm/cpuset_migrate_mm() calls inside cpuset_attach_task()
Date: Sat, 16 May 2026 00:24:47 -0400 [thread overview]
Message-ID: <20260516042448.698216-5-longman@redhat.com> (raw)
In-Reply-To: <20260516042448.698216-1-longman@redhat.com>
cpuset_attach_task() was introduced in commit 42a11bf5c543
("cgroup/cpuset: Make cpuset_fork() handle CLONE_INTO_CGROUP properly")
to enable the CLONE_INTO_CGROUP flag of clone(2) to behave more like
moving a task from one cpuset into another one. That commit didn't
move the mpol_rebind_mm() and cpuset_migrate_mm() calls for the group
leader into cpuset_attach_task().
When the CLONE_INTO_CGROUP flag is used without CLONE_THREAD, the
new task is its own group leader. So it is still not equivalent to
moving a task between cpusets in this case. Make CLONE_INTO_CGROUP
behave more closely to cpuset_attach() by moving the mpol_rebind_mm()
and cpuset_migrate_mm() calls inside cpuset_attach_task().
Besides, the original code used cpuset_attach_nodemask_to both for the
nodemask returned by guarantee_online_mems(), which is used only by
cpuset_change_task_nodemask(), and for cs->effective_mems in all other
cases. Merging the two task iteration loops into one makes such dual
use impractical. So keep cpuset_attach_nodemask_to for the nodemask
returned by guarantee_online_mems() and reference cs->effective_mems
directly in all the other cases.
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 82 ++++++++++++++++++++++--------------------
1 file changed, 43 insertions(+), 39 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index fc632370d07c..ab9fcc001f79 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3130,9 +3130,12 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
*/
static cpumask_var_t cpus_attach;
static nodemask_t cpuset_attach_nodemask_to;
+static bool queue_task_work;
static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
{
+ struct mm_struct *mm;
+
lockdep_assert_cpuset_lock_held();
if (cs != &top_cpuset)
@@ -3146,17 +3149,48 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
*/
WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+ if (cpuset_v2() && !attach_mems_updated)
+ return;
+
cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
cpuset1_update_task_spread_flags(cs, task);
+
+ if (task != task->group_leader)
+ return;
+
+ /*
+ * Change mm for threadgroup leader. This is expensive and may
+ * sleep and should be moved outside migration path proper.
+ */
+ mm = get_task_mm(task);
+ if (mm) {
+ struct cpuset *oldcs = task->attach_old_cs;
+
+ mpol_rebind_mm(mm, &cs->effective_mems);
+
+ /*
+ * old_mems_allowed is the same with mems_allowed
+ * here, except if this task is being moved
+ * automatically due to hotplug. In that case
+ * @mems_allowed has been updated and is empty, so
+ * @old_mems_allowed is the right nodesets that we
+ * migrate mm from.
+ */
+ if (oldcs && is_memory_migrate(cs)) {
+ cpuset_migrate_mm(mm, &oldcs->old_mems_allowed,
+ &cs->effective_mems);
+ queue_task_work = true;
+ } else {
+ mmput(mm);
+ }
+ }
}
static void cpuset_attach(struct cgroup_taskset *tset)
{
struct task_struct *task;
- struct task_struct *leader;
struct cgroup_subsys_state *css;
struct cpuset *cs, *oldcs;
- bool queue_task_work = false;
task = cgroup_taskset_first(tset, &css);
oldcs = task->attach_old_cs;
@@ -3164,6 +3198,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
lockdep_assert_cpus_held(); /* see cgroup_attach_lock() */
mutex_lock(&cpuset_mutex);
+ queue_task_work = false;
/*
* In the default hierarchy, enabling cpuset in the child cgroups
@@ -3171,53 +3206,18 @@ static void cpuset_attach(struct cgroup_taskset *tset)
* in effective cpus and mems. In that case, we can optimize out
* by skipping the task iteration and update.
*/
- if (cpuset_v2() && !attach_cpus_updated && !attach_mems_updated) {
- cpuset_attach_nodemask_to = cs->effective_mems;
+ if (cpuset_v2() && !attach_cpus_updated && !attach_mems_updated)
goto out;
- }
guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
cgroup_taskset_for_each(task, css, tset)
cpuset_attach_task(cs, task);
- /*
- * Change mm for all threadgroup leaders. This is expensive and may
- * sleep and should be moved outside migration path proper. Skip it
- * if there is no change in effective_mems and CS_MEMORY_MIGRATE is
- * not set.
- */
- cpuset_attach_nodemask_to = cs->effective_mems;
- if (!is_memory_migrate(cs) && !attach_mems_updated)
- goto out;
-
- cgroup_taskset_for_each_leader(leader, css, tset) {
- struct mm_struct *mm = get_task_mm(leader);
-
- if (mm) {
- mpol_rebind_mm(mm, &cpuset_attach_nodemask_to);
-
- /*
- * old_mems_allowed is the same with mems_allowed
- * here, except if this task is being moved
- * automatically due to hotplug. In that case
- * @mems_allowed has been updated and is empty, so
- * @old_mems_allowed is the right nodesets that we
- * migrate mm from.
- */
- if (is_memory_migrate(cs)) {
- cpuset_migrate_mm(mm, &oldcs->old_mems_allowed,
- &cpuset_attach_nodemask_to);
- queue_task_work = true;
- } else
- mmput(mm);
- }
- }
-
out:
if (queue_task_work)
schedule_flush_migrate_mm();
- cs->old_mems_allowed = cpuset_attach_nodemask_to;
+ cs->old_mems_allowed = cs->effective_mems;
if (cs->nr_migrate_dl_tasks) {
cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
@@ -3667,7 +3667,11 @@ static void cpuset_fork(struct task_struct *task)
/* CLONE_INTO_CGROUP */
mutex_lock(&cpuset_mutex);
guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+ /* Assume CPUs and memory nodes are updated */
+ attach_cpus_updated = attach_mems_updated = true;
+ task->attach_old_cs = task_cs(current);
cpuset_attach_task(cs, task);
+ attach_cpus_updated = attach_mems_updated = false;
dec_attach_in_progress_locked(cs);
mutex_unlock(&cpuset_mutex);
--
2.54.0
2026-05-16 4:24 [PATCH cgroup/for-next v2 0/5] cgroup/cpuset: Support multiple source/destination cpusets for cpuset_*attach() Waiman Long
2026-05-16 4:24 ` [PATCH cgroup/for-next v2 1/5] cgroup/cpuset: Add a cpuset_reserve_dl_bw() helper Waiman Long
2026-05-16 4:24 ` [PATCH cgroup/for-next v2 2/5] cgroup/cpuset: Expand the scope of cpuset_can_attach_check() Waiman Long
2026-05-16 4:24 ` [PATCH cgroup/for-next v2 3/5] cgroup/cpuset: Replace cpuset_attach_old_cs by a new attach_old_cs field in task_struct Waiman Long
2026-05-16 4:24 ` Waiman Long [this message]
2026-05-16 4:24 ` [PATCH cgroup/for-next v2 5/5] cgroup/cpuset: Support multiple source/destination cpusets for cpuset_*attach() Waiman Long
2026-05-16 4:36 ` [PATCH cgroup/for-next v2 0/5] " Waiman Long