public inbox for linux-kernel@vger.kernel.org
From: Waiman Long <longman@redhat.com>
To: "haifeng.xu" <haifeng.xu@shopee.com>
Cc: lizefan.x@bytedance.com, tj@kernel.org, hannes@cmpxchg.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] cgroup/cpuset: Optimize update_tasks_nodemask()
Date: Wed, 23 Nov 2022 15:23:55 -0500
Message-ID: <2ac6f207-e08a-2a7f-01ae-dfaf15eefaf6@redhat.com>
In-Reply-To: <20221123082157.71326-1-haifeng.xu@shopee.com>

[-- Attachment #1: Type: text/plain, Size: 2016 bytes --]

On 11/23/22 03:21, haifeng.xu wrote:
> When changing 'cpuset.mems' under some cgroup, the system can hang
> for a long time. From the dmesg, many processes or threads are
> stuck in fork/exit. The reason is shown as follows.
>
> thread A:
> cpuset_write_resmask /* takes cpuset_rwsem */
>    ...
>      update_tasks_nodemask
>        mpol_rebind_mm /* waits mmap_lock */
>
> thread B:
> worker_thread
>    ...
>      cpuset_migrate_mm_workfn
>        do_migrate_pages /* takes mmap_lock */
>
> thread C:
> cgroup_procs_write /* takes cgroup_mutex and cgroup_threadgroup_rwsem */
>    ...
>      cpuset_can_attach
>        percpu_down_write /* waits cpuset_rwsem */
>
> Once the nodemasks of the cpuset are updated, thread A wakes up
> thread B to migrate the mm. But while thread A iterates through
> all tasks, including child threads and the group leader, it has
> to wait for the mmap_lock, which has already been taken by
> thread B. Unfortunately, if thread C wants to migrate tasks into
> the cgroup at this moment, it must wait for thread A to release
> cpuset_rwsem. So if thread B spends a long time migrating the mm,
> the fork/exit paths, which acquire cgroup_threadgroup_rwsem, also
> have to wait for a long time.
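
As a purely illustrative userspace analogue of this blocking chain (the
lock names below are stand-ins, not the real kernel primitives), the
following sketch shows how B's long hold of "mmap_lock" stretches C's
wait for "cpuset_rwsem", even though C never touches B's lock:

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;	/* stand-in for mmap_lock */
static pthread_rwlock_t R = PTHREAD_RWLOCK_INITIALIZER;	/* stand-in for cpuset_rwsem */

/* "thread B": migration worker holding M for a long time */
static void *thread_b(void *arg)
{
	pthread_mutex_lock(&M);
	sleep(5);			/* long do_migrate_pages() */
	pthread_mutex_unlock(&M);
	return NULL;
}

/* "thread A": holds R while waiting for M */
static void *thread_a(void *arg)
{
	pthread_rwlock_wrlock(&R);
	pthread_mutex_lock(&M);		/* blocks behind thread B */
	pthread_mutex_unlock(&M);
	pthread_rwlock_unlock(&R);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	time_t start;

	pthread_create(&b, NULL, thread_b, NULL);
	sleep(1);			/* let B take M first */
	pthread_create(&a, NULL, thread_a, NULL);
	sleep(1);			/* let A take R, then block on M */

	/* "thread C": needs R, so it transitively waits for B's work */
	start = time(NULL);
	pthread_rwlock_wrlock(&R);
	printf("C waited %ld seconds for R\n", (long)(time(NULL) - start));
	pthread_rwlock_unlock(&R);

	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}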
>
> There is no need to migrate the mm of child threads, since it is
> shared with the group leader. Just iterate through the group
> leader only.
>
> Signed-off-by: haifeng.xu <haifeng.xu@shopee.com>
> ---
>   kernel/cgroup/cpuset.c | 3 +++
>   1 file changed, 3 insertions(+)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 589827ccda8b..43cbd09546d0 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1968,6 +1968,9 @@ static void update_tasks_nodemask(struct cpuset *cs)
>   
>   		cpuset_change_task_nodemask(task, &newmems);
>   
> +		if (!thread_group_leader(task))
> +			continue;
> +
>   		mm = get_task_mm(task);
>   		if (!mm)
>   			continue;

Could you try the attached test patch to see if it fixes your problem? 
Something along the lines of this patch will be more acceptable: it skips 
the mm update only when a non-leader task's group leader is in the same 
cpuset, since a child thread can be attached to a different cpuset from 
its leader, in which case its mm still has to be rebound.

Thanks,
Longman


[-- Attachment #2: test.patch --]
[-- Type: text/x-patch, Size: 1275 bytes --]

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index b474289c15b8..9c17b6d4877c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1942,6 +1942,7 @@ static void update_tasks_nodemask(struct cpuset *cs)
 	static nodemask_t newmems;	/* protected by cpuset_rwsem */
 	struct css_task_iter it;
 	struct task_struct *task;
+	bool migrate;
 
 	cpuset_being_rebound = cs;		/* causes mpol_dup() rebind */
 
@@ -1957,19 +1958,25 @@ static void update_tasks_nodemask(struct cpuset *cs)
 	 * It's ok if we rebind the same mm twice; mpol_rebind_mm()
 	 * is idempotent.  Also migrate pages in each mm to new nodes.
 	 */
+	migrate = is_memory_migrate(cs);
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it))) {
 		struct mm_struct *mm;
-		bool migrate;
 
 		cpuset_change_task_nodemask(task, &newmems);
 
+		/*
+		 * Skip mm update if a non group leader task and its group
+		 * leader are in the same cpuset.
+		 */
+		if (!thread_group_leader(task) &&
+		   (task_cs(task->group_leader) == cs))
+			continue;
+
 		mm = get_task_mm(task);
 		if (!mm)
 			continue;
 
-		migrate = is_memory_migrate(cs);
-
 		mpol_rebind_mm(mm, &cs->mems_allowed);
 		if (migrate)
 			cpuset_migrate_mm(mm, &cs->old_mems_allowed, &newmems);
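
For readability, here is roughly what the resulting iteration loop looks
like with the test patch applied (reconstructed from the hunks above;
code past the last context line is not shown in the diff and is left out):

	migrate = is_memory_migrate(cs);	/* hoisted: same for every task */
	css_task_iter_start(&cs->css, 0, &it);
	while ((task = css_task_iter_next(&it))) {
		struct mm_struct *mm;

		cpuset_change_task_nodemask(task, &newmems);

		/*
		 * Skip mm update if a non group leader task and its group
		 * leader are in the same cpuset.
		 */
		if (!thread_group_leader(task) &&
		   (task_cs(task->group_leader) == cs))
			continue;

		mm = get_task_mm(task);
		if (!mm)
			continue;

		mpol_rebind_mm(mm, &cs->mems_allowed);
		if (migrate)
			cpuset_migrate_mm(mm, &cs->old_mems_allowed, &newmems);
		/* ... rest of the loop body lies outside the shown hunks ... */
	}

Hoisting is_memory_migrate(cs) out of the loop looks safe here,
presumably because the memory_migrate flag cannot change while
cpuset_rwsem is held, just as the static newmems is protected by it.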

Thread overview: 13+ messages
2022-11-23  8:21 [PATCH] cgroup/cpuset: Optimize update_tasks_nodemask() haifeng.xu
2022-11-23 17:05 ` Tejun Heo
2022-11-23 18:48   ` Waiman Long
2022-11-23 18:54     ` Tejun Heo
2022-11-23 19:05       ` Waiman Long
2022-11-23 19:07         ` Tejun Heo
2022-11-23 20:23 ` Waiman Long [this message]
2022-11-24  3:33   ` Haifeng Xu
2022-11-24  4:24     ` Waiman Long
2022-11-24  7:49       ` Haifeng Xu
2022-11-24 23:00         ` Waiman Long
2022-11-25  2:14           ` Haifeng Xu
2022-11-28  7:34           ` Haifeng Xu
