* [PATCH 6.19 033/311] cgroup: Wait for dying tasks to leave on rmdir
From: Greg Kroah-Hartman @ 2026-04-08 18:00 UTC (permalink / raw)
To: stable
Cc: Greg Kroah-Hartman, patches, kernel test robot,
Sebastian Andrzej Siewior, Tejun Heo, Bert Karwatzki,
Michal Koutny, cgroups, Sasha Levin
6.19-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tejun Heo <tj@kernel.org>

[ Upstream commit 1b164b876c36c3eb5561dd9b37702b04401b0166 ]

a72f73c4dd9b ("cgroup: Don't expose dead tasks in cgroup") hid PF_EXITING
tasks from cgroup.procs so that systemd doesn't see tasks that have already
been reaped via waitpid(). However, the populated counter (nr_populated_csets)
is only decremented when the task later passes through cgroup_task_dead() in
finish_task_switch(). This means cgroup.procs can appear empty while the
cgroup is still populated, causing rmdir to fail with -EBUSY.

Fix this by making cgroup_rmdir() wait for dying tasks to fully leave. If the
cgroup is populated but all remaining tasks have PF_EXITING set (the task
iterator returns none due to the existing filter), wait for a kick from
cgroup_task_dead() and retry. The wait is brief as tasks are removed from the
cgroup's css_set between PF_EXITING assertion in do_exit() and
cgroup_task_dead() in finish_task_switch().

v2: cgroup_is_populated() true to false transition happens under css_set_lock
not cgroup_mutex, so retest under css_set_lock before sleeping to avoid
missed wakeups (Sebastian).
Fixes: a72f73c4dd9b ("cgroup: Don't expose dead tasks in cgroup")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202603222104.2c81684e-lkp@intel.com
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Bert Karwatzki <spasswolf@web.de>
Cc: Michal Koutny <mkoutny@suse.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
include/linux/cgroup-defs.h | 3 ++
kernel/cgroup/cgroup.c | 86 +++++++++++++++++++++++++++++++++++--
2 files changed, 86 insertions(+), 3 deletions(-)
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index f7cc60de00583..2bff3e2be0d3b 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -609,6 +609,9 @@ struct cgroup {
/* used to wait for offlining of csses */
wait_queue_head_t offline_waitq;
+ /* used by cgroup_rmdir() to wait for dying tasks to leave */
+ wait_queue_head_t dying_populated_waitq;
+
/* used to schedule release agent */
struct work_struct release_agent_work;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 3f9e4bcd71988..257d1ddea1ada 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2126,6 +2126,7 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
#endif
init_waitqueue_head(&cgrp->offline_waitq);
+ init_waitqueue_head(&cgrp->dying_populated_waitq);
INIT_WORK(&cgrp->release_agent_work, cgroup1_release_agent);
}
@@ -6224,6 +6225,76 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
return 0;
};
+/**
+ * cgroup_drain_dying - wait for dying tasks to leave before rmdir
+ * @cgrp: the cgroup being removed
+ *
+ * The PF_EXITING filter in css_task_iter_advance() hides exiting tasks from
+ * cgroup.procs so that userspace (e.g. systemd) doesn't see tasks that have
+ * already been reaped via waitpid(). However, the populated counter
+ * (nr_populated_csets) is only decremented when the task later passes through
+ * cgroup_task_dead() in finish_task_switch(). This creates a window where
+ * cgroup.procs appears empty but cgroup_is_populated() is still true, causing
+ * rmdir to fail with -EBUSY.
+ *
+ * This function bridges that gap. If the cgroup is populated but all remaining
+ * tasks have PF_EXITING set, we wait for cgroup_task_dead() to process them.
+ * Tasks are removed from the cgroup's css_set in cgroup_task_dead() called from
+ * finish_task_switch(). As the window between PF_EXITING and cgroup_task_dead()
+ * is short, the number of PF_EXITING tasks on the list is small and the wait
+ * is brief.
+ *
+ * Each cgroup_task_dead() kicks the waitqueue via cset->cgrp_links, and we
+ * retry the full check from scratch.
+ *
+ * Must be called with cgroup_mutex held.
+ */
+static int cgroup_drain_dying(struct cgroup *cgrp)
+ __releases(&cgroup_mutex) __acquires(&cgroup_mutex)
+{
+ struct css_task_iter it;
+ struct task_struct *task;
+ DEFINE_WAIT(wait);
+
+ lockdep_assert_held(&cgroup_mutex);
+retry:
+ if (!cgroup_is_populated(cgrp))
+ return 0;
+
+ /* Same iterator as cgroup.threads - if any task is visible, it's busy */
+ css_task_iter_start(&cgrp->self, 0, &it);
+ task = css_task_iter_next(&it);
+ css_task_iter_end(&it);
+
+ if (task)
+ return -EBUSY;
+
+ /*
+ * All remaining tasks are PF_EXITING and will pass through
+ * cgroup_task_dead() shortly. Wait for a kick and retry.
+ *
+ * cgroup_is_populated() can't transition from false to true while
+ * we're holding cgroup_mutex, but the true to false transition
+ * happens under css_set_lock (via cgroup_task_dead()). We must
+ * retest and prepare_to_wait() under css_set_lock. Otherwise, the
+ * transition can happen between our first test and
+ * prepare_to_wait(), and we sleep with no one to wake us.
+ */
+ spin_lock_irq(&css_set_lock);
+ if (!cgroup_is_populated(cgrp)) {
+ spin_unlock_irq(&css_set_lock);
+ return 0;
+ }
+ prepare_to_wait(&cgrp->dying_populated_waitq, &wait,
+ TASK_UNINTERRUPTIBLE);
+ spin_unlock_irq(&css_set_lock);
+ mutex_unlock(&cgroup_mutex);
+ schedule();
+ finish_wait(&cgrp->dying_populated_waitq, &wait);
+ mutex_lock(&cgroup_mutex);
+ goto retry;
+}
+
int cgroup_rmdir(struct kernfs_node *kn)
{
struct cgroup *cgrp;
@@ -6233,9 +6304,12 @@ int cgroup_rmdir(struct kernfs_node *kn)
if (!cgrp)
return 0;
- ret = cgroup_destroy_locked(cgrp);
- if (!ret)
- TRACE_CGROUP_PATH(rmdir, cgrp);
+ ret = cgroup_drain_dying(cgrp);
+ if (!ret) {
+ ret = cgroup_destroy_locked(cgrp);
+ if (!ret)
+ TRACE_CGROUP_PATH(rmdir, cgrp);
+ }
cgroup_kn_unlock(kn);
return ret;
@@ -6995,6 +7069,7 @@ void cgroup_task_exit(struct task_struct *tsk)
static void do_cgroup_task_dead(struct task_struct *tsk)
{
+ struct cgrp_cset_link *link;
struct css_set *cset;
unsigned long flags;
@@ -7008,6 +7083,11 @@ static void do_cgroup_task_dead(struct task_struct *tsk)
if (thread_group_leader(tsk) && atomic_read(&tsk->signal->live))
list_add_tail(&tsk->cg_list, &cset->dying_tasks);
+ /* kick cgroup_drain_dying() waiters, see cgroup_rmdir() */
+ list_for_each_entry(link, &cset->cgrp_links, cgrp_link)
+ if (waitqueue_active(&link->cgrp->dying_populated_waitq))
+ wake_up(&link->cgrp->dying_populated_waitq);
+
if (dl_task(tsk))
dec_dl_tasks_cs(tsk);
--
2.53.0
* [PATCH 6.19 034/311] selftests/cgroup: Don't require synchronous populated update on task exit
From: Greg Kroah-Hartman @ 2026-04-08 18:00 UTC (permalink / raw)
To: stable
Cc: Greg Kroah-Hartman, patches, Sebastian Andrzej Siewior, Tejun Heo,
Christian Brauner, cgroups, Sasha Levin
6.19-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tejun Heo <tj@kernel.org>

[ Upstream commit 6680c162b4850976ee52b57372eddc4450c1d074 ]

test_cgcore_populated (test_core) and test_cgkill_{simple,tree,forkbomb}
(test_kill) check cgroup.events "populated 0" immediately after reaping
child tasks with waitpid(). This used to work because cgroup_task_exit() in
do_exit() unlinked tasks from css_sets before exit_notify() woke up
waitpid().

d245698d727a ("cgroup: Defer task cgroup unlink until after the task is done
switching out") moved the unlink to cgroup_task_dead() in
finish_task_switch(), which runs after exit_notify(). The populated counter
is now decremented after the parent's waitpid() can return, so there is no
longer a synchronous ordering guarantee. On PREEMPT_RT, where
cgroup_task_dead() is further deferred through lazy irq_work, the race
window is even larger.

The synchronous populated transition was never part of the cgroup interface
contract - it was an implementation artifact. Use cg_read_strcmp_wait() which
retries for up to 1 second, matching what these tests actually need to
verify: that the cgroup eventually becomes unpopulated after all tasks exit.
Fixes: d245698d727a ("cgroup: Defer task cgroup unlink until after the task is done switching out")
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Christian Brauner <brauner@kernel.org>
Cc: cgroups@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
tools/testing/selftests/cgroup/lib/cgroup_util.c | 15 +++++++++++++++
.../selftests/cgroup/lib/include/cgroup_util.h | 2 ++
tools/testing/selftests/cgroup/test_core.c | 3 ++-
tools/testing/selftests/cgroup/test_kill.c | 7 ++++---
4 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/cgroup/lib/cgroup_util.c b/tools/testing/selftests/cgroup/lib/cgroup_util.c
index 44c52f620fda1..4b0f2c46d4322 100644
--- a/tools/testing/selftests/cgroup/lib/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
@@ -123,6 +123,21 @@ int cg_read_strcmp(const char *cgroup, const char *control,
return ret;
}
+int cg_read_strcmp_wait(const char *cgroup, const char *control,
+ const char *expected)
+{
+ int i, ret;
+
+ for (i = 0; i < 100; i++) {
+ ret = cg_read_strcmp(cgroup, control, expected);
+ if (!ret)
+ return ret;
+ usleep(10000);
+ }
+
+ return ret;
+}
+
int cg_read_strstr(const char *cgroup, const char *control, const char *needle)
{
char buf[PAGE_SIZE];
diff --git a/tools/testing/selftests/cgroup/lib/include/cgroup_util.h b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
index 7ab2824ed7b54..1cbe3b0ac6f73 100644
--- a/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
@@ -59,6 +59,8 @@ extern int cg_read(const char *cgroup, const char *control,
char *buf, size_t len);
extern int cg_read_strcmp(const char *cgroup, const char *control,
const char *expected);
+extern int cg_read_strcmp_wait(const char *cgroup, const char *control,
+ const char *expected);
extern int cg_read_strstr(const char *cgroup, const char *control,
const char *needle);
extern long cg_read_long(const char *cgroup, const char *control);
diff --git a/tools/testing/selftests/cgroup/test_core.c b/tools/testing/selftests/cgroup/test_core.c
index 102262555a599..7b83c7e7c9d4f 100644
--- a/tools/testing/selftests/cgroup/test_core.c
+++ b/tools/testing/selftests/cgroup/test_core.c
@@ -233,7 +233,8 @@ static int test_cgcore_populated(const char *root)
if (err)
goto cleanup;
- if (cg_read_strcmp(cg_test_d, "cgroup.events", "populated 0\n"))
+ if (cg_read_strcmp_wait(cg_test_d, "cgroup.events",
+ "populated 0\n"))
goto cleanup;
/* Remove cgroup. */
diff --git a/tools/testing/selftests/cgroup/test_kill.c b/tools/testing/selftests/cgroup/test_kill.c
index c8c9d306925b6..f6cd23a8ecc71 100644
--- a/tools/testing/selftests/cgroup/test_kill.c
+++ b/tools/testing/selftests/cgroup/test_kill.c
@@ -86,7 +86,7 @@ static int test_cgkill_simple(const char *root)
wait_for_pid(pids[i]);
if (ret == KSFT_PASS &&
- cg_read_strcmp(cgroup, "cgroup.events", "populated 0\n"))
+ cg_read_strcmp_wait(cgroup, "cgroup.events", "populated 0\n"))
ret = KSFT_FAIL;
if (cgroup)
@@ -190,7 +190,8 @@ static int test_cgkill_tree(const char *root)
wait_for_pid(pids[i]);
if (ret == KSFT_PASS &&
- cg_read_strcmp(cgroup[0], "cgroup.events", "populated 0\n"))
+ cg_read_strcmp_wait(cgroup[0], "cgroup.events",
+ "populated 0\n"))
ret = KSFT_FAIL;
for (i = 9; i >= 0 && cgroup[i]; i--) {
@@ -251,7 +252,7 @@ static int test_cgkill_forkbomb(const char *root)
wait_for_pid(pid);
if (ret == KSFT_PASS &&
- cg_read_strcmp(cgroup, "cgroup.events", "populated 0\n"))
+ cg_read_strcmp_wait(cgroup, "cgroup.events", "populated 0\n"))
ret = KSFT_FAIL;
if (cgroup)
--
2.53.0