* [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root
@ 2025-08-15 7:05 Chen Ridong
2025-08-15 7:31 ` Chen Ridong
2025-08-15 17:44 ` Tejun Heo
0 siblings, 2 replies; 5+ messages in thread
From: Chen Ridong @ 2025-08-15 7:05 UTC (permalink / raw)
To: tj, hannes, mkoutny, lizefan
Cc: cgroups, linux-kernel, lujialin4, chenridong, hdanton
From: Chen Ridong <chenridong@huawei.com>
A hung task can occur during LTP cgroup testing [1] when repeatedly
mounting/unmounting perf_event and net_prio controllers with
systemd.unified_cgroup_hierarchy=1. The hang manifests in
cgroup_lock_and_drain_offline() during root destruction.
Related test cases:
cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio
Call Trace:
cgroup_lock_and_drain_offline+0x14c/0x1e8
cgroup_destroy_root+0x3c/0x2c0
css_free_rwork_fn+0x248/0x338
process_one_work+0x16c/0x3b8
worker_thread+0x22c/0x3b0
kthread+0xec/0x100
ret_from_fork+0x10/0x20
Root Cause:
CPU0                                CPU1
mount perf_event                    umount net_prio
cgroup1_get_tree                    cgroup_kill_sb
rebind_subsystems                   // root destruction enqueues
                                    // cgroup_destroy_wq
// kill all perf_event css
// one perf_event css A is dying
// css A offline enqueues cgroup_destroy_wq
// root destruction will be executed first
                                    css_free_rwork_fn
                                    cgroup_destroy_root
                                    cgroup_lock_and_drain_offline
                                    // some perf descendants are dying
                                    // cgroup_destroy_wq max_active = 1
                                    // waiting for css A to die
Problem scenario:
1. CPU0 mounts perf_event (rebind_subsystems)
2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing root destruction work
3. A dying perf_event CSS gets queued for offline after root destruction
4. Root destruction waits for offline completion, but offline work is
blocked behind root destruction in cgroup_destroy_wq (max_active=1)
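
For illustration, the same head-of-line blocking can be reproduced with a
minimal self-contained sketch (a demo module with made-up names, not the
actual cgroup code), queuing two work items on a max_active = 1 workqueue
from the same CPU:

  #include <linux/module.h>
  #include <linux/workqueue.h>
  #include <linux/completion.h>

  static struct workqueue_struct *demo_wq;  /* stands in for cgroup_destroy_wq */
  static DECLARE_COMPLETION(css_offlined);

  static void offline_fn(struct work_struct *w)
  {
  	complete(&css_offlined);              /* "css A" finishes offlining */
  }

  static void destroy_root_fn(struct work_struct *w)
  {
  	/* like cgroup_lock_and_drain_offline() waiting on the dying css */
  	wait_for_completion(&css_offlined);
  }

  static DECLARE_WORK(destroy_root_work, destroy_root_fn);
  static DECLARE_WORK(offline_work, offline_fn);

  static int __init demo_init(void)
  {
  	demo_wq = alloc_workqueue("demo_destroy", 0, 1);  /* max_active = 1 */
  	if (!demo_wq)
  		return -ENOMEM;
  	/* same order as in the scenario above */
  	queue_work(demo_wq, &destroy_root_work);  /* root destruction first */
  	queue_work(demo_wq, &offline_work);       /* css A offline second */
  	return 0;
  }
  module_init(demo_init);

  MODULE_LICENSE("GPL");

destroy_root_fn occupies the queue's single execution slot while waiting on
offline_fn, which is queued behind it and can never start - the same shape
as the hang above.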
Solution:
Introduce ss_mask for cgroup_lock_and_drain_offline() to selectively drain
specific subsystems rather than all subsystems.
There are two primary scenarios requiring offline draining:
1. Root Operations - Draining all subsystems in cgrp_dfl_root when mounting
or destroying a cgroup root
2. Draining a specific cgroup when modifying cgroup.subtree_control or
cgroup.threads
For case 1 (Root Operations), it is only necessary to drain the specific
subsystems being mounted/destroyed, not all subsystems. The rationale for
draining cgrp_dfl_root is explained in [2].

For case 2, it is enough to drain the subsystems enabled in the cgroup.
Since other subsystems cannot have descendant csses in this cgroup, adding
ss_mask should cause no harm.
[1] https://github.com/linux-test-project/ltp/blob/master/runtest/controllers
[2] https://lore.kernel.org/cgroups/39e05402-40c7-4631-a87b-8e3747ceddc6@huaweicloud.com/T/#t
Fixes: 334c3679ec4b ("cgroup: reimplement rebind_subsystems() using cgroup_apply_control() and friends")
Reported-by: Gao Yingjie <gaoyingjie@uniontech.com>
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
v2->v3: add ss_mask for cgroup_lock_and_drain_offline to fix this issue.
v2: https://lore.kernel.org/cgroups/20250815024020.4579-1-hdanton@sina.com/T/#t
 kernel/cgroup/cgroup-internal.h |  2 +-
 kernel/cgroup/cgroup-v1.c       |  4 ++--
 kernel/cgroup/cgroup.c          | 11 ++++++++---
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index b14e61c64a34..3c7443d8b3b7 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -257,7 +257,7 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
 void cgroup_procs_write_finish(struct task_struct *task, bool locked)
 	__releases(&cgroup_threadgroup_rwsem);
 
-void cgroup_lock_and_drain_offline(struct cgroup *cgrp);
+void cgroup_lock_and_drain_offline(struct cgroup *cgrp, u16 ss_mask);
 
 int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name, umode_t mode);
 int cgroup_rmdir(struct kernfs_node *kn);
diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index 2a4a387f867a..1be439c62fbe 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@ -1096,7 +1096,7 @@ int cgroup1_reconfigure(struct fs_context *fc)
 	int ret = 0;
 	u16 added_mask, removed_mask;
 
-	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
+	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp, root->subsys_mask);
 
 	/* See what subsystems are wanted */
 	ret = check_cgroupfs_options(fc);
@@ -1261,7 +1261,7 @@ int cgroup1_get_tree(struct fs_context *fc)
 	if (!ns_capable(ctx->ns->user_ns, CAP_SYS_ADMIN))
 		return -EPERM;
 
-	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
+	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp, ctx->subsys_mask);
 
 	ret = cgroup1_root_to_use(fc);
 	if (!ret && !percpu_ref_tryget_live(&ctx->root->cgrp.self.refcnt))
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index d8b82afed181..c76c23186fd3 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -1346,7 +1346,7 @@ static void cgroup_destroy_root(struct cgroup_root *root)
 
 	trace_cgroup_destroy_root(root);
 
-	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
+	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp, root->subsys_mask);
 
 	BUG_ON(atomic_read(&root->nr_cgrps));
 	BUG_ON(!list_empty(&cgrp->self.children));
@@ -1681,7 +1681,7 @@ struct cgroup *cgroup_kn_lock_live(struct kernfs_node *kn, bool drain_offline)
 	kernfs_break_active_protection(kn);
 
 	if (drain_offline)
-		cgroup_lock_and_drain_offline(cgrp);
+		cgroup_lock_and_drain_offline(cgrp, cgrp->subtree_ss_mask);
 	else
 		cgroup_lock();
 
@@ -3147,12 +3147,13 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
 /**
  * cgroup_lock_and_drain_offline - lock cgroup_mutex and drain offlined csses
  * @cgrp: root of the target subtree
+ * @ss_mask: bitmask of the subsystems to be drained
  *
  * Because css offlining is asynchronous, userland may try to re-enable a
  * controller while the previous css is still around. This function grabs
  * cgroup_mutex and drains the previous css instances of @cgrp's subtree.
  */
-void cgroup_lock_and_drain_offline(struct cgroup *cgrp)
+void cgroup_lock_and_drain_offline(struct cgroup *cgrp, u16 ss_mask)
 	__acquires(&cgroup_mutex)
 {
 	struct cgroup *dsct;
@@ -3168,6 +3169,9 @@ void cgroup_lock_and_drain_offline(struct cgroup *cgrp)
 			struct cgroup_subsys_state *css = cgroup_css(dsct, ss);
 			DEFINE_WAIT(wait);
 
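+			/* only drain the subsystems the caller asked for */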
+			if (!(ss_mask & (1U << ssid)))
+				continue;
+
 			if (!css || !percpu_ref_is_dying(&css->refcnt))
 				continue;
 
--
2.34.1
* Re: [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root
2025-08-15 7:05 [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root Chen Ridong
@ 2025-08-15 7:31 ` Chen Ridong
2025-08-15 17:44 ` Tejun Heo
1 sibling, 0 replies; 5+ messages in thread
From: Chen Ridong @ 2025-08-15 7:31 UTC (permalink / raw)
To: tj, hannes, mkoutny, lizefan
Cc: cgroups, linux-kernel, lujialin4, chenridong, hdanton, gaoyingjie
--cc gaoyingjie
* Re: [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root
2025-08-15 7:05 [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root Chen Ridong
2025-08-15 7:31 ` Chen Ridong
@ 2025-08-15 17:44 ` Tejun Heo
2025-08-16 1:26 ` Chen Ridong
1 sibling, 1 reply; 5+ messages in thread
From: Tejun Heo @ 2025-08-15 17:44 UTC (permalink / raw)
To: Chen Ridong
Cc: hannes, mkoutny, lizefan, cgroups, linux-kernel, lujialin4,
chenridong, hdanton
Hello, Chen.
On Fri, Aug 15, 2025 at 07:05:18AM +0000, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> A hung task can occur during LTP cgroup testing [1] when repeatedly
> mounting/unmounting perf_event and net_prio controllers with
> systemd.unified_cgroup_hierarchy=1. The hang manifests in
> cgroup_lock_and_drain_offline() during root destruction.
>
> Related test cases:
> cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
> cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio
>
> Call Trace:
> cgroup_lock_and_drain_offline+0x14c/0x1e8
> cgroup_destroy_root+0x3c/0x2c0
> css_free_rwork_fn+0x248/0x338
> process_one_work+0x16c/0x3b8
> worker_thread+0x22c/0x3b0
> kthread+0xec/0x100
> ret_from_fork+0x10/0x20
>
> Root Cause:
>
> CPU0                                CPU1
> mount perf_event                    umount net_prio
> cgroup1_get_tree                    cgroup_kill_sb
> rebind_subsystems                   // root destruction enqueues
>                                     // cgroup_destroy_wq
> // kill all perf_event css
> // one perf_event css A is dying
> // css A offline enqueues cgroup_destroy_wq
> // root destruction will be executed first
>                                     css_free_rwork_fn
>                                     cgroup_destroy_root
>                                     cgroup_lock_and_drain_offline
>                                     // some perf descendants are dying
>                                     // cgroup_destroy_wq max_active = 1
>                                     // waiting for css A to die
>
> Problem scenario:
> 1. CPU0 mounts perf_event (rebind_subsystems)
> 2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing root destruction work
> 3. A dying perf_event CSS gets queued for offline after root destruction
> 4. Root destruction waits for offline completion, but offline work is
> blocked behind root destruction in cgroup_destroy_wq (max_active=1)
Thanks for the analysis. So this is caused by the css free path waiting for
css offline.
> Solution:
> Introduce ss_mask for cgroup_lock_and_drain_offline() to selectively drain
> specific subsystems rather than all subsystems.
>
> There are two primary scenarios requiring offline draining:
> 1. Root Operations - Draining all subsystems in cgrp_dfl_root when mounting
> or destroying a cgroup root
> 2. Draining a specific cgroup when modifying cgroup.subtree_control or
> cgroup.threads
>
> For case 1 (Root Operations), it is only necessary to drain the specific
> subsystems being mounted/destroyed, not all subsystems. The rationale for
> draining cgrp_dfl_root is explained in [2].
>
> For case 2, it is enough to drain the subsystems enabled in the cgroup.
> Since other subsystems cannot have descendant csses in this cgroup, adding
> ss_mask should cause no harm.
Hmm... this seems a bit fragile. Would splitting cgroup_destroy_wq into two
separate workqueues - e.g. cgroup_offline_wq and cgroup_free_wq - work?
Thanks.
--
tejun
* Re: [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root
2025-08-15 17:44 ` Tejun Heo
@ 2025-08-16 1:26 ` Chen Ridong
2025-08-16 16:16 ` Tejun Heo
0 siblings, 1 reply; 5+ messages in thread
From: Chen Ridong @ 2025-08-16 1:26 UTC (permalink / raw)
To: Tejun Heo
Cc: hannes, mkoutny, lizefan, cgroups, linux-kernel, lujialin4,
chenridong, hdanton
On 2025/8/16 1:44, Tejun Heo wrote:
> Hello, Chen.
>
> On Fri, Aug 15, 2025 at 07:05:18AM +0000, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> A hung task can occur during LTP cgroup testing [1] when repeatedly
>> mounting/unmounting perf_event and net_prio controllers with
>> systemd.unified_cgroup_hierarchy=1. The hang manifests in
>> cgroup_lock_and_drain_offline() during root destruction.
>>
>> Related test cases:
>> cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
>> cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio
>>
>> Call Trace:
>> cgroup_lock_and_drain_offline+0x14c/0x1e8
>> cgroup_destroy_root+0x3c/0x2c0
>> css_free_rwork_fn+0x248/0x338
>> process_one_work+0x16c/0x3b8
>> worker_thread+0x22c/0x3b0
>> kthread+0xec/0x100
>> ret_from_fork+0x10/0x20
>>
>> Root Cause:
>>
>> CPU0                                CPU1
>> mount perf_event                    umount net_prio
>> cgroup1_get_tree                    cgroup_kill_sb
>> rebind_subsystems                   // root destruction enqueues
>>                                     // cgroup_destroy_wq
>> // kill all perf_event css
>> // one perf_event css A is dying
>> // css A offline enqueues cgroup_destroy_wq
>> // root destruction will be executed first
>>                                     css_free_rwork_fn
>>                                     cgroup_destroy_root
>>                                     cgroup_lock_and_drain_offline
>>                                     // some perf descendants are dying
>>                                     // cgroup_destroy_wq max_active = 1
>>                                     // waiting for css A to die
>>
>> Problem scenario:
>> 1. CPU0 mounts perf_event (rebind_subsystems)
>> 2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing root destruction work
>> 3. A dying perf_event CSS gets queued for offline after root destruction
>> 4. Root destruction waits for offline completion, but offline work is
>> blocked behind root destruction in cgroup_destroy_wq (max_active=1)
>
> Thanks for the analysis. So this is caused by the css free path waiting for
> css offline.
>
>> Solution:
>> Introduce ss_mask for cgroup_lock_and_drain_offline() to selectively drain
>> specific subsystems rather than all subsystems.
>>
>> There are two primary scenarios requiring offline draining:
>> 1. Root Operations - Draining all subsystems in cgrp_dfl_root when mounting
>> or destroying a cgroup root
>> 2. Draining a specific cgroup when modifying cgroup.subtree_control or
>> cgroup.threads
>>
>> For case 1 (Root Operations), it is only necessary to drain the specific
>> subsystems being mounted/destroyed, not all subsystems. The rationale for
>> draining cgrp_dfl_root is explained in [2].
>>
>> For case 2, it is enough to drain the subsystems enabled in the cgroup.
>> Since other subsystems cannot have descendant csses in this cgroup, adding
>> ss_mask should cause no harm.
>
> Hmm... this seems a bit fragile. Would splitting cgroup_destroy_wq into two
> separate workqueues - e.g. cgroup_offline_wq and cgroup_free_wq - work?
>
> Thanks.
>
Hi TJ,

I've tested adding a dedicated cgroup_offline_wq workqueue for CSS offline
operations, and it resolves the current issue.
Going further, I propose we split cgroup_destroy_wq into three specialized workqueues to better
match the destruction lifecycle:
  cgroup_offline_wq - Handles offline operations
  cgroup_release_wq - Manages resource release
  cgroup_free_wq    - Performs final memory freeing
This explicit separation would clearly delineate responsibilities for each workqueue.
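
Roughly, the split I'm testing looks like the sketch below (illustrative
only, not the final patch; each queue keeps max_active = 1 so the existing
ordering within each stage is preserved):

  static struct workqueue_struct *cgroup_offline_wq;  /* css offline work */
  static struct workqueue_struct *cgroup_release_wq;  /* resource release work */
  static struct workqueue_struct *cgroup_free_wq;     /* final freeing work */

  static int __init cgroup_wq_init(void)
  {
  	/*
  	 * Separate workqueues so that a waiter in one stage (e.g. root
  	 * destruction on the free path) can never be blocked behind a
  	 * later-queued item from another stage (e.g. css offline).
  	 */
  	cgroup_offline_wq = alloc_workqueue("cgroup_offline", 0, 1);
  	cgroup_release_wq = alloc_workqueue("cgroup_release", 0, 1);
  	cgroup_free_wq = alloc_workqueue("cgroup_free", 0, 1);
  	BUG_ON(!cgroup_offline_wq || !cgroup_release_wq || !cgroup_free_wq);
  	return 0;
  }
  core_initcall(cgroup_wq_init);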
What are your thoughts on this approach, Tejun?
--
Best regards,
Ridong
* Re: [PATCH v3] cgroup: drain specific subsystems when mounting/destroying a root
2025-08-16 1:26 ` Chen Ridong
@ 2025-08-16 16:16 ` Tejun Heo
0 siblings, 0 replies; 5+ messages in thread
From: Tejun Heo @ 2025-08-16 16:16 UTC (permalink / raw)
To: Chen Ridong
Cc: hannes, mkoutny, lizefan, cgroups, linux-kernel, lujialin4,
chenridong, hdanton
Hello,
On Sat, Aug 16, 2025 at 09:26:59AM +0800, Chen Ridong wrote:
...
> I've tested adding a dedicated cgroup_offline_wq workqueue for CSS offline
> operations, and it resolves the current issue.
>
> Going further, I propose we split cgroup_destroy_wq into three specialized workqueues to better
> match the destruction lifecycle:
>   cgroup_offline_wq - Handles offline operations
>   cgroup_release_wq - Manages resource release
>   cgroup_free_wq    - Performs final memory freeing
>
> This explicit separation would clearly delineate responsibilities for each workqueue.
>
> What are your thoughts on this approach, Tejun?
Yeah, sure. It'd also be helpful to note in a comment why separate
workqueues are used.
Thanks.
--
tejun