* [PATCH v1] cgroup: drop preemption_disabled checking
@ 2025-11-19 11:14 Jiayuan Chen
2025-11-19 15:59 ` Tejun Heo
0 siblings, 1 reply; 3+ messages in thread
From: Jiayuan Chen @ 2025-11-19 11:14 UTC (permalink / raw)
To: cgroups; +Cc: tj, hannes, mkoutny, linux-kernel, Jiayuan Chen
BPF programs do not disable preemption, they only disable migration.
Therefore, when running the cgroup_hierarchical_stats selftest, a
warning [1] is generated.
The css_rstat_updated() function is lockless and reentrant, so checking
for disabled preemption is unnecessary (please correct me if I'm wrong).
[1]:
~/tools/testing/selftests/bpf$ test_progs -a cgroup_hierarchical_stats
...
------------[ cut here ]------------
WARNING: CPU: 0 PID: 382 at kernel/cgroup/rstat.c:84
Modules linked in:
RIP: 0010:css_rstat_updated+0x9d/0x160
...
PKRU: 55555554
Call Trace:
<TASK>
bpf_prog_16a1c2d081688506_counter+0x143/0x14e
bpf_trampoline_6442524909+0x4b/0xb7
cgroup_attach_task+0x5/0x330
? __cgroup_procs_write+0x1d7/0x2f0
cgroup_procs_write+0x17/0x30
cgroup_file_write+0xa6/0x2d0
kernfs_fop_write_iter+0x188/0x240
vfs_write+0x2da/0x5a0
ksys_write+0x77/0x100
__x64_sys_write+0x19/0x30
x64_sys_call+0x79/0x26a0
do_syscall_64+0x89/0x790
? irqentry_exit+0x77/0xb0
? __this_cpu_preempt_check+0x13/0x20
? lockdep_hardirqs_on+0xce/0x170
? irqentry_exit_to_user_mode+0xf2/0x290
? irqentry_exit+0x77/0xb0
? clear_bhb_loop+0x50/0xa0
? clear_bhb_loop+0x50/0xa0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
---[ end trace 0000000000000000 ]---
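To illustrate the distinction (a hedged sketch, not actual kernel code;
the trampoline bracketing shown here is simplified and the exact call
names are assumptions):

    /* A BPF trampoline runs the program with migration, not
     * preemption, disabled (simplified sketch):
     */
    migrate_disable();        /* task stays pinned to this CPU ...   */
    bpf_prog_run(prog, ctx);  /* ... but it can still be preempted   */
    migrate_enable();

    /* So when the program calls css_rstat_updated(), the check
     *     lockdep_assert_preemption_disabled();
     * fires, because preemptible() is still true.
     */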
Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
---
kernel/cgroup/rstat.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index a198e40c799b..fe0d22280cbd 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -81,8 +81,6 @@ __bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
if (!css_uses_rstat(css))
return;
- lockdep_assert_preemption_disabled();
-
/*
* For archs without nmi safe cmpxchg or percpu ops support, ignore
* the requests from nmi context.
--
2.43.0
* Re: [PATCH v1] cgroup: drop preemption_disabled checking
2025-11-19 11:14 [PATCH v1] cgroup: drop preemption_disabled checking Jiayuan Chen
@ 2025-11-19 15:59 ` Tejun Heo
2025-11-20 1:57 ` Jiayuan Chen
0 siblings, 1 reply; 3+ messages in thread
From: Tejun Heo @ 2025-11-19 15:59 UTC (permalink / raw)
To: Jiayuan Chen; +Cc: cgroups, hannes, mkoutny, linux-kernel
Hello,
On Wed, Nov 19, 2025 at 07:14:01PM +0800, Jiayuan Chen wrote:
> BPF programs do not disable preemption, they only disable migration.
> Therefore, when running the cgroup_hierarchical_stats selftest, a
> warning [1] is generated.
>
> The css_rstat_updated() function is lockless and reentrant, so checking
> for disabled preemption is unnecessary (please correct me if I'm wrong).
While it won't crash the kernel to schedule while running the function,
there are timing considerations here. If the thread which wins the lnode
competition gets scheduled out, there can be significant unexpected delays
for others that lost against it. Maybe just update the caller to disable
preemption?
Thanks.
--
tejun
* Re: [PATCH v1] cgroup: drop preemption_disabled checking
2025-11-19 15:59 ` Tejun Heo
@ 2025-11-20 1:57 ` Jiayuan Chen
0 siblings, 0 replies; 3+ messages in thread
From: Jiayuan Chen @ 2025-11-20 1:57 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, hannes, mkoutny, linux-kernel
November 19, 2025 at 23:59, "Tejun Heo" <tj@kernel.org> wrote:
>
> Hello,
>
> On Wed, Nov 19, 2025 at 07:14:01PM +0800, Jiayuan Chen wrote:
>
> >
> > BPF programs do not disable preemption, they only disable migration.
> > Therefore, when running the cgroup_hierarchical_stats selftest, a
> > warning [1] is generated.
> >
> > The css_rstat_updated() function is lockless and reentrant, so checking
> > for disabled preemption is unnecessary (please correct me if I'm wrong).
> >
> While it won't crash the kernel to schedule while running the function,
> there are timing considerations here. If the thread which wins the lnode
> competition gets scheduled out, there can be significant unexpected delays
> for others that lost against it. Maybe just update the caller to disable
> preemption?
>
> Thanks.
>
> --
> tejun
>
Since css_rstat_updated() can be called from BPF where preemption is not
disabled by its framework, we can simply add preempt_disable()/preempt_enable()
around the call, like this:
__bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
{
	preempt_disable();
	__css_rstat_updated(css, cpu);
	preempt_enable();
}