* [PATCH bpf-next] bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem()
@ 2025-05-26 6:25 Hou Tao
From: Hou Tao @ 2025-05-26 6:25 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, houtao1
From: Hou Tao <houtao1@huawei.com>
The bpf_map_lookup_percpu_elem() helper is also available to sleepable bpf
programs. When the BPF JIT is disabled, or on a 32-bit host,
bpf_map_lookup_percpu_elem() will not be inlined. Using it in a
sleepable bpf program then triggers the warning in
bpf_map_lookup_percpu_elem(), because the bpf program only holds the
rcu_read_lock_trace lock. Therefore, add the missing check.
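For illustration, a minimal sleepable program sketch that can reach this path when the helper call is not inlined. The map name, attach point, and program name below are hypothetical, not taken from the report:

```c
// Sketch only: a sleepable fentry program that calls
// bpf_map_lookup_percpu_elem() while holding only rcu_read_lock_trace.
// Map name and attach point are hypothetical.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} pcpu_map SEC(".maps");

SEC("fentry.s/security_file_open")	/* ".s" suffix marks the program sleepable */
int probe(void *ctx)
{
	__u32 key = 0;
	__u64 *val;

	/* Without the rcu_read_lock_trace_held() check, this call would
	 * hit the WARN_ON_ONCE() whenever the helper is not inlined,
	 * since only rcu_read_lock_trace is held here. */
	val = bpf_map_lookup_percpu_elem(&pcpu_map, &key, 0);
	if (val)
		(void)*val;
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```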
Reported-by: syzbot+dce5aae19ae4d6399986@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/000000000000176a130617420310@google.com/
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/helpers.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index c1113b74e1e2..601500786ab4 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -130,7 +130,8 @@ const struct bpf_func_proto bpf_map_peek_elem_proto = {
BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map, void *, key, u32, cpu)
{
- WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
+ WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
+ !rcu_read_lock_bh_held());
return (unsigned long) map->ops->map_lookup_percpu_elem(map, key, cpu);
}
--
2.29.2
* Re: [PATCH bpf-next] bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem()
From: patchwork-bot+netdevbpf @ 2025-05-27 17:50 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, martin.lau, alexei.starovoitov, andrii, eddyz87, song,
haoluo, yonghong.song, daniel, kpsingh, sdf, jolsa,
john.fastabend, houtao1
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Mon, 26 May 2025 14:25:34 +0800 you wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> bpf_map_lookup_percpu_elem() helper is also available for sleepable bpf
> program. When BPF JIT is disabled or under 32-bit host,
> bpf_map_lookup_percpu_elem() will not be inlined. Using it in a
> sleepable bpf program will trigger the warning in
> bpf_map_lookup_percpu_elem(), because the bpf program only holds
> rcu_read_lock_trace lock. Therefore, add the missed check.
>
> [...]
Here is the summary with links:
- [bpf-next] bpf: Check rcu_read_lock_trace_held() in bpf_map_lookup_percpu_elem()
https://git.kernel.org/bpf/bpf-next/c/d4965578267e
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html