* [PATCH bpf-next v4] bpf: streamline allowed helpers between tracing and base sets
From: Feng Yang @ 2025-04-23 7:31 UTC (permalink / raw)
To: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
rostedt, mhiramat, mathieu.desnoyers
Cc: bpf, linux-kernel, linux-trace-kernel
From: Feng Yang <yangfeng@kylinos.cn>
Many of the cases in the bpf_tracing_func_proto() switch statement are
redundant with bpf_base_func_proto() and can be removed.

Regarding the permission checks in bpf_base_func_proto(): the checks in
bpf_prog_load() (outlined below) ensure that loading a tracing program
requires both CAP_BPF and CAP_PERFMON, so the corresponding prototypes
in bpf_base_func_proto() can be used without adverse effects.

bpf_prog_load()
	......
	bpf_cap = bpf_token_capable(token, CAP_BPF);
	......
	if (type != BPF_PROG_TYPE_SOCKET_FILTER &&
	    type != BPF_PROG_TYPE_CGROUP_SKB &&
	    !bpf_cap)
		goto put_token;
	......
	if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
		goto put_token;
	......
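
For reference, a condensed sketch of the tiered gating inside
bpf_base_func_proto() (paraphrased for illustration only; the exact
helper lists in each tier vary by kernel version):

const struct bpf_func_proto *
bpf_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
	/* Tier 1: helpers available to any program type. */
	switch (func_id) {
	case BPF_FUNC_map_lookup_elem:
		return &bpf_map_lookup_elem_proto;
	case BPF_FUNC_get_prandom_u32:
		return &bpf_get_prandom_u32_proto;
	/* ... */
	default:
		break;
	}

	/* Tier 2: helpers that additionally require CAP_BPF. */
	if (!bpf_token_capable(prog->aux->token, CAP_BPF))
		return NULL;

	switch (func_id) {
	case BPF_FUNC_jiffies64:
		return &bpf_jiffies64_proto;
	/* ... */
	default:
		break;
	}

	/* Tier 3: helpers that additionally require CAP_PERFMON. */
	if (!bpf_token_capable(prog->aux->token, CAP_PERFMON))
		return NULL;

	switch (func_id) {
	case BPF_FUNC_trace_printk:
		return bpf_get_trace_printk_proto();
	case BPF_FUNC_probe_read_kernel:
		return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ?
		       NULL : &bpf_probe_read_kernel_proto;
	/* ... */
	default:
		return NULL;
	}
}

Since is_perfmon_prog_type() covers the tracing program types, a program
that reaches bpf_tracing_func_proto() has already passed both capability
checks above, so the fallback to bpf_base_func_proto() does not grant it
anything it could not use before.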
Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
Acked-by: Song Liu <song@kernel.org>
---
Changes in v4:
- Only modify patch description information.
- At present, bpf_tracing_func_proto() still handles the following IDs:
- BPF_FUNC_get_current_uid_gid
- BPF_FUNC_get_current_comm
- BPF_FUNC_get_smp_processor_id
- BPF_FUNC_perf_event_read
- BPF_FUNC_probe_read
- BPF_FUNC_probe_read_str
- BPF_FUNC_current_task_under_cgroup
- BPF_FUNC_send_signal
- BPF_FUNC_send_signal_thread
- BPF_FUNC_get_task_stack
- BPF_FUNC_copy_from_user
- BPF_FUNC_copy_from_user_task
- BPF_FUNC_task_storage_get
- BPF_FUNC_task_storage_delete
- BPF_FUNC_get_func_ip
- BPF_FUNC_get_branch_snapshot
- BPF_FUNC_find_vma
- BPF_FUNC_probe_write_user
- I'm not sure which of these can be used by all programs, as Zvi Effron noted (https://lore.kernel.org/all/CAC1LvL2SOKojrXPx92J46fFEi3T9TAWb3mC1XKtYzwU=pzTEbQ@mail.gmail.com/)
- get_smp_processor_id is also retained (https://lore.kernel.org/all/CAADnVQ+WYLfoR1W6AsCJF6fNKEUgfxANXP01EQCJh1=99ZpoNw@mail.gmail.com/)
- Link to v3: https://lore.kernel.org/all/20250410070258.276759-1-yangfeng59949@163.com/
Changes in v3:
- Only modify patch description information.
- Link to v2: https://lore.kernel.org/all/20250408071151.229329-1-yangfeng59949@163.com/
Changes in v2:
- Only modify patch description information.
- Link to v1: https://lore.kernel.org/all/20250320032258.116156-1-yangfeng59949@163.com/
---
kernel/trace/bpf_trace.c | 72 ----------------------------------------
1 file changed, 72 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0f5906f43d7c..52c432a44aeb 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1430,56 +1430,14 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	const struct bpf_func_proto *func_proto;
 
 	switch (func_id) {
-	case BPF_FUNC_map_lookup_elem:
-		return &bpf_map_lookup_elem_proto;
-	case BPF_FUNC_map_update_elem:
-		return &bpf_map_update_elem_proto;
-	case BPF_FUNC_map_delete_elem:
-		return &bpf_map_delete_elem_proto;
-	case BPF_FUNC_map_push_elem:
-		return &bpf_map_push_elem_proto;
-	case BPF_FUNC_map_pop_elem:
-		return &bpf_map_pop_elem_proto;
-	case BPF_FUNC_map_peek_elem:
-		return &bpf_map_peek_elem_proto;
-	case BPF_FUNC_map_lookup_percpu_elem:
-		return &bpf_map_lookup_percpu_elem_proto;
-	case BPF_FUNC_ktime_get_ns:
-		return &bpf_ktime_get_ns_proto;
-	case BPF_FUNC_ktime_get_boot_ns:
-		return &bpf_ktime_get_boot_ns_proto;
-	case BPF_FUNC_tail_call:
-		return &bpf_tail_call_proto;
-	case BPF_FUNC_get_current_task:
-		return &bpf_get_current_task_proto;
-	case BPF_FUNC_get_current_task_btf:
-		return &bpf_get_current_task_btf_proto;
-	case BPF_FUNC_task_pt_regs:
-		return &bpf_task_pt_regs_proto;
 	case BPF_FUNC_get_current_uid_gid:
 		return &bpf_get_current_uid_gid_proto;
 	case BPF_FUNC_get_current_comm:
 		return &bpf_get_current_comm_proto;
-	case BPF_FUNC_trace_printk:
-		return bpf_get_trace_printk_proto();
 	case BPF_FUNC_get_smp_processor_id:
 		return &bpf_get_smp_processor_id_proto;
-	case BPF_FUNC_get_numa_node_id:
-		return &bpf_get_numa_node_id_proto;
 	case BPF_FUNC_perf_event_read:
 		return &bpf_perf_event_read_proto;
-	case BPF_FUNC_get_prandom_u32:
-		return &bpf_get_prandom_u32_proto;
-	case BPF_FUNC_probe_read_user:
-		return &bpf_probe_read_user_proto;
-	case BPF_FUNC_probe_read_kernel:
-		return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ?
-		       NULL : &bpf_probe_read_kernel_proto;
-	case BPF_FUNC_probe_read_user_str:
-		return &bpf_probe_read_user_str_proto;
-	case BPF_FUNC_probe_read_kernel_str:
-		return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ?
-		       NULL : &bpf_probe_read_kernel_str_proto;
 #ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	case BPF_FUNC_probe_read:
 		return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ?
@@ -1489,10 +1447,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		       NULL : &bpf_probe_read_compat_str_proto;
 #endif
 #ifdef CONFIG_CGROUPS
-	case BPF_FUNC_cgrp_storage_get:
-		return &bpf_cgrp_storage_get_proto;
-	case BPF_FUNC_cgrp_storage_delete:
-		return &bpf_cgrp_storage_delete_proto;
 	case BPF_FUNC_current_task_under_cgroup:
 		return &bpf_current_task_under_cgroup_proto;
 #endif
@@ -1500,20 +1454,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_send_signal_proto;
 	case BPF_FUNC_send_signal_thread:
 		return &bpf_send_signal_thread_proto;
-	case BPF_FUNC_perf_event_read_value:
-		return &bpf_perf_event_read_value_proto;
-	case BPF_FUNC_ringbuf_output:
-		return &bpf_ringbuf_output_proto;
-	case BPF_FUNC_ringbuf_reserve:
-		return &bpf_ringbuf_reserve_proto;
-	case BPF_FUNC_ringbuf_submit:
-		return &bpf_ringbuf_submit_proto;
-	case BPF_FUNC_ringbuf_discard:
-		return &bpf_ringbuf_discard_proto;
-	case BPF_FUNC_ringbuf_query:
-		return &bpf_ringbuf_query_proto;
-	case BPF_FUNC_jiffies64:
-		return &bpf_jiffies64_proto;
 	case BPF_FUNC_get_task_stack:
 		return prog->sleepable ? &bpf_get_task_stack_sleepable_proto
 					: &bpf_get_task_stack_proto;
@@ -1521,12 +1461,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_copy_from_user_proto;
 	case BPF_FUNC_copy_from_user_task:
 		return &bpf_copy_from_user_task_proto;
-	case BPF_FUNC_snprintf_btf:
-		return &bpf_snprintf_btf_proto;
-	case BPF_FUNC_per_cpu_ptr:
-		return &bpf_per_cpu_ptr_proto;
-	case BPF_FUNC_this_cpu_ptr:
-		return &bpf_this_cpu_ptr_proto;
 	case BPF_FUNC_task_storage_get:
 		if (bpf_prog_check_recur(prog))
 			return &bpf_task_storage_get_recur_proto;
@@ -1535,18 +1469,12 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		if (bpf_prog_check_recur(prog))
 			return &bpf_task_storage_delete_recur_proto;
 		return &bpf_task_storage_delete_proto;
-	case BPF_FUNC_for_each_map_elem:
-		return &bpf_for_each_map_elem_proto;
-	case BPF_FUNC_snprintf:
-		return &bpf_snprintf_proto;
 	case BPF_FUNC_get_func_ip:
 		return &bpf_get_func_ip_proto_tracing;
 	case BPF_FUNC_get_branch_snapshot:
 		return &bpf_get_branch_snapshot_proto;
 	case BPF_FUNC_find_vma:
 		return &bpf_find_vma_proto;
-	case BPF_FUNC_trace_vprintk:
-		return bpf_get_trace_vprintk_proto();
 	default:
 		break;
 	}
--
2.43.0
* Re: [PATCH bpf-next v4] bpf: streamline allowed helpers between tracing and base sets
From: Andrii Nakryiko @ 2025-04-23 18:08 UTC (permalink / raw)
To: Feng Yang
Cc: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
rostedt, mhiramat, mathieu.desnoyers, bpf, linux-kernel,
linux-trace-kernel
On Wed, Apr 23, 2025 at 12:33 AM Feng Yang <yangfeng59949@163.com> wrote:
>
> From: Feng Yang <yangfeng@kylinos.cn>
>
> Many of the cases in the bpf_tracing_func_proto() switch statement are
> redundant with bpf_base_func_proto() and can be removed.
>
> Regarding the permission checks in bpf_base_func_proto(): the checks in
> bpf_prog_load() (outlined below) ensure that loading a tracing program
> requires both CAP_BPF and CAP_PERFMON, so the corresponding prototypes
> in bpf_base_func_proto() can be used without adverse effects.
>
> bpf_prog_load()
> 	......
> 	bpf_cap = bpf_token_capable(token, CAP_BPF);
> 	......
> 	if (type != BPF_PROG_TYPE_SOCKET_FILTER &&
> 	    type != BPF_PROG_TYPE_CGROUP_SKB &&
> 	    !bpf_cap)
> 		goto put_token;
> 	......
> 	if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
> 		goto put_token;
> 	......
>
> Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
> Acked-by: Song Liu <song@kernel.org>
> ---
LGTM, applied to bpf-next, thanks. See comments on remaining helpers below.
> Changes in v4:
> - Only modify patch description information.
> - At present, bpf_tracing_func_proto() still handles the following IDs:
> - BPF_FUNC_get_current_uid_gid
> - BPF_FUNC_get_current_comm
I don't see why these two cannot be used in any program; after all, we
have bpf_get_current_task(), and these are in the same family.
> - BPF_FUNC_get_smp_processor_id
Based on another thread, I think it's some filter programs that have
to use the raw variant of it, right? All others should use the non-raw
implementation. So I think the right next step would be to make sure
that bpf_base_func_proto returns the non-raw implementation, and only
those few program types that are exceptions use the raw one?
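
For reference, a minimal sketch of the raw vs non-raw distinction being
discussed (paraphrased, not the exact upstream code; the non-raw helper
goes through smp_processor_id(), which debug-checks that the caller
cannot migrate, while the raw one skips that check):

/* non-raw variant, bpf_get_smp_processor_id_proto */
BPF_CALL_0(bpf_get_smp_processor_id)
{
	return smp_processor_id();	/* complains if migration is possible */
}

/* raw variant, bpf_get_raw_smp_processor_id_proto */
BPF_CALL_0(bpf_get_raw_cpu_id)
{
	return raw_smp_processor_id();	/* no such check */
}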
> - BPF_FUNC_perf_event_read
should be fine to use anywhere (and can actually be useful for
networking programs to measure their own packet processing overhead or
something like that). Checking the implementation, I don't see any
limitations; it's just a PERF_EVENT_ARRAY map access.
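
For illustration, a minimal BPF-side sketch of such a measurement. This
is a hypothetical example: the map name, attach point and counter setup
are invented, and userspace must first open the perf events and store
their FDs in the map:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 64);
} counters SEC(".maps");

SEC("kprobe/do_sys_openat2")
int measure(struct pt_regs *ctx)
{
	/* Read the counter installed for the current CPU; errors come
	 * back as negative values cast to u64.
	 */
	u64 val = bpf_perf_event_read(&counters, BPF_F_CURRENT_CPU);

	if ((s64)val < 0)
		return 0;
	bpf_printk("counter on this cpu: %llu", val);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";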
> - BPF_FUNC_probe_read
> - BPF_FUNC_probe_read_str
generic tracing helpers, should be OK to use anywhere with the
CAP_PERFMON capability
> - BPF_FUNC_current_task_under_cgroup
same as current_comm above; if there is a CGROUP_ARRAY map, this should
be fine (though there might be cgroup-specific restrictions, not sure)
> - BPF_FUNC_send_signal
> - BPF_FUNC_send_signal_thread
fine to do from NMI, so should be fine to do anywhere (with
CAP_PERFMON, presumably)
> - BPF_FUNC_get_task_stack
seems fine (again, if it works under NMI and doesn't use any
context-dependent things, should be fine for any program type)
> - BPF_FUNC_copy_from_user
> - BPF_FUNC_copy_from_user_task
same as probe_read/probe_read_str (but only for sleepable)
> - BPF_FUNC_task_storage_get
> - BPF_FUNC_task_storage_delete
this is designed to work anywhere, so yeah, why not?
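
For illustration, a minimal sketch of task storage use from a tracing
program. This is a hypothetical example: the map, value layout and
attach point are invented:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct start_info {
	u64 ts;
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct start_info);
} task_start SEC(".maps");

SEC("kprobe/do_sys_openat2")
int record_start(struct pt_regs *ctx)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct start_info *info;

	/* Get-or-create the storage slot bound to the current task. */
	info = bpf_task_storage_get(&task_start, task, NULL,
				    BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (!info)
		return 0;
	info->ts = bpf_ktime_get_ns();
	return 0;
}

char LICENSE[] SEC("license") = "GPL";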
> - BPF_FUNC_get_func_ip
nope, very context dependent, definitely not generic (and just doesn't
make sense for most program types)
> - BPF_FUNC_get_branch_snapshot
NMI-enabled and not context-dependent, good to be used anywhere
> - BPF_FUNC_find_vma
non-sleepable, but other than that doesn't really make any assumptions
about program type, should be fine everywhere (NMI-safe, I believe?)
> - BPF_FUNC_probe_write_user
it's just like probe_read_user, CAP_PERFMON, so we can enable it
anywhere for completeness, but I'm not sure if that is a good idea...
> - I'm not sure which of these can be used by all programs, as Zvi Effron noted (https://lore.kernel.org/all/CAC1LvL2SOKojrXPx92J46fFEi3T9TAWb3mC1XKtYzwU=pzTEbQ@mail.gmail.com/)
> - get_smp_processor_id is also retained (https://lore.kernel.org/all/CAADnVQ+WYLfoR1W6AsCJF6fNKEUgfxANXP01EQCJh1=99ZpoNw@mail.gmail.com/)
yep, I saw the discussion, that's fine
>
> - Link to v3: https://lore.kernel.org/all/20250410070258.276759-1-yangfeng59949@163.com/
>
> Changes in v3:
> - Only modify patch description information.
> - Link to v2: https://lore.kernel.org/all/20250408071151.229329-1-yangfeng59949@163.com/
>
> Changes in v2:
> - Only modify patch description information.
> - Link to v1: https://lore.kernel.org/all/20250320032258.116156-1-yangfeng59949@163.com/
> ---
> kernel/trace/bpf_trace.c | 72 ----------------------------------------
> 1 file changed, 72 deletions(-)
>
[...]
* Re: [PATCH bpf-next v4] bpf: streamline allowed helpers between tracing and base sets
From: patchwork-bot+netdevbpf @ 2025-04-23 18:10 UTC (permalink / raw)
To: Feng Yang
Cc: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
rostedt, mhiramat, mathieu.desnoyers, bpf, linux-kernel,
linux-trace-kernel
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Wed, 23 Apr 2025 15:31:51 +0800 you wrote:
> From: Feng Yang <yangfeng@kylinos.cn>
>
> Many of the cases in the bpf_tracing_func_proto() switch statement are
> redundant with bpf_base_func_proto() and can be removed.
>
> Regarding the permission checks in bpf_base_func_proto(): the checks in
> bpf_prog_load() (outlined below) ensure that loading a tracing program
> requires both CAP_BPF and CAP_PERFMON, so the corresponding prototypes
> in bpf_base_func_proto() can be used without adverse effects.
>
> bpf_prog_load()
> 	......
> 	bpf_cap = bpf_token_capable(token, CAP_BPF);
> 	......
> 	if (type != BPF_PROG_TYPE_SOCKET_FILTER &&
> 	    type != BPF_PROG_TYPE_CGROUP_SKB &&
> 	    !bpf_cap)
> 		goto put_token;
> 	......
> 	if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
> 		goto put_token;
> 	......
>
> [...]
Here is the summary with links:
- [bpf-next,v4] bpf: streamline allowed helpers between tracing and base sets
https://git.kernel.org/bpf/bpf-next/c/6aca583f90b0
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html