* [PATCH bpf-next v2 0/2] bpf: enable some functions in cgroup programs
From: technoboy85 @ 2024-07-23 1:28 UTC
To: Alexei Starovoitov, Daniel Borkmann, Steven Rostedt,
Masami Hiramatsu, bpf, linux-trace-kernel
Cc: linux-kernel, Matteo Croce
From: Matteo Croce <teknoraver@meta.com>
Enable some BPF kfuncs and the helper bpf_current_task_under_cgroup()
for the BPF_CGROUP_* program types.
These will be used by systemd-networkd:
https://github.com/systemd/systemd/pull/32212
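
As a purely illustrative sketch (not taken from the systemd PR; the map and
program names below are made up), a cgroup/sysctl program built on top of
this series could let only the service's own cgroup write sysctls, assuming
userspace stores the service's cgroup fd in slot 0 of the cgroup array:

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical example, not from the PR above. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CGROUP_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} own_cgroup SEC(".maps");

SEC("cgroup/sysctl")
int filter_sysctl(struct bpf_sysctl *ctx)
{
	if (!ctx->write)
		return 1;	/* reads are always allowed */

	/* returns 1 when current runs in (or below) the cgroup in slot 0 */
	if (bpf_current_task_under_cgroup(&own_cgroup, 0) == 1)
		return 1;	/* the service's own writes pass */

	return 0;	/* reject sysctl writes from anything else */
}

char _license[] SEC("license") = "GPL";

For cgroup/sysctl programs, returning 1 lets the access proceed and
returning 0 rejects it with EPERM.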
Matteo Croce (2):
bpf: enable generic kfuncs for BPF_CGROUP_* programs
bpf: allow bpf_current_task_under_cgroup() with BPF_CGROUP_*
include/linux/bpf.h | 1 +
kernel/bpf/cgroup.c | 2 ++
kernel/bpf/helpers.c | 29 +++++++++++++++++++++++++++++
kernel/trace/bpf_trace.c | 27 ++-------------------------
4 files changed, 34 insertions(+), 25 deletions(-)
--
2.45.2
* [PATCH bpf-next v2 1/2] bpf: enable generic kfuncs for BPF_CGROUP_* programs
From: technoboy85 @ 2024-07-23 1:28 UTC
To: Alexei Starovoitov, Daniel Borkmann, Steven Rostedt,
Masami Hiramatsu, bpf, linux-trace-kernel
Cc: linux-kernel, Matteo Croce
From: Matteo Croce <teknoraver@meta.com>
These kfuncs are enabled even in BPF_PROG_TYPE_TRACING, so they
should also be safe in BPF_CGROUP_* programs.
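
As a rough illustration of what this enables (not part of the patch; the
policy and names below are invented), a cgroup/sysctl program can now call
generic kfuncs such as bpf_cgroup_from_id()/bpf_cgroup_release(), assuming
bpf_get_current_cgroup_id() is available to the program type:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* kfuncs from the generic set, resolved through BTF */
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

SEC("cgroup/sysctl")
int sysctl_policy(struct bpf_sysctl *ctx)
{
	struct cgroup *cgrp;
	int allow = 1;

	cgrp = bpf_cgroup_from_id(bpf_get_current_cgroup_id());
	if (!cgrp)
		return 1;

	/* toy policy: refuse sysctl writes from deeply nested cgroups */
	if (ctx->write && cgrp->level > 4)
		allow = 0;

	bpf_cgroup_release(cgrp);
	return allow;
}

char _license[] SEC("license") = "GPL";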
Signed-off-by: Matteo Croce <teknoraver@meta.com>
---
kernel/bpf/helpers.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index b5f0adae8293..23b782641077 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3051,6 +3051,12 @@ static int __init kfunc_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &generic_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &generic_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SKB, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_DEVICE, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SYSCTL, &generic_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCKOPT, &generic_kfunc_set);
ret = ret ?: register_btf_id_dtor_kfuncs(generic_dtors,
ARRAY_SIZE(generic_dtors),
THIS_MODULE);
--
2.45.2
* [PATCH bpf-next v2 2/2] bpf: allow bpf_current_task_under_cgroup() with BPF_CGROUP_*
From: technoboy85 @ 2024-07-23 1:28 UTC
To: Alexei Starovoitov, Daniel Borkmann, Steven Rostedt,
Masami Hiramatsu, bpf, linux-trace-kernel
Cc: linux-kernel, Matteo Croce
From: Matteo Croce <teknoraver@meta.com>
The helper bpf_current_task_under_cgroup() is currently only allowed in
tracing programs.
Allow it in the BPF_CGROUP_* program types as well.
Move the code from kernel/trace/bpf_trace.c to kernel/bpf/helpers.c so it
also compiles without CONFIG_BPF_EVENTS.
This will be used by systemd-networkd to monitor sysctl writes and to
filter its own writes from those of other processes:
https://github.com/systemd/systemd/pull/32212
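
For context, a hypothetical userspace counterpart (not from the systemd PR;
the "own_cgroup" map name and helper function are made up) would populate
the BPF_MAP_TYPE_CGROUP_ARRAY consulted by bpf_current_task_under_cgroup()
with the service's own cgroup fd:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Store the service's own cgroup in slot 0 of the "own_cgroup" array. */
static int set_own_cgroup(struct bpf_object *obj, const char *cgroup_path)
{
	struct bpf_map *map = bpf_object__find_map_by_name(obj, "own_cgroup");
	__u32 key = 0;
	int cg_fd, err;

	if (!map)
		return -ENOENT;

	cg_fd = open(cgroup_path, O_RDONLY);
	if (cg_fd < 0)
		return -errno;

	/* cgroup array elements are cgroup fds */
	err = bpf_map_update_elem(bpf_map__fd(map), &key, &cg_fd, 0);
	close(cg_fd);
	return err;
}

The value written is the cgroup fd itself; the kernel resolves it to a
cgroup reference when the array element is updated.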
Signed-off-by: Matteo Croce <teknoraver@meta.com>
---
include/linux/bpf.h | 1 +
kernel/bpf/cgroup.c | 2 ++
kernel/bpf/helpers.c | 23 +++++++++++++++++++++++
kernel/trace/bpf_trace.c | 27 ++-------------------------
4 files changed, 28 insertions(+), 25 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4f1d4a97b9d1..4000fd161dda 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3188,6 +3188,7 @@ extern const struct bpf_func_proto bpf_sock_hash_update_proto;
extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto;
extern const struct bpf_func_proto bpf_get_cgroup_classid_curr_proto;
+extern const struct bpf_func_proto bpf_current_task_under_cgroup_proto;
extern const struct bpf_func_proto bpf_msg_redirect_hash_proto;
extern const struct bpf_func_proto bpf_msg_redirect_map_proto;
extern const struct bpf_func_proto bpf_sk_redirect_hash_proto;
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 8ba73042a239..e7113d700b87 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -2581,6 +2581,8 @@ cgroup_current_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_FUNC_get_cgroup_classid:
return &bpf_get_cgroup_classid_curr_proto;
#endif
+ case BPF_FUNC_current_task_under_cgroup:
+ return &bpf_current_task_under_cgroup_proto;
default:
return NULL;
}
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 23b782641077..eaa3ce14028a 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2457,6 +2457,29 @@ __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
return ret;
}
+BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx)
+{
+ struct bpf_array *array = container_of(map, struct bpf_array, map);
+ struct cgroup *cgrp;
+
+ if (unlikely(idx >= array->map.max_entries))
+ return -E2BIG;
+
+ cgrp = READ_ONCE(array->ptrs[idx]);
+ if (unlikely(!cgrp))
+ return -EAGAIN;
+
+ return task_under_cgroup_hierarchy(current, cgrp);
+}
+
+const struct bpf_func_proto bpf_current_task_under_cgroup_proto = {
+ .func = bpf_current_task_under_cgroup,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_ANYTHING,
+};
+
/**
* bpf_task_get_cgroup1 - Acquires the associated cgroup of a task within a
* specific cgroup1 hierarchy. The cgroup1 hierarchy is identified by its
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index cd098846e251..ea5cdd122024 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -798,29 +798,6 @@ const struct bpf_func_proto bpf_task_pt_regs_proto = {
.ret_btf_id = &bpf_task_pt_regs_ids[0],
};
-BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx)
-{
- struct bpf_array *array = container_of(map, struct bpf_array, map);
- struct cgroup *cgrp;
-
- if (unlikely(idx >= array->map.max_entries))
- return -E2BIG;
-
- cgrp = READ_ONCE(array->ptrs[idx]);
- if (unlikely(!cgrp))
- return -EAGAIN;
-
- return task_under_cgroup_hierarchy(current, cgrp);
-}
-
-static const struct bpf_func_proto bpf_current_task_under_cgroup_proto = {
- .func = bpf_current_task_under_cgroup,
- .gpl_only = false,
- .ret_type = RET_INTEGER,
- .arg1_type = ARG_CONST_MAP_PTR,
- .arg2_type = ARG_ANYTHING,
-};
-
struct send_signal_irq_work {
struct irq_work irq_work;
struct task_struct *task;
@@ -1548,8 +1525,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_get_numa_node_id_proto;
case BPF_FUNC_perf_event_read:
return &bpf_perf_event_read_proto;
- case BPF_FUNC_current_task_under_cgroup:
- return &bpf_current_task_under_cgroup_proto;
case BPF_FUNC_get_prandom_u32:
return &bpf_get_prandom_u32_proto;
case BPF_FUNC_probe_write_user:
@@ -1578,6 +1553,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_cgrp_storage_get_proto;
case BPF_FUNC_cgrp_storage_delete:
return &bpf_cgrp_storage_delete_proto;
+ case BPF_FUNC_current_task_under_cgroup:
+ return &bpf_current_task_under_cgroup_proto;
#endif
case BPF_FUNC_send_signal:
return &bpf_send_signal_proto;
--
2.45.2
* Re: [PATCH bpf-next v2 1/2] bpf: enable generic kfuncs for BPF_CGROUP_* programs
From: Andrii Nakryiko @ 2024-07-24 23:29 UTC
To: technoboy85
Cc: Alexei Starovoitov, Daniel Borkmann, Steven Rostedt,
Masami Hiramatsu, bpf, linux-trace-kernel, linux-kernel,
Matteo Croce
On Mon, Jul 22, 2024 at 6:28 PM <technoboy85@gmail.com> wrote:
>
> From: Matteo Croce <teknoraver@meta.com>
>
> These kfuncs are enabled even in BPF_PROG_TYPE_TRACING, so they
> should also be safe in BPF_CGROUP_* programs.
>
> Signed-off-by: Matteo Croce <teknoraver@meta.com>
> ---
> kernel/bpf/helpers.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index b5f0adae8293..23b782641077 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -3051,6 +3051,12 @@ static int __init kfunc_init(void)
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &generic_kfunc_set);
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &generic_kfunc_set);
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SKB, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_DEVICE, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SYSCTL, &generic_kfunc_set);
> + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCKOPT, &generic_kfunc_set);
A bit crazy that we have so many cgroup program types, but it is what
it is. This LGTM.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
> ret = ret ?: register_btf_id_dtor_kfuncs(generic_dtors,
> ARRAY_SIZE(generic_dtors),
> THIS_MODULE);
> --
> 2.45.2
>
>
* Re: [PATCH bpf-next v2 2/2] bpf: allow bpf_current_task_under_cgroup() with BPF_CGROUP_*
From: Andrii Nakryiko @ 2024-07-24 23:36 UTC
To: technoboy85
Cc: Alexei Starovoitov, Daniel Borkmann, Steven Rostedt,
Masami Hiramatsu, bpf, linux-trace-kernel, linux-kernel,
Matteo Croce
On Mon, Jul 22, 2024 at 6:29 PM <technoboy85@gmail.com> wrote:
>
> From: Matteo Croce <teknoraver@meta.com>
>
> The helper bpf_current_task_under_cgroup() is currently only allowed in
> tracing programs.
> Allow it in the BPF_CGROUP_* program types as well.
> Move the code from kernel/trace/bpf_trace.c to kernel/bpf/helpers.c so it
> also compiles without CONFIG_BPF_EVENTS.
>
> This will be used by systemd-networkd to monitor sysctl writes and to
> filter its own writes from those of other processes:
> https://github.com/systemd/systemd/pull/32212
>
> Signed-off-by: Matteo Croce <teknoraver@meta.com>
> ---
> include/linux/bpf.h | 1 +
> kernel/bpf/cgroup.c | 2 ++
> kernel/bpf/helpers.c | 23 +++++++++++++++++++++++
> kernel/trace/bpf_trace.c | 27 ++-------------------------
> 4 files changed, 28 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 4f1d4a97b9d1..4000fd161dda 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3188,6 +3188,7 @@ extern const struct bpf_func_proto bpf_sock_hash_update_proto;
> extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
> extern const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto;
> extern const struct bpf_func_proto bpf_get_cgroup_classid_curr_proto;
> +extern const struct bpf_func_proto bpf_current_task_under_cgroup_proto;
> extern const struct bpf_func_proto bpf_msg_redirect_hash_proto;
> extern const struct bpf_func_proto bpf_msg_redirect_map_proto;
> extern const struct bpf_func_proto bpf_sk_redirect_hash_proto;
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index 8ba73042a239..e7113d700b87 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -2581,6 +2581,8 @@ cgroup_current_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> case BPF_FUNC_get_cgroup_classid:
> return &bpf_get_cgroup_classid_curr_proto;
> #endif
> + case BPF_FUNC_current_task_under_cgroup:
> + return &bpf_current_task_under_cgroup_proto;
> default:
> return NULL;
> }
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 23b782641077..eaa3ce14028a 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2457,6 +2457,29 @@ __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
> return ret;
> }
>
> +BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx)
> +{
> + struct bpf_array *array = container_of(map, struct bpf_array, map);
> + struct cgroup *cgrp;
> +
> + if (unlikely(idx >= array->map.max_entries))
> + return -E2BIG;
> +
> + cgrp = READ_ONCE(array->ptrs[idx]);
> + if (unlikely(!cgrp))
> + return -EAGAIN;
> +
> + return task_under_cgroup_hierarchy(current, cgrp);
> +}
> +
> +const struct bpf_func_proto bpf_current_task_under_cgroup_proto = {
> + .func = bpf_current_task_under_cgroup,
> + .gpl_only = false,
> + .ret_type = RET_INTEGER,
> + .arg1_type = ARG_CONST_MAP_PTR,
> + .arg2_type = ARG_ANYTHING,
> +};
> +
> /**
> * bpf_task_get_cgroup1 - Acquires the associated cgroup of a task within a
> * specific cgroup1 hierarchy. The cgroup1 hierarchy is identified by its
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index cd098846e251..ea5cdd122024 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -798,29 +798,6 @@ const struct bpf_func_proto bpf_task_pt_regs_proto = {
> .ret_btf_id = &bpf_task_pt_regs_ids[0],
> };
>
> -BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx)
> -{
> - struct bpf_array *array = container_of(map, struct bpf_array, map);
> - struct cgroup *cgrp;
> -
> - if (unlikely(idx >= array->map.max_entries))
> - return -E2BIG;
> -
> - cgrp = READ_ONCE(array->ptrs[idx]);
> - if (unlikely(!cgrp))
> - return -EAGAIN;
> -
> - return task_under_cgroup_hierarchy(current, cgrp);
> -}
> -
> -static const struct bpf_func_proto bpf_current_task_under_cgroup_proto = {
> - .func = bpf_current_task_under_cgroup,
> - .gpl_only = false,
> - .ret_type = RET_INTEGER,
> - .arg1_type = ARG_CONST_MAP_PTR,
> - .arg2_type = ARG_ANYTHING,
> -};
> -
> struct send_signal_irq_work {
> struct irq_work irq_work;
> struct task_struct *task;
> @@ -1548,8 +1525,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> return &bpf_get_numa_node_id_proto;
> case BPF_FUNC_perf_event_read:
> return &bpf_perf_event_read_proto;
> - case BPF_FUNC_current_task_under_cgroup:
> - return &bpf_current_task_under_cgroup_proto;
> case BPF_FUNC_get_prandom_u32:
> return &bpf_get_prandom_u32_proto;
> case BPF_FUNC_probe_write_user:
> @@ -1578,6 +1553,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> return &bpf_cgrp_storage_get_proto;
> case BPF_FUNC_cgrp_storage_delete:
> return &bpf_cgrp_storage_delete_proto;
> + case BPF_FUNC_current_task_under_cgroup:
> + return &bpf_current_task_under_cgroup_proto;
Let's not change this part unnecessarily: it clearly works even with
!CONFIG_CGROUPS, so why move it? On the other hand, this can,
technically, regress verification of some BPF programs on
!CONFIG_CGROUPS. So I'd drop this hunk, but the rest looks good.
With that, feel free to add my ack for the next revision:
Acked-by: Andrii Nakryiko <andrii@kernel.org>
pw-bot: cr
> #endif
> case BPF_FUNC_send_signal:
> return &bpf_send_signal_proto;
> --
> 2.45.2
>
>