* [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler
@ 2025-04-30 16:46 Tao Chen
2025-04-30 16:46 ` [RFC PATCH bpf-next 1/2] libbpf: Try get fentry func addr from available_filter_functions_addr Tao Chen
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Tao Chen @ 2025-04-30 16:46 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, alan.maguire
Cc: bpf, linux-kernel, Tao Chen
The previous discussion about fentry being invalid for functions
optimized by the compiler can be found here:
https://lore.kernel.org/bpf/3c6f539b-b498-4587-b0dc-5fdeba717600@oracle.com/
This seems to be something that pahole needs to resolve. However, Alan
mentioned that there are many situations involved, and he proposed that
available_filter_functions_addrs can be used to find the address of the
real function. If we can obtain the real address from user space, it can
be used when the function name obtained from BTF is invalid.
A proper selftest has not been added yet; I only wrote a simple test
program and ran it.
This is the initial RFC patch, feedback is welcome.
Tao Chen (2):
libbpf: Try get fentry func addr from available_filter_functions_addr
bpf: Get fentry func addr from user when BTF info invalid
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/syscall.c | 1 +
kernel/bpf/verifier.c | 9 ++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/bpf.c | 1 +
tools/lib/bpf/bpf.h | 1 +
tools/lib/bpf/gen_loader.c | 1 +
tools/lib/bpf/libbpf.c | 53 ++++++++++++++++++++++++++++++++++
9 files changed, 69 insertions(+)
--
2.43.0
^ permalink raw reply [flat|nested] 7+ messages in thread
* [RFC PATCH bpf-next 1/2] libbpf: Try get fentry func addr from available_filter_functions_addr
2025-04-30 16:46 [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler Tao Chen
@ 2025-04-30 16:46 ` Tao Chen
2025-04-30 16:46 ` [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid Tao Chen
2025-04-30 17:58 ` [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler Alan Maguire
2 siblings, 0 replies; 7+ messages in thread
From: Tao Chen @ 2025-04-30 16:46 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, alan.maguire
Cc: bpf, linux-kernel, Tao Chen
Try to get the real kernel function address from
available_filter_functions_addrs, in case the function name was optimized
by the compiler and a suffix such as ".isra.0" was added. The address
will be used in the next patch.
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/bpf.c | 1 +
tools/lib/bpf/bpf.h | 1 +
tools/lib/bpf/gen_loader.c | 1 +
tools/lib/bpf/libbpf.c | 53 ++++++++++++++++++++++++++++++++++
5 files changed, 57 insertions(+)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 07ee73cdf9..7016e47a70 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1577,6 +1577,7 @@ union bpf_attr {
* If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
*/
__s32 prog_token_fd;
+ __aligned_u64 fentry_func;
/* The fd_array_cnt can be used to pass the length of the
* fd_array array. In this case all the [map] file descriptors
* passed in this array will be bound to the program, even if
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a9c3e33d0f..639a6b54e8 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -312,6 +312,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
attr.fd_array = ptr_to_u64(OPTS_GET(opts, fd_array, NULL));
attr.fd_array_cnt = OPTS_GET(opts, fd_array_cnt, 0);
+ attr.fentry_func = OPTS_GET(opts, fentry_func, 0);
if (log_level) {
attr.log_buf = ptr_to_u64(log_buf);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 777627d33d..b3340bcf7b 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -83,6 +83,7 @@ struct bpf_prog_load_opts {
__u32 attach_btf_id;
__u32 attach_prog_fd;
__u32 attach_btf_obj_fd;
+ __u64 fentry_func;
const int *fd_array;
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 113ae4abd3..d90874b746 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -1023,6 +1023,7 @@ void bpf_gen__prog_load(struct bpf_gen *gen,
attr.kern_version = 0;
attr.insn_cnt = tgt_endian((__u32)insn_cnt);
attr.prog_flags = tgt_endian(load_attr->prog_flags);
+ attr.fentry_func = tgt_endian(load_attr->fentry_func);
attr.func_info_rec_size = tgt_endian(load_attr->func_info_rec_size);
attr.func_info_cnt = tgt_endian(load_attr->func_info_cnt);
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 37d563e140..624e75cf2f 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7456,6 +7456,50 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
}
static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_sz);
+static const char *tracefs_available_filter_functions_addrs(void);
+
+static void try_get_fentry_func_addr(const char *func_name, struct bpf_prog_load_opts *load_attr)
+{
+ const char *available_functions_file = tracefs_available_filter_functions_addrs();
+ char sym_name[500];
+ FILE *f;
+ char *suffix;
+ int ret;
+ unsigned long long sym_addr;
+
+ f = fopen(available_functions_file, "re");
+ if (!f) {
+ pr_warn("failed to open %s: %s\n", available_functions_file, errstr(errno));
+ return;
+ }
+
+ while (true) {
+ ret = fscanf(f, "%llx %499s%*[^\n]\n", &sym_addr, sym_name);
+ if (ret == EOF && feof(f))
+ break;
+
+ if (ret != 2) {
+ pr_warn("failed to parse available_filter_functions_addrs entry: %d\n",
+ ret);
+ goto cleanup;
+ }
+
+ if (strcmp(func_name, sym_name) == 0) {
+ load_attr->fentry_func = sym_addr;
+ break;
+ }
+ /* find [func_name] optimized by compiler like [func_name].isra.0 */
+ suffix = strstr(sym_name, ".");
+ if (suffix && strncmp(sym_name, func_name,
+ strlen(sym_name) - strlen(suffix)) == 0) {
+ load_attr->fentry_func = sym_addr;
+ break;
+ }
+ }
+
+cleanup:
+ fclose(f);
+}
static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog,
struct bpf_insn *insns, int insns_cnt,
@@ -7504,6 +7548,15 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
load_attr.prog_ifindex = prog->prog_ifindex;
load_attr.expected_attach_type = prog->expected_attach_type;
+ if (load_attr.expected_attach_type == BPF_TRACE_FENTRY ||
+ load_attr.expected_attach_type == BPF_TRACE_FEXIT) {
+ const char *func_name;
+
+ func_name = strchr(prog->sec_name, '/');
+ if (func_name)
+ try_get_fentry_func_addr(++func_name, &load_attr);
+ }
+
/* specify func_info/line_info only if kernel supports them */
if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
load_attr.prog_btf_fd = btf__fd(obj->btf);
--
2.43.0
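As an aside for readers, the name-matching logic in try_get_fentry_func_addr() above can be exercised in isolation. The sketch below is illustrative only and not part of the posted patch: it parses one line in the available_filter_functions_addrs format and matches a symbol against a BTF function name, with an added exact-length check on the base name (so a symbol "foo.isra.0" does not match a longer BTF name "foobar", a case a plain length-limited strncmp would accept).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Return true if sym_name refers to func_name: either an exact match or a
 * compiler-suffixed variant such as "func_name.isra.0".  Requiring the
 * part before the first '.' to have exactly strlen(func_name) characters
 * avoids matching "foo.isra.0" against a different function "foobar".
 */
static bool fentry_sym_matches(const char *sym_name, const char *func_name)
{
	const char *suffix = strchr(sym_name, '.');
	size_t base_len;

	if (!suffix)
		return strcmp(sym_name, func_name) == 0;

	base_len = (size_t)(suffix - sym_name);
	return base_len == strlen(func_name) &&
	       strncmp(sym_name, func_name, base_len) == 0;
}

/* Parse one "<hex addr> <symbol>" line in the format of
 * /sys/kernel/tracing/available_filter_functions_addrs.
 */
static bool parse_addr_line(const char *line, unsigned long long *addr,
			    char sym[256])
{
	return sscanf(line, "%llx %255s", addr, sym) == 2;
}
```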
* [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid
2025-04-30 16:46 [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler Tao Chen
2025-04-30 16:46 ` [RFC PATCH bpf-next 1/2] libbpf: Try get fentry func addr from available_filter_functions_addr Tao Chen
@ 2025-04-30 16:46 ` Tao Chen
2025-04-30 17:57 ` Alan Maguire
2025-04-30 17:58 ` [RFC PATCH bpf-next 0/2] fentry supports function optimized by complier Alan Maguire
2 siblings, 1 reply; 7+ messages in thread
From: Tao Chen @ 2025-04-30 16:46 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, alan.maguire
Cc: bpf, linux-kernel, Tao Chen
If a kernel function is optimized by the compiler, its name in kallsyms
will carry a suffix such as ".isra.0". Since fentry uses the function
name from BTF to look up the function address in kallsyms, the lookup
will fail in this situation due to the name mismatch. In that case, try
to use the function address passed in from user space.
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/syscall.c | 1 +
kernel/bpf/verifier.c | 9 +++++++++
4 files changed, 12 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3f0cc89c06..fd53d1817f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1626,6 +1626,7 @@ struct bpf_prog_aux {
struct work_struct work;
struct rcu_head rcu;
};
+ u64 fentry_func;
};
struct bpf_prog {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 07ee73cdf9..7016e47a70 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1577,6 +1577,7 @@ union bpf_attr {
* If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
*/
__s32 prog_token_fd;
+ __aligned_u64 fentry_func;
/* The fd_array_cnt can be used to pass the length of the
* fd_array array. In this case all the [map] file descriptors
* passed in this array will be bound to the program, even if
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 9794446bc8..6c1e3572cc 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2892,6 +2892,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
prog->aux->attach_btf = attach_btf;
prog->aux->attach_btf_id = attr->attach_btf_id;
+ prog->aux->fentry_func = attr->fentry_func;
prog->aux->dst_prog = dst_prog;
prog->aux->dev_bound = !!attr->prog_ifindex;
prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 54c6953a8b..507c281ddc 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -23304,6 +23304,15 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
} else {
addr = kallsyms_lookup_name(tname);
}
+
+ if (!addr && (prog->expected_attach_type == BPF_TRACE_FENTRY ||
+ prog->expected_attach_type == BPF_TRACE_FEXIT)) {
+ fname = kallsyms_lookup((unsigned long)prog->aux->fentry_func,
+ NULL, NULL, NULL, trace_symbol);
+ if (fname)
+ addr = (long)prog->aux->fentry_func;
+ }
+
if (!addr) {
module_put(mod);
bpf_log(log,
--
2.43.0
* Re: [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid
2025-04-30 16:46 ` [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid Tao Chen
@ 2025-04-30 17:57 ` Alan Maguire
2025-04-30 22:23 ` Alexei Starovoitov
0 siblings, 1 reply; 7+ messages in thread
From: Alan Maguire @ 2025-04-30 17:57 UTC (permalink / raw)
To: Tao Chen, ast, daniel, john.fastabend, andrii, martin.lau,
eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa
Cc: bpf, linux-kernel
On 30/04/2025 17:46, Tao Chen wrote:
> If a kernel function is optimized by the compiler, its name in kallsyms
> will carry a suffix such as ".isra.0". Since fentry uses the function
> name from BTF to look up the function address in kallsyms, the lookup
> will fail in this situation due to the name mismatch. In that case, try
> to use the function address passed in from user space.
>
> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
> ---
> include/linux/bpf.h | 1 +
> include/uapi/linux/bpf.h | 1 +
> kernel/bpf/syscall.c | 1 +
> kernel/bpf/verifier.c | 9 +++++++++
> 4 files changed, 12 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3f0cc89c06..fd53d1817f 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1626,6 +1626,7 @@ struct bpf_prog_aux {
> struct work_struct work;
> struct rcu_head rcu;
> };
> + u64 fentry_func;
> };
>
> struct bpf_prog {
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 07ee73cdf9..7016e47a70 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1577,6 +1577,7 @@ union bpf_attr {
> * If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
> */
> __s32 prog_token_fd;
> + __aligned_u64 fentry_func;
Since it can be used for fentry/fexit (I think, from below?), it might be
clearer to call it attach_func_addr or similar.
> /* The fd_array_cnt can be used to pass the length of the
> * fd_array array. In this case all the [map] file descriptors
> * passed in this array will be bound to the program, even if
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 9794446bc8..6c1e3572cc 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2892,6 +2892,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
> prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
> prog->aux->attach_btf = attach_btf;
> prog->aux->attach_btf_id = attr->attach_btf_id;
> + prog->aux->fentry_func = attr->fentry_func;
> prog->aux->dst_prog = dst_prog;
> prog->aux->dev_bound = !!attr->prog_ifindex;
> prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 54c6953a8b..507c281ddc 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -23304,6 +23304,15 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> } else {
> addr = kallsyms_lookup_name(tname);
> }
> +
> + if (!addr && (prog->expected_attach_type == BPF_TRACE_FENTRY ||
> + prog->expected_attach_type == BPF_TRACE_FEXIT)) {
> + fname = kallsyms_lookup((unsigned long)prog->aux->fentry_func,
> + NULL, NULL, NULL, trace_symbol);
> + if (fname)
> + addr = (long)prog->aux->fentry_func;
We should do some validation that the fname we get back matches the BTF
func name prefix (fname "foo.isra.0" matches "foo") I think?
> + }
> +
> if (!addr) {
> module_put(mod);
> bpf_log(log,
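A minimal sketch of the validation Alan suggests here, purely illustrative (the name fname_matches_btf and its placement are assumptions, not part of the posted patch, and as Alexei notes later in the thread, name matching alone is not sufficient as a security check): accept fname only when it is the BTF name itself or the BTF name followed immediately by a '.' suffix.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Would-be kernel-side check: the kallsyms name resolved from the
 * user-supplied address must be the BTF function name, optionally
 * followed by a compiler-generated suffix beginning with '.'.
 */
static bool fname_matches_btf(const char *fname, const char *btf_name)
{
	size_t n = strlen(btf_name);

	return strncmp(fname, btf_name, n) == 0 &&
	       (fname[n] == '\0' || fname[n] == '.');
}
```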
* Re: [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler
2025-04-30 16:46 [RFC PATCH bpf-next 0/2] fentry supports function optimized by compiler Tao Chen
2025-04-30 16:46 ` [RFC PATCH bpf-next 1/2] libbpf: Try get fentry func addr from available_filter_functions_addr Tao Chen
2025-04-30 16:46 ` [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid Tao Chen
@ 2025-04-30 17:58 ` Alan Maguire
2 siblings, 0 replies; 7+ messages in thread
From: Alan Maguire @ 2025-04-30 17:58 UTC (permalink / raw)
To: Tao Chen, ast, daniel, john.fastabend, andrii, martin.lau,
eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa
Cc: bpf, linux-kernel
On 30/04/2025 17:46, Tao Chen wrote:
> The previous discussion about fentry being invalid for functions
> optimized by the compiler can be found here:
> https://lore.kernel.org/bpf/3c6f539b-b498-4587-b0dc-5fdeba717600@oracle.com/
>
> This seems to be something that pahole needs to resolve. However, Alan
> mentioned that there are many situations involved, and he proposed that
> available_filter_functions_addrs can be used to find the address of the
> real function. If we can obtain the real address from user space, it can
> be used when the function name obtained from BTF is invalid.
>
> A proper selftest has not been added yet; I only wrote a simple test
> program and ran it.
>
> This is the initial RFC patch, feedback is welcome.
>
There's some discussion around this in the context of inlines and
optimized functions at [1] that might be of interest. Ultimately for
more complicated cases like this I'd really like to see us have a
name/address relationship encoded in BTF so that we can be clear about
relationships between a function site and its BTF.
The current state of affairs though is that a function will only make it
into BTF if its function signature is consistent with the prototype, so
unless I'm missing something the approach of passing an address to
clarify the relationship seems like a valid one.
It would be great to have a test; can we try adding a function to
bpf_testmod that a compiler will likely optimize to .isra.0 and trace
it? This won't work in all cases, but we could skip the test if the
function isn't optimized as expected.
[1]
https://lore.kernel.org/dwarves/20250416-btf_inline-v1-0-e4bd2f8adae5@meta.com/
> Tao Chen (2):
> libbpf: Try get fentry func addr from available_filter_functions_addr
> bpf: Get fentry func addr from user when BTF info invalid
>
> include/linux/bpf.h | 1 +
> include/uapi/linux/bpf.h | 1 +
> kernel/bpf/syscall.c | 1 +
> kernel/bpf/verifier.c | 9 ++++++
> tools/include/uapi/linux/bpf.h | 1 +
> tools/lib/bpf/bpf.c | 1 +
> tools/lib/bpf/bpf.h | 1 +
> tools/lib/bpf/gen_loader.c | 1 +
> tools/lib/bpf/libbpf.c | 53 ++++++++++++++++++++++++++++++++++
> 9 files changed, 69 insertions(+)
>
* Re: [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid
2025-04-30 17:57 ` Alan Maguire
@ 2025-04-30 22:23 ` Alexei Starovoitov
2025-05-06 3:17 ` Tao Chen
0 siblings, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2025-04-30 22:23 UTC (permalink / raw)
To: Alan Maguire
Cc: Tao Chen, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
bpf, LKML
On Wed, Apr 30, 2025 at 10:57 AM Alan Maguire <alan.maguire@oracle.com> wrote:
> > +
> > + if (!addr && (prog->expected_attach_type == BPF_TRACE_FENTRY ||
> > + prog->expected_attach_type == BPF_TRACE_FEXIT)) {
> > + fname = kallsyms_lookup((unsigned long)prog->aux->fentry_func,
> > + NULL, NULL, NULL, trace_symbol);
> > + if (fname)
> > + addr = (long)prog->aux->fentry_func;
>
>
> We should do some validation that the fname we get back matches the BTF
> func name prefix (fname "foo.isra.0" matches "foo") I think?
I don't think that will be enough.
User space should not be able to pass a random kernel address
and convince the kernel that it matches a particular btf_id.
As discussed in the other thread matching based on name is
breaking apart.
pahole does all the safety check to make sure name/addr/btf_id
are consistent.
We shouldn't be adding workarounds like this because
pahole/btf/kernel build is not smart enough.
pw-bot: cr
* Re: [RFC PATCH bpf-next 2/2] bpf: Get fentry func addr from user when BTF info invalid
2025-04-30 22:23 ` Alexei Starovoitov
@ 2025-05-06 3:17 ` Tao Chen
0 siblings, 0 replies; 7+ messages in thread
From: Tao Chen @ 2025-05-06 3:17 UTC (permalink / raw)
To: Alexei Starovoitov, Alan Maguire
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
bpf, LKML
On 2025/5/1 06:23, Alexei Starovoitov wrote:
> On Wed, Apr 30, 2025 at 10:57 AM Alan Maguire <alan.maguire@oracle.com> wrote:
>>> +
>>> + if (!addr && (prog->expected_attach_type == BPF_TRACE_FENTRY ||
>>> + prog->expected_attach_type == BPF_TRACE_FEXIT)) {
>>> + fname = kallsyms_lookup((unsigned long)prog->aux->fentry_func,
>>> + NULL, NULL, NULL, trace_symbol);
>>> + if (fname)
>>> + addr = (long)prog->aux->fentry_func;
>>
>>
>> We should do some validation that the fname we get back matches the BTF
>> func name prefix (fname "foo.isra.0" matches "foo") I think?
>
> I don't think that will be enough.
> User space should not be able to pass a random kernel address
> and convince the kernel that it matches a particular btf_id.
> As discussed in the other thread matching based on name is
> breaking apart.
> pahole does all the safety check to make sure name/addr/btf_id
> are consistent.
> We shouldn't be adding workarounds like this because
> pahole/btf/kernel build is not smart enough.
>
Got it, thanks for your reply. Hopefully pahole/BTF can provide a
better way to solve such problems.
> pw-bot: cr
--
Best Regards
Tao Chen