* [PATCH bpf-next] bpf: make the attach target more accurate
@ 2025-07-07 11:35 Menglong Dong
2025-07-07 13:09 ` Daniel Borkmann
2025-07-07 20:25 ` kernel test robot
0 siblings, 2 replies; 4+ messages in thread
From: Menglong Dong @ 2025-07-07 11:35 UTC (permalink / raw)
To: ast
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, bpf, linux-kernel,
Menglong Dong
Currently, we look up the address of the attach target in
bpf_check_attach_target() with find_kallsyms_symbol_value() or
kallsyms_lookup_name(), which is not accurate in some cases.

For example, we want to attach to the target "t_next", but multiple
symbols named "t_next" exist in kallsyms. The one that
kallsyms_lookup_name() returns may have no ftrace record, which makes
the attach target unavailable. So we want the one that has an ftrace
record to be returned.

Meanwhile, more than one symbol named "t_next" may have an ftrace
record. In that case, the attach target is ambiguous, and the attach
should fail.

Introduce bpf_lookup_attach_addr() to do the address lookup and solve
this problem.
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
kernel/bpf/verifier.c | 76 ++++++++++++++++++++++++++++++++++++++++---
1 file changed, 71 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0f6cc2275695..9a7128da6d13 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -23436,6 +23436,72 @@ static int check_non_sleepable_error_inject(u32 btf_id)
return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
}
+struct symbol_lookup_ctx {
+ const char *name;
+ unsigned long addr;
+};
+
+static int symbol_callback(void *data, unsigned long addr)
+{
+ struct symbol_lookup_ctx *ctx = data;
+
+ if (!ftrace_location(addr))
+ return 0;
+
+ if (ctx->addr)
+ return -EADDRNOTAVAIL;
+
+ ctx->addr = addr;
+
+ return 0;
+}
+
+static int symbol_mod_callback(void *data, const char *name, unsigned long addr)
+{
+ if (strcmp(((struct symbol_lookup_ctx *)data)->name, name) != 0)
+ return 0;
+
+ return symbol_callback(data, addr);
+}
+
+/**
+ * bpf_lookup_attach_addr() - Look up the address for a symbol
+ *
+ * @mod: kernel module to look up the symbol in, NULL means to look up
+ * kernel symbols
+ * @sym: the symbol to resolve
+ * @addr: pointer to store the result
+ *
+ * Look up the address of the symbol @sym; the address must have a
+ * corresponding ftrace location. If multiple symbols named @sym exist,
+ * the one that has an ftrace location is returned. If more than one
+ * has an ftrace location, -EADDRNOTAVAIL is returned.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
+ unsigned long *addr)
+{
+ struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
+ int err;
+
+ if (!mod)
+ err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
+ else
+ err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
+ &ctx);
+
+ if (!ctx.addr)
+ return -ENOENT;
+
+ if (err)
+ return err;
+
+ *addr = ctx.addr;
+
+ return 0;
+}
+
int bpf_check_attach_target(struct bpf_verifier_log *log,
const struct bpf_prog *prog,
const struct bpf_prog *tgt_prog,
@@ -23689,18 +23755,18 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
if (btf_is_module(btf)) {
mod = btf_try_get_module(btf);
if (mod)
- addr = find_kallsyms_symbol_value(mod, tname);
+ ret = bpf_lookup_attach_addr(mod, tname, &addr);
else
- addr = 0;
+ ret = -ENOENT;
} else {
- addr = kallsyms_lookup_name(tname);
+ ret = bpf_lookup_attach_addr(NULL, tname, &addr);
}
- if (!addr) {
+ if (ret) {
module_put(mod);
bpf_log(log,
"The address of function %s cannot be found\n",
tname);
- return -ENOENT;
+ return ret;
}
}
--
2.39.5
* Re: [PATCH bpf-next] bpf: make the attach target more accurate
2025-07-07 11:35 [PATCH bpf-next] bpf: make the attach target more accurate Menglong Dong
@ 2025-07-07 13:09 ` Daniel Borkmann
2025-07-08 2:34 ` Menglong Dong
2025-07-07 20:25 ` kernel test robot
1 sibling, 1 reply; 4+ messages in thread
From: Daniel Borkmann @ 2025-07-07 13:09 UTC (permalink / raw)
To: Menglong Dong, ast
Cc: john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, bpf, linux-kernel, Menglong Dong,
alan.maguire
On 7/7/25 1:35 PM, Menglong Dong wrote:
> Currently, we look up the address of the attach target in
> bpf_check_attach_target() with find_kallsyms_symbol_value() or
> kallsyms_lookup_name(), which is not accurate in some cases.
>
> For example, we want to attach to the target "t_next", but multiple
> symbols named "t_next" exist in kallsyms. The one that
> kallsyms_lookup_name() returns may have no ftrace record, which makes
> the attach target unavailable. So we want the one that has an ftrace
> record to be returned.
>
> Meanwhile, more than one symbol named "t_next" may have an ftrace
> record. In that case, the attach target is ambiguous, and the attach
> should fail.
>
> Introduce bpf_lookup_attach_addr() to do the address lookup and solve
> this problem.
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Breaks CI, see also:
First test_progs failure (test_progs-aarch64-gcc-14):
#467/1 tracing_failure/bpf_spin_lock
test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
libbpf: prog 'test_spin_lock': BPF program load failed: -ENOENT
libbpf: prog 'test_spin_lock': -- BEGIN PROG LOAD LOG --
The address of function bpf_spin_lock cannot be found
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'test_spin_lock': failed to load: -ENOENT
libbpf: failed to load object 'tracing_failure'
libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)
#467/2 tracing_failure/bpf_spin_unlock
test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
libbpf: prog 'test_spin_unlock': BPF program load failed: -ENOENT
libbpf: prog 'test_spin_unlock': -- BEGIN PROG LOAD LOG --
The address of function bpf_spin_unlock cannot be found
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'test_spin_unlock': failed to load: -ENOENT
libbpf: failed to load object 'tracing_failure'
libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)
> kernel/bpf/verifier.c | 76 ++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 71 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 0f6cc2275695..9a7128da6d13 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -23436,6 +23436,72 @@ static int check_non_sleepable_error_inject(u32 btf_id)
> return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
> }
>
> +struct symbol_lookup_ctx {
> + const char *name;
> + unsigned long addr;
> +};
> +
> +static int symbol_callback(void *data, unsigned long addr)
> +{
> + struct symbol_lookup_ctx *ctx = data;
> +
> + if (!ftrace_location(addr))
> + return 0;
> +
> + if (ctx->addr)
> + return -EADDRNOTAVAIL;
> +
> + ctx->addr = addr;
> +
> + return 0;
> +}
> +
> +static int symbol_mod_callback(void *data, const char *name, unsigned long addr)
> +{
> + if (strcmp(((struct symbol_lookup_ctx *)data)->name, name) != 0)
> + return 0;
> +
> + return symbol_callback(data, addr);
> +}
> +
> +/**
> + * bpf_lookup_attach_addr() - Look up the address for a symbol
> + *
> + * @mod: kernel module to look up the symbol in, NULL means to look up
> + * kernel symbols
> + * @sym: the symbol to resolve
> + * @addr: pointer to store the result
> + *
> + * Look up the address of the symbol @sym; the address must have a
> + * corresponding ftrace location. If multiple symbols named @sym exist,
> + * the one that has an ftrace location is returned. If more than one
> + * has an ftrace location, -EADDRNOTAVAIL is returned.
> + *
> + * Return: 0 on success, -errno otherwise.
> + */
> +static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
> + unsigned long *addr)
> +{
> + struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
> + int err;
> +
> + if (!mod)
> + err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
This is also not really equivalent to kallsyms_lookup_name(). kallsyms_on_each_match_symbol()
only iterates over all symbols in vmlinux whereas kallsyms_lookup_name() looks up both vmlinux
and modules.
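Probably the lookup needs to fall back to walking the module symbols as well when
nothing is found in vmlinux, otherwise the coverage of kallsyms_lookup_name() is lost.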
> + else
> + err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
> + &ctx);
> +
> + if (!ctx.addr)
> + return -ENOENT;
> +
> + if (err)
> + return err;
> +
> + *addr = ctx.addr;
> +
> + return 0;
> +}
> +
> int bpf_check_attach_target(struct bpf_verifier_log *log,
> const struct bpf_prog *prog,
> const struct bpf_prog *tgt_prog,
> @@ -23689,18 +23755,18 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> if (btf_is_module(btf)) {
> mod = btf_try_get_module(btf);
> if (mod)
> - addr = find_kallsyms_symbol_value(mod, tname);
> + ret = bpf_lookup_attach_addr(mod, tname, &addr);
> else
> - addr = 0;
> + ret = -ENOENT;
> } else {
> - addr = kallsyms_lookup_name(tname);
> + ret = bpf_lookup_attach_addr(NULL, tname, &addr);
> }
> - if (!addr) {
> + if (ret) {
> module_put(mod);
> bpf_log(log,
> "The address of function %s cannot be found\n",
> tname);
> - return -ENOENT;
> + return ret;
> }
> }
>
* Re: [PATCH bpf-next] bpf: make the attach target more accurate
2025-07-07 11:35 [PATCH bpf-next] bpf: make the attach target more accurate Menglong Dong
2025-07-07 13:09 ` Daniel Borkmann
@ 2025-07-07 20:25 ` kernel test robot
1 sibling, 0 replies; 4+ messages in thread
From: kernel test robot @ 2025-07-07 20:25 UTC (permalink / raw)
To: Menglong Dong, ast
Cc: llvm, oe-kbuild-all, daniel, john.fastabend, andrii, martin.lau,
eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa, bpf,
linux-kernel, Menglong Dong
Hi Menglong,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Menglong-Dong/bpf-make-the-attach-target-more-accurate/20250707-194159
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20250707113528.378303-1-dongml2%40chinatelecom.cn
patch subject: [PATCH bpf-next] bpf: make the attach target more accurate
config: i386-buildonly-randconfig-003-20250708 (https://download.01.org/0day-ci/archive/20250708/202507080452.fCL471ap-lkp@intel.com/config)
compiler: clang version 20.1.7 (https://github.com/llvm/llvm-project 6146a88f60492b520a36f8f8f3231e15f3cc6082)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250708/202507080452.fCL471ap-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507080452.fCL471ap-lkp@intel.com/
All errors (new ones prefixed by >>):
>> kernel/bpf/verifier.c:23491:43: error: incomplete definition of type 'const struct module'
23491 | err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
| ~~~^
include/linux/printk.h:400:8: note: forward declaration of 'struct module'
400 | struct module;
| ^
1 error generated.
vim +23491 kernel/bpf/verifier.c
23466
23467 /**
23468 * bpf_lookup_attach_addr: Lookup address for a symbol
23469 *
23470 * @mod: kernel module to lookup the symbol, NULL means to lookup the kernel
23471 * symbols
23472 * @sym: the symbol to resolve
23473 * @addr: pointer to store the result
23474 *
23475 * Lookup the address of the symbol @sym, and the address should has
23476 * corresponding ftrace location. If multiple symbols with the name @sym
23477 * exist, the one that has ftrace location will be returned. If more than
23478 * 1 has ftrace location, -EADDRNOTAVAIL will be returned.
23479 *
23480 * Returns: 0 on success, -errno otherwise.
23481 */
23482 static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
23483 unsigned long *addr)
23484 {
23485 struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
23486 int err;
23487
23488 if (!mod)
23489 err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
23490 else
23491 err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
23492 &ctx);
23493
23494 if (!ctx.addr)
23495 return -ENOENT;
23496
23497 if (err)
23498 return err;
23499
23500 *addr = ctx.addr;
23501
23502 return 0;
23503 }
23504
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next] bpf: make the attach target more accurate
2025-07-07 13:09 ` Daniel Borkmann
@ 2025-07-08 2:34 ` Menglong Dong
0 siblings, 0 replies; 4+ messages in thread
From: Menglong Dong @ 2025-07-08 2:34 UTC (permalink / raw)
To: Daniel Borkmann
Cc: ast, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, bpf, linux-kernel,
Menglong Dong, alan.maguire
On Mon, Jul 7, 2025 at 9:09 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/7/25 1:35 PM, Menglong Dong wrote:
> > Currently, we look up the address of the attach target in
> > bpf_check_attach_target() with find_kallsyms_symbol_value() or
> > kallsyms_lookup_name(), which is not accurate in some cases.
> >
> > For example, we want to attach to the target "t_next", but multiple
> > symbols named "t_next" exist in kallsyms. The one that
> > kallsyms_lookup_name() returns may have no ftrace record, which makes
> > the attach target unavailable. So we want the one that has an ftrace
> > record to be returned.
> >
> > Meanwhile, more than one symbol named "t_next" may have an ftrace
> > record. In that case, the attach target is ambiguous, and the attach
> > should fail.
> >
> > Introduce bpf_lookup_attach_addr() to do the address lookup and solve
> > this problem.
> >
> > Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
>
> Breaks CI, see also:
Yeah, I should run the whole selftests :/
>
> First test_progs failure (test_progs-aarch64-gcc-14):
> #467/1 tracing_failure/bpf_spin_lock
> test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
> libbpf: prog 'test_spin_lock': BPF program load failed: -ENOENT
> libbpf: prog 'test_spin_lock': -- BEGIN PROG LOAD LOG --
> The address of function bpf_spin_lock cannot be found
> processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> -- END PROG LOAD LOG --
> libbpf: prog 'test_spin_lock': failed to load: -ENOENT
> libbpf: failed to load object 'tracing_failure'
> libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
> test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)
> #467/2 tracing_failure/bpf_spin_unlock
> test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
> libbpf: prog 'test_spin_unlock': BPF program load failed: -ENOENT
> libbpf: prog 'test_spin_unlock': -- BEGIN PROG LOAD LOG --
> The address of function bpf_spin_unlock cannot be found
> processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> -- END PROG LOAD LOG --
> libbpf: prog 'test_spin_unlock': failed to load: -ENOENT
> libbpf: failed to load object 'tracing_failure'
> libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
> test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)
>
> > kernel/bpf/verifier.c | 76 ++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 71 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 0f6cc2275695..9a7128da6d13 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -23436,6 +23436,72 @@ static int check_non_sleepable_error_inject(u32 btf_id)
> > return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
> > }
> >
> > +struct symbol_lookup_ctx {
> > + const char *name;
> > + unsigned long addr;
> > +};
> > +
> > +static int symbol_callback(void *data, unsigned long addr)
> > +{
> > + struct symbol_lookup_ctx *ctx = data;
> > +
> > + if (!ftrace_location(addr))
> > + return 0;
> > +
> > + if (ctx->addr)
> > + return -EADDRNOTAVAIL;
> > +
> > + ctx->addr = addr;
> > +
> > + return 0;
> > +}
> > +
> > +static int symbol_mod_callback(void *data, const char *name, unsigned long addr)
> > +{
> > + if (strcmp(((struct symbol_lookup_ctx *)data)->name, name) != 0)
> > + return 0;
> > +
> > + return symbol_callback(data, addr);
> > +}
> > +
> > +/**
> > + * bpf_lookup_attach_addr() - Look up the address for a symbol
> > + *
> > + * @mod: kernel module to look up the symbol in, NULL means to look up
> > + * kernel symbols
> > + * @sym: the symbol to resolve
> > + * @addr: pointer to store the result
> > + *
> > + * Look up the address of the symbol @sym; the address must have a
> > + * corresponding ftrace location. If multiple symbols named @sym exist,
> > + * the one that has an ftrace location is returned. If more than one
> > + * has an ftrace location, -EADDRNOTAVAIL is returned.
> > + *
> > + * Return: 0 on success, -errno otherwise.
> > + */
> > +static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
> > + unsigned long *addr)
> > +{
> > + struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
> > + int err;
> > +
> > + if (!mod)
> > + err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
>
> This is also not really equivalent to kallsyms_lookup_name(). kallsyms_on_each_match_symbol()
> only iterates over all symbols in vmlinux whereas kallsyms_lookup_name() looks up both vmlinux
> and modules.
Yeah, my mistake. I'll fixup this logic in the next version.
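Something like the below (completely untested) is what I have in mind, reusing the
callbacks from this patch and assuming module_kallsyms_on_each_symbol() walks all
loaded modules when modname is NULL:

static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
				  unsigned long *addr)
{
	struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
	int err;

	if (mod) {
		/* module BTF: scope the lookup to that module only */
		err = module_kallsyms_on_each_symbol(mod->name,
						     symbol_mod_callback, &ctx);
	} else {
		/* try vmlinux first, then fall back to all loaded modules */
		err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
		if (!err && !ctx.addr)
			err = module_kallsyms_on_each_symbol(NULL,
							     symbol_mod_callback,
							     &ctx);
	}

	if (err)
		return err;
	if (!ctx.addr)
		return -ENOENT;

	*addr = ctx.addr;
	return 0;
}

Whether a name that has ftrace records in both vmlinux and a module should then
be treated as ambiguous too is a separate question.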
Thanks!
Menglong Dong
>
> > + else
> > + err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
> > + &ctx);
> > +
> > + if (!ctx.addr)
> > + return -ENOENT;
> > +
> > + if (err)
> > + return err;
> > +
> > + *addr = ctx.addr;
> > +
> > + return 0;
> > +}
> > +
> > int bpf_check_attach_target(struct bpf_verifier_log *log,
> > const struct bpf_prog *prog,
> > const struct bpf_prog *tgt_prog,
> > @@ -23689,18 +23755,18 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> > if (btf_is_module(btf)) {
> > mod = btf_try_get_module(btf);
> > if (mod)
> > - addr = find_kallsyms_symbol_value(mod, tname);
> > + ret = bpf_lookup_attach_addr(mod, tname, &addr);
> > else
> > - addr = 0;
> > + ret = -ENOENT;
> > } else {
> > - addr = kallsyms_lookup_name(tname);
> > + ret = bpf_lookup_attach_addr(NULL, tname, &addr);
> > }
> > - if (!addr) {
> > + if (ret) {
> > module_put(mod);
> > bpf_log(log,
> > "The address of function %s cannot be found\n",
> > tname);
> > - return -ENOENT;
> > + return ret;
> > }
> > }
> >
>