* [PATCH bpf-next v2 0/2] libbpf: allow address-based single kprobe attach @ 2026-03-29 12:43 Hoyeon Lee 2026-03-29 12:43 ` [PATCH bpf-next v2 1/2] " Hoyeon Lee 2026-03-29 12:43 ` [PATCH bpf-next v2 2/2] selftests/bpf: add test for " Hoyeon Lee 0 siblings, 2 replies; 11+ messages in thread From: Hoyeon Lee @ 2026-03-29 12:43 UTC (permalink / raw) To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau Cc: Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel Today libbpf can attach a single kprobe only by function name, with an optional offset. But the kernel also supports attaching a single kprobe directly by raw kernel address, through both legacy tracefs/debugfs kprobes and PMU-based non-legacy kprobes. Since libbpf doesn't support this yet, callers that already have a target IP still have to drop down to perf_event_open() or direct tracefs writes. This patchset adds address-based single-kprobe attach support to bpf_program__attach_kprobe_opts() and covers it in selftests/bpf. The first commit adds bpf_kprobe_opts.addr so that libbpf can attach single kprobes by raw address through both legacy tracefs/debugfs and PMU-based non-legacy paths. The second commit extends attach_probe selftests/bpf with address-based kprobe attach subtests for these paths. --- Changes in v2: - Fix line wrapping and indentation Hoyeon Lee (2): libbpf: allow address-based single kprobe attach selftests/bpf: add test for address-based single kprobe attach tools/lib/bpf/libbpf.c | 88 +++++++++++++------ tools/lib/bpf/libbpf.h | 5 +- .../selftests/bpf/prog_tests/attach_probe.c | 49 +++++++++++ 3 files changed, 114 insertions(+), 28 deletions(-) -- 2.52.0 ^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-29 12:43 [PATCH bpf-next v2 0/2] libbpf: allow address-based single kprobe attach Hoyeon Lee @ 2026-03-29 12:43 ` Hoyeon Lee 2026-03-30 10:08 ` Jiri Olsa 2026-03-31 0:33 ` Andrii Nakryiko 2026-03-29 12:43 ` [PATCH bpf-next v2 2/2] selftests/bpf: add test for " Hoyeon Lee 1 sibling, 2 replies; 11+ messages in thread From: Hoyeon Lee @ 2026-03-29 12:43 UTC (permalink / raw) To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau Cc: Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel bpf_program__attach_kprobe_opts() currently attaches a single kprobe only by func_name, with an optional offset. This covers only the symbol- based form, not the raw-address form that the kernel already supports for both kprobe PMU events and legacy tracefs/debugfs kprobes. Callers that already have a target IP still have to drop down to perf_event_open() or direct tracefs writes. libbpf already exposes address-based attach for kprobe_multi through bpf_kprobe_multi_opts.addrs. This commit adds bpf_kprobe_opts.addr so that single kprobes can be attached either by func_name + offset or by raw address. 
Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com> --- tools/lib/bpf/libbpf.c | 88 +++++++++++++++++++++++++++++------------- tools/lib/bpf/libbpf.h | 5 ++- 2 files changed, 65 insertions(+), 28 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 9ea41f40dc82..8e1e99ba38ad 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -11523,7 +11523,8 @@ static int determine_uprobe_retprobe_bit(void) #define PERF_UPROBE_REF_CTR_OFFSET_SHIFT 32 static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name, - uint64_t offset, int pid, size_t ref_ctr_off) + uint64_t offset_or_addr, int pid, + size_t ref_ctr_off) { const size_t attr_sz = sizeof(struct perf_event_attr); struct perf_event_attr attr; @@ -11558,7 +11559,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name, attr.type = type; attr.config |= (__u64)ref_ctr_off << PERF_UPROBE_REF_CTR_OFFSET_SHIFT; attr.config1 = ptr_to_u64(name); /* kprobe_func or uprobe_path */ - attr.config2 = offset; /* kprobe_addr or probe_offset */ + attr.config2 = offset_or_addr; /* kprobe_addr or probe_offset */ /* pid filter is meaningful only for uprobes */ pfd = syscall(__NR_perf_event_open, &attr, @@ -11633,13 +11634,15 @@ static const char *tracefs_available_filter_functions_addrs(void) } static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, - const char *name, size_t offset) + const char *name, + uint64_t offset_or_addr) { static int index = 0; int i; - snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(), - __sync_fetch_and_add(&index, 1), name, offset); + snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%" PRIx64, getpid(), + __sync_fetch_and_add(&index, 1), name ?: "addr", + offset_or_addr); /* sanitize name in the probe name */ for (i = 0; buf[i]; i++) { @@ -11648,13 +11651,28 @@ static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, } } +static void gen_kprobe_target(char *buf, size_t buf_sz, const char *name, + uint64_t 
offset_or_addr) +{ + if (name) + snprintf(buf, buf_sz, "%s+0x%" PRIx64, name, offset_or_addr); + else + snprintf(buf, buf_sz, "0x%" PRIx64, offset_or_addr); +} + static int add_kprobe_event_legacy(const char *probe_name, bool retprobe, - const char *kfunc_name, size_t offset) + const char *kfunc_name, + uint64_t offset_or_addr) { - return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s+0x%zx", + char probe_target[128]; + + gen_kprobe_target(probe_target, sizeof(probe_target), kfunc_name, + offset_or_addr); + + return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s", retprobe ? 'r' : 'p', retprobe ? "kretprobes" : "kprobes", - probe_name, kfunc_name, offset); + probe_name, probe_target); } static int remove_kprobe_event_legacy(const char *probe_name, bool retprobe) @@ -11674,25 +11692,29 @@ static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retpro } static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe, - const char *kfunc_name, size_t offset, int pid) + const char *kfunc_name, + uint64_t offset_or_addr, int pid) { const size_t attr_sz = sizeof(struct perf_event_attr); struct perf_event_attr attr; int type, pfd, err; + char probe_target[128]; - err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, offset); + gen_kprobe_target(probe_target, sizeof(probe_target), kfunc_name, + offset_or_addr); + + err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, + offset_or_addr); if (err < 0) { - pr_warn("failed to add legacy kprobe event for '%s+0x%zx': %s\n", - kfunc_name, offset, - errstr(err)); + pr_warn("failed to add legacy kprobe event for '%s': %s\n", + probe_target, errstr(err)); return err; } type = determine_kprobe_perf_type_legacy(probe_name, retprobe); if (type < 0) { err = type; - pr_warn("failed to determine legacy kprobe event id for '%s+0x%zx': %s\n", - kfunc_name, offset, - errstr(err)); + pr_warn("failed to determine legacy kprobe event id for '%s': %s\n", + probe_target, 
errstr(err)); goto err_clean_legacy; } @@ -11784,6 +11806,9 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog, enum probe_attach_mode attach_mode; char *legacy_probe = NULL; struct bpf_link *link; + uint64_t offset_or_addr; + char probe_target[128]; + unsigned long addr; size_t offset; bool retprobe, legacy; int pfd, err; @@ -11794,8 +11819,18 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog, attach_mode = OPTS_GET(opts, attach_mode, PROBE_ATTACH_MODE_DEFAULT); retprobe = OPTS_GET(opts, retprobe, false); offset = OPTS_GET(opts, offset, 0); + addr = OPTS_GET(opts, addr, 0); + offset_or_addr = func_name ? offset : addr; pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0); + if (!!func_name == !!addr) + return libbpf_err_ptr(-EINVAL); + if (addr && offset) + return libbpf_err_ptr(-EINVAL); + + gen_kprobe_target(probe_target, sizeof(probe_target), func_name, + offset_or_addr); + legacy = determine_kprobe_perf_type() < 0; switch (attach_mode) { case PROBE_ATTACH_MODE_LEGACY: @@ -11819,37 +11854,36 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog, if (!legacy) { pfd = perf_event_open_probe(false /* uprobe */, retprobe, - func_name, offset, + func_name, offset_or_addr, -1 /* pid */, 0 /* ref_ctr_off */); } else { char probe_name[MAX_EVENT_NAME_LEN]; gen_probe_legacy_event_name(probe_name, sizeof(probe_name), - func_name, offset); + func_name, offset_or_addr); legacy_probe = strdup(probe_name); if (!legacy_probe) return libbpf_err_ptr(-ENOMEM); - pfd = perf_event_kprobe_open_legacy(legacy_probe, retprobe, func_name, - offset, -1 /* pid */); + pfd = perf_event_kprobe_open_legacy(legacy_probe, retprobe, + func_name, offset_or_addr, + -1 /* pid */); } if (pfd < 0) { - err = -errno; - pr_warn("prog '%s': failed to create %s '%s+0x%zx' perf event: %s\n", + err = pfd; + pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n", prog->name, retprobe ? 
"kretprobe" : "kprobe", - func_name, offset, - errstr(err)); + probe_target, errstr(err)); goto err_out; } link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts); err = libbpf_get_error(link); if (err) { close(pfd); - pr_warn("prog '%s': failed to attach to %s '%s+0x%zx': %s\n", + pr_warn("prog '%s': failed to attach to %s '%s': %s\n", prog->name, retprobe ? "kretprobe" : "kprobe", - func_name, offset, - errstr(err)); + probe_target, errstr(err)); goto err_clean_legacy; } if (legacy) { diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 0be34852350f..a5ad174d9add 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -563,9 +563,12 @@ struct bpf_kprobe_opts { bool retprobe; /* kprobe attach mode */ enum probe_attach_mode attach_mode; + /* kernel address for kprobe. If specified, offset must be 0 */ + unsigned long addr; size_t :0; }; -#define bpf_kprobe_opts__last_field attach_mode + +#define bpf_kprobe_opts__last_field addr LIBBPF_API struct bpf_link * bpf_program__attach_kprobe(const struct bpf_program *prog, bool retprobe, -- 2.52.0 ^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-29 12:43 ` [PATCH bpf-next v2 1/2] " Hoyeon Lee @ 2026-03-30 10:08 ` Jiri Olsa 2026-03-31 1:47 ` Hoyeon Lee 2026-03-31 0:33 ` Andrii Nakryiko 1 sibling, 1 reply; 11+ messages in thread From: Jiri Olsa @ 2026-03-30 10:08 UTC (permalink / raw) To: Hoyeon Lee Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel On Sun, Mar 29, 2026 at 09:43:38PM +0900, Hoyeon Lee wrote: > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > by func_name, with an optional offset. This covers only the symbol- > based form, not the raw-address form that the kernel already supports > for both kprobe PMU events and legacy tracefs/debugfs kprobes. Callers > that already have a target IP still have to drop down to > perf_event_open() or direct tracefs writes. > > libbpf already exposes address-based attach for kprobe_multi through > bpf_kprobe_multi_opts.addrs. This commit adds bpf_kprobe_opts.addr so > that single kprobes can be attached either by func_name + offset or by > raw address. curious, is this change just for the api to be complete or you do have a usecase where you can't use kprobe_multi and need to attach kprobe by address? 
SNIP > static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, > - const char *name, size_t offset) > + const char *name, > + uint64_t offset_or_addr) > { > static int index = 0; > int i; > > - snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(), > - __sync_fetch_and_add(&index, 1), name, offset); > + snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%" PRIx64, getpid(), > + __sync_fetch_and_add(&index, 1), name ?: "addr", > + offset_or_addr); > > /* sanitize name in the probe name */ > for (i = 0; buf[i]; i++) { > @@ -11648,13 +11651,28 @@ static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, > } > } > > +static void gen_kprobe_target(char *buf, size_t buf_sz, const char *name, > + uint64_t offset_or_addr) > +{ > + if (name) > + snprintf(buf, buf_sz, "%s+0x%" PRIx64, name, offset_or_addr); > + else > + snprintf(buf, buf_sz, "0x%" PRIx64, offset_or_addr); > +} > + > static int add_kprobe_event_legacy(const char *probe_name, bool retprobe, > - const char *kfunc_name, size_t offset) > + const char *kfunc_name, > + uint64_t offset_or_addr) > { > - return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s+0x%zx", > + char probe_target[128]; > + > + gen_kprobe_target(probe_target, sizeof(probe_target), kfunc_name, > + offset_or_addr); > + it seems like it'd be easier to get probe_target (via gen_kprobe_target) in bpf_program__attach_kprobe_opts and pass it down instead of generating it over and over again > + return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s", > retprobe ? 'r' : 'p', > retprobe ? 
"kretprobes" : "kprobes", > - probe_name, kfunc_name, offset); > + probe_name, probe_target); > } > > static int remove_kprobe_event_legacy(const char *probe_name, bool retprobe) > @@ -11674,25 +11692,29 @@ static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retpro > } > > static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe, > - const char *kfunc_name, size_t offset, int pid) > + const char *kfunc_name, > + uint64_t offset_or_addr, int pid) > { > const size_t attr_sz = sizeof(struct perf_event_attr); > struct perf_event_attr attr; > int type, pfd, err; > + char probe_target[128]; we need bigger buffer as explained by sashiko [1] [1] https://sashiko.dev/#/patchset/20260329124429.689912-1-hoyeon.lee%40suse.com jirka > > - err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, offset); > + gen_kprobe_target(probe_target, sizeof(probe_target), kfunc_name, > + offset_or_addr); > + > + err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, > + offset_or_addr); > if (err < 0) { > - pr_warn("failed to add legacy kprobe event for '%s+0x%zx': %s\n", > - kfunc_name, offset, > - errstr(err)); > + pr_warn("failed to add legacy kprobe event for '%s': %s\n", > + probe_target, errstr(err)); > return err; > } > type = determine_kprobe_perf_type_legacy(probe_name, retprobe); > if (type < 0) { > err = type; > - pr_warn("failed to determine legacy kprobe event id for '%s+0x%zx': %s\n", > - kfunc_name, offset, > - errstr(err)); > + pr_warn("failed to determine legacy kprobe event id for '%s': %s\n", > + probe_target, errstr(err)); > goto err_clean_legacy; > } > SNIP ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-30 10:08 ` Jiri Olsa @ 2026-03-31 1:47 ` Hoyeon Lee 0 siblings, 0 replies; 11+ messages in thread From: Hoyeon Lee @ 2026-03-31 1:47 UTC (permalink / raw) To: Jiri Olsa Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel On Mon, Mar 30, 2026 at 7:08 PM Jiri Olsa <olsajiri@gmail.com> wrote: > > On Sun, Mar 29, 2026 at 09:43:38PM +0900, Hoyeon Lee wrote: > > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > > by func_name, with an optional offset. This covers only the symbol- > > based form, not the raw-address form that the kernel already supports > > for both kprobe PMU events and legacy tracefs/debugfs kprobes. Callers > > that already have a target IP still have to drop down to > > perf_event_open() or direct tracefs writes. > > > > libbpf already exposes address-based attach for kprobe_multi through > > bpf_kprobe_multi_opts.addrs. This commit adds bpf_kprobe_opts.addr so > > that single kprobes can be attached either by func_name + offset or by > > raw address. > > curious, is this change just for the api to be complete or you do have > a usecase where you can't use kprobe_multi and need to attach kprobe > by address? > The main motivation was to fill the single-kprobe API gap, not to replace kprobe_multi. There is also a case where kprobe_multi is not a drop-in replacement. bpf_task_fd_query() operates on perf event fds and does not support generic link fds, while kprobe_multi returns a link fd. That link fd can be queried through bpf_link_get_info_by_fd(), but it still cannot be used as a drop-in replacement here. For PMU-based non-legacy attach, Andrii is right that func_name = NULL with opts.offset = <raw-address> already works today. 
However, this does not work for legacy tracefs/debugfs kprobes, because the tracefs event string formatting still expects symbol-based input. So I think the better direction for v3 is to avoid adding a new field to bpf_kprobe_opts(), document that offset is treated as an absolute address when func_name = NULL, and make the legacy path support the same raw-address form as well. > SNIP > > > static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, > > - const char *name, size_t offset) > > + const char *name, > > + uint64_t offset_or_addr) > > { > > static int index = 0; > > int i; > > > > - snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(), > > - __sync_fetch_and_add(&index, 1), name, offset); > > + snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%" PRIx64, getpid(), > > + __sync_fetch_and_add(&index, 1), name ?: "addr", > > + offset_or_addr); > > > > /* sanitize name in the probe name */ > > for (i = 0; buf[i]; i++) { > > @@ -11648,13 +11651,28 @@ static void gen_probe_legacy_event_name(char *buf, size_t buf_sz, > > } > > } > > > > +static void gen_kprobe_target(char *buf, size_t buf_sz, const char *name, > > + uint64_t offset_or_addr) > > +{ > > + if (name) > > + snprintf(buf, buf_sz, "%s+0x%" PRIx64, name, offset_or_addr); > > + else > > + snprintf(buf, buf_sz, "0x%" PRIx64, offset_or_addr); > > +} > > + > > static int add_kprobe_event_legacy(const char *probe_name, bool retprobe, > > - const char *kfunc_name, size_t offset) > > + const char *kfunc_name, > > + uint64_t offset_or_addr) > > { > > - return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s+0x%zx", > > + char probe_target[128]; > > + > > + gen_kprobe_target(probe_target, sizeof(probe_target), kfunc_name, > > + offset_or_addr); > > + > > it seems like it'd be easier to get probe_target (via gen_kprobe_target) > in bpf_program__attach_kprobe_opts and pass it down instead of generating > it over and over again > Good point. 
I'll simplify this in v3 and build probe_target once in bpf_program__attach_kprobe_opts(), then pass it down. > > + return append_to_file(tracefs_kprobe_events(), "%c:%s/%s %s", > > retprobe ? 'r' : 'p', > > retprobe ? "kretprobes" : "kprobes", > > - probe_name, kfunc_name, offset); > > + probe_name, probe_target); > > } > > > > static int remove_kprobe_event_legacy(const char *probe_name, bool retprobe) > > @@ -11674,25 +11692,29 @@ static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retpro > > } > > > > static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe, > > - const char *kfunc_name, size_t offset, int pid) > > + const char *kfunc_name, > > + uint64_t offset_or_addr, int pid) > > { > > const size_t attr_sz = sizeof(struct perf_event_attr); > > struct perf_event_attr attr; > > int type, pfd, err; > > + char probe_target[128]; > > we need bigger buffer as explained by sashiko [1] > Thanks for the pointer! I'll switch this to a larger buffer in v3. > [1] https://sashiko.dev/#/patchset/20260329124429.689912-1-hoyeon.lee%40suse.com > > jirka > > SNIP ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-29 12:43 ` [PATCH bpf-next v2 1/2] " Hoyeon Lee 2026-03-30 10:08 ` Jiri Olsa @ 2026-03-31 0:33 ` Andrii Nakryiko 2026-03-31 1:48 ` Hoyeon Lee 1 sibling, 1 reply; 11+ messages in thread From: Andrii Nakryiko @ 2026-03-31 0:33 UTC (permalink / raw) To: Hoyeon Lee Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel On Sun, Mar 29, 2026 at 5:44 AM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > by func_name, with an optional offset. This covers only the symbol- have you tried passing NULL for func_name and specifying absolute address in opts.offset? Looking at the code I don't see why that won't work, we don't enforce func_name to be non-NULL This NULL will turn into config1 = 0, and offset will be config 2, which I think is what you want to attach by address, according to perf_event_open documentation union { __u64 bp_addr; /* breakpoint address */ __u64 kprobe_func; /* for perf_kprobe */ __u64 uprobe_path; /* for perf_uprobe */ __u64 config1; /* extension of config */ }; union { __u64 bp_len; /* breakpoint size */ __u64 kprobe_addr; /* with kprobe_func == NULL */ __u64 probe_offset; /* for perf_[k,u]probe */ __u64 config2; /* extension of config1 */ }; This is the same approach as with uprobes, btw. > based form, not the raw-address form that the kernel already supports > for both kprobe PMU events and legacy tracefs/debugfs kprobes. Callers > that already have a target IP still have to drop down to > perf_event_open() or direct tracefs writes. > > libbpf already exposes address-based attach for kprobe_multi through > bpf_kprobe_multi_opts.addrs. 
This commit adds bpf_kprobe_opts.addr so > that single kprobes can be attached either by func_name + offset or by > raw address. > > Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com> > --- > tools/lib/bpf/libbpf.c | 88 +++++++++++++++++++++++++++++------------- > tools/lib/bpf/libbpf.h | 5 ++- > 2 files changed, 65 insertions(+), 28 deletions(-) > [...] ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-31 0:33 ` Andrii Nakryiko @ 2026-03-31 1:48 ` Hoyeon Lee 2026-03-31 2:15 ` Alexei Starovoitov 0 siblings, 1 reply; 11+ messages in thread From: Hoyeon Lee @ 2026-03-31 1:48 UTC (permalink / raw) To: Andrii Nakryiko Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel On Tue, Mar 31, 2026 at 9:33 AM Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote: > > On Sun, Mar 29, 2026 at 5:44 AM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > > > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > > by func_name, with an optional offset. This covers only the symbol- > > have you tried passing NULL for func_name and specifying absolute > address in opts.offset? Looking at the code I don't see why that won't > work, we don't enforce func_name to be non-NULL > > This NULL will turn into config1 = 0, and offset will be config 2, > which I think is what you want to attach by address, according to > perf_event_open documentation > > union { > __u64 bp_addr; /* breakpoint address */ > __u64 kprobe_func; /* for perf_kprobe */ > __u64 uprobe_path; /* for perf_uprobe */ > __u64 config1; /* extension of config */ > }; > > union { > __u64 bp_len; /* breakpoint size */ > __u64 kprobe_addr; /* with kprobe_func == NULL */ > __u64 probe_offset; /* for perf_[k,u]probe */ > __u64 config2; /* extension of config1 */ > }; > > This is the same approach as with uprobes, btw. > > I tested this after your comment, and you are right. For PMU-based non-legacy attach, func_name = NULL with opts.offset = <raw-address> already works today. However, this does not work for legacy tracefs/debugfs kprobes, because the tracefs event string formatting still expects symbol-based input. 
So for v3, instead of adding new field to bpf_kprobe_opts(), I'll document that offset can be treated as an absolute address when func_name = NULL, and make legacy path support the same raw-address form as well. > > based form, not the raw-address form that the kernel already supports > > for both kprobe PMU events and legacy tracefs/debugfs kprobes. Callers > > that already have a target IP still have to drop down to > > perf_event_open() or direct tracefs writes. > > > > libbpf already exposes address-based attach for kprobe_multi through > > bpf_kprobe_multi_opts.addrs. This commit adds bpf_kprobe_opts.addr so > > that single kprobes can be attached either by func_name + offset or by > > raw address. > > > > Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com> > > --- > > tools/lib/bpf/libbpf.c | 88 +++++++++++++++++++++++++++++------------- > > tools/lib/bpf/libbpf.h | 5 ++- > > 2 files changed, 65 insertions(+), 28 deletions(-) > > > > [...] ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-31 1:48 ` Hoyeon Lee @ 2026-03-31 2:15 ` Alexei Starovoitov 2026-03-31 5:55 ` Hoyeon Lee 0 siblings, 1 reply; 11+ messages in thread From: Alexei Starovoitov @ 2026-03-31 2:15 UTC (permalink / raw) To: Hoyeon Lee Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, open list:KERNEL SELFTEST FRAMEWORK, LKML On Mon, Mar 30, 2026 at 6:49 PM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > On Tue, Mar 31, 2026 at 9:33 AM Andrii Nakryiko > <andrii.nakryiko@gmail.com> wrote: > > > > On Sun, Mar 29, 2026 at 5:44 AM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > > > > > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > > > by func_name, with an optional offset. This covers only the symbol- > > > > have you tried passing NULL for func_name and specifying absolute > > address in opts.offset? Looking at the code I don't see why that won't > > work, we don't enforce func_name to be non-NULL > > > > This NULL will turn into config1 = 0, and offset will be config 2, > > which I think is what you want to attach by address, according to > > perf_event_open documentation > > > > union { > > __u64 bp_addr; /* breakpoint address */ > > __u64 kprobe_func; /* for perf_kprobe */ > > __u64 uprobe_path; /* for perf_uprobe */ > > __u64 config1; /* extension of config */ > > }; > > > > union { > > __u64 bp_len; /* breakpoint size */ > > __u64 kprobe_addr; /* with kprobe_func == NULL */ > > __u64 probe_offset; /* for perf_[k,u]probe */ > > __u64 config2; /* extension of config1 */ > > }; > > > > This is the same approach as with uprobes, btw. > > > > > > I tested this after your comment, and you are right. For PMU-based > non-legacy attach, func_name = NULL with opts.offset = <raw-address> > already works today. 
> > However, this does not work for legacy tracefs/debugfs kprobes, because > the tracefs event string formatting still expects symbol-based input. > > So for v3, instead of adding new field to bpf_kprobe_opts(), I'll document > that offset can be treated as an absolute address when func_name = NULL, This is fine. > and make legacy path support the same raw-address form as well. but not this. libbpf doesn't need to support legacy interfaces. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: allow address-based single kprobe attach 2026-03-31 2:15 ` Alexei Starovoitov @ 2026-03-31 5:55 ` Hoyeon Lee 0 siblings, 0 replies; 11+ messages in thread From: Hoyeon Lee @ 2026-03-31 5:55 UTC (permalink / raw) To: Alexei Starovoitov Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, open list:KERNEL SELFTEST FRAMEWORK, LKML On Tue, Mar 31, 2026 at 11:15 AM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote: > > On Mon, Mar 30, 2026 at 6:49 PM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > > > On Tue, Mar 31, 2026 at 9:33 AM Andrii Nakryiko > > <andrii.nakryiko@gmail.com> wrote: > > > > > > On Sun, Mar 29, 2026 at 5:44 AM Hoyeon Lee <hoyeon.lee@suse.com> wrote: > > > > > > > > bpf_program__attach_kprobe_opts() currently attaches a single kprobe only > > > > by func_name, with an optional offset. This covers only the symbol- > > > > > > have you tried passing NULL for func_name and specifying absolute > > > address in opts.offset? Looking at the code I don't see why that won't > > > work, we don't enforce func_name to be non-NULL > > > > > > This NULL will turn into config1 = 0, and offset will be config 2, > > > which I think is what you want to attach by address, according to > > > perf_event_open documentation > > > > > > union { > > > __u64 bp_addr; /* breakpoint address */ > > > __u64 kprobe_func; /* for perf_kprobe */ > > > __u64 uprobe_path; /* for perf_uprobe */ > > > __u64 config1; /* extension of config */ > > > }; > > > > > > union { > > > __u64 bp_len; /* breakpoint size */ > > > __u64 kprobe_addr; /* with kprobe_func == NULL */ > > > __u64 probe_offset; /* for perf_[k,u]probe */ > > > __u64 config2; /* extension of config1 */ > > > }; > > > > > > This is the same approach as with uprobes, btw. > > > > > > > > > > I tested this after your comment, and you are right. 
For PMU-based > > non-legacy attach, func_name = NULL with opts.offset = <raw-address> > > already works today. > > > > However, this does not work for legacy tracefs/debugfs kprobes, because > > the tracefs event string formatting still expects symbol-based input. > > > > So for v3, instead of adding new field to bpf_kprobe_opts(), I'll document > > that offset can be treated as an absolute address when func_name = NULL, > > This is fine. > > > and make legacy path support the same raw-address form as well. > > but not this. libbpf doesn't need to support legacy interfaces. Understood. I'll make the legacy kprobe attach API reject func_name = NULL with offset set to a raw address in v3. ^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH bpf-next v2 2/2] selftests/bpf: add test for address-based single kprobe attach 2026-03-29 12:43 [PATCH bpf-next v2 0/2] libbpf: allow address-based single kprobe attach Hoyeon Lee 2026-03-29 12:43 ` [PATCH bpf-next v2 1/2] " Hoyeon Lee @ 2026-03-29 12:43 ` Hoyeon Lee 2026-03-30 10:08 ` Jiri Olsa 1 sibling, 1 reply; 11+ messages in thread From: Hoyeon Lee @ 2026-03-29 12:43 UTC (permalink / raw) To: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau Cc: Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa, Shuah Khan, Feng Yang, linux-kselftest, linux-kernel Currently, attach_probe covers manual single-kprobe attaches by func_name, but not by raw address. This commit adds address-based single-kprobe attach subtests for the two underlying attach paths, legacy tracefs/debugfs and PMU-based non-legacy. The new subtests resolve SYS_NANOSLEEP_KPROBE_NAME through kallsyms, pass the result through bpf_kprobe_opts.addr, and verify that kprobe and kretprobe are still triggered. 
Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com>
---
 .../selftests/bpf/prog_tests/attach_probe.c | 49 +++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
index 9e77e5da7097..64f2ed75779d 100644
--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -123,6 +123,51 @@ static void test_attach_probe_manual(enum probe_attach_mode attach_mode)
 	test_attach_probe_manual__destroy(skel);
 }

+/* manual attach address-based kprobe/kretprobe testings */
+static void test_attach_kprobe_by_addr(enum probe_attach_mode attach_mode)
+{
+	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, kprobe_opts);
+	struct bpf_link *kprobe_link, *kretprobe_link;
+	struct test_attach_probe_manual *skel;
+	unsigned long func_addr;
+
+	if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
+		return;
+
+	func_addr = ksym_get_addr(SYS_NANOSLEEP_KPROBE_NAME);
+	if (!ASSERT_NEQ(func_addr, 0UL, "func_addr"))
+		return;
+
+	skel = test_attach_probe_manual__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_kprobe_manual_open_and_load"))
+		return;
+
+	kprobe_opts.attach_mode = attach_mode;
+	kprobe_opts.retprobe = false;
+	kprobe_opts.addr = func_addr;
+	kprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kprobe,
+						      NULL, &kprobe_opts);
+	if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe_by_addr"))
+		goto cleanup;
+	skel->links.handle_kprobe = kprobe_link;
+
+	kprobe_opts.retprobe = true;
+	kretprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kretprobe,
+							 NULL, &kprobe_opts);
+	if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe_by_addr"))
+		goto cleanup;
+	skel->links.handle_kretprobe = kretprobe_link;
+
+	/* trigger & validate kprobe && kretprobe */
+	usleep(1);
+
+	ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
+	ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
+
+cleanup:
+	test_attach_probe_manual__destroy(skel);
+}
+
 /* attach uprobe/uretprobe long event name testings */
 static void test_attach_uprobe_long_event_name(void)
 {
@@ -416,6 +461,10 @@ void test_attach_probe(void)
 		test_attach_probe_manual(PROBE_ATTACH_MODE_PERF);
 	if (test__start_subtest("manual-link"))
 		test_attach_probe_manual(PROBE_ATTACH_MODE_LINK);
+	if (test__start_subtest("kprobe-legacy-by-addr"))
+		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_LEGACY);
+	if (test__start_subtest("kprobe-perf-by-addr"))
+		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_PERF);

 	if (test__start_subtest("auto"))
 		test_attach_probe_auto(skel);
--
2.52.0
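[Editorial note, not part of the thread: load_kallsyms()/ksym_get_addr() above come from the selftests' trace_helpers; outside the selftests tree a caller would do the equivalent scan of /proc/kallsyms itself. A minimal standalone sketch of that resolution step — the helper names parse_kallsyms_line() and resolve_ksym() are illustrative, not part of libbpf or the selftests:]

```c
#include <stdio.h>
#include <string.h>

/* Parse one /proc/kallsyms line: "<hex-addr> <type> <symbol> [module]".
 * Returns 1 on success, 0 on malformed input. */
static int parse_kallsyms_line(const char *line, unsigned long long *addr,
			       char *sym, size_t sym_sz)
{
	char type;
	char buf[256];

	if (sscanf(line, "%llx %c %255s", addr, &type, buf) != 3)
		return 0;
	snprintf(sym, sym_sz, "%s", buf);
	return 1;
}

/* Scan /proc/kallsyms for an exact symbol name; returns 0 if the symbol
 * is absent or masked (non-root readers may see all-zero addresses,
 * depending on kptr_restrict). */
static unsigned long long resolve_ksym(const char *name)
{
	char line[512], sym[256];
	unsigned long long addr;
	FILE *f = fopen("/proc/kallsyms", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (parse_kallsyms_line(line, &addr, sym, sizeof(sym)) &&
		    strcmp(sym, name) == 0) {
			fclose(f);
			return addr;
		}
	}
	fclose(f);
	return 0;
}
```

[The value returned by resolve_ksym() is what the new option carries: store it into bpf_kprobe_opts.addr and call bpf_program__attach_kprobe_opts(prog, NULL, &opts), as the subtest above does.]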
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: add test for address-based single kprobe attach
  2026-03-29 12:43 ` [PATCH bpf-next v2 2/2] selftests/bpf: add test for " Hoyeon Lee
@ 2026-03-30 10:08   ` Jiri Olsa
  2026-03-31  2:01     ` Hoyeon Lee
  0 siblings, 1 reply; 11+ messages in thread
From: Jiri Olsa @ 2026-03-30 10:08 UTC (permalink / raw)
To: Hoyeon Lee
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
    Song Liu, Yonghong Song, Shuah Khan, Feng Yang, linux-kselftest,
    linux-kernel

On Sun, Mar 29, 2026 at 09:43:39PM +0900, Hoyeon Lee wrote:
> Currently, attach_probe covers manual single-kprobe attaches by
> func_name, but not by raw address. This commit adds address-based
> single-kprobe attach subtests for the two underlying attach paths,
> legacy tracefs/debugfs and PMU-based non-legacy. The new subtests
> resolve SYS_NANOSLEEP_KPROBE_NAME through kallsyms, pass the result
> through bpf_kprobe_opts.addr, and verify that kprobe and kretprobe are
> still triggered.
>
> Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com>
> ---
>  .../selftests/bpf/prog_tests/attach_probe.c | 49 +++++++++++++++++++
>  1 file changed, 49 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> index 9e77e5da7097..64f2ed75779d 100644
> --- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> @@ -123,6 +123,51 @@ static void test_attach_probe_manual(enum probe_attach_mode attach_mode)
>  	test_attach_probe_manual__destroy(skel);
>  }
>
> +/* manual attach address-based kprobe/kretprobe testings */
> +static void test_attach_kprobe_by_addr(enum probe_attach_mode attach_mode)
> +{
> +	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, kprobe_opts);
> +	struct bpf_link *kprobe_link, *kretprobe_link;
> +	struct test_attach_probe_manual *skel;
> +	unsigned long func_addr;
> +
> +	if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
> +		return;
> +
> +	func_addr = ksym_get_addr(SYS_NANOSLEEP_KPROBE_NAME);
> +	if (!ASSERT_NEQ(func_addr, 0UL, "func_addr"))
> +		return;
> +
> +	skel = test_attach_probe_manual__open_and_load();
> +	if (!ASSERT_OK_PTR(skel, "skel_kprobe_manual_open_and_load"))
> +		return;
> +
> +	kprobe_opts.attach_mode = attach_mode;
> +	kprobe_opts.retprobe = false;
> +	kprobe_opts.addr = func_addr;
> +	kprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kprobe,
> +						      NULL, &kprobe_opts);
> +	if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe_by_addr"))
> +		goto cleanup;
> +	skel->links.handle_kprobe = kprobe_link;

we usually use skel->links.handle_kprobe directly, no need to use
kprobe_link or kretprobe_link

> +
> +	kprobe_opts.retprobe = true;
> +	kretprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kretprobe,
> +							 NULL, &kprobe_opts);
> +	if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe_by_addr"))
> +		goto cleanup;
> +	skel->links.handle_kretprobe = kretprobe_link;
> +
> +	/* trigger & validate kprobe && kretprobe */
> +	usleep(1);
> +
> +	ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
> +	ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
> +
> +cleanup:
> +	test_attach_probe_manual__destroy(skel);
> +}
> +
>  /* attach uprobe/uretprobe long event name testings */
>  static void test_attach_uprobe_long_event_name(void)
>  {
> @@ -416,6 +461,10 @@ void test_attach_probe(void)
>  		test_attach_probe_manual(PROBE_ATTACH_MODE_PERF);
>  	if (test__start_subtest("manual-link"))
>  		test_attach_probe_manual(PROBE_ATTACH_MODE_LINK);
> +	if (test__start_subtest("kprobe-legacy-by-addr"))
> +		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_LEGACY);
> +	if (test__start_subtest("kprobe-perf-by-addr"))
> +		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_PERF);

should we test PROBE_ATTACH_MODE_LINK mode as well?

jirka

>
>  	if (test__start_subtest("auto"))
>  		test_attach_probe_auto(skel);
> --
> 2.52.0
>
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: add test for address-based single kprobe attach
  2026-03-30 10:08 ` Jiri Olsa
@ 2026-03-31  2:01   ` Hoyeon Lee
  0 siblings, 0 replies; 11+ messages in thread
From: Hoyeon Lee @ 2026-03-31 2:01 UTC (permalink / raw)
To: Jiri Olsa
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
    Song Liu, Yonghong Song, Shuah Khan, Feng Yang, linux-kselftest,
    linux-kernel

On Mon, Mar 30, 2026 at 7:08 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Sun, Mar 29, 2026 at 09:43:39PM +0900, Hoyeon Lee wrote:
> > Currently, attach_probe covers manual single-kprobe attaches by
> > func_name, but not by raw address. This commit adds address-based
> > single-kprobe attach subtests for the two underlying attach paths,
> > legacy tracefs/debugfs and PMU-based non-legacy. The new subtests
> > resolve SYS_NANOSLEEP_KPROBE_NAME through kallsyms, pass the result
> > through bpf_kprobe_opts.addr, and verify that kprobe and kretprobe are
> > still triggered.
> >
> > Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com>
> > ---
> >  .../selftests/bpf/prog_tests/attach_probe.c | 49 +++++++++++++++++++
> >  1 file changed, 49 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> > index 9e77e5da7097..64f2ed75779d 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> > @@ -123,6 +123,51 @@ static void test_attach_probe_manual(enum probe_attach_mode attach_mode)
> >  	test_attach_probe_manual__destroy(skel);
> >  }
> >
> > +/* manual attach address-based kprobe/kretprobe testings */
> > +static void test_attach_kprobe_by_addr(enum probe_attach_mode attach_mode)
> > +{
> > +	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, kprobe_opts);
> > +	struct bpf_link *kprobe_link, *kretprobe_link;
> > +	struct test_attach_probe_manual *skel;
> > +	unsigned long func_addr;
> > +
> > +	if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
> > +		return;
> > +
> > +	func_addr = ksym_get_addr(SYS_NANOSLEEP_KPROBE_NAME);
> > +	if (!ASSERT_NEQ(func_addr, 0UL, "func_addr"))
> > +		return;
> > +
> > +	skel = test_attach_probe_manual__open_and_load();
> > +	if (!ASSERT_OK_PTR(skel, "skel_kprobe_manual_open_and_load"))
> > +		return;
> > +
> > +	kprobe_opts.attach_mode = attach_mode;
> > +	kprobe_opts.retprobe = false;
> > +	kprobe_opts.addr = func_addr;
> > +	kprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kprobe,
> > +						      NULL, &kprobe_opts);
> > +	if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe_by_addr"))
> > +		goto cleanup;
> > +	skel->links.handle_kprobe = kprobe_link;
>
> we usually use skel->links.handle_kprobe directly, no need to use
> kprobe_link or kretprobe_link
>

Thanks for the review! I'll change that in v3 and assign directly to
skel->links.handle_kprobe and skel->links.handle_kretprobe.

> > +
> > +	kprobe_opts.retprobe = true;
> > +	kretprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kretprobe,
> > +							 NULL, &kprobe_opts);
> > +	if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe_by_addr"))
> > +		goto cleanup;
> > +	skel->links.handle_kretprobe = kretprobe_link;
> > +
> > +	/* trigger & validate kprobe && kretprobe */
> > +	usleep(1);
> > +
> > +	ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
> > +	ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
> > +
> > +cleanup:
> > +	test_attach_probe_manual__destroy(skel);
> > +}
> > +
> >  /* attach uprobe/uretprobe long event name testings */
> >  static void test_attach_uprobe_long_event_name(void)
> >  {
> > @@ -416,6 +461,10 @@ void test_attach_probe(void)
> >  		test_attach_probe_manual(PROBE_ATTACH_MODE_PERF);
> >  	if (test__start_subtest("manual-link"))
> >  		test_attach_probe_manual(PROBE_ATTACH_MODE_LINK);
> > +	if (test__start_subtest("kprobe-legacy-by-addr"))
> > +		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_LEGACY);
> > +	if (test__start_subtest("kprobe-perf-by-addr"))
> > +		test_attach_kprobe_by_addr(PROBE_ATTACH_MODE_PERF);
>
> should we test PROBE_ATTACH_MODE_LINK mode as well?
>

I covered only LEGACY and PERF here, because they test the two
underlying single-kprobe attach paths, legacy tracefs/debugfs and
PMU-based non-legacy. I treated LINK as duplicate coverage here.
If you think explicit LINK coverage is still worth having, I can add
it in v3.

> jirka
>
end of thread, other threads:[~2026-03-31  5:55 UTC | newest]

Thread overview: 11+ messages
2026-03-29 12:43 [PATCH bpf-next v2 0/2] libbpf: allow address-based single kprobe attach Hoyeon Lee
2026-03-29 12:43 ` [PATCH bpf-next v2 1/2] " Hoyeon Lee
2026-03-30 10:08   ` Jiri Olsa
2026-03-31  1:47     ` Hoyeon Lee
2026-03-31  0:33   ` Andrii Nakryiko
2026-03-31  1:48     ` Hoyeon Lee
2026-03-31  2:15       ` Alexei Starovoitov
2026-03-31  5:55         ` Hoyeon Lee
2026-03-29 12:43 ` [PATCH bpf-next v2 2/2] selftests/bpf: add test for " Hoyeon Lee
2026-03-30 10:08   ` Jiri Olsa
2026-03-31  2:01     ` Hoyeon Lee