* [RFC PATCH bpf-next v2 0/3] Optimize kprobe.session attachment for exact function names
@ 2026-02-26 17:33 Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 1/3] libbpf: " Andrey Grodzovsky
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Andrey Grodzovsky @ 2026-02-26 17:33 UTC (permalink / raw)
To: bpf, linux-open-source
Cc: ast, daniel, andrii, jolsa, rostedt, linux-trace-kernel
When libbpf attaches kprobe.session programs with exact function names
(the common case: SEC("kprobe.session/vfs_read")), the current code path
has two independent performance bottlenecks:
1. Userspace (libbpf): attach_kprobe_session() always parses
/proc/kallsyms to resolve function names, even when the name is exact
(no wildcards). This takes ~150ms per function.
2. Kernel (ftrace): ftrace_lookup_symbols() does a full O(N) linear scan
over ~200K kernel symbols via kallsyms_on_each_symbol(), decompressing
every symbol name, even when resolving a single symbol (cnt == 1).
This series optimizes both layers:
- Patch 1: libbpf detects exact function names (no wildcards) in
bpf_program__attach_kprobe_multi_opts() and bypasses kallsyms parsing,
passing the symbol directly to the kernel via syms[] array.
ESRCH is normalized to ENOENT for API consistency.
- Patch 2: ftrace_lookup_symbols() uses kallsyms_lookup_name() for
O(log N) binary search when cnt == 1, with fallback to linear scan for
duplicate symbols or module symbols. Included here for context; this
patch is destined for the tracing tree via linux-trace-kernel (Steven
Rostedt).
- Patch 3: Selftests validating exact-name attachment via
kprobe_multi_session.c and error consistency between wildcard and exact
paths in test_attach_api_fails.
Changes since v1:
https://lore.kernel.org/bpf/20260223215113.924599-1-andrey.grodzovsky@crowdstrike.com/
- Moved exact-name detection from attach_kprobe_session() into
bpf_program__attach_kprobe_multi_opts() so all callers benefit,
not just session (Jiri Olsa)
- Changed ftrace_location() to boolean check only, keeping original
kallsyms address in addrs[] consistent with kallsyms_callback
behavior (Jiri Olsa)
- Removed verbose performance rationale from ftrace code comment,
kept in changelog (Steven Rostedt)
- Consolidated session syms test into kprobe_multi_session.c using
session_check(), validating via kprobe_session_result[0] == 4 (Jiri Olsa)
- Folded error tests into existing test_attach_api_fails() instead of
separate subtest (Jiri Olsa)
- Deleted standalone kprobe_multi_session_syms.c and
kprobe_multi_session_errors.c
Andrey Grodzovsky (3):
libbpf: Optimize kprobe.session attachment for exact function names
ftrace: Use kallsyms binary search for single-symbol lookup
selftests/bpf: add tests for kprobe.session optimization
kernel/trace/ftrace.c | 22 ++++++++++++
tools/lib/bpf/libbpf.c | 34 +++++++++++++++----
.../bpf/prog_tests/kprobe_multi_test.c | 33 ++++++++++++++++--
.../bpf/progs/kprobe_multi_session.c | 10 ++++++
4 files changed, 91 insertions(+), 8 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [RFC PATCH bpf-next v2 1/3] libbpf: Optimize kprobe.session attachment for exact function names
2026-02-26 17:33 [RFC PATCH bpf-next v2 0/3] Optimize kprobe.session attachment for exact function names Andrey Grodzovsky
@ 2026-02-26 17:33 ` Andrey Grodzovsky
2026-02-27 17:08 ` Jiri Olsa
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 3/3] selftests/bpf: add tests for kprobe.session optimization Andrey Grodzovsky
2 siblings, 1 reply; 7+ messages in thread
From: Andrey Grodzovsky @ 2026-02-26 17:33 UTC (permalink / raw)
To: bpf, linux-open-source
Cc: ast, daniel, andrii, jolsa, rostedt, linux-trace-kernel
Implement dual-path optimization in attach_kprobe_session():
- Fast path: Use syms[] array for exact function names
(no kallsyms parsing)
- Slow path: Use pattern matching with kallsyms only for
wildcards
This avoids expensive kallsyms file parsing (~150ms) when function names
are specified exactly, improving attachment time 50x (~3-5ms).
Error code normalization: The fast path returns ESRCH from kernel's
ftrace_lookup_symbols(), while slow path returns ENOENT from userspace
kallsyms parsing. Convert ESRCH to ENOENT in fast path to maintain API
consistency - both paths now return identical error codes for "symbol
not found".
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
---
tools/lib/bpf/libbpf.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0be7017800fe..0ba8aa2c5fd2 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -12042,6 +12042,20 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
return libbpf_err_ptr(-EINVAL);
if (pattern) {
+ /*
+ * Exact function name (no wildcards): bypass kallsyms parsing
+ * and pass the symbol directly to the kernel via syms[] array.
+ * The kernel's ftrace_lookup_symbols() resolves it efficiently.
+ */
+ if (!strpbrk(pattern, "*?")) {
+ const char *sym = pattern;
+
+ syms = &sym;
+ cnt = 1;
+ pattern = NULL;
+ goto attach;
+ }
+
if (has_available_filter_functions_addrs())
err = libbpf_available_kprobes_parse(&res);
else
@@ -12060,6 +12074,7 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
cnt = res.cnt;
}
+attach:
retprobe = OPTS_GET(opts, retprobe, false);
session = OPTS_GET(opts, session, false);
@@ -12067,7 +12082,6 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
return libbpf_err_ptr(-EINVAL);
attach_type = session ? BPF_TRACE_KPROBE_SESSION : BPF_TRACE_KPROBE_MULTI;
-
lopts.kprobe_multi.syms = syms;
lopts.kprobe_multi.addrs = addrs;
lopts.kprobe_multi.cookies = cookies;
@@ -12084,6 +12098,14 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
link_fd = bpf_link_create(prog_fd, 0, attach_type, &lopts);
if (link_fd < 0) {
err = -errno;
+ /*
+ * Normalize error code: when exact name bypasses kallsyms
+ * parsing, kernel returns ESRCH from ftrace_lookup_symbols().
+ * Convert to ENOENT for API consistency with the pattern
+ * matching path which returns ENOENT from userspace.
+ */
+ if (err == -ESRCH)
+ err = -ENOENT;
pr_warn("prog '%s': failed to attach: %s\n",
prog->name, errstr(err));
goto error;
@@ -12192,7 +12214,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
{
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .session = true);
const char *spec;
- char *pattern;
+ char *func_name;
int n;
*link = NULL;
@@ -12202,14 +12224,14 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
return 0;
spec = prog->sec_name + sizeof("kprobe.session/") - 1;
- n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
+ n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &func_name);
if (n < 1) {
- pr_warn("kprobe session pattern is invalid: %s\n", spec);
+ pr_warn("kprobe session function name is invalid: %s\n", spec);
return -EINVAL;
}
- *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
- free(pattern);
+ *link = bpf_program__attach_kprobe_multi_opts(prog, func_name, &opts);
+ free(func_name);
return *link ? 0 : -errno;
}
--
2.34.1
* [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup
2026-02-26 17:33 [RFC PATCH bpf-next v2 0/3] Optimize kprobe.session attachment for exact function names Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 1/3] libbpf: " Andrey Grodzovsky
@ 2026-02-26 17:33 ` Andrey Grodzovsky
2026-02-26 18:24 ` bot+bpf-ci
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 3/3] selftests/bpf: add tests for kprobe.session optimization Andrey Grodzovsky
2 siblings, 1 reply; 7+ messages in thread
From: Andrey Grodzovsky @ 2026-02-26 17:33 UTC (permalink / raw)
To: bpf, linux-open-source
Cc: ast, daniel, andrii, jolsa, rostedt, linux-trace-kernel
When ftrace_lookup_symbols() is called with a single symbol (cnt == 1),
use kallsyms_lookup_name() for O(log N) binary search instead of the
full linear scan via kallsyms_on_each_symbol().
ftrace_lookup_symbols() was designed for batch resolution of many
symbols in a single pass. For large cnt this is efficient: a single
O(N) walk over all symbols with O(log cnt) binary search into the
sorted input array. But for cnt == 1 it still decompresses all ~200K
kernel symbols only to match one.
kallsyms_lookup_name() uses the sorted kallsyms index and needs only
~17 decompressions for a single lookup.
This is the common path for kprobe.session with exact function names,
where libbpf sends one symbol per BPF_LINK_CREATE syscall.
If binary lookup fails (duplicate symbol names where the first match
is not ftrace-instrumented, or module symbols), the function falls
through to the existing linear scan path.
Before (cnt=1, 50 kprobe.session programs):
Attach: 858 ms (kallsyms_expand_symbol 25% of CPU)
After:
Attach: 52 ms (16x faster)
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
---
kernel/trace/ftrace.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 827fb9a0bf0d..cfa0c7ad7cbf 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -9263,6 +9263,15 @@ static int kallsyms_callback(void *data, const char *name, unsigned long addr)
* @addrs array, which needs to be big enough to store at least @cnt
* addresses.
*
+ * For a single symbol (cnt == 1), uses kallsyms_lookup_name() which
+ * performs an O(log N) binary search via the sorted kallsyms index.
+ * This avoids the full O(N) linear scan over all kernel symbols that
+ * the multi-symbol path requires.
+ *
+ * For multiple symbols, uses a single-pass linear scan via
+ * kallsyms_on_each_symbol() with binary search into the sorted input
+ * array.
+ *
* Returns: 0 if all provided symbols are found, -ESRCH otherwise.
*/
int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *addrs)
@@ -9270,6 +9279,19 @@ int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *a
struct kallsyms_data args;
int found_all;
+ /* Fast path: single symbol uses O(log N) binary search */
+ if (cnt == 1) {
+ addrs[0] = kallsyms_lookup_name(sorted_syms[0]);
+ if (addrs[0] && ftrace_location(addrs[0]))
+ return 0;
+ /*
+ * Binary lookup can fail for duplicate symbol names
+ * where the first match is not ftrace-instrumented,
+ * or for module symbols. Retry with linear scan.
+ */
+ }
+
+ /* Batch path: single-pass O(N) linear scan */
memset(addrs, 0, sizeof(*addrs) * cnt);
args.addrs = addrs;
args.syms = sorted_syms;
--
2.34.1
* [RFC PATCH bpf-next v2 3/3] selftests/bpf: add tests for kprobe.session optimization
2026-02-26 17:33 [RFC PATCH bpf-next v2 0/3] Optimize kprobe.session attachment for exact function names Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 1/3] libbpf: " Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup Andrey Grodzovsky
@ 2026-02-26 17:33 ` Andrey Grodzovsky
2 siblings, 0 replies; 7+ messages in thread
From: Andrey Grodzovsky @ 2026-02-26 17:33 UTC (permalink / raw)
To: bpf, linux-open-source
Cc: ast, daniel, andrii, jolsa, rostedt, linux-trace-kernel
Extend existing kprobe_multi_test subtests to validate the
kprobe.session exact function name optimization:
In kprobe_multi_session.c, add test_kprobe_syms which attaches a
kprobe.session program to an exact function name (bpf_fentry_test1)
exercising the fast syms[] path that bypasses kallsyms parsing. It
calls session_check() so bpf_fentry_test1 is hit by both the wildcard
and exact probes, and test_session_skel_api validates
kprobe_session_result[0] == 4 (entry + return from each probe).
In test_attach_api_fails, add fail_7 and fail_8 verifying error code
consistency between the wildcard pattern path (slow, parses kallsyms)
and the exact function name path (fast, uses syms[] array). Both
paths must return -ENOENT for non-existent functions.
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
---
.../bpf/prog_tests/kprobe_multi_test.c | 33 +++++++++++++++++--
.../bpf/progs/kprobe_multi_session.c | 10 ++++++
2 files changed, 41 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
index 9caef222e528..ea605245ba14 100644
--- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
@@ -327,6 +327,30 @@ static void test_attach_api_fails(void)
if (!ASSERT_EQ(saved_error, -E2BIG, "fail_6_error"))
goto cleanup;
+ /* fail_7 - non-existent wildcard pattern (slow path) */
+ LIBBPF_OPTS_RESET(opts);
+
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe_manual,
+ "__nonexistent_func_xyz_*",
+ &opts);
+ saved_error = -errno;
+ if (!ASSERT_ERR_PTR(link, "fail_7"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(saved_error, -ENOENT, "fail_7_error"))
+ goto cleanup;
+
+ /* fail_8 - non-existent exact name (fast path), same error as wildcard */
+ link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_kprobe_manual,
+ "__nonexistent_func_xyz_123",
+ &opts);
+ saved_error = -errno;
+ if (!ASSERT_ERR_PTR(link, "fail_8"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(saved_error, -ENOENT, "fail_8_error"))
+ goto cleanup;
+
cleanup:
bpf_link__destroy(link);
kprobe_multi__destroy(skel);
@@ -355,8 +379,13 @@ static void test_session_skel_api(void)
ASSERT_OK(err, "test_run");
ASSERT_EQ(topts.retval, 0, "test_run");
- /* bpf_fentry_test1-4 trigger return probe, result is 2 */
- for (i = 0; i < 4; i++)
+ /*
+ * bpf_fentry_test1 is hit by both the wildcard probe and the exact
+ * name probe (test_kprobe_syms), so entry + return fires twice: 4.
+ * bpf_fentry_test2-4 are hit only by the wildcard probe: 2.
+ */
+ ASSERT_EQ(skel->bss->kprobe_session_result[0], 4, "kprobe_session_result");
+ for (i = 1; i < 4; i++)
ASSERT_EQ(skel->bss->kprobe_session_result[i], 2, "kprobe_session_result");
/* bpf_fentry_test5-8 trigger only entry probe, result is 1 */
diff --git a/tools/testing/selftests/bpf/progs/kprobe_multi_session.c b/tools/testing/selftests/bpf/progs/kprobe_multi_session.c
index bd8b7fb7061e..d52a65b40bbf 100644
--- a/tools/testing/selftests/bpf/progs/kprobe_multi_session.c
+++ b/tools/testing/selftests/bpf/progs/kprobe_multi_session.c
@@ -76,3 +76,13 @@ int test_kprobe(struct pt_regs *ctx)
{
return session_check(ctx);
}
+
+/*
+ * Exact function name (no wildcards) - exercises the fast syms[] path
+ * in bpf_program__attach_kprobe_multi_opts() which bypasses kallsyms parsing.
+ */
+SEC("kprobe.session/bpf_fentry_test1")
+int test_kprobe_syms(struct pt_regs *ctx)
+{
+ return session_check(ctx);
+}
--
2.34.1
* Re: [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup Andrey Grodzovsky
@ 2026-02-26 18:24 ` bot+bpf-ci
0 siblings, 0 replies; 7+ messages in thread
From: bot+bpf-ci @ 2026-02-26 18:24 UTC (permalink / raw)
To: andrey.grodzovsky, bpf, linux-open-source
Cc: ast, daniel, andrii, jolsa, rostedt, linux-trace-kernel, ast,
andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
ihor.solodrai
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 827fb9a0bf0d..cfa0c7ad7cbf 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
[ ... ]
> @@ -9270,6 +9279,19 @@ int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *a
> struct kallsyms_data args;
> int found_all;
>
> + /* Fast path: single symbol uses O(log N) binary search */
> + if (cnt == 1) {
> + addrs[0] = kallsyms_lookup_name(sorted_syms[0]);
> + if (addrs[0] && ftrace_location(addrs[0]))
> + return 0;
> + /*
> + * Binary lookup can fail for duplicate symbol names
> + * where the first match is not ftrace-instrumented,
> + * or for module symbols. Retry with linear scan.
> + */
Is "or for module symbols" accurate here as a separate failure case?
kallsyms_lookup_name() falls back to module_kallsyms_lookup_name()
when the vmlinux lookup fails:
kernel/kallsyms.c:kallsyms_lookup_name() {
...
ret = kallsyms_lookup_names(name, &i, NULL);
if (!ret)
return kallsyms_sym_address(get_symbol_seq(i));
return module_kallsyms_lookup_name(name);
}
So for a symbol that exists only in a module, the fast path finds
it through module_kallsyms_lookup_name() and succeeds if
ftrace_location() confirms it.
The only module-related failure case is when a vmlinux symbol with
the same name shadows the module symbol in the binary search, but
that is already covered by "duplicate symbol names where the first
match is not ftrace-instrumented."
The same phrasing appears in the commit message:
> If binary lookup fails (duplicate symbol names where the first match
> is not ftrace-instrumented, or module symbols), the function falls
> through to the existing linear scan path.
Would something like "for duplicate symbol names where the first
match is not ftrace-instrumented (including cases where a vmlinux
symbol shadows an ftrace-instrumented module symbol)" be more
precise?
> + }
> +
> + /* Batch path: single-pass O(N) linear scan */
> memset(addrs, 0, sizeof(*addrs) * cnt);
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22453951811
AI-authorship-score: low
AI-authorship-explanation: Domain-specific kernel expertise with precise empirical benchmarks and standard kernel patch conventions consistent with a human developer optimizing a real-world bottleneck.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Misleading comment and commit message claim module symbols cause fast path fallback, but kallsyms_lookup_name() handles module-only symbols correctly via module_kallsyms_lookup_name().
* Re: [RFC PATCH bpf-next v2 1/3] libbpf: Optimize kprobe.session attachment for exact function names
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 1/3] libbpf: " Andrey Grodzovsky
@ 2026-02-27 17:08 ` Jiri Olsa
2026-02-27 20:52 ` [External] " Andrey Grodzovsky
0 siblings, 1 reply; 7+ messages in thread
From: Jiri Olsa @ 2026-02-27 17:08 UTC (permalink / raw)
To: Andrey Grodzovsky
Cc: bpf, linux-open-source, ast, daniel, andrii, rostedt,
linux-trace-kernel
On Thu, Feb 26, 2026 at 12:33:40PM -0500, Andrey Grodzovsky wrote:
> Implement dual-path optimization in attach_kprobe_session():
> - Fast path: Use syms[] array for exact function names
> (no kallsyms parsing)
> - Slow path: Use pattern matching with kallsyms only for
> wildcards
>
> This avoids expensive kallsyms file parsing (~150ms) when function names
> are specified exactly, improving attachment time 50x (~3-5ms).
>
> Error code normalization: The fast path returns ESRCH from kernel's
> ftrace_lookup_symbols(), while slow path returns ENOENT from userspace
> kallsyms parsing. Convert ESRCH to ENOENT in fast path to maintain API
> consistency - both paths now return identical error codes for "symbol
> not found".
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> ---
> tools/lib/bpf/libbpf.c | 34 ++++++++++++++++++++++++++++------
> 1 file changed, 28 insertions(+), 6 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0be7017800fe..0ba8aa2c5fd2 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -12042,6 +12042,20 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
> return libbpf_err_ptr(-EINVAL);
>
> if (pattern) {
> + /*
> + * Exact function name (no wildcards): bypass kallsyms parsing
> + * and pass the symbol directly to the kernel via syms[] array.
> + * The kernel's ftrace_lookup_symbols() resolves it efficiently.
> + */
> + if (!strpbrk(pattern, "*?")) {
> + const char *sym = pattern;
> +
> + syms = &sym;
why not use pattern directly?
> + cnt = 1;
> + pattern = NULL;
not sure why we need this
> + goto attach;
> + }
I wonder if we could just add another if branch and avoid the goto, like:
- if (pattern) {
+ /*
+ * Exact function name (no wildcards): bypass kallsyms parsing
+ * and pass the symbol directly to the kernel via syms[] array.
+ * The kernel's ftrace_lookup_symbols() resolves it efficiently.
+ */
+ if (pattern && !strpbrk(pattern, "*?")) {
+ syms = &pattern;
+ cnt = 1;
+ } else if (pattern) {
if (has_available_filter_functions_addrs())
err = libbpf_available_kprobes_parse(&res);
wdyt?
> +
> if (has_available_filter_functions_addrs())
> err = libbpf_available_kprobes_parse(&res);
> else
> @@ -12060,6 +12074,7 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
> cnt = res.cnt;
> }
>
> +attach:
> retprobe = OPTS_GET(opts, retprobe, false);
> session = OPTS_GET(opts, session, false);
>
> @@ -12067,7 +12082,6 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
> return libbpf_err_ptr(-EINVAL);
>
> attach_type = session ? BPF_TRACE_KPROBE_SESSION : BPF_TRACE_KPROBE_MULTI;
> -
not needed
> lopts.kprobe_multi.syms = syms;
> lopts.kprobe_multi.addrs = addrs;
> lopts.kprobe_multi.cookies = cookies;
> @@ -12084,6 +12098,14 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
> link_fd = bpf_link_create(prog_fd, 0, attach_type, &lopts);
> if (link_fd < 0) {
> err = -errno;
> + /*
> + * Normalize error code: when exact name bypasses kallsyms
> + * parsing, kernel returns ESRCH from ftrace_lookup_symbols().
> + * Convert to ENOENT for API consistency with the pattern
> + * matching path which returns ENOENT from userspace.
> + */
> + if (err == -ESRCH)
> + err = -ENOENT;
> pr_warn("prog '%s': failed to attach: %s\n",
> prog->name, errstr(err));
> goto error;
> @@ -12192,7 +12214,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
> {
> LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .session = true);
> const char *spec;
> - char *pattern;
> + char *func_name;
I don't think we need the change, it's just for the different pr_warn
below right? let's keep pattern
thanks,
jirka
> int n;
>
> *link = NULL;
> @@ -12202,14 +12224,14 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
> return 0;
>
> spec = prog->sec_name + sizeof("kprobe.session/") - 1;
> - n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
> + n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &func_name);
> if (n < 1) {
> - pr_warn("kprobe session pattern is invalid: %s\n", spec);
> + pr_warn("kprobe session function name is invalid: %s\n", spec);
> return -EINVAL;
> }
>
> - *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
> - free(pattern);
> + *link = bpf_program__attach_kprobe_multi_opts(prog, func_name, &opts);
> + free(func_name);
> return *link ? 0 : -errno;
> }
>
> --
> 2.34.1
>
* Re: [External] Re: [RFC PATCH bpf-next v2 1/3] libbpf: Optimize kprobe.session attachment for exact function names
2026-02-27 17:08 ` Jiri Olsa
@ 2026-02-27 20:52 ` Andrey Grodzovsky
0 siblings, 0 replies; 7+ messages in thread
From: Andrey Grodzovsky @ 2026-02-27 20:52 UTC (permalink / raw)
To: Jiri Olsa
Cc: bpf, linux-open-source, ast, daniel, andrii, rostedt,
linux-trace-kernel
> - if (pattern) {
> + /*
> + * Exact function name (no wildcards): bypass kallsyms parsing
> + * and pass the symbol directly to the kernel via syms[] array.
> + * The kernel's ftrace_lookup_symbols() resolves it efficiently.
> + */
> + if (pattern && !strpbrk(pattern, "*?")) {
> + syms = &pattern;
> + cnt = 1;
> + } else if (pattern) {
> if (has_available_filter_functions_addrs())
> err = libbpf_available_kprobes_parse(&res);
>
>
> wdyt?
Totally agree, updated in V3.
Thanks,
Andrey
end of thread, other threads:[~2026-02-27 20:52 UTC | newest]
Thread overview: 7+ messages
2026-02-26 17:33 [RFC PATCH bpf-next v2 0/3] Optimize kprobe.session attachment for exact function names Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 1/3] libbpf: " Andrey Grodzovsky
2026-02-27 17:08 ` Jiri Olsa
2026-02-27 20:52 ` [External] " Andrey Grodzovsky
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 2/3] ftrace: Use kallsyms binary search for single-symbol lookup Andrey Grodzovsky
2026-02-26 18:24 ` bot+bpf-ci
2026-02-26 17:33 ` [RFC PATCH bpf-next v2 3/3] selftests/bpf: add tests for kprobe.session optimization Andrey Grodzovsky