* [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs
@ 2025-10-24 7:12 Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch Song Liu
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Song Liu @ 2025-10-24 7:12 UTC (permalink / raw)
To: bpf, linux-trace-kernel, live-patching
Cc: ast, daniel, andrii, rostedt, andrey.grodzovsky, mhiramat,
kernel-team, Song Liu
livepatch and BPF trampoline are two special users of ftrace. livepatch
uses ftrace with IPMODIFY flag and BPF trampoline uses ftrace direct
functions. When livepatch and BPF trampoline with fexit programs attach to
the same kernel function, BPF trampoline needs to call into the patched
version of the kernel function.
1/3 and 2/3 of this patchset fix two issues with livepatch + fexit cases,
one in the register_ftrace_direct path, the other in the
modify_ftrace_direct path.
3/3 adds selftests for both cases.
Song Liu (3):
ftrace: Fix BPF fexit with livepatch
ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
selftests/bpf: Add tests for livepatch + bpf trampoline
kernel/bpf/trampoline.c | 12 ++-
kernel/trace/ftrace.c | 14 ++-
tools/testing/selftests/bpf/config | 3 +
.../bpf/prog_tests/livepatch_trampoline.c | 101 ++++++++++++++++++
.../bpf/progs/livepatch_trampoline.c | 30 ++++++
5 files changed, 153 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c
create mode 100644 tools/testing/selftests/bpf/progs/livepatch_trampoline.c
--
2.47.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch
2025-10-24 7:12 [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Song Liu
@ 2025-10-24 7:12 ` Song Liu
2025-10-24 11:42 ` Jiri Olsa
2025-10-24 7:12 ` [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct() Song Liu
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Song Liu @ 2025-10-24 7:12 UTC (permalink / raw)
To: bpf, linux-trace-kernel, live-patching
Cc: ast, daniel, andrii, rostedt, andrey.grodzovsky, mhiramat,
kernel-team, Song Liu, stable
When livepatch is attached to the same function as bpf trampoline with
a fexit program, bpf trampoline code calls register_ftrace_direct()
twice. The first time will fail with -EAGAIN, and the second time it
will succeed. This requires register_ftrace_direct() to unregister
the address on the first attempt. Otherwise, the bpf trampoline cannot
attach. Here is an easy way to reproduce this issue:
insmod samples/livepatch/livepatch-sample.ko
bpftrace -e 'fexit:cmdline_proc_show {}'
ERROR: Unable to attach probe: fexit:vmlinux:cmdline_proc_show...
Fix this by cleaning up the hash when register_ftrace_function_nolock hits
errors.
Fixes: d05cb470663a ("ftrace: Fix modification of direct_function hash while in use")
Cc: stable@vger.kernel.org # v6.6+
Reported-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
Closes: https://lore.kernel.org/live-patching/c5058315a39d4615b333e485893345be@crowdstrike.com/
Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
Signed-off-by: Song Liu <song@kernel.org>
---
kernel/trace/ftrace.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 42bd2ba68a82..7f432775a6b5 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6048,6 +6048,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
ops->direct_call = addr;
err = register_ftrace_function_nolock(ops);
+ if (err)
+ remove_direct_functions_hash(hash, addr);
out_unlock:
mutex_unlock(&direct_mutex);
--
2.47.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
2025-10-24 7:12 [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch Song Liu
@ 2025-10-24 7:12 ` Song Liu
2025-10-24 11:43 ` Jiri Olsa
2025-10-24 7:12 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for livepatch + bpf trampoline Song Liu
2025-10-24 16:42 ` [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Alexei Starovoitov
3 siblings, 1 reply; 12+ messages in thread
From: Song Liu @ 2025-10-24 7:12 UTC (permalink / raw)
To: bpf, linux-trace-kernel, live-patching
Cc: ast, daniel, andrii, rostedt, andrey.grodzovsky, mhiramat,
kernel-team, Song Liu
ftrace_hash_ipmodify_enable() checks IPMODIFY and DIRECT ftrace_ops on
the same kernel function. When needed, ftrace_hash_ipmodify_enable()
calls ops->ops_func() to prepare the direct ftrace (BPF trampoline) to
share the same function as the IPMODIFY ftrace (livepatch).
ftrace_hash_ipmodify_enable() is called in register_ftrace_direct() path,
but not called in modify_ftrace_direct() path. As a result, the following
operations will break livepatch:
1. Load livepatch to a kernel function;
2. Attach fentry program to the kernel function;
3. Attach fexit program to the kernel function.
After 3, the kernel function being used will not be the livepatched
version, but the original version.
Fix this by adding ftrace_hash_ipmodify_enable() to modify_ftrace_direct()
and adjust some logic around the call.
Signed-off-by: Song Liu <song@kernel.org>
---
kernel/bpf/trampoline.c | 12 +++++++-----
kernel/trace/ftrace.c | 12 ++++++++++--
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 5949095e51c3..8015f5dc3169 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -221,6 +221,13 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
if (tr->func.ftrace_managed) {
ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1);
+ /*
+ * Clearing fops->trampoline_mutex and fops->NULL is
+ * needed by the "goto again" case in
+ * bpf_trampoline_update().
+ */
+ tr->fops->trampoline = 0;
+ tr->fops->func = NULL;
ret = register_ftrace_direct(tr->fops, (long)new_addr);
} else {
ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
@@ -479,11 +486,6 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
* BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the
* trampoline again, and retry register.
*/
- /* reset fops->func and fops->trampoline for re-register */
- tr->fops->func = NULL;
- tr->fops->trampoline = 0;
-
- /* free im memory and reallocate later */
bpf_tramp_image_free(im);
goto again;
}
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7f432775a6b5..370f620734cf 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2020,8 +2020,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
if (is_ipmodify)
goto rollback;
- FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT);
-
/*
* Another ops with IPMODIFY is already
* attached. We are now attaching a direct
@@ -6128,6 +6126,15 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
if (err)
return err;
+ /*
+ * Call ftrace_hash_ipmodify_enable() here, so that we can call
+ * ops->ops_func for the ops. This is needed because the above
+ * register_ftrace_function_nolock() worked on tmp_ops.
+ */
+ err = ftrace_hash_ipmodify_enable(ops);
+ if (err)
+ goto out;
+
/*
* Now the ftrace_ops_list_func() is called to do the direct callers.
* We can safely change the direct functions attached to each entry.
@@ -6149,6 +6156,7 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
mutex_unlock(&ftrace_lock);
+out:
/* Removing the tmp_ops will add the updated direct callers to the functions */
unregister_ftrace_function(&tmp_ops);
--
2.47.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH bpf-next 3/3] selftests/bpf: Add tests for livepatch + bpf trampoline
2025-10-24 7:12 [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct() Song Liu
@ 2025-10-24 7:12 ` Song Liu
2025-10-24 16:42 ` [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Alexei Starovoitov
3 siblings, 0 replies; 12+ messages in thread
From: Song Liu @ 2025-10-24 7:12 UTC (permalink / raw)
To: bpf, linux-trace-kernel, live-patching
Cc: ast, daniel, andrii, rostedt, andrey.grodzovsky, mhiramat,
kernel-team, Song Liu
Both livepatch and BPF trampoline use ftrace. Special attention is needed
when livepatch and fexit program touch the same function at the same
time, because livepatch updates a kernel function and the BPF trampoline
needs to call into the right version of the kernel function.
Use samples/livepatch/livepatch-sample.ko for the test.
The test covers two cases:
1) When a fentry program is loaded first. This exercises the
modify_ftrace_direct code path.
2) When a fexit program is loaded first. This exercises the
register_ftrace_direct code path.
Signed-off-by: Song Liu <song@kernel.org>
---
tools/testing/selftests/bpf/config | 3 +
.../bpf/prog_tests/livepatch_trampoline.c | 113 ++++++++++++++++++
.../bpf/progs/livepatch_trampoline.c | 30 +++++
3 files changed, 146 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c
create mode 100644 tools/testing/selftests/bpf/progs/livepatch_trampoline.c
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index 70b28c1e653e..f2a2fd236ca8 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -50,6 +50,7 @@ CONFIG_IPV6_SIT=y
CONFIG_IPV6_TUNNEL=y
CONFIG_KEYS=y
CONFIG_LIRC=y
+CONFIG_LIVEPATCH=y
CONFIG_LWTUNNEL=y
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SRCVERSION_ALL=y
@@ -111,6 +112,8 @@ CONFIG_IP6_NF_FILTER=y
CONFIG_NF_NAT=y
CONFIG_PACKET=y
CONFIG_RC_CORE=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLE_LIVEPATCH=m
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SYN_COOKIES=y
diff --git a/tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c b/tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c
new file mode 100644
index 000000000000..6326f957dbcd
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "testing_helpers.h"
+#include "livepatch_trampoline.skel.h"
+
+static int load_livepatch(void)
+{
+ const char *livepatch_paths[] = {
+ "../../../../samples/livepatch/livepatch-sample.ko",
+ /* This is the path used by CI */
+ "/tmp/work/bpf/bpf/kbuild-output/samples/livepatch/livepatch-sample.ko",
+ };
+ int ret, i;
+
+ for (i = 0; i < sizeof(livepatch_paths) / sizeof(char *); i++) {
+ ret = load_module(livepatch_paths[i], env_verbosity > VERBOSE_NONE);
+ if (!ret)
+ break;
+ }
+ return ret;
+}
+
+static void unload_livepatch(void)
+{
+ /* Disable the livepatch before unloading the module */
+ system("echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled");
+
+ unload_module("livepatch_sample", env_verbosity > VERBOSE_NONE);
+}
+
+static void read_proc_cmdline(void)
+{
+ char buf[4096];
+ int fd, ret;
+
+ fd = open("/proc/cmdline", O_RDONLY);
+ if (!ASSERT_OK_FD(fd, "open /proc/cmdline"))
+ return;
+
+ ret = read(fd, buf, sizeof(buf));
+ if (!ASSERT_GT(ret, 0, "read /proc/cmdline"))
+ goto out;
+
+ ASSERT_OK(strncmp(buf, "this has been live patched", 26), "strncmp");
+
+out:
+ close(fd);
+}
+
+static void __test_livepatch_trampoline(bool fexit_first)
+{
+ struct livepatch_trampoline *skel = NULL;
+ int err;
+
+ skel = livepatch_trampoline__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ goto out;
+
+ skel->bss->my_pid = getpid();
+
+ if (!fexit_first) {
+ /* fentry program is loaded first by default */
+ err = livepatch_trampoline__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto out;
+ } else {
+ /* Manually load fexit program first. */
+ skel->links.fexit_cmdline = bpf_program__attach(skel->progs.fexit_cmdline);
+ if (!ASSERT_OK_PTR(skel->links.fexit_cmdline, "attach_fexit"))
+ goto out;
+
+ skel->links.fentry_cmdline = bpf_program__attach(skel->progs.fentry_cmdline);
+ if (!ASSERT_OK_PTR(skel->links.fentry_cmdline, "attach_fentry"))
+ goto out;
+ }
+
+ read_proc_cmdline();
+
+ ASSERT_EQ(skel->bss->fentry_hit, 1, "fentry_hit");
+ ASSERT_EQ(skel->bss->fexit_hit, 1, "fexit_hit");
+out:
+ livepatch_trampoline__destroy(skel);
+}
+
+void test_livepatch_trampoline(void)
+{
+ int retry_cnt = 0;
+
+retry:
+ if (load_livepatch()) {
+ if (retry_cnt) {
+ ASSERT_OK(1, "load_livepatch");
+ goto out;
+ }
+ /*
+ * Something else (previous run of the same test?) loaded
+ * the KLP module. Unload the KLP module and retry.
+ */
+ unload_livepatch();
+ retry_cnt++;
+ goto retry;
+ }
+
+ if (test__start_subtest("fentry_first"))
+ __test_livepatch_trampoline(false);
+
+ if (test__start_subtest("fexit_first"))
+ __test_livepatch_trampoline(true);
+out:
+ unload_livepatch();
+}
diff --git a/tools/testing/selftests/bpf/progs/livepatch_trampoline.c b/tools/testing/selftests/bpf/progs/livepatch_trampoline.c
new file mode 100644
index 000000000000..15579d5bcd91
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/livepatch_trampoline.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+int fentry_hit;
+int fexit_hit;
+int my_pid;
+
+SEC("fentry/cmdline_proc_show")
+int BPF_PROG(fentry_cmdline)
+{
+ if (my_pid != (bpf_get_current_pid_tgid() >> 32))
+ return 0;
+
+ fentry_hit = 1;
+ return 0;
+}
+
+SEC("fexit/cmdline_proc_show")
+int BPF_PROG(fexit_cmdline)
+{
+ if (my_pid != (bpf_get_current_pid_tgid() >> 32))
+ return 0;
+
+ fexit_hit = 1;
+ return 0;
+}
--
2.47.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch
2025-10-24 7:12 ` [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch Song Liu
@ 2025-10-24 11:42 ` Jiri Olsa
2025-10-24 15:42 ` Song Liu
0 siblings, 1 reply; 12+ messages in thread
From: Jiri Olsa @ 2025-10-24 11:42 UTC (permalink / raw)
To: Song Liu
Cc: bpf, linux-trace-kernel, live-patching, ast, daniel, andrii,
rostedt, andrey.grodzovsky, mhiramat, kernel-team, stable
On Fri, Oct 24, 2025 at 12:12:55AM -0700, Song Liu wrote:
> When livepatch is attached to the same function as bpf trampoline with
> a fexit program, bpf trampoline code calls register_ftrace_direct()
> twice. The first time will fail with -EAGAIN, and the second time it
> will succeed. This requires register_ftrace_direct() to unregister
> the address on the first attempt. Otherwise, the bpf trampoline cannot
> attach. Here is an easy way to reproduce this issue:
>
> insmod samples/livepatch/livepatch-sample.ko
> bpftrace -e 'fexit:cmdline_proc_show {}'
> ERROR: Unable to attach probe: fexit:vmlinux:cmdline_proc_show...
>
> Fix this by cleaning up the hash when register_ftrace_function_nolock hits
> errors.
>
> Fixes: d05cb470663a ("ftrace: Fix modification of direct_function hash while in use")
> Cc: stable@vger.kernel.org # v6.6+
> Reported-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> Closes: https://lore.kernel.org/live-patching/c5058315a39d4615b333e485893345be@crowdstrike.com/
> Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> Signed-off-by: Song Liu <song@kernel.org>
> ---
> kernel/trace/ftrace.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 42bd2ba68a82..7f432775a6b5 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -6048,6 +6048,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> ops->direct_call = addr;
>
> err = register_ftrace_function_nolock(ops);
> + if (err)
> + remove_direct_functions_hash(hash, addr);
should this be handled by the caller of the register_ftrace_direct?
fops->hash is updated by ftrace_set_filter_ip in register_fentry
seems like it should be the caller's responsibility; also you could do that
just for the (err == -EAGAIN) case to address the use case directly
jirka
>
> out_unlock:
> mutex_unlock(&direct_mutex);
> --
> 2.47.3
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
2025-10-24 7:12 ` [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct() Song Liu
@ 2025-10-24 11:43 ` Jiri Olsa
2025-10-24 15:47 ` Song Liu
0 siblings, 1 reply; 12+ messages in thread
From: Jiri Olsa @ 2025-10-24 11:43 UTC (permalink / raw)
To: Song Liu
Cc: bpf, linux-trace-kernel, live-patching, ast, daniel, andrii,
rostedt, andrey.grodzovsky, mhiramat, kernel-team
On Fri, Oct 24, 2025 at 12:12:56AM -0700, Song Liu wrote:
> ftrace_hash_ipmodify_enable() checks IPMODIFY and DIRECT ftrace_ops on
> the same kernel function. When needed, ftrace_hash_ipmodify_enable()
> calls ops->ops_func() to prepare the direct ftrace (BPF trampoline) to
> share the same function as the IPMODIFY ftrace (livepatch).
>
> ftrace_hash_ipmodify_enable() is called in register_ftrace_direct() path,
> but not called in modify_ftrace_direct() path. As a result, the following
> operations will break livepatch:
>
> 1. Load livepatch to a kernel function;
> 2. Attach fentry program to the kernel function;
> 3. Attach fexit program to the kernel function.
>
> After 3, the kernel function being used will not be the livepatched
> version, but the original version.
>
> Fix this by adding ftrace_hash_ipmodify_enable() to modify_ftrace_direct()
> and adjust some logic around the call.
>
> Signed-off-by: Song Liu <song@kernel.org>
> ---
> kernel/bpf/trampoline.c | 12 +++++++-----
> kernel/trace/ftrace.c | 12 ++++++++++--
> 2 files changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 5949095e51c3..8015f5dc3169 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -221,6 +221,13 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
>
> if (tr->func.ftrace_managed) {
> ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1);
> + /*
> + * Clearing fops->trampoline_mutex and fops->NULL is
s/trampoline_mutex/trampoline/
> + * needed by the "goto again" case in
> + * bpf_trampoline_update().
> + */
> + tr->fops->trampoline = 0;
> + tr->fops->func = NULL;
IIUC you move this because if modify_fentry returns -EAGAIN
we don't want to reset the trampoline, right?
> ret = register_ftrace_direct(tr->fops, (long)new_addr);
> } else {
> ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
> @@ -479,11 +486,6 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
> * BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the
> * trampoline again, and retry register.
> */
> - /* reset fops->func and fops->trampoline for re-register */
> - tr->fops->func = NULL;
> - tr->fops->trampoline = 0;
> -
> - /* free im memory and reallocate later */
> bpf_tramp_image_free(im);
> goto again;
> }
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 7f432775a6b5..370f620734cf 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -2020,8 +2020,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
> if (is_ipmodify)
> goto rollback;
>
> - FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT);
why is this needed?
thanks,
jirka
> -
> /*
> * Another ops with IPMODIFY is already
> * attached. We are now attaching a direct
> @@ -6128,6 +6126,15 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> if (err)
> return err;
>
> + /*
> + * Call ftrace_hash_ipmodify_enable() here, so that we can call
> + * ops->ops_func for the ops. This is needed because the above
> + * register_ftrace_function_nolock() worked on tmp_ops.
> + */
> + err = ftrace_hash_ipmodify_enable(ops);
> + if (err)
> + goto out;
> +
> /*
> * Now the ftrace_ops_list_func() is called to do the direct callers.
> * We can safely change the direct functions attached to each entry.
> @@ -6149,6 +6156,7 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>
> mutex_unlock(&ftrace_lock);
>
> +out:
> /* Removing the tmp_ops will add the updated direct callers to the functions */
> unregister_ftrace_function(&tmp_ops);
>
> --
> 2.47.3
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch
2025-10-24 11:42 ` Jiri Olsa
@ 2025-10-24 15:42 ` Song Liu
2025-10-24 18:51 ` Jiri Olsa
0 siblings, 1 reply; 12+ messages in thread
From: Song Liu @ 2025-10-24 15:42 UTC (permalink / raw)
To: Jiri Olsa
Cc: Song Liu, bpf@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
live-patching@vger.kernel.org, ast@kernel.org,
daniel@iogearbox.net, andrii@kernel.org, rostedt@goodmis.org,
andrey.grodzovsky@crowdstrike.com, mhiramat@kernel.org,
Kernel Team, stable@vger.kernel.org
> On Oct 24, 2025, at 4:42 AM, Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Fri, Oct 24, 2025 at 12:12:55AM -0700, Song Liu wrote:
>> When livepatch is attached to the same function as bpf trampoline with
>> a fexit program, bpf trampoline code calls register_ftrace_direct()
>> twice. The first time will fail with -EAGAIN, and the second time it
>> will succeed. This requires register_ftrace_direct() to unregister
>> the address on the first attempt. Otherwise, the bpf trampoline cannot
>> attach. Here is an easy way to reproduce this issue:
>>
>> insmod samples/livepatch/livepatch-sample.ko
>> bpftrace -e 'fexit:cmdline_proc_show {}'
>> ERROR: Unable to attach probe: fexit:vmlinux:cmdline_proc_show...
>>
>> Fix this by cleaning up the hash when register_ftrace_function_nolock hits
>> errors.
>>
>> Fixes: d05cb470663a ("ftrace: Fix modification of direct_function hash while in use")
>> Cc: stable@vger.kernel.org # v6.6+
>> Reported-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
>> Closes: https://lore.kernel.org/live-patching/c5058315a39d4615b333e485893345be@crowdstrike.com/
>> Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
>> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>> Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
>> Signed-off-by: Song Liu <song@kernel.org>
>> ---
>> kernel/trace/ftrace.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index 42bd2ba68a82..7f432775a6b5 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -6048,6 +6048,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>> ops->direct_call = addr;
>>
>> err = register_ftrace_function_nolock(ops);
>> + if (err)
>> + remove_direct_functions_hash(hash, addr);
>
> should this be handled by the caller of the register_ftrace_direct?
> fops->hash is updated by ftrace_set_filter_ip in register_fentry
We need to clean up here. This is because register_ftrace_direct added
the new entries to direct_functions. It needs to clean these entries
for the caller so that the next call of register_ftrace_direct can
work.
> seems like it's should be caller responsibility, also you could do that
> just for (err == -EAGAIN) case to address the use case directly
The cleanup is valid for any error case, as we need to remove unused
entries from direct_functions.
Thanks,
Song
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
2025-10-24 11:43 ` Jiri Olsa
@ 2025-10-24 15:47 ` Song Liu
2025-10-24 16:21 ` Steven Rostedt
0 siblings, 1 reply; 12+ messages in thread
From: Song Liu @ 2025-10-24 15:47 UTC (permalink / raw)
To: Jiri Olsa
Cc: Song Liu, bpf@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
live-patching@vger.kernel.org, ast@kernel.org,
daniel@iogearbox.net, andrii@kernel.org, rostedt@goodmis.org,
andrey.grodzovsky@crowdstrike.com, mhiramat@kernel.org,
Kernel Team
> On Oct 24, 2025, at 4:43 AM, Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Fri, Oct 24, 2025 at 12:12:56AM -0700, Song Liu wrote:
>> ftrace_hash_ipmodify_enable() checks IPMODIFY and DIRECT ftrace_ops on
>> the same kernel function. When needed, ftrace_hash_ipmodify_enable()
>> calls ops->ops_func() to prepare the direct ftrace (BPF trampoline) to
>> share the same function as the IPMODIFY ftrace (livepatch).
>>
>> ftrace_hash_ipmodify_enable() is called in register_ftrace_direct() path,
>> but not called in modify_ftrace_direct() path. As a result, the following
>> operations will break livepatch:
>>
>> 1. Load livepatch to a kernel function;
>> 2. Attach fentry program to the kernel function;
>> 3. Attach fexit program to the kernel function.
>>
>> After 3, the kernel function being used will not be the livepatched
>> version, but the original version.
>>
>> Fix this by adding ftrace_hash_ipmodify_enable() to modify_ftrace_direct()
>> and adjust some logic around the call.
>>
>> Signed-off-by: Song Liu <song@kernel.org>
>> ---
>> kernel/bpf/trampoline.c | 12 +++++++-----
>> kernel/trace/ftrace.c | 12 ++++++++++--
>> 2 files changed, 17 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
>> index 5949095e51c3..8015f5dc3169 100644
>> --- a/kernel/bpf/trampoline.c
>> +++ b/kernel/bpf/trampoline.c
>> @@ -221,6 +221,13 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
>>
>> if (tr->func.ftrace_managed) {
>> ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1);
>> + /*
>> + * Clearing fops->trampoline_mutex and fops->NULL is
>
> s/trampoline_mutex/trampoline/
Good catch!
>
>> + * needed by the "goto again" case in
>> + * bpf_trampoline_update().
>> + */
>> + tr->fops->trampoline = 0;
>> + tr->fops->func = NULL;
>
> IIUC you move this because if modify_fentry returns -EAGAIN
> we don't want to reset the trampoline, right?
Right, we don’t want to reset this in the modify_fentry path.
We can add a check before “goto again” so that we only do the
reset for register_fentry, but I think it is cleaner this way.
I can be convinced to change it.
>
>> ret = register_ftrace_direct(tr->fops, (long)new_addr);
>> } else {
>> ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
>> @@ -479,11 +486,6 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
>> * BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the
>> * trampoline again, and retry register.
>> */
>> - /* reset fops->func and fops->trampoline for re-register */
>> - tr->fops->func = NULL;
>> - tr->fops->trampoline = 0;
>> -
>> - /* free im memory and reallocate later */
>> bpf_tramp_image_free(im);
>> goto again;
>> }
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index 7f432775a6b5..370f620734cf 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -2020,8 +2020,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
>> if (is_ipmodify)
>> goto rollback;
>>
>> - FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT);
>
> why is this needed?
This is needed for the modify_ftrace_direct case, because
the record already has a direct function (BPF trampoline)
attached.
Thanks,
Song
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
2025-10-24 15:47 ` Song Liu
@ 2025-10-24 16:21 ` Steven Rostedt
2025-10-24 17:03 ` Song Liu
0 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2025-10-24 16:21 UTC (permalink / raw)
To: Song Liu
Cc: Jiri Olsa, Song Liu, bpf@vger.kernel.org,
linux-trace-kernel@vger.kernel.org, live-patching@vger.kernel.org,
ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
andrey.grodzovsky@crowdstrike.com, mhiramat@kernel.org,
Kernel Team
On Fri, 24 Oct 2025 15:47:04 +0000
Song Liu <songliubraving@meta.com> wrote:
> >> --- a/kernel/trace/ftrace.c
> >> +++ b/kernel/trace/ftrace.c
> >> @@ -2020,8 +2020,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
> >> if (is_ipmodify)
> >> goto rollback;
> >>
> >> - FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT);
> >
> > why is this needed?
>
> This is needed for the modify_ftrace_direct case, because
> the record already has a direct function (BPF trampoline)
> attached.
I don't like the fact that it's removing a check for other cases.
It needs to be denoted that this use case is "OK" where as other use cases
are not. That way the check is still there for other cases and only "OK"
for this use case.
Something like this:
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 370f620734cf..51b205bafe80 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1971,7 +1971,8 @@ static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops)
*/
static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
struct ftrace_hash *old_hash,
- struct ftrace_hash *new_hash)
+ struct ftrace_hash *new_hash,
+ bool update)
{
struct ftrace_page *pg;
struct dyn_ftrace *rec, *end = NULL;
@@ -2020,6 +2021,16 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
if (is_ipmodify)
goto rollback;
+ /*
+ * If this is called by __modify_ftrace_direct()
+ * then it is only changing where the direct
+ * pointer is jumping to, and the record already
+ * points to a direct trampoline. If it isn't
+ * then it is a bug to update ipmodify on a direct
+ * caller.
+ */
+ FTRACE_WARN_ON(!update &&
+ (rec->flags & FTRACE_FL_DIRECT));
/*
* Another ops with IPMODIFY is already
* attached. We are now attaching a direct
@@ -2067,14 +2078,14 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
return -EBUSY;
}
-static int ftrace_hash_ipmodify_enable(struct ftrace_ops *ops)
+static int ftrace_hash_ipmodify_enable(struct ftrace_ops *ops, bool update)
{
struct ftrace_hash *hash = ops->func_hash->filter_hash;
if (ftrace_hash_empty(hash))
hash = NULL;
- return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash);
+ return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash, update);
}
/* Disabling always succeeds */
@@ -2085,7 +2096,7 @@ static void ftrace_hash_ipmodify_disable(struct ftrace_ops *ops)
if (ftrace_hash_empty(hash))
hash = NULL;
- __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH);
+ __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH, false);
}
static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
@@ -2099,7 +2110,7 @@ static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
if (ftrace_hash_empty(new_hash))
new_hash = NULL;
- return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash);
+ return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash, false);
}
static void print_ip_ins(const char *fmt, const unsigned char *p)
@@ -3059,7 +3070,7 @@ int ftrace_startup(struct ftrace_ops *ops, int command)
*/
ops->flags |= FTRACE_OPS_FL_ENABLED | FTRACE_OPS_FL_ADDING;
- ret = ftrace_hash_ipmodify_enable(ops);
+ ret = ftrace_hash_ipmodify_enable(ops, false);
if (ret < 0) {
/* Rollback registration process */
__unregister_ftrace_function(ops);
@@ -6131,7 +6142,7 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
* ops->ops_func for the ops. This is needed because the above
* register_ftrace_function_nolock() worked on tmp_ops.
*/
- err = ftrace_hash_ipmodify_enable(ops);
+ err = ftrace_hash_ipmodify_enable(ops, true);
if (err)
goto out;
-- Steve
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs
2025-10-24 7:12 [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Song Liu
` (2 preceding siblings ...)
2025-10-24 7:12 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for livepatch + bpf trampoline Song Liu
@ 2025-10-24 16:42 ` Alexei Starovoitov
3 siblings, 0 replies; 12+ messages in thread
From: Alexei Starovoitov @ 2025-10-24 16:42 UTC (permalink / raw)
To: Song Liu
Cc: bpf, linux-trace-kernel, live-patching, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Steven Rostedt,
Andrey Grodzovsky, Masami Hiramatsu, Kernel Team
On Fri, Oct 24, 2025 at 12:13 AM Song Liu <song@kernel.org> wrote:
>
> livepatch and BPF trampoline are two special users of ftrace. livepatch
> uses ftrace with IPMODIFY flag and BPF trampoline uses ftrace direct
> functions. When livepatch and BPF trampoline with fexit programs attach to
> the same kernel function, BPF trampoline needs to call into the patched
> version of the kernel function.
This sounds serious. Pls target bpf tree in respin after addressing
the comments.
pw-bot: cr
* Re: [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct()
2025-10-24 16:21 ` Steven Rostedt
@ 2025-10-24 17:03 ` Song Liu
0 siblings, 0 replies; 12+ messages in thread
From: Song Liu @ 2025-10-24 17:03 UTC (permalink / raw)
To: Steven Rostedt
Cc: Jiri Olsa, Song Liu, bpf@vger.kernel.org,
linux-trace-kernel@vger.kernel.org, live-patching@vger.kernel.org,
ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
andrey.grodzovsky@crowdstrike.com, mhiramat@kernel.org,
Kernel Team
> On Oct 24, 2025, at 9:21 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Fri, 24 Oct 2025 15:47:04 +0000
> Song Liu <songliubraving@meta.com> wrote:
>
>>>> --- a/kernel/trace/ftrace.c
>>>> +++ b/kernel/trace/ftrace.c
>>>> @@ -2020,8 +2020,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
>>>> if (is_ipmodify)
>>>> goto rollback;
>>>>
>>>> - FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT);
>>>
>>> why is this needed?
>>
>> This is needed for the modify_ftrace_direct case, because
>> the record already has a direct function (BPF trampoline)
>> attached.
>
> I don't like the fact that it's removing a check for other cases.
Agreed that removing the check for other cases is not ideal.
I looked at the code a bit more and came up with a slightly
different version (attached below).
The basic idea is to leave existing ftrace_hash_ipmodify_*
logically the same, and call __ftrace_hash_update_ipmodify
directly from __modify_ftrace_direct().
I think this is logically cleaner.
Does this make sense?
Thanks,
Song
diff --git i/kernel/trace/ftrace.c w/kernel/trace/ftrace.c
index 370f620734cf..4f6745dddc35 100644
--- i/kernel/trace/ftrace.c
+++ w/kernel/trace/ftrace.c
@@ -1971,7 +1971,8 @@ static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops)
*/
static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
struct ftrace_hash *old_hash,
- struct ftrace_hash *new_hash)
+ struct ftrace_hash *new_hash,
+ bool update_target)
{
struct ftrace_page *pg;
struct dyn_ftrace *rec, *end = NULL;
@@ -2006,10 +2007,13 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
if (rec->flags & FTRACE_FL_DISABLED)
continue;
- /* We need to update only differences of filter_hash */
+ /*
+ * Unless we are updating the target of a direct function,
+ * we only need to update differences of filter_hash
+ */
in_old = !!ftrace_lookup_ip(old_hash, rec->ip);
in_new = !!ftrace_lookup_ip(new_hash, rec->ip);
- if (in_old == in_new)
+ if (!update_target && (in_old == in_new))
continue;
if (in_new) {
@@ -2020,6 +2024,17 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
if (is_ipmodify)
goto rollback;
+ /*
+ * If this is called by __modify_ftrace_direct()
+ * then it is only changing where the direct
+ * pointer is jumping to, and the record already
+ * points to a direct trampoline. If it isn't
+ * then it is a bug to update ipmodify on a direct
+ * caller.
+ */
+ FTRACE_WARN_ON(!update_target &&
+ (rec->flags & FTRACE_FL_DIRECT));
+
/*
* Another ops with IPMODIFY is already
* attached. We are now attaching a direct
@@ -2074,7 +2089,7 @@ static int ftrace_hash_ipmodify_enable(struct ftrace_ops *ops)
if (ftrace_hash_empty(hash))
hash = NULL;
- return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash);
+ return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash, false);
}
/* Disabling always succeeds */
@@ -2085,7 +2100,7 @@ static void ftrace_hash_ipmodify_disable(struct ftrace_ops *ops)
if (ftrace_hash_empty(hash))
hash = NULL;
- __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH);
+ __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH, false);
}
static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
@@ -2099,7 +2114,7 @@ static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
if (ftrace_hash_empty(new_hash))
new_hash = NULL;
- return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash);
+ return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash, false);
}
static void print_ip_ins(const char *fmt, const unsigned char *p)
@@ -6106,7 +6121,7 @@ EXPORT_SYMBOL_GPL(unregister_ftrace_direct);
static int
__modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
{
- struct ftrace_hash *hash;
+ struct ftrace_hash *hash = ops->func_hash->filter_hash;
struct ftrace_func_entry *entry, *iter;
static struct ftrace_ops tmp_ops = {
.func = ftrace_stub,
@@ -6131,7 +6146,7 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
* ops->ops_func for the ops. This is needed because the above
* register_ftrace_function_nolock() worked on tmp_ops.
*/
- err = ftrace_hash_ipmodify_enable(ops);
+ err = __ftrace_hash_update_ipmodify(ops, hash, hash, true);
if (err)
goto out;
@@ -6141,7 +6156,6 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
*/
mutex_lock(&ftrace_lock);
- hash = ops->func_hash->filter_hash;
size = 1 << hash->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(iter, &hash->buckets[i], hlist) {
* Re: [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch
2025-10-24 15:42 ` Song Liu
@ 2025-10-24 18:51 ` Jiri Olsa
0 siblings, 0 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-10-24 18:51 UTC (permalink / raw)
To: Song Liu
Cc: Jiri Olsa, Song Liu, bpf@vger.kernel.org,
linux-trace-kernel@vger.kernel.org, live-patching@vger.kernel.org,
ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
rostedt@goodmis.org, andrey.grodzovsky@crowdstrike.com,
mhiramat@kernel.org, Kernel Team, stable@vger.kernel.org
On Fri, Oct 24, 2025 at 03:42:44PM +0000, Song Liu wrote:
>
>
> > On Oct 24, 2025, at 4:42 AM, Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Fri, Oct 24, 2025 at 12:12:55AM -0700, Song Liu wrote:
> >> When livepatch is attached to the same function as bpf trampoline with
> >> a fexit program, bpf trampoline code calls register_ftrace_direct()
> >> twice. The first time will fail with -EAGAIN, and the second time it
> >> will succeed. This requires register_ftrace_direct() to unregister
> >> the address on the first attempt. Otherwise, the bpf trampoline cannot
> >> attach. Here is an easy way to reproduce this issue:
> >>
> >> insmod samples/livepatch/livepatch-sample.ko
> >> bpftrace -e 'fexit:cmdline_proc_show {}'
> >> ERROR: Unable to attach probe: fexit:vmlinux:cmdline_proc_show...
> >>
> >> Fix this by cleaning up the hash when register_ftrace_function_nolock hits
> >> errors.
> >>
> >> Fixes: d05cb470663a ("ftrace: Fix modification of direct_function hash while in use")
> >> Cc: stable@vger.kernel.org # v6.6+
> >> Reported-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> >> Closes: https://lore.kernel.org/live-patching/c5058315a39d4615b333e485893345be@crowdstrike.com/
> >> Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
> >> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >> Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> >> Signed-off-by: Song Liu <song@kernel.org>
> >> ---
> >> kernel/trace/ftrace.c | 2 ++
> >> 1 file changed, 2 insertions(+)
> >>
> >> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> >> index 42bd2ba68a82..7f432775a6b5 100644
> >> --- a/kernel/trace/ftrace.c
> >> +++ b/kernel/trace/ftrace.c
> >> @@ -6048,6 +6048,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >> ops->direct_call = addr;
> >>
> >> err = register_ftrace_function_nolock(ops);
> >> + if (err)
> >> + remove_direct_functions_hash(hash, addr);
> >
> > should this be handled by the caller of register_ftrace_direct?
> > fops->hash is updated by ftrace_set_filter_ip in register_fentry
>
> We need to clean up here. This is because register_ftrace_direct added
> the new entries to direct_functions. It needs to clean these entries
> for the caller so that the next call of register_ftrace_direct can
> work.
>
> > seems like it should be the caller's responsibility; also you could do that
> > just for the (err == -EAGAIN) case to address the use case directly
>
> The cleanup is valid for any error case, as we need to remove unused
> entries from direct_functions.
I see. I wonder then whether we could use free_hash to restore the original
direct_functions, something like:
if (err) {
call_direct_funcs = rcu_assign_pointer(free_hash);
free_hash = new_hash;
}
we'd need to keep the new_hash value
but feel free to ignore, removal is also fine ;-)
thanks,
jirka
end of thread, other threads:[~2025-10-24 18:51 UTC | newest]
Thread overview: 12+ messages
2025-10-24 7:12 [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 1/3] ftrace: Fix BPF fexit with livepatch Song Liu
2025-10-24 11:42 ` Jiri Olsa
2025-10-24 15:42 ` Song Liu
2025-10-24 18:51 ` Jiri Olsa
2025-10-24 7:12 ` [PATCH bpf-next 2/3] ftrace: bpf: Fix IPMODIFY + DIRECT in modify_ftrace_direct() Song Liu
2025-10-24 11:43 ` Jiri Olsa
2025-10-24 15:47 ` Song Liu
2025-10-24 16:21 ` Steven Rostedt
2025-10-24 17:03 ` Song Liu
2025-10-24 7:12 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for livepatch + bpf trampoline Song Liu
2025-10-24 16:42 ` [PATCH bpf-next 0/3] Fix ftrace for livepatch + BPF fexit programs Alexei Starovoitov