public inbox for bpf@vger.kernel.org
* [PATCH bpf-next 0/3] bpf: Support bpf_get_func_ip helper in uprobes
@ 2023-08-01  7:29 Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Jiri Olsa @ 2023-08-01  7:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

hi,
adding support for the bpf_get_func_ip helper in uprobe programs to
return the probed address for both uprobe and return uprobe, as
suggested by Andrii in [1].

We agreed that uprobes can make special use of the bpf_get_func_ip
helper that differs from kprobes.

The kprobe bpf_get_func_ip returns:
  - the address of the function if the probe is attached on the
    function entry, for both kprobe and return kprobe
  - 0 if the probe is not attached on the function entry

The uprobe bpf_get_func_ip returns:
  - the address of the probe for both uprobe and return uprobe

The reason for this semantic difference is that the kernel can't really
tell whether a probed user space address is a function entry.

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  uprobe_get_func_ip

thanks,
jirka


[1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
---
Jiri Olsa (3):
      bpf: Add support for bpf_get_func_ip helper for uprobe program
      selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry
      selftests/bpf: Add bpf_get_func_ip test for uprobe inside function

 include/linux/bpf.h                                         |  5 +++++
 include/uapi/linux/bpf.h                                    |  7 ++++++-
 kernel/trace/bpf_trace.c                                    | 21 ++++++++++++++++++++-
 kernel/trace/trace_probe.h                                  |  5 +++++
 kernel/trace/trace_uprobe.c                                 |  5 -----
 tools/include/uapi/linux/bpf.h                              |  7 ++++++-
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c   | 51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 tools/testing/selftests/bpf/progs/get_func_ip_test.c        | 25 +++++++++++++++++++++++--
 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c | 18 ++++++++++++++++++
 9 files changed, 133 insertions(+), 11 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01  7:29 [PATCH bpf-next 0/3] bpf: Support bpf_get_func_ip helper in uprobes Jiri Olsa
@ 2023-08-01  7:30 ` Jiri Olsa
  2023-08-01 11:53   ` Yafang Shao
  2023-08-02 11:21   ` Alan Maguire
  2023-08-01  7:30 ` [PATCH bpf-next 2/3] selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function Jiri Olsa
  2 siblings, 2 replies; 14+ messages in thread
From: Jiri Olsa @ 2023-08-01  7:30 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

Adding support for the bpf_get_func_ip helper in uprobe programs to
return the probed address for both uprobe and return uprobe.

We discussed this in [1] and agreed that uprobes can make special use
of the bpf_get_func_ip helper that differs from kprobes.

The kprobe bpf_get_func_ip returns:
  - the address of the function if the probe is attached on the
    function entry, for both kprobe and return kprobe
  - 0 if the probe is not attached on the function entry

The uprobe bpf_get_func_ip returns:
  - the address of the probe for both uprobe and return uprobe

The reason for this semantic difference is that the kernel can't really
tell whether a probed user space address is a function entry.

The uprobe program is actually a kprobe-type program attached as an
uprobe. One consequence of this design is that uprobes do not have
their own set of helpers, but share them with kprobes.

As we need different functionality from the bpf_get_func_ip helper for
uprobes, I'm adding a bool to struct bpf_trace_run_ctx so the helper
can detect that it's executed in uprobe context and take the
uprobe-specific path.

The is_uprobe bool is set to true in bpf_prog_run_array_sleepable,
which is currently used only for executing bpf programs in uprobes.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
[1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h            |  5 +++++
 include/uapi/linux/bpf.h       |  7 ++++++-
 kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
 kernel/trace/trace_probe.h     |  5 +++++
 kernel/trace/trace_uprobe.c    |  5 -----
 tools/include/uapi/linux/bpf.h |  7 ++++++-
 6 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ceaa8c23287f..8ea071383ef1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
 struct bpf_trace_run_ctx {
 	struct bpf_run_ctx run_ctx;
 	u64 bpf_cookie;
+	bool is_uprobe;
 };
 
 struct bpf_tramp_run_ctx {
@@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
 	if (unlikely(!array))
 		return ret;
 
+	run_ctx.is_uprobe = false;
+
 	migrate_disable();
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
 	item = &array->items[0];
@@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
 	rcu_read_lock_trace();
 	migrate_disable();
 
+	run_ctx.is_uprobe = true;
+
 	array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
 	if (unlikely(!array))
 		goto out;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 70da85200695..d21deb46f49f 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5086,9 +5086,14 @@ union bpf_attr {
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
  * 		Get address of the traced function (for tracing and kprobe programs).
+ *
+ * 		When called for kprobe program attached as uprobe it returns
+ * 		probe address for both entry and return uprobe.
+ *
  * 	Return
- * 		Address of the traced function.
+ * 		Address of the traced function for kprobe.
  * 		0 for kprobes placed within the function (not at the entry).
+ * 		Address of the probe for uprobe and return uprobe.
  *
  * u64 bpf_get_attach_cookie(void *ctx)
  * 	Description
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c92eb8c6ff08..7930a91ca7f3 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1057,9 +1057,28 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
 #define get_entry_ip(fentry_ip) fentry_ip
 #endif
 
+#ifdef CONFIG_UPROBES
+static unsigned long bpf_get_func_ip_uprobe(struct pt_regs *regs)
+{
+	struct uprobe_dispatch_data *udd;
+
+	udd = (struct uprobe_dispatch_data *) current->utask->vaddr;
+	return udd->bp_addr;
+}
+#else
+#define bpf_get_func_ip_uprobe(regs) (u64) -1
+#endif
+
 BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
 {
-	struct kprobe *kp = kprobe_running();
+	struct bpf_trace_run_ctx *run_ctx;
+	struct kprobe *kp;
+
+	run_ctx = container_of(current->bpf_ctx, struct bpf_trace_run_ctx, run_ctx);
+	if (run_ctx->is_uprobe)
+		return bpf_get_func_ip_uprobe(regs);
+
+	kp = kprobe_running();
 
 	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
 		return 0;
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 01ea148723de..7dde806be91e 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -519,3 +519,8 @@ void __trace_probe_log_err(int offset, int err);
 
 #define trace_probe_log_err(offs, err)	\
 	__trace_probe_log_err(offs, TP_ERR_##err)
+
+struct uprobe_dispatch_data {
+	struct trace_uprobe	*tu;
+	unsigned long		bp_addr;
+};
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 555c223c3232..fc76c3985672 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -88,11 +88,6 @@ static struct trace_uprobe *to_trace_uprobe(struct dyn_event *ev)
 static int register_uprobe_event(struct trace_uprobe *tu);
 static int unregister_uprobe_event(struct trace_uprobe *tu);
 
-struct uprobe_dispatch_data {
-	struct trace_uprobe	*tu;
-	unsigned long		bp_addr;
-};
-
 static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs);
 static int uretprobe_dispatcher(struct uprobe_consumer *con,
 				unsigned long func, struct pt_regs *regs);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 70da85200695..d21deb46f49f 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5086,9 +5086,14 @@ union bpf_attr {
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
  * 		Get address of the traced function (for tracing and kprobe programs).
+ *
+ * 		When called for kprobe program attached as uprobe it returns
+ * 		probe address for both entry and return uprobe.
+ *
  * 	Return
- * 		Address of the traced function.
+ * 		Address of the traced function for kprobe.
  * 		0 for kprobes placed within the function (not at the entry).
+ * 		Address of the probe for uprobe and return uprobe.
  *
  * u64 bpf_get_attach_cookie(void *ctx)
  * 	Description
-- 
2.41.0



* [PATCH bpf-next 2/3] selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry
  2023-08-01  7:29 [PATCH bpf-next 0/3] bpf: Support bpf_get_func_ip helper in uprobes Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
@ 2023-08-01  7:30 ` Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function Jiri Olsa
  2 siblings, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2023-08-01  7:30 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

Adding get_func_ip tests for an uprobe on the function entry that
validate that bpf_get_func_ip returns the proper values for both
uprobe and return uprobe.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 11 ++++++++
 .../selftests/bpf/progs/get_func_ip_test.c    | 25 +++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index fede8ef58b5b..114cdbc04caf 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -1,6 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
 #include "get_func_ip_test.skel.h"
+#include "get_func_ip_uprobe_test.skel.h"
+
+static noinline void uprobe_trigger(void)
+{
+}
 
 static void test_function_entry(void)
 {
@@ -20,6 +25,8 @@ static void test_function_entry(void)
 	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
 		goto cleanup;
 
+	skel->bss->uprobe_trigger = (unsigned long) uprobe_trigger;
+
 	prog_fd = bpf_program__fd(skel->progs.test1);
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "test_run");
@@ -30,11 +37,15 @@ static void test_function_entry(void)
 
 	ASSERT_OK(err, "test_run");
 
+	uprobe_trigger();
+
 	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
 	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
 	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
 	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
 	ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
+	ASSERT_EQ(skel->bss->test7_result, 1, "test7_result");
+	ASSERT_EQ(skel->bss->test8_result, 1, "test8_result");
 
 cleanup:
 	get_func_ip_test__destroy(skel);
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index 8559e698b40d..8956eb78a226 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -1,8 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/bpf.h>
+#include "vmlinux.h"
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
-#include <stdbool.h>
 
 char _license[] SEC("license") = "GPL";
 
@@ -83,3 +82,25 @@ int test6(struct pt_regs *ctx)
 	test6_result = (const void *) addr == 0;
 	return 0;
 }
+
+unsigned long uprobe_trigger;
+
+__u64 test7_result = 0;
+SEC("uprobe//proc/self/exe:uprobe_trigger")
+int BPF_UPROBE(test7)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test7_result = (const void *) addr == (const void *) uprobe_trigger;
+	return 0;
+}
+
+__u64 test8_result = 0;
+SEC("uretprobe//proc/self/exe:uprobe_trigger")
+int BPF_URETPROBE(test8, int ret)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test8_result = (const void *) addr == (const void *) uprobe_trigger;
+	return 0;
+}
-- 
2.41.0



* [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function
  2023-08-01  7:29 [PATCH bpf-next 0/3] bpf: Support bpf_get_func_ip helper in uprobes Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
  2023-08-01  7:30 ` [PATCH bpf-next 2/3] selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry Jiri Olsa
@ 2023-08-01  7:30 ` Jiri Olsa
  2023-08-02 11:30   ` Alan Maguire
  2 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2023-08-01  7:30 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

Adding a get_func_ip test for an uprobe placed inside a function that
validates that the bpf_get_func_ip helper returns the correct probe address.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 40 ++++++++++++++++++-
 .../bpf/progs/get_func_ip_uprobe_test.c       | 18 +++++++++
 2 files changed, 57 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index 114cdbc04caf..f199220ad6de 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -55,7 +55,16 @@ static void test_function_entry(void)
  * offset, disabling it for all other archs
  */
 #ifdef __x86_64__
-static void test_function_body(void)
+extern void uprobe_trigger_body(void);
+asm(
+".globl uprobe_trigger_body\n"
+".type uprobe_trigger_body, @function\n"
+"uprobe_trigger_body:\n"
+"	nop\n"
+"	ret\n"
+);
+
+static void test_function_body_kprobe(void)
 {
 	struct get_func_ip_test *skel = NULL;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);
@@ -90,6 +99,35 @@ static void test_function_body(void)
 	bpf_link__destroy(link6);
 	get_func_ip_test__destroy(skel);
 }
+
+static void test_function_body_uprobe(void)
+{
+	struct get_func_ip_uprobe_test *skel = NULL;
+	int err;
+
+	skel = get_func_ip_uprobe_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_uprobe_test__open_and_load"))
+		return;
+
+	err = get_func_ip_uprobe_test__attach(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
+		goto cleanup;
+
+	skel->bss->uprobe_trigger_body = (unsigned long) uprobe_trigger_body;
+
+	uprobe_trigger_body();
+
+	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+
+cleanup:
+	get_func_ip_uprobe_test__destroy(skel);
+}
+
+static void test_function_body(void)
+{
+	test_function_body_kprobe();
+	test_function_body_uprobe();
+}
 #else
 #define test_function_body()
 #endif
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
new file mode 100644
index 000000000000..052f8a4345a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+unsigned long uprobe_trigger_body;
+
+__u64 test1_result = 0;
+SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
+int BPF_UPROBE(test1)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
+	return 0;
+}
-- 
2.41.0



* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
@ 2023-08-01 11:53   ` Yafang Shao
  2023-08-01 19:44     ` Yonghong Song
  2023-08-02 11:21   ` Alan Maguire
  1 sibling, 1 reply; 14+ messages in thread
From: Yafang Shao @ 2023-08-01 11:53 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding support for bpf_get_func_ip helper for uprobe program to return
> probed address for both uprobe and return uprobe.
>
> We discussed this in [1] and agreed that uprobe can have special use
> of bpf_get_func_ip helper that differs from kprobe.
>
> The kprobe bpf_get_func_ip returns:
>   - address of the function if probe is attach on function entry
>     for both kprobe and return kprobe
>   - 0 if the probe is not attach on function entry
>
> The uprobe bpf_get_func_ip returns:
>   - address of the probe for both uprobe and return uprobe
>
> The reason for this semantic change is that kernel can't really tell
> if the probe user space address is function entry.
>
> The uprobe program is actually kprobe type program attached as uprobe.
> One of the consequences of this design is that uprobes do not have its
> own set of helpers, but share them with kprobes.
>
> As we need different functionality for bpf_get_func_ip helper for uprobe,
> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
> detect that it's executed in uprobe context and call specific code.
>
> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
> is currently used only for executing bpf programs in uprobe.

That is error-prone.  If we don't intend to rename
bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
we'd better introduce a new 'bool is_uprobe' parameter to it.

>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> [1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/bpf.h            |  5 +++++
>  include/uapi/linux/bpf.h       |  7 ++++++-
>  kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
>  kernel/trace/trace_probe.h     |  5 +++++
>  kernel/trace/trace_uprobe.c    |  5 -----
>  tools/include/uapi/linux/bpf.h |  7 ++++++-
>  6 files changed, 42 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index ceaa8c23287f..8ea071383ef1 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
>  struct bpf_trace_run_ctx {
>         struct bpf_run_ctx run_ctx;
>         u64 bpf_cookie;
> +       bool is_uprobe;
>  };
>
>  struct bpf_tramp_run_ctx {
> @@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>         if (unlikely(!array))
>                 return ret;
>
> +       run_ctx.is_uprobe = false;
> +
>         migrate_disable();
>         old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>         item = &array->items[0];
> @@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
>         rcu_read_lock_trace();
>         migrate_disable();
>
> +       run_ctx.is_uprobe = true;
> +
>         array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
>         if (unlikely(!array))
>                 goto out;
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 70da85200695..d21deb46f49f 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5086,9 +5086,14 @@ union bpf_attr {
>   * u64 bpf_get_func_ip(void *ctx)
>   *     Description
>   *             Get address of the traced function (for tracing and kprobe programs).
> + *
> + *             When called for kprobe program attached as uprobe it returns
> + *             probe address for both entry and return uprobe.
> + *
>   *     Return
> - *             Address of the traced function.
> + *             Address of the traced function for kprobe.
>   *             0 for kprobes placed within the function (not at the entry).
> + *             Address of the probe for uprobe and return uprobe.
>   *
>   * u64 bpf_get_attach_cookie(void *ctx)
>   *     Description
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index c92eb8c6ff08..7930a91ca7f3 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1057,9 +1057,28 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
>  #define get_entry_ip(fentry_ip) fentry_ip
>  #endif
>
> +#ifdef CONFIG_UPROBES
> +static unsigned long bpf_get_func_ip_uprobe(struct pt_regs *regs)
> +{
> +       struct uprobe_dispatch_data *udd;
> +
> +       udd = (struct uprobe_dispatch_data *) current->utask->vaddr;
> +       return udd->bp_addr;
> +}
> +#else
> +#define bpf_get_func_ip_uprobe(regs) (u64) -1
> +#endif
> +
>  BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
>  {
> -       struct kprobe *kp = kprobe_running();
> +       struct bpf_trace_run_ctx *run_ctx;
> +       struct kprobe *kp;
> +
> +       run_ctx = container_of(current->bpf_ctx, struct bpf_trace_run_ctx, run_ctx);
> +       if (run_ctx->is_uprobe)
> +               return bpf_get_func_ip_uprobe(regs);
> +
> +       kp = kprobe_running();
>
>         if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
>                 return 0;
> diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
> index 01ea148723de..7dde806be91e 100644
> --- a/kernel/trace/trace_probe.h
> +++ b/kernel/trace/trace_probe.h
> @@ -519,3 +519,8 @@ void __trace_probe_log_err(int offset, int err);
>
>  #define trace_probe_log_err(offs, err) \
>         __trace_probe_log_err(offs, TP_ERR_##err)
> +
> +struct uprobe_dispatch_data {
> +       struct trace_uprobe     *tu;
> +       unsigned long           bp_addr;
> +};
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index 555c223c3232..fc76c3985672 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -88,11 +88,6 @@ static struct trace_uprobe *to_trace_uprobe(struct dyn_event *ev)
>  static int register_uprobe_event(struct trace_uprobe *tu);
>  static int unregister_uprobe_event(struct trace_uprobe *tu);
>
> -struct uprobe_dispatch_data {
> -       struct trace_uprobe     *tu;
> -       unsigned long           bp_addr;
> -};
> -
>  static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs);
>  static int uretprobe_dispatcher(struct uprobe_consumer *con,
>                                 unsigned long func, struct pt_regs *regs);
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 70da85200695..d21deb46f49f 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -5086,9 +5086,14 @@ union bpf_attr {
>   * u64 bpf_get_func_ip(void *ctx)
>   *     Description
>   *             Get address of the traced function (for tracing and kprobe programs).
> + *
> + *             When called for kprobe program attached as uprobe it returns
> + *             probe address for both entry and return uprobe.
> + *
>   *     Return
> - *             Address of the traced function.
> + *             Address of the traced function for kprobe.
>   *             0 for kprobes placed within the function (not at the entry).
> + *             Address of the probe for uprobe and return uprobe.
>   *
>   * u64 bpf_get_attach_cookie(void *ctx)
>   *     Description
> --
> 2.41.0
>
>


-- 
Regards
Yafang


* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01 11:53   ` Yafang Shao
@ 2023-08-01 19:44     ` Yonghong Song
  2023-08-01 20:18       ` Yonghong Song
  0 siblings, 1 reply; 14+ messages in thread
From: Yonghong Song @ 2023-08-01 19:44 UTC (permalink / raw)
  To: Yafang Shao, Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt



On 8/1/23 4:53 AM, Yafang Shao wrote:
> On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@kernel.org> wrote:
>>
>> Adding support for bpf_get_func_ip helper for uprobe program to return
>> probed address for both uprobe and return uprobe.
>>
>> We discussed this in [1] and agreed that uprobe can have special use
>> of bpf_get_func_ip helper that differs from kprobe.
>>
>> The kprobe bpf_get_func_ip returns:
>>    - address of the function if probe is attach on function entry
>>      for both kprobe and return kprobe
>>    - 0 if the probe is not attach on function entry
>>
>> The uprobe bpf_get_func_ip returns:
>>    - address of the probe for both uprobe and return uprobe
>>
>> The reason for this semantic change is that kernel can't really tell
>> if the probe user space address is function entry.
>>
>> The uprobe program is actually kprobe type program attached as uprobe.
>> One of the consequences of this design is that uprobes do not have its
>> own set of helpers, but share them with kprobes.
>>
>> As we need different functionality for bpf_get_func_ip helper for uprobe,
>> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
>> detect that it's executed in uprobe context and call specific code.
>>
>> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
>> is currently used only for executing bpf programs in uprobe.
> 
> That is error-prone.  If we don't intend to rename
> bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
> we'd better introduce a new parameter 'bool is_uprobe' into it.

Agree that renaming bpf_prog_run_array_sleepable() to
bpf_prog_run_array_uprobe() is probably better. This way,
`run_ctx.is_uprobe = true` is self-explanatory.

In the unlikely case that another sleepable run prog array is needed
in the future, it can have its own bpf_prog_run_array_<..> and the
underlying bpf_prog_run_array_sleepable() can be factored out.

> 
>>
>> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
>> [1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>> ---
>>   include/linux/bpf.h            |  5 +++++
>>   include/uapi/linux/bpf.h       |  7 ++++++-
>>   kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
>>   kernel/trace/trace_probe.h     |  5 +++++
>>   kernel/trace/trace_uprobe.c    |  5 -----
>>   tools/include/uapi/linux/bpf.h |  7 ++++++-
>>   6 files changed, 42 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index ceaa8c23287f..8ea071383ef1 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
>>   struct bpf_trace_run_ctx {
>>          struct bpf_run_ctx run_ctx;
>>          u64 bpf_cookie;
>> +       bool is_uprobe;
>>   };
>>
>>   struct bpf_tramp_run_ctx {
>> @@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>>          if (unlikely(!array))
>>                  return ret;
>>
>> +       run_ctx.is_uprobe = false;
>> +
>>          migrate_disable();
>>          old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>>          item = &array->items[0];
>> @@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
>>          rcu_read_lock_trace();
>>          migrate_disable();
>>
>> +       run_ctx.is_uprobe = true;
>> +
>>          array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
>>          if (unlikely(!array))
>>                  goto out;
[...]


* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01 19:44     ` Yonghong Song
@ 2023-08-01 20:18       ` Yonghong Song
  2023-08-01 20:43         ` Alexei Starovoitov
  0 siblings, 1 reply; 14+ messages in thread
From: Yonghong Song @ 2023-08-01 20:18 UTC (permalink / raw)
  To: Yafang Shao, Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt



On 8/1/23 12:44 PM, Yonghong Song wrote:
> 
> 
> On 8/1/23 4:53 AM, Yafang Shao wrote:
>> On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@kernel.org> wrote:
>>>
>>> Adding support for bpf_get_func_ip helper for uprobe program to return
>>> probed address for both uprobe and return uprobe.
>>>
>>> We discussed this in [1] and agreed that uprobe can have special use
>>> of bpf_get_func_ip helper that differs from kprobe.
>>>
>>> The kprobe bpf_get_func_ip returns:
>>>    - address of the function if probe is attach on function entry
>>>      for both kprobe and return kprobe
>>>    - 0 if the probe is not attach on function entry
>>>
>>> The uprobe bpf_get_func_ip returns:
>>>    - address of the probe for both uprobe and return uprobe
>>>
>>> The reason for this semantic change is that kernel can't really tell
>>> if the probe user space address is function entry.
>>>
>>> The uprobe program is actually kprobe type program attached as uprobe.
>>> One of the consequences of this design is that uprobes do not have its
>>> own set of helpers, but share them with kprobes.
>>>
>>> As we need different functionality for bpf_get_func_ip helper for 
>>> uprobe,
>>> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
>>> detect that it's executed in uprobe context and call specific code.
>>>
>>> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
>>> is currently used only for executing bpf programs in uprobe.
>>
>> That is error-prone.  If we don't intend to rename
>> bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
>> we'd better introduce a new parameter 'bool is_uprobe' into it.
> 
> Agree that renaming bpf_prog_run_array_sleepable() to
> bpf_prog_run_array_uprobe() is probably better. This way,
> `run_ctx.is_uprobe = true` is self-explanatory.
> 
> In the unlikely case that another sleepable run prog array is needed
> in the future, it can have its own bpf_prog_run_array_<..> and the
> underlying bpf_prog_run_array_sleepable() can be factored out.

Or, if we want to avoid unnecessary code churn, at least add
a comment in bpf_prog_run_array_sleepable() to explain
why it is safe to do `run_ctx.is_uprobe = true;`.

> 
>>
>>>
>>> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
>>> [1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
>>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>>> ---
>>>   include/linux/bpf.h            |  5 +++++
>>>   include/uapi/linux/bpf.h       |  7 ++++++-
>>>   kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
>>>   kernel/trace/trace_probe.h     |  5 +++++
>>>   kernel/trace/trace_uprobe.c    |  5 -----
>>>   tools/include/uapi/linux/bpf.h |  7 ++++++-
>>>   6 files changed, 42 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>>> index ceaa8c23287f..8ea071383ef1 100644
>>> --- a/include/linux/bpf.h
>>> +++ b/include/linux/bpf.h
>>> @@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
>>>   struct bpf_trace_run_ctx {
>>>          struct bpf_run_ctx run_ctx;
>>>          u64 bpf_cookie;
>>> +       bool is_uprobe;
>>>   };
>>>
>>>   struct bpf_tramp_run_ctx {
>>> @@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>>>          if (unlikely(!array))
>>>                  return ret;
>>>
>>> +       run_ctx.is_uprobe = false;
>>> +
>>>          migrate_disable();
>>>          old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>>>          item = &array->items[0];
>>> @@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
>>>          rcu_read_lock_trace();
>>>          migrate_disable();
>>>
>>> +       run_ctx.is_uprobe = true;
>>> +
>>>          array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
>>>          if (unlikely(!array))
>>>                  goto out;
> [...]
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01 20:18       ` Yonghong Song
@ 2023-08-01 20:43         ` Alexei Starovoitov
  2023-08-02  7:15           ` Jiri Olsa
  0 siblings, 1 reply; 14+ messages in thread
From: Alexei Starovoitov @ 2023-08-01 20:43 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Yafang Shao, Jiri Olsa, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
	Masami Hiramatsu (Google), Steven Rostedt

On Tue, Aug 1, 2023 at 1:18 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
>
> On 8/1/23 12:44 PM, Yonghong Song wrote:
> >
> >
> > On 8/1/23 4:53 AM, Yafang Shao wrote:
> >> On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@kernel.org> wrote:
> >>>
> >>> Adding support for bpf_get_func_ip helper for uprobe program to return
> >>> probed address for both uprobe and return uprobe.
> >>>
> >>> We discussed this in [1] and agreed that uprobe can have special use
> >>> of bpf_get_func_ip helper that differs from kprobe.
> >>>
> >>> The kprobe bpf_get_func_ip returns:
> >>>    - address of the function if probe is attach on function entry
> >>>      for both kprobe and return kprobe
> >>>    - 0 if the probe is not attach on function entry
> >>>
> >>> The uprobe bpf_get_func_ip returns:
> >>>    - address of the probe for both uprobe and return uprobe
> >>>
> >>> The reason for this semantic change is that kernel can't really tell
> >>> if the probe user space address is function entry.
> >>>
> >>> The uprobe program is actually kprobe type program attached as uprobe.
> >>> One of the consequences of this design is that uprobes do not have its
> >>> own set of helpers, but share them with kprobes.
> >>>
> >>> As we need different functionality for bpf_get_func_ip helper for uprobe,
> >>> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
> >>> detect that it's executed in uprobe context and call specific code.
> >>>
> >>> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
> >>> is currently used only for executing bpf programs in uprobe.
> >>
> >> That is error-prone.  If we don't intend to rename
> >> bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
> >> we'd better introduce a new parameter 'bool is_uprobe' into it.
> >
> > Agree that renaming bpf_prog_run_array_sleepable() to
> > bpf_prog_run_array_uprobe() probably better. This way, it is
> > self-explainable for `run_ctx.is_uprobe = true`.
> >
> > If unlikely case in the future, another sleepable run prog array
> > is needed. They can have their own bpf_prog_run_array_<..>
> > and underlying bpf_prog_run_array_sleepable() can be factored out.
>
> Or if want to avoid unnecessary code churn, at least add
> a comment in bpf_prog_run_array_sleepable() to explain
> that why it is safe to do `run_ctx.is_uprobe = true;`.

I think renaming to _uprobe() is a good idea.
I would also prefer to remove the bool is_uprobe run-time check,
but I don't see a way to do it cleanly.


* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01 20:43         ` Alexei Starovoitov
@ 2023-08-02  7:15           ` Jiri Olsa
  0 siblings, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2023-08-02  7:15 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Yonghong Song, Yafang Shao, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
	Masami Hiramatsu (Google), Steven Rostedt

On Tue, Aug 01, 2023 at 01:43:53PM -0700, Alexei Starovoitov wrote:
> On Tue, Aug 1, 2023 at 1:18 PM Yonghong Song <yonghong.song@linux.dev> wrote:
> >
> >
> >
> > On 8/1/23 12:44 PM, Yonghong Song wrote:
> > >
> > >
> > > On 8/1/23 4:53 AM, Yafang Shao wrote:
> > >> On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@kernel.org> wrote:
> > >>>
> > >>> Adding support for bpf_get_func_ip helper for uprobe program to return
> > >>> probed address for both uprobe and return uprobe.
> > >>>
> > >>> We discussed this in [1] and agreed that uprobe can have special use
> > >>> of bpf_get_func_ip helper that differs from kprobe.
> > >>>
> > >>> The kprobe bpf_get_func_ip returns:
> > >>>    - address of the function if probe is attach on function entry
> > >>>      for both kprobe and return kprobe
> > >>>    - 0 if the probe is not attach on function entry
> > >>>
> > >>> The uprobe bpf_get_func_ip returns:
> > >>>    - address of the probe for both uprobe and return uprobe
> > >>>
> > >>> The reason for this semantic change is that kernel can't really tell
> > >>> if the probe user space address is function entry.
> > >>>
> > >>> The uprobe program is actually kprobe type program attached as uprobe.
> > >>> One of the consequences of this design is that uprobes do not have its
> > >>> own set of helpers, but share them with kprobes.
> > >>>
> > >>> As we need different functionality for bpf_get_func_ip helper for uprobe,
> > >>> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
> > >>> detect that it's executed in uprobe context and call specific code.
> > >>>
> > >>> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
> > >>> is currently used only for executing bpf programs in uprobe.
> > >>
> > >> That is error-prone.  If we don't intend to rename
> > >> bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
> > >> we'd better introduce a new parameter 'bool is_uprobe' into it.
> > >
> > > Agree that renaming bpf_prog_run_array_sleepable() to
> > > bpf_prog_run_array_uprobe() probably better. This way, it is
> > > self-explainable for `run_ctx.is_uprobe = true`.
> > >
> > > If unlikely case in the future, another sleepable run prog array
> > > is needed. They can have their own bpf_prog_run_array_<..>
> > > and underlying bpf_prog_run_array_sleepable() can be factored out.
> >
> > Or if want to avoid unnecessary code churn, at least add
> > a comment in bpf_prog_run_array_sleepable() to explain
> > that why it is safe to do `run_ctx.is_uprobe = true;`.
> 
> I think renaming to _uprobe() is a good idea.
> I would prefer if we can remove the bool is_uprobe run-time check,
> but don't see a way to do it cleanly.

ok, I'll rename it

thanks,
jirka


* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
  2023-08-01 11:53   ` Yafang Shao
@ 2023-08-02 11:21   ` Alan Maguire
  2023-08-02 12:23     ` Jiri Olsa
  1 sibling, 1 reply; 14+ messages in thread
From: Alan Maguire @ 2023-08-02 11:21 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On 01/08/2023 08:30, Jiri Olsa wrote:
> Adding support for bpf_get_func_ip helper for uprobe program to return
> probed address for both uprobe and return uprobe.
> 
> We discussed this in [1] and agreed that uprobe can have special use
> of bpf_get_func_ip helper that differs from kprobe.
> 
> The kprobe bpf_get_func_ip returns:
>   - address of the function if probe is attach on function entry
>     for both kprobe and return kprobe
>   - 0 if the probe is not attach on function entry
> 
> The uprobe bpf_get_func_ip returns:
>   - address of the probe for both uprobe and return uprobe
> 
> The reason for this semantic change is that kernel can't really tell
> if the probe user space address is function entry.
> 
> The uprobe program is actually kprobe type program attached as uprobe.
> One of the consequences of this design is that uprobes do not have its
> own set of helpers, but share them with kprobes.
> 
> As we need different functionality for bpf_get_func_ip helper for uprobe,
> I'm adding the bool value to the bpf_trace_run_ctx, so the helper can
> detect that it's executed in uprobe context and call specific code.
> 
> The is_uprobe bool is set as true in bpf_prog_run_array_sleepable which
> is currently used only for executing bpf programs in uprobe.
> 
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> [1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/bpf.h            |  5 +++++
>  include/uapi/linux/bpf.h       |  7 ++++++-
>  kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
>  kernel/trace/trace_probe.h     |  5 +++++
>  kernel/trace/trace_uprobe.c    |  5 -----
>  tools/include/uapi/linux/bpf.h |  7 ++++++-
>  6 files changed, 42 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index ceaa8c23287f..8ea071383ef1 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
>  struct bpf_trace_run_ctx {
>  	struct bpf_run_ctx run_ctx;
>  	u64 bpf_cookie;
> +	bool is_uprobe;
>  };
>  
>  struct bpf_tramp_run_ctx {
> @@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>  	if (unlikely(!array))
>  		return ret;
>  
> +	run_ctx.is_uprobe = false;
> +
>  	migrate_disable();
>  	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>  	item = &array->items[0];
> @@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
>  	rcu_read_lock_trace();
>  	migrate_disable();
>  
> +	run_ctx.is_uprobe = true;
> +
>  	array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
>  	if (unlikely(!array))
>  		goto out;
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 70da85200695..d21deb46f49f 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5086,9 +5086,14 @@ union bpf_attr {
>   * u64 bpf_get_func_ip(void *ctx)
>   * 	Description
>   * 		Get address of the traced function (for tracing and kprobe programs).
> + *
> + * 		When called for kprobe program attached as uprobe it returns
> + * 		probe address for both entry and return uprobe.
> + *
>   * 	Return
> - * 		Address of the traced function.
> + * 		Address of the traced function for kprobe.
>   * 		0 for kprobes placed within the function (not at the entry).
> + * 		Address of the probe for uprobe and return uprobe.
>   *
>   * u64 bpf_get_attach_cookie(void *ctx)
>   * 	Description
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index c92eb8c6ff08..7930a91ca7f3 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1057,9 +1057,28 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
>  #define get_entry_ip(fentry_ip) fentry_ip
>  #endif
>  
> +#ifdef CONFIG_UPROBES
> +static unsigned long bpf_get_func_ip_uprobe(struct pt_regs *regs)
> +{
> +	struct uprobe_dispatch_data *udd;
> +
> +	udd = (struct uprobe_dispatch_data *) current->utask->vaddr;
> +	return udd->bp_addr;
> +}
> +#else
> +#define bpf_get_func_ip_uprobe(regs) (u64) -1

Small thing - I don't think this value is exposed to users due to the
run_ctx->is_uprobe predicate, but would it be worth using -EOPNOTSUPP
here maybe?

> +#endif
> +
>  BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
>  {
> -	struct kprobe *kp = kprobe_running();
> +	struct bpf_trace_run_ctx *run_ctx;
> +	struct kprobe *kp;
> +
> +	run_ctx = container_of(current->bpf_ctx, struct bpf_trace_run_ctx, run_ctx);
> +	if (run_ctx->is_uprobe)
> +		return bpf_get_func_ip_uprobe(regs);
> +
> +	kp = kprobe_running();
>  
>  	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
>  		return 0;
> diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
> index 01ea148723de..7dde806be91e 100644
> --- a/kernel/trace/trace_probe.h
> +++ b/kernel/trace/trace_probe.h
> @@ -519,3 +519,8 @@ void __trace_probe_log_err(int offset, int err);
>  
>  #define trace_probe_log_err(offs, err)	\
>  	__trace_probe_log_err(offs, TP_ERR_##err)
> +
> +struct uprobe_dispatch_data {
> +	struct trace_uprobe	*tu;
> +	unsigned long		bp_addr;
> +};
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index 555c223c3232..fc76c3985672 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -88,11 +88,6 @@ static struct trace_uprobe *to_trace_uprobe(struct dyn_event *ev)
>  static int register_uprobe_event(struct trace_uprobe *tu);
>  static int unregister_uprobe_event(struct trace_uprobe *tu);
>  
> -struct uprobe_dispatch_data {
> -	struct trace_uprobe	*tu;
> -	unsigned long		bp_addr;
> -};
> -
>  static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs);
>  static int uretprobe_dispatcher(struct uprobe_consumer *con,
>  				unsigned long func, struct pt_regs *regs);
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 70da85200695..d21deb46f49f 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -5086,9 +5086,14 @@ union bpf_attr {
>   * u64 bpf_get_func_ip(void *ctx)
>   * 	Description
>   * 		Get address of the traced function (for tracing and kprobe programs).
> + *
> + * 		When called for kprobe program attached as uprobe it returns
> + * 		probe address for both entry and return uprobe.
> + *
>   * 	Return
> - * 		Address of the traced function.
> + * 		Address of the traced function for kprobe.
>   * 		0 for kprobes placed within the function (not at the entry).
> + * 		Address of the probe for uprobe and return uprobe.
>   *
>   * u64 bpf_get_attach_cookie(void *ctx)
>   * 	Description


* Re: [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function
  2023-08-01  7:30 ` [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function Jiri Olsa
@ 2023-08-02 11:30   ` Alan Maguire
  2023-08-02 12:26     ` Jiri Olsa
  0 siblings, 1 reply; 14+ messages in thread
From: Alan Maguire @ 2023-08-02 11:30 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On 01/08/2023 08:30, Jiri Olsa wrote:
> Adding get_func_ip test for uprobe inside function that validates
> the get_func_ip helper returns correct probe address value.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../bpf/prog_tests/get_func_ip_test.c         | 40 ++++++++++++++++++-
>  .../bpf/progs/get_func_ip_uprobe_test.c       | 18 +++++++++
>  2 files changed, 57 insertions(+), 1 deletion(-)
>  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> index 114cdbc04caf..f199220ad6de 100644
> --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> @@ -55,7 +55,16 @@ static void test_function_entry(void)
>   * offset, disabling it for all other archs

nit: the comment here

/* test6 is x86_64 specific because of the instruction
 * offset, disabling it for all other archs

...should probably be updated now that multiple tests are gated by the
#ifdef __x86_64__.

BTW I tested if these tests would pass on aarch64 with a few tweaks
to instruction offsets, and they do. Something like the following
gets all of the tests running and passing on aarch64:

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index f199220ad6de..61ac13508c58 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -51,10 +51,10 @@ static void test_function_entry(void)
        get_func_ip_test__destroy(skel);
 }

-/* test6 is x86_64 specific because of the instruction
- * offset, disabling it for all other archs
+/* tests below are x86_64/aarch64 specific because of the instruction
+ * offsets, disabling them for all other archs
  */
-#ifdef __x86_64__
+#if defined(__x86_64__) || defined(__aarch64__)
 extern void uprobe_trigger_body(void);
 asm(
 ".globl uprobe_trigger_body\n"
@@ -82,7 +82,11 @@ static void test_function_body_kprobe(void)
        if (!ASSERT_OK(err, "get_func_ip_test__load"))
                goto cleanup;

+#if defined(__x86_64__)
        kopts.offset = skel->kconfig->CONFIG_X86_KERNEL_IBT ? 9 : 5;
+#elif defined(__aarch64__)
+       kopts.offset = 8;
+#endif

        link6 = bpf_program__attach_kprobe_opts(skel->progs.test6, "bpf_fentry_test6", &kopts);
        if (!ASSERT_OK_PTR(link6, "link6"))
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
index 052f8a4345a8..56af4a8447b9 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
@@ -8,11 +8,17 @@ char _license[] SEC("license") = "GPL";
 unsigned long uprobe_trigger_body;

 __u64 test1_result = 0;
+#if defined(__TARGET_ARCH_x86)
+#define OFFSET 1
 SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
+#elif defined(__TARGET_ARCH_arm64)
+#define OFFSET 4
+SEC("uprobe//proc/self/exe:uprobe_trigger_body+4")
+#endif
 int BPF_UPROBE(test1)
 {
        __u64 addr = bpf_get_func_ip(ctx);

-       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
+       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + OFFSET;
        return 0;
 }


Anyway, if you're doing a later version and want to roll something like
the above in, feel free; otherwise I can send a follow-up patch later on.
Regardless, for the series on aarch64:

Tested-by: Alan Maguire <alan.maguire@oracle.com>

>   */
>  #ifdef __x86_64__
> -static void test_function_body(void)
> +extern void uprobe_trigger_body(void);
> +asm(
> +".globl uprobe_trigger_body\n"
> +".type uprobe_trigger_body, @function\n"
> +"uprobe_trigger_body:\n"
> +"	nop\n"
> +"	ret\n"
> +);
> +
> +static void test_function_body_kprobe(void)
>  {
>  	struct get_func_ip_test *skel = NULL;
>  	LIBBPF_OPTS(bpf_test_run_opts, topts);
> @@ -90,6 +99,35 @@ static void test_function_body(void)
>  	bpf_link__destroy(link6);
>  	get_func_ip_test__destroy(skel);
>  }
> +
> +static void test_function_body_uprobe(void)
> +{
> +	struct get_func_ip_uprobe_test *skel = NULL;
> +	int err;
> +
> +	skel = get_func_ip_uprobe_test__open_and_load();
> +	if (!ASSERT_OK_PTR(skel, "get_func_ip_uprobe_test__open_and_load"))
> +		return;
> +
> +	err = get_func_ip_uprobe_test__attach(skel);
> +	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
> +		goto cleanup;
> +
> +	skel->bss->uprobe_trigger_body = (unsigned long) uprobe_trigger_body;
> +
> +	uprobe_trigger_body();
> +
> +	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> +
> +cleanup:
> +	get_func_ip_uprobe_test__destroy(skel);
> +}
> +
> +static void test_function_body(void)
> +{
> +	test_function_body_kprobe();
> +	test_function_body_uprobe();
> +}
>  #else
>  #define test_function_body()
>  #endif
> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> new file mode 100644
> index 000000000000..052f8a4345a8
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +unsigned long uprobe_trigger_body;
> +
> +__u64 test1_result = 0;
> +SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
> +int BPF_UPROBE(test1)
> +{
> +	__u64 addr = bpf_get_func_ip(ctx);
> +
> +	test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
> +	return 0;
> +}


* Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program
  2023-08-02 11:21   ` Alan Maguire
@ 2023-08-02 12:23     ` Jiri Olsa
  0 siblings, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2023-08-02 12:23 UTC (permalink / raw)
  To: Alan Maguire
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On Wed, Aug 02, 2023 at 12:21:59PM +0100, Alan Maguire wrote:

SNIP

> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index c92eb8c6ff08..7930a91ca7f3 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1057,9 +1057,28 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> >  #define get_entry_ip(fentry_ip) fentry_ip
> >  #endif
> >  
> > +#ifdef CONFIG_UPROBES
> > +static unsigned long bpf_get_func_ip_uprobe(struct pt_regs *regs)
> > +{
> > +	struct uprobe_dispatch_data *udd;
> > +
> > +	udd = (struct uprobe_dispatch_data *) current->utask->vaddr;
> > +	return udd->bp_addr;
> > +}
> > +#else
> > +#define bpf_get_func_ip_uprobe(regs) (u64) -1
> 
> Small thing - I don't think this value is exposed to users due to the
> run_ctx->is_uprobe predicate, but would it be worth using -EOPNOTSUPP
> here maybe?

I initially thought of putting a WARN_ON_ONCE in here, but as you said it
won't trigger, so I ended up with -1 .. but I don't mind using -EOPNOTSUPP

jirka

> 
> > +#endif
> > +
> >  BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> >  {
> > -	struct kprobe *kp = kprobe_running();
> > +	struct bpf_trace_run_ctx *run_ctx;
> > +	struct kprobe *kp;
> > +
> > +	run_ctx = container_of(current->bpf_ctx, struct bpf_trace_run_ctx, run_ctx);
> > +	if (run_ctx->is_uprobe)
> > +		return bpf_get_func_ip_uprobe(regs);
> > +
> > +	kp = kprobe_running();
> >  
> >  	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
> >  		return 0;
> > diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
> > index 01ea148723de..7dde806be91e 100644
> > --- a/kernel/trace/trace_probe.h
> > +++ b/kernel/trace/trace_probe.h
> > @@ -519,3 +519,8 @@ void __trace_probe_log_err(int offset, int err);
> >  
> >  #define trace_probe_log_err(offs, err)	\
> >  	__trace_probe_log_err(offs, TP_ERR_##err)
> > +
> > +struct uprobe_dispatch_data {
> > +	struct trace_uprobe	*tu;
> > +	unsigned long		bp_addr;
> > +};
> > diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> > index 555c223c3232..fc76c3985672 100644
> > --- a/kernel/trace/trace_uprobe.c
> > +++ b/kernel/trace/trace_uprobe.c
> > @@ -88,11 +88,6 @@ static struct trace_uprobe *to_trace_uprobe(struct dyn_event *ev)
> >  static int register_uprobe_event(struct trace_uprobe *tu);
> >  static int unregister_uprobe_event(struct trace_uprobe *tu);
> >  
> > -struct uprobe_dispatch_data {
> > -	struct trace_uprobe	*tu;
> > -	unsigned long		bp_addr;
> > -};
> > -
> >  static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs);
> >  static int uretprobe_dispatcher(struct uprobe_consumer *con,
> >  				unsigned long func, struct pt_regs *regs);
> > diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> > index 70da85200695..d21deb46f49f 100644
> > --- a/tools/include/uapi/linux/bpf.h
> > +++ b/tools/include/uapi/linux/bpf.h
> > @@ -5086,9 +5086,14 @@ union bpf_attr {
> >   * u64 bpf_get_func_ip(void *ctx)
> >   * 	Description
> >   * 		Get address of the traced function (for tracing and kprobe programs).
> > + *
> > + * 		When called for kprobe program attached as uprobe it returns
> > + * 		probe address for both entry and return uprobe.
> > + *
> >   * 	Return
> > - * 		Address of the traced function.
> > + * 		Address of the traced function for kprobe.
> >   * 		0 for kprobes placed within the function (not at the entry).
> > + * 		Address of the probe for uprobe and return uprobe.
> >   *
> >   * u64 bpf_get_attach_cookie(void *ctx)
> >   * 	Description


* Re: [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function
  2023-08-02 11:30   ` Alan Maguire
@ 2023-08-02 12:26     ` Jiri Olsa
  2023-08-02 12:42       ` Alan Maguire
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2023-08-02 12:26 UTC (permalink / raw)
  To: Alan Maguire
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On Wed, Aug 02, 2023 at 12:30:36PM +0100, Alan Maguire wrote:
> On 01/08/2023 08:30, Jiri Olsa wrote:
> > Adding get_func_ip test for uprobe inside function that validates
> > the get_func_ip helper returns correct probe address value.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../bpf/prog_tests/get_func_ip_test.c         | 40 ++++++++++++++++++-
> >  .../bpf/progs/get_func_ip_uprobe_test.c       | 18 +++++++++
> >  2 files changed, 57 insertions(+), 1 deletion(-)
> >  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> > 
> > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > index 114cdbc04caf..f199220ad6de 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > @@ -55,7 +55,16 @@ static void test_function_entry(void)
> >   * offset, disabling it for all other archs
> 
> nit: comment here
> 
> /* test6 is x86_64 specific because of the instruction
>  * offset, disabling it for all other archs
> 
> ...should probably be updated now multiple tests are gated by the
> #ifdef __x86_64__.

right, will update that

> 
> BTW I tested if these tests would pass on aarch64 with a few tweaks
> to instruction offsets, and they do. Something like the following
> gets all of the tests running and passing on aarch64:

nice, thanks a lot for testing that

SNIP

> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> index 052f8a4345a8..56af4a8447b9 100644
> --- a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
> @@ -8,11 +8,17 @@ char _license[] SEC("license") = "GPL";
>  unsigned long uprobe_trigger_body;
> 
>  __u64 test1_result = 0;
> +#if defined(__TARGET_ARCH_x86)
> +#define OFFSET 1
>  SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
> +#elif defined(__TARGET_ARCH_arm64)
> +#define OFFSET 4
> +SEC("uprobe//proc/self/exe:uprobe_trigger_body+4")
> +#endif
>  int BPF_UPROBE(test1)
>  {
>         __u64 addr = bpf_get_func_ip(ctx);
> 
> -       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
> +       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + OFFSET;
>         return 0;
>  }
> 
> 
> Anyway if you're doing a later version and want to roll something like
> the above in feel free, otherwise I can send a followup patch later on.
> Regardless, for the series on aarch64:

I'd prefer if you could send a follow-up for arm, because I have
no easy way to test that change

thanks,
jirka


* Re: [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function
  2023-08-02 12:26     ` Jiri Olsa
@ 2023-08-02 12:42       ` Alan Maguire
  0 siblings, 0 replies; 14+ messages in thread
From: Alan Maguire @ 2023-08-02 12:42 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google),
	Steven Rostedt

On 02/08/2023 13:26, Jiri Olsa wrote:
> On Wed, Aug 02, 2023 at 12:30:36PM +0100, Alan Maguire wrote:
>> On 01/08/2023 08:30, Jiri Olsa wrote:
>>> Adding get_func_ip test for uprobe inside function that validates
>>> the get_func_ip helper returns correct probe address value.
>>>
>>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>>> ---
>>>  .../bpf/prog_tests/get_func_ip_test.c         | 40 ++++++++++++++++++-
>>>  .../bpf/progs/get_func_ip_uprobe_test.c       | 18 +++++++++
>>>  2 files changed, 57 insertions(+), 1 deletion(-)
>>>  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
>>>
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>>> index 114cdbc04caf..f199220ad6de 100644
>>> --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>>> +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>>> @@ -55,7 +55,16 @@ static void test_function_entry(void)
>>>   * offset, disabling it for all other archs
>>
>> nit: comment here
>>
>> /* test6 is x86_64 specific because of the instruction
>>  * offset, disabling it for all other archs
>>
>> ...should probably be updated now multiple tests are gated by the
>> #ifdef __x86_64__.
> 
> right will update that
> 
>>
>> BTW I tested if these tests would pass on aarch64 with a few tweaks
>> to instruction offsets, and they do. Something like the following
>> gets all of the tests running and passing on aarch64:
> 
> nice, thanks a lot for testing that
> 
> SNIP
> 
>> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
>> index 052f8a4345a8..56af4a8447b9 100644
>> --- a/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
>> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_uprobe_test.c
>> @@ -8,11 +8,17 @@ char _license[] SEC("license") = "GPL";
>>  unsigned long uprobe_trigger_body;
>>
>>  __u64 test1_result = 0;
>> +#if defined(__TARGET_ARCH_x86)
>> +#define OFFSET 1
>>  SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
>> +#elif defined(__TARGET_ARCH_arm64)
>> +#define OFFSET 4
>> +SEC("uprobe//proc/self/exe:uprobe_trigger_body+4")
>> +#endif
>>  int BPF_UPROBE(test1)
>>  {
>>         __u64 addr = bpf_get_func_ip(ctx);
>>
>> -       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
>> +       test1_result = (const void *) addr == (const void *) uprobe_trigger_body + OFFSET;
>>         return 0;
>>  }
>>
>>
>> Anyway if you're doing a later version and want to roll something like
>> the above in feel free, otherwise I can send a followup patch later on.
>> Regardless, for the series on aarch64:
> 
> I'd prefer if you could send a follow-up for arm, because I have
> no easy way to test that change
>

sure, will do! thanks!

Alan


end of thread, other threads:[~2023-08-02 12:43 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-01  7:29 [PATCH bpf-next 0/3] bpf: Support bpf_get_func_ip helper in uprobes Jiri Olsa
2023-08-01  7:30 ` [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program Jiri Olsa
2023-08-01 11:53   ` Yafang Shao
2023-08-01 19:44     ` Yonghong Song
2023-08-01 20:18       ` Yonghong Song
2023-08-01 20:43         ` Alexei Starovoitov
2023-08-02  7:15           ` Jiri Olsa
2023-08-02 11:21   ` Alan Maguire
2023-08-02 12:23     ` Jiri Olsa
2023-08-01  7:30 ` [PATCH bpf-next 2/3] selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry Jiri Olsa
2023-08-01  7:30 ` [PATCH bpf-next 3/3] selftests/bpf: Add bpf_get_func_ip test for uprobe inside function Jiri Olsa
2023-08-02 11:30   ` Alan Maguire
2023-08-02 12:26     ` Jiri Olsa
2023-08-02 12:42       ` Alan Maguire
