* [PATCH bpf-next v2 00/10] bpf: tracing session supporting
@ 2025-10-22 8:01 Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 01/10] bpf: add tracing session support Menglong Dong
` (7 more replies)
0 siblings, 8 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
Sometimes we need to hook both the entry and the exit of a function with
TRACING. Currently, that requires defining both a FENTRY and a FEXIT
program for the target function, which is not convenient.
Therefore, add tracing session support to TRACING. It is similar to the
kprobe session: a single BPF program can hook both the entry and the exit
of a function, and the return value of the fentry part controls whether
the fexit part runs.
A session cookie is also supported via the kfunc bpf_fsession_cookie().
The kfuncs bpf_tracing_is_exit() and bpf_fsession_cookie() are both
inlined by the verifier.
We allow bpf_get_func_ret() to be used in the fentry part of a tracing
session: there it always returns "0", which is safe.
The whole fsession implementation is arch-specific, so -EOPNOTSUPP is
returned if the arch does not support it yet. This series only implements
x86_64 support; other architectures will follow.
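As a rough illustration of the intended usage, the control flow of a session program can be exercised as ordinary C with the two kfuncs stubbed out. This is a hypothetical sketch, not code from this series; only the kfunc names come from the patches:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the kernel-side kfuncs, so the control flow can
 * be run as ordinary C. Real programs call the kfuncs this series adds. */
static bool stub_is_exit;
static uint64_t stub_cookie;

static bool bpf_tracing_is_exit(void *ctx) { (void)ctx; return stub_is_exit; }
static uint64_t *bpf_fsession_cookie(void *ctx) { (void)ctx; return &stub_cookie; }

/* One program body handles both halves: at entry it stashes a cookie and
 * returns 0 (run the fexit half) or non-zero (skip it); at exit it reads
 * the cookie back. */
static int session_prog(void *ctx)
{
	uint64_t *cookie = bpf_fsession_cookie(ctx);

	if (!bpf_tracing_is_exit(ctx)) {
		*cookie = 42;	/* entry half: record state */
		return 0;	/* 0 => let the fexit half run */
	}
	return (int)*cookie;	/* exit half: consume state */
}
```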
Changes since v1:
* session cookie support.
In this version, the session cookie is implemented and the kfunc
bpf_fsession_cookie() is added.
* restructure the layout of the stack.
In this version, the layout of the session data stored on the stack is
changed: it is now located after the return value so as not to break
bpf_get_func_ip().
* testcase enhancement.
Some nits in the testcases pointed out by Jiri are fixed. Testcases for
get_func_ip and the session cookie are added as well.
Menglong Dong (10):
bpf: add tracing session support
bpf: add kfunc bpf_tracing_is_exit for TRACE_SESSION
bpf: add kfunc bpf_fsession_cookie for TRACING SESSION
bpf,x86: add ret_off to invoke_bpf()
bpf,x86: add tracing session supporting for x86_64
libbpf: add support for tracing session
selftests/bpf: test get_func_ip for fsession
selftests/bpf: add testcases for tracing session
selftests/bpf: add session cookie testcase for fsession
selftests/bpf: add testcase for mixing fsession, fentry and fexit
arch/arm64/net/bpf_jit_comp.c | 3 +
arch/loongarch/net/bpf_jit.c | 3 +
arch/powerpc/net/bpf_jit_comp.c | 3 +
arch/riscv/net/bpf_jit_comp64.c | 3 +
arch/s390/net/bpf_jit_comp.c | 3 +
arch/x86/net/bpf_jit_comp.c | 214 ++++++++++++++++--
include/linux/bpf.h | 2 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/btf.c | 2 +
kernel/bpf/syscall.c | 2 +
kernel/bpf/trampoline.c | 5 +-
kernel/bpf/verifier.c | 45 +++-
kernel/trace/bpf_trace.c | 59 ++++-
net/bpf/test_run.c | 1 +
net/core/bpf_sk_storage.c | 1 +
tools/bpf/bpftool/common.c | 1 +
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/bpf.c | 2 +
tools/lib/bpf/libbpf.c | 3 +
.../selftests/bpf/prog_tests/fsession_test.c | 161 +++++++++++++
.../bpf/prog_tests/get_func_ip_test.c | 2 +
.../bpf/prog_tests/tracing_failure.c | 2 +-
.../selftests/bpf/progs/fsession_cookie.c | 49 ++++
.../selftests/bpf/progs/fsession_mixed.c | 45 ++++
.../selftests/bpf/progs/fsession_test.c | 175 ++++++++++++++
.../selftests/bpf/progs/get_func_ip_test.c | 14 ++
26 files changed, 776 insertions(+), 26 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fsession_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fsession_cookie.c
create mode 100644 tools/testing/selftests/bpf/progs/fsession_mixed.c
create mode 100644 tools/testing/selftests/bpf/progs/fsession_test.c
--
2.51.1.dirty
* [PATCH bpf-next v2 01/10] bpf: add tracing session support
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 02/10] bpf: add kfunc bpf_tracing_is_exit for TRACE_SESSION Menglong Dong
` (6 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
The tracing session is similar to the kprobe session. It allows a single
BPF program to be attached to both the entry and the exit of the target
functions.
When the fentry part returns a non-zero value, the fexit part is skipped,
as with the kprobe session.
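The skip semantics can be pictured in plain C. This is only a sketch of the trampoline's decision, not the actual JIT output; all names here are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

static int entry_ret;	/* value the fentry half will return */
static int exit_calls;	/* how many times the fexit half ran */

static int demo_entry(void) { return entry_ret; }
static void demo_exit(void) { exit_calls++; }

/* Sketch of the trampoline's decision: run the program once at entry;
 * a non-zero return suppresses the exit-side invocation. */
static bool run_session(int (*entry)(void), void (*exit_cb)(void))
{
	if (entry() != 0)
		return false;	/* fexit skipped */
	/* ... original function runs here ... */
	exit_cb();
	return true;
}
```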
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Co-developed-by: Leon Hwang <leon.hwang@linux.dev>
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
arch/arm64/net/bpf_jit_comp.c | 3 +++
arch/loongarch/net/bpf_jit.c | 3 +++
arch/powerpc/net/bpf_jit_comp.c | 3 +++
arch/riscv/net/bpf_jit_comp64.c | 3 +++
arch/s390/net/bpf_jit_comp.c | 3 +++
arch/x86/net/bpf_jit_comp.c | 3 +++
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/btf.c | 2 ++
kernel/bpf/syscall.c | 2 ++
kernel/bpf/trampoline.c | 5 ++++-
kernel/bpf/verifier.c | 12 +++++++++---
net/bpf/test_run.c | 1 +
net/core/bpf_sk_storage.c | 1 +
tools/include/uapi/linux/bpf.h | 1 +
.../selftests/bpf/prog_tests/tracing_failure.c | 2 +-
16 files changed, 41 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index ab83089c3d8f..06f4bd6c6755 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2788,6 +2788,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *image, *tmp;
int ret;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
/* image doesn't need to be in module memory range, so we can
* use kvmalloc.
*/
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index cbe53d0b7fb0..ad596341658a 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1739,6 +1739,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *image, *tmp;
struct jit_ctx ctx;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
size = ro_image_end - ro_image;
image = kvmalloc(size, GFP_KERNEL);
if (!image)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 88ad5ba7b87f..bcc0ce09f6fa 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -1017,6 +1017,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
void *rw_image, *tmp;
int ret;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
/*
* rw_image doesn't need to be in module memory range, so we can
* use kvmalloc.
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 45cbc7c6fe49..55b0284bf177 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1286,6 +1286,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
struct rv_jit_context ctx;
u32 size = ro_image_end - ro_image;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
image = kvmalloc(size, GFP_KERNEL);
if (!image)
return -ENOMEM;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index cf461d76e9da..3f25bf55b150 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2924,6 +2924,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
struct bpf_tramp_jit tjit;
int ret;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
/* Compute offsets, check whether the code fits. */
memset(&tjit, 0, sizeof(tjit));
ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index d4c93d9e73e4..389c3a96e2b8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3478,6 +3478,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
int ret;
u32 size = image_end - image;
+ if (tlinks[BPF_TRAMP_SESSION].nr_links)
+ return -EOPNOTSUPP;
+
/* rw_image doesn't need to be in module memory range, so we can
* use kvmalloc.
*/
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 204f9c759a41..3f6cad5df2db 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1270,6 +1270,7 @@ enum bpf_tramp_prog_type {
BPF_TRAMP_FENTRY,
BPF_TRAMP_FEXIT,
BPF_TRAMP_MODIFY_RETURN,
+ BPF_TRAMP_SESSION,
BPF_TRAMP_MAX,
BPF_TRAMP_REPLACE, /* more than MAX */
};
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6829936d33f5..79ba3023e8be 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1133,6 +1133,7 @@ enum bpf_attach_type {
BPF_NETKIT_PEER,
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
+ BPF_TRACE_SESSION,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0de8fc8a0e0b..2c1c3e0caff8 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6107,6 +6107,7 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
+ case BPF_TRACE_SESSION:
/* allow u64* as ctx */
if (btf_is_int(t) && t->size == 8)
return 0;
@@ -6704,6 +6705,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
fallthrough;
case BPF_LSM_CGROUP:
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_SESSION:
/* When LSM programs are attached to void LSM hooks
* they use FEXIT trampolines and when attached to
* int LSM hooks, they use MODIFY_RETURN trampolines.
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 8a129746bd6c..cb483701fe39 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3564,6 +3564,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
case BPF_PROG_TYPE_TRACING:
if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
prog->expected_attach_type != BPF_TRACE_FEXIT &&
+ prog->expected_attach_type != BPF_TRACE_SESSION &&
prog->expected_attach_type != BPF_MODIFY_RETURN) {
err = -EINVAL;
goto out_put_prog;
@@ -4337,6 +4338,7 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_TRACE_RAW_TP:
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_SESSION:
case BPF_MODIFY_RETURN:
return BPF_PROG_TYPE_TRACING;
case BPF_LSM_MAC:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 5949095e51c3..f6d4dea3461e 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -111,7 +111,7 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
return (ptype == BPF_PROG_TYPE_TRACING &&
(eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
- eatype == BPF_MODIFY_RETURN)) ||
+ eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_SESSION)) ||
(ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC);
}
@@ -418,6 +418,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
+ tlinks[BPF_TRAMP_SESSION].nr_links ||
tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
* should not be set together.
@@ -515,6 +516,8 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
return BPF_TRAMP_MODIFY_RETURN;
case BPF_TRACE_FEXIT:
return BPF_TRAMP_FEXIT;
+ case BPF_TRACE_SESSION:
+ return BPF_TRAMP_SESSION;
case BPF_LSM_MAC:
if (!prog->aux->attach_func_proto->type)
/* The function returns void, we cannot modify its
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9b4f6920f79b..3ffdf2143f16 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17281,6 +17281,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
break;
case BPF_TRACE_RAW_TP:
case BPF_MODIFY_RETURN:
+ case BPF_TRACE_SESSION:
return 0;
case BPF_TRACE_ITER:
break;
@@ -22736,6 +22737,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
if (prog_type == BPF_PROG_TYPE_TRACING &&
insn->imm == BPF_FUNC_get_func_ret) {
if (eatype == BPF_TRACE_FEXIT ||
+ eatype == BPF_TRACE_SESSION ||
eatype == BPF_MODIFY_RETURN) {
/* Load nr_args from ctx - 8 */
insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -23677,7 +23679,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
if (tgt_prog->type == BPF_PROG_TYPE_TRACING &&
prog_extension &&
(tgt_prog->expected_attach_type == BPF_TRACE_FENTRY ||
- tgt_prog->expected_attach_type == BPF_TRACE_FEXIT)) {
+ tgt_prog->expected_attach_type == BPF_TRACE_FEXIT ||
+ tgt_prog->expected_attach_type == BPF_TRACE_SESSION)) {
/* Program extensions can extend all program types
* except fentry/fexit. The reason is the following.
* The fentry/fexit programs are used for performance
@@ -23692,7 +23695,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
* beyond reasonable stack size. Hence extending fentry
* is not allowed.
*/
- bpf_log(log, "Cannot extend fentry/fexit\n");
+ bpf_log(log, "Cannot extend fentry/fexit/session\n");
return -EINVAL;
}
} else {
@@ -23776,6 +23779,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
case BPF_LSM_CGROUP:
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_SESSION:
if (!btf_type_is_func(t)) {
bpf_log(log, "attach_btf_id %u is not a function\n",
btf_id);
@@ -23942,6 +23946,7 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
+ case BPF_TRACE_SESSION:
return true;
default:
return false;
@@ -24023,9 +24028,10 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
tgt_info.tgt_name);
return -EINVAL;
} else if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
+ prog->expected_attach_type == BPF_TRACE_SESSION ||
prog->expected_attach_type == BPF_MODIFY_RETURN) &&
btf_id_set_contains(&noreturn_deny, btf_id)) {
- verbose(env, "Attaching fexit/fmod_ret to __noreturn function '%s' is rejected.\n",
+ verbose(env, "Attaching fexit/session/fmod_ret to __noreturn function '%s' is rejected.\n",
tgt_info.tgt_name);
return -EINVAL;
}
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 73d5f0d9f5f4..f7f0fd5383c4 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -685,6 +685,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
switch (prog->expected_attach_type) {
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_SESSION:
if (bpf_fentry_test1(1) != 2 ||
bpf_fentry_test2(2, 3) != 5 ||
bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index d3fbaf89a698..8da8834aa134 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -365,6 +365,7 @@ static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
return true;
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_SESSION:
return !!strncmp(prog->aux->attach_func_name, "bpf_sk_storage",
strlen("bpf_sk_storage"));
default:
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 6829936d33f5..79ba3023e8be 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1133,6 +1133,7 @@ enum bpf_attach_type {
BPF_NETKIT_PEER,
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
+ BPF_TRACE_SESSION,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_failure.c b/tools/testing/selftests/bpf/prog_tests/tracing_failure.c
index 10e231965589..58b02552507d 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_failure.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_failure.c
@@ -73,7 +73,7 @@ static void test_tracing_deny(void)
static void test_fexit_noreturns(void)
{
test_tracing_fail_prog("fexit_noreturns",
- "Attaching fexit/fmod_ret to __noreturn function 'do_exit' is rejected.");
+ "Attaching fexit/session/fmod_ret to __noreturn function 'do_exit' is rejected.");
}
void test_tracing_failure(void)
--
2.51.1.dirty
* [PATCH bpf-next v2 02/10] bpf: add kfunc bpf_tracing_is_exit for TRACE_SESSION
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 01/10] bpf: add tracing session support Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 03/10] bpf: add kfunc bpf_fsession_cookie for TRACING SESSION Menglong Dong
` (5 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
If a TRACE_SESSION program exists, an extra 8 bytes on the stack of the
trampoline is used to store the flags that we need. These 8 bytes lie
after the return value, i.e. at ctx[nr_args + 1], and the "is_exit" flag
is stored in their lowest bit.
Introduce the kfunc bpf_tracing_is_exit(), which tells whether the
program is currently running at fexit. Also inline it in the verifier.
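The layout arithmetic can be exercised in plain C against a mocked ctx buffer (the function mirrors the kfunc added below; the stack array stands in for the real trampoline stack):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the layout the patch describes: ctx[-1] holds nr_args,
 * ctx[0..nr_args-1] the arguments, ctx[nr_args] the return value, and
 * ctx[nr_args + 1] the session flags, with is_exit in bit 0. */
static bool tracing_is_exit(void *ctx)
{
	uint64_t nr_args = ((uint64_t *)ctx)[-1];

	return ((uint64_t *)ctx)[nr_args + 1] & 1;
}
```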
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Co-developed-by: Leon Hwang <leon.hwang@linux.dev>
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
v2:
- store the session flags after return value, instead of before nr_args
- inline the bpf_tracing_is_exit, as Jiri suggested
---
kernel/bpf/verifier.c | 15 +++++++++++-
kernel/trace/bpf_trace.c | 49 +++++++++++++++++++++++++++++++++++++---
2 files changed, 60 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3ffdf2143f16..a4d0dd4440fd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12293,6 +12293,7 @@ enum special_kfunc_type {
KF___bpf_trap,
KF_bpf_task_work_schedule_signal,
KF_bpf_task_work_schedule_resume,
+ KF_bpf_tracing_is_exit,
};
BTF_ID_LIST(special_kfunc_list)
@@ -12365,6 +12366,7 @@ BTF_ID(func, bpf_res_spin_unlock_irqrestore)
BTF_ID(func, __bpf_trap)
BTF_ID(func, bpf_task_work_schedule_signal)
BTF_ID(func, bpf_task_work_schedule_resume)
+BTF_ID(func, bpf_tracing_is_exit)
static bool is_task_work_add_kfunc(u32 func_id)
{
@@ -12419,7 +12421,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_reg_state *reg = &regs[regno];
bool arg_mem_size = false;
- if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx])
+ if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
+ meta->func_id == special_kfunc_list[KF_bpf_tracing_is_exit])
return KF_ARG_PTR_TO_CTX;
/* In this function, we verify the kfunc's BTF as per the argument type,
@@ -21994,6 +21997,16 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
*cnt = 1;
+ } else if (desc->func_id == special_kfunc_list[KF_bpf_tracing_is_exit]) {
+ /* Load nr_args from ctx - 8 */
+ insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+ /* add rax, 1 */
+ insn_buf[1] = BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1);
+ insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
+ insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
+ insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
+ insn_buf[5] = BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1);
+ *cnt = 6;
}
if (env->insn_aux_data[insn_idx].arg_prog) {
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4f87c16d915a..d0720d850621 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3356,12 +3356,55 @@ static const struct btf_kfunc_id_set bpf_kprobe_multi_kfunc_set = {
.filter = bpf_kprobe_multi_filter,
};
-static int __init bpf_kprobe_multi_kfuncs_init(void)
+__bpf_kfunc_start_defs();
+
+__bpf_kfunc bool bpf_tracing_is_exit(void *ctx)
+{
+ /* This helper call is inlined by verifier. */
+ u64 nr_args = ((u64 *)ctx)[-1];
+
+ /*
+ * ctx[nr_args + 1] is the session flags, and the last bit is
+ * is_exit.
+ */
+ return ((u64 *)ctx)[nr_args + 1] & 1;
+}
+
+__bpf_kfunc_end_defs();
+
+BTF_KFUNCS_START(tracing_kfunc_set_ids)
+BTF_ID_FLAGS(func, bpf_tracing_is_exit, KF_FASTCALL)
+BTF_KFUNCS_END(tracing_kfunc_set_ids)
+
+static int bpf_tracing_filter(const struct bpf_prog *prog, u32 kfunc_id)
{
- return register_btf_kfunc_id_set(BPF_PROG_TYPE_KPROBE, &bpf_kprobe_multi_kfunc_set);
+ if (!btf_id_set8_contains(&tracing_kfunc_set_ids, kfunc_id))
+ return 0;
+
+ if (prog->type != BPF_PROG_TYPE_TRACING ||
+ prog->expected_attach_type != BPF_TRACE_SESSION)
+ return -EINVAL;
+
+ return 0;
+}
+
+static const struct btf_kfunc_id_set bpf_tracing_kfunc_set = {
+ .owner = THIS_MODULE,
+ .set = &tracing_kfunc_set_ids,
+ .filter = bpf_tracing_filter,
+};
+
+static int __init bpf_trace_kfuncs_init(void)
+{
+ int err = 0;
+
+ err = err ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_KPROBE, &bpf_kprobe_multi_kfunc_set);
+ err = err ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_tracing_kfunc_set);
+
+ return err;
}
-late_initcall(bpf_kprobe_multi_kfuncs_init);
+late_initcall(bpf_trace_kfuncs_init);
typedef int (*copy_fn_t)(void *dst, const void *src, u32 size, struct task_struct *tsk);
--
2.51.1.dirty
* [PATCH bpf-next v2 03/10] bpf: add kfunc bpf_fsession_cookie for TRACING SESSION
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 01/10] bpf: add tracing session support Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 02/10] bpf: add kfunc bpf_tracing_is_exit for TRACE_SESSION Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 04/10] bpf,x86: add ret_off to invoke_bpf() Menglong Dong
` (4 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
Add the kfunc bpf_fsession_cookie(), which is similar to
bpf_session_cookie() and returns the address of the session cookie. The
address of the session cookie is stored after the session flags, i.e. at
ctx[nr_args + 2].
Inline this kfunc in the verifier too.
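The cookie slot lookup can likewise be checked on a mocked ctx buffer in plain C (mirroring the kfunc added below; the stack array stands in for the trampoline stack):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the layout: ctx[-1] is nr_args, and ctx[nr_args + 2] holds
 * the address of the cookie slot, one u64 after the session flags. */
static uint64_t *fsession_cookie(void *ctx)
{
	uint64_t nr_args = ((uint64_t *)ctx)[-1];

	return (uint64_t *)(uintptr_t)((uint64_t *)ctx)[nr_args + 2];
}
```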
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
include/linux/bpf.h | 1 +
kernel/bpf/verifier.c | 20 ++++++++++++++++++--
kernel/trace/bpf_trace.c | 10 ++++++++++
3 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3f6cad5df2db..83d5d4d3120d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1735,6 +1735,7 @@ struct bpf_prog {
enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
call_get_func_ip:1, /* Do we call get_func_ip() */
+ call_session_cookie:1, /* Do we call bpf_fsession_cookie() */
tstamp_type_access:1, /* Accessed __sk_buff->tstamp_type */
sleepable:1; /* BPF program is sleepable */
enum bpf_prog_type type; /* Type of BPF program */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a4d0dd4440fd..ab46e5fbc7a6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12294,6 +12294,7 @@ enum special_kfunc_type {
KF_bpf_task_work_schedule_signal,
KF_bpf_task_work_schedule_resume,
KF_bpf_tracing_is_exit,
+ KF_bpf_fsession_cookie,
};
BTF_ID_LIST(special_kfunc_list)
@@ -12367,6 +12368,7 @@ BTF_ID(func, __bpf_trap)
BTF_ID(func, bpf_task_work_schedule_signal)
BTF_ID(func, bpf_task_work_schedule_resume)
BTF_ID(func, bpf_tracing_is_exit)
+BTF_ID(func, bpf_fsession_cookie)
static bool is_task_work_add_kfunc(u32 func_id)
{
@@ -12422,7 +12424,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
bool arg_mem_size = false;
if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
- meta->func_id == special_kfunc_list[KF_bpf_tracing_is_exit])
+ meta->func_id == special_kfunc_list[KF_bpf_tracing_is_exit] ||
+ meta->func_id == special_kfunc_list[KF_bpf_fsession_cookie])
return KF_ARG_PTR_TO_CTX;
/* In this function, we verify the kfunc's BTF as per the argument type,
@@ -13915,7 +13918,8 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
}
}
- if (meta.func_id == special_kfunc_list[KF_bpf_session_cookie]) {
+ if (meta.func_id == special_kfunc_list[KF_bpf_session_cookie] ||
+ meta.func_id == special_kfunc_list[KF_bpf_fsession_cookie]) {
meta.r0_size = sizeof(u64);
meta.r0_rdonly = false;
}
@@ -14196,6 +14200,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return err;
}
+ if (meta.func_id == special_kfunc_list[KF_bpf_fsession_cookie])
+ env->prog->call_session_cookie = true;
+
return 0;
}
@@ -22007,6 +22014,15 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
insn_buf[5] = BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1);
*cnt = 6;
+ } else if (desc->func_id == special_kfunc_list[KF_bpf_fsession_cookie]) {
+ /* Load nr_args from ctx - 8 */
+ insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+ /* add rax, 2 */
+ insn_buf[1] = BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2);
+ insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
+ insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
+ insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
+ *cnt = 5;
}
if (env->insn_aux_data[insn_idx].arg_prog) {
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d0720d850621..4a8568bd654d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3370,10 +3370,20 @@ __bpf_kfunc bool bpf_tracing_is_exit(void *ctx)
return ((u64 *)ctx)[nr_args + 1] & 1;
}
+__bpf_kfunc u64 *bpf_fsession_cookie(void *ctx)
+{
+ /* This helper call is inlined by verifier. */
+ u64 nr_args = ((u64 *)ctx)[-1];
+
+ /* ctx[nr_args + 2] is the session cookie address */
+ return (u64 *)((u64 *)ctx)[nr_args + 2];
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(tracing_kfunc_set_ids)
BTF_ID_FLAGS(func, bpf_tracing_is_exit, KF_FASTCALL)
+BTF_ID_FLAGS(func, bpf_fsession_cookie, KF_FASTCALL)
BTF_KFUNCS_END(tracing_kfunc_set_ids)
static int bpf_tracing_filter(const struct bpf_prog *prog, u32 kfunc_id)
--
2.51.1.dirty
* [PATCH bpf-next v2 04/10] bpf,x86: add ret_off to invoke_bpf()
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
` (2 preceding siblings ...)
2025-10-22 8:01 ` [PATCH bpf-next v2 03/10] bpf: add kfunc bpf_fsession_cookie for TRACING SESSION Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 05/10] bpf,x86: add tracing session supporting for x86_64 Menglong Dong
` (3 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
For now, the offset of the return value on the trampoline stack is a
fixed 8 bytes. Introduce the variable "ret_off" to represent that offset.
For now, "ret_off" is still just 8; a following patch will change it in
order to use the room after it.
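As a rough sketch in plain C, simplified to the two fields touched here and not the actual JIT code, the bookkeeping this patch parameterizes looks like:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified mirror of the offset accumulation in
 * __arch_prepare_bpf_trampoline(): offsets count down from RBP, and the
 * return value slot is reserved first, so ret_off is 8 for now. Later
 * patches in the series reserve additional session room, which is why
 * the offset becomes a variable instead of a hard-coded 8. */
static void compute_offsets(int nr_regs, bool save_ret,
			    int *ret_off, int *regs_off)
{
	int stack_size = 0;

	if (save_ret)
		stack_size += 8;	/* [rbp - ret_off] = return value */
	*ret_off = stack_size;

	stack_size += nr_regs * 8;	/* saved argument registers */
	*regs_off = stack_size;
}
```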
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
arch/x86/net/bpf_jit_comp.c | 41 +++++++++++++++++++++----------------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 389c3a96e2b8..7a604ee9713f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2940,7 +2940,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
struct bpf_tramp_link *l, int stack_size,
- int run_ctx_off, bool save_ret,
+ int run_ctx_off, bool save_ret, int ret_off,
void *image, void *rw_image)
{
u8 *prog = *pprog;
@@ -3005,7 +3005,7 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
* value of BPF_PROG_TYPE_STRUCT_OPS prog.
*/
if (save_ret)
- emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
/* replace 2 nops with JE insn, since jmp target is known */
jmp_insn[0] = X86_JE;
@@ -3055,7 +3055,7 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
struct bpf_tramp_links *tl, int stack_size,
- int run_ctx_off, bool save_ret,
+ int run_ctx_off, bool save_ret, int ret_off,
void *image, void *rw_image)
{
int i;
@@ -3063,7 +3063,8 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
for (i = 0; i < tl->nr_links; i++) {
if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
- run_ctx_off, save_ret, image, rw_image))
+ run_ctx_off, save_ret, ret_off, image,
+ rw_image))
return -EINVAL;
}
*pprog = prog;
@@ -3072,7 +3073,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
struct bpf_tramp_links *tl, int stack_size,
- int run_ctx_off, u8 **branches,
+ int run_ctx_off, int ret_off, u8 **branches,
void *image, void *rw_image)
{
u8 *prog = *pprog;
@@ -3082,18 +3083,18 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
* Set this to 0 to avoid confusing the program.
*/
emit_mov_imm32(&prog, false, BPF_REG_0, 0);
- emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
for (i = 0; i < tl->nr_links; i++) {
if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true,
- image, rw_image))
+ ret_off, image, rw_image))
return -EINVAL;
- /* mod_ret prog stored return value into [rbp - 8]. Emit:
- * if (*(u64 *)(rbp - 8) != 0)
+ /* mod_ret prog stored return value into [rbp - ret_off]. Emit:
+ * if (*(u64 *)(rbp - ret_off) != 0)
* goto do_fexit;
*/
- /* cmp QWORD PTR [rbp - 0x8], 0x0 */
- EMIT4(0x48, 0x83, 0x7d, 0xf8); EMIT1(0x00);
+ /* cmp QWORD PTR [rbp - ret_off], 0x0 */
+ EMIT4(0x48, 0x83, 0x7d, -ret_off); EMIT1(0x00);
/* Save the location of the branch and Generate 6 nops
* (4 bytes for an offset and 2 bytes for the jump) These nops
@@ -3179,7 +3180,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
void *func_addr)
{
int i, ret, nr_regs = m->nr_args, stack_size = 0;
- int regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
+ int ret_off, regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off,
+ rbx_off;
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
@@ -3213,7 +3215,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* RBP + 8 [ return address ]
* RBP + 0 [ RBP ]
*
- * RBP - 8 [ return value ] BPF_TRAMP_F_CALL_ORIG or
+ * RBP - ret_off [ return value ] BPF_TRAMP_F_CALL_ORIG or
* BPF_TRAMP_F_RET_FENTRY_RET flags
*
* [ reg_argN ] always
@@ -3239,6 +3241,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
if (save_ret)
stack_size += 8;
+ ret_off = stack_size;
stack_size += nr_regs * 8;
regs_off = stack_size;
@@ -3341,7 +3344,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
if (fentry->nr_links) {
if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off,
- flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image))
+ flags & BPF_TRAMP_F_RET_FENTRY_RET, ret_off,
+ image, rw_image))
return -EINVAL;
}
@@ -3352,7 +3356,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
return -ENOMEM;
if (invoke_bpf_mod_ret(m, &prog, fmod_ret, regs_off,
- run_ctx_off, branches, image, rw_image)) {
+ run_ctx_off, ret_off, branches,
+ image, rw_image)) {
ret = -EINVAL;
goto cleanup;
}
@@ -3380,7 +3385,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
}
/* remember return value in a stack for bpf prog to access */
- emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
im->ip_after_call = image + (prog - (u8 *)rw_image);
emit_nops(&prog, X86_PATCH_SIZE);
}
@@ -3403,7 +3408,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
if (fexit->nr_links) {
if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off,
- false, image, rw_image)) {
+ false, ret_off, image, rw_image)) {
ret = -EINVAL;
goto cleanup;
}
@@ -3433,7 +3438,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* restore return value of orig_call or fentry prog back into RAX */
if (save_ret)
- emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+ emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -ret_off);
emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
EMIT1(0xC9); /* leave */
--
2.51.1.dirty
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH bpf-next v2 05/10] bpf,x86: add tracing session supporting for x86_64
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
` (3 preceding siblings ...)
2025-10-22 8:01 ` [PATCH bpf-next v2 04/10] bpf,x86: add ret_off to invoke_bpf() Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 06/10] libbpf: add support for tracing session Menglong Dong
` (2 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
Add BPF_TRACE_SESSION support to x86_64. invoke_bpf_session_entry() and
invoke_bpf_session_exit() are introduced for this purpose.
In invoke_bpf_session_entry(), we check whether the return value of the
fentry is 0, and set the corresponding session flag if it is not. In
invoke_bpf_session_exit(), we check whether that flag is set; if it is,
the fexit is skipped.
As designed, the session flags and the session cookie address are stored
after the return value, so the stack looks like this:
cookie ptr -> 8 bytes
session flags -> 8 bytes
return value -> 8 bytes
argN -> 8 bytes
...
arg1 -> 8 bytes
nr_args -> 8 bytes
...
cookieN -> 8 bytes
cookie1 -> 8 bytes
On session entry, we clear the return value, so the fentry will always
read 0 via ctx[nr_args] or bpf_get_func_ret().
Before each BPF prog executes, the "cookie ptr" is filled with the
corresponding cookie address; this is done in invoke_bpf_session_entry()
and invoke_bpf_session_exit().
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Co-developed-by: Leon Hwang <leon.hwang@linux.dev>
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
v2:
- add session cookie support
- add the session stuff after return value, instead of before nr_args
---
arch/x86/net/bpf_jit_comp.c | 185 +++++++++++++++++++++++++++++++++++-
1 file changed, 181 insertions(+), 4 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7a604ee9713f..2fffc530c88c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3109,6 +3109,148 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
return 0;
}
+static int invoke_bpf_session_entry(const struct btf_func_model *m, u8 **pprog,
+ struct bpf_tramp_links *tl, int stack_size,
+ int run_ctx_off, int ret_off, int sflags_off,
+ int cookies_off, void *image, void *rw_image)
+{
+ int i, j = 0, cur_cookie_off;
+ u64 session_flags;
+ u8 *prog = *pprog;
+ u8 *jmp_insn;
+
+ /* clear the session flags:
+ * xor rax, rax
+ * mov QWORD PTR [rbp - sflags_off], rax
+ */
+ EMIT3(0x48, 0x31, 0xC0);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -sflags_off);
+ /*
+ * clear the return value to make sure bpf_get_func_ret() always
+ * get 0 in fentry:
+ * mov QWORD PTR [rbp - ret_off], rax
+ */
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
+ /* clear all the cookies in the cookie array */
+ for (i = 0; i < tl->nr_links; i++) {
+ if (tl->links[i]->link.prog->call_session_cookie) {
+ cur_cookie_off = -cookies_off + j * 8;
+ /* mov QWORD PTR [rbp + cur_cookie_off], rax */
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
+ cur_cookie_off);
+ j++;
+ }
+ }
+
+ j = 0;
+ for (i = 0; i < tl->nr_links; i++) {
+ if (tl->links[i]->link.prog->call_session_cookie) {
+ cur_cookie_off = -cookies_off + j * 8;
+ /*
+ * save the cookie address to rbp - sflags_off + 8:
+ * lea rax, [rbp + cur_cookie_off]
+ * mov QWORD PTR [rbp - sflags_off + 8], rax
+ */
+ if (!is_imm8(cur_cookie_off))
+ EMIT3_off32(0x48, 0x8D, 0x85, cur_cookie_off);
+ else
+ EMIT4(0x48, 0x8D, 0x45, cur_cookie_off);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -sflags_off + 8);
+ j++;
+ }
+ if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true,
+ ret_off, image, rw_image))
+ return -EINVAL;
+
+ /* fentry prog stored return value into [rbp - ret_off]. Emit:
+ * if (*(u64 *)(rbp - ret_off) != 0) {
+ * *(u64 *)(rbp - sflags_off) |= (1 << (i + 1));
+ * *(u64 *)(rbp - ret_off) = 0;
+ * }
+ */
+ /* cmp QWORD PTR [rbp - ret_off], 0x0 */
+ EMIT4(0x48, 0x83, 0x7d, -ret_off); EMIT1(0x00);
+ /* emit 2 nops that will be replaced with JE insn */
+ jmp_insn = prog;
+ emit_nops(&prog, 2);
+
+ session_flags = (1ULL << (i + 1));
+ /* mov rax, $session_flags */
+ emit_mov_imm64(&prog, BPF_REG_0, session_flags >> 32, (u32) session_flags);
+ /* or QWORD PTR [rbp - sflags_off], rax */
+ EMIT2(0x48, 0x09);
+ emit_insn_suffix(&prog, BPF_REG_FP, BPF_REG_0, -sflags_off);
+
+ /* mov QWORD PTR [rbp - ret_off], 0x0 */
+ EMIT4(0x48, 0xC7, 0x45, -ret_off); EMIT4(0x00, 0x00, 0x00, 0x00);
+
+ jmp_insn[0] = X86_JE;
+ jmp_insn[1] = prog - jmp_insn - 2;
+ }
+
+ *pprog = prog;
+ return 0;
+}
+
+static int invoke_bpf_session_exit(const struct btf_func_model *m, u8 **pprog,
+ struct bpf_tramp_links *tl, int stack_size,
+ int run_ctx_off, int ret_off, int sflags_off,
+ int cookies_off, void *image, void *rw_image)
+{
+ int i, j = 0, cur_cookie_off;
+ u64 session_flags;
+ u8 *prog = *pprog;
+ u8 *jmp_insn;
+
+ /*
+ * set the bpf_trace_is_exit flag to the session flags:
+ * mov rax, 1
+ * or QWORD PTR [rbp - sflags_off], rax
+ */
+ emit_mov_imm32(&prog, false, BPF_REG_0, 1);
+ EMIT2(0x48, 0x09);
+ emit_insn_suffix(&prog, BPF_REG_FP, BPF_REG_0, -sflags_off);
+
+ for (i = 0; i < tl->nr_links; i++) {
+ if (tl->links[i]->link.prog->call_session_cookie) {
+ cur_cookie_off = -cookies_off + j * 8;
+ /*
+ * save the cookie address to rbp - sflags_off + 8:
+ * lea rax, [rbp + cur_cookie_off]
+ * mov QWORD PTR [rbp - sflags_off + 8], rax
+ */
+ if (!is_imm8(cur_cookie_off))
+ EMIT3_off32(0x48, 0x8D, 0x85, cur_cookie_off);
+ else
+ EMIT4(0x48, 0x8D, 0x45, cur_cookie_off);
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -sflags_off + 8);
+ j++;
+ }
+ /* check if (1 << (i+1)) is set in the session flags, and
+ * skip the execution of the fexit program if it is.
+ */
+ session_flags = 1ULL << (i + 1);
+ /* mov rax, $session_flags */
+ emit_mov_imm64(&prog, BPF_REG_0, session_flags >> 32, (u32) session_flags);
+ /* test QWORD PTR [rbp - sflags_off], rax */
+ EMIT2(0x48, 0x85);
+ emit_insn_suffix(&prog, BPF_REG_FP, BPF_REG_0, -sflags_off);
+ /* emit 2 nops that will be replaced with JNE insn */
+ jmp_insn = prog;
+ emit_nops(&prog, 2);
+
+ if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, false,
+ ret_off, image, rw_image))
+ return -EINVAL;
+
+ jmp_insn[0] = X86_JNE;
+ jmp_insn[1] = prog - jmp_insn - 2;
+ }
+
+ *pprog = prog;
+ return 0;
+}
+
/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
#define LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack) \
__LOAD_TCC_PTR(-round_up(stack, 8) - 8)
@@ -3181,8 +3323,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
{
int i, ret, nr_regs = m->nr_args, stack_size = 0;
int ret_off, regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off,
- rbx_off;
+ rbx_off, sflags_off = 0, cookies_off;
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_links *session = &tlinks[BPF_TRAMP_SESSION];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
void *orig_call = func_addr;
@@ -3215,6 +3358,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* RBP + 8 [ return address ]
* RBP + 0 [ RBP ]
*
+ * [ cookie ptr ] tracing session
+ * RBP - sflags_off [ session flags ] tracing session
+ *
* RBP - ret_off [ return value ] BPF_TRAMP_F_CALL_ORIG or
* BPF_TRAMP_F_RET_FENTRY_RET flags
*
@@ -3230,6 +3376,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
*
* RBP - run_ctx_off [ bpf_tramp_run_ctx ]
*
+ * [ session cookieN ]
+ * [ ... ]
+ * RBP - cookies_off [ session cookie1 ] tracing session
+ *
* [ stack_argN ] BPF_TRAMP_F_CALL_ORIG
* [ ... ]
* [ stack_arg2 ]
@@ -3237,6 +3387,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* RSP [ tail_call_cnt_ptr ] BPF_TRAMP_F_TAIL_CALL_CTX
*/
+ /* room for session flags and cookie ptr */
+ if (session->nr_links) {
+ stack_size += 8 + 8;
+ sflags_off = stack_size;
+ }
+
/* room for return value of orig_call or fentry prog */
save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
if (save_ret)
@@ -3261,6 +3417,14 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
run_ctx_off = stack_size;
+ if (session->nr_links) {
+ for (i = 0; i < session->nr_links; i++) {
+ if (session->links[i]->link.prog->call_session_cookie)
+ stack_size += 8;
+ }
+ }
+ cookies_off = stack_size;
+
if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) {
/* the space that used to pass arguments on-stack */
stack_size += (nr_regs - get_nr_used_regs(m)) * 8;
@@ -3349,6 +3513,13 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
return -EINVAL;
}
+ if (session->nr_links) {
+ if (invoke_bpf_session_entry(m, &prog, session, regs_off,
+ run_ctx_off, ret_off, sflags_off,
+ cookies_off, image, rw_image))
+ return -EINVAL;
+ }
+
if (fmod_ret->nr_links) {
branches = kcalloc(fmod_ret->nr_links, sizeof(u8 *),
GFP_KERNEL);
@@ -3414,6 +3585,15 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
}
+ if (session->nr_links) {
+ if (invoke_bpf_session_exit(m, &prog, session, regs_off,
+ run_ctx_off, ret_off, sflags_off,
+ cookies_off, image, rw_image)) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
if (flags & BPF_TRAMP_F_RESTORE_REGS)
restore_regs(m, &prog, regs_off);
@@ -3483,9 +3663,6 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
int ret;
u32 size = image_end - image;
- if (tlinks[BPF_TRAMP_SESSION].nr_links)
- return -EOPNOTSUPP;
-
/* rw_image doesn't need to be in module memory range, so we can
* use kvmalloc.
*/
--
2.51.1.dirty
* [PATCH bpf-next v2 06/10] libbpf: add support for tracing session
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
` (4 preceding siblings ...)
2025-10-22 8:01 ` [PATCH bpf-next v2 05/10] bpf,x86: add tracing session supporting for x86_64 Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 8:01 ` [PATCH bpf-next v2 07/10] selftests/bpf: test get_func_ip for fsession Menglong Dong
2025-10-22 8:19 ` [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
Add BPF_TRACE_SESSION to libbpf and bpftool.
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
tools/bpf/bpftool/common.c | 1 +
tools/lib/bpf/bpf.c | 2 ++
tools/lib/bpf/libbpf.c | 3 +++
3 files changed, 6 insertions(+)
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index e8daf963ecef..534be6cfa2be 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -1191,6 +1191,7 @@ const char *bpf_attach_type_input_str(enum bpf_attach_type t)
case BPF_TRACE_FENTRY: return "fentry";
case BPF_TRACE_FEXIT: return "fexit";
case BPF_MODIFY_RETURN: return "mod_ret";
+ case BPF_TRACE_SESSION: return "fsession";
case BPF_SK_REUSEPORT_SELECT: return "sk_skb_reuseport_select";
case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: return "sk_skb_reuseport_select_or_migrate";
default: return libbpf_bpf_attach_type_str(t);
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 339b19797237..caed2b689068 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -794,6 +794,7 @@ int bpf_link_create(int prog_fd, int target_fd,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
+ case BPF_TRACE_SESSION:
case BPF_LSM_MAC:
attr.link_create.tracing.cookie = OPTS_GET(opts, tracing.cookie, 0);
if (!OPTS_ZEROED(opts, tracing))
@@ -917,6 +918,7 @@ int bpf_link_create(int prog_fd, int target_fd,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
+ case BPF_TRACE_SESSION:
return bpf_raw_tracepoint_open(NULL, prog_fd);
default:
return libbpf_err(err);
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index dd3b2f57082d..e582620cd097 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -115,6 +115,7 @@ static const char * const attach_type_name[] = {
[BPF_TRACE_FENTRY] = "trace_fentry",
[BPF_TRACE_FEXIT] = "trace_fexit",
[BPF_MODIFY_RETURN] = "modify_return",
+ [BPF_TRACE_SESSION] = "trace_session",
[BPF_LSM_MAC] = "lsm_mac",
[BPF_LSM_CGROUP] = "lsm_cgroup",
[BPF_SK_LOOKUP] = "sk_lookup",
@@ -9607,6 +9608,8 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("fentry.s+", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("fmod_ret.s+", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("fexit.s+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
+ SEC_DEF("fsession+", TRACING, BPF_TRACE_SESSION, SEC_ATTACH_BTF, attach_trace),
+ SEC_DEF("fsession.s+", TRACING, BPF_TRACE_SESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("freplace+", EXT, 0, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("lsm+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
SEC_DEF("lsm.s+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
--
2.51.1.dirty
* [PATCH bpf-next v2 07/10] selftests/bpf: test get_func_ip for fsession
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
` (5 preceding siblings ...)
2025-10-22 8:01 ` [PATCH bpf-next v2 06/10] libbpf: add support for tracing session Menglong Dong
@ 2025-10-22 8:01 ` Menglong Dong
2025-10-22 14:11 ` Menglong Dong
2025-10-22 8:19 ` [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
7 siblings, 1 reply; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:01 UTC (permalink / raw)
To: ast, jolsa
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
Since the stack layout changed for fsession, we should also test
bpf_get_func_ip() for it.
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
.../selftests/bpf/prog_tests/get_func_ip_test.c | 2 ++
.../testing/selftests/bpf/progs/get_func_ip_test.c | 14 ++++++++++++++
2 files changed, 16 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index c40242dfa8fb..a9078a1dbb07 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -46,6 +46,8 @@ static void test_function_entry(void)
ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
ASSERT_EQ(skel->bss->test7_result, 1, "test7_result");
ASSERT_EQ(skel->bss->test8_result, 1, "test8_result");
+ ASSERT_EQ(skel->bss->test9_result1, 1, "test9_result1");
+ ASSERT_EQ(skel->bss->test9_result2, 1, "test9_result2");
cleanup:
get_func_ip_test__destroy(skel);
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index 2011cacdeb18..9acb79fc7537 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -103,3 +103,17 @@ int BPF_URETPROBE(test8, int ret)
test8_result = (const void *) addr == (const void *) uprobe_trigger;
return 0;
}
+
+__u64 test9_result1 = 0;
+__u64 test9_result2 = 0;
+SEC("fsession/bpf_fentry_test1")
+int BPF_PROG(test9, int a)
+{
+ __u64 addr = bpf_get_func_ip(ctx);
+
+ if (bpf_tracing_is_exit(ctx))
+ test9_result1 = (const void *) addr == &bpf_fentry_test1;
+ else
+ test9_result2 = (const void *) addr == &bpf_fentry_test1;
+ return 0;
+}
--
2.51.1.dirty
* Re: [PATCH bpf-next v2 00/10] bpf: tracing session supporting
2025-10-22 8:01 [PATCH bpf-next v2 00/10] bpf: tracing session supporting Menglong Dong
` (6 preceding siblings ...)
2025-10-22 8:01 ` [PATCH bpf-next v2 07/10] selftests/bpf: test get_func_ip for fsession Menglong Dong
@ 2025-10-22 8:19 ` Menglong Dong
7 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 8:19 UTC (permalink / raw)
To: ast, jolsa, Menglong Dong
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
On 2025/10/22 16:01, Menglong Dong wrote:
> Sometimes, we need to hook both the entry and exit of a function with
> TRACING. Therefore, we need to define a FENTRY and a FEXIT for the target
> function, which is not convenient.
>
> Therefore, we add tracing session support for TRACING. Generally
> speaking, it's similar to kprobe session, which can hook both the entry
> and exit of a function with a single BPF program. Meanwhile, it can also
> control the execution of the fexit with the return value of the fentry.
> Session cookie is also supported with the kfunc bpf_fsession_cookie().
>
> The kfunc bpf_tracing_is_exit() and bpf_fsession_cookie() are both inlined
> in the verifier.
Hmm... gmail broke after it sent the 7th patch, so I sent the
remaining 3 patches again. However, they are not recognized
together as a series. So awkward :/
Should I resend it?
Thanks!
Menglong Dong
>
> We allow the usage of bpf_get_func_ret() to get the return value in the
> fentry of the tracing session, as it will always get "0", which is safe
> enough and is OK.
>
> The whole fsession stuff is arch-specific, so -EOPNOTSUPP will be
> returned if it is not supported yet by the arch. In this series, we only
> support x86_64. And later, other arch will be implemented.
>
> Changes since v1:
> * session cookie support.
> In this version, session cookie is implemented, and the kfunc
> bpf_fsession_cookie() is added.
>
> * restructure the layout of the stack.
> In this version, the session stuff that is stored in the stack is changed,
> and we locate them after the return value to not break
> bpf_get_func_ip().
>
> * testcase enhancement.
> Some nits in the testcase suggested by Jiri are fixed. Meanwhile,
> the testcase for get_func_ip and session cookie is added too.
>
> Menglong Dong (10):
> bpf: add tracing session support
> bpf: add kfunc bpf_tracing_is_exit for TRACE_SESSION
> bpf: add kfunc bpf_fsession_cookie for TRACING SESSION
> bpf,x86: add ret_off to invoke_bpf()
> bpf,x86: add tracing session supporting for x86_64
> libbpf: add support for tracing session
> selftests/bpf: test get_func_ip for fsession
> selftests/bpf: add testcases for tracing session
> selftests/bpf: add session cookie testcase for fsession
> selftests/bpf: add testcase for mixing fsession, fentry and fexit
>
> arch/arm64/net/bpf_jit_comp.c | 3 +
> arch/loongarch/net/bpf_jit.c | 3 +
> arch/powerpc/net/bpf_jit_comp.c | 3 +
> arch/riscv/net/bpf_jit_comp64.c | 3 +
> arch/s390/net/bpf_jit_comp.c | 3 +
> arch/x86/net/bpf_jit_comp.c | 214 ++++++++++++++++--
> include/linux/bpf.h | 2 +
> include/uapi/linux/bpf.h | 1 +
> kernel/bpf/btf.c | 2 +
> kernel/bpf/syscall.c | 2 +
> kernel/bpf/trampoline.c | 5 +-
> kernel/bpf/verifier.c | 45 +++-
> kernel/trace/bpf_trace.c | 59 ++++-
> net/bpf/test_run.c | 1 +
> net/core/bpf_sk_storage.c | 1 +
> tools/bpf/bpftool/common.c | 1 +
> tools/include/uapi/linux/bpf.h | 1 +
> tools/lib/bpf/bpf.c | 2 +
> tools/lib/bpf/libbpf.c | 3 +
> .../selftests/bpf/prog_tests/fsession_test.c | 161 +++++++++++++
> .../bpf/prog_tests/get_func_ip_test.c | 2 +
> .../bpf/prog_tests/tracing_failure.c | 2 +-
> .../selftests/bpf/progs/fsession_cookie.c | 49 ++++
> .../selftests/bpf/progs/fsession_mixed.c | 45 ++++
> .../selftests/bpf/progs/fsession_test.c | 175 ++++++++++++++
> .../selftests/bpf/progs/get_func_ip_test.c | 14 ++
> 26 files changed, 776 insertions(+), 26 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/fsession_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/fsession_cookie.c
> create mode 100644 tools/testing/selftests/bpf/progs/fsession_mixed.c
> create mode 100644 tools/testing/selftests/bpf/progs/fsession_test.c
>
>
* Re: [PATCH bpf-next v2 07/10] selftests/bpf: test get_func_ip for fsession
2025-10-22 8:01 ` [PATCH bpf-next v2 07/10] selftests/bpf: test get_func_ip for fsession Menglong Dong
@ 2025-10-22 14:11 ` Menglong Dong
0 siblings, 0 replies; 10+ messages in thread
From: Menglong Dong @ 2025-10-22 14:11 UTC (permalink / raw)
To: ast, jolsa, Menglong Dong
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, mattbobrowski, rostedt,
mhiramat, mathieu.desnoyers, leon.hwang, jiang.biao, bpf,
linux-kernel, linux-trace-kernel
On 2025/10/22 16:01, Menglong Dong wrote:
> As the layout of the stack changed for fsession, we'd better test
> bpf_get_func_ip() for it.
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> ---
> .../selftests/bpf/prog_tests/get_func_ip_test.c | 2 ++
> .../testing/selftests/bpf/progs/get_func_ip_test.c | 14 ++++++++++++++
> 2 files changed, 16 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> index c40242dfa8fb..a9078a1dbb07 100644
> --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> @@ -46,6 +46,8 @@ static void test_function_entry(void)
> ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
> ASSERT_EQ(skel->bss->test7_result, 1, "test7_result");
> ASSERT_EQ(skel->bss->test8_result, 1, "test8_result");
> + ASSERT_EQ(skel->bss->test9_result1, 1, "test9_result1");
> + ASSERT_EQ(skel->bss->test9_result2, 1, "test9_result2");
Oops, the fsession part should be factored out and skipped
on non-x86_64 arches; it failed the CI for !X86_64 :(
I'll fix it in the next version.
>
> cleanup:
> get_func_ip_test__destroy(skel);
> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> index 2011cacdeb18..9acb79fc7537 100644
> --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> @@ -103,3 +103,17 @@ int BPF_URETPROBE(test8, int ret)
> test8_result = (const void *) addr == (const void *) uprobe_trigger;
> return 0;
> }
> +
> +__u64 test9_result1 = 0;
> +__u64 test9_result2 = 0;
> +SEC("fsession/bpf_fentry_test1")
> +int BPF_PROG(test9, int a)
> +{
> + __u64 addr = bpf_get_func_ip(ctx);
> +
> + if (bpf_tracing_is_exit(ctx))
> + test9_result1 = (const void *) addr == &bpf_fentry_test1;
> + else
> + test9_result2 = (const void *) addr == &bpf_fentry_test1;
> + return 0;
> +}
>