* [PATCH bpf-next v2 0/3] Separate tests that need error injection
@ 2026-02-10 14:14 Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Viktor Malik @ 2026-02-10 14:14 UTC (permalink / raw)
To: bpf
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Viktor Malik, Qais Yousef, Hou Tao
Some enterprise kernels (such as RHEL) do not enable error injection via
BPF (CONFIG_FUNCTION_ERROR_INJECTION and CONFIG_BPF_KPROBE_OVERRIDE).
When running test_progs on such kernels, a lot of test cases fail since
they use sleepable fentry or fmod_ret program types which require error
injection to be enabled. While it is possible to skip these via custom
DENYLIST, some test_progs are not properly split into subtests and
therefore must be entirely skipped.
This patch series splits such tests into subtests, namely module_attach
and read_vsyscall. In addition, the last patch separates a sleepable
fentry test out of the LSM test suite into a separate test so that it
can be skipped without skipping other LSM tests.
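As an illustration of what the split enables (the concrete subtest names below are just examples; -t and -d are test_progs' standard test-selection and denylist flags):

```shell
# run everything except a subtest that needs error injection;
# with the split, only the affected subtest has to be denied:
./test_progs -d module_attach/handle_fmod_ret

# or run a single subtest directly:
./test_progs -t read_vsyscall/copy_from_user
```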
Changes in v2:
- Fix indices in read_vsyscall/copy_from_user subtest (reported by CI)
Viktor Malik (3):
selftests/bpf: Split module_attach into subtests
selftests/bpf: Split read_vsyscall into subtests
selftests/bpf: Split sleepable fentry from LSM test
.../bpf/prog_tests/fentry_sleepable.c | 28 +++
.../selftests/bpf/prog_tests/module_attach.c | 168 +++++++++++++-----
.../selftests/bpf/prog_tests/read_vsyscall.c | 41 ++++-
.../selftests/bpf/prog_tests/test_lsm.c | 8 -
.../selftests/bpf/progs/fentry_sleepable.c | 28 +++
tools/testing/selftests/bpf/progs/lsm.c | 21 ---
.../selftests/bpf/progs/read_vsyscall.c | 18 +-
.../selftests/bpf/progs/test_module_attach.c | 63 +++----
8 files changed, 249 insertions(+), 126 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_sleepable.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_sleepable.c
--
2.53.0
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests
2026-02-10 14:14 [PATCH bpf-next v2 0/3] Separate tests that need error injection Viktor Malik
@ 2026-02-10 14:14 ` Viktor Malik
2026-02-10 14:47 ` bot+bpf-ci
2026-02-10 16:01 ` Mykyta Yatsenko
2026-02-10 14:14 ` [PATCH bpf-next v2 2/3] selftests/bpf: Split read_vsyscall " Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 3/3] selftests/bpf: Split sleepable fentry from LSM test Viktor Malik
2 siblings, 2 replies; 7+ messages in thread
From: Viktor Malik @ 2026-02-10 14:14 UTC (permalink / raw)
To: bpf
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Viktor Malik, Qais Yousef, Hou Tao
The test verifies attachment to various hooks in a kernel module;
however, everything is flattened into a single test. When running the
test on a kernel which doesn't support some of the hooks, it is
impossible to skip them selectively.
Isolate each BPF program into a separate subtest. This is done by
disabling auto-loading of programs and loading and testing each program
separately.
Signed-off-by: Viktor Malik <vmalik@redhat.com>
---
.../selftests/bpf/prog_tests/module_attach.c | 168 +++++++++++++-----
.../selftests/bpf/progs/test_module_attach.c | 63 +++----
2 files changed, 149 insertions(+), 82 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/module_attach.c b/tools/testing/selftests/bpf/prog_tests/module_attach.c
index 70fa7ae93173..8668c26c202f 100644
--- a/tools/testing/selftests/bpf/prog_tests/module_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/module_attach.c
@@ -6,6 +6,23 @@
#include "test_module_attach.skel.h"
#include "testing_helpers.h"
+static const char * const read_tests[] = {
+ "handle_raw_tp",
+ "handle_tp_btf",
+ "handle_fentry",
+ "handle_fentry_explicit",
+ "handle_fmod_ret",
+};
+
+static const char * const detach_tests[] = {
+ "handle_fentry",
+ "handle_fexit",
+ "kprobe_multi",
+};
+
+static const int READ_SZ = 456;
+static const int WRITE_SZ = 457;
+
static int duration;
static int trigger_module_test_writable(int *val)
@@ -33,27 +50,66 @@ static int trigger_module_test_writable(int *val)
return 0;
}
-void test_module_attach(void)
+static void test_module_attach_prog(const char *prog_name, int sz,
+ const char *attach_target, int ret)
{
- const int READ_SZ = 456;
- const int WRITE_SZ = 457;
- struct test_module_attach* skel;
- struct test_module_attach__bss *bss;
- struct bpf_link *link;
+ struct test_module_attach *skel;
+ struct bpf_program *prog;
int err;
- int writable_val = 0;
skel = test_module_attach__open();
if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
return;
- err = bpf_program__set_attach_target(skel->progs.handle_fentry_manual,
- 0, "bpf_testmod_test_read");
- ASSERT_OK(err, "set_attach_target");
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "find program"))
+ goto cleanup;
+ bpf_program__set_autoload(prog, true);
- err = bpf_program__set_attach_target(skel->progs.handle_fentry_explicit_manual,
- 0, "bpf_testmod:bpf_testmod_test_read");
- ASSERT_OK(err, "set_attach_target_explicit");
+ if (attach_target) {
+ err = bpf_program__set_attach_target(prog, 0, attach_target);
+ ASSERT_OK(err, attach_target);
+ }
+
+ err = test_module_attach__load(skel);
+ if (CHECK(err, "skel_load", "failed to load skeleton\n"))
+ return;
+
+ err = test_module_attach__attach(skel);
+ if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+ goto cleanup;
+
+ if (sz) {
+ /* trigger both read and write though each test uses only one */
+ ASSERT_OK(trigger_module_test_read(sz), "trigger_read");
+ ASSERT_OK(trigger_module_test_write(sz), "trigger_write");
+
+ ASSERT_EQ(skel->bss->sz, sz, prog_name);
+ }
+
+ if (ret)
+ ASSERT_EQ(skel->bss->retval, ret, "ret");
+cleanup:
+ test_module_attach__destroy(skel);
+}
+
+static void test_module_attach_writable(void)
+{
+ struct test_module_attach__bss *bss;
+ struct test_module_attach *skel;
+ struct bpf_program *prog;
+ int writable_val = 0;
+ int err;
+
+ skel = test_module_attach__open();
+ if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+ return;
+
+ prog = bpf_object__find_program_by_name(skel->obj,
+ "handle_raw_tp_writable_bare");
+ if (!ASSERT_OK_PTR(prog, "find program"))
+ goto cleanup;
+ bpf_program__set_autoload(prog, true);
err = test_module_attach__load(skel);
if (CHECK(err, "skel_load", "failed to load skeleton\n"))
@@ -65,21 +121,6 @@ void test_module_attach(void)
if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
goto cleanup;
- /* trigger tracepoint */
- ASSERT_OK(trigger_module_test_read(READ_SZ), "trigger_read");
- ASSERT_OK(trigger_module_test_write(WRITE_SZ), "trigger_write");
-
- ASSERT_EQ(bss->raw_tp_read_sz, READ_SZ, "raw_tp");
- ASSERT_EQ(bss->raw_tp_bare_write_sz, WRITE_SZ, "raw_tp_bare");
- ASSERT_EQ(bss->tp_btf_read_sz, READ_SZ, "tp_btf");
- ASSERT_EQ(bss->fentry_read_sz, READ_SZ, "fentry");
- ASSERT_EQ(bss->fentry_manual_read_sz, READ_SZ, "fentry_manual");
- ASSERT_EQ(bss->fentry_explicit_read_sz, READ_SZ, "fentry_explicit");
- ASSERT_EQ(bss->fentry_explicit_manual_read_sz, READ_SZ, "fentry_explicit_manual");
- ASSERT_EQ(bss->fexit_read_sz, READ_SZ, "fexit");
- ASSERT_EQ(bss->fexit_ret, -EIO, "fexit_tet");
- ASSERT_EQ(bss->fmod_ret_read_sz, READ_SZ, "fmod_ret");
-
bss->raw_tp_writable_bare_early_ret = true;
bss->raw_tp_writable_bare_out_val = 0xf1f2f3f4;
ASSERT_OK(trigger_module_test_writable(&writable_val),
@@ -87,31 +128,72 @@ void test_module_attach(void)
ASSERT_EQ(bss->raw_tp_writable_bare_in_val, 1024, "writable_test_in");
ASSERT_EQ(bss->raw_tp_writable_bare_out_val, writable_val,
"writable_test_out");
+cleanup:
+ test_module_attach__destroy(skel);
+}
- test_module_attach__detach(skel);
-
- /* attach fentry/fexit and make sure it gets module reference */
- link = bpf_program__attach(skel->progs.handle_fentry);
- if (!ASSERT_OK_PTR(link, "attach_fentry"))
- goto cleanup;
+static void test_module_attach_detach(const char *prog_name)
+{
+ struct test_module_attach *skel;
+ struct bpf_program *prog;
+ struct bpf_link *link;
+ int err;
- ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
- bpf_link__destroy(link);
+ skel = test_module_attach__open();
+ if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+ return;
- link = bpf_program__attach(skel->progs.handle_fexit);
- if (!ASSERT_OK_PTR(link, "attach_fexit"))
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "find program"))
goto cleanup;
+ bpf_program__set_autoload(prog, true);
- ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
- bpf_link__destroy(link);
+ err = test_module_attach__load(skel);
+ if (CHECK(err, "skel_load", "failed to load skeleton\n"))
+ goto cleanup;
- link = bpf_program__attach(skel->progs.kprobe_multi);
- if (!ASSERT_OK_PTR(link, "attach_kprobe_multi"))
+ /* attach and make sure it gets module reference */
+ link = bpf_program__attach(prog);
+ if (!ASSERT_OK_PTR(link, "attach"))
goto cleanup;
ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
bpf_link__destroy(link);
-
cleanup:
test_module_attach__destroy(skel);
}
+
+void test_module_attach(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(read_tests); i++) {
+ if (!test__start_subtest(read_tests[i]))
+ continue;
+ test_module_attach_prog(read_tests[i], READ_SZ, NULL, 0);
+ }
+ if (test__start_subtest("handle_raw_tp_bare")) {
+ test_module_attach_prog("handle_raw_tp_bare", WRITE_SZ, NULL,
+ 0);
+ }
+ if (test__start_subtest("handle_raw_tp_writable_bare"))
+ test_module_attach_writable();
+ if (test__start_subtest("handle_fentry_manual")) {
+ test_module_attach_prog("handle_fentry_manual", READ_SZ,
+ "bpf_testmod_test_read", 0);
+ }
+ if (test__start_subtest("handle_fentry_explicit_manual")) {
+ test_module_attach_prog("handle_fentry_explicit_manual",
+ READ_SZ,
+ "bpf_testmod:bpf_testmod_test_read", 0);
+ }
+ if (test__start_subtest("handle_fexit"))
+ test_module_attach_prog("handle_fexit", READ_SZ, NULL, -EIO);
+ if (test__start_subtest("handle_fexit_ret"))
+ test_module_attach_prog("handle_fexit_ret", 0, NULL, 0);
+ for (i = 0; i < ARRAY_SIZE(detach_tests); i++) {
+ if (!test__start_subtest(detach_tests[i]))
+ continue;
+ test_module_attach_detach(detach_tests[i]);
+ }
+}
diff --git a/tools/testing/selftests/bpf/progs/test_module_attach.c b/tools/testing/selftests/bpf/progs/test_module_attach.c
index 03d7f89787a1..5609e388fb58 100644
--- a/tools/testing/selftests/bpf/progs/test_module_attach.c
+++ b/tools/testing/selftests/bpf/progs/test_module_attach.c
@@ -7,23 +7,21 @@
#include <bpf/bpf_core_read.h>
#include "../test_kmods/bpf_testmod.h"
-__u32 raw_tp_read_sz = 0;
+__u32 sz = 0;
-SEC("raw_tp/bpf_testmod_test_read")
+SEC("?raw_tp/bpf_testmod_test_read")
int BPF_PROG(handle_raw_tp,
struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
{
- raw_tp_read_sz = BPF_CORE_READ(read_ctx, len);
+ sz = BPF_CORE_READ(read_ctx, len);
return 0;
}
-__u32 raw_tp_bare_write_sz = 0;
-
-SEC("raw_tp/bpf_testmod_test_write_bare_tp")
+SEC("?raw_tp/bpf_testmod_test_write_bare_tp")
int BPF_PROG(handle_raw_tp_bare,
struct task_struct *task, struct bpf_testmod_test_write_ctx *write_ctx)
{
- raw_tp_bare_write_sz = BPF_CORE_READ(write_ctx, len);
+ sz = BPF_CORE_READ(write_ctx, len);
return 0;
}
@@ -31,7 +29,7 @@ int raw_tp_writable_bare_in_val = 0;
int raw_tp_writable_bare_early_ret = 0;
int raw_tp_writable_bare_out_val = 0;
-SEC("raw_tp.w/bpf_testmod_test_writable_bare_tp")
+SEC("?raw_tp.w/bpf_testmod_test_writable_bare_tp")
int BPF_PROG(handle_raw_tp_writable_bare,
struct bpf_testmod_test_writable_ctx *writable)
{
@@ -41,76 +39,65 @@ int BPF_PROG(handle_raw_tp_writable_bare,
return 0;
}
-__u32 tp_btf_read_sz = 0;
-
-SEC("tp_btf/bpf_testmod_test_read")
+SEC("?tp_btf/bpf_testmod_test_read")
int BPF_PROG(handle_tp_btf,
struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
{
- tp_btf_read_sz = read_ctx->len;
+ sz = read_ctx->len;
return 0;
}
-__u32 fentry_read_sz = 0;
-
-SEC("fentry/bpf_testmod_test_read")
+SEC("?fentry/bpf_testmod_test_read")
int BPF_PROG(handle_fentry,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
{
- fentry_read_sz = len;
+ sz = len;
return 0;
}
-__u32 fentry_manual_read_sz = 0;
-
-SEC("fentry")
+SEC("?fentry")
int BPF_PROG(handle_fentry_manual,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
{
- fentry_manual_read_sz = len;
+ sz = len;
return 0;
}
-__u32 fentry_explicit_read_sz = 0;
-
-SEC("fentry/bpf_testmod:bpf_testmod_test_read")
+SEC("?fentry/bpf_testmod:bpf_testmod_test_read")
int BPF_PROG(handle_fentry_explicit,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
{
- fentry_explicit_read_sz = len;
+ sz = len;
return 0;
}
-__u32 fentry_explicit_manual_read_sz = 0;
-
-SEC("fentry")
+SEC("?fentry")
int BPF_PROG(handle_fentry_explicit_manual,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
{
- fentry_explicit_manual_read_sz = len;
+ sz = len;
return 0;
}
-__u32 fexit_read_sz = 0;
-int fexit_ret = 0;
+int retval = 0;
-SEC("fexit/bpf_testmod_test_read")
+SEC("?fexit/bpf_testmod_test_read")
int BPF_PROG(handle_fexit,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len,
int ret)
{
- fexit_read_sz = len;
- fexit_ret = ret;
+ sz = len;
+ retval = ret;
return 0;
}
-SEC("fexit/bpf_testmod_return_ptr")
+SEC("?fexit/bpf_testmod_return_ptr")
int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
{
long buf = 0;
@@ -122,18 +109,16 @@ int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
return 0;
}
-__u32 fmod_ret_read_sz = 0;
-
-SEC("fmod_ret/bpf_testmod_test_read")
+SEC("?fmod_ret/bpf_testmod_test_read")
int BPF_PROG(handle_fmod_ret,
struct file *file, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
{
- fmod_ret_read_sz = len;
+ sz = len;
return 0; /* don't override the exit code */
}
-SEC("kprobe.multi/bpf_testmod_test_read")
+SEC("?kprobe.multi/bpf_testmod_test_read")
int BPF_PROG(kprobe_multi)
{
return 0;
--
2.53.0
* [PATCH bpf-next v2 2/3] selftests/bpf: Split read_vsyscall into subtests
2026-02-10 14:14 [PATCH bpf-next v2 0/3] Separate tests that need error injection Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
@ 2026-02-10 14:14 ` Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 3/3] selftests/bpf: Split sleepable fentry from LSM test Viktor Malik
2 siblings, 0 replies; 7+ messages in thread
From: Viktor Malik @ 2026-02-10 14:14 UTC (permalink / raw)
To: bpf
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Viktor Malik, Qais Yousef, Hou Tao
Split the test into two subtests: one for checking bpf_probe_read* of a
vsyscall page from a non-sleepable probe and one for checking
bpf_copy_from_user* of a vsyscall page from a sleepable probe.
The split is useful when running the test on kernels which do not
support sleepable fentry programs, as it allows skipping just a part of
the test.
Signed-off-by: Viktor Malik <vmalik@redhat.com>
---
.../selftests/bpf/prog_tests/read_vsyscall.c | 41 ++++++++++++++++---
.../selftests/bpf/progs/read_vsyscall.c | 18 ++++----
2 files changed, 44 insertions(+), 15 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/read_vsyscall.c b/tools/testing/selftests/bpf/prog_tests/read_vsyscall.c
index a8d1eaa67020..bbacd1309046 100644
--- a/tools/testing/selftests/bpf/prog_tests/read_vsyscall.c
+++ b/tools/testing/selftests/bpf/prog_tests/read_vsyscall.c
@@ -14,22 +14,30 @@
struct read_ret_desc {
const char *name;
int ret;
-} all_read[] = {
+};
+
+struct read_ret_desc all_probe_read[] = {
{ .name = "probe_read_kernel", .ret = -ERANGE },
{ .name = "probe_read_kernel_str", .ret = -ERANGE },
{ .name = "probe_read", .ret = -ERANGE },
{ .name = "probe_read_str", .ret = -ERANGE },
{ .name = "probe_read_user", .ret = -EFAULT },
{ .name = "probe_read_user_str", .ret = -EFAULT },
+};
+
+struct read_ret_desc all_copy_from_user[] = {
{ .name = "copy_from_user", .ret = -EFAULT },
{ .name = "copy_from_user_task", .ret = -EFAULT },
{ .name = "copy_from_user_str", .ret = -EFAULT },
{ .name = "copy_from_user_task_str", .ret = -EFAULT },
};
-void test_read_vsyscall(void)
+static void test_read_vsyscall_subtest(const char *prog_name,
+ const struct read_ret_desc *descs,
+ unsigned int cnt)
{
struct read_vsyscall *skel;
+ struct bpf_program *prog;
unsigned int i;
int err;
@@ -37,10 +45,19 @@ void test_read_vsyscall(void)
test__skip();
return;
#endif
- skel = read_vsyscall__open_and_load();
- if (!ASSERT_OK_PTR(skel, "read_vsyscall open_load"))
+ skel = read_vsyscall__open();
+ if (!ASSERT_OK_PTR(skel, "read_vsyscall open"))
return;
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "read_vsyscall find program"))
+ goto out;
+ bpf_program__set_autoload(prog, true);
+
+ err = read_vsyscall__load(skel);
+ if (!ASSERT_EQ(err, 0, "read_vsyscall load"))
+ goto out;
+
skel->bss->target_pid = getpid();
err = read_vsyscall__attach(skel);
if (!ASSERT_EQ(err, 0, "read_vsyscall attach"))
@@ -52,8 +69,20 @@ void test_read_vsyscall(void)
skel->bss->user_ptr = (void *)VSYSCALL_ADDR;
usleep(1);
- for (i = 0; i < ARRAY_SIZE(all_read); i++)
- ASSERT_EQ(skel->bss->read_ret[i], all_read[i].ret, all_read[i].name);
+ for (i = 0; i < cnt; i++)
+ ASSERT_EQ(skel->bss->read_ret[i], descs[i].ret, descs[i].name);
out:
read_vsyscall__destroy(skel);
}
+
+void test_read_vsyscall(void)
+{
+ if (test__start_subtest("probe_read")) {
+ test_read_vsyscall_subtest("probe_read", all_probe_read,
+ ARRAY_SIZE(all_probe_read));
+ }
+ if (test__start_subtest("copy_from_user")) {
+ test_read_vsyscall_subtest("copy_from_user", all_copy_from_user,
+ ARRAY_SIZE(all_copy_from_user));
+ }
+}
diff --git a/tools/testing/selftests/bpf/progs/read_vsyscall.c b/tools/testing/selftests/bpf/progs/read_vsyscall.c
index 395591374d4f..45bcf923ec70 100644
--- a/tools/testing/selftests/bpf/progs/read_vsyscall.c
+++ b/tools/testing/selftests/bpf/progs/read_vsyscall.c
@@ -8,7 +8,7 @@
int target_pid = 0;
void *user_ptr = 0;
-int read_ret[10];
+int read_ret[6];
char _license[] SEC("license") = "GPL";
@@ -19,8 +19,8 @@ int bpf_copy_from_user_str(void *dst, u32, const void *, u64) __weak __ksym;
int bpf_copy_from_user_task_str(void *dst, u32, const void *,
struct task_struct *, u64) __weak __ksym;
-SEC("fentry/" SYS_PREFIX "sys_nanosleep")
-int do_probe_read(void *ctx)
+SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
+int probe_read(void *ctx)
{
char buf[8];
@@ -37,19 +37,19 @@ int do_probe_read(void *ctx)
return 0;
}
-SEC("fentry.s/" SYS_PREFIX "sys_nanosleep")
-int do_copy_from_user(void *ctx)
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int copy_from_user(void *ctx)
{
char buf[8];
if ((bpf_get_current_pid_tgid() >> 32) != target_pid)
return 0;
- read_ret[6] = bpf_copy_from_user(buf, sizeof(buf), user_ptr);
- read_ret[7] = bpf_copy_from_user_task(buf, sizeof(buf), user_ptr,
+ read_ret[0] = bpf_copy_from_user(buf, sizeof(buf), user_ptr);
+ read_ret[1] = bpf_copy_from_user_task(buf, sizeof(buf), user_ptr,
bpf_get_current_task_btf(), 0);
- read_ret[8] = bpf_copy_from_user_str((char *)buf, sizeof(buf), user_ptr, 0);
- read_ret[9] = bpf_copy_from_user_task_str((char *)buf,
+ read_ret[2] = bpf_copy_from_user_str((char *)buf, sizeof(buf), user_ptr, 0);
+ read_ret[3] = bpf_copy_from_user_task_str((char *)buf,
sizeof(buf),
user_ptr,
bpf_get_current_task_btf(),
--
2.53.0
* [PATCH bpf-next v2 3/3] selftests/bpf: Split sleepable fentry from LSM test
2026-02-10 14:14 [PATCH bpf-next v2 0/3] Separate tests that need error injection Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 2/3] selftests/bpf: Split read_vsyscall " Viktor Malik
@ 2026-02-10 14:14 ` Viktor Malik
2 siblings, 0 replies; 7+ messages in thread
From: Viktor Malik @ 2026-02-10 14:14 UTC (permalink / raw)
To: bpf
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Viktor Malik, Qais Yousef, Hou Tao
The test_lsm/lsm_basic test contains a test attaching to a sleepable
fentry, which doesn't logically fit into the LSM test suite. In
addition, it makes the entire test fail on kernels which do not support
sleepable fentry programs.
Factor out the sleepable fentry part into a new test fentry_sleepable.
Signed-off-by: Viktor Malik <vmalik@redhat.com>
---
.../bpf/prog_tests/fentry_sleepable.c | 28 +++++++++++++++++++
.../selftests/bpf/prog_tests/test_lsm.c | 8 ------
.../selftests/bpf/progs/fentry_sleepable.c | 28 +++++++++++++++++++
tools/testing/selftests/bpf/progs/lsm.c | 21 --------------
4 files changed, 56 insertions(+), 29 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_sleepable.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_sleepable.c
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_sleepable.c b/tools/testing/selftests/bpf/prog_tests/fentry_sleepable.c
new file mode 100644
index 000000000000..efe3ea616575
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_sleepable.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <unistd.h>
+#include "fentry_sleepable.skel.h"
+
+void test_fentry_sleepable(void)
+{
+ struct fentry_sleepable *skel;
+ int buf = 1234;
+ int err;
+
+ skel = fentry_sleepable__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_sleepable__open_and_load"))
+ return;
+
+ err = fentry_sleepable__attach(skel);
+ if (!ASSERT_OK(err, "fentry_sleepable__attach"))
+ goto cleanup;
+
+ syscall(__NR_setdomainname, &buf, -2L);
+ syscall(__NR_setdomainname, 0, -3L);
+ syscall(__NR_setdomainname, ~0L, -4L);
+
+ ASSERT_EQ(skel->bss->copy_test, 3, "copy_test");
+
+cleanup:
+ fentry_sleepable__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
index bdc4fc06bc5a..67c7c437574a 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
@@ -55,7 +55,6 @@ int exec_cmd(int *monitored_pid)
static int test_lsm(struct lsm *skel)
{
struct bpf_link *link;
- int buf = 1234;
int err;
err = lsm__attach(skel);
@@ -82,15 +81,8 @@ static int test_lsm(struct lsm *skel)
ASSERT_EQ(skel->bss->mprotect_count, 1, "mprotect_count");
- syscall(__NR_setdomainname, &buf, -2L);
- syscall(__NR_setdomainname, 0, -3L);
- syscall(__NR_setdomainname, ~0L, -4L);
-
- ASSERT_EQ(skel->bss->copy_test, 3, "copy_test");
-
lsm__detach(skel);
- skel->bss->copy_test = 0;
skel->bss->bprm_count = 0;
skel->bss->mprotect_count = 0;
return 0;
diff --git a/tools/testing/selftests/bpf/progs/fentry_sleepable.c b/tools/testing/selftests/bpf/progs/fentry_sleepable.c
new file mode 100644
index 000000000000..621548b97564
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_sleepable.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int copy_test = 0;
+
+SEC("fentry.s/" SYS_PREFIX "sys_setdomainname")
+int BPF_PROG(test_sys_setdomainname, struct pt_regs *regs)
+{
+ void *ptr = (void *)PT_REGS_PARM1_SYSCALL(regs);
+ int len = PT_REGS_PARM2_SYSCALL(regs);
+ int buf = 0;
+ long ret;
+
+ ret = bpf_copy_from_user(&buf, sizeof(buf), ptr);
+ if (len == -2 && ret == 0 && buf == 1234)
+ copy_test++;
+ if (len == -3 && ret == -EFAULT)
+ copy_test++;
+ if (len == -4 && ret == -EFAULT)
+ copy_test++;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm.c b/tools/testing/selftests/bpf/progs/lsm.c
index 7de173daf27b..dd5d41903a26 100644
--- a/tools/testing/selftests/bpf/progs/lsm.c
+++ b/tools/testing/selftests/bpf/progs/lsm.c
@@ -9,7 +9,6 @@
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
-#include "bpf_misc.h"
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
@@ -161,23 +160,3 @@ int BPF_PROG(test_task_free, struct task_struct *task)
{
return 0;
}
-
-int copy_test = 0;
-
-SEC("fentry.s/" SYS_PREFIX "sys_setdomainname")
-int BPF_PROG(test_sys_setdomainname, struct pt_regs *regs)
-{
- void *ptr = (void *)PT_REGS_PARM1_SYSCALL(regs);
- int len = PT_REGS_PARM2_SYSCALL(regs);
- int buf = 0;
- long ret;
-
- ret = bpf_copy_from_user(&buf, sizeof(buf), ptr);
- if (len == -2 && ret == 0 && buf == 1234)
- copy_test++;
- if (len == -3 && ret == -EFAULT)
- copy_test++;
- if (len == -4 && ret == -EFAULT)
- copy_test++;
- return 0;
-}
--
2.53.0
* Re: [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
@ 2026-02-10 14:47 ` bot+bpf-ci
2026-02-10 16:01 ` Mykyta Yatsenko
1 sibling, 0 replies; 7+ messages in thread
From: bot+bpf-ci @ 2026-02-10 14:47 UTC (permalink / raw)
To: vmalik, bpf
Cc: andrii, eddyz87, ast, daniel, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, linux, vmalik,
qyousef, houtao1, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/tools/testing/selftests/bpf/prog_tests/module_attach.c b/tools/testing/selftests/bpf/prog_tests/module_attach.c
> index 70fa7ae93173..8668c26c202f 100644
> --- a/tools/testing/selftests/bpf/prog_tests/module_attach.c
> +++ b/tools/testing/selftests/bpf/prog_tests/module_attach.c
[ ... ]
> +static void test_module_attach_prog(const char *prog_name, int sz,
> + const char *attach_target, int ret)
> +{
> + struct test_module_attach *skel;
> + struct bpf_program *prog;
> + int err;
> +
> + skel = test_module_attach__open();
> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
> + return;
> +
> + prog = bpf_object__find_program_by_name(skel->obj, prog_name);
> + if (!ASSERT_OK_PTR(prog, "find program"))
> + goto cleanup;
> + bpf_program__set_autoload(prog, true);
> +
> + if (attach_target) {
> + err = bpf_program__set_attach_target(prog, 0, attach_target);
> + ASSERT_OK(err, attach_target);
> + }
> +
> + err = test_module_attach__load(skel);
> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
> + return;
Shouldn't this be "goto cleanup" instead of "return"? The skeleton was
successfully opened and will leak if load fails.
[ ... ]
> +static void test_module_attach_writable(void)
> +{
> + struct test_module_attach__bss *bss;
> + struct test_module_attach *skel;
> + struct bpf_program *prog;
> + int writable_val = 0;
> + int err;
> +
> + skel = test_module_attach__open();
> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
> + return;
> +
> + prog = bpf_object__find_program_by_name(skel->obj,
> + "handle_raw_tp_writable_bare");
> + if (!ASSERT_OK_PTR(prog, "find program"))
> + goto cleanup;
> + bpf_program__set_autoload(prog, true);
> +
> + err = test_module_attach__load(skel);
> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
> + return;
Same issue here - should be "goto cleanup" to avoid leaking the skeleton.
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21868797729
AI-authorship-score: low
AI-authorship-explanation: The code follows standard BPF selftest patterns with no signs of AI generation; the inconsistent error handling suggests human oversight rather than AI template output.
issues-found: 2
issue-severity-score: low
issue-severity-explanation: Skeleton memory leak in test code on error paths; affects only test cleanup, not production kernel code.
* Re: [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
2026-02-10 14:47 ` bot+bpf-ci
@ 2026-02-10 16:01 ` Mykyta Yatsenko
2026-02-10 20:45 ` Viktor Malik
1 sibling, 1 reply; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-02-10 16:01 UTC (permalink / raw)
To: Viktor Malik, bpf
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Qais Yousef, Hou Tao
On 2/10/26 14:14, Viktor Malik wrote:
> The test verifies attachment to various hooks in a kernel module,
> however, everything is flattened into a single test. When running the
> test on a kernel which doesn't support some of the hooks, it is
> impossible to skip them selectively.
>
> Isolate each BPF program into a separate subtest. This is done by
> disabling auto-loading of programs and loading and testing each program
> separately.
>
> Signed-off-by: Viktor Malik <vmalik@redhat.com>
> ---
> .../selftests/bpf/prog_tests/module_attach.c | 168 +++++++++++++-----
> .../selftests/bpf/progs/test_module_attach.c | 63 +++----
> 2 files changed, 149 insertions(+), 82 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/module_attach.c b/tools/testing/selftests/bpf/prog_tests/module_attach.c
> index 70fa7ae93173..8668c26c202f 100644
> --- a/tools/testing/selftests/bpf/prog_tests/module_attach.c
> +++ b/tools/testing/selftests/bpf/prog_tests/module_attach.c
> @@ -6,6 +6,23 @@
> #include "test_module_attach.skel.h"
> #include "testing_helpers.h"
>
> +static const char * const read_tests[] = {
> + "handle_raw_tp",
> + "handle_tp_btf",
> + "handle_fentry",
> + "handle_fentry_explicit",
> + "handle_fmod_ret",
> +};
> +
> +static const char * const detach_tests[] = {
> + "handle_fentry",
> + "handle_fexit",
> + "kprobe_multi",
> +};
> +
> +static const int READ_SZ = 456;
> +static const int WRITE_SZ = 457;
> +
> static int duration;
>
> static int trigger_module_test_writable(int *val)
> @@ -33,27 +50,66 @@ static int trigger_module_test_writable(int *val)
> return 0;
> }
>
> -void test_module_attach(void)
> +static void test_module_attach_prog(const char *prog_name, int sz,
> + const char *attach_target, int ret)
> {
> - const int READ_SZ = 456;
> - const int WRITE_SZ = 457;
> - struct test_module_attach* skel;
> - struct test_module_attach__bss *bss;
> - struct bpf_link *link;
> + struct test_module_attach *skel;
> + struct bpf_program *prog;
> int err;
> - int writable_val = 0;
>
> skel = test_module_attach__open();
> if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
> return;
>
> - err = bpf_program__set_attach_target(skel->progs.handle_fentry_manual,
> - 0, "bpf_testmod_test_read");
> - ASSERT_OK(err, "set_attach_target");
> + prog = bpf_object__find_program_by_name(skel->obj, prog_name);
> + if (!ASSERT_OK_PTR(prog, "find program"))
> + goto cleanup;
> + bpf_program__set_autoload(prog, true);
>
> - err = bpf_program__set_attach_target(skel->progs.handle_fentry_explicit_manual,
> - 0, "bpf_testmod:bpf_testmod_test_read");
> - ASSERT_OK(err, "set_attach_target_explicit");
> + if (attach_target) {
> + err = bpf_program__set_attach_target(prog, 0, attach_target);
> + ASSERT_OK(err, attach_target);
Does it make sense to goto cleanup on error here?
> + }
> +
> + err = test_module_attach__load(skel);
> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
Is there any reason to use CHECK instead of ASSERT_OK (more common)?
> + return;
> +
> + err = test_module_attach__attach(skel);
> + if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
> + goto cleanup;
> +
> + if (sz) {
> + /* trigger both read and write though each test uses only one */
> + ASSERT_OK(trigger_module_test_read(sz), "trigger_read");
> + ASSERT_OK(trigger_module_test_write(sz), "trigger_write");
> +
> + ASSERT_EQ(skel->bss->sz, sz, prog_name);
> + }
> +
> + if (ret)
> + ASSERT_EQ(skel->bss->retval, ret, "ret");
> +cleanup:
> + test_module_attach__destroy(skel);
> +}
> +
> +static void test_module_attach_writable(void)
> +{
> + struct test_module_attach__bss *bss;
> + struct test_module_attach *skel;
> + struct bpf_program *prog;
> + int writable_val = 0;
> + int err;
> +
> + skel = test_module_attach__open();
> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
> + return;
> +
> + prog = bpf_object__find_program_by_name(skel->obj,
> + "handle_raw_tp_writable_bare");
Can we use skel->progs.handle_raw_tp_writable_bare instead of
find_program_by_name?
> + if (!ASSERT_OK_PTR(prog, "find program"))
> + goto cleanup;
> + bpf_program__set_autoload(prog, true);
>
> err = test_module_attach__load(skel);
> if (CHECK(err, "skel_load", "failed to load skeleton\n"))
> @@ -65,21 +121,6 @@ void test_module_attach(void)
> if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
> goto cleanup;
>
> - /* trigger tracepoint */
> - ASSERT_OK(trigger_module_test_read(READ_SZ), "trigger_read");
> - ASSERT_OK(trigger_module_test_write(WRITE_SZ), "trigger_write");
> -
> - ASSERT_EQ(bss->raw_tp_read_sz, READ_SZ, "raw_tp");
> - ASSERT_EQ(bss->raw_tp_bare_write_sz, WRITE_SZ, "raw_tp_bare");
> - ASSERT_EQ(bss->tp_btf_read_sz, READ_SZ, "tp_btf");
> - ASSERT_EQ(bss->fentry_read_sz, READ_SZ, "fentry");
> - ASSERT_EQ(bss->fentry_manual_read_sz, READ_SZ, "fentry_manual");
> - ASSERT_EQ(bss->fentry_explicit_read_sz, READ_SZ, "fentry_explicit");
> - ASSERT_EQ(bss->fentry_explicit_manual_read_sz, READ_SZ, "fentry_explicit_manual");
> - ASSERT_EQ(bss->fexit_read_sz, READ_SZ, "fexit");
> - ASSERT_EQ(bss->fexit_ret, -EIO, "fexit_tet");
> - ASSERT_EQ(bss->fmod_ret_read_sz, READ_SZ, "fmod_ret");
> -
> bss->raw_tp_writable_bare_early_ret = true;
> bss->raw_tp_writable_bare_out_val = 0xf1f2f3f4;
> ASSERT_OK(trigger_module_test_writable(&writable_val),
> @@ -87,31 +128,72 @@ void test_module_attach(void)
> ASSERT_EQ(bss->raw_tp_writable_bare_in_val, 1024, "writable_test_in");
> ASSERT_EQ(bss->raw_tp_writable_bare_out_val, writable_val,
> "writable_test_out");
> +cleanup:
> + test_module_attach__destroy(skel);
> +}
>
> - test_module_attach__detach(skel);
> -
> - /* attach fentry/fexit and make sure it gets module reference */
> - link = bpf_program__attach(skel->progs.handle_fentry);
> - if (!ASSERT_OK_PTR(link, "attach_fentry"))
> - goto cleanup;
> +static void test_module_attach_detach(const char *prog_name)
> +{
> + struct test_module_attach *skel;
> + struct bpf_program *prog;
> + struct bpf_link *link;
> + int err;
>
> - ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
> - bpf_link__destroy(link);
> + skel = test_module_attach__open();
> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
> + return;
>
> - link = bpf_program__attach(skel->progs.handle_fexit);
> - if (!ASSERT_OK_PTR(link, "attach_fexit"))
> + prog = bpf_object__find_program_by_name(skel->obj, prog_name);
> + if (!ASSERT_OK_PTR(prog, "find program"))
> goto cleanup;
> + bpf_program__set_autoload(prog, true);
>
> - ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
> - bpf_link__destroy(link);
> + err = test_module_attach__load(skel);
> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
> + goto cleanup;
>
> - link = bpf_program__attach(skel->progs.kprobe_multi);
> - if (!ASSERT_OK_PTR(link, "attach_kprobe_multi"))
> + /* attach and make sure it gets module reference */
> + link = bpf_program__attach(prog);
> + if (!ASSERT_OK_PTR(link, "attach"))
> goto cleanup;
>
> ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
> bpf_link__destroy(link);
> -
> cleanup:
> test_module_attach__destroy(skel);
> }
> +
> +void test_module_attach(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(read_tests); i++) {
> + if (!test__start_subtest(read_tests[i]))
> + continue;
> + test_module_attach_prog(read_tests[i], READ_SZ, NULL, 0);
> + }
> + if (test__start_subtest("handle_raw_tp_bare")) {
> + test_module_attach_prog("handle_raw_tp_bare", WRITE_SZ, NULL,
> + 0);
> + }
> + if (test__start_subtest("handle_raw_tp_writable_bare"))
> + test_module_attach_writable();
> + if (test__start_subtest("handle_fentry_manual")) {
> + test_module_attach_prog("handle_fentry_manual", READ_SZ,
> + "bpf_testmod_test_read", 0);
> + }
> + if (test__start_subtest("handle_fentry_explicit_manual")) {
> + test_module_attach_prog("handle_fentry_explicit_manual",
> + READ_SZ,
> + "bpf_testmod:bpf_testmod_test_read", 0);
> + }
> + if (test__start_subtest("handle_fexit"))
> + test_module_attach_prog("handle_fexit", READ_SZ, NULL, -EIO);
> + if (test__start_subtest("handle_fexit_ret"))
> + test_module_attach_prog("handle_fexit_ret", 0, NULL, 0);
> + for (i = 0; i < ARRAY_SIZE(detach_tests); i++) {
> + if (!test__start_subtest(detach_tests[i]))
> + continue;
> + test_module_attach_detach(detach_tests[i]);
> + }
> +}
> diff --git a/tools/testing/selftests/bpf/progs/test_module_attach.c b/tools/testing/selftests/bpf/progs/test_module_attach.c
> index 03d7f89787a1..5609e388fb58 100644
> --- a/tools/testing/selftests/bpf/progs/test_module_attach.c
> +++ b/tools/testing/selftests/bpf/progs/test_module_attach.c
> @@ -7,23 +7,21 @@
> #include <bpf/bpf_core_read.h>
> #include "../test_kmods/bpf_testmod.h"
>
> -__u32 raw_tp_read_sz = 0;
> +__u32 sz = 0;
>
> -SEC("raw_tp/bpf_testmod_test_read")
> +SEC("?raw_tp/bpf_testmod_test_read")
> int BPF_PROG(handle_raw_tp,
> struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
> {
> - raw_tp_read_sz = BPF_CORE_READ(read_ctx, len);
> + sz = BPF_CORE_READ(read_ctx, len);
> return 0;
> }
>
> -__u32 raw_tp_bare_write_sz = 0;
> -
> -SEC("raw_tp/bpf_testmod_test_write_bare_tp")
> +SEC("?raw_tp/bpf_testmod_test_write_bare_tp")
> int BPF_PROG(handle_raw_tp_bare,
> struct task_struct *task, struct bpf_testmod_test_write_ctx *write_ctx)
> {
> - raw_tp_bare_write_sz = BPF_CORE_READ(write_ctx, len);
> + sz = BPF_CORE_READ(write_ctx, len);
> return 0;
> }
>
> @@ -31,7 +29,7 @@ int raw_tp_writable_bare_in_val = 0;
> int raw_tp_writable_bare_early_ret = 0;
> int raw_tp_writable_bare_out_val = 0;
>
> -SEC("raw_tp.w/bpf_testmod_test_writable_bare_tp")
> +SEC("?raw_tp.w/bpf_testmod_test_writable_bare_tp")
> int BPF_PROG(handle_raw_tp_writable_bare,
> struct bpf_testmod_test_writable_ctx *writable)
> {
> @@ -41,76 +39,65 @@ int BPF_PROG(handle_raw_tp_writable_bare,
> return 0;
> }
>
> -__u32 tp_btf_read_sz = 0;
> -
> -SEC("tp_btf/bpf_testmod_test_read")
> +SEC("?tp_btf/bpf_testmod_test_read")
> int BPF_PROG(handle_tp_btf,
> struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
> {
> - tp_btf_read_sz = read_ctx->len;
> + sz = read_ctx->len;
> return 0;
> }
>
> -__u32 fentry_read_sz = 0;
> -
> -SEC("fentry/bpf_testmod_test_read")
> +SEC("?fentry/bpf_testmod_test_read")
> int BPF_PROG(handle_fentry,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
> {
> - fentry_read_sz = len;
> + sz = len;
> return 0;
> }
>
> -__u32 fentry_manual_read_sz = 0;
> -
> -SEC("fentry")
> +SEC("?fentry")
> int BPF_PROG(handle_fentry_manual,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
> {
> - fentry_manual_read_sz = len;
> + sz = len;
> return 0;
> }
>
> -__u32 fentry_explicit_read_sz = 0;
> -
> -SEC("fentry/bpf_testmod:bpf_testmod_test_read")
> +SEC("?fentry/bpf_testmod:bpf_testmod_test_read")
> int BPF_PROG(handle_fentry_explicit,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
> {
> - fentry_explicit_read_sz = len;
> + sz = len;
> return 0;
> }
>
>
> -__u32 fentry_explicit_manual_read_sz = 0;
> -
> -SEC("fentry")
> +SEC("?fentry")
> int BPF_PROG(handle_fentry_explicit_manual,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
> {
> - fentry_explicit_manual_read_sz = len;
> + sz = len;
> return 0;
> }
>
> -__u32 fexit_read_sz = 0;
> -int fexit_ret = 0;
> +int retval = 0;
>
> -SEC("fexit/bpf_testmod_test_read")
> +SEC("?fexit/bpf_testmod_test_read")
> int BPF_PROG(handle_fexit,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len,
> int ret)
> {
> - fexit_read_sz = len;
> - fexit_ret = ret;
> + sz = len;
> + retval = ret;
> return 0;
> }
>
> -SEC("fexit/bpf_testmod_return_ptr")
> +SEC("?fexit/bpf_testmod_return_ptr")
> int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
> {
> long buf = 0;
> @@ -122,18 +109,16 @@ int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
> return 0;
> }
>
> -__u32 fmod_ret_read_sz = 0;
> -
> -SEC("fmod_ret/bpf_testmod_test_read")
> +SEC("?fmod_ret/bpf_testmod_test_read")
> int BPF_PROG(handle_fmod_ret,
> struct file *file, struct kobject *kobj,
> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
> {
> - fmod_ret_read_sz = len;
> + sz = len;
> return 0; /* don't override the exit code */
> }
>
> -SEC("kprobe.multi/bpf_testmod_test_read")
> +SEC("?kprobe.multi/bpf_testmod_test_read")
> int BPF_PROG(kprobe_multi)
> {
> return 0;
* Re: [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests
2026-02-10 16:01 ` Mykyta Yatsenko
@ 2026-02-10 20:45 ` Viktor Malik
0 siblings, 0 replies; 7+ messages in thread
From: Viktor Malik @ 2026-02-10 20:45 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Andrii Nakryiko, Eduard Zingerman, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Jordan Rome, Qais Yousef, Hou Tao, bpf
On 2/10/26 17:01, Mykyta Yatsenko wrote:
> On 2/10/26 14:14, Viktor Malik wrote:
>> The test verifies attachment to various hooks in a kernel module,
>> however, everything is flattened into a single test. When running the
>> test on a kernel which doesn't support some of the hooks, it is
>> impossible to skip them selectively.
>>
>> Isolate each BPF program into a separate subtest. This is done by
>> disabling auto-loading of programs and loading and testing each program
>> separately.
>>
>> Signed-off-by: Viktor Malik <vmalik@redhat.com>
>> ---
>> .../selftests/bpf/prog_tests/module_attach.c | 168 +++++++++++++-----
>> .../selftests/bpf/progs/test_module_attach.c | 63 +++----
>> 2 files changed, 149 insertions(+), 82 deletions(-)
>>
>> diff --git a/tools/testing/selftests/bpf/prog_tests/module_attach.c b/tools/testing/selftests/bpf/prog_tests/module_attach.c
>> index 70fa7ae93173..8668c26c202f 100644
>> --- a/tools/testing/selftests/bpf/prog_tests/module_attach.c
>> +++ b/tools/testing/selftests/bpf/prog_tests/module_attach.c
>> @@ -6,6 +6,23 @@
>> #include "test_module_attach.skel.h"
>> #include "testing_helpers.h"
>>
>> +static const char * const read_tests[] = {
>> + "handle_raw_tp",
>> + "handle_tp_btf",
>> + "handle_fentry",
>> + "handle_fentry_explicit",
>> + "handle_fmod_ret",
>> +};
>> +
>> +static const char * const detach_tests[] = {
>> + "handle_fentry",
>> + "handle_fexit",
>> + "kprobe_multi",
>> +};
>> +
>> +static const int READ_SZ = 456;
>> +static const int WRITE_SZ = 457;
>> +
>> static int duration;
>>
>> static int trigger_module_test_writable(int *val)
>> @@ -33,27 +50,66 @@ static int trigger_module_test_writable(int *val)
>> return 0;
>> }
>>
>> -void test_module_attach(void)
>> +static void test_module_attach_prog(const char *prog_name, int sz,
>> + const char *attach_target, int ret)
>> {
>> - const int READ_SZ = 456;
>> - const int WRITE_SZ = 457;
>> - struct test_module_attach* skel;
>> - struct test_module_attach__bss *bss;
>> - struct bpf_link *link;
>> + struct test_module_attach *skel;
>> + struct bpf_program *prog;
>> int err;
>> - int writable_val = 0;
>>
>> skel = test_module_attach__open();
>> if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
>> return;
>>
>> - err = bpf_program__set_attach_target(skel->progs.handle_fentry_manual,
>> - 0, "bpf_testmod_test_read");
>> - ASSERT_OK(err, "set_attach_target");
>> + prog = bpf_object__find_program_by_name(skel->obj, prog_name);
>> + if (!ASSERT_OK_PTR(prog, "find program"))
>> + goto cleanup;
>> + bpf_program__set_autoload(prog, true);
>>
>> - err = bpf_program__set_attach_target(skel->progs.handle_fentry_explicit_manual,
>> - 0, "bpf_testmod:bpf_testmod_test_read");
>> - ASSERT_OK(err, "set_attach_target_explicit");
>> + if (attach_target) {
>> + err = bpf_program__set_attach_target(prog, 0, attach_target);
>> + ASSERT_OK(err, attach_target);
> Does it make sense to goto cleanup on error here?
Yeah, good catch, it doesn't make sense to continue with loading.
>> + }
>> +
>> + err = test_module_attach__load(skel);
>> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
> Is there any reason to use CHECK instead of ASSERT_OK (more common)?
No, I just used what was in the old code and didn't realize that CHECK
is now legacy. I'll replace it with ASSERT everywhere.
>> + return;
>> +
>> + err = test_module_attach__attach(skel);
>> + if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
>> + goto cleanup;
>> +
>> + if (sz) {
>> + /* trigger both read and write though each test uses only one */
>> + ASSERT_OK(trigger_module_test_read(sz), "trigger_read");
>> + ASSERT_OK(trigger_module_test_write(sz), "trigger_write");
>> +
>> + ASSERT_EQ(skel->bss->sz, sz, prog_name);
>> + }
>> +
>> + if (ret)
>> + ASSERT_EQ(skel->bss->retval, ret, "ret");
>> +cleanup:
>> + test_module_attach__destroy(skel);
>> +}
>> +
>> +static void test_module_attach_writable(void)
>> +{
>> + struct test_module_attach__bss *bss;
>> + struct test_module_attach *skel;
>> + struct bpf_program *prog;
>> + int writable_val = 0;
>> + int err;
>> +
>> + skel = test_module_attach__open();
>> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
>> + return;
>> +
>> + prog = bpf_object__find_program_by_name(skel->obj,
>> + "handle_raw_tp_writable_bare");
> Can we use skel->progs.handle_raw_tp_writable_bare instead of
> find_program_by_name?
Of course we can, good catch!
Thanks!
Viktor
>> + if (!ASSERT_OK_PTR(prog, "find program"))
>> + goto cleanup;
>> + bpf_program__set_autoload(prog, true);
>>
>> err = test_module_attach__load(skel);
>> if (CHECK(err, "skel_load", "failed to load skeleton\n"))
>> @@ -65,21 +121,6 @@ void test_module_attach(void)
>> if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
>> goto cleanup;
>>
>> - /* trigger tracepoint */
>> - ASSERT_OK(trigger_module_test_read(READ_SZ), "trigger_read");
>> - ASSERT_OK(trigger_module_test_write(WRITE_SZ), "trigger_write");
>> -
>> - ASSERT_EQ(bss->raw_tp_read_sz, READ_SZ, "raw_tp");
>> - ASSERT_EQ(bss->raw_tp_bare_write_sz, WRITE_SZ, "raw_tp_bare");
>> - ASSERT_EQ(bss->tp_btf_read_sz, READ_SZ, "tp_btf");
>> - ASSERT_EQ(bss->fentry_read_sz, READ_SZ, "fentry");
>> - ASSERT_EQ(bss->fentry_manual_read_sz, READ_SZ, "fentry_manual");
>> - ASSERT_EQ(bss->fentry_explicit_read_sz, READ_SZ, "fentry_explicit");
>> - ASSERT_EQ(bss->fentry_explicit_manual_read_sz, READ_SZ, "fentry_explicit_manual");
>> - ASSERT_EQ(bss->fexit_read_sz, READ_SZ, "fexit");
>> - ASSERT_EQ(bss->fexit_ret, -EIO, "fexit_tet");
>> - ASSERT_EQ(bss->fmod_ret_read_sz, READ_SZ, "fmod_ret");
>> -
>> bss->raw_tp_writable_bare_early_ret = true;
>> bss->raw_tp_writable_bare_out_val = 0xf1f2f3f4;
>> ASSERT_OK(trigger_module_test_writable(&writable_val),
>> @@ -87,31 +128,72 @@ void test_module_attach(void)
>> ASSERT_EQ(bss->raw_tp_writable_bare_in_val, 1024, "writable_test_in");
>> ASSERT_EQ(bss->raw_tp_writable_bare_out_val, writable_val,
>> "writable_test_out");
>> +cleanup:
>> + test_module_attach__destroy(skel);
>> +}
>>
>> - test_module_attach__detach(skel);
>> -
>> - /* attach fentry/fexit and make sure it gets module reference */
>> - link = bpf_program__attach(skel->progs.handle_fentry);
>> - if (!ASSERT_OK_PTR(link, "attach_fentry"))
>> - goto cleanup;
>> +static void test_module_attach_detach(const char *prog_name)
>> +{
>> + struct test_module_attach *skel;
>> + struct bpf_program *prog;
>> + struct bpf_link *link;
>> + int err;
>>
>> - ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
>> - bpf_link__destroy(link);
>> + skel = test_module_attach__open();
>> + if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
>> + return;
>>
>> - link = bpf_program__attach(skel->progs.handle_fexit);
>> - if (!ASSERT_OK_PTR(link, "attach_fexit"))
>> + prog = bpf_object__find_program_by_name(skel->obj, prog_name);
>> + if (!ASSERT_OK_PTR(prog, "find program"))
>> goto cleanup;
>> + bpf_program__set_autoload(prog, true);
>>
>> - ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
>> - bpf_link__destroy(link);
>> + err = test_module_attach__load(skel);
>> + if (CHECK(err, "skel_load", "failed to load skeleton\n"))
>> + goto cleanup;
>>
>> - link = bpf_program__attach(skel->progs.kprobe_multi);
>> - if (!ASSERT_OK_PTR(link, "attach_kprobe_multi"))
>> + /* attach and make sure it gets module reference */
>> + link = bpf_program__attach(prog);
>> + if (!ASSERT_OK_PTR(link, "attach"))
>> goto cleanup;
>>
>> ASSERT_ERR(unload_bpf_testmod(false), "unload_bpf_testmod");
>> bpf_link__destroy(link);
>> -
>> cleanup:
>> test_module_attach__destroy(skel);
>> }
>> +
>> +void test_module_attach(void)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(read_tests); i++) {
>> + if (!test__start_subtest(read_tests[i]))
>> + continue;
>> + test_module_attach_prog(read_tests[i], READ_SZ, NULL, 0);
>> + }
>> + if (test__start_subtest("handle_raw_tp_bare")) {
>> + test_module_attach_prog("handle_raw_tp_bare", WRITE_SZ, NULL,
>> + 0);
>> + }
>> + if (test__start_subtest("handle_raw_tp_writable_bare"))
>> + test_module_attach_writable();
>> + if (test__start_subtest("handle_fentry_manual")) {
>> + test_module_attach_prog("handle_fentry_manual", READ_SZ,
>> + "bpf_testmod_test_read", 0);
>> + }
>> + if (test__start_subtest("handle_fentry_explicit_manual")) {
>> + test_module_attach_prog("handle_fentry_explicit_manual",
>> + READ_SZ,
>> + "bpf_testmod:bpf_testmod_test_read", 0);
>> + }
>> + if (test__start_subtest("handle_fexit"))
>> + test_module_attach_prog("handle_fexit", READ_SZ, NULL, -EIO);
>> + if (test__start_subtest("handle_fexit_ret"))
>> + test_module_attach_prog("handle_fexit_ret", 0, NULL, 0);
>> + for (i = 0; i < ARRAY_SIZE(detach_tests); i++) {
>> + if (!test__start_subtest(detach_tests[i]))
>> + continue;
>> + test_module_attach_detach(detach_tests[i]);
>> + }
>> +}
>> diff --git a/tools/testing/selftests/bpf/progs/test_module_attach.c b/tools/testing/selftests/bpf/progs/test_module_attach.c
>> index 03d7f89787a1..5609e388fb58 100644
>> --- a/tools/testing/selftests/bpf/progs/test_module_attach.c
>> +++ b/tools/testing/selftests/bpf/progs/test_module_attach.c
>> @@ -7,23 +7,21 @@
>> #include <bpf/bpf_core_read.h>
>> #include "../test_kmods/bpf_testmod.h"
>>
>> -__u32 raw_tp_read_sz = 0;
>> +__u32 sz = 0;
>>
>> -SEC("raw_tp/bpf_testmod_test_read")
>> +SEC("?raw_tp/bpf_testmod_test_read")
>> int BPF_PROG(handle_raw_tp,
>> struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
>> {
>> - raw_tp_read_sz = BPF_CORE_READ(read_ctx, len);
>> + sz = BPF_CORE_READ(read_ctx, len);
>> return 0;
>> }
>>
>> -__u32 raw_tp_bare_write_sz = 0;
>> -
>> -SEC("raw_tp/bpf_testmod_test_write_bare_tp")
>> +SEC("?raw_tp/bpf_testmod_test_write_bare_tp")
>> int BPF_PROG(handle_raw_tp_bare,
>> struct task_struct *task, struct bpf_testmod_test_write_ctx *write_ctx)
>> {
>> - raw_tp_bare_write_sz = BPF_CORE_READ(write_ctx, len);
>> + sz = BPF_CORE_READ(write_ctx, len);
>> return 0;
>> }
>>
>> @@ -31,7 +29,7 @@ int raw_tp_writable_bare_in_val = 0;
>> int raw_tp_writable_bare_early_ret = 0;
>> int raw_tp_writable_bare_out_val = 0;
>>
>> -SEC("raw_tp.w/bpf_testmod_test_writable_bare_tp")
>> +SEC("?raw_tp.w/bpf_testmod_test_writable_bare_tp")
>> int BPF_PROG(handle_raw_tp_writable_bare,
>> struct bpf_testmod_test_writable_ctx *writable)
>> {
>> @@ -41,76 +39,65 @@ int BPF_PROG(handle_raw_tp_writable_bare,
>> return 0;
>> }
>>
>> -__u32 tp_btf_read_sz = 0;
>> -
>> -SEC("tp_btf/bpf_testmod_test_read")
>> +SEC("?tp_btf/bpf_testmod_test_read")
>> int BPF_PROG(handle_tp_btf,
>> struct task_struct *task, struct bpf_testmod_test_read_ctx *read_ctx)
>> {
>> - tp_btf_read_sz = read_ctx->len;
>> + sz = read_ctx->len;
>> return 0;
>> }
>>
>> -__u32 fentry_read_sz = 0;
>> -
>> -SEC("fentry/bpf_testmod_test_read")
>> +SEC("?fentry/bpf_testmod_test_read")
>> int BPF_PROG(handle_fentry,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
>> {
>> - fentry_read_sz = len;
>> + sz = len;
>> return 0;
>> }
>>
>> -__u32 fentry_manual_read_sz = 0;
>> -
>> -SEC("fentry")
>> +SEC("?fentry")
>> int BPF_PROG(handle_fentry_manual,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
>> {
>> - fentry_manual_read_sz = len;
>> + sz = len;
>> return 0;
>> }
>>
>> -__u32 fentry_explicit_read_sz = 0;
>> -
>> -SEC("fentry/bpf_testmod:bpf_testmod_test_read")
>> +SEC("?fentry/bpf_testmod:bpf_testmod_test_read")
>> int BPF_PROG(handle_fentry_explicit,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
>> {
>> - fentry_explicit_read_sz = len;
>> + sz = len;
>> return 0;
>> }
>>
>>
>> -__u32 fentry_explicit_manual_read_sz = 0;
>> -
>> -SEC("fentry")
>> +SEC("?fentry")
>> int BPF_PROG(handle_fentry_explicit_manual,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
>> {
>> - fentry_explicit_manual_read_sz = len;
>> + sz = len;
>> return 0;
>> }
>>
>> -__u32 fexit_read_sz = 0;
>> -int fexit_ret = 0;
>> +int retval = 0;
>>
>> -SEC("fexit/bpf_testmod_test_read")
>> +SEC("?fexit/bpf_testmod_test_read")
>> int BPF_PROG(handle_fexit,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len,
>> int ret)
>> {
>> - fexit_read_sz = len;
>> - fexit_ret = ret;
>> + sz = len;
>> + retval = ret;
>> return 0;
>> }
>>
>> -SEC("fexit/bpf_testmod_return_ptr")
>> +SEC("?fexit/bpf_testmod_return_ptr")
>> int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
>> {
>> long buf = 0;
>> @@ -122,18 +109,16 @@ int BPF_PROG(handle_fexit_ret, int arg, struct file *ret)
>> return 0;
>> }
>>
>> -__u32 fmod_ret_read_sz = 0;
>> -
>> -SEC("fmod_ret/bpf_testmod_test_read")
>> +SEC("?fmod_ret/bpf_testmod_test_read")
>> int BPF_PROG(handle_fmod_ret,
>> struct file *file, struct kobject *kobj,
>> struct bin_attribute *bin_attr, char *buf, loff_t off, size_t len)
>> {
>> - fmod_ret_read_sz = len;
>> + sz = len;
>> return 0; /* don't override the exit code */
>> }
>>
>> -SEC("kprobe.multi/bpf_testmod_test_read")
>> +SEC("?kprobe.multi/bpf_testmod_test_read")
>> int BPF_PROG(kprobe_multi)
>> {
>> return 0;
>
end of thread, other threads:[~2026-02-10 20:45 UTC | newest]
Thread overview: 7+ messages
2026-02-10 14:14 [PATCH bpf-next v2 0/3] Separate tests that need error injection Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 1/3] selftests/bpf: Split module_attach into subtests Viktor Malik
2026-02-10 14:47 ` bot+bpf-ci
2026-02-10 16:01 ` Mykyta Yatsenko
2026-02-10 20:45 ` Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 2/3] selftests/bpf: Split read_vsyscall " Viktor Malik
2026-02-10 14:14 ` [PATCH bpf-next v2 3/3] selftests/bpf: Split sleepable fentry from LSM test Viktor Malik