netdev.vger.kernel.org archive mirror
* [PATCH bpf-next v1 0/3] Allow struct_ops to get bpf_map during
@ 2025-07-31 21:09 Amery Hung
  2025-07-31 21:09 ` [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata Amery Hung
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Amery Hung @ 2025-07-31 21:09 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, ameryhung, kernel-team

This patchset allows struct_ops implementors to access the bpf_map during
reg() and unreg() so that they can create a mapping from an id to a
struct_ops instance. This in turn allows struct_ops kfuncs to refer to an
instance without passing a struct_ops pointer around. The selftest
provides an end-to-end example.

Some struct_ops users extend themselves with other bpf programs, which
also need to call struct_ops kfuncs. For example, scx_layered uses
syscall bpf programs as a scx_layered-specific control plane and uses
tracing programs to gather additional information for scheduling [0].
These kfuncs may need to refer to a specific struct_ops instance and act
accordingly. To allow calling struct_ops kfuncs that refer to specific
instances from different program types and contexts (e.g., struct_ops,
tracing, async callbacks), the traditional way is to pass a struct_ops
pointer to the kfuncs.

This patchset provides an alternative, through a combination of a bpf
map id and a global variable. First, a struct_ops implementor uses the
map id of the struct_ops map as the id of an instance. Then, it
maintains an id-to-instance mapping: inserting a new entry during reg()
and removing it during unreg(). The map id can be acquired by calling
bpf_struct_ops_get(), which is tweaked to return a bpf_map pointer
instead of a bool.

[0] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c
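
A rough sketch of the pattern (illustrative names only: insert_id_mapping()
is a made-up helper, and the selftest is the real end-to-end example):

	/* kernel module side, in reg() */
	map = bpf_struct_ops_get(kdata);	/* now returns a bpf_map pointer */
	if (IS_ERR(map))
		return PTR_ERR(map);
	insert_id_mapping(map->id, kdata);	/* id -> instance */
	bpf_struct_ops_put(kdata);

	/* bpf program side: userspace fills st_ops_id with the map id
	 * before attaching, and a macro appends it to every kfunc call
	 */
	int st_ops_id;
	#define my_kfunc(args) my_kfunc(args, st_ops_id)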


Amery Hung (3):
  bpf: Allow getting bpf_map from struct_ops kdata
  selftests/bpf: Add multi_st_ops that supports multiple instances
  selftests/bpf: Test multi_st_ops and calling kfuncs from different
    programs

 include/linux/bpf.h                           |   4 +-
 kernel/bpf/bpf_struct_ops.c                   |   7 +-
 .../test_struct_ops_id_ops_mapping.c          |  77 ++++++++++++
 .../bpf/progs/struct_ops_id_ops_mapping1.c    |  57 +++++++++
 .../bpf/progs/struct_ops_id_ops_mapping2.c    |  57 +++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    | 112 ++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.h    |   8 ++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   2 +
 8 files changed, 319 insertions(+), 5 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c

-- 
2.47.3



* [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata
  2025-07-31 21:09 [PATCH bpf-next v1 0/3] Allow struct_ops to get bpf_map during Amery Hung
@ 2025-07-31 21:09 ` Amery Hung
  2025-08-01  9:31   ` kernel test robot
  2025-07-31 21:09 ` [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances Amery Hung
  2025-07-31 21:09 ` [PATCH bpf-next v1 3/3] selftests/bpf: Test multi_st_ops and calling kfuncs from different programs Amery Hung
  2 siblings, 1 reply; 7+ messages in thread
From: Amery Hung @ 2025-07-31 21:09 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, ameryhung, kernel-team

To allow struct_ops implementors to access the bpf_map during reg() and
unreg(), return the bpf_map from bpf_struct_ops_get() instead of a bool
and let callers check for errors. Additionally, export
bpf_struct_ops_get() and bpf_struct_ops_put() so that kernel modules can
use them.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 include/linux/bpf.h         | 4 ++--
 kernel/bpf/bpf_struct_ops.c | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f9cd2164ed23..5bc08a77df7c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1948,7 +1948,7 @@ struct bpf_struct_ops_common_value {
 		__register_bpf_struct_ops(st_ops);			\
 	})
 #define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
-bool bpf_struct_ops_get(const void *kdata);
+struct bpf_map *bpf_struct_ops_get(const void *kdata);
 void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
@@ -1963,7 +1963,7 @@ void bpf_struct_ops_image_free(void *image);
 static inline bool bpf_try_module_get(const void *data, struct module *owner)
 {
 	if (owner == BPF_MODULE_OWNER)
-		return bpf_struct_ops_get(data);
+		return !IS_ERR(bpf_struct_ops_get(data));
 	else
 		return try_module_get(owner);
 }
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 687a3e9c76f5..7b9bb6a211f0 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -1150,7 +1150,7 @@ const struct bpf_map_ops bpf_struct_ops_map_ops = {
 /* "const void *" because some subsystem is
  * passing a const (e.g. const struct tcp_congestion_ops *)
  */
-bool bpf_struct_ops_get(const void *kdata)
+struct bpf_map *bpf_struct_ops_get(const void *kdata)
 {
 	struct bpf_struct_ops_value *kvalue;
 	struct bpf_struct_ops_map *st_map;
@@ -1159,9 +1159,9 @@ bool bpf_struct_ops_get(const void *kdata)
 	kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
 	st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue);
 
-	map = __bpf_map_inc_not_zero(&st_map->map, false);
-	return !IS_ERR(map);
+	return __bpf_map_inc_not_zero(&st_map->map, false);
 }
+EXPORT_SYMBOL_GPL(bpf_struct_ops_get);
 
 void bpf_struct_ops_put(const void *kdata)
 {
@@ -1173,6 +1173,7 @@ void bpf_struct_ops_put(const void *kdata)
 
 	bpf_map_put(&st_map->map);
 }
+EXPORT_SYMBOL_GPL(bpf_struct_ops_put);
 
 static bool bpf_struct_ops_valid_to_reg(struct bpf_map *map)
 {
-- 
2.47.3



* [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances
  2025-07-31 21:09 [PATCH bpf-next v1 0/3] Allow struct_ops to get bpf_map during Amery Hung
  2025-07-31 21:09 ` [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata Amery Hung
@ 2025-07-31 21:09 ` Amery Hung
  2025-08-04 23:43   ` Martin KaFai Lau
  2025-07-31 21:09 ` [PATCH bpf-next v1 3/3] selftests/bpf: Test multi_st_ops and calling kfuncs from different programs Amery Hung
  2 siblings, 1 reply; 7+ messages in thread
From: Amery Hung @ 2025-07-31 21:09 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, ameryhung, kernel-team

The current struct_ops in bpf_testmod only support attaching a single
instance. Add multi_st_ops, which supports multiple instances. This
struct_ops uses the map id as the struct_ops id and rejects attachment
with an already-registered id.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../selftests/bpf/test_kmods/bpf_testmod.c    | 112 ++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.h    |   8 ++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   2 +
 3 files changed, 122 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index e9e918cdf31f..522c2ddca929 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -1057,6 +1057,8 @@ __bpf_kfunc int bpf_kfunc_st_ops_inc10(struct st_ops_args *args)
 	return args->a;
 }
 
+__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id);
+
 BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
@@ -1097,6 +1099,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_prologue, KF_TRUSTED_ARGS | KF_SLEEPABL
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_epilogue, KF_TRUSTED_ARGS | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_pro_epilogue, KF_TRUSTED_ARGS | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_inc10, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1, KF_TRUSTED_ARGS)
 BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
 
 static int bpf_testmod_ops_init(struct btf *btf)
@@ -1528,6 +1531,114 @@ static struct bpf_struct_ops testmod_st_ops = {
 	.owner = THIS_MODULE,
 };
 
+struct hlist_head multi_st_ops_list;
+static DEFINE_SPINLOCK(multi_st_ops_lock);
+
+static int multi_st_ops_init(struct btf *btf)
+{
+	spin_lock_init(&multi_st_ops_lock);
+	INIT_HLIST_HEAD(&multi_st_ops_list);
+
+	return 0;
+}
+
+static int multi_st_ops_init_member(const struct btf_type *t,
+				    const struct btf_member *member,
+				    void *kdata, const void *udata)
+{
+	return 0;
+}
+
+static struct bpf_testmod_multi_st_ops *multi_st_ops_find_nolock(u32 id)
+{
+	struct bpf_testmod_multi_st_ops *st_ops;
+
+	hlist_for_each_entry(st_ops, &multi_st_ops_list, node) {
+		if (st_ops->id == id)
+			return st_ops;
+	}
+
+	return NULL;
+}
+
+int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id)
+{
+	struct bpf_testmod_multi_st_ops *st_ops;
+	unsigned long flags;
+	int ret = -1;
+
+	spin_lock_irqsave(&multi_st_ops_lock, flags);
+	st_ops = multi_st_ops_find_nolock(id);
+	if (st_ops)
+		ret = st_ops->test_1(args);
+	spin_unlock_irqrestore(&multi_st_ops_lock, flags);
+
+	return ret;
+}
+
+static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
+{
+	struct bpf_testmod_multi_st_ops *st_ops =
+		(struct bpf_testmod_multi_st_ops *)kdata;
+	struct bpf_map *map;
+	unsigned long flags;
+	int err = 0;
+
+	map = bpf_struct_ops_get(kdata);
+
+	spin_lock_irqsave(&multi_st_ops_lock, flags);
+	if (multi_st_ops_find_nolock(map->id)) {
+		pr_err("multi_st_ops(id:%d) has already been registered\n", map->id);
+		err = -EEXIST;
+		goto unlock;
+	}
+
+	st_ops->id = map->id;
+	hlist_add_head(&st_ops->node, &multi_st_ops_list);
+unlock:
+	bpf_struct_ops_put(kdata);
+	spin_unlock_irqrestore(&multi_st_ops_lock, flags);
+
+	return err;
+}
+
+static void multi_st_ops_unreg(void *kdata, struct bpf_link *link)
+{
+	struct bpf_testmod_multi_st_ops *st_ops;
+	struct bpf_map *map;
+	unsigned long flags;
+
+	map = bpf_struct_ops_get(kdata);
+
+	spin_lock_irqsave(&multi_st_ops_lock, flags);
+	st_ops = multi_st_ops_find_nolock(map->id);
+	if (st_ops)
+		hlist_del(&st_ops->node);
+	spin_unlock_irqrestore(&multi_st_ops_lock, flags);
+
+	bpf_struct_ops_put(kdata);
+}
+
+static int bpf_testmod_multi_st_ops__test_1(struct st_ops_args *args)
+{
+	return 0;
+}
+
+static struct bpf_testmod_multi_st_ops multi_st_ops_cfi_stubs = {
+	.test_1 = bpf_testmod_multi_st_ops__test_1,
+};
+
+struct bpf_struct_ops testmod_multi_st_ops = {
+	.verifier_ops = &bpf_testmod_verifier_ops,
+	.init = multi_st_ops_init,
+	.init_member = multi_st_ops_init_member,
+	.reg = multi_st_ops_reg,
+	.unreg = multi_st_ops_unreg,
+	.cfi_stubs = &multi_st_ops_cfi_stubs,
+	.name = "bpf_testmod_multi_st_ops",
+	.owner = THIS_MODULE,
+};
+
 extern int bpf_fentry_test1(int a);
 
 static int bpf_testmod_init(void)
@@ -1550,6 +1661,7 @@ static int bpf_testmod_init(void)
 	ret = ret ?: register_bpf_struct_ops(&bpf_testmod_ops2, bpf_testmod_ops2);
 	ret = ret ?: register_bpf_struct_ops(&bpf_testmod_ops3, bpf_testmod_ops3);
 	ret = ret ?: register_bpf_struct_ops(&testmod_st_ops, bpf_testmod_st_ops);
+	ret = ret ?: register_bpf_struct_ops(&testmod_multi_st_ops, bpf_testmod_multi_st_ops);
 	ret = ret ?: register_btf_id_dtor_kfuncs(bpf_testmod_dtors,
 						 ARRAY_SIZE(bpf_testmod_dtors),
 						 THIS_MODULE);
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.h
index c9fab51f16e2..b8001ba7c368 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.h
@@ -116,4 +116,12 @@ struct bpf_testmod_st_ops {
 	struct module *owner;
 };
 
+#define BPF_TESTMOD_NAME_SZ 16
+
+struct bpf_testmod_multi_st_ops {
+	int (*test_1)(struct st_ops_args *args);
+	struct hlist_node node;
+	int id;
+};
+
 #endif /* _BPF_TESTMOD_H */
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index b58817938deb..286e7faa4754 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -159,4 +159,6 @@ void bpf_kfunc_trusted_task_test(struct task_struct *ptr) __ksym;
 void bpf_kfunc_trusted_num_test(int *ptr) __ksym;
 void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
 
+int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
+
 #endif /* _BPF_TESTMOD_KFUNC_H */
-- 
2.47.3



* [PATCH bpf-next v1 3/3] selftests/bpf: Test multi_st_ops and calling kfuncs from different programs
  2025-07-31 21:09 [PATCH bpf-next v1 0/3] Allow struct_ops to get bpf_map during Amery Hung
  2025-07-31 21:09 ` [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata Amery Hung
  2025-07-31 21:09 ` [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances Amery Hung
@ 2025-07-31 21:09 ` Amery Hung
  2 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2025-07-31 21:09 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, ameryhung, kernel-team

Test multi_st_ops and demonstrate how different bpf programs can call a
kfunc that refers by id to a struct_ops instance defined in the same
source file. The id is defined as a global variable and initialized
before attaching the skeleton. Kfuncs that take the id can hide the
argument behind a macro to make it almost transparent to bpf program
developers.

The test involves two struct_ops returning different values from
.test_1. Syscall and tracing programs check that the correct value is
returned by a kfunc that calls .test_1.
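
The macro trick, as used in the new progs (wrapped here for readability):

	#define bpf_kfunc_multi_st_ops_test_1(args) \
		bpf_kfunc_multi_st_ops_test_1(args, st_ops_id)

Since function-like macros are not expanded recursively, the right-hand
side still resolves to the real two-argument kfunc, so call sites keep
the one-argument form.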

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../test_struct_ops_id_ops_mapping.c          | 77 +++++++++++++++++++
 .../bpf/progs/struct_ops_id_ops_mapping1.c    | 57 ++++++++++++++
 .../bpf/progs/struct_ops_id_ops_mapping2.c    | 57 ++++++++++++++
 3 files changed, 191 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c
new file mode 100644
index 000000000000..927524ac191d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include "struct_ops_id_ops_mapping1.skel.h"
+#include "struct_ops_id_ops_mapping2.skel.h"
+
+static void test_st_ops_id_ops_mapping(void)
+{
+	struct struct_ops_id_ops_mapping1 *skel1 = NULL;
+	struct struct_ops_id_ops_mapping2 *skel2 = NULL;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct bpf_map_info info = {};
+	__u32 len = sizeof(info);
+	int err, pid, prog1_fd, prog2_fd;
+
+	skel1 = struct_ops_id_ops_mapping1__open_and_load();
+	if (!ASSERT_OK_PTR(skel1, "struct_ops_id_ops_mapping1__open"))
+		goto out;
+
+	skel2 = struct_ops_id_ops_mapping2__open_and_load();
+	if (!ASSERT_OK_PTR(skel2, "struct_ops_id_ops_mapping2__open"))
+		goto out;
+
+	err = bpf_map_get_info_by_fd(bpf_map__fd(skel1->maps.st_ops_map),
+				     &info, &len);
+	if (!ASSERT_OK(err, "bpf_map_get_info_by_fd"))
+		goto out;
+
+	skel1->bss->st_ops_id = info.id;
+
+	err = bpf_map_get_info_by_fd(bpf_map__fd(skel2->maps.st_ops_map),
+				     &info, &len);
+	if (!ASSERT_OK(err, "bpf_map_get_info_by_fd"))
+		goto out;
+
+	skel2->bss->st_ops_id = info.id;
+
+	err = struct_ops_id_ops_mapping1__attach(skel1);
+	if (!ASSERT_OK(err, "struct_ops_id_ops_mapping1__attach"))
+		goto out;
+
+	err = struct_ops_id_ops_mapping2__attach(skel2);
+	if (!ASSERT_OK(err, "struct_ops_id_ops_mapping2__attach"))
+		goto out;
+
+	/* run tracing prog that calls .test_1 and checks return */
+	pid = getpid();
+	skel1->bss->test_pid = pid;
+	skel2->bss->test_pid = pid;
+	sys_gettid();
+	skel1->bss->test_pid = 0;
+	skel2->bss->test_pid = 0;
+
+	/* run syscall_prog that calls .test_1 and checks return */
+	prog1_fd = bpf_program__fd(skel1->progs.syscall_prog);
+	err = bpf_prog_test_run_opts(prog1_fd, &topts);
+	ASSERT_OK(err, "bpf_prog_test_run_opts");
+
+	prog2_fd = bpf_program__fd(skel2->progs.syscall_prog);
+	err = bpf_prog_test_run_opts(prog2_fd, &topts);
+	ASSERT_OK(err, "bpf_prog_test_run_opts");
+
+	ASSERT_EQ(skel1->bss->test_err, 0, "skel1->bss->test_err");
+	ASSERT_EQ(skel2->bss->test_err, 0, "skel2->bss->test_err");
+
+out:
+	if (skel1)
+		struct_ops_id_ops_mapping1__destroy(skel1);
+	if (skel2)
+		struct_ops_id_ops_mapping2__destroy(skel2);
+}
+
+void test_struct_ops_id_ops_mapping(void)
+{
+	if (test__start_subtest("st_ops_id_ops_mapping"))
+		test_st_ops_id_ops_mapping();
+}
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c b/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c
new file mode 100644
index 000000000000..a99b8b2aa2fb
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c
@@ -0,0 +1,57 @@
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "../test_kmods/bpf_testmod.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+char _license[] SEC("license") = "GPL";
+
+#define bpf_kfunc_multi_st_ops_test_1(args) bpf_kfunc_multi_st_ops_test_1(args, st_ops_id)
+int st_ops_id;
+
+int test_pid;
+int test_err;
+
+#define MAP1_MAGIC 1234
+
+SEC("struct_ops")
+int BPF_PROG(test_1, struct st_ops_args *args)
+{
+	return MAP1_MAGIC;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(sys_enter, struct pt_regs *regs, long id)
+{
+	struct st_ops_args args = {};
+	struct task_struct *task;
+	int ret;
+
+	task = bpf_get_current_task_btf();
+	if (!test_pid || task->pid != test_pid)
+		return 0;
+
+	ret = bpf_kfunc_multi_st_ops_test_1(&args);
+	if (ret != MAP1_MAGIC)
+		test_err++;
+
+	return 0;
+}
+
+SEC("syscall")
+int syscall_prog(void *ctx)
+{
+	struct st_ops_args args = {};
+	int ret;
+
+	ret = bpf_kfunc_multi_st_ops_test_1(&args);
+	if (ret != MAP1_MAGIC)
+		test_err++;
+
+	return 0;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_multi_st_ops st_ops_map = {
+	.test_1 = (void *)test_1,
+};
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c b/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c
new file mode 100644
index 000000000000..4ad62963261f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c
@@ -0,0 +1,57 @@
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "../test_kmods/bpf_testmod.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+char _license[] SEC("license") = "GPL";
+
+#define bpf_kfunc_multi_st_ops_test_1(args) bpf_kfunc_multi_st_ops_test_1(args, st_ops_id)
+int st_ops_id;
+
+int test_pid;
+int test_err;
+
+#define MAP2_MAGIC 4567
+
+SEC("struct_ops")
+int BPF_PROG(test_1, struct st_ops_args *args)
+{
+	return MAP2_MAGIC;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(sys_enter, struct pt_regs *regs, long id)
+{
+	struct st_ops_args args = {};
+	struct task_struct *task;
+	int ret;
+
+	task = bpf_get_current_task_btf();
+	if (!test_pid || task->pid != test_pid)
+		return 0;
+
+	ret = bpf_kfunc_multi_st_ops_test_1(&args);
+	if (ret != MAP2_MAGIC)
+		test_err++;
+
+	return 0;
+}
+
+SEC("syscall")
+int syscall_prog(void *ctx)
+{
+	struct st_ops_args args = {};
+	int ret;
+
+	ret = bpf_kfunc_multi_st_ops_test_1(&args);
+	if (ret != MAP2_MAGIC)
+		test_err++;
+
+	return 0;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_multi_st_ops st_ops_map = {
+	.test_1 = (void *)test_1,
+};
-- 
2.47.3



* Re: [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata
  2025-07-31 21:09 ` [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata Amery Hung
@ 2025-08-01  9:31   ` kernel test robot
  0 siblings, 0 replies; 7+ messages in thread
From: kernel test robot @ 2025-08-01  9:31 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: llvm, oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel,
	tj, memxor, martin.lau, ameryhung, kernel-team

Hi Amery,

kernel test robot noticed the following build warnings:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Allow-getting-bpf_map-from-struct_ops-kdata/20250801-051108
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20250731210950.3927649-2-ameryhung%40gmail.com
patch subject: [PATCH bpf-next v1 1/3] bpf: Allow getting bpf_map from struct_ops kdata
config: arm-randconfig-001-20250801 (https://download.01.org/0day-ci/archive/20250801/202508011701.e7Owy62s-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 8f09b03aebb71c154f3bbe725c29e3f47d37c26e)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250801/202508011701.e7Owy62s-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508011701.e7Owy62s-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> kernel/bpf/bpf_struct_ops.c:1157:18: warning: unused variable 'map' [-Wunused-variable]
    1157 |         struct bpf_map *map;
         |                         ^~~
   1 warning generated.


vim +/map +1157 kernel/bpf/bpf_struct_ops.c

85d33df357b634 Martin KaFai Lau 2020-01-08  1149  
85d33df357b634 Martin KaFai Lau 2020-01-08  1150  /* "const void *" because some subsystem is
85d33df357b634 Martin KaFai Lau 2020-01-08  1151   * passing a const (e.g. const struct tcp_congestion_ops *)
85d33df357b634 Martin KaFai Lau 2020-01-08  1152   */
eab8366be0bf63 Amery Hung       2025-07-31  1153  struct bpf_map *bpf_struct_ops_get(const void *kdata)
85d33df357b634 Martin KaFai Lau 2020-01-08  1154  {
85d33df357b634 Martin KaFai Lau 2020-01-08  1155  	struct bpf_struct_ops_value *kvalue;
b671c2067a04c0 Kui-Feng Lee     2023-03-22  1156  	struct bpf_struct_ops_map *st_map;
b671c2067a04c0 Kui-Feng Lee     2023-03-22 @1157  	struct bpf_map *map;
85d33df357b634 Martin KaFai Lau 2020-01-08  1158  
85d33df357b634 Martin KaFai Lau 2020-01-08  1159  	kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
b671c2067a04c0 Kui-Feng Lee     2023-03-22  1160  	st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue);
85d33df357b634 Martin KaFai Lau 2020-01-08  1161  
eab8366be0bf63 Amery Hung       2025-07-31  1162  	return __bpf_map_inc_not_zero(&st_map->map, false);
eb18b49ea758ec Martin KaFai Lau 2021-08-24  1163  }
eab8366be0bf63 Amery Hung       2025-07-31  1164  EXPORT_SYMBOL_GPL(bpf_struct_ops_get);
eb18b49ea758ec Martin KaFai Lau 2021-08-24  1165  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances
  2025-07-31 21:09 ` [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances Amery Hung
@ 2025-08-04 23:43   ` Martin KaFai Lau
  2025-08-05 15:27     ` Amery Hung
  0 siblings, 1 reply; 7+ messages in thread
From: Martin KaFai Lau @ 2025-08-04 23:43 UTC (permalink / raw)
  To: Amery Hung
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, kernel-team, bpf

On 7/31/25 2:09 PM, Amery Hung wrote:
> +static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
> +{
> +	struct bpf_testmod_multi_st_ops *st_ops =
> +		(struct bpf_testmod_multi_st_ops *)kdata;
> +	struct bpf_map *map;
> +	unsigned long flags;
> +	int err = 0;
> +
> +	map = bpf_struct_ops_get(kdata);

bpf_struct_ops_get() returns a map pointer and also inc_not_zero()s the
map, which we know won't fail at this point, so no check is needed.

> +
> +	spin_lock_irqsave(&multi_st_ops_lock, flags);
> +	if (multi_st_ops_find_nolock(map->id)) {
> +		pr_err("multi_st_ops(id:%d) has already been registered\n", map->id);
> +		err = -EEXIST;
> +		goto unlock;
> +	}
> +
> +	st_ops->id = map->id;
> +	hlist_add_head(&st_ops->node, &multi_st_ops_list);
> +unlock:
> +	bpf_struct_ops_put(kdata);

To get an id, it needs a get and an immediate put. No concern about
performance, but it just doesn't feel easy to use. E.g., for a subsystem
supporting link_update, it will need to do this get/put twice: once on
the old kdata and another on the new kdata. Take a look at
bpf_struct_ops_map_link_update().

To create an id->struct_ops mapping, the subsystem needs neither the map
pointer nor an incremented map refcnt. How about creating a new helper
that only returns the id of the kdata?

Uncompiled code:

u32 bpf_struct_ops_id(const void *kdata)
{
	struct bpf_struct_ops_value *kvalue;
	struct bpf_struct_ops_map *st_map;

	kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
	st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue);

	return st_map->map.id;
}
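
The reg() above could then do something like (again uncompiled):

	st_ops->id = bpf_struct_ops_id(kdata);

	spin_lock_irqsave(&multi_st_ops_lock, flags);
	if (multi_st_ops_find_nolock(st_ops->id))
		err = -EEXIST;
	else
		hlist_add_head(&st_ops->node, &multi_st_ops_list);
	spin_unlock_irqrestore(&multi_st_ops_lock, flags);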

> +	spin_unlock_irqrestore(&multi_st_ops_lock, flags);
> +
> +	return err;
> +}
> +
> +static void multi_st_ops_unreg(void *kdata, struct bpf_link *link)
> +{
> +	struct bpf_testmod_multi_st_ops *st_ops;
> +	struct bpf_map *map;
> +	unsigned long flags;
> +
> +	map = bpf_struct_ops_get(kdata);
> +
> +	spin_lock_irqsave(&multi_st_ops_lock, flags);
> +	st_ops = multi_st_ops_find_nolock(map->id);
> +	if (st_ops)
> +		hlist_del(&st_ops->node);
> +	spin_unlock_irqrestore(&multi_st_ops_lock, flags);
> +
> +	bpf_struct_ops_put(kdata);
> +}



* Re: [PATCH bpf-next v1 2/3] selftests/bpf: Add multi_st_ops that supports multiple instances
  2025-08-04 23:43   ` Martin KaFai Lau
@ 2025-08-05 15:27     ` Amery Hung
  0 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2025-08-05 15:27 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, memxor,
	martin.lau, kernel-team, bpf

On Mon, Aug 4, 2025 at 4:43 PM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 7/31/25 2:09 PM, Amery Hung wrote:
> > +static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
> > +{
> > +     struct bpf_testmod_multi_st_ops *st_ops =
> > +             (struct bpf_testmod_multi_st_ops *)kdata;
> > +     struct bpf_map *map;
> > +     unsigned long flags;
> > +     int err = 0;
> > +
> > +     map = bpf_struct_ops_get(kdata);
>
> bpf_struct_ops_get() returns a map pointer and also inc_not_zero()s the
> map, which we know won't fail at this point, so no check is needed.
>
> > +
> > +     spin_lock_irqsave(&multi_st_ops_lock, flags);
> > +     if (multi_st_ops_find_nolock(map->id)) {
> > +             pr_err("multi_st_ops(id:%d) has already been registered\n", map->id);
> > +             err = -EEXIST;
> > +             goto unlock;
> > +     }
> > +
> > +     st_ops->id = map->id;
> > +     hlist_add_head(&st_ops->node, &multi_st_ops_list);
> > +unlock:
> > +     bpf_struct_ops_put(kdata);
>
> To get an id, it needs a get and an immediate put. No concern about
> performance, but it just doesn't feel easy to use. E.g., for a subsystem
> supporting link_update, it will need to do this get/put twice: once on
> the old kdata and another on the new kdata. Take a look at
> bpf_struct_ops_map_link_update().
>
> To create an id->struct_ops mapping, the subsystem needs neither the map
> pointer nor an incremented map refcnt. How about creating a new helper
> that only returns the id of the kdata?
>

Makes sense. I will create a new helper as you suggested. I was
thinking about repurposing existing helpers, but there is indeed no need
to increase the refcount, as kdata is protected under a lock during
map_update, link_create and link_update.

Thanks,
Amery

> Uncompiled code:
>
> u32 bpf_struct_ops_id(const void *kdata)
> {
>         struct bpf_struct_ops_value *kvalue;
>         struct bpf_struct_ops_map *st_map;
>
>         kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
>         st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue);
>
>         return st_map->map.id;
> }
>
> > +     spin_unlock_irqrestore(&multi_st_ops_lock, flags);
> > +
> > +     return err;
> > +}
> > +
> > +static void multi_st_ops_unreg(void *kdata, struct bpf_link *link)
> > +{
> > +     struct bpf_testmod_multi_st_ops *st_ops;
> > +     struct bpf_map *map;
> > +     unsigned long flags;
> > +
> > +     map = bpf_struct_ops_get(kdata);
> > +
> > +     spin_lock_irqsave(&multi_st_ops_lock, flags);
> > +     st_ops = multi_st_ops_find_nolock(map->id);
> > +     if (st_ops)
> > +             hlist_del(&st_ops->node);
> > +     spin_unlock_irqrestore(&multi_st_ops_lock, flags);
> > +
> > +     bpf_struct_ops_put(kdata);
> > +}
>

