netdev.vger.kernel.org archive mirror
* [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops
@ 2025-10-16 20:44 Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 1/4] bpf: Allow verifier to fixup kernel module kfuncs Amery Hung
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 20:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	ameryhung, kernel-team

v1 -> v2
   - Poison st_ops_assoc when reusing the program in more than one
     struct_ops map and add a helper to access the pointer (Andrii)
   - Minor style and naming changes (Andrii)

---

Hi,

This patchset adds a new BPF command BPF_PROG_ASSOC_STRUCT_OPS to
the bpf() syscall to allow associating a BPF program with a struct_ops
map. The command addresses an emerging need from struct_ops users. As
the number of subsystems adopting struct_ops grows, more users are
building their struct_ops-based solutions with help from other BPF
programs. For example, scx_layered uses a syscall program as a user
space trigger to refresh layers [0]. It also uses a tracing program to
infer whether a task is using the GPU and needs to be prioritized [1].
In these use cases, when there are multiple struct_ops instances, a
struct_ops kfunc called from a BPF program, whether struct_ops or not,
needs to be able to refer to a specific instance, which currently is
not possible.

The new BPF command will allow users to explicitly associate a BPF
program with a struct_ops map. The libbpf wrapper can be called after
loading programs and before attaching programs and struct_ops.

Internally, it will set prog->aux->st_ops_assoc to the struct_ops
struct (i.e., kdata). struct_ops kfuncs can then get the associated
struct_ops by adding a "__prog" argument. The value of the special
argument will be fixed up by the verifier during verification.
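As a sketch of how an implementer might consume this (my_ops and
my_kfunc are illustrative names, not part of this series), a kfunc
taking a "__prog"-suffixed argument could look up the associated
struct_ops roughly like so:

```c
/* Hypothetical sketch of a kfunc with a "__prog" argument. The verifier
 * rewrites the aux__prog argument to point at the calling program's
 * bpf_prog_aux, so BPF callers pass NULL for it.
 */
__bpf_kfunc int my_kfunc(struct st_ops_args *args, void *aux__prog)
{
	struct bpf_prog_aux *aux = aux__prog;
	struct my_ops *ops;

	/* NULL if no association was made or the association is ambiguous */
	ops = bpf_prog_get_assoc_struct_ops(aux);
	if (!ops)
		return -ENOENT;

	return ops->some_op(args);
}
```

This mirrors the bpf_testmod kfunc added in patch 4 of the series.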

The command conceptually associates the implementation of BPF programs
with a struct_ops map, not the attachment. A program associated with
the map takes a refcount on it so that st_ops_assoc always points to a
valid struct_ops struct. However, the struct_ops can be in an
uninitialized or unattached state. The struct_ops implementer is
responsible for maintaining and checking the state of the associated
struct_ops before accessing it.

We can also consider supporting associating a struct_ops link with BPF
programs, which on one hand would make the struct_ops implementer's
job easier, but might complicate the libbpf workflow and does not
apply to legacy struct_ops attachment.

[0] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L557
[1] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L754

---

Amery Hung (4):
  bpf: Allow verifier to fixup kernel module kfuncs
  bpf: Support associating BPF program with struct_ops
  libbpf: Add bpf_prog_assoc_struct_ops() API
  selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command

 include/linux/bpf.h                           |  16 +++
 include/uapi/linux/bpf.h                      |  17 +++
 kernel/bpf/bpf_struct_ops.c                   |  44 ++++++++
 kernel/bpf/core.c                             |   6 +
 kernel/bpf/syscall.c                          |  50 +++++++++
 kernel/bpf/verifier.c                         |   3 +-
 tools/include/uapi/linux/bpf.h                |  17 +++
 tools/lib/bpf/bpf.c                           |  19 ++++
 tools/lib/bpf/bpf.h                           |  20 ++++
 tools/lib/bpf/libbpf.map                      |   1 +
 .../bpf/prog_tests/test_struct_ops_assoc.c    |  76 +++++++++++++
 .../selftests/bpf/progs/struct_ops_assoc.c    | 105 ++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  17 +++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   1 +
 14 files changed, 390 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_assoc.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_assoc.c

-- 
2.47.3


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v2 bpf-next 1/4] bpf: Allow verifier to fixup kernel module kfuncs
  2025-10-16 20:44 [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops Amery Hung
@ 2025-10-16 20:45 ` Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 20:45 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	ameryhung, kernel-team

Allow the verifier to fix up kernel module kfuncs to support module
kfuncs with __prog arguments. Currently, special kfuncs and kfuncs
with __prog arguments are all kernel kfuncs. Allowing kernel module
kfuncs should not affect the existing kfunc fixup, as kernel module
kfuncs have BTF IDs greater than kernel kfuncs' BTF IDs.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 kernel/bpf/verifier.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e892df386eed..d5f1046d08b7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21889,8 +21889,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 
 	if (!bpf_jit_supports_far_kfunc_call())
 		insn->imm = BPF_CALL_IMM(desc->addr);
-	if (insn->off)
-		return 0;
+
 	if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl] ||
 	    desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
 		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
-- 
2.47.3



* [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:44 [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 1/4] bpf: Allow verifier to fixup kernel module kfuncs Amery Hung
@ 2025-10-16 20:45 ` Amery Hung
  2025-10-16 23:51   ` Martin KaFai Lau
                     ` (4 more replies)
  2025-10-16 20:45 ` [PATCH v2 bpf-next 3/4] libbpf: Add bpf_prog_assoc_struct_ops() API Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 4/4] selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command Amery Hung
  3 siblings, 5 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 20:45 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	ameryhung, kernel-team

Add a new BPF command BPF_PROG_ASSOC_STRUCT_OPS to allow associating
a BPF program with a struct_ops map. This command takes file
descriptors of a struct_ops map and a BPF program and sets
prog->aux->st_ops_assoc to the kdata of the struct_ops map.

The command does not accept a struct_ops program or a non-struct_ops
map. Programs of a struct_ops map are automatically associated with
the map during map update. If a program is shared between two
struct_ops maps, prog->aux->st_ops_assoc will be poisoned to indicate
that the associated struct_ops is ambiguous. A kernel helper
bpf_prog_get_assoc_struct_ops() can be used to retrieve the pointer.
The associated struct_ops map, once set, cannot be changed later. This
restriction may be lifted in the future if there is a use case.

Each associated program, except struct_ops programs of the map, takes
a refcount on the map to pin it so that prog->aux->st_ops_assoc, if
set, is always valid. However, it is not guaranteed that the map
members are fully updated or that the map is attached. For example, a
BPF program can be associated with a struct_ops map before map_update.
The struct_ops implementer is responsible for maintaining and checking
the state of the associated struct_ops map before accessing it.
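A minimal user-space sketch of issuing the new command directly via
the bpf() syscall, using the attribute layout added by this patch
(assumes a kernel and uapi header with this series applied; error
handling elided):

```c
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: associate prog_fd with the struct_ops map map_fd via the raw
 * syscall. Returns 0 on success or -1 with errno set, per the uapi doc.
 */
static int prog_assoc_struct_ops(int map_fd, int prog_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_assoc_struct_ops.map_fd = map_fd;
	attr.prog_assoc_struct_ops.prog_fd = prog_fd;
	attr.prog_assoc_struct_ops.flags = 0; /* must be 0 for now */

	return syscall(__NR_bpf, BPF_PROG_ASSOC_STRUCT_OPS, &attr, sizeof(attr));
}
```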

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 include/linux/bpf.h            | 16 +++++++++++
 include/uapi/linux/bpf.h       | 17 ++++++++++++
 kernel/bpf/bpf_struct_ops.c    | 44 ++++++++++++++++++++++++++++++
 kernel/bpf/core.c              |  6 ++++
 kernel/bpf/syscall.c           | 50 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 17 ++++++++++++
 6 files changed, 150 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a98c83346134..b2037e9b72a1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1710,6 +1710,8 @@ struct bpf_prog_aux {
 		struct rcu_head	rcu;
 	};
 	struct bpf_stream stream[2];
+	struct mutex st_ops_assoc_mutex;
+	void *st_ops_assoc;
 };
 
 struct bpf_prog {
@@ -2010,6 +2012,9 @@ static inline void bpf_module_put(const void *data, struct module *owner)
 		module_put(owner);
 }
 int bpf_struct_ops_link_create(union bpf_attr *attr);
+int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map);
+void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog);
+void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux);
 u32 bpf_struct_ops_id(const void *kdata);
 
 #ifdef CONFIG_NET
@@ -2057,6 +2062,17 @@ static inline int bpf_struct_ops_link_create(union bpf_attr *attr)
 {
 	return -EOPNOTSUPP;
 }
+static inline int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map)
+{
+	return -EOPNOTSUPP;
+}
+static inline void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog)
+{
+}
+static inline void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux)
+{
+	return NULL;
+}
 static inline void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map)
 {
 }
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ae83d8649ef1..41cacdbd7bd5 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -918,6 +918,16 @@ union bpf_iter_link_info {
  *		Number of bytes read from the stream on success, or -1 if an
  *		error occurred (in which case, *errno* is set appropriately).
  *
+ * BPF_PROG_ASSOC_STRUCT_OPS
+ * 	Description
+ * 		Associate a BPF program with a struct_ops map. The struct_ops
+ * 		map is identified by *map_fd* and the BPF program is
+ * 		identified by *prog_fd*.
+ *
+ * 	Return
+ * 		0 on success or -1 if an error occurred (in which case,
+ * 		*errno* is set appropriately).
+ *
  * NOTES
  *	eBPF objects (maps and programs) can be shared between processes.
  *
@@ -974,6 +984,7 @@ enum bpf_cmd {
 	BPF_PROG_BIND_MAP,
 	BPF_TOKEN_CREATE,
 	BPF_PROG_STREAM_READ_BY_FD,
+	BPF_PROG_ASSOC_STRUCT_OPS,
 	__MAX_BPF_CMD,
 };
 
@@ -1890,6 +1901,12 @@ union bpf_attr {
 		__u32		prog_fd;
 	} prog_stream_read;
 
+	struct {
+		__u32		map_fd;
+		__u32		prog_fd;
+		__u32		flags;
+	} prog_assoc_struct_ops;
+
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index a41e6730edcf..e060d9823e4a 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -528,6 +528,7 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
 	for (i = 0; i < st_map->funcs_cnt; i++) {
 		if (!st_map->links[i])
 			break;
+		bpf_prog_disassoc_struct_ops(st_map->links[i]->prog);
 		bpf_link_put(st_map->links[i]);
 		st_map->links[i] = NULL;
 	}
@@ -801,6 +802,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			goto reset_unlock;
 		}
 
+		/* If the program is reused, prog->aux->st_ops_assoc will be poisoned */
+		bpf_prog_assoc_struct_ops(prog, &st_map->map);
+
 		link = kzalloc(sizeof(*link), GFP_USER);
 		if (!link) {
 			bpf_prog_put(prog);
@@ -1394,6 +1398,46 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
 	return err;
 }
 
+int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map)
+{
+	struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
+	void *kdata = &st_map->kvalue.data;
+	int ret = 0;
+
+	mutex_lock(&prog->aux->st_ops_assoc_mutex);
+
+	if (prog->aux->st_ops_assoc && prog->aux->st_ops_assoc != kdata) {
+		if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
+			WRITE_ONCE(prog->aux->st_ops_assoc, BPF_PTR_POISON);
+
+		ret = -EBUSY;
+		goto out;
+	}
+
+	WRITE_ONCE(prog->aux->st_ops_assoc, kdata);
+out:
+	mutex_unlock(&prog->aux->st_ops_assoc_mutex);
+	return ret;
+}
+
+void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog)
+{
+	mutex_lock(&prog->aux->st_ops_assoc_mutex);
+	WRITE_ONCE(prog->aux->st_ops_assoc, NULL);
+	mutex_unlock(&prog->aux->st_ops_assoc_mutex);
+}
+
+void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux)
+{
+	void *st_ops_assoc = READ_ONCE(aux->st_ops_assoc);
+
+	if (!st_ops_assoc || st_ops_assoc == BPF_PTR_POISON)
+		return NULL;
+
+	return st_ops_assoc;
+}
+EXPORT_SYMBOL_GPL(bpf_prog_get_assoc_struct_ops);
+
 void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map)
 {
 	struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d595fe512498..f66831776760 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -136,6 +136,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 	mutex_init(&fp->aux->used_maps_mutex);
 	mutex_init(&fp->aux->ext_mutex);
 	mutex_init(&fp->aux->dst_mutex);
+	mutex_init(&fp->aux->st_ops_assoc_mutex);
 
 #ifdef CONFIG_BPF_SYSCALL
 	bpf_prog_stream_init(fp);
@@ -286,6 +287,7 @@ void __bpf_prog_free(struct bpf_prog *fp)
 	if (fp->aux) {
 		mutex_destroy(&fp->aux->used_maps_mutex);
 		mutex_destroy(&fp->aux->dst_mutex);
+		mutex_destroy(&fp->aux->st_ops_assoc_mutex);
 		kfree(fp->aux->poke_tab);
 		kfree(fp->aux);
 	}
@@ -2875,6 +2877,10 @@ static void bpf_prog_free_deferred(struct work_struct *work)
 #endif
 	bpf_free_used_maps(aux);
 	bpf_free_used_btfs(aux);
+	if (aux->st_ops_assoc) {
+		bpf_struct_ops_put(aux->st_ops_assoc);
+		bpf_prog_disassoc_struct_ops(aux->prog);
+	}
 	if (bpf_prog_is_dev_bound(aux))
 		bpf_prog_dev_bound_destroy(aux->prog);
 #ifdef CONFIG_PERF_EVENTS
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a48fa86f82a7..f4027e50e1d5 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -6092,6 +6092,53 @@ static int prog_stream_read(union bpf_attr *attr)
 	return ret;
 }
 
+#define BPF_PROG_ASSOC_STRUCT_OPS_LAST_FIELD prog_assoc_struct_ops.prog_fd
+
+static int prog_assoc_struct_ops(union bpf_attr *attr)
+{
+	struct bpf_prog *prog;
+	struct bpf_map *map;
+	int ret;
+
+	if (CHECK_ATTR(BPF_PROG_ASSOC_STRUCT_OPS))
+		return -EINVAL;
+
+	if (attr->prog_assoc_struct_ops.flags)
+		return -EINVAL;
+
+	prog = bpf_prog_get(attr->prog_assoc_struct_ops.prog_fd);
+	if (IS_ERR(prog))
+		return PTR_ERR(prog);
+
+	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) {
+		ret = -EINVAL;
+		goto put_prog;
+	}
+
+	map = bpf_map_get(attr->prog_assoc_struct_ops.map_fd);
+	if (IS_ERR(map)) {
+		ret = PTR_ERR(map);
+		goto put_prog;
+	}
+
+	if (map->map_type != BPF_MAP_TYPE_STRUCT_OPS) {
+		ret = -EINVAL;
+		goto put_map;
+	}
+
+	ret = bpf_prog_assoc_struct_ops(prog, map);
+	if (ret)
+		goto put_map;
+
+	bpf_prog_put(prog);
+	return 0;
+put_map:
+	bpf_map_put(map);
+put_prog:
+	bpf_prog_put(prog);
+	return ret;
+}
+
 static int __sys_bpf(enum bpf_cmd cmd, bpfptr_t uattr, unsigned int size)
 {
 	union bpf_attr attr;
@@ -6231,6 +6278,9 @@ static int __sys_bpf(enum bpf_cmd cmd, bpfptr_t uattr, unsigned int size)
 	case BPF_PROG_STREAM_READ_BY_FD:
 		err = prog_stream_read(&attr);
 		break;
+	case BPF_PROG_ASSOC_STRUCT_OPS:
+		err = prog_assoc_struct_ops(&attr);
+		break;
 	default:
 		err = -EINVAL;
 		break;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ae83d8649ef1..41cacdbd7bd5 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -918,6 +918,16 @@ union bpf_iter_link_info {
  *		Number of bytes read from the stream on success, or -1 if an
  *		error occurred (in which case, *errno* is set appropriately).
  *
+ * BPF_PROG_ASSOC_STRUCT_OPS
+ * 	Description
+ * 		Associate a BPF program with a struct_ops map. The struct_ops
+ * 		map is identified by *map_fd* and the BPF program is
+ * 		identified by *prog_fd*.
+ *
+ * 	Return
+ * 		0 on success or -1 if an error occurred (in which case,
+ * 		*errno* is set appropriately).
+ *
  * NOTES
  *	eBPF objects (maps and programs) can be shared between processes.
  *
@@ -974,6 +984,7 @@ enum bpf_cmd {
 	BPF_PROG_BIND_MAP,
 	BPF_TOKEN_CREATE,
 	BPF_PROG_STREAM_READ_BY_FD,
+	BPF_PROG_ASSOC_STRUCT_OPS,
 	__MAX_BPF_CMD,
 };
 
@@ -1890,6 +1901,12 @@ union bpf_attr {
 		__u32		prog_fd;
 	} prog_stream_read;
 
+	struct {
+		__u32		map_fd;
+		__u32		prog_fd;
+		__u32		flags;
+	} prog_assoc_struct_ops;
+
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
-- 
2.47.3



* [PATCH v2 bpf-next 3/4] libbpf: Add bpf_prog_assoc_struct_ops() API
  2025-10-16 20:44 [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 1/4] bpf: Allow verifier to fixup kernel module kfuncs Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
@ 2025-10-16 20:45 ` Amery Hung
  2025-10-16 20:45 ` [PATCH v2 bpf-next 4/4] selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command Amery Hung
  3 siblings, 0 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 20:45 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	ameryhung, kernel-team

Add a low-level wrapper API for the BPF_PROG_ASSOC_STRUCT_OPS command
of the bpf() syscall.
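The wrapper is meant to be called after loading and before attaching.
A sketch of typical usage (skeleton, map, and program names are
illustrative, not from this series):

```c
/* Sketch: after struct_ops_assoc__open_and_load(), associate a tracing
 * program with a struct_ops map before attaching anything. The map and
 * program names below are hypothetical. opts may be NULL since no flags
 * are defined yet.
 */
int map_fd = bpf_map__fd(skel->maps.my_st_ops);       /* hypothetical map */
int prog_fd = bpf_program__fd(skel->progs.my_tracer); /* hypothetical prog */
int err;

err = bpf_prog_assoc_struct_ops(map_fd, prog_fd, NULL);
if (err)
	fprintf(stderr, "assoc failed: %d\n", err);
```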

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 tools/lib/bpf/bpf.c      | 19 +++++++++++++++++++
 tools/lib/bpf/bpf.h      | 20 ++++++++++++++++++++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 40 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 339b19797237..020149da30dd 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -1397,3 +1397,22 @@ int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
 	err = sys_bpf(BPF_PROG_STREAM_READ_BY_FD, &attr, attr_sz);
 	return libbpf_err_errno(err);
 }
+
+int bpf_prog_assoc_struct_ops(int map_fd, int prog_fd,
+			      struct bpf_prog_assoc_struct_ops_opts *opts)
+{
+	const size_t attr_sz = offsetofend(union bpf_attr, prog_assoc_struct_ops);
+	union bpf_attr attr;
+	int err;
+
+	if (!OPTS_VALID(opts, bpf_prog_assoc_struct_ops_opts))
+		return libbpf_err(-EINVAL);
+
+	memset(&attr, 0, attr_sz);
+	attr.prog_assoc_struct_ops.map_fd = map_fd;
+	attr.prog_assoc_struct_ops.prog_fd = prog_fd;
+	attr.prog_assoc_struct_ops.flags = OPTS_GET(opts, flags, 0);
+
+	err = sys_bpf(BPF_PROG_ASSOC_STRUCT_OPS, &attr, attr_sz);
+	return libbpf_err_errno(err);
+}
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index e983a3e40d61..14687c08772d 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -733,6 +733,26 @@ struct bpf_prog_stream_read_opts {
 LIBBPF_API int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
 				    struct bpf_prog_stream_read_opts *opts);
 
+struct bpf_prog_assoc_struct_ops_opts {
+	size_t sz;
+	__u32 flags;
+	size_t :0;
+};
+#define bpf_prog_assoc_struct_ops_opts__last_field flags
+/**
+ * @brief **bpf_prog_assoc_struct_ops** associates a BPF program with a
+ * struct_ops map.
+ *
+ * @param map_fd FD for the struct_ops map to be associated with a BPF program
+ * @param prog_fd FD for the BPF program
+ * @param opts optional options, can be NULL
+ *
+ * @return 0 on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
+LIBBPF_API int bpf_prog_assoc_struct_ops(int map_fd, int prog_fd,
+					 struct bpf_prog_assoc_struct_ops_opts *opts);
+
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 8ed8749907d4..e1602569426a 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -451,4 +451,5 @@ LIBBPF_1.7.0 {
 	global:
 		bpf_map__set_exclusive_program;
 		bpf_map__exclusive_program;
+		bpf_prog_assoc_struct_ops;
 } LIBBPF_1.6.0;
-- 
2.47.3



* [PATCH v2 bpf-next 4/4] selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command
  2025-10-16 20:44 [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops Amery Hung
                   ` (2 preceding siblings ...)
  2025-10-16 20:45 ` [PATCH v2 bpf-next 3/4] libbpf: Add bpf_prog_assoc_struct_ops() API Amery Hung
@ 2025-10-16 20:45 ` Amery Hung
  3 siblings, 0 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 20:45 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	ameryhung, kernel-team

Test the BPF_PROG_ASSOC_STRUCT_OPS command that associates a BPF
program with a struct_ops map. The test follows the same logic as
commit ba7000f1c360 ("selftests/bpf: Test multi_st_ops and calling
kfuncs from different programs"), but instead of using a map id to
identify a specific struct_ops, this test uses the new BPF command to
associate a struct_ops with a program.

The test consists of two sets of almost identical struct_ops maps and
BPF programs associated with them. Their only difference is the unique
value returned by bpf_testmod_multi_st_ops::test_1().

The test first loads the programs and associates them with the
struct_ops maps. Then, it exercises the BPF programs. They in turn
call the kfunc bpf_kfunc_multi_st_ops_test_1_prog_arg() to trigger
test_1() of the associated struct_ops map, and check whether the right
unique value is returned.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../bpf/prog_tests/test_struct_ops_assoc.c    |  76 +++++++++++++
 .../selftests/bpf/progs/struct_ops_assoc.c    | 105 ++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  17 +++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   1 +
 4 files changed, 199 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_assoc.c
 create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_assoc.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_assoc.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_assoc.c
new file mode 100644
index 000000000000..cf8b104cbfb7
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_assoc.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include "struct_ops_assoc.skel.h"
+
+static void test_st_ops_assoc(void)
+{
+	int sys_enter_prog_a_fd, sys_enter_prog_b_fd;
+	int syscall_prog_a_fd, syscall_prog_b_fd;
+	struct struct_ops_assoc *skel = NULL;
+	int err, pid, map_a_fd, map_b_fd;
+
+	skel = struct_ops_assoc__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "struct_ops_assoc__open"))
+		goto out;
+
+	sys_enter_prog_a_fd = bpf_program__fd(skel->progs.sys_enter_prog_a);
+	sys_enter_prog_b_fd = bpf_program__fd(skel->progs.sys_enter_prog_b);
+	syscall_prog_a_fd = bpf_program__fd(skel->progs.syscall_prog_a);
+	syscall_prog_b_fd = bpf_program__fd(skel->progs.syscall_prog_b);
+	map_a_fd = bpf_map__fd(skel->maps.st_ops_map_a);
+	map_b_fd = bpf_map__fd(skel->maps.st_ops_map_b);
+
+	err = bpf_prog_assoc_struct_ops(map_a_fd, syscall_prog_a_fd, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_assoc_struct_ops"))
+		goto out;
+
+	err = bpf_prog_assoc_struct_ops(map_a_fd, sys_enter_prog_a_fd, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_assoc_struct_ops"))
+		goto out;
+
+	err = bpf_prog_assoc_struct_ops(map_b_fd, syscall_prog_b_fd, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_assoc_struct_ops"))
+		goto out;
+
+	err = bpf_prog_assoc_struct_ops(map_b_fd, sys_enter_prog_b_fd, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_assoc_struct_ops"))
+		goto out;
+
+	/* sys_enter_prog_a already associated with map_a */
+	err = bpf_prog_assoc_struct_ops(map_b_fd, sys_enter_prog_a_fd, NULL);
+	if (!ASSERT_ERR(err, "bpf_prog_assoc_struct_ops"))
+		goto out;
+
+	err = struct_ops_assoc__attach(skel);
+	if (!ASSERT_OK(err, "struct_ops_assoc__attach"))
+		goto out;
+
+	/* run tracing prog that calls .test_1 and checks return */
+	pid = getpid();
+	skel->bss->test_pid = pid;
+	sys_gettid();
+	skel->bss->test_pid = 0;
+
+	ASSERT_EQ(skel->bss->test_err_a, 0, "skel->bss->test_err_a");
+	ASSERT_EQ(skel->bss->test_err_b, 0, "skel->bss->test_err_b");
+
+	/* run syscall_prog that calls .test_1 and checks return */
+	err = bpf_prog_test_run_opts(syscall_prog_a_fd, NULL);
+	ASSERT_OK(err, "bpf_prog_test_run_opts");
+
+	err = bpf_prog_test_run_opts(syscall_prog_b_fd, NULL);
+	ASSERT_OK(err, "bpf_prog_test_run_opts");
+
+	ASSERT_EQ(skel->bss->test_err_a, 0, "skel->bss->test_err");
+	ASSERT_EQ(skel->bss->test_err_b, 0, "skel->bss->test_err");
+
+out:
+	struct_ops_assoc__destroy(skel);
+}
+
+void test_struct_ops_assoc(void)
+{
+	if (test__start_subtest("st_ops_assoc"))
+		test_st_ops_assoc();
+}
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
new file mode 100644
index 000000000000..fe47287a49f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "../test_kmods/bpf_testmod.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int test_pid;
+
+/* Programs associated with st_ops_map_a */
+
+#define MAP_A_MAGIC 1234
+int test_err_a;
+
+SEC("struct_ops")
+int BPF_PROG(test_1_a, struct st_ops_args *args)
+{
+	return MAP_A_MAGIC;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(sys_enter_prog_a, struct pt_regs *regs, long id)
+{
+	struct st_ops_args args = {};
+	struct task_struct *task;
+	int ret;
+
+	task = bpf_get_current_task_btf();
+	if (!test_pid || task->pid != test_pid)
+		return 0;
+
+	ret = bpf_kfunc_multi_st_ops_test_1_prog_arg(&args, NULL);
+	if (ret != MAP_A_MAGIC)
+		test_err_a++;
+
+	return 0;
+}
+
+SEC("syscall")
+int syscall_prog_a(void *ctx)
+{
+	struct st_ops_args args = {};
+	int ret;
+
+	ret = bpf_kfunc_multi_st_ops_test_1_prog_arg(&args, NULL);
+	if (ret != MAP_A_MAGIC)
+		test_err_a++;
+
+	return 0;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_multi_st_ops st_ops_map_a = {
+	.test_1 = (void *)test_1_a,
+};
+
+/* Programs associated with st_ops_map_b */
+
+#define MAP_B_MAGIC 5678
+int test_err_b;
+
+SEC("struct_ops")
+int BPF_PROG(test_1_b, struct st_ops_args *args)
+{
+	return MAP_B_MAGIC;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(sys_enter_prog_b, struct pt_regs *regs, long id)
+{
+	struct st_ops_args args = {};
+	struct task_struct *task;
+	int ret;
+
+	task = bpf_get_current_task_btf();
+	if (!test_pid || task->pid != test_pid)
+		return 0;
+
+	ret = bpf_kfunc_multi_st_ops_test_1_prog_arg(&args, NULL);
+	if (ret != MAP_B_MAGIC)
+		test_err_b++;
+
+	return 0;
+}
+
+SEC("syscall")
+int syscall_prog_b(void *ctx)
+{
+	struct st_ops_args args = {};
+	int ret;
+
+	ret = bpf_kfunc_multi_st_ops_test_1_prog_arg(&args, NULL);
+	if (ret != MAP_B_MAGIC)
+		test_err_b++;
+
+	return 0;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_multi_st_ops st_ops_map_b = {
+	.test_1 = (void *)test_1_b,
+};
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index 6df6475f5dbc..d3c3a8f1e63b 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -1101,6 +1101,7 @@ __bpf_kfunc int bpf_kfunc_st_ops_inc10(struct st_ops_args *args)
 }
 
 __bpf_kfunc int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id);
+__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_prog_arg(struct st_ops_args *args, void *aux_prog);
 
 BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
@@ -1143,6 +1144,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_epilogue, KF_TRUSTED_ARGS | KF_SLEEPABL
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_pro_epilogue, KF_TRUSTED_ARGS | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_inc10, KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_prog_arg, KF_TRUSTED_ARGS)
 BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
 
 static int bpf_testmod_ops_init(struct btf *btf)
@@ -1604,6 +1606,7 @@ static struct bpf_testmod_multi_st_ops *multi_st_ops_find_nolock(u32 id)
 	return NULL;
 }
 
+/* Call test_1() of the struct_ops map identified by the id */
 int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id)
 {
 	struct bpf_testmod_multi_st_ops *st_ops;
@@ -1619,6 +1622,20 @@ int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id)
 	return ret;
 }
 
+/* Call test_1() of the associated struct_ops map */
+int bpf_kfunc_multi_st_ops_test_1_prog_arg(struct st_ops_args *args, void *aux__prog)
+{
+	struct bpf_prog_aux *prog_aux = (struct bpf_prog_aux *)aux__prog;
+	struct bpf_testmod_multi_st_ops *st_ops;
+	int ret = -1;
+
+	st_ops = (struct bpf_testmod_multi_st_ops *)bpf_prog_get_assoc_struct_ops(prog_aux);
+	if (st_ops)
+		ret = st_ops->test_1(args);
+
+	return ret;
+}
+
 static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
 {
 	struct bpf_testmod_multi_st_ops *st_ops =
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index 4df6fa6a92cb..d40f4cddbd1e 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -162,5 +162,6 @@ struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
 int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
 
 int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
+int bpf_kfunc_multi_st_ops_test_1_prog_arg(struct st_ops_args *args, void *aux__prog) __ksym;
 
 #endif /* _BPF_TESTMOD_KFUNC_H */
-- 
2.47.3



* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
@ 2025-10-16 23:51   ` Martin KaFai Lau
  2025-10-16 23:58     ` Amery Hung
  2025-10-17  0:19   ` Martin KaFai Lau
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Martin KaFai Lau @ 2025-10-16 23:51 UTC (permalink / raw)
  To: Amery Hung
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau, bpf,
	kernel-team


On 10/16/25 1:45 PM, Amery Hung wrote:
> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
> index a41e6730edcf..e060d9823e4a 100644
> --- a/kernel/bpf/bpf_struct_ops.c
> +++ b/kernel/bpf/bpf_struct_ops.c
> @@ -528,6 +528,7 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
>   	for (i = 0; i < st_map->funcs_cnt; i++) {
>   		if (!st_map->links[i])
>   			break;
> +		bpf_prog_disassoc_struct_ops(st_map->links[i]->prog);

It took me some time to understand why it needs to specifically call 
bpf_prog_disassoc_struct_ops here for struct_ops programs. bpf_prog_put 
has not been done yet. The BPF_PTR_POISON could be set back to NULL. My 
understanding is the BPF_PTR_POISON should stay with the prog's lifetime?
>   		bpf_link_put(st_map->links[i]);
>   		st_map->links[i] = NULL;
>   	}
> @@ -801,6 +802,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>   			goto reset_unlock;
>   		}
>   
> +		/* If the program is reused, prog->aux->st_ops_assoc will be poisoned */
> +		bpf_prog_assoc_struct_ops(prog, &st_map->map);
> +
>   		link = kzalloc(sizeof(*link), GFP_USER);
>   		if (!link) {
>   			bpf_prog_put(prog);
> @@ -1394,6 +1398,46 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
>   	return err;
>   }
>   
> +int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map)
> +{
> +	struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
> +	void *kdata = &st_map->kvalue.data;
> +	int ret = 0;
> +
> +	mutex_lock(&prog->aux->st_ops_assoc_mutex);
> +
> +	if (prog->aux->st_ops_assoc && prog->aux->st_ops_assoc != kdata) {
> +		if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
> +			WRITE_ONCE(prog->aux->st_ops_assoc, BPF_PTR_POISON);
> +
> +		ret = -EBUSY;
> +		goto out;
> +	}
> +
> +	WRITE_ONCE(prog->aux->st_ops_assoc, kdata);
> +out:
> +	mutex_unlock(&prog->aux->st_ops_assoc_mutex);
> +	return ret;
> +}
> +
> +void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog)
> +{
> +	mutex_lock(&prog->aux->st_ops_assoc_mutex);

Can it check the prog type here and decide if bpf_struct_ops_put needs 
to be called?

> +	WRITE_ONCE(prog->aux->st_ops_assoc, NULL);




* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 23:51   ` Martin KaFai Lau
@ 2025-10-16 23:58     ` Amery Hung
  0 siblings, 0 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-16 23:58 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau, bpf,
	kernel-team

On Thu, Oct 16, 2025 at 4:51 PM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
>
> On 10/16/25 1:45 PM, Amery Hung wrote:
> > diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
> > index a41e6730edcf..e060d9823e4a 100644
> > --- a/kernel/bpf/bpf_struct_ops.c
> > +++ b/kernel/bpf/bpf_struct_ops.c
> > @@ -528,6 +528,7 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
> >       for (i = 0; i < st_map->funcs_cnt; i++) {
> >               if (!st_map->links[i])
> >                       break;
> > +             bpf_prog_disassoc_struct_ops(st_map->links[i]->prog);
>
> It took me some time to understand why it needs to specifically call
> bpf_prog_disassoc_struct_ops here for struct_ops programs. bpf_prog_put
> has not been done yet, and the BPF_PTR_POISON could be set back to NULL. My
> understanding is that BPF_PTR_POISON should stay for the prog's lifetime?

You are right: once BPF_PTR_POISON is set, it cannot be cleared. I will
fix it in v3.

> >               bpf_link_put(st_map->links[i]);
> >               st_map->links[i] = NULL;
> >       }
> > @@ -801,6 +802,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> >                       goto reset_unlock;
> >               }
> >
> > +             /* If the program is reused, prog->aux->st_ops_assoc will be poisoned */
> > +             bpf_prog_assoc_struct_ops(prog, &st_map->map);
> > +
> >               link = kzalloc(sizeof(*link), GFP_USER);
> >               if (!link) {
> >                       bpf_prog_put(prog);
> > @@ -1394,6 +1398,46 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
> >       return err;
> >   }
> >
> > +int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map)
> > +{
> > +     struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
> > +     void *kdata = &st_map->kvalue.data;
> > +     int ret = 0;
> > +
> > +     mutex_lock(&prog->aux->st_ops_assoc_mutex);
> > +
> > +     if (prog->aux->st_ops_assoc && prog->aux->st_ops_assoc != kdata) {
> > +             if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
> > +                     WRITE_ONCE(prog->aux->st_ops_assoc, BPF_PTR_POISON);
> > +
> > +             ret = -EBUSY;
> > +             goto out;
> > +     }
> > +
> > +     WRITE_ONCE(prog->aux->st_ops_assoc, kdata);
> > +out:
> > +     mutex_unlock(&prog->aux->st_ops_assoc_mutex);
> > +     return ret;
> > +}
> > +
> > +void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog)
> > +{
> > +     mutex_lock(&prog->aux->st_ops_assoc_mutex);
>
> Can it check the prog type here and decide if bpf_struct_ops_put needs
> to be called?

I will move map refcount inc and dec into these two helpers.

Thanks for the suggestion

>
> > +     WRITE_ONCE(prog->aux->st_ops_assoc, NULL);
>
>


* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
  2025-10-16 23:51   ` Martin KaFai Lau
@ 2025-10-17  0:19   ` Martin KaFai Lau
  2025-10-17 16:38     ` Amery Hung
  2025-10-17 14:18   ` kernel test robot
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Martin KaFai Lau @ 2025-10-17  0:19 UTC (permalink / raw)
  To: Amery Hung
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	kernel-team, bpf



On 10/16/25 1:45 PM, Amery Hung wrote:
> Each associated program, except the struct_ops programs of the map, will
> take a refcount on the map to pin it so that prog->aux->st_ops_assoc, if
> set, is always valid. However, there is no guarantee that the map members
> are fully updated or that the map is attached. For example, a BPF program
> can be associated with a struct_ops map before map_update. The

Forgot to ask this, should it at least ensure the map is fully updated 
or it does not help in the use case?

> struct_ops implementer will be responsible for maintaining and checking
> the state of the associated struct_ops map before accessing it.



* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
  2025-10-16 23:51   ` Martin KaFai Lau
  2025-10-17  0:19   ` Martin KaFai Lau
@ 2025-10-17 14:18   ` kernel test robot
  2025-10-17 16:03   ` kernel test robot
  2025-10-17 17:05   ` kernel test robot
  4 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2025-10-17 14:18 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: llvm, oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel,
	tj, martin.lau, ameryhung, kernel-team

Hi Amery,

kernel test robot noticed the following build warnings:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Allow-verifier-to-fixup-kernel-module-kfuncs/20251017-044703
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251016204503.3203690-3-ameryhung%40gmail.com
patch subject: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
config: sparc64-defconfig (https://download.01.org/0day-ci/archive/20251017/202510172107.6Yh2tFCb-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251017/202510172107.6Yh2tFCb-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510172107.6Yh2tFCb-lkp@intel.com/

All warnings (new ones prefixed by >>):

   kernel/bpf/core.c:2881:3: error: call to undeclared function 'bpf_struct_ops_put'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    2881 |                 bpf_struct_ops_put(aux->st_ops_assoc);
         |                 ^
   kernel/bpf/core.c:2881:3: note: did you mean 'bpf_struct_ops_find'?
   include/linux/btf.h:538:49: note: 'bpf_struct_ops_find' declared here
     538 | static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id)
         |                                                 ^
   In file included from kernel/bpf/core.c:3240:
   In file included from include/linux/bpf_trace.h:5:
   In file included from include/trace/events/xdp.h:384:
   In file included from include/trace/define_trace.h:132:
   In file included from include/trace/trace_events.h:21:
   In file included from include/linux/trace_events.h:6:
   In file included from include/linux/ring_buffer.h:7:
>> include/linux/poll.h:134:27: warning: division by zero is undefined [-Wdivision-by-zero]
     134 |                 M(RDNORM) | M(RDBAND) | M(WRNORM) | M(WRBAND) |
         |                                         ^~~~~~~~~
   include/linux/poll.h:132:32: note: expanded from macro 'M'
     132 | #define M(X) (__force __poll_t)__MAP(val, POLL##X, (__force __u16)EPOLL##X)
         |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/poll.h:118:51: note: expanded from macro '__MAP'
     118 |         (from < to ? (v & from) * (to/from) : (v & from) / (from/to))
         |                                                          ^ ~~~~~~~~~
   include/linux/poll.h:134:39: warning: division by zero is undefined [-Wdivision-by-zero]
     134 |                 M(RDNORM) | M(RDBAND) | M(WRNORM) | M(WRBAND) |
         |                                                     ^~~~~~~~~
   include/linux/poll.h:132:32: note: expanded from macro 'M'
     132 | #define M(X) (__force __poll_t)__MAP(val, POLL##X, (__force __u16)EPOLL##X)
         |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/poll.h:118:51: note: expanded from macro '__MAP'
     118 |         (from < to ? (v & from) * (to/from) : (v & from) / (from/to))
         |                                                          ^ ~~~~~~~~~
   include/linux/poll.h:135:12: warning: division by zero is undefined [-Wdivision-by-zero]
     135 |                 M(HUP) | M(RDHUP) | M(MSG);
         |                          ^~~~~~~~
   include/linux/poll.h:132:32: note: expanded from macro 'M'
     132 | #define M(X) (__force __poll_t)__MAP(val, POLL##X, (__force __u16)EPOLL##X)
         |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/poll.h:118:51: note: expanded from macro '__MAP'
     118 |         (from < to ? (v & from) * (to/from) : (v & from) / (from/to))
         |                                                          ^ ~~~~~~~~~
   include/linux/poll.h:135:23: warning: division by zero is undefined [-Wdivision-by-zero]
     135 |                 M(HUP) | M(RDHUP) | M(MSG);
         |                                     ^~~~~~
   include/linux/poll.h:132:32: note: expanded from macro 'M'
     132 | #define M(X) (__force __poll_t)__MAP(val, POLL##X, (__force __u16)EPOLL##X)
         |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/poll.h:118:51: note: expanded from macro '__MAP'
     118 |         (from < to ? (v & from) * (to/from) : (v & from) / (from/to))
         |                                                          ^ ~~~~~~~~~
   4 warnings and 1 error generated.


vim +134 include/linux/poll.h

7a163b2195cda0c Al Viro 2018-02-01  129  
7a163b2195cda0c Al Viro 2018-02-01  130  static inline __poll_t demangle_poll(u16 val)
7a163b2195cda0c Al Viro 2018-02-01  131  {
7a163b2195cda0c Al Viro 2018-02-01  132  #define M(X) (__force __poll_t)__MAP(val, POLL##X, (__force __u16)EPOLL##X)
7a163b2195cda0c Al Viro 2018-02-01  133  	return M(IN) | M(OUT) | M(PRI) | M(ERR) | M(NVAL) |
7a163b2195cda0c Al Viro 2018-02-01 @134  		M(RDNORM) | M(RDBAND) | M(WRNORM) | M(WRBAND) |
7a163b2195cda0c Al Viro 2018-02-01  135  		M(HUP) | M(RDHUP) | M(MSG);
7a163b2195cda0c Al Viro 2018-02-01  136  #undef M
7a163b2195cda0c Al Viro 2018-02-01  137  }
7a163b2195cda0c Al Viro 2018-02-01  138  #undef __MAP
7a163b2195cda0c Al Viro 2018-02-01  139  
7a163b2195cda0c Al Viro 2018-02-01  140  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
                     ` (2 preceding siblings ...)
  2025-10-17 14:18   ` kernel test robot
@ 2025-10-17 16:03   ` kernel test robot
  2025-10-17 17:05   ` kernel test robot
  4 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2025-10-17 16:03 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel, tj,
	martin.lau, ameryhung, kernel-team

Hi Amery,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Allow-verifier-to-fixup-kernel-module-kfuncs/20251017-044703
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251016204503.3203690-3-ameryhung%40gmail.com
patch subject: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
config: x86_64-randconfig-161-20251017 (https://download.01.org/0day-ci/archive/20251017/202510172346.Djfrforq-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251017/202510172346.Djfrforq-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510172346.Djfrforq-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/bpf/core.c: In function 'bpf_prog_free_deferred':
>> kernel/bpf/core.c:2881:17: error: implicit declaration of function 'bpf_struct_ops_put'; did you mean 'bpf_struct_ops_find'? [-Wimplicit-function-declaration]
    2881 |                 bpf_struct_ops_put(aux->st_ops_assoc);
         |                 ^~~~~~~~~~~~~~~~~~
         |                 bpf_struct_ops_find


vim +2881 kernel/bpf/core.c

  2863	
  2864	static void bpf_prog_free_deferred(struct work_struct *work)
  2865	{
  2866		struct bpf_prog_aux *aux;
  2867		int i;
  2868	
  2869		aux = container_of(work, struct bpf_prog_aux, work);
  2870	#ifdef CONFIG_BPF_SYSCALL
  2871		bpf_free_kfunc_btf_tab(aux->kfunc_btf_tab);
  2872		bpf_prog_stream_free(aux->prog);
  2873	#endif
  2874	#ifdef CONFIG_CGROUP_BPF
  2875		if (aux->cgroup_atype != CGROUP_BPF_ATTACH_TYPE_INVALID)
  2876			bpf_cgroup_atype_put(aux->cgroup_atype);
  2877	#endif
  2878		bpf_free_used_maps(aux);
  2879		bpf_free_used_btfs(aux);
  2880		if (aux->st_ops_assoc) {
> 2881			bpf_struct_ops_put(aux->st_ops_assoc);
  2882			bpf_prog_disassoc_struct_ops(aux->prog);
  2883		}
  2884		if (bpf_prog_is_dev_bound(aux))
  2885			bpf_prog_dev_bound_destroy(aux->prog);
  2886	#ifdef CONFIG_PERF_EVENTS
  2887		if (aux->prog->has_callchain_buf)
  2888			put_callchain_buffers();
  2889	#endif
  2890		if (aux->dst_trampoline)
  2891			bpf_trampoline_put(aux->dst_trampoline);
  2892		for (i = 0; i < aux->real_func_cnt; i++) {
  2893			/* We can just unlink the subprog poke descriptor table as
  2894			 * it was originally linked to the main program and is also
  2895			 * released along with it.
  2896			 */
  2897			aux->func[i]->aux->poke_tab = NULL;
  2898			bpf_jit_free(aux->func[i]);
  2899		}
  2900		if (aux->real_func_cnt) {
  2901			kfree(aux->func);
  2902			bpf_prog_unlock_free(aux->prog);
  2903		} else {
  2904			bpf_jit_free(aux->prog);
  2905		}
  2906	}
  2907	



* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-17  0:19   ` Martin KaFai Lau
@ 2025-10-17 16:38     ` Amery Hung
  2025-10-17 16:49       ` Amery Hung
  0 siblings, 1 reply; 13+ messages in thread
From: Amery Hung @ 2025-10-17 16:38 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	kernel-team, bpf

On Thu, Oct 16, 2025 at 5:19 PM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
>
>
> On 10/16/25 1:45 PM, Amery Hung wrote:
> > Each associated program, except the struct_ops programs of the map, will
> > take a refcount on the map to pin it so that prog->aux->st_ops_assoc, if
> > set, is always valid. However, there is no guarantee that the map members
> > are fully updated or that the map is attached. For example, a BPF program
> > can be associated with a struct_ops map before map_update. The
>
> Forgot to ask this, should it at least ensure the map is fully updated
> or it does not help in the use case?

It makes sense and is necessary. Originally, I thought we didn't need
to make any promise about the state of the map since struct_ops
implementers have to track the state of the struct_ops themselves
anyway. However, checking state stored in a kdata that may be
incomplete does not look right.

I will only return kdata from bpf_prog_get_assoc_struct_ops() when
kvalue->common.state == READY or INUSE.

If tracking the state in struct_ops kdata is overly complicated for
struct_ops implementers, then we might need to consider changing the
associated struct_ops from map to link.

>
> > struct_ops implementer will be responsible for maintaining and checking
> > the state of the associated struct_ops map before accessing it.
>


* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-17 16:38     ` Amery Hung
@ 2025-10-17 16:49       ` Amery Hung
  0 siblings, 0 replies; 13+ messages in thread
From: Amery Hung @ 2025-10-17 16:49 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: netdev, alexei.starovoitov, andrii, daniel, tj, martin.lau,
	kernel-team, bpf

On Fri, Oct 17, 2025 at 9:38 AM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Thu, Oct 16, 2025 at 5:19 PM Martin KaFai Lau <martin.lau@linux.dev> wrote:
> >
> >
> >
> > On 10/16/25 1:45 PM, Amery Hung wrote:
> > > Each associated program, except the struct_ops programs of the map, will
> > > take a refcount on the map to pin it so that prog->aux->st_ops_assoc, if
> > > set, is always valid. However, there is no guarantee that the map members
> > > are fully updated or that the map is attached. For example, a BPF program
> > > can be associated with a struct_ops map before map_update. The
> >
> > Forgot to ask this, should it at least ensure the map is fully updated
> > or it does not help in the use case?
>
> It makes sense and is necessary. Originally, I thought we don't need
> to make any promise about the state of the map since the struct_ops
> implementers have to track the state of the struct_ops themselves
> anyways. However, checking the state stored in kdata that may be
> incomplete does not look right.
>
> I will only return kdata from bpf_prog_get_assoc_struct_ops () when
> kvalue->common.state == READY or INUSE.

It should be kvalue->common.state != INIT to make it consistent across
legacy and link-based attachment.

>
> If tracking the state in struct_ops kdata is overly complicated for
> struct_ops implementers, then we might need to consider changing the
> associated struct_ops from map to link.
>
> >
> > > struct_ops implementer will be responsible for maintaining and checking
> > > the state of the associated struct_ops map before accessing it.
> >


* Re: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
  2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
                     ` (3 preceding siblings ...)
  2025-10-17 16:03   ` kernel test robot
@ 2025-10-17 17:05   ` kernel test robot
  4 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2025-10-17 17:05 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: llvm, oe-kbuild-all, netdev, alexei.starovoitov, andrii, daniel,
	tj, martin.lau, ameryhung, kernel-team

Hi Amery,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Amery-Hung/bpf-Allow-verifier-to-fixup-kernel-module-kfuncs/20251017-044703
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251016204503.3203690-3-ameryhung%40gmail.com
patch subject: [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20251018/202510180007.IYugtu6G-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251018/202510180007.IYugtu6G-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510180007.IYugtu6G-lkp@intel.com/

All errors (new ones prefixed by >>):

>> kernel/bpf/core.c:2881:3: error: call to undeclared function 'bpf_struct_ops_put'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    2881 |                 bpf_struct_ops_put(aux->st_ops_assoc);
         |                 ^
   kernel/bpf/core.c:2881:3: note: did you mean 'bpf_struct_ops_find'?
   include/linux/btf.h:538:49: note: 'bpf_struct_ops_find' declared here
     538 | static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id)
         |                                                 ^
   1 error generated.


vim +/bpf_struct_ops_put +2881 kernel/bpf/core.c

  2863	
  2864	static void bpf_prog_free_deferred(struct work_struct *work)
  2865	{
  2866		struct bpf_prog_aux *aux;
  2867		int i;
  2868	
  2869		aux = container_of(work, struct bpf_prog_aux, work);
  2870	#ifdef CONFIG_BPF_SYSCALL
  2871		bpf_free_kfunc_btf_tab(aux->kfunc_btf_tab);
  2872		bpf_prog_stream_free(aux->prog);
  2873	#endif
  2874	#ifdef CONFIG_CGROUP_BPF
  2875		if (aux->cgroup_atype != CGROUP_BPF_ATTACH_TYPE_INVALID)
  2876			bpf_cgroup_atype_put(aux->cgroup_atype);
  2877	#endif
  2878		bpf_free_used_maps(aux);
  2879		bpf_free_used_btfs(aux);
  2880		if (aux->st_ops_assoc) {
> 2881			bpf_struct_ops_put(aux->st_ops_assoc);
  2882			bpf_prog_disassoc_struct_ops(aux->prog);
  2883		}
  2884		if (bpf_prog_is_dev_bound(aux))
  2885			bpf_prog_dev_bound_destroy(aux->prog);
  2886	#ifdef CONFIG_PERF_EVENTS
  2887		if (aux->prog->has_callchain_buf)
  2888			put_callchain_buffers();
  2889	#endif
  2890		if (aux->dst_trampoline)
  2891			bpf_trampoline_put(aux->dst_trampoline);
  2892		for (i = 0; i < aux->real_func_cnt; i++) {
  2893			/* We can just unlink the subprog poke descriptor table as
  2894			 * it was originally linked to the main program and is also
  2895			 * released along with it.
  2896			 */
  2897			aux->func[i]->aux->poke_tab = NULL;
  2898			bpf_jit_free(aux->func[i]);
  2899		}
  2900		if (aux->real_func_cnt) {
  2901			kfree(aux->func);
  2902			bpf_prog_unlock_free(aux->prog);
  2903		} else {
  2904			bpf_jit_free(aux->prog);
  2905		}
  2906	}
  2907	



end of thread, other threads:[~2025-10-17 17:06 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-16 20:44 [PATCH v2 bpf-next 0/4] Support associating BPF programs with struct_ops Amery Hung
2025-10-16 20:45 ` [PATCH v2 bpf-next 1/4] bpf: Allow verifier to fixup kernel module kfuncs Amery Hung
2025-10-16 20:45 ` [PATCH v2 bpf-next 2/4] bpf: Support associating BPF program with struct_ops Amery Hung
2025-10-16 23:51   ` Martin KaFai Lau
2025-10-16 23:58     ` Amery Hung
2025-10-17  0:19   ` Martin KaFai Lau
2025-10-17 16:38     ` Amery Hung
2025-10-17 16:49       ` Amery Hung
2025-10-17 14:18   ` kernel test robot
2025-10-17 16:03   ` kernel test robot
2025-10-17 17:05   ` kernel test robot
2025-10-16 20:45 ` [PATCH v2 bpf-next 3/4] libbpf: Add bpf_prog_assoc_struct_ops() API Amery Hung
2025-10-16 20:45 ` [PATCH v2 bpf-next 4/4] selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command Amery Hung
