public inbox for linux-kernel@vger.kernel.org
* [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS
@ 2026-01-16 20:16 Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 01/13] bpf: Refactor btf_kfunc_id_set_contains Ihor Solodrai
                   ` (13 more replies)
  0 siblings, 14 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

This series implements a generic "implicit arguments" feature for BPF
kernel functions. For context see prior work [1][2].

A mechanism is created for kfuncs to have arguments that are not
visible to BPF programs and are instead supplied to the kernel
function implementation by the verifier.

This mechanism is then applied to kfuncs that have a parameter with
the __prog annotation [3], which is the current way of passing a
struct bpf_prog_aux pointer to kfuncs.

A function with implicit arguments is marked with the
KF_IMPLICIT_ARGS flag in a BTF_ID_FLAGS set. In this series, only a
pointer to struct bpf_prog_aux can be implicit, although it is simple
to extend this to more types.

The verifier handles a kfunc with KF_IMPLICIT_ARGS by resolving it to
a different (actual) BTF prototype early in verification (patch #3).

A <kfunc>_impl function generated in BTF for a kfunc with implicit
args does not have a "bpf_kfunc" decl tag or a kernel address. The
verifier will reject a program trying to call such an _impl kfunc.

Calling <kfunc>_impl functions from BPF is only allowed for kfuncs
with an explicit kernel (or kmodule) declaration, that is, in
"legacy" cases. As of this series there are no legacy kernel
functions, since all __prog users are migrated to KF_IMPLICIT_ARGS.
However, the implementation supports legacy cases in principle.

The series consists of the following patches:
  - patches #1 and #2 are non-functional refactoring in kernel/bpf
  - patch #3 defines KF_IMPLICIT_ARGS flag and teaches the verifier
    about it
  - patches #4-#5 implement btf2btf transformation in resolve_btfids
  - patch #6 adds selftests specific to KF_IMPLICIT_ARGS feature
  - patches #7-#11 migrate the current users of __prog argument to
    KF_IMPLICIT_ARGS
  - patch #12 removes __prog arg suffix support from the kernel
  - patch #13 updates the docs

[1] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
[2] https://lore.kernel.org/bpf/20250924211716.1287715-1-ihor.solodrai@linux.dev/
[3] https://docs.kernel.org/bpf/kfuncs.html#prog-annotation

---

v1->v2:
  - Replace the following kernel functions with KF_IMPLICIT_ARGS version:
    - bpf_stream_vprintk_impl -> bpf_stream_vprintk
    - bpf_task_work_schedule_resume_impl -> bpf_task_work_schedule_resume
    - bpf_task_work_schedule_signal_impl -> bpf_task_work_schedule_signal
    - bpf_wq_set_callback_impl -> bpf_wq_set_callback
  - Remove __prog arg suffix support from the verifier
  - Rework btf2btf implementation in resolve_btfids
    - Do distill base and sort before BTF_ids patching
    - Collect kfuncs based on BTF decl tags, before BTF_ids are patched
  - resolve_btfids: use dynamic memory for intermediate data (Andrii)
  - verifier: reset .subreg_def for caller saved registers on kfunc
    call (Eduard)
  - selftests/hid: remove Makefile changes (Benjamin)
  - selftests/bpf: Add a patch (#11) migrating struct_ops_assoc test
    to KF_IMPLICIT_ARGS
  - Various nits across the series (Alexei, Andrii, Eduard)

v1: https://lore.kernel.org/bpf/20260109184852.1089786-1-ihor.solodrai@linux.dev/

---

Ihor Solodrai (13):
  bpf: Refactor btf_kfunc_id_set_contains
  bpf: Introduce struct bpf_kfunc_meta
  bpf: Verifier support for KF_IMPLICIT_ARGS
  resolve_btfids: Introduce finalize_btf() step
  resolve_btfids: Support for KF_IMPLICIT_ARGS
  selftests/bpf: Add tests for KF_IMPLICIT_ARGS
  bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS
  HID: Use bpf_wq_set_callback kernel function
  bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS
  bpf: Migrate bpf_stream_vprintk() to KF_IMPLICIT_ARGS
  selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
  bpf: Remove __prog kfunc arg annotation
  bpf,docs: Document KF_IMPLICIT_ARGS flag

 Documentation/bpf/kfuncs.rst                  |  49 +-
 drivers/hid/bpf/progs/hid_bpf_helpers.h       |   8 +-
 include/linux/btf.h                           |   5 +-
 kernel/bpf/btf.c                              |  70 ++-
 kernel/bpf/helpers.c                          |  43 +-
 kernel/bpf/stream.c                           |   5 +-
 kernel/bpf/verifier.c                         | 245 ++++++----
 tools/bpf/resolve_btfids/main.c               | 452 +++++++++++++++++-
 tools/lib/bpf/bpf_helpers.h                   |   6 +-
 .../testing/selftests/bpf/bpf_experimental.h  |   5 -
 .../bpf/prog_tests/kfunc_implicit_args.c      |  10 +
 .../testing/selftests/bpf/progs/file_reader.c |   4 +-
 .../selftests/bpf/progs/kfunc_implicit_args.c |  41 ++
 .../testing/selftests/bpf/progs/stream_fail.c |   6 +-
 .../selftests/bpf/progs/struct_ops_assoc.c    |   8 +-
 .../bpf/progs/struct_ops_assoc_in_timer.c     |   4 +-
 .../bpf/progs/struct_ops_assoc_reuse.c        |   6 +-
 tools/testing/selftests/bpf/progs/task_work.c |  11 +-
 .../selftests/bpf/progs/task_work_fail.c      |  16 +-
 .../selftests/bpf/progs/task_work_stress.c    |   5 +-
 .../bpf/progs/verifier_async_cb_context.c     |  10 +-
 .../testing/selftests/bpf/progs/wq_failures.c |   4 +-
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  35 +-
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   6 +-
 .../selftests/hid/progs/hid_bpf_helpers.h     |   8 +-
 25 files changed, 838 insertions(+), 224 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
 create mode 100644 tools/testing/selftests/bpf/progs/kfunc_implicit_args.c

-- 
2.52.0


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 01/13] bpf: Refactor btf_kfunc_id_set_contains
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 02/13] bpf: Introduce struct bpf_kfunc_meta Ihor Solodrai
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

btf_kfunc_id_set_contains() is called by fetch_kfunc_meta() in the BPF
verifier to get the kfunc flags stored in the .BTF_ids ELF section.
If it returns NULL instead of a valid pointer, this is interpreted as
illegal kfunc usage, failing the verification.

There are two potential reasons for btf_kfunc_id_set_contains() to
return NULL:

  1. Provided kfunc BTF id is not present in relevant kfunc id sets.
  2. The kfunc is not allowed, as determined by the program type
     specific filter [1].

The filter functions accept a pointer to `struct bpf_prog`, so they
might implicitly depend on earlier stages of verification, during
which bpf_prog members are set.

For example, bpf_qdisc_kfunc_filter() in linux/net/sched/bpf_qdisc.c
inspects prog->aux->st_ops [2], which is initialized in:

    check_attach_btf_id() -> check_struct_ops_btf_id()

So far this hasn't been an issue, because fetch_kfunc_meta() is the
only caller of btf_kfunc_id_set_contains().

However, subsequent patches in this series need to inspect kfunc
flags earlier in the BPF verifier, in add_kfunc_call().

To resolve this, refactor btf_kfunc_id_set_contains() into two
interface functions:
  * btf_kfunc_flags() that simply returns a pointer to kfunc_flags
    without applying the filters
  * btf_kfunc_is_allowed() that both checks for kfunc_flags existence
    (which is a requirement for a kfunc to be allowed) and applies the
    prog filters

See [3] for the previous version of this patch.

[1] https://lore.kernel.org/all/20230519225157.760788-7-aditi.ghag@isovalent.com/
[2] https://lore.kernel.org/all/20250409214606.2000194-4-ameryhung@gmail.com/
[3] https://lore.kernel.org/bpf/20251029190113.3323406-3-ihor.solodrai@linux.dev/

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
---
 include/linux/btf.h   |  4 +--
 kernel/bpf/btf.c      | 70 ++++++++++++++++++++++++++++++++-----------
 kernel/bpf/verifier.c |  6 ++--
 3 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 78dc79810c7d..a2f4f383f5b6 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -575,8 +575,8 @@ const char *btf_name_by_offset(const struct btf *btf, u32 offset);
 const char *btf_str_by_offset(const struct btf *btf, u32 offset);
 struct btf *btf_parse_vmlinux(void);
 struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
-u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
-			       const struct bpf_prog *prog);
+u32 *btf_kfunc_flags(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog);
+bool btf_kfunc_is_allowed(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog);
 u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
 				const struct bpf_prog *prog);
 int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 364dd84bfc5a..d10b3404260f 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -8757,24 +8757,17 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
 	return ret;
 }
 
-static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
-					enum btf_kfunc_hook hook,
-					u32 kfunc_btf_id,
-					const struct bpf_prog *prog)
+static u32 *btf_kfunc_id_set_contains(const struct btf *btf,
+				      enum btf_kfunc_hook hook,
+				      u32 kfunc_btf_id)
 {
-	struct btf_kfunc_hook_filter *hook_filter;
 	struct btf_id_set8 *set;
-	u32 *id, i;
+	u32 *id;
 
 	if (hook >= BTF_KFUNC_HOOK_MAX)
 		return NULL;
 	if (!btf->kfunc_set_tab)
 		return NULL;
-	hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
-	for (i = 0; i < hook_filter->nr_filters; i++) {
-		if (hook_filter->filters[i](prog, kfunc_btf_id))
-			return NULL;
-	}
 	set = btf->kfunc_set_tab->sets[hook];
 	if (!set)
 		return NULL;
@@ -8785,6 +8778,28 @@ static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
 	return id + 1;
 }
 
+static bool __btf_kfunc_is_allowed(const struct btf *btf,
+				   enum btf_kfunc_hook hook,
+				   u32 kfunc_btf_id,
+				   const struct bpf_prog *prog)
+{
+	struct btf_kfunc_hook_filter *hook_filter;
+	int i;
+
+	if (hook >= BTF_KFUNC_HOOK_MAX)
+		return false;
+	if (!btf->kfunc_set_tab)
+		return false;
+
+	hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
+	for (i = 0; i < hook_filter->nr_filters; i++) {
+		if (hook_filter->filters[i](prog, kfunc_btf_id))
+			return false;
+	}
+
+	return true;
+}
+
 static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 {
 	switch (prog_type) {
@@ -8832,6 +8847,26 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 	}
 }
 
+bool btf_kfunc_is_allowed(const struct btf *btf,
+			  u32 kfunc_btf_id,
+			  const struct bpf_prog *prog)
+{
+	enum bpf_prog_type prog_type = resolve_prog_type(prog);
+	enum btf_kfunc_hook hook;
+	u32 *kfunc_flags;
+
+	kfunc_flags = btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
+	if (kfunc_flags && __btf_kfunc_is_allowed(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id, prog))
+		return true;
+
+	hook = bpf_prog_type_to_kfunc_hook(prog_type);
+	kfunc_flags = btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
+	if (kfunc_flags && __btf_kfunc_is_allowed(btf, hook, kfunc_btf_id, prog))
+		return true;
+
+	return false;
+}
+
 /* Caution:
  * Reference to the module (obtained using btf_try_get_module) corresponding to
  * the struct btf *MUST* be held when calling this function from verifier
@@ -8839,26 +8874,27 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
  * keeping the reference for the duration of the call provides the necessary
  * protection for looking up a well-formed btf->kfunc_set_tab.
  */
-u32 *btf_kfunc_id_set_contains(const struct btf *btf,
-			       u32 kfunc_btf_id,
-			       const struct bpf_prog *prog)
+u32 *btf_kfunc_flags(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog)
 {
 	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	enum btf_kfunc_hook hook;
 	u32 *kfunc_flags;
 
-	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id, prog);
+	kfunc_flags = btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
 	if (kfunc_flags)
 		return kfunc_flags;
 
 	hook = bpf_prog_type_to_kfunc_hook(prog_type);
-	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id, prog);
+	return btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
 }
 
 u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
 				const struct bpf_prog *prog)
 {
-	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id, prog);
+	if (!__btf_kfunc_is_allowed(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id, prog))
+		return NULL;
+
+	return btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id);
 }
 
 static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9de0ec0c3ed9..bd9bd797f5a0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13723,10 +13723,10 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
 		*kfunc_name = func_name;
 	func_proto = btf_type_by_id(desc_btf, func->type);
 
-	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, func_id, env->prog);
-	if (!kfunc_flags) {
+	if (!btf_kfunc_is_allowed(desc_btf, func_id, env->prog))
 		return -EACCES;
-	}
+
+	kfunc_flags = btf_kfunc_flags(desc_btf, func_id, env->prog);
 
 	memset(meta, 0, sizeof(*meta));
 	meta->btf = desc_btf;
-- 
2.52.0



* [PATCH bpf-next v2 02/13] bpf: Introduce struct bpf_kfunc_meta
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 01/13] bpf: Refactor btf_kfunc_id_set_contains Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

There is code duplication between add_kfunc_call() and
fetch_kfunc_meta() when collecting information about a kfunc from BTF.

Introduce struct bpf_kfunc_meta to hold common kfunc BTF data and
implement fetch_kfunc_meta() to fill it in, instead of struct
bpf_kfunc_call_arg_meta directly.

Then use these in the add_kfunc_call() and (new)
fetch_kfunc_arg_meta() functions, and fix up previous callers of
fetch_kfunc_meta() to use fetch_kfunc_arg_meta().

Besides the code dedup, this change enables add_kfunc_call() to access
kfunc->flags.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/verifier.c | 156 ++++++++++++++++++++++++------------------
 1 file changed, 91 insertions(+), 65 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bd9bd797f5a0..5c76fd97bf7c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -294,6 +294,14 @@ struct bpf_call_arg_meta {
 	s64 const_map_key;
 };
 
+struct bpf_kfunc_meta {
+	struct btf *btf;
+	const struct btf_type *proto;
+	const char *name;
+	const u32 *flags;
+	s32 id;
+};
+
 struct bpf_kfunc_call_arg_meta {
 	/* In parameters */
 	struct btf *btf;
@@ -3263,16 +3271,68 @@ static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env, s16 offset)
 	return btf_vmlinux ?: ERR_PTR(-ENOENT);
 }
 
-static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
+static int fetch_kfunc_meta(struct bpf_verifier_env *env,
+			    s32 func_id,
+			    s16 offset,
+			    struct bpf_kfunc_meta *kfunc)
 {
 	const struct btf_type *func, *func_proto;
+	const char *func_name;
+	u32 *kfunc_flags;
+	struct btf *btf;
+
+	if (func_id <= 0) {
+		verbose(env, "invalid kernel function btf_id %d\n", func_id);
+		return -EINVAL;
+	}
+
+	btf = find_kfunc_desc_btf(env, offset);
+	if (IS_ERR(btf)) {
+		verbose(env, "failed to find BTF for kernel function\n");
+		return PTR_ERR(btf);
+	}
+
+	/*
+	 * Note that kfunc_flags may be NULL at this point, which
+	 * means that we couldn't find func_id in any relevant
+	 * kfunc_id_set. This most likely indicates an invalid kfunc
+	 * call.  However we don't fail with an error here,
+	 * and let the caller decide what to do with NULL kfunc->flags.
+	 */
+	kfunc_flags = btf_kfunc_flags(btf, func_id, env->prog);
+
+	func = btf_type_by_id(btf, func_id);
+	if (!func || !btf_type_is_func(func)) {
+		verbose(env, "kernel btf_id %d is not a function\n", func_id);
+		return -EINVAL;
+	}
+
+	func_name = btf_name_by_offset(btf, func->name_off);
+	func_proto = btf_type_by_id(btf, func->type);
+	if (!func_proto || !btf_type_is_func_proto(func_proto)) {
+		verbose(env, "kernel function btf_id %d does not have a valid func_proto\n",
+			func_id);
+		return -EINVAL;
+	}
+
+	memset(kfunc, 0, sizeof(*kfunc));
+	kfunc->btf = btf;
+	kfunc->id = func_id;
+	kfunc->name = func_name;
+	kfunc->proto = func_proto;
+	kfunc->flags = kfunc_flags;
+
+	return 0;
+}
+
+static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
+{
 	struct bpf_kfunc_btf_tab *btf_tab;
 	struct btf_func_model func_model;
 	struct bpf_kfunc_desc_tab *tab;
 	struct bpf_prog_aux *prog_aux;
+	struct bpf_kfunc_meta kfunc;
 	struct bpf_kfunc_desc *desc;
-	const char *func_name;
-	struct btf *desc_btf;
 	unsigned long addr;
 	int err;
 
@@ -3322,12 +3382,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		prog_aux->kfunc_btf_tab = btf_tab;
 	}
 
-	desc_btf = find_kfunc_desc_btf(env, offset);
-	if (IS_ERR(desc_btf)) {
-		verbose(env, "failed to find BTF for kernel function\n");
-		return PTR_ERR(desc_btf);
-	}
-
 	if (find_kfunc_desc(env->prog, func_id, offset))
 		return 0;
 
@@ -3336,24 +3390,13 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -E2BIG;
 	}
 
-	func = btf_type_by_id(desc_btf, func_id);
-	if (!func || !btf_type_is_func(func)) {
-		verbose(env, "kernel btf_id %u is not a function\n",
-			func_id);
-		return -EINVAL;
-	}
-	func_proto = btf_type_by_id(desc_btf, func->type);
-	if (!func_proto || !btf_type_is_func_proto(func_proto)) {
-		verbose(env, "kernel function btf_id %u does not have a valid func_proto\n",
-			func_id);
-		return -EINVAL;
-	}
+	err = fetch_kfunc_meta(env, func_id, offset, &kfunc);
+	if (err)
+		return err;
 
-	func_name = btf_name_by_offset(desc_btf, func->name_off);
-	addr = kallsyms_lookup_name(func_name);
+	addr = kallsyms_lookup_name(kfunc.name);
 	if (!addr) {
-		verbose(env, "cannot find address for kernel function %s\n",
-			func_name);
+		verbose(env, "cannot find address for kernel function %s\n", kfunc.name);
 		return -EINVAL;
 	}
 
@@ -3363,9 +3406,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 			return err;
 	}
 
-	err = btf_distill_func_proto(&env->log, desc_btf,
-				     func_proto, func_name,
-				     &func_model);
+	err = btf_distill_func_proto(&env->log, kfunc.btf, kfunc.proto, kfunc.name, &func_model);
 	if (err)
 		return err;
 
@@ -13696,44 +13737,28 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 	return 0;
 }
 
-static int fetch_kfunc_meta(struct bpf_verifier_env *env,
-			    struct bpf_insn *insn,
-			    struct bpf_kfunc_call_arg_meta *meta,
-			    const char **kfunc_name)
+static int fetch_kfunc_arg_meta(struct bpf_verifier_env *env,
+				s32 func_id,
+				s16 offset,
+				struct bpf_kfunc_call_arg_meta *meta)
 {
-	const struct btf_type *func, *func_proto;
-	u32 func_id, *kfunc_flags;
-	const char *func_name;
-	struct btf *desc_btf;
-
-	if (kfunc_name)
-		*kfunc_name = NULL;
-
-	if (!insn->imm)
-		return -EINVAL;
+	struct bpf_kfunc_meta kfunc;
+	int err;
 
-	desc_btf = find_kfunc_desc_btf(env, insn->off);
-	if (IS_ERR(desc_btf))
-		return PTR_ERR(desc_btf);
+	err = fetch_kfunc_meta(env, func_id, offset, &kfunc);
+	if (err)
+		return err;
 
-	func_id = insn->imm;
-	func = btf_type_by_id(desc_btf, func_id);
-	func_name = btf_name_by_offset(desc_btf, func->name_off);
-	if (kfunc_name)
-		*kfunc_name = func_name;
-	func_proto = btf_type_by_id(desc_btf, func->type);
+	memset(meta, 0, sizeof(*meta));
+	meta->btf = kfunc.btf;
+	meta->func_id = kfunc.id;
+	meta->func_proto = kfunc.proto;
+	meta->func_name = kfunc.name;
 
-	if (!btf_kfunc_is_allowed(desc_btf, func_id, env->prog))
+	if (!kfunc.flags || !btf_kfunc_is_allowed(kfunc.btf, kfunc.id, env->prog))
 		return -EACCES;
 
-	kfunc_flags = btf_kfunc_flags(desc_btf, func_id, env->prog);
-
-	memset(meta, 0, sizeof(*meta));
-	meta->btf = desc_btf;
-	meta->func_id = func_id;
-	meta->kfunc_flags = *kfunc_flags;
-	meta->func_proto = func_proto;
-	meta->func_name = func_name;
+	meta->kfunc_flags = *kfunc.flags;
 
 	return 0;
 }
@@ -13938,12 +13963,13 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	if (!insn->imm)
 		return 0;
 
-	err = fetch_kfunc_meta(env, insn, &meta, &func_name);
-	if (err == -EACCES && func_name)
-		verbose(env, "calling kernel function %s is not allowed\n", func_name);
+	err = fetch_kfunc_arg_meta(env, insn->imm, insn->off, &meta);
+	if (err == -EACCES && meta.func_name)
+		verbose(env, "calling kernel function %s is not allowed\n", meta.func_name);
 	if (err)
 		return err;
 	desc_btf = meta.btf;
+	func_name = meta.func_name;
 	insn_aux = &env->insn_aux_data[insn_idx];
 
 	insn_aux->is_iter_next = is_iter_next_kfunc(&meta);
@@ -17783,7 +17809,7 @@ static bool get_call_summary(struct bpf_verifier_env *env, struct bpf_insn *call
 	if (bpf_pseudo_kfunc_call(call)) {
 		int err;
 
-		err = fetch_kfunc_meta(env, call, &meta, NULL);
+		err = fetch_kfunc_arg_meta(env, call->imm, call->off, &meta);
 		if (err < 0)
 			/* error would be reported later */
 			return false;
@@ -18291,7 +18317,7 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		} else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 			struct bpf_kfunc_call_arg_meta meta;
 
-			ret = fetch_kfunc_meta(env, insn, &meta, NULL);
+			ret = fetch_kfunc_arg_meta(env, insn->imm, insn->off, &meta);
 			if (ret == 0 && is_iter_next_kfunc(&meta)) {
 				mark_prune_point(env, t);
 				/* Checking and saving state checkpoints at iter_next() call
-- 
2.52.0



* [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 01/13] bpf: Refactor btf_kfunc_id_set_contains Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 02/13] bpf: Introduce struct bpf_kfunc_meta Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  0:03   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step Ihor Solodrai
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

A kernel function bpf_foo marked with KF_IMPLICIT_ARGS flag is
expected to have two associated types in BTF:
  * `bpf_foo` with a function prototype that omits implicit arguments
  * `bpf_foo_impl` with a function prototype that matches the kernel
     declaration of `bpf_foo`, but doesn't have a ksym associated with
     its name

In order to support kfuncs with implicit arguments, the verifier has
to know how to resolve a call of `bpf_foo` to the correct BTF function
prototype and address.

To implement this, kfunc flags are checked for KF_IMPLICIT_ARGS in
add_kfunc_call(). For such kfuncs the BTF func prototype is replaced
with the one found in BTF for the `bpf_foo_impl` function (func_name
plus the "_impl" suffix, by convention).

This effectively changes the signature of the `bpf_foo` kfunc in the
context of verification: from one without implicit args to the one
with full argument list.

The values of implicit arguments by design are provided by the
verifier, and so they can only be of particular types. In this patch
the only allowed implicit arg type is a pointer to struct
bpf_prog_aux.

In order for the verifier to correctly set an implicit bpf_prog_aux
arg value, is_kfunc_arg_prog() is extended to check for the arg
type. By the time the prog arg is determined in check_kfunc_args(),
a kfunc with implicit args already has a prototype with the full
argument list, so the existing value patching mechanism just works.

If a new kfunc with KF_IMPLICIT_ARGS is declared for an existing
kfunc that uses a __prog argument (a legacy case), the prototype
substitution works in exactly the same way, assuming the kfunc
follows the _impl naming convention. The only difference is in how
the _impl prototype is added to the BTF, which is not the verifier's
concern. See a subsequent resolve_btfids patch for details.

The __prog suffix is still supported at this point, but will be
removed in a subsequent patch, after the current users are migrated
to KF_IMPLICIT_ARGS.

Introduction of KF_IMPLICIT_ARGS revealed an issue with
zero-extension tracking: the explicit rX = 0 in place of the
verifier-supplied argument is now absent when the arg is implicit
(the BPF prog no longer passes a dummy NULL). To mitigate this,
reset the subreg_def of all caller-saved registers in
check_kfunc_call() [1].

[1] https://lore.kernel.org/bpf/b4a760ef828d40dac7ea6074d39452bb0dc82caa.camel@gmail.com/

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 include/linux/btf.h   |  1 +
 kernel/bpf/verifier.c | 58 ++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index a2f4f383f5b6..48108471c5b1 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -78,6 +78,7 @@
 #define KF_ARENA_RET    (1 << 13) /* kfunc returns an arena pointer */
 #define KF_ARENA_ARG1   (1 << 14) /* kfunc takes an arena pointer as its first argument */
 #define KF_ARENA_ARG2   (1 << 15) /* kfunc takes an arena pointer as its second argument */
+#define KF_IMPLICIT_ARGS (1 << 16) /* kfunc has implicit arguments supplied by the verifier */
 
 /*
  * Tag marking a kernel function as a kfunc. This is meant to minimize the
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5c76fd97bf7c..c6bb8a098fc5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3271,6 +3271,34 @@ static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env, s16 offset)
 	return btf_vmlinux ?: ERR_PTR(-ENOENT);
 }
 
+#define KF_IMPL_SUFFIX "_impl"
+
+static const struct btf_type *find_kfunc_impl_proto(struct bpf_verifier_env *env,
+						    struct btf *btf,
+						    const char *func_name)
+{
+	char *buf = env->tmp_str_buf;
+	const struct btf_type *func;
+	s32 impl_id;
+	int len;
+
+	len = snprintf(buf, TMP_STR_BUF_LEN, "%s%s", func_name, KF_IMPL_SUFFIX);
+	if (len < 0 || len >= TMP_STR_BUF_LEN) {
+		verbose(env, "function name %s%s is too long\n", func_name, KF_IMPL_SUFFIX);
+		return NULL;
+	}
+
+	impl_id = btf_find_by_name_kind(btf, buf, BTF_KIND_FUNC);
+	if (impl_id <= 0) {
+		verbose(env, "cannot find function %s in BTF\n", buf);
+		return NULL;
+	}
+
+	func = btf_type_by_id(btf, impl_id);
+
+	return btf_type_by_id(btf, func->type);
+}
+
 static int fetch_kfunc_meta(struct bpf_verifier_env *env,
 			    s32 func_id,
 			    s16 offset,
@@ -3308,7 +3336,16 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
 	}
 
 	func_name = btf_name_by_offset(btf, func->name_off);
-	func_proto = btf_type_by_id(btf, func->type);
+
+	/*
+	 * An actual prototype of a kfunc with KF_IMPLICIT_ARGS flag
+	 * can be found through the counterpart _impl kfunc.
+	 */
+	if (kfunc_flags && (*kfunc_flags & KF_IMPLICIT_ARGS))
+		func_proto = find_kfunc_impl_proto(env, btf, func_name);
+	else
+		func_proto = btf_type_by_id(btf, func->type);
+
 	if (!func_proto || !btf_type_is_func_proto(func_proto)) {
 		verbose(env, "kernel function btf_id %d does not have a valid func_proto\n",
 			func_id);
@@ -12174,9 +12211,11 @@ static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param
 	return btf_param_match_suffix(btf, arg, "__irq_flag");
 }
 
+static bool is_kfunc_arg_prog_aux(const struct btf *btf, const struct btf_param *arg);
+
 static bool is_kfunc_arg_prog(const struct btf *btf, const struct btf_param *arg)
 {
-	return btf_param_match_suffix(btf, arg, "__prog");
+	return btf_param_match_suffix(btf, arg, "__prog") || is_kfunc_arg_prog_aux(btf, arg);
 }
 
 static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
@@ -12207,6 +12246,7 @@ enum {
 	KF_ARG_WORKQUEUE_ID,
 	KF_ARG_RES_SPIN_LOCK_ID,
 	KF_ARG_TASK_WORK_ID,
+	KF_ARG_PROG_AUX_ID
 };
 
 BTF_ID_LIST(kf_arg_btf_ids)
@@ -12218,6 +12258,7 @@ BTF_ID(struct, bpf_rb_node)
 BTF_ID(struct, bpf_wq)
 BTF_ID(struct, bpf_res_spin_lock)
 BTF_ID(struct, bpf_task_work)
+BTF_ID(struct, bpf_prog_aux)
 
 static bool __is_kfunc_ptr_arg_type(const struct btf *btf,
 				    const struct btf_param *arg, int type)
@@ -12298,6 +12339,11 @@ static bool is_kfunc_arg_callback(struct bpf_verifier_env *env, const struct btf
 	return true;
 }
 
+static bool is_kfunc_arg_prog_aux(const struct btf *btf, const struct btf_param *arg)
+{
+	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_PROG_AUX_ID);
+}
+
 /* Returns true if struct is composed of scalars, 4 levels of nesting allowed */
 static bool __btf_type_is_scalar_struct(struct bpf_verifier_env *env,
 					const struct btf *btf,
@@ -14177,8 +14223,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
-	for (i = 0; i < CALLER_SAVED_REGS; i++)
-		mark_reg_not_init(env, regs, caller_saved[i]);
+	for (i = 0; i < CALLER_SAVED_REGS; i++) {
+		u32 regno = caller_saved[i];
+
+		mark_reg_not_init(env, regs, regno);
+		regs[regno].subreg_def = DEF_NOT_SUBREG;
+	}
 
 	/* Check return type */
 	t = btf_type_skip_modifiers(desc_btf, meta.func_proto->type, NULL);
-- 
2.52.0



* [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (2 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  0:13   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

As of recent changes [1][2], resolve_btfids performs final adjustments
to the kernel/module BTF before it is embedded into the target binary.

To keep the implementation simple, a clear and stable "pipeline"
describing how BTF data flows through resolve_btfids is helpful. Some
BTF modifications may change type ids, so it is also important to
maintain the correct order of operations with respect to .BTF_ids
resolution.

This patch refactors the BTF handling to establish the following
sequence:
  - load target ELF sections
  - load .BTF_ids symbols
    - this will be a dependency of btf2btf transformations in
      subsequent patches
  - load BTF and its base as is
  - (*) btf2btf transformations will happen here
  - finalize_btf(), introduced in this patch
    - distills the base and sorts the BTF
  - resolve and patch .BTF_ids

This approach avoids fixups of .BTF_ids data when ids change at any
point of BTF processing, because symbol resolution happens on the
finalized, ready-to-dump BTF data.

It also gives flexibility to the BTF transformations, because they
happen on BTF that is not yet distilled and/or sorted, allowing types
to be freely added, removed and modified.

[1] https://lore.kernel.org/bpf/20251219181321.1283664-1-ihor.solodrai@linux.dev/
[2] https://lore.kernel.org/bpf/20260109130003.3313716-1-dolinux.peng@gmail.com/

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 tools/bpf/resolve_btfids/main.c | 69 +++++++++++++++++++++++----------
 1 file changed, 48 insertions(+), 21 deletions(-)

diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index 343d08050116..1fcf37af6764 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -563,19 +563,6 @@ static int load_btf(struct object *obj)
 	obj->base_btf = base_btf;
 	obj->btf = btf;
 
-	if (obj->base_btf && obj->distill_base) {
-		err = btf__distill_base(obj->btf, &base_btf, &btf);
-		if (err) {
-			pr_err("FAILED to distill base BTF: %s\n", strerror(errno));
-			goto out_err;
-		}
-
-		btf__free(obj->base_btf);
-		btf__free(obj->btf);
-		obj->base_btf = base_btf;
-		obj->btf = btf;
-	}
-
 	return 0;
 
 out_err:
@@ -911,6 +898,41 @@ static int sort_btf_by_name(struct btf *btf)
 	return err;
 }
 
+static int finalize_btf(struct object *obj)
+{
+	struct btf *base_btf = obj->base_btf, *btf = obj->btf;
+	int err;
+
+	if (obj->base_btf && obj->distill_base) {
+		err = btf__distill_base(obj->btf, &base_btf, &btf);
+		if (err) {
+			pr_err("FAILED to distill base BTF: %s\n", strerror(errno));
+			goto out_err;
+		}
+
+		btf__free(obj->base_btf);
+		btf__free(obj->btf);
+		obj->base_btf = base_btf;
+		obj->btf = btf;
+	}
+
+	err = sort_btf_by_name(obj->btf);
+	if (err) {
+		pr_err("FAILED to sort BTF: %s\n", strerror(errno));
+		goto out_err;
+	}
+
+	return 0;
+
+out_err:
+	btf__free(base_btf);
+	btf__free(btf);
+	obj->base_btf = NULL;
+	obj->btf = NULL;
+
+	return err;
+}
+
 static inline int make_out_path(char *buf, u32 buf_sz, const char *in_path, const char *suffix)
 {
 	int len = snprintf(buf, buf_sz, "%s%s", in_path, suffix);
@@ -1054,6 +1076,7 @@ int main(int argc, const char **argv)
 	};
 	const char *btfids_path = NULL;
 	bool fatal_warnings = false;
+	bool resolve_btfids = true;
 	char out_path[PATH_MAX];
 
 	struct option btfid_options[] = {
@@ -1083,12 +1106,6 @@ int main(int argc, const char **argv)
 	if (btfids_path)
 		return patch_btfids(btfids_path, obj.path);
 
-	if (load_btf(&obj))
-		goto out;
-
-	if (sort_btf_by_name(obj.btf))
-		goto out;
-
 	if (elf_collect(&obj))
 		goto out;
 
@@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
 	if (obj.efile.idlist_shndx == -1 ||
 	    obj.efile.symbols_shndx == -1) {
 		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
-		goto dump_btf;
+		resolve_btfids = false;
 	}
 
-	if (symbols_collect(&obj))
+	if (resolve_btfids)
+		if (symbols_collect(&obj))
+			goto out;
+
+	if (load_btf(&obj))
 		goto out;
 
+	if (finalize_btf(&obj))
+		goto out;
+
+	if (!resolve_btfids)
+		goto dump_btf;
+
 	if (symbols_resolve(&obj))
 		goto out;
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (3 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-16 20:39   ` bot+bpf-ci
                     ` (2 more replies)
  2026-01-16 20:16 ` [PATCH bpf-next v2 06/13] selftests/bpf: Add tests " Ihor Solodrai
                   ` (8 subsequent siblings)
  13 siblings, 3 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Implement BTF modifications in resolve_btfids to support BPF kernel
functions with implicit arguments.

For a kfunc marked with the KF_IMPLICIT_ARGS flag, a new function
prototype without the implicit arguments is added to BTF, and the
kfunc is updated to point at it. This prototype is the intended
interface for BPF programs.

A <func_name>_impl function is added to BTF to make the original kfunc
prototype searchable for the BPF verifier. If a <func_name>_impl
function already exists in BTF, it's interpreted as a legacy case, and
this step is skipped.

Whether an argument is implicit is determined by its type:
currently only `struct bpf_prog_aux *` is supported.

As a result, the BTF associated with the kfunc is changed from

    __bpf_kfunc bpf_foo(int arg1, struct bpf_prog_aux *aux);

into

    bpf_foo_impl(int arg1, struct bpf_prog_aux *aux);
    __bpf_kfunc bpf_foo(int arg1);

For more context see previous discussions and patches [1][2].

[1] https://lore.kernel.org/dwarves/ba1650aa-fafd-49a8-bea4-bdddee7c38c9@linux.dev/
[2] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 tools/bpf/resolve_btfids/main.c | 383 ++++++++++++++++++++++++++++++++
 1 file changed, 383 insertions(+)

diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index 1fcf37af6764..b83316359cfd 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -152,6 +152,23 @@ struct object {
 	int nr_typedefs;
 };
 
+#define KF_IMPLICIT_ARGS (1 << 16)
+#define KF_IMPL_SUFFIX "_impl"
+
+struct kfunc {
+	const char *name;
+	u32 btf_id;
+	u32 flags;
+};
+
+struct btf2btf_context {
+	struct btf *btf;
+	u32 *decl_tags;
+	u32 nr_decl_tags;
+	struct kfunc *kfuncs;
+	u32 nr_kfuncs;
+};
+
 static int verbose;
 static int warnings;
 
@@ -837,6 +854,369 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
 	return 0;
 }
 
+static const struct btf_type *btf_type_skip_qualifiers(const struct btf *btf, s32 type_id)
+{
+	const struct btf_type *t = btf__type_by_id(btf, type_id);
+
+	while (btf_is_mod(t))
+		t = btf__type_by_id(btf, t->type);
+
+	return t;
+}
+
+static const struct btf_decl_tag *btf_type_decl_tag(const struct btf_type *t)
+{
+	return (const struct btf_decl_tag *)(t + 1);
+}
+
+static int collect_decl_tags(struct btf2btf_context *ctx)
+{
+	const u32 type_cnt = btf__type_cnt(ctx->btf);
+	struct btf *btf = ctx->btf;
+	const struct btf_type *t;
+	u32 *tags, *tmp;
+	u32 nr_tags = 0;
+
+	tags = malloc(type_cnt * sizeof(u32));
+	if (!tags)
+		return -ENOMEM;
+
+	for (u32 id = 1; id < type_cnt; id++) {
+		t = btf__type_by_id(btf, id);
+		if (!btf_is_decl_tag(t))
+			continue;
+		tags[nr_tags++] = id;
+	}
+
+	if (nr_tags == 0) {
+		ctx->decl_tags = NULL;
+		free(tags);
+		return 0;
+	}
+
+	tmp = realloc(tags, nr_tags * sizeof(u32));
+	if (!tmp) {
+		free(tags);
+		return -ENOMEM;
+	}
+
+	ctx->decl_tags = tmp;
+	ctx->nr_decl_tags = nr_tags;
+
+	return 0;
+}
+
+/*
+ * To find the flags of a kfunc given its struct btf_id (with ELF
+ * addresses), we look for a set8 whose address range contains one of
+ * the kfunc's addresses. If such a set8 is found, the flags are
+ * located at addr + 4 bytes. Return 0 (no flags!) if not found.
+ */
+static u32 find_kfunc_flags(struct object *obj, struct btf_id *kfunc_id)
+{
+	const u32 *elf_data_ptr = obj->efile.idlist->d_buf;
+	u64 set_lower_addr, set_upper_addr, addr;
+	struct btf_id *set_id;
+	struct rb_node *next;
+	u32 flags;
+	u64 idx;
+
+	next = rb_first(&obj->sets);
+	while (next) {
+		set_id = rb_entry(next, struct btf_id, rb_node);
+		if (set_id->kind != BTF_ID_KIND_SET8 || set_id->addr_cnt != 1)
+			goto skip;
+
+		set_lower_addr = set_id->addr[0];
+		set_upper_addr = set_lower_addr + set_id->cnt * sizeof(u64);
+
+		for (u32 i = 0; i < kfunc_id->addr_cnt; i++) {
+			addr = kfunc_id->addr[i];
+			/*
+			 * Lower bound is exclusive to skip the 8-byte header of the set.
+			 * Upper bound is inclusive to capture the last entry at offset 8*cnt.
+			 */
+			if (set_lower_addr < addr && addr <= set_upper_addr) {
+				pr_debug("found kfunc %s in BTF_ID_FLAGS %s\n",
+					 kfunc_id->name, set_id->name);
+				goto found;
+			}
+		}
+skip:
+		next = rb_next(next);
+	}
+
+	return 0;
+
+found:
+	idx = addr - obj->efile.idlist_addr;
+	idx = idx / sizeof(u32) + 1;
+	flags = elf_data_ptr[idx];
+
+	return flags;
+}
+
+static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
+{
+	struct kfunc *kfunc, *kfuncs, *tmp;
+	const char *tag_name, *func_name;
+	struct btf *btf = ctx->btf;
+	const struct btf_type *t;
+	u32 flags, func_id;
+	struct btf_id *id;
+	s64 nr_kfuncs = 0;
+
+	if (ctx->nr_decl_tags == 0)
+		return 0;
+
+	kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));
+	if (!kfuncs)
+		return -ENOMEM;
+
+	for (u32 i = 0; i < ctx->nr_decl_tags; i++) {
+		t = btf__type_by_id(btf, ctx->decl_tags[i]);
+		if (btf_kflag(t) || btf_type_decl_tag(t)->component_idx != -1)
+			continue;
+
+		tag_name = btf__name_by_offset(btf, t->name_off);
+		if (strcmp(tag_name, "bpf_kfunc") != 0)
+			continue;
+
+		func_id = t->type;
+		t = btf__type_by_id(btf, func_id);
+		if (!btf_is_func(t))
+			continue;
+
+		func_name = btf__name_by_offset(btf, t->name_off);
+		if (!func_name)
+			continue;
+
+		id = btf_id__find(&obj->funcs, func_name);
+		if (!id || id->kind != BTF_ID_KIND_SYM)
+			continue;
+
+		flags = find_kfunc_flags(obj, id);
+
+		kfunc = &kfuncs[nr_kfuncs++];
+		kfunc->name = id->name;
+		kfunc->btf_id = func_id;
+		kfunc->flags = flags;
+	}
+
+	if (nr_kfuncs == 0) {
+		ctx->kfuncs = NULL;
+		ctx->nr_kfuncs = 0;
+		free(kfuncs);
+		return 0;
+	}
+
+	tmp = realloc(kfuncs, nr_kfuncs * sizeof(*kfuncs));
+	if (!tmp) {
+		free(kfuncs);
+		return -ENOMEM;
+	}
+
+	ctx->kfuncs = tmp;
+	ctx->nr_kfuncs = nr_kfuncs;
+
+	return 0;
+}
+
+static int build_btf2btf_context(struct object *obj, struct btf2btf_context *ctx)
+{
+	int err;
+
+	ctx->btf = obj->btf;
+
+	err = collect_decl_tags(ctx);
+	if (err) {
+		pr_err("ERROR: resolve_btfids: failed to collect decl tags from BTF\n");
+		return err;
+	}
+
+	err = collect_kfuncs(obj, ctx);
+	if (err) {
+		pr_err("ERROR: resolve_btfids: failed to collect kfuncs from BTF\n");
+		return err;
+	}
+
+	return 0;
+}
+
+
+/* Implicit BPF kfunc arguments can only be of particular types */
+static bool is_kf_implicit_arg(const struct btf *btf, const struct btf_param *p)
+{
+	static const char *const kf_implicit_arg_types[] = {
+		"bpf_prog_aux",
+	};
+	const struct btf_type *t;
+	const char *name;
+
+	t = btf_type_skip_qualifiers(btf, p->type);
+	if (!btf_is_ptr(t))
+		return false;
+
+	t = btf_type_skip_qualifiers(btf, t->type);
+	if (!btf_is_struct(t))
+		return false;
+
+	name = btf__name_by_offset(btf, t->name_off);
+	if (!name)
+		return false;
+
+	for (int i = 0; i < ARRAY_SIZE(kf_implicit_arg_types); i++)
+		if (strcmp(name, kf_implicit_arg_types[i]) == 0)
+			return true;
+
+	return false;
+}
+
+/*
+ * For a kfunc with KF_IMPLICIT_ARGS we do the following:
+ *   1. Add a new function with _impl suffix in the name, with the prototype
+ *      of the original kfunc.
+ *   2. Add all decl tags except "bpf_kfunc" for the _impl func.
+ *   3. Add a new function prototype with modified list of arguments:
+ *      omitting implicit args.
+ *   4. Change the prototype of the original kfunc to the new one.
+ *
+ * This way we transform the BTF associated with the kfunc from
+ *	__bpf_kfunc bpf_foo(int arg1, void *implicit_arg);
+ * into
+ *	bpf_foo_impl(int arg1, void *implicit_arg);
+ *	__bpf_kfunc bpf_foo(int arg1);
+ *
+ * If a kfunc with KF_IMPLICIT_ARGS already has an _impl counterpart
+ * in BTF, then it's a legacy case: an _impl function is declared in the
+ * source code. In this case, we can skip adding an _impl function, but we
+ * still have to add a func prototype that omits implicit args.
+ */
+static int process_kfunc_with_implicit_args(struct btf2btf_context *ctx, struct kfunc *kfunc)
+{
+	s32 idx, new_proto_id, new_func_id, proto_id;
+	const char *param_name, *tag_name;
+	const struct btf_param *params;
+	enum btf_func_linkage linkage;
+	char tmp_name[KSYM_NAME_LEN];
+	struct btf *btf = ctx->btf;
+	int err, len, nr_params;
+	struct btf_type *t;
+
+	t = (struct btf_type *)btf__type_by_id(btf, kfunc->btf_id);
+	if (!t || !btf_is_func(t)) {
+		pr_err("ERROR: resolve_btfids: btf id %d is not a function\n", kfunc->btf_id);
+		return -EINVAL;
+	}
+
+	linkage = btf_vlen(t);
+
+	proto_id = t->type;
+	t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+	if (!t || !btf_is_func_proto(t)) {
+		pr_err("ERROR: resolve_btfids: btf id %d is not a function prototype\n", proto_id);
+		return -EINVAL;
+	}
+
+	len = snprintf(tmp_name, sizeof(tmp_name), "%s%s", kfunc->name, KF_IMPL_SUFFIX);
+	if (len < 0 || len >= sizeof(tmp_name)) {
+		pr_err("ERROR: function name is too long: %s%s\n", kfunc->name, KF_IMPL_SUFFIX);
+		return -E2BIG;
+	}
+
+	if (btf__find_by_name_kind(btf, tmp_name, BTF_KIND_FUNC) > 0) {
+		pr_debug("resolve_btfids: function %s already exists in BTF\n", tmp_name);
+		goto add_new_proto;
+	}
+
+	/* Add a new function with _impl suffix and original prototype */
+	new_func_id = btf__add_func(btf, tmp_name, linkage, proto_id);
+	if (new_func_id < 0) {
+		pr_err("ERROR: resolve_btfids: failed to add func %s to BTF\n", tmp_name);
+		return new_func_id;
+	}
+
+	/* Copy all decl tags except "bpf_kfunc" from the original kfunc to the new one */
+	for (int i = 0; i < ctx->nr_decl_tags; i++) {
+		t = (struct btf_type *)btf__type_by_id(btf, ctx->decl_tags[i]);
+		if (t->type != kfunc->btf_id)
+			continue;
+
+		tag_name = btf__name_by_offset(btf, t->name_off);
+		if (strcmp(tag_name, "bpf_kfunc") == 0)
+			continue;
+
+		idx = btf_type_decl_tag(t)->component_idx;
+
+		if (btf_kflag(t))
+			err = btf__add_decl_attr(btf, tag_name, new_func_id, idx);
+		else
+			err = btf__add_decl_tag(btf, tag_name, new_func_id, idx);
+
+		if (err < 0) {
+			pr_err("ERROR: resolve_btfids: failed to add decl tag %s for %s\n",
+			       tag_name, tmp_name);
+			return -EINVAL;
+		}
+	}
+
+add_new_proto:
+	t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+	new_proto_id = btf__add_func_proto(btf, t->type);
+	if (new_proto_id < 0) {
+		pr_err("ERROR: resolve_btfids: failed to add func proto for %s\n", kfunc->name);
+		return new_proto_id;
+	}
+
+	/* Add non-implicit args to the new prototype */
+	t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+	nr_params = btf_vlen(t);
+	for (int i = 0; i < nr_params; i++) {
+		params = btf_params(t);
+		if (is_kf_implicit_arg(btf, &params[i]))
+			break;
+		param_name = btf__name_by_offset(btf, params[i].name_off);
+		err = btf__add_func_param(btf, param_name, params[i].type);
+		if (err < 0) {
+			pr_err("ERROR: resolve_btfids: failed to add param %s for %s\n",
+			       param_name, kfunc->name);
+			return err;
+		}
+		t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+	}
+
+	/* Finally change the prototype of the original kfunc to the new one */
+	t = (struct btf_type *)btf__type_by_id(btf, kfunc->btf_id);
+	t->type = new_proto_id;
+
+	pr_debug("resolve_btfids: updated BTF for kfunc with implicit args %s\n", kfunc->name);
+
+	return 0;
+}
+
+static int btf2btf(struct object *obj)
+{
+	struct btf2btf_context ctx = {};
+	int err;
+
+	err = build_btf2btf_context(obj, &ctx);
+	if (err)
+		return err;
+
+	for (u32 i = 0; i < ctx.nr_kfuncs; i++) {
+		struct kfunc *kfunc = &ctx.kfuncs[i];
+
+		if (!(kfunc->flags & KF_IMPLICIT_ARGS))
+			continue;
+
+		err = process_kfunc_with_implicit_args(&ctx, kfunc);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 /*
  * Sort types by name in ascending order resulting in all
  * anonymous types being placed before named types.
@@ -1126,6 +1506,9 @@ int main(int argc, const char **argv)
 	if (load_btf(&obj))
 		goto out;
 
+	if (btf2btf(&obj))
+		goto out;
+
 	if (finalize_btf(&obj))
 		goto out;
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 06/13] selftests/bpf: Add tests for KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (4 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  1:24   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Add trivial end-to-end tests to validate that the KF_IMPLICIT_ARGS
flag is properly handled by both resolve_btfids and the verifier.

Declare kfuncs in bpf_testmod. Check that the bpf_prog_aux pointer is
set in the kfunc implementation. Verify that both calls with implicit
args and the legacy case work.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 .../bpf/prog_tests/kfunc_implicit_args.c      | 10 +++++
 .../selftests/bpf/progs/kfunc_implicit_args.c | 41 +++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    | 26 ++++++++++++
 3 files changed, 77 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
 create mode 100644 tools/testing/selftests/bpf/progs/kfunc_implicit_args.c

diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c b/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
new file mode 100644
index 000000000000..5e4793c9c29a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "kfunc_implicit_args.skel.h"
+
+void test_kfunc_implicit_args(void)
+{
+	RUN_TESTS(kfunc_implicit_args);
+}
diff --git a/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c b/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c
new file mode 100644
index 000000000000..89b6a47e22dd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+extern int bpf_kfunc_implicit_arg(int a) __weak __ksym;
+extern int bpf_kfunc_implicit_arg_impl(int a, struct bpf_prog_aux *aux) __weak __ksym; /* illegal */
+extern int bpf_kfunc_implicit_arg_legacy(int a, int b) __weak __ksym;
+extern int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux) __weak __ksym;
+
+char _license[] SEC("license") = "GPL";
+
+SEC("syscall")
+__retval(5)
+int test_kfunc_implicit_arg(void *ctx)
+{
+	return bpf_kfunc_implicit_arg(5);
+}
+
+SEC("syscall")
+__failure __msg("cannot find address for kernel function bpf_kfunc_implicit_arg_impl")
+int test_kfunc_implicit_arg_impl_illegal(void *ctx)
+{
+	return bpf_kfunc_implicit_arg_impl(5, NULL);
+}
+
+SEC("syscall")
+__retval(7)
+int test_kfunc_implicit_arg_legacy(void *ctx)
+{
+	return bpf_kfunc_implicit_arg_legacy(3, 4);
+}
+
+SEC("syscall")
+__retval(11)
+int test_kfunc_implicit_arg_legacy_impl(void *ctx)
+{
+	return bpf_kfunc_implicit_arg_legacy_impl(5, 6, NULL);
+}
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index bc07ce9d5477..a996b816ecc4 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -1142,6 +1142,10 @@ __bpf_kfunc int bpf_kfunc_st_ops_inc10(struct st_ops_args *args)
 __bpf_kfunc int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id);
 __bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux_prog);
 
+__bpf_kfunc int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux);
+__bpf_kfunc int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux);
+__bpf_kfunc int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux);
+
 BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
@@ -1184,6 +1188,9 @@ BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_pro_epilogue, KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_inc10)
 BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1)
 BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_impl)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy_impl)
 BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
 
 static int bpf_testmod_ops_init(struct btf *btf)
@@ -1675,6 +1682,25 @@ int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog
 	return ret;
 }
 
+int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux)
+{
+	if (aux && a > 0)
+		return a;
+	return -EINVAL;
+}
+
+int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux)
+{
+	if (aux)
+		return a + b;
+	return -EINVAL;
+}
+
+int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux)
+{
+	return bpf_kfunc_implicit_arg_legacy(a, b, aux);
+}
+
 static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
 {
 	struct bpf_testmod_multi_st_ops *st_ops =
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (5 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 06/13] selftests/bpf: Add tests " Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  1:50   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 08/13] HID: Use bpf_wq_set_callback kernel function Ihor Solodrai
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Implement bpf_wq_set_callback() with an implicit bpf_prog_aux
argument, and remove bpf_wq_set_callback_impl().

Update special kfunc checks in the verifier accordingly.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 kernel/bpf/helpers.c                             | 11 +++++------
 kernel/bpf/verifier.c                            | 16 ++++++++--------
 tools/testing/selftests/bpf/bpf_experimental.h   |  5 -----
 .../bpf/progs/verifier_async_cb_context.c        |  4 ++--
 tools/testing/selftests/bpf/progs/wq_failures.c  |  4 ++--
 5 files changed, 17 insertions(+), 23 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 9eaa4185e0a7..c76a9003b221 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3120,12 +3120,11 @@ __bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
 	return 0;
 }
 
-__bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
-					 int (callback_fn)(void *map, int *key, void *value),
-					 unsigned int flags,
-					 void *aux__prog)
+__bpf_kfunc int bpf_wq_set_callback(struct bpf_wq *wq,
+				    int (callback_fn)(void *map, int *key, void *value),
+				    unsigned int flags,
+				    struct bpf_prog_aux *aux)
 {
-	struct bpf_prog_aux *aux = (struct bpf_prog_aux *)aux__prog;
 	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
 
 	if (flags)
@@ -4488,7 +4487,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_memset)
 BTF_ID_FLAGS(func, bpf_modify_return_test_tp)
 #endif
 BTF_ID_FLAGS(func, bpf_wq_init)
-BTF_ID_FLAGS(func, bpf_wq_set_callback_impl)
+BTF_ID_FLAGS(func, bpf_wq_set_callback, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_wq_start)
 BTF_ID_FLAGS(func, bpf_preempt_disable)
 BTF_ID_FLAGS(func, bpf_preempt_enable)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c6bb8a098fc5..360f2199b5d3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -520,7 +520,7 @@ static bool is_async_callback_calling_kfunc(u32 btf_id);
 static bool is_callback_calling_kfunc(u32 btf_id);
 static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
 
-static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id);
+static bool is_bpf_wq_set_callback_kfunc(u32 btf_id);
 static bool is_task_work_add_kfunc(u32 func_id);
 
 static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
@@ -562,7 +562,7 @@ static bool is_async_cb_sleepable(struct bpf_verifier_env *env, struct bpf_insn
 
 	/* bpf_wq and bpf_task_work callbacks are always sleepable. */
 	if (bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
-	    (is_bpf_wq_set_callback_impl_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
+	    (is_bpf_wq_set_callback_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
 		return true;
 
 	verifier_bug(env, "unhandled async callback in is_async_cb_sleepable");
@@ -12437,7 +12437,7 @@ enum special_kfunc_type {
 	KF_bpf_percpu_obj_new_impl,
 	KF_bpf_percpu_obj_drop_impl,
 	KF_bpf_throw,
-	KF_bpf_wq_set_callback_impl,
+	KF_bpf_wq_set_callback,
 	KF_bpf_preempt_disable,
 	KF_bpf_preempt_enable,
 	KF_bpf_iter_css_task_new,
@@ -12501,7 +12501,7 @@ BTF_ID(func, bpf_dynptr_clone)
 BTF_ID(func, bpf_percpu_obj_new_impl)
 BTF_ID(func, bpf_percpu_obj_drop_impl)
 BTF_ID(func, bpf_throw)
-BTF_ID(func, bpf_wq_set_callback_impl)
+BTF_ID(func, bpf_wq_set_callback)
 BTF_ID(func, bpf_preempt_disable)
 BTF_ID(func, bpf_preempt_enable)
 #ifdef CONFIG_CGROUPS
@@ -12994,7 +12994,7 @@ static bool is_sync_callback_calling_kfunc(u32 btf_id)
 
 static bool is_async_callback_calling_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl] ||
+	return is_bpf_wq_set_callback_kfunc(btf_id) ||
 	       is_task_work_add_kfunc(btf_id);
 }
 
@@ -13004,9 +13004,9 @@ static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
 	       insn->imm == special_kfunc_list[KF_bpf_throw];
 }
 
-static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id)
+static bool is_bpf_wq_set_callback_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl];
+	return btf_id == special_kfunc_list[KF_bpf_wq_set_callback];
 }
 
 static bool is_callback_calling_kfunc(u32 btf_id)
@@ -14085,7 +14085,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		meta.r0_rdonly = false;
 	}
 
-	if (is_bpf_wq_set_callback_impl_kfunc(meta.func_id)) {
+	if (is_bpf_wq_set_callback_kfunc(meta.func_id)) {
 		err = push_callback_call(env, insn, insn_idx, meta.subprogno,
 					 set_timer_callback_state);
 		if (err) {
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 2cd9165c7348..68a49b1f77ae 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -580,11 +580,6 @@ extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;
 
 extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
 extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
-		int (callback_fn)(void *map, int *key, void *value),
-		unsigned int flags__k, void *aux__ign) __ksym;
-#define bpf_wq_set_callback(timer, cb, flags) \
-	bpf_wq_set_callback_impl(timer, cb, flags, NULL)
 
 struct bpf_iter_kmem_cache;
 extern int bpf_iter_kmem_cache_new(struct bpf_iter_kmem_cache *it) __weak __ksym;
diff --git a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
index 7efa9521105e..5d5e1cd4d51d 100644
--- a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
+++ b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
@@ -96,7 +96,7 @@ int wq_non_sleepable_prog(void *ctx)
 
 	if (bpf_wq_init(&val->w, &wq_map, 0) != 0)
 		return 0;
-	if (bpf_wq_set_callback_impl(&val->w, wq_cb, 0, NULL) != 0)
+	if (bpf_wq_set_callback(&val->w, wq_cb, 0) != 0)
 		return 0;
 	return 0;
 }
@@ -114,7 +114,7 @@ int wq_sleepable_prog(void *ctx)
 
 	if (bpf_wq_init(&val->w, &wq_map, 0) != 0)
 		return 0;
-	if (bpf_wq_set_callback_impl(&val->w, wq_cb, 0, NULL) != 0)
+	if (bpf_wq_set_callback(&val->w, wq_cb, 0) != 0)
 		return 0;
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c
index d06f6d40594a..3767f5595bbc 100644
--- a/tools/testing/selftests/bpf/progs/wq_failures.c
+++ b/tools/testing/selftests/bpf/progs/wq_failures.c
@@ -97,7 +97,7 @@ __failure
 /* check that the first argument of bpf_wq_set_callback()
  * is a correct bpf_wq pointer.
  */
-__msg(": (85) call bpf_wq_set_callback_impl#") /* anchor message */
+__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
 __msg("arg#0 doesn't point to a map value")
 long test_wrong_wq_pointer(void *ctx)
 {
@@ -123,7 +123,7 @@ __failure
 /* check that the first argument of bpf_wq_set_callback()
  * is a correct bpf_wq pointer.
  */
-__msg(": (85) call bpf_wq_set_callback_impl#") /* anchor message */
+__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
 __msg("off 1 doesn't point to 'struct bpf_wq' that is at 0")
 long test_wrong_wq_pointer_offset(void *ctx)
 {
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 08/13] HID: Use bpf_wq_set_callback kernel function
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (6 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-16 20:16 ` [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Remove the extern declaration of bpf_wq_set_callback_impl() from
hid_bpf_helpers.h and replace the bpf_wq_set_callback macro with a
corresponding new extern declaration.

Tested with:
  # append tools/testing/selftests/hid/config and build the kernel
  $ make -C tools/testing/selftests/hid
  # in the booted kernel
  $ ./tools/testing/selftests/hid/hid_bpf -t test_multiply_events_wq

  TAP version 13
  1..1
  # Starting 1 tests from 1 test cases.
  #  RUN           hid_bpf.test_multiply_events_wq ...
  [    2.575520] hid-generic 0003:0001:0A36.0001: hidraw0: USB HID v0.00 Device [test-uhid-device-138] on 138
  #            OK  hid_bpf.test_multiply_events_wq
  ok 1 hid_bpf.test_multiply_events_wq
  # PASSED: 1 / 1 tests passed.
  # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
  PASS

Acked-by: Benjamin Tissoires <bentiss@kernel.org>
Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 drivers/hid/bpf/progs/hid_bpf_helpers.h             | 8 +++-----
 tools/testing/selftests/hid/progs/hid_bpf_helpers.h | 8 +++-----
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/drivers/hid/bpf/progs/hid_bpf_helpers.h b/drivers/hid/bpf/progs/hid_bpf_helpers.h
index bf19785a6b06..228f8d787567 100644
--- a/drivers/hid/bpf/progs/hid_bpf_helpers.h
+++ b/drivers/hid/bpf/progs/hid_bpf_helpers.h
@@ -33,11 +33,9 @@ extern int hid_bpf_try_input_report(struct hid_bpf_ctx *ctx,
 /* bpf_wq implementation */
 extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
 extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
-		int (callback_fn)(void *map, int *key, void *value),
-		unsigned int flags__k, void *aux__ign) __ksym;
-#define bpf_wq_set_callback(wq, cb, flags) \
-	bpf_wq_set_callback_impl(wq, cb, flags, NULL)
+extern int bpf_wq_set_callback(struct bpf_wq *wq,
+		int (*callback_fn)(void *, int *, void *),
+		unsigned int flags) __weak __ksym;
 
 #define HID_MAX_DESCRIPTOR_SIZE	4096
 #define HID_IGNORE_EVENT	-1
diff --git a/tools/testing/selftests/hid/progs/hid_bpf_helpers.h b/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
index 531228b849da..80ab60905865 100644
--- a/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
+++ b/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
@@ -116,10 +116,8 @@ extern int hid_bpf_try_input_report(struct hid_bpf_ctx *ctx,
 /* bpf_wq implementation */
 extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
 extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
-		int (callback_fn)(void *map, int *key, void *wq),
-		unsigned int flags__k, void *aux__ign) __weak __ksym;
-#define bpf_wq_set_callback(timer, cb, flags) \
-	bpf_wq_set_callback_impl(timer, cb, flags, NULL)
+extern int bpf_wq_set_callback(struct bpf_wq *wq,
+		int (*callback_fn)(void *, int *, void *),
+		unsigned int flags) __weak __ksym;
 
 #endif /* __HID_BPF_HELPERS_H */
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (7 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 08/13] HID: Use bpf_wq_set_callback kernel function Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  1:52   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() " Ihor Solodrai
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Implement bpf_task_work_schedule_* with an implicit bpf_prog_aux
argument, and remove the corresponding _impl functions from the kernel.

Update special kfunc checks in the verifier accordingly.

Update the selftests to use the new API with the implicit argument.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 kernel/bpf/helpers.c                          | 30 +++++++++----------
 kernel/bpf/verifier.c                         | 12 ++++----
 .../testing/selftests/bpf/progs/file_reader.c |  4 ++-
 tools/testing/selftests/bpf/progs/task_work.c | 11 +++++--
 .../selftests/bpf/progs/task_work_fail.c      | 16 +++++++---
 .../selftests/bpf/progs/task_work_stress.c    |  5 ++--
 .../bpf/progs/verifier_async_cb_context.c     |  6 ++--
 7 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index c76a9003b221..f2f974b5fb3b 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -4274,41 +4274,39 @@ static int bpf_task_work_schedule(struct task_struct *task, struct bpf_task_work
 }
 
 /**
- * bpf_task_work_schedule_signal_impl - Schedule BPF callback using task_work_add with TWA_SIGNAL
+ * bpf_task_work_schedule_signal - Schedule BPF callback using task_work_add with TWA_SIGNAL
  * mode
  * @task: Task struct for which callback should be scheduled
  * @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping
  * @map__map: bpf_map that embeds struct bpf_task_work in the values
  * @callback: pointer to BPF subprogram to call
- * @aux__prog: user should pass NULL
+ * @aux: pointer to bpf_prog_aux of the caller BPF program, implicitly set by the verifier
  *
  * Return: 0 if task work has been scheduled successfully, negative error code otherwise
  */
-__bpf_kfunc int bpf_task_work_schedule_signal_impl(struct task_struct *task,
-						   struct bpf_task_work *tw, void *map__map,
-						   bpf_task_work_callback_t callback,
-						   void *aux__prog)
+__bpf_kfunc int bpf_task_work_schedule_signal(struct task_struct *task, struct bpf_task_work *tw,
+					      void *map__map, bpf_task_work_callback_t callback,
+					      struct bpf_prog_aux *aux)
 {
-	return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_SIGNAL);
+	return bpf_task_work_schedule(task, tw, map__map, callback, aux, TWA_SIGNAL);
 }
 
 /**
- * bpf_task_work_schedule_resume_impl - Schedule BPF callback using task_work_add with TWA_RESUME
+ * bpf_task_work_schedule_resume - Schedule BPF callback using task_work_add with TWA_RESUME
  * mode
  * @task: Task struct for which callback should be scheduled
  * @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping
  * @map__map: bpf_map that embeds struct bpf_task_work in the values
  * @callback: pointer to BPF subprogram to call
- * @aux__prog: user should pass NULL
+ * @aux: pointer to bpf_prog_aux of the caller BPF program, implicitly set by the verifier
  *
  * Return: 0 if task work has been scheduled successfully, negative error code otherwise
  */
-__bpf_kfunc int bpf_task_work_schedule_resume_impl(struct task_struct *task,
-						   struct bpf_task_work *tw, void *map__map,
-						   bpf_task_work_callback_t callback,
-						   void *aux__prog)
+__bpf_kfunc int bpf_task_work_schedule_resume(struct task_struct *task, struct bpf_task_work *tw,
+					      void *map__map, bpf_task_work_callback_t callback,
+					      struct bpf_prog_aux *aux)
 {
-	return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_RESUME);
+	return bpf_task_work_schedule(task, tw, map__map, callback, aux, TWA_RESUME);
 }
 
 static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
@@ -4536,8 +4534,8 @@ BTF_ID_FLAGS(func, bpf_strncasestr);
 BTF_ID_FLAGS(func, bpf_cgroup_read_xattr, KF_RCU)
 #endif
 BTF_ID_FLAGS(func, bpf_stream_vprintk_impl)
-BTF_ID_FLAGS(func, bpf_task_work_schedule_signal_impl)
-BTF_ID_FLAGS(func, bpf_task_work_schedule_resume_impl)
+BTF_ID_FLAGS(func, bpf_task_work_schedule_signal, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_task_work_schedule_resume, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_dynptr_from_file)
 BTF_ID_FLAGS(func, bpf_dynptr_file_discard)
 BTF_KFUNCS_END(common_btf_ids)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 360f2199b5d3..c2abf30e9460 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12457,8 +12457,8 @@ enum special_kfunc_type {
 	KF_bpf_dynptr_from_file,
 	KF_bpf_dynptr_file_discard,
 	KF___bpf_trap,
-	KF_bpf_task_work_schedule_signal_impl,
-	KF_bpf_task_work_schedule_resume_impl,
+	KF_bpf_task_work_schedule_signal,
+	KF_bpf_task_work_schedule_resume,
 	KF_bpf_arena_alloc_pages,
 	KF_bpf_arena_free_pages,
 	KF_bpf_arena_reserve_pages,
@@ -12534,16 +12534,16 @@ BTF_ID(func, bpf_res_spin_unlock_irqrestore)
 BTF_ID(func, bpf_dynptr_from_file)
 BTF_ID(func, bpf_dynptr_file_discard)
 BTF_ID(func, __bpf_trap)
-BTF_ID(func, bpf_task_work_schedule_signal_impl)
-BTF_ID(func, bpf_task_work_schedule_resume_impl)
+BTF_ID(func, bpf_task_work_schedule_signal)
+BTF_ID(func, bpf_task_work_schedule_resume)
 BTF_ID(func, bpf_arena_alloc_pages)
 BTF_ID(func, bpf_arena_free_pages)
 BTF_ID(func, bpf_arena_reserve_pages)
 
 static bool is_task_work_add_kfunc(u32 func_id)
 {
-	return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal_impl] ||
-	       func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume_impl];
+	return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
+	       func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume];
 }
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
diff --git a/tools/testing/selftests/bpf/progs/file_reader.c b/tools/testing/selftests/bpf/progs/file_reader.c
index 4d756b623557..ff3270a0cb9b 100644
--- a/tools/testing/selftests/bpf/progs/file_reader.c
+++ b/tools/testing/selftests/bpf/progs/file_reader.c
@@ -77,7 +77,9 @@ int on_open_validate_file_read(void *c)
 		err = 1;
 		return 0;
 	}
-	bpf_task_work_schedule_signal_impl(task, &work->tw, &arrmap, task_work_callback, NULL);
+
+	bpf_task_work_schedule_signal(task, &work->tw, &arrmap, task_work_callback);
+
 	return 0;
 }
 
diff --git a/tools/testing/selftests/bpf/progs/task_work.c b/tools/testing/selftests/bpf/progs/task_work.c
index 663a80990f8f..eec422af20b8 100644
--- a/tools/testing/selftests/bpf/progs/task_work.c
+++ b/tools/testing/selftests/bpf/progs/task_work.c
@@ -66,7 +66,8 @@ int oncpu_hash_map(struct pt_regs *args)
 	if (!work)
 		return 0;
 
-	bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL);
+	bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work);
+
 	return 0;
 }
 
@@ -80,7 +81,9 @@ int oncpu_array_map(struct pt_regs *args)
 	work = bpf_map_lookup_elem(&arrmap, &key);
 	if (!work)
 		return 0;
-	bpf_task_work_schedule_signal_impl(task, &work->tw, &arrmap, process_work, NULL);
+
+	bpf_task_work_schedule_signal(task, &work->tw, &arrmap, process_work);
+
 	return 0;
 }
 
@@ -102,6 +105,8 @@ int oncpu_lru_map(struct pt_regs *args)
 	work = bpf_map_lookup_elem(&lrumap, &key);
 	if (!work || work->data[0])
 		return 0;
-	bpf_task_work_schedule_resume_impl(task, &work->tw, &lrumap, process_work, NULL);
+
+	bpf_task_work_schedule_resume(task, &work->tw, &lrumap, process_work);
+
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/progs/task_work_fail.c b/tools/testing/selftests/bpf/progs/task_work_fail.c
index 1270953fd092..557bdf9eb0fc 100644
--- a/tools/testing/selftests/bpf/progs/task_work_fail.c
+++ b/tools/testing/selftests/bpf/progs/task_work_fail.c
@@ -53,7 +53,9 @@ int mismatch_map(struct pt_regs *args)
 	work = bpf_map_lookup_elem(&arrmap, &key);
 	if (!work)
 		return 0;
-	bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL);
+
+	bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work);
+
 	return 0;
 }
 
@@ -65,7 +67,9 @@ int no_map_task_work(struct pt_regs *args)
 	struct bpf_task_work tw;
 
 	task = bpf_get_current_task_btf();
-	bpf_task_work_schedule_resume_impl(task, &tw, &hmap, process_work, NULL);
+
+	bpf_task_work_schedule_resume(task, &tw, &hmap, process_work);
+
 	return 0;
 }
 
@@ -76,7 +80,9 @@ int task_work_null(struct pt_regs *args)
 	struct task_struct *task;
 
 	task = bpf_get_current_task_btf();
-	bpf_task_work_schedule_resume_impl(task, NULL, &hmap, process_work, NULL);
+
+	bpf_task_work_schedule_resume(task, NULL, &hmap, process_work);
+
 	return 0;
 }
 
@@ -91,6 +97,8 @@ int map_null(struct pt_regs *args)
 	work = bpf_map_lookup_elem(&arrmap, &key);
 	if (!work)
 		return 0;
-	bpf_task_work_schedule_resume_impl(task, &work->tw, NULL, process_work, NULL);
+
+	bpf_task_work_schedule_resume(task, &work->tw, NULL, process_work);
+
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/progs/task_work_stress.c b/tools/testing/selftests/bpf/progs/task_work_stress.c
index 55e555f7f41b..0cba36569714 100644
--- a/tools/testing/selftests/bpf/progs/task_work_stress.c
+++ b/tools/testing/selftests/bpf/progs/task_work_stress.c
@@ -51,8 +51,9 @@ int schedule_task_work(void *ctx)
 		if (!work)
 			return 0;
 	}
-	err = bpf_task_work_schedule_signal_impl(bpf_get_current_task_btf(), &work->tw, &hmap,
-						 process_work, NULL);
+	err = bpf_task_work_schedule_signal(bpf_get_current_task_btf(), &work->tw, &hmap,
+					    process_work);
+
 	if (err)
 		__sync_fetch_and_add(&schedule_error, 1);
 	else
diff --git a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
index 5d5e1cd4d51d..f47c78719639 100644
--- a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
+++ b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
@@ -156,7 +156,8 @@ int task_work_non_sleepable_prog(void *ctx)
 	if (!task)
 		return 0;
 
-	bpf_task_work_schedule_resume_impl(task, &val->tw, &task_work_map, task_work_cb, NULL);
+	bpf_task_work_schedule_resume(task, &val->tw, &task_work_map, task_work_cb);
+
 	return 0;
 }
 
@@ -176,6 +177,7 @@ int task_work_sleepable_prog(void *ctx)
 	if (!task)
 		return 0;
 
-	bpf_task_work_schedule_resume_impl(task, &val->tw, &task_work_map, task_work_cb, NULL);
+	bpf_task_work_schedule_resume(task, &val->tw, &task_work_map, task_work_cb);
+
 	return 0;
 }
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() to KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (8 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  1:53   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test " Ihor Solodrai
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Implement bpf_stream_vprintk with an implicit bpf_prog_aux argument,
and remove bpf_stream_vprintk_impl from the kernel.

Update the selftests to use the new API with implicit argument.

The bpf_stream_printk macro is changed to use the new
bpf_stream_vprintk kfunc, and the extern declaration of
bpf_stream_vprintk_impl is replaced accordingly.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 kernel/bpf/helpers.c                            | 2 +-
 kernel/bpf/stream.c                             | 5 ++---
 tools/lib/bpf/bpf_helpers.h                     | 6 +++---
 tools/testing/selftests/bpf/progs/stream_fail.c | 6 +++---
 4 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index f2f974b5fb3b..f8aa1320e2f7 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -4533,7 +4533,7 @@ BTF_ID_FLAGS(func, bpf_strncasestr);
 #if defined(CONFIG_BPF_LSM) && defined(CONFIG_CGROUPS)
 BTF_ID_FLAGS(func, bpf_cgroup_read_xattr, KF_RCU)
 #endif
-BTF_ID_FLAGS(func, bpf_stream_vprintk_impl)
+BTF_ID_FLAGS(func, bpf_stream_vprintk, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_task_work_schedule_signal, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_task_work_schedule_resume, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_dynptr_from_file)
diff --git a/kernel/bpf/stream.c b/kernel/bpf/stream.c
index 0b6bc3f30335..24730df55e69 100644
--- a/kernel/bpf/stream.c
+++ b/kernel/bpf/stream.c
@@ -212,14 +212,13 @@ __bpf_kfunc_start_defs();
  * Avoid using enum bpf_stream_id so that kfunc users don't have to pull in the
  * enum in headers.
  */
-__bpf_kfunc int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args,
-					u32 len__sz, void *aux__prog)
+__bpf_kfunc int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
+				   u32 len__sz, struct bpf_prog_aux *aux)
 {
 	struct bpf_bprintf_data data = {
 		.get_bin_args	= true,
 		.get_buf	= true,
 	};
-	struct bpf_prog_aux *aux = aux__prog;
 	u32 fmt_size = strlen(fmt__str) + 1;
 	struct bpf_stream *stream;
 	u32 data_len = len__sz;
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index d4e4e388e625..c145da05a67c 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -315,8 +315,8 @@ enum libbpf_tristate {
 			  ___param, sizeof(___param));		\
 })
 
-extern int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args,
-				   __u32 len__sz, void *aux__prog) __weak __ksym;
+extern int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
+			      __u32 len__sz) __weak __ksym;
 
 #define bpf_stream_printk(stream_id, fmt, args...)					\
 ({											\
@@ -328,7 +328,7 @@ extern int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const vo
 	___bpf_fill(___param, args);							\
 	_Pragma("GCC diagnostic pop")							\
 											\
-	bpf_stream_vprintk_impl(stream_id, ___fmt, ___param, sizeof(___param), NULL);	\
+	bpf_stream_vprintk(stream_id, ___fmt, ___param, sizeof(___param));		\
 })
 
 /* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c
index 3662515f0107..8e8249f3521c 100644
--- a/tools/testing/selftests/bpf/progs/stream_fail.c
+++ b/tools/testing/selftests/bpf/progs/stream_fail.c
@@ -10,7 +10,7 @@ SEC("syscall")
 __failure __msg("Possibly NULL pointer passed")
 int stream_vprintk_null_arg(void *ctx)
 {
-	bpf_stream_vprintk_impl(BPF_STDOUT, "", NULL, 0, NULL);
+	bpf_stream_vprintk(BPF_STDOUT, "", NULL, 0);
 	return 0;
 }
 
@@ -18,7 +18,7 @@ SEC("syscall")
 __failure __msg("R3 type=scalar expected=")
 int stream_vprintk_scalar_arg(void *ctx)
 {
-	bpf_stream_vprintk_impl(BPF_STDOUT, "", (void *)46, 0, NULL);
+	bpf_stream_vprintk(BPF_STDOUT, "", (void *)46, 0);
 	return 0;
 }
 
@@ -26,7 +26,7 @@ SEC("syscall")
 __failure __msg("arg#1 doesn't point to a const string")
 int stream_vprintk_string_arg(void *ctx)
 {
-	bpf_stream_vprintk_impl(BPF_STDOUT, ctx, NULL, 0, NULL);
+	bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0);
 	return 0;
 }
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (9 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() " Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  1:59   ` Eduard Zingerman
  2026-01-16 20:16 ` [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation Ihor Solodrai
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

A test kfunc named bpf_kfunc_multi_st_ops_test_1_impl() is a user of
the __prog suffix. A subsequent patch removes __prog support in favor
of KF_IMPLICIT_ARGS, so migrate this kfunc to use an implicit argument.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 tools/testing/selftests/bpf/progs/struct_ops_assoc.c     | 8 ++++----
 .../selftests/bpf/progs/struct_ops_assoc_in_timer.c      | 4 ++--
 .../testing/selftests/bpf/progs/struct_ops_assoc_reuse.c | 6 +++---
 tools/testing/selftests/bpf/test_kmods/bpf_testmod.c     | 9 ++++-----
 .../testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h | 6 ++++--
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
index 8f1097903e22..68842e3f936b 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
@@ -32,7 +32,7 @@ int BPF_PROG(sys_enter_prog_a, struct pt_regs *regs, long id)
 	if (!test_pid || task->pid != test_pid)
 		return 0;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_A_MAGIC)
 		test_err_a++;
 
@@ -45,7 +45,7 @@ int syscall_prog_a(void *ctx)
 	struct st_ops_args args = {};
 	int ret;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_A_MAGIC)
 		test_err_a++;
 
@@ -79,7 +79,7 @@ int BPF_PROG(sys_enter_prog_b, struct pt_regs *regs, long id)
 	if (!test_pid || task->pid != test_pid)
 		return 0;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_B_MAGIC)
 		test_err_b++;
 
@@ -92,7 +92,7 @@ int syscall_prog_b(void *ctx)
 	struct st_ops_args args = {};
 	int ret;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_B_MAGIC)
 		test_err_b++;
 
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
index d5a2ea934284..0bed49e9f217 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
@@ -31,7 +31,7 @@ __noinline static int timer_cb(void *map, int *key, struct bpf_timer *timer)
 	struct st_ops_args args = {};
 
 	recur++;
-	timer_test_1_ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	timer_test_1_ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	recur--;
 
 	timer_cb_run++;
@@ -64,7 +64,7 @@ int syscall_prog(void *ctx)
 	struct st_ops_args args = {};
 	int ret;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_MAGIC)
 		test_err++;
 
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
index 5bb6ebf5eed4..396b3e58c729 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
@@ -23,7 +23,7 @@ int BPF_PROG(test_1_a, struct st_ops_args *args)
 
 	if (!recur) {
 		recur++;
-		ret = bpf_kfunc_multi_st_ops_test_1_impl(args, NULL);
+		ret = bpf_kfunc_multi_st_ops_test_1_assoc(args);
 		if (ret != -1)
 			test_err_a++;
 		recur--;
@@ -40,7 +40,7 @@ int syscall_prog_a(void *ctx)
 	struct st_ops_args args = {};
 	int ret;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_A_MAGIC)
 		test_err_a++;
 
@@ -62,7 +62,7 @@ int syscall_prog_b(void *ctx)
 	struct st_ops_args args = {};
 	int ret;
 
-	ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+	ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
 	if (ret != MAP_A_MAGIC)
 		test_err_b++;
 
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index a996b816ecc4..0d542ba64365 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -1140,7 +1140,7 @@ __bpf_kfunc int bpf_kfunc_st_ops_inc10(struct st_ops_args *args)
 }
 
 __bpf_kfunc int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id);
-__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux_prog);
+__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args, struct bpf_prog_aux *aux);
 
 __bpf_kfunc int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux);
 __bpf_kfunc int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux);
@@ -1187,7 +1187,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_epilogue, KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_pro_epilogue, KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_kfunc_st_ops_inc10)
 BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1)
-BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_impl)
+BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_assoc, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy, KF_IMPLICIT_ARGS)
 BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy_impl)
@@ -1669,13 +1669,12 @@ int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id)
 }
 
 /* Call test_1() of the associated struct_ops map */
-int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog)
+int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args, struct bpf_prog_aux *aux)
 {
-	struct bpf_prog_aux *prog_aux = (struct bpf_prog_aux *)aux__prog;
 	struct bpf_testmod_multi_st_ops *st_ops;
 	int ret = -1;
 
-	st_ops = (struct bpf_testmod_multi_st_ops *)bpf_prog_get_assoc_struct_ops(prog_aux);
+	st_ops = (struct bpf_testmod_multi_st_ops *)bpf_prog_get_assoc_struct_ops(aux);
 	if (st_ops)
 		ret = st_ops->test_1(args);
 
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index 2357a0340ffe..225ea30c4e3d 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -161,7 +161,9 @@ void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
 struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
 int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
 
-int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
-int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog) __ksym;
+#ifndef __KERNEL__
+extern int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __weak __ksym;
+extern int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args) __weak __ksym;
+#endif
 
 #endif /* _BPF_TESTMOD_KFUNC_H */
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (10 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test " Ihor Solodrai
@ 2026-01-16 20:16 ` Ihor Solodrai
  2026-01-20  2:01   ` Eduard Zingerman
  2026-01-16 20:17 ` [PATCH bpf-next v2 13/13] bpf,docs: Document KF_IMPLICIT_ARGS flag Ihor Solodrai
  2026-01-20  1:49 ` [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Eduard Zingerman
  13 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:16 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Now that all users of the __prog suffix in the kernel tree have
migrated to KF_IMPLICIT_ARGS, remove the annotation from the verifier.

See prior discussion for context [1].

[1] https://lore.kernel.org/bpf/CAEf4BzbgPfRm9BX=TsZm-TsHFAHcwhPY4vTt=9OT-uhWqf8tqw@mail.gmail.com/

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 kernel/bpf/verifier.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c2abf30e9460..5f2849ea845c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12211,13 +12211,6 @@ static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param
 	return btf_param_match_suffix(btf, arg, "__irq_flag");
 }
 
-static bool is_kfunc_arg_prog_aux(const struct btf *btf, const struct btf_param *arg);
-
-static bool is_kfunc_arg_prog(const struct btf *btf, const struct btf_param *arg)
-{
-	return btf_param_match_suffix(btf, arg, "__prog") || is_kfunc_arg_prog_aux(btf, arg);
-}
-
 static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
 					  const struct btf_param *arg,
 					  const char *name)
@@ -13280,8 +13273,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		if (is_kfunc_arg_ignore(btf, &args[i]))
 			continue;
 
-		if (is_kfunc_arg_prog(btf, &args[i])) {
-			/* Used to reject repeated use of __prog. */
+		if (is_kfunc_arg_prog_aux(btf, &args[i])) {
+			/* Reject repeated use of bpf_prog_aux */
 			if (meta->arg_prog) {
 				verifier_bug(env, "Only 1 prog->aux argument supported per-kfunc");
 				return -EFAULT;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH bpf-next v2 13/13] bpf,docs: Document KF_IMPLICIT_ARGS flag
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (11 preceding siblings ...)
  2026-01-16 20:16 ` [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation Ihor Solodrai
@ 2026-01-16 20:17 ` Ihor Solodrai
  2026-01-20  1:49 ` [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Eduard Zingerman
  13 siblings, 0 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:17 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

Add a section explaining the KF_IMPLICIT_ARGS kfunc flag. Remove the
__prog arg annotation documentation, as the annotation is no longer
supported.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
---
 Documentation/bpf/kfuncs.rst | 49 +++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index 3eb59a8f9f34..75e6c078e0e7 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -232,23 +232,6 @@ Or::
                 ...
         }
 
-2.3.6 __prog Annotation
----------------------------
-This annotation is used to indicate that the argument needs to be fixed up to
-the bpf_prog_aux of the caller BPF program. Any value passed into this argument
-is ignored, and rewritten by the verifier.
-
-An example is given below::
-
-        __bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
-                                                 int (callback_fn)(void *map, int *key, void *value),
-                                                 unsigned int flags,
-                                                 void *aux__prog)
-         {
-                struct bpf_prog_aux *aux = aux__prog;
-                ...
-         }
-
 .. _BPF_kfunc_nodef:
 
 2.4 Using an existing kernel function
@@ -381,6 +364,38 @@ encouraged to make their use-cases known as early as possible, and participate
 in upstream discussions regarding whether to keep, change, deprecate, or remove
 those kfuncs if and when such discussions occur.
 
+2.5.9 KF_IMPLICIT_ARGS flag
+------------------------------------
+
+The KF_IMPLICIT_ARGS flag indicates that the BPF signature of the
+kfunc is different from its kernel signature, and that the values of
+the implicit arguments are provided at load time by the verifier.
+
+Only arguments of specific types can be implicit.
+Currently only ``struct bpf_prog_aux *`` is supported.
+
+A kfunc with the KF_IMPLICIT_ARGS flag therefore has two types in BTF:
+one function matching the kernel declaration (with an _impl suffix in
+the name by convention), and another matching the intended BPF API.
+
+The verifier only allows calls to the non-_impl version of a kfunc,
+which uses a signature without the implicit arguments.
+
+Example declaration:
+
+.. code-block:: c
+
+	__bpf_kfunc int bpf_task_work_schedule_signal(struct task_struct *task, struct bpf_task_work *tw,
+						      void *map__map, bpf_task_work_callback_t callback,
+						      struct bpf_prog_aux *aux) { ... }
+
+Example usage in BPF program:
+
+.. code-block:: c
+
+	/* note that the last argument is omitted */
+	bpf_task_work_schedule_signal(task, &work->tw, &arrmap, task_work_callback);
+
 2.6 Registering the kfuncs
 --------------------------
 
-- 
2.52.0



* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-16 20:39   ` bot+bpf-ci
  2026-01-16 20:44     ` Ihor Solodrai
  2026-01-17  0:06   ` Andrii Nakryiko
  2026-01-20  0:55   ` Eduard Zingerman
  2 siblings, 1 reply; 35+ messages in thread
From: bot+bpf-ci @ 2026-01-16 20:39 UTC (permalink / raw)
  To: ihor.solodrai, ast, daniel, andrii, martin.lau, eddyz87
  Cc: yatsenko, tj, alan.maguire, bentiss, jikos, ameryhung, bpf,
	linux-kernel, linux-input, sched-ext, ast, andrii, daniel,
	martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai


> diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
> index 1fcf37af6764..b83316359cfd 100644
> --- a/tools/bpf/resolve_btfids/main.c
> +++ b/tools/bpf/resolve_btfids/main.c

[ ... ]

> +struct btf2btf_context {
> +	struct btf *btf;
> +	u32 *decl_tags;
> +	u32 nr_decl_tags;
> +	struct kfunc *kfuncs;
> +	u32 nr_kfuncs;
> +};

[ ... ]

> +static int collect_decl_tags(struct btf2btf_context *ctx)
> +{
> +	const u32 type_cnt = btf__type_cnt(ctx->btf);
> +	struct btf *btf = ctx->btf;
> +	const struct btf_type *t;
> +	u32 *tags, *tmp;
> +	u32 nr_tags = 0;
> +
> +	tags = malloc(type_cnt * sizeof(u32));
> +	if (!tags)
> +		return -ENOMEM;

[ ... ]

> +	tmp = realloc(tags, nr_tags * sizeof(u32));
> +	if (!tmp) {
> +		free(tags);
> +		return -ENOMEM;
> +	}
> +
> +	ctx->decl_tags = tmp;
> +	ctx->nr_decl_tags = nr_tags;
> +
> +	return 0;
> +}

[ ... ]

> +static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
> +{
> +	struct kfunc *kfunc, *kfuncs, *tmp;

[ ... ]

> +	kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));
> +	if (!kfuncs)
> +		return -ENOMEM;

[ ... ]

> +	tmp = realloc(kfuncs, nr_kfuncs * sizeof(*kfuncs));
> +	if (!tmp) {
> +		free(kfuncs);
> +		return -ENOMEM;
> +	}
> +
> +	ctx->kfuncs = tmp;
> +	ctx->nr_kfuncs = nr_kfuncs;
> +
> +	return 0;
> +}
> +
> +static int build_btf2btf_context(struct object *obj, struct btf2btf_context *ctx)
> +{
> +	int err;
> +
> +	ctx->btf = obj->btf;
> +
> +	err = collect_decl_tags(ctx);
> +	if (err) {
> +		pr_err("ERROR: resolve_btfids: failed to collect decl tags from BTF\n");
> +		return err;
> +	}
> +
> +	err = collect_kfuncs(obj, ctx);
> +	if (err) {
> +		pr_err("ERROR: resolve_btfids: failed to collect kfuncs from BTF\n");
> +		return err;
                  ^^^^

When collect_kfuncs() fails here, ctx->decl_tags has already been allocated
by collect_decl_tags() but is not freed before returning. Should this error
path call free(ctx->decl_tags)?

> +	}
> +
> +	return 0;
> +}

[ ... ]

> +static int btf2btf(struct object *obj)
> +{
> +	struct btf2btf_context ctx = {};
> +	int err;
> +
> +	err = build_btf2btf_context(obj, &ctx);
> +	if (err)
> +		return err;
> +
> +	for (u32 i = 0; i < ctx.nr_kfuncs; i++) {
> +		struct kfunc *kfunc = &ctx.kfuncs[i];
> +
> +		if (!(kfunc->flags & KF_IMPLICIT_ARGS))
> +			continue;
> +
> +		err = process_kfunc_with_implicit_args(&ctx, kfunc);
> +		if (err)
> +			return err;
> +	}
> +
> +	return 0;
> +}
          ^^^^

ctx.decl_tags and ctx.kfuncs are allocated during build_btf2btf_context()
but are never freed on any return path from btf2btf(). While this may be
acceptable for a short-lived build tool where the OS reclaims memory on
exit, should cleanup be added for consistency with the rest of the code
which carefully frees allocations?


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21079944982


* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-16 20:39   ` bot+bpf-ci
@ 2026-01-16 20:44     ` Ihor Solodrai
  0 siblings, 0 replies; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-16 20:44 UTC (permalink / raw)
  To: bot+bpf-ci, ast, daniel, andrii, martin.lau, eddyz87
  Cc: yatsenko, tj, alan.maguire, bentiss, jikos, ameryhung, bpf,
	linux-kernel, linux-input, sched-ext, martin.lau, yonghong.song,
	clm

On 1/16/26 12:39 PM, bot+bpf-ci@kernel.org wrote:
>> diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
>> index 1fcf37af6764..b83316359cfd 100644
>> --- a/tools/bpf/resolve_btfids/main.c
>> +++ b/tools/bpf/resolve_btfids/main.c
> 
> [ ... ]
> 
>> +struct btf2btf_context {
>> +	struct btf *btf;
>> +	u32 *decl_tags;
>> +	u32 nr_decl_tags;
>> +	struct kfunc *kfuncs;
>> +	u32 nr_kfuncs;
>> +};
> 
> [ ... ]
> 
>> +static int collect_decl_tags(struct btf2btf_context *ctx)
>> +{
>> +	const u32 type_cnt = btf__type_cnt(ctx->btf);
>> +	struct btf *btf = ctx->btf;
>> +	const struct btf_type *t;
>> +	u32 *tags, *tmp;
>> +	u32 nr_tags = 0;
>> +
>> +	tags = malloc(type_cnt * sizeof(u32));
>> +	if (!tags)
>> +		return -ENOMEM;
> 
> [ ... ]
> 
>> +	tmp = realloc(tags, nr_tags * sizeof(u32));
>> +	if (!tmp) {
>> +		free(tags);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	ctx->decl_tags = tmp;
>> +	ctx->nr_decl_tags = nr_tags;
>> +
>> +	return 0;
>> +}
> 
> [ ... ]
> 
>> +static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
>> +{
>> +	struct kfunc *kfunc, *kfuncs, *tmp;
> 
> [ ... ]
> 
>> +	kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));
>> +	if (!kfuncs)
>> +		return -ENOMEM;
> 
> [ ... ]
> 
>> +	tmp = realloc(kfuncs, nr_kfuncs * sizeof(*kfuncs));
>> +	if (!tmp) {
>> +		free(kfuncs);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	ctx->kfuncs = tmp;
>> +	ctx->nr_kfuncs = nr_kfuncs;
>> +
>> +	return 0;
>> +}
>> +
>> +static int build_btf2btf_context(struct object *obj, struct btf2btf_context *ctx)
>> +{
>> +	int err;
>> +
>> +	ctx->btf = obj->btf;
>> +
>> +	err = collect_decl_tags(ctx);
>> +	if (err) {
>> +		pr_err("ERROR: resolve_btfids: failed to collect decl tags from BTF\n");
>> +		return err;
>> +	}
>> +
>> +	err = collect_kfuncs(obj, ctx);
>> +	if (err) {
>> +		pr_err("ERROR: resolve_btfids: failed to collect kfuncs from BTF\n");
>> +		return err;
>                   ^^^^
> 
> When collect_kfuncs() fails here, ctx->decl_tags has already been allocated
> by collect_decl_tags() but is not freed before returning. Should this error
> path call free(ctx->decl_tags)?
> 
>> +	}
>> +
>> +	return 0;
>> +}
> 
> [ ... ]
> 
>> +static int btf2btf(struct object *obj)
>> +{
>> +	struct btf2btf_context ctx = {};
>> +	int err;
>> +
>> +	err = build_btf2btf_context(obj, &ctx);
>> +	if (err)
>> +		return err;
>> +
>> +	for (u32 i = 0; i < ctx.nr_kfuncs; i++) {
>> +		struct kfunc *kfunc = &ctx.kfuncs[i];
>> +
>> +		if (!(kfunc->flags & KF_IMPLICIT_ARGS))
>> +			continue;
>> +
>> +		err = process_kfunc_with_implicit_args(&ctx, kfunc);
>> +		if (err)
>> +			return err;
>> +	}
>> +
>> +	return 0;
>> +}
>           ^^^^
> 
> ctx.decl_tags and ctx.kfuncs are allocated during build_btf2btf_context()
> but are never freed on any return path from btf2btf(). While this may be
> acceptable for a short-lived build tool where the OS reclaims memory on
> exit, should cleanup be added for consistency with the rest of the code
> which carefully frees allocations?

Right. I just noticed this after sending.

As AI mentioned, it's not a severe issue, but we should free
everything properly to be consistent.


> 
> 
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
> 
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21079944982



* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
  2026-01-16 20:39   ` bot+bpf-ci
@ 2026-01-17  0:06   ` Andrii Nakryiko
  2026-01-17  6:36     ` Ihor Solodrai
  2026-01-20  0:55   ` Eduard Zingerman
  2 siblings, 1 reply; 35+ messages in thread
From: Andrii Nakryiko @ 2026-01-17  0:06 UTC (permalink / raw)
  To: Ihor Solodrai
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Mykyta Yatsenko, Tejun Heo,
	Alan Maguire, Benjamin Tissoires, Jiri Kosina, Amery Hung, bpf,
	linux-kernel, linux-input, sched-ext

On Fri, Jan 16, 2026 at 12:17 PM Ihor Solodrai <ihor.solodrai@linux.dev> wrote:
>
> Implement BTF modifications in resolve_btfids to support BPF kernel
> functions with implicit arguments.
>
> For a kfunc marked with KF_IMPLICIT_ARGS flag, a new function
> prototype is added to BTF that does not have implicit arguments. The
> kfunc's prototype is then updated to a new one in BTF. This prototype
> is the intended interface for the BPF programs.
>
> A <func_name>_impl function is added to BTF to make the original kfunc
> prototype searchable for the BPF verifier. If a <func_name>_impl
> function already exists in BTF, it's interpreted as a legacy case, and
> this step is skipped.
>
> Whether an argument is implicit is determined by its type:
> currently only `struct bpf_prog_aux *` is supported.
>
> As a result, the BTF associated with kfunc is changed from
>
>     __bpf_kfunc bpf_foo(int arg1, struct bpf_prog_aux *aux);
>
> into
>
>     bpf_foo_impl(int arg1, struct bpf_prog_aux *aux);
>     __bpf_kfunc bpf_foo(int arg1);
>
> For more context see previous discussions and patches [1][2].
>
> [1] https://lore.kernel.org/dwarves/ba1650aa-fafd-49a8-bea4-bdddee7c38c9@linux.dev/
> [2] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
>
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---
>  tools/bpf/resolve_btfids/main.c | 383 ++++++++++++++++++++++++++++++++
>  1 file changed, 383 insertions(+)
>

[...]

> +static int collect_decl_tags(struct btf2btf_context *ctx)
> +{
> +       const u32 type_cnt = btf__type_cnt(ctx->btf);
> +       struct btf *btf = ctx->btf;
> +       const struct btf_type *t;
> +       u32 *tags, *tmp;
> +       u32 nr_tags = 0;
> +
> +       tags = malloc(type_cnt * sizeof(u32));

waste of memory, really, see below

> +       if (!tags)
> +               return -ENOMEM;
> +
> +       for (u32 id = 1; id < type_cnt; id++) {
> +               t = btf__type_by_id(btf, id);
> +               if (!btf_is_decl_tag(t))
> +                       continue;
> +               tags[nr_tags++] = id;
> +       }
> +
> +       if (nr_tags == 0) {
> +               ctx->decl_tags = NULL;
> +               free(tags);
> +               return 0;
> +       }
> +
> +       tmp = realloc(tags, nr_tags * sizeof(u32));
> +       if (!tmp) {
> +               free(tags);
> +               return -ENOMEM;
> +       }

This is an interesting realloc() usage pattern, it's quite
unconventional to preallocate too much memory, and then shrink (in C
world)

check libbpf's libbpf_add_mem(), that's a generic "primitive" inside
the libbpf. Do not reuse it as is, but it should give you an idea of a
common pattern: you start with NULL (empty data), when you need to add
a new element, you calculate a new array size which normally would be
some minimal value (to avoid going through 1 -> 2 -> 4 -> 8, many
small and wasteful steps; normally we just jump straight to 16 or so)
or some factor of previous size (doesn't have to be 2x,
libbpf_add_mem() expands by 25%, for instance).

This is a super common approach in C. Please utilize it here as well.
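[For reference, the grow-on-demand pattern described above boils down to
something like the sketch below. It is illustrative only: the helper name
and the growth factor are arbitrary, and this is not libbpf's actual
libbpf_add_mem() signature.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Ensure *data can hold at least `need` elements of size `elem_sz`.
 * Starts from NULL/zero capacity, jumps straight to a minimum size,
 * and then grows by ~25% per step (similar in spirit to libbpf).
 */
static int grow_array(void **data, size_t *cap, size_t need, size_t elem_sz)
{
	size_t new_cap;
	void *tmp;

	if (need <= *cap)
		return 0;

	new_cap = *cap * 5 / 4 + 16;	/* +25%, with a minimum first jump */
	if (new_cap < need)
		new_cap = need;

	tmp = realloc(*data, new_cap * elem_sz);
	if (!tmp)
		return -ENOMEM;

	*data = tmp;
	*cap = new_cap;
	return 0;
}
```

With this, a collection loop calls grow_array() before each append and
never preallocates to the worst case.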

> +
> +       ctx->decl_tags = tmp;
> +       ctx->nr_decl_tags = nr_tags;
> +
> +       return 0;
> +}
> +
> +/*
> + * To find the kfunc flags having its struct btf_id (with ELF addresses)
> + * we need to find the address that is in range of a set8.
> + * If a set8 is found, then the flags are located at addr + 4 bytes.
> + * Return 0 (no flags!) if not found.
> + */
> +static u32 find_kfunc_flags(struct object *obj, struct btf_id *kfunc_id)
> +{
> +       const u32 *elf_data_ptr = obj->efile.idlist->d_buf;
> +       u64 set_lower_addr, set_upper_addr, addr;
> +       struct btf_id *set_id;
> +       struct rb_node *next;
> +       u32 flags;
> +       u64 idx;
> +
> +       next = rb_first(&obj->sets);
> +       while (next) {

for(next = rb_first(...); next; next = rb_next(next)) seems like a
good fit here, no?

> +               set_id = rb_entry(next, struct btf_id, rb_node);
> +               if (set_id->kind != BTF_ID_KIND_SET8 || set_id->addr_cnt != 1)
> +                       goto skip;
> +
> +               set_lower_addr = set_id->addr[0];
> +               set_upper_addr = set_lower_addr + set_id->cnt * sizeof(u64);
> +
> +               for (u32 i = 0; i < kfunc_id->addr_cnt; i++) {
> +                       addr = kfunc_id->addr[i];
> +                       /*
> +                        * Lower bound is exclusive to skip the 8-byte header of the set.
> +                        * Upper bound is inclusive to capture the last entry at offset 8*cnt.
> +                        */
> +                       if (set_lower_addr < addr && addr <= set_upper_addr) {
> +                               pr_debug("found kfunc %s in BTF_ID_FLAGS %s\n",
> +                                        kfunc_id->name, set_id->name);
> +                               goto found;

why goto, just do what needs to be done and return?

> +                       }
> +               }
> +skip:
> +               next = rb_next(next);
> +       }
> +
> +       return 0;
> +
> +found:
> +       idx = addr - obj->efile.idlist_addr;
> +       idx = idx / sizeof(u32) + 1;
> +       flags = elf_data_ptr[idx];
> +
> +       return flags;
> +}
> +
> +static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
> +{
> +       struct kfunc *kfunc, *kfuncs, *tmp;
> +       const char *tag_name, *func_name;
> +       struct btf *btf = ctx->btf;
> +       const struct btf_type *t;
> +       u32 flags, func_id;
> +       struct btf_id *id;
> +       s64 nr_kfuncs = 0;
> +
> +       if (ctx->nr_decl_tags == 0)
> +               return 0;
> +
> +       kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));

ditto about realloc() usage pattern

> +       if (!kfuncs)
> +               return -ENOMEM;
> +

[...]

> +/*
> + * For a kfunc with KF_IMPLICIT_ARGS we do the following:
> + *   1. Add a new function with _impl suffix in the name, with the prototype
> + *      of the original kfunc.
> + *   2. Add all decl tags except "bpf_kfunc" for the _impl func.
> + *   3. Add a new function prototype with modified list of arguments:
> + *      omitting implicit args.
> + *   4. Change the prototype of the original kfunc to the new one.
> + *
> + * This way we transform the BTF associated with the kfunc from
> + *     __bpf_kfunc bpf_foo(int arg1, void *implicit_arg);
> + * into
> + *     bpf_foo_impl(int arg1, void *implicit_arg);
> + *     __bpf_kfunc bpf_foo(int arg1);
> + *
> + * If a kfunc with KF_IMPLICIT_ARGS already has an _impl counterpart
> + * in BTF, then it's a legacy case: an _impl function is declared in the
> + * source code. In this case, we can skip adding an _impl function, but we
> + * still have to add a func prototype that omits implicit args.
> + */
> +static int process_kfunc_with_implicit_args(struct btf2btf_context *ctx, struct kfunc *kfunc)
> +{

this logic looks good

> +       s32 idx, new_proto_id, new_func_id, proto_id;
> +       const char *param_name, *tag_name;
> +       const struct btf_param *params;
> +       enum btf_func_linkage linkage;
> +       char tmp_name[KSYM_NAME_LEN];
> +       struct btf *btf = ctx->btf;
> +       int err, len, nr_params;
> +       struct btf_type *t;
> +

[...]


* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-17  0:06   ` Andrii Nakryiko
@ 2026-01-17  6:36     ` Ihor Solodrai
  2026-01-20  0:24       ` Eduard Zingerman
  0 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-17  6:36 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Mykyta Yatsenko, Tejun Heo,
	Alan Maguire, Benjamin Tissoires, Jiri Kosina, Amery Hung, bpf,
	linux-kernel, linux-input, sched-ext



On 1/16/26 4:06 PM, Andrii Nakryiko wrote:
> On Fri, Jan 16, 2026 at 12:17 PM Ihor Solodrai <ihor.solodrai@linux.dev> wrote:
>>
>> Implement BTF modifications in resolve_btfids to support BPF kernel
>> functions with implicit arguments.
>>
>> For a kfunc marked with KF_IMPLICIT_ARGS flag, a new function
>> prototype is added to BTF that does not have implicit arguments. The
>> kfunc's prototype is then updated to a new one in BTF. This prototype
>> is the intended interface for the BPF programs.
>>
>> A <func_name>_impl function is added to BTF to make the original kfunc
>> prototype searchable for the BPF verifier. If a <func_name>_impl
>> function already exists in BTF, it's interpreted as a legacy case, and
>> this step is skipped.
>>
>> Whether an argument is implicit is determined by its type:
>> currently only `struct bpf_prog_aux *` is supported.
>>
>> As a result, the BTF associated with kfunc is changed from
>>
>>     __bpf_kfunc bpf_foo(int arg1, struct bpf_prog_aux *aux);
>>
>> into
>>
>>     bpf_foo_impl(int arg1, struct bpf_prog_aux *aux);
>>     __bpf_kfunc bpf_foo(int arg1);
>>
>> For more context see previous discussions and patches [1][2].
>>
>> [1] https://lore.kernel.org/dwarves/ba1650aa-fafd-49a8-bea4-bdddee7c38c9@linux.dev/
>> [2] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
>>
>> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
>> ---
>>  tools/bpf/resolve_btfids/main.c | 383 ++++++++++++++++++++++++++++++++
>>  1 file changed, 383 insertions(+)
>>
> 
> [...]
> 
>> +static int collect_decl_tags(struct btf2btf_context *ctx)
>> +{
>> +       const u32 type_cnt = btf__type_cnt(ctx->btf);
>> +       struct btf *btf = ctx->btf;
>> +       const struct btf_type *t;
>> +       u32 *tags, *tmp;
>> +       u32 nr_tags = 0;
>> +
>> +       tags = malloc(type_cnt * sizeof(u32));
> 
> waste of memory, really, see below
> 
>> +       if (!tags)
>> +               return -ENOMEM;
>> +
>> +       for (u32 id = 1; id < type_cnt; id++) {
>> +               t = btf__type_by_id(btf, id);
>> +               if (!btf_is_decl_tag(t))
>> +                       continue;
>> +               tags[nr_tags++] = id;
>> +       }
>> +
>> +       if (nr_tags == 0) {
>> +               ctx->decl_tags = NULL;
>> +               free(tags);
>> +               return 0;
>> +       }
>> +
>> +       tmp = realloc(tags, nr_tags * sizeof(u32));
>> +       if (!tmp) {
>> +               free(tags);
>> +               return -ENOMEM;
>> +       }
> 
> This is an interesting realloc() usage pattern, it's quite
> unconventional to preallocate too much memory, and then shrink (in C
> world)
> 
> check libbpf's libbpf_add_mem(), that's a generic "primitive" inside
> the libbpf. Do not reuse it as is, but it should give you an idea of a
> common pattern: you start with NULL (empty data), when you need to add
> a new element, you calculate a new array size which normally would be
> some minimal value (to avoid going through 1 -> 2 -> 4 -> 8, many
> small and wasteful steps; normally we just jump straight to 16 or so)
> or some factor of previous size (doesn't have to be 2x,
> libbpf_add_mem() expands by 25%, for instance).
> 
> This is a super common approach in C. Please utilize it here as well.

Hi Andrii, thanks for taking a quick look.

I am aware of the typical size doubling (or whatever the multiplier
is) pattern for growing arrays. Amortized cost and all that.

I don't know if this pre-alloc + shrink is common, but I did use it in
pahole before [1], for example.

The chain of thought that makes me like it is:
  * if we knew the array size beforehand, we'd simply pre-allocate it
  * here we don't, but we do know an upper limit (and it's not crazy)
  * if we pre-allocate to upper limit, we can use the array without
    worrying about the bounds checks and growing on every use
  * if we care (we might not), we can shrink to the actual size

The dynamic array approach is certainly more generic, and helpers can
be written to make it easy. But in cases like this - collect something
once and then use - over-pre-allocating makes more sense to me.

Re waste: we are talking <1 MB (~100k types * 4 bytes), so it's negligible.

In any case it's not super important, so I don't mind changing this if
you insist. Being conventional has its benefits too.

[1] https://git.kernel.org/pub/scm/devel/pahole/pahole.git/tree/btf_encoder.c?h=v1.31#n2182
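[In code, the pre-alloc + shrink pattern described above amounts to the
following standalone sketch. It is illustrative only, not the actual
resolve_btfids code; the example collects even values just to have a
concrete filter.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Collect the even values from src[0..n) into a newly allocated array.
 * Preallocate to the known upper bound (n elements), fill with no
 * bounds checks or regrowth, then shrink to the actual count.
 */
static int collect_even(const int *src, size_t n, int **out, size_t *out_n)
{
	size_t cnt = 0;
	int *vals, *tmp;

	vals = malloc(n * sizeof(*vals));	/* upper bound: all match */
	if (!vals)
		return -ENOMEM;

	for (size_t i = 0; i < n; i++)
		if (src[i] % 2 == 0)
			vals[cnt++] = src[i];

	if (cnt == 0) {
		free(vals);
		*out = NULL;
		*out_n = 0;
		return 0;
	}

	tmp = realloc(vals, cnt * sizeof(*vals));	/* shrink to fit */
	if (!tmp) {
		free(vals);
		return -ENOMEM;
	}

	*out = tmp;
	*out_n = cnt;
	return 0;
}
```

The trade-off is exactly as stated: one allocation plus one shrink,
versus incremental growth, at the cost of a temporarily oversized buffer.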

> 
>> +
>> +       ctx->decl_tags = tmp;
>> +       ctx->nr_decl_tags = nr_tags;
>> +
>> +       return 0;
>> +}
>> +
>> +/*
>> + * To find the kfunc flags having its struct btf_id (with ELF addresses)
>> + * we need to find the address that is in range of a set8.
>> + * If a set8 is found, then the flags are located at addr + 4 bytes.
>> + * Return 0 (no flags!) if not found.
>> + */
>> +static u32 find_kfunc_flags(struct object *obj, struct btf_id *kfunc_id)
>> +{
>> +       const u32 *elf_data_ptr = obj->efile.idlist->d_buf;
>> +       u64 set_lower_addr, set_upper_addr, addr;
>> +       struct btf_id *set_id;
>> +       struct rb_node *next;
>> +       u32 flags;
>> +       u64 idx;
>> +
>> +       next = rb_first(&obj->sets);
>> +       while (next) {
> 
> for(next = rb_first(...); next; next = rb_next(next)) seems like a
> good fit here, no?

Looks like it. We could do 'continue' then.

> 
>> +               set_id = rb_entry(next, struct btf_id, rb_node);
>> +               if (set_id->kind != BTF_ID_KIND_SET8 || set_id->addr_cnt != 1)
>> +                       goto skip;
>> +
>> +               set_lower_addr = set_id->addr[0];
>> +               set_upper_addr = set_lower_addr + set_id->cnt * sizeof(u64);
>> +
>> +               for (u32 i = 0; i < kfunc_id->addr_cnt; i++) {
>> +                       addr = kfunc_id->addr[i];
>> +                       /*
>> +                        * Lower bound is exclusive to skip the 8-byte header of the set.
>> +                        * Upper bound is inclusive to capture the last entry at offset 8*cnt.
>> +                        */
>> +                       if (set_lower_addr < addr && addr <= set_upper_addr) {
>> +                               pr_debug("found kfunc %s in BTF_ID_FLAGS %s\n",
>> +                                        kfunc_id->name, set_id->name);
>> +                               goto found;
> 
> why goto, just do what needs to be done and return?

Indeed.

> 
>> +                       }
>> +               }
>> +skip:
>> +               next = rb_next(next);
>> +       }
>> +
>> +       return 0;
>> +
>> +found:
>> +       idx = addr - obj->efile.idlist_addr;
>> +       idx = idx / sizeof(u32) + 1;
>> +       flags = elf_data_ptr[idx];
>> +
>> +       return flags;
>> +}
>> +
>> +static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
>> +{
>> +       struct kfunc *kfunc, *kfuncs, *tmp;
>> +       const char *tag_name, *func_name;
>> +       struct btf *btf = ctx->btf;
>> +       const struct btf_type *t;
>> +       u32 flags, func_id;
>> +       struct btf_id *id;
>> +       s64 nr_kfuncs = 0;
>> +
>> +       if (ctx->nr_decl_tags == 0)
>> +               return 0;
>> +
>> +       kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));
> 
> ditto about realloc() usage pattern
> 
>> +       if (!kfuncs)
>> +               return -ENOMEM;
>> +
> 
> [...]
> 
>> +/*
>> + * For a kfunc with KF_IMPLICIT_ARGS we do the following:
>> + *   1. Add a new function with _impl suffix in the name, with the prototype
>> + *      of the original kfunc.
>> + *   2. Add all decl tags except "bpf_kfunc" for the _impl func.
>> + *   3. Add a new function prototype with modified list of arguments:
>> + *      omitting implicit args.
>> + *   4. Change the prototype of the original kfunc to the new one.
>> + *
>> + * This way we transform the BTF associated with the kfunc from
>> + *     __bpf_kfunc bpf_foo(int arg1, void *implicit_arg);
>> + * into
>> + *     bpf_foo_impl(int arg1, void *implicit_arg);
>> + *     __bpf_kfunc bpf_foo(int arg1);
>> + *
>> + * If a kfunc with KF_IMPLICIT_ARGS already has an _impl counterpart
>> + * in BTF, then it's a legacy case: an _impl function is declared in the
>> + * source code. In this case, we can skip adding an _impl function, but we
>> + * still have to add a func prototype that omits implicit args.
>> + */
>> +static int process_kfunc_with_implicit_args(struct btf2btf_context *ctx, struct kfunc *kfunc)
>> +{
> 
> this logic looks good
> 
>> +       s32 idx, new_proto_id, new_func_id, proto_id;
>> +       const char *param_name, *tag_name;
>> +       const struct btf_param *params;
>> +       enum btf_func_linkage linkage;
>> +       char tmp_name[KSYM_NAME_LEN];
>> +       struct btf *btf = ctx->btf;
>> +       int err, len, nr_params;
>> +       struct btf_type *t;
>> +
> 
> [...]



* Re: [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-20  0:03   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  0:03 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> A kernel function bpf_foo marked with KF_IMPLICIT_ARGS flag is
> expected to have two associated types in BTF:
>   * `bpf_foo` with a function prototype that omits implicit arguments
>   * `bpf_foo_impl` with a function prototype that matches the kernel
>      declaration of `bpf_foo`, but doesn't have a ksym associated with
>      its name
> 
> In order to support kfuncs with implicit arguments, the verifier has
> to know how to resolve a call of `bpf_foo` to the correct BTF function
> prototype and address.
> 
> To implement this, in add_kfunc_call() kfunc flags are checked for
> KF_IMPLICIT_ARGS. For such kfuncs a BTF func prototype is adjusted to
> the one found for `bpf_foo_impl` (func_name + "_impl" suffix, by
> convention) function in BTF.
> 
> This effectively changes the signature of the `bpf_foo` kfunc in the
> context of verification: from one without implicit args to the one
> with full argument list.
> 
> The values of implicit arguments by design are provided by the
> verifier, and so they can only be of particular types. In this patch
> the only allowed implicit arg type is a pointer to struct
> bpf_prog_aux.
> 
> In order for the verifier to correctly set an implicit bpf_prog_aux
> arg value at runtime, is_kfunc_arg_prog() is extended to check for the
> arg type. At a point when prog arg is determined in check_kfunc_args()
> the kfunc with implicit args already has a prototype with full
> argument list, so the existing value patch mechanism just works.
> 
> If a new kfunc with KF_IMPLICIT_ARG is declared for an existing kfunc
> that uses a __prog argument (a legacy case), the prototype
> substitution works in exactly the same way, assuming the kfunc follows
> the _impl naming convention. The difference is only in how _impl
> prototype is added to the BTF, which is not the verifier's
> concern. See a subsequent resolve_btfids patch for details.
> 
> __prog suffix is still supported at this point, but will be removed in
> a subsequent patch, after current users are moved to KF_IMPLICIT_ARGS.
> 
> Introduction of KF_IMPLICIT_ARGS revealed an issue with zero-extension
> tracking, because an explicit rX = 0 in place of the verifier-supplied
> argument is now absent if the arg is implicit (the BPF prog doesn't
> pass a dummy NULL anymore). To mitigate this, reset the subreg_def of
> all caller saved registers in check_kfunc_call() [1].
> 
> [1] https://lore.kernel.org/bpf/b4a760ef828d40dac7ea6074d39452bb0dc82caa.camel@gmail.com/
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

> @@ -14177,8 +14223,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  		}
>  	}
>  
> -	for (i = 0; i < CALLER_SAVED_REGS; i++)
> -		mark_reg_not_init(env, regs, caller_saved[i]);
> +	for (i = 0; i < CALLER_SAVED_REGS; i++) {
> +		u32 regno = caller_saved[i];
> +
> +		mark_reg_not_init(env, regs, regno);
> +		regs[regno].subreg_def = DEF_NOT_SUBREG;
> +	}

But we still need to understand why .subreg_def assignment can't be
moved inside mark_reg_not_init().

[...]


* Re: [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-16 20:16 ` [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step Ihor Solodrai
@ 2026-01-20  0:13   ` Eduard Zingerman
  2026-01-20 18:11     ` Ihor Solodrai
  0 siblings, 1 reply; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  0:13 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Since recently [1][2] resolve_btfids executes final adjustments to the
> kernel/module BTF before it's embedded into the target binary.
> 
> To keep the implementation simple, a clear and stable "pipeline" of
> how BTF data flows through resolve_btfids would be helpful. Some BTF
> modifications may change the ids of the types, so it is important to
> maintain correct order of operations with respect to .BTF_ids
> resolution too.
> 
> This patch refactors the BTF handling to establish the following
> sequence:
>   - load target ELF sections
>   - load .BTF_ids symbols
>     - this will be a dependency of btf2btf transformations in
>       subsequent patches
>   - load BTF and its base as is
>   - (*) btf2btf transformations will happen here
>   - finalize_btf(), introduced in this patch
>     - distills the base and sorts the BTF
>   - resolve and patch .BTF_ids
> 
> This approach helps to avoid fixups in .BTF_ids data in case the ids
> change at any point of BTF processing, because symbol resolution
> happens on the finalized, ready to dump, BTF data.
> 
> This also gives flexibility in BTF transformations, because they will
> happen on BTF that is not distilled and/or sorted yet, allowing us to
> freely add, remove and modify BTF types.
> 
> [1] https://lore.kernel.org/bpf/20251219181321.1283664-1-ihor.solodrai@linux.dev/
> [2] https://lore.kernel.org/bpf/20260109130003.3313716-1-dolinux.peng@gmail.com/
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

> @@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
>  	if (obj.efile.idlist_shndx == -1 ||
>  	    obj.efile.symbols_shndx == -1) {
>  		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
> -		goto dump_btf;
> +		resolve_btfids = false;
>  	}
>  
> -	if (symbols_collect(&obj))
> +	if (resolve_btfids)
> +		if (symbols_collect(&obj))
> +			goto out;

Nit: check obj.efile.idlist_shndx and obj.efile.symbols_shndx inside symbols_collect()?
     To avoid resolve_btfids flag and the `goto dump_btf;` below.

> +
> +	if (load_btf(&obj))
>  		goto out;
>  
> +	if (finalize_btf(&obj))
> +		goto out;
> +
> +	if (!resolve_btfids)
> +		goto dump_btf;
> +
>  	if (symbols_resolve(&obj))
>  		goto out;
>  

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-17  6:36     ` Ihor Solodrai
@ 2026-01-20  0:24       ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  0:24 UTC (permalink / raw)
  To: Ihor Solodrai, Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Mykyta Yatsenko, Tejun Heo, Alan Maguire,
	Benjamin Tissoires, Jiri Kosina, Amery Hung, bpf, linux-kernel,
	linux-input, sched-ext

On Fri, 2026-01-16 at 22:36 -0800, Ihor Solodrai wrote:

[...]

> > > +static int collect_decl_tags(struct btf2btf_context *ctx)
> > > +{
> > > +       const u32 type_cnt = btf__type_cnt(ctx->btf);
> > > +       struct btf *btf = ctx->btf;
> > > +       const struct btf_type *t;
> > > +       u32 *tags, *tmp;
> > > +       u32 nr_tags = 0;
> > > +
> > > +       tags = malloc(type_cnt * sizeof(u32));
> > 
> > waste of memory, really, see below
> > 
> > > +       if (!tags)
> > > +               return -ENOMEM;
> > > +
> > > +       for (u32 id = 1; id < type_cnt; id++) {
> > > +               t = btf__type_by_id(btf, id);
> > > +               if (!btf_is_decl_tag(t))
> > > +                       continue;
> > > +               tags[nr_tags++] = id;
> > > +       }
> > > +
> > > +       if (nr_tags == 0) {
> > > +               ctx->decl_tags = NULL;
> > > +               free(tags);
> > > +               return 0;
> > > +       }
> > > +
> > > +       tmp = realloc(tags, nr_tags * sizeof(u32));
> > > +       if (!tmp) {
> > > +               free(tags);
> > > +               return -ENOMEM;
> > > +       }
> > 
> > This is an interesting realloc() usage pattern, it's quite
> > unconventional to preallocate too much memory, and then shrink (in C
> > world)
> > 
> > check libbpf's libbpf_add_mem(), that's a generic "primitive" inside
> > the libbpf. Do not reuse it as is, but it should give you an idea of a
> > common pattern: you start with NULL (empty data), when you need to add
> > a new element, you calculate a new array size which normally would be
> > some minimal value (to avoid going through 1 -> 2 -> 4 -> 8, many
> > small and wasteful steps; normally we just jump straight to 16 or so)
> > or some factor of previous size (doesn't have to be 2x,
> > libbpf_add_mem() expands by 25%, for instance).
> > 
> > This is a super common approach in C. Please utilize it here as well.
> 
> Hi Andrii, thanks for taking a quick look.
> 
> I am aware of the typical size doubling (or whatever the multiplier
> is) pattern for growing arrays. Amortized cost and all that.
> 
> I don't know if this pre-alloc + shrink is common, but I did use it in
> pahole before [1], for example.
> 
> The chain of thought that makes me like it is:
>   * if we knew the array size beforehand, we'd simply pre-allocate it
>   * here we don't, but we do know an upper limit (and it's not crazy)
>   * if we pre-allocate to upper limit, we can use the array without
>     worrying about the bounds checks and growing on every use
>   * if we care (we might not), we can shrink to the actual size
> 
> The dynamic array approach is certainly more generic, and helpers can
> be written to make it easy. But in cases like this - collect something
> once and then use - over-pre-allocating makes more sense to me.
> 
> Re waste, we are talking <1MB (~100k types * 4 bytes), so it's whatever.
> 
> In any case it's not super important, so I don't mind changing this if
> you insist. Being conventional has its benefits too.
> 
> [1] https://git.kernel.org/pub/scm/devel/pahole/pahole.git/tree/btf_encoder.c?h=v1.31#n2182

In my test kernel there are ~70K types and ~300 decl tags.
Allocating an array of 70K elements to store 300 seems like overkill.
I'd move to what Andrii suggests just to reduce the surprise factor for the reader.

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
  2026-01-16 20:39   ` bot+bpf-ci
  2026-01-17  0:06   ` Andrii Nakryiko
@ 2026-01-20  0:55   ` Eduard Zingerman
  2 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  0:55 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Implement BTF modifications in resolve_btfids to support BPF kernel
> functions with implicit arguments.
> 
> For a kfunc marked with KF_IMPLICIT_ARGS flag, a new function
> prototype is added to BTF that does not have implicit arguments. The
> kfunc's prototype is then updated to a new one in BTF. This prototype
> is the intended interface for the BPF programs.
> 
> A <func_name>_impl function is added to BTF to make the original kfunc
> prototype searchable for the BPF verifier. If a <func_name>_impl
> function already exists in BTF, it's interpreted as a legacy case, and
> this step is skipped.
> 
> Whether an argument is implicit is determined by its type:
> currently only `struct bpf_prog_aux *` is supported.
> 
> As a result, the BTF associated with kfunc is changed from
> 
>     __bpf_kfunc bpf_foo(int arg1, struct bpf_prog_aux *aux);
> 
> into
> 
>     bpf_foo_impl(int arg1, struct bpf_prog_aux *aux);
>     __bpf_kfunc bpf_foo(int arg1);
> 
> For more context see previous discussions and patches [1][2].
> 
> [1] https://lore.kernel.org/dwarves/ba1650aa-fafd-49a8-bea4-bdddee7c38c9@linux.dev/
> [2] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Patch logic looks good to me, modulo LLM's memory management concern
and nit from Andrii.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

> @@ -837,6 +854,369 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
>  	return 0;
>  }
>  
> +static const struct btf_type *btf_type_skip_qualifiers(const struct btf *btf, s32 type_id)
> +{
> +	const struct btf_type *t = btf__type_by_id(btf, type_id);
> +
> +	while (btf_is_mod(t))
> +		t = btf__type_by_id(btf, t->type);
> +
> +	return t;
> +}
> +
> +static const struct btf_decl_tag *btf_type_decl_tag(const struct btf_type *t)
> +{
> +	return (const struct btf_decl_tag *)(t + 1);
> +}

Nit: there is a utility function btf_decl_tag() in bpf/btf.h
     which does exactly the same.

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 06/13] selftests/bpf: Add tests for KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 06/13] selftests/bpf: Add tests " Ihor Solodrai
@ 2026-01-20  1:24   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:24 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Add trivial end-to-end tests to validate that KF_IMPLICIT_ARGS flag is
> properly handled by both resolve_btfids and the verifier.
> 
> Declare kfuncs in bpf_testmod. Check that bpf_prog_aux pointer is set
> in the kfunc implementation. Verify that calls with implicit args and
> a legacy case all work.
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS
  2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
                   ` (12 preceding siblings ...)
  2026-01-16 20:17 ` [PATCH bpf-next v2 13/13] bpf,docs: Document KF_IMPLICIT_ARGS flag Ihor Solodrai
@ 2026-01-20  1:49 ` Eduard Zingerman
  13 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:49 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:

[...]

> v1->v2:
>   - Replace the following kernel functions with KF_IMPLICIT_ARGS version:
>     - bpf_stream_vprintk_impl -> bpf_stream_vprintk
>     - bpf_task_work_schedule_resume_impl -> bpf_task_work_schedule_resume
>     - bpf_task_work_schedule_signal_impl -> bpf_task_work_schedule_signal
>     - bpf_wq_set_callback_impl -> bpf_wq_set_callback

Just to clarify my understanding, this is a breaking change, right?
E.g. bpf_stream_vprintk_impl is no longer in vmlinux.h and on a load
attempt an error is reported:

  kfunc 'bpf_stream_vprintk_impl' is referenced but wasn't resolved

Maybe call it out explicitly in the cover letter?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-20  1:50   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:50 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Implement bpf_wq_set_callback() with an implicit bpf_prog_aux
> argument, and remove bpf_wq_set_callback_impl().
> 
> Update special kfunc checks in the verifier accordingly.
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS Ihor Solodrai
@ 2026-01-20  1:52   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:52 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Implement bpf_task_work_schedule_* with an implicit bpf_prog_aux
> argument, and remove corresponding _impl funcs from the kernel.
> 
> Update special kfunc checks in the verifier accordingly.
> 
> Update the selftests to use the new API with implicit argument.
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() to KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() " Ihor Solodrai
@ 2026-01-20  1:53   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:53 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Implement bpf_stream_vprintk with an implicit bpf_prog_aux argument,
> and remove bpf_stream_vprintk_impl from the kernel.
> 
> Update the selftests to use the new API with implicit argument.
> 
> bpf_stream_vprintk macro is changed to use the new bpf_stream_vprintk
> kfunc, and the extern definition of bpf_stream_vprintk_impl is
> replaced accordingly.
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
  2026-01-16 20:16 ` [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test " Ihor Solodrai
@ 2026-01-20  1:59   ` Eduard Zingerman
  2026-01-20 18:20     ` Ihor Solodrai
  0 siblings, 1 reply; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  1:59 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:

[...]

> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> index 2357a0340ffe..225ea30c4e3d 100644
> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> @@ -161,7 +161,9 @@ void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
>  struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
>  int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
>  
> -int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
> -int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog) __ksym;
> +#ifndef __KERNEL__
> +extern int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __weak __ksym;
> +extern int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args) __weak __ksym;
> +#endif

Nit: bpf_kfunc_multi_st_ops_test_1 change is not necessary, right?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation
  2026-01-16 20:16 ` [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation Ihor Solodrai
@ 2026-01-20  2:01   ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20  2:01 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> Now that all the __prog suffix users in the kernel tree migrated to
> KF_IMPLICIT_ARGS, remove it from the verifier.
> 
> See prior discussion for context [1].
> 
> [1] https://lore.kernel.org/bpf/CAEf4BzbgPfRm9BX=TsZm-TsHFAHcwhPY4vTt=9OT-uhWqf8tqw@mail.gmail.com/
> 
> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-20  0:13   ` Eduard Zingerman
@ 2026-01-20 18:11     ` Ihor Solodrai
  2026-01-20 18:19       ` Eduard Zingerman
  0 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-20 18:11 UTC (permalink / raw)
  To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On 1/19/26 4:13 PM, Eduard Zingerman wrote:
> On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
>> As of recent changes [1][2], resolve_btfids executes final adjustments to the
>> kernel/module BTF before it's embedded into the target binary.
>>
>> To keep the implementation simple, a clear and stable "pipeline" of
>> how BTF data flows through resolve_btfids would be helpful. Some BTF
>> modifications may change the ids of the types, so it is important to
>> maintain correct order of operations with respect to .BTF_ids
>> resolution too.
>>
>> This patch refactors the BTF handling to establish the following
>> sequence:
>>   - load target ELF sections
>>   - load .BTF_ids symbols
>>     - this will be a dependency of btf2btf transformations in
>>       subsequent patches
>>   - load BTF and its base as is
>>   - (*) btf2btf transformations will happen here
>>   - finalize_btf(), introduced in this patch
>>     - distills the base and sorts the BTF
>>   - resolve and patch .BTF_ids
>>
>> This approach helps to avoid fixups in .BTF_ids data in case the ids
>> change at any point of BTF processing, because symbol resolution
>> happens on the finalized, ready to dump, BTF data.
>>
>> This also gives flexibility in BTF transformations, because they will
>> happen on BTF that is not distilled and/or sorted yet, allowing us to
>> freely add, remove and modify BTF types.
>>
>> [1] https://lore.kernel.org/bpf/20251219181321.1283664-1-ihor.solodrai@linux.dev/
>> [2] https://lore.kernel.org/bpf/20260109130003.3313716-1-dolinux.peng@gmail.com/
>>
>> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
>> ---
> 
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> 
>> @@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
>>  	if (obj.efile.idlist_shndx == -1 ||
>>  	    obj.efile.symbols_shndx == -1) {
>>  		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
>> -		goto dump_btf;
>> +		resolve_btfids = false;
>>  	}
>>  
>> -	if (symbols_collect(&obj))
>> +	if (resolve_btfids)
>> +		if (symbols_collect(&obj))
>> +			goto out;
> 
> Nit: check obj.efile.idlist_shndx and obj.efile.symbols_shndx inside symbols_collect()?
>      To avoid resolve_btfids flag and the `goto dump_btf;` below.

Hi Eduard, thank you for review.

The issue is that in case of .BTF_ids section absent we have to skip
some of the steps, specifically:
  - symbols_collect()
  - sequence between symbols_resolve() and dump_raw_btf_ids()

It's not an exit condition, we still have to do load/dump of the BTF.

I tried in symbols_collect():

	if (obj.efile.idlist_shndx == -1 || obj.efile.symbols_shndx == -1)
		return 0;

But then, we either have to do the same check in symbols_resolve() and
co, or maybe store a flag in the struct object.  So I decided it's
better to have an explicit flag in the main control flow, instead of
hiding it.

lmk if you had something else in mind


> 
>> +
>> +	if (load_btf(&obj))
>>  		goto out;
>>  
>> +	if (finalize_btf(&obj))
>> +		goto out;
>> +
>> +	if (!resolve_btfids)
>> +		goto dump_btf;
>> +
>>  	if (symbols_resolve(&obj))
>>  		goto out;
>>  


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-20 18:11     ` Ihor Solodrai
@ 2026-01-20 18:19       ` Eduard Zingerman
  2026-01-20 18:35         ` Ihor Solodrai
  0 siblings, 1 reply; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20 18:19 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Tue, 2026-01-20 at 10:11 -0800, Ihor Solodrai wrote:

[...]

> > > @@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
> > >  	if (obj.efile.idlist_shndx == -1 ||
> > >  	    obj.efile.symbols_shndx == -1) {
> > >  		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
> > > -		goto dump_btf;
> > > +		resolve_btfids = false;
> > >  	}
> > >  
> > > -	if (symbols_collect(&obj))
> > > +	if (resolve_btfids)
> > > +		if (symbols_collect(&obj))
> > > +			goto out;
> > 
> > Nit: check obj.efile.idlist_shndx and obj.efile.symbols_shndx inside symbols_collect()?
> >      To avoid resolve_btfids flag and the `goto dump_btf;` below.
> 
> Hi Eduard, thank you for review.
> 
> The issue is that in case of .BTF_ids section absent we have to skip
> some of the steps, specifically:
>   - symbols_collect()
>   - sequence between symbols_resolve() and dump_raw_btf_ids()

> It's not an exit condition, we still have to do load/dump of the BTF.
> 
> I tried in symbols_collect():
> 
> 	if (obj.efile.idlist_shndx == -1 || obj.efile.symbols_shndx == -1)
> 		return 0;
> 
> But then, we either have to do the same check in symbols_resolve() and
> co, or maybe store a flag in the struct object.  So I decided it's
> better to have an explicit flag in the main control flow, instead of
> hiding it.

For symbols_resolve() is any special logic necessary?
I think that `id = btf_id__find(root, str);` will just return NULL for
every type, thus the whole function would be a noop passing through
BTF types once.

symbols_patch() will be a noop, as it will attempt traversing empty roots.
dump_raw_btf_ids() already returns if there are no .BTF_ids.

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
  2026-01-20  1:59   ` Eduard Zingerman
@ 2026-01-20 18:20     ` Ihor Solodrai
  2026-01-20 18:24       ` Eduard Zingerman
  0 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-20 18:20 UTC (permalink / raw)
  To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On 1/19/26 5:59 PM, Eduard Zingerman wrote:
> On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> 
> [...]
> 
>> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
>> index 2357a0340ffe..225ea30c4e3d 100644
>> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
>> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
>> @@ -161,7 +161,9 @@ void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
>>  struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
>>  int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
>>  
>> -int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
>> -int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog) __ksym;
>> +#ifndef __KERNEL__
>> +extern int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __weak __ksym;
>> +extern int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args) __weak __ksym;
>> +#endif
> 
> Nit: wbpf_kfunc_multi_st_ops_test_1 change is not necessary, right?

Right, but it felt wrong to only change one of these decls.

This header is weird in that it is included both in the module code
and in BPF progs, although it is typically not a problem since the
most kfunc signatures match.

Maybe it should have #ifndef __KERNEL__ followed by kfunc declarations
that correspond to vmlinux.h format? I haven't tried that, but seems
logical to me.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS
  2026-01-20 18:20     ` Ihor Solodrai
@ 2026-01-20 18:24       ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20 18:24 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Tue, 2026-01-20 at 10:20 -0800, Ihor Solodrai wrote:
> On 1/19/26 5:59 PM, Eduard Zingerman wrote:
> > On Fri, 2026-01-16 at 12:16 -0800, Ihor Solodrai wrote:
> > 
> > [...]
> > 
> > > diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> > > index 2357a0340ffe..225ea30c4e3d 100644
> > > --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> > > +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> > > @@ -161,7 +161,9 @@ void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
> > >  struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
> > >  int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
> > >  
> > > -int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
> > > -int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog) __ksym;
> > > +#ifndef __KERNEL__
> > > +extern int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __weak __ksym;
> > > +extern int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args) __weak __ksym;
> > > +#endif
> > 
> > Nit: wbpf_kfunc_multi_st_ops_test_1 change is not necessary, right?
> 
> Right, but it felt wrong to only change one of these decls.
> 
> This header is weird in that it is included both in the module code
> and in BPF progs, although it is typically not a problem since the
> most kfunc signatures match.

I think it is used this way, so that compiler can warn user about
signature mismatch during development.

> Maybe it should have #ifndef __KERNEL__ followed by kfunc declarations
> that correspond to vmlinux.h format? I haven't tried that, but seems
> logical to me.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-20 18:19       ` Eduard Zingerman
@ 2026-01-20 18:35         ` Ihor Solodrai
  2026-01-20 18:40           ` Eduard Zingerman
  0 siblings, 1 reply; 35+ messages in thread
From: Ihor Solodrai @ 2026-01-20 18:35 UTC (permalink / raw)
  To: Eduard Zingerman, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On 1/20/26 10:19 AM, Eduard Zingerman wrote:
> On Tue, 2026-01-20 at 10:11 -0800, Ihor Solodrai wrote:
> 
> [...]
> 
>>>> @@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
>>>>  	if (obj.efile.idlist_shndx == -1 ||
>>>>  	    obj.efile.symbols_shndx == -1) {
>>>>  		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
>>>> -		goto dump_btf;
>>>> +		resolve_btfids = false;
>>>>  	}
>>>>  
>>>> -	if (symbols_collect(&obj))
>>>> +	if (resolve_btfids)
>>>> +		if (symbols_collect(&obj))
>>>> +			goto out;
>>>
>>> Nit: check obj.efile.idlist_shndx and obj.efile.symbols_shndx inside symbols_collect()?
>>>      To avoid resolve_btfids flag and the `goto dump_btf;` below.
>>
>> Hi Eduard, thank you for review.
>>
>> The issue is that in case of .BTF_ids section absent we have to skip
>> some of the steps, specifically:
>>   - symbols_collect()
>>   - sequence between symbols_resolve() and dump_raw_btf_ids()
> 
>> It's not an exit condition, we still have to do load/dump of the BTF.
>>
>> I tried in symbols_collect():
>>
>> 	if (obj.efile.idlist_shndx == -1 || obj.efile.symbols_shndx == -1)
>> 		return 0;
>>
>> But then, we either have to do the same check in symbols_resolve() and
>> co, or maybe store a flag in the struct object.  So I decided it's
>> better to have an explicit flag in the main control flow, instead of
>> hiding it.
> 
> For symbols_resolve() is any special logic necessary?
> I think that `id = btf_id__find(root, str);` will just return NULL for
> every type, thus the whole function would be a noop passing through
> BTF types once.
> 
> symbols_patch() will be a noop, as it will attempt traversing empty roots.
> dump_raw_btf_ids() already returns if there are no .BTF_ids.

Hm... Looks like you're right, those would be noops.

Still, I think it's clearer what steps are skipped with a toplevel
flag.  Otherwise to figure out that those are noops you need to check
every subroutine (as you just did), and a future change may
unintentionally break the expectation of a noop, creating an unnecessary
debugging session.

And re symbols_resolve(), if we don't like allocating unnecessary
memory, why are we ok with traversing the BTF with noops? Seems
a bit inconsistent to me.

> 
> [...]


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step
  2026-01-20 18:35         ` Ihor Solodrai
@ 2026-01-20 18:40           ` Eduard Zingerman
  0 siblings, 0 replies; 35+ messages in thread
From: Eduard Zingerman @ 2026-01-20 18:40 UTC (permalink / raw)
  To: Ihor Solodrai, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau
  Cc: Mykyta Yatsenko, Tejun Heo, Alan Maguire, Benjamin Tissoires,
	Jiri Kosina, Amery Hung, bpf, linux-kernel, linux-input,
	sched-ext

On Tue, 2026-01-20 at 10:35 -0800, Ihor Solodrai wrote:
> On 1/20/26 10:19 AM, Eduard Zingerman wrote:
> > On Tue, 2026-01-20 at 10:11 -0800, Ihor Solodrai wrote:
> > 
> > [...]
> > 
> > > > > @@ -1099,12 +1116,22 @@ int main(int argc, const char **argv)
> > > > >  	if (obj.efile.idlist_shndx == -1 ||
> > > > >  	    obj.efile.symbols_shndx == -1) {
> > > > >  		pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
> > > > > -		goto dump_btf;
> > > > > +		resolve_btfids = false;
> > > > >  	}
> > > > >  
> > > > > -	if (symbols_collect(&obj))
> > > > > +	if (resolve_btfids)
> > > > > +		if (symbols_collect(&obj))
> > > > > +			goto out;
> > > > 
> > > > Nit: check obj.efile.idlist_shndx and obj.efile.symbols_shndx inside symbols_collect()?
> > > >      To avoid resolve_btfids flag and the `goto dump_btf;` below.
> > > 
> > > Hi Eduard, thank you for review.
> > > 
> > > The issue is that in case of .BTF_ids section absent we have to skip
> > > some of the steps, specifically:
> > >   - symbols_collect()
> > >   - sequence between symbols_resolve() and dump_raw_btf_ids()
> > 
> > > It's not an exit condition, we still have to do load/dump of the BTF.
> > > 
> > > I tried in symbols_collect():
> > > 
> > > 	if (obj.efile.idlist_shndx == -1 || obj.efile.symbols_shndx == -1)
> > > 		return 0;
> > > 
> > > But then, we either have to do the same check in symbols_resolve() and
> > > co, or maybe store a flag in the struct object.  So I decided it's
> > > better to have an explicit flag in the main control flow, instead of
> > > hiding it.
> > 
> > For symbols_resolve() is any special logic necessary?
> > I think that `id = btf_id__find(root, str);` will just return NULL for
> > every type, thus the whole function would be a noop passing through
> > BTF types once.
> > 
> > symbols_patch() will be a noop, as it will attempt traversing empty roots.
> > dump_raw_btf_ids() already returns if there are no .BTF_ids.
> 
> Hm... Looks like you're right, those would be noops.
> 
> Still, I think a top-level flag makes it clearer which steps are
> skipped.  Otherwise, to figure out that those are no-ops you need to
> check every subroutine (as you just did), and a future change may
> unintentionally break that no-op expectation, creating an unnecessary
> debugging session.

I'd argue that resolve_btfids is written in a clear read-data /
process-data manner, so it should behave correctly even when the
collected data is empty. Is there any difference between a missing
.BTF_ids section and an empty one?

> And re symbols_resolve(): if we don't like allocating unnecessary
> memory, why are we OK with traversing the BTF as a no-op? Seems
> a bit inconsistent to me.

Heh, I've never heard of consistency :)
I just don't like flags; they make me think more.

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2026-01-20 18:40 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-16 20:16 [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Ihor Solodrai
2026-01-16 20:16 ` [PATCH bpf-next v2 01/13] bpf: Refactor btf_kfunc_id_set_contains Ihor Solodrai
2026-01-16 20:16 ` [PATCH bpf-next v2 02/13] bpf: Introduce struct bpf_kfunc_meta Ihor Solodrai
2026-01-16 20:16 ` [PATCH bpf-next v2 03/13] bpf: Verifier support for KF_IMPLICIT_ARGS Ihor Solodrai
2026-01-20  0:03   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 04/13] resolve_btfids: Introduce finalize_btf() step Ihor Solodrai
2026-01-20  0:13   ` Eduard Zingerman
2026-01-20 18:11     ` Ihor Solodrai
2026-01-20 18:19       ` Eduard Zingerman
2026-01-20 18:35         ` Ihor Solodrai
2026-01-20 18:40           ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS Ihor Solodrai
2026-01-16 20:39   ` bot+bpf-ci
2026-01-16 20:44     ` Ihor Solodrai
2026-01-17  0:06   ` Andrii Nakryiko
2026-01-17  6:36     ` Ihor Solodrai
2026-01-20  0:24       ` Eduard Zingerman
2026-01-20  0:55   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 06/13] selftests/bpf: Add tests " Ihor Solodrai
2026-01-20  1:24   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 07/13] bpf: Migrate bpf_wq_set_callback_impl() to KF_IMPLICIT_ARGS Ihor Solodrai
2026-01-20  1:50   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 08/13] HID: Use bpf_wq_set_callback kernel function Ihor Solodrai
2026-01-16 20:16 ` [PATCH bpf-next v2 09/13] bpf: Migrate bpf_task_work_schedule_* kfuncs to KF_IMPLICIT_ARGS Ihor Solodrai
2026-01-20  1:52   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 10/13] bpf: Migrate bpf_stream_vprintk() " Ihor Solodrai
2026-01-20  1:53   ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 11/13] selftests/bpf: Migrate struct_ops_assoc test " Ihor Solodrai
2026-01-20  1:59   ` Eduard Zingerman
2026-01-20 18:20     ` Ihor Solodrai
2026-01-20 18:24       ` Eduard Zingerman
2026-01-16 20:16 ` [PATCH bpf-next v2 12/13] bpf: Remove __prog kfunc arg annotation Ihor Solodrai
2026-01-20  2:01   ` Eduard Zingerman
2026-01-16 20:17 ` [PATCH bpf-next v2 13/13] bpf,docs: Document KF_IMPLICIT_ARGS flag Ihor Solodrai
2026-01-20  1:49 ` [PATCH bpf-next v2 00/13] bpf: Kernel functions with KF_IMPLICIT_ARGS Eduard Zingerman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox