BPF List
* [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs
@ 2023-02-15 23:59 Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL Ilya Leoshkevich
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 23:59 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

v1: https://lore.kernel.org/bpf/20230214212809.242632-1-iii@linux.ibm.com/T/#t
v1 -> v2: Add BPF_HELPER_CALL (Stanislav).
          Add check_subprogs() cleanup - noticed while reviewing the
          code for BPF_HELPER_CALL.
          Drop WARN_ON_ONCE (Stanislav, Alexei).
          Add bpf_jit_get_func_addr() to x86_64 JIT.

Hi,

This series solves the problems outlined in [1, 2, 3]. The main problem
is that addresses of kfuncs in modules do not fit into bpf_insn.imm on
s390x; the secondary problem is that there is a conflict between the
"abstract" BTF ids of XDP metadata functions and their "concrete"
addresses.

The solution is to keep kfunc BTF ids in bpf_insn.imm, and to put the
addresses into bpf_kfunc_desc, which has no size restrictions.

Regtested with test_verifier and test_progs on x86_64 and s390x.
TODO: Need to add bpf_jit_get_func_addr() to arm, sparc and i386 JITs.

[1] https://lore.kernel.org/bpf/Y9%2FyrKZkBK6yzXp+@krava/
[2] https://lore.kernel.org/bpf/20230128000650.1516334-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/bpf/20230128000650.1516334-32-iii@linux.ibm.com/

Best regards,
Ilya

Ilya Leoshkevich (4):
  bpf: Introduce BPF_HELPER_CALL
  bpf: Use BPF_HELPER_CALL in check_subprogs()
  bpf, x86: Use bpf_jit_get_func_addr()
  bpf: Support 64-bit pointers to kfuncs

 arch/x86/net/bpf_jit_comp.c    | 15 ++++--
 include/linux/bpf.h            |  2 +
 include/uapi/linux/bpf.h       |  4 ++
 kernel/bpf/core.c              | 21 ++++++--
 kernel/bpf/disasm.c            |  2 +-
 kernel/bpf/verifier.c          | 95 ++++++++++++----------------------
 tools/include/linux/filter.h   |  2 +-
 tools/include/uapi/linux/bpf.h |  4 ++
 8 files changed, 75 insertions(+), 70 deletions(-)

-- 
2.39.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-15 23:59 [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-15 23:59 ` Ilya Leoshkevich
  2023-02-16 16:37   ` Alexei Starovoitov
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 2/4] bpf: Use BPF_HELPER_CALL in check_subprogs() Ilya Leoshkevich
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 23:59 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

Make the code more readable by introducing a symbolic constant
instead of using 0.

Suggested-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/uapi/linux/bpf.h       |  4 ++++
 kernel/bpf/disasm.c            |  2 +-
 kernel/bpf/verifier.c          | 12 +++++++-----
 tools/include/linux/filter.h   |  2 +-
 tools/include/uapi/linux/bpf.h |  4 ++++
 5 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 1503f61336b6..37f7588d5b2f 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1211,6 +1211,10 @@ enum bpf_link_type {
  */
 #define BPF_PSEUDO_FUNC		4
 
+/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
+ * helper function (see ___BPF_FUNC_MAPPER below for a full list)
+ */
+#define BPF_HELPER_CALL		0
 /* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative
  * offset to another bpf function
  */
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 7b4afb7d96db..c11d9b5a45a9 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -19,7 +19,7 @@ static const char *__func_get_name(const struct bpf_insn_cbs *cbs,
 {
 	BUILD_BUG_ON(ARRAY_SIZE(func_id_str) != __BPF_FUNC_MAX_ID);
 
-	if (!insn->src_reg &&
+	if (insn->src_reg == BPF_HELPER_CALL &&
 	    insn->imm >= 0 && insn->imm < __BPF_FUNC_MAX_ID &&
 	    func_id_str[insn->imm])
 		return func_id_str[insn->imm];
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 272563a0b770..427525fc3791 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2947,7 +2947,8 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
 			/* BPF helpers that invoke callback subprogs are
 			 * equivalent to BPF_PSEUDO_CALL above
 			 */
-			if (insn->src_reg == 0 && is_callback_calling_function(insn->imm))
+			if (insn->src_reg == BPF_HELPER_CALL &&
+			    is_callback_calling_function(insn->imm))
 				return -ENOTSUPP;
 			/* kfunc with imm==0 is invalid and fixup_kfunc_call will
 			 * catch this error later. Make backtracking conservative
@@ -7522,7 +7523,7 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	}
 
 	if (insn->code == (BPF_JMP | BPF_CALL) &&
-	    insn->src_reg == 0 &&
+	    insn->src_reg == BPF_HELPER_CALL &&
 	    insn->imm == BPF_FUNC_timer_set_callback) {
 		struct bpf_verifier_state *async_cb;
 
@@ -14730,7 +14731,7 @@ static int do_check(struct bpf_verifier_env *env)
 				if (BPF_SRC(insn->code) != BPF_K ||
 				    (insn->src_reg != BPF_PSEUDO_KFUNC_CALL
 				     && insn->off != 0) ||
-				    (insn->src_reg != BPF_REG_0 &&
+				    (insn->src_reg != BPF_HELPER_CALL &&
 				     insn->src_reg != BPF_PSEUDO_CALL &&
 				     insn->src_reg != BPF_PSEUDO_KFUNC_CALL) ||
 				    insn->dst_reg != BPF_REG_0 ||
@@ -14740,7 +14741,8 @@ static int do_check(struct bpf_verifier_env *env)
 				}
 
 				if (env->cur_state->active_lock.ptr) {
-					if ((insn->src_reg == BPF_REG_0 && insn->imm != BPF_FUNC_spin_unlock) ||
+					if ((insn->src_reg == BPF_HELPER_CALL &&
+					     insn->imm != BPF_FUNC_spin_unlock) ||
 					    (insn->src_reg == BPF_PSEUDO_CALL) ||
 					    (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
 					     (insn->off != 0 || !is_bpf_graph_api_kfunc(insn->imm)))) {
@@ -16933,7 +16935,7 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 static bool is_bpf_loop_call(struct bpf_insn *insn)
 {
 	return insn->code == (BPF_JMP | BPF_CALL) &&
-		insn->src_reg == 0 &&
+		insn->src_reg == BPF_HELPER_CALL &&
 		insn->imm == BPF_FUNC_loop;
 }
 
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 736bdeccdfe4..78dc208c8d88 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -261,7 +261,7 @@
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP | BPF_CALL,			\
 		.dst_reg = 0,					\
-		.src_reg = 0,					\
+		.src_reg = BPF_HELPER_CALL,			\
 		.off   = 0,					\
 		.imm   = ((FUNC) - BPF_FUNC_unspec) })
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 1503f61336b6..37f7588d5b2f 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1211,6 +1211,10 @@ enum bpf_link_type {
  */
 #define BPF_PSEUDO_FUNC		4
 
+/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
+ * helper function (see ___BPF_FUNC_MAPPER below for a full list)
+ */
+#define BPF_HELPER_CALL		0
 /* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative
  * offset to another bpf function
  */
-- 
2.39.1



* [PATCH RFC bpf-next v2 2/4] bpf: Use BPF_HELPER_CALL in check_subprogs()
  2023-02-15 23:59 [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL Ilya Leoshkevich
@ 2023-02-15 23:59 ` Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 3/4] bpf, x86: Use bpf_jit_get_func_addr() Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  3 siblings, 0 replies; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 23:59 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

The condition src_reg != BPF_PSEUDO_CALL && imm == BPF_FUNC_tail_call
can also be satisfied by a kfunc call, which would needlessly set
has_tail_call. Use src_reg == BPF_HELPER_CALL instead.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 kernel/bpf/verifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 427525fc3791..71158a6786a1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2475,8 +2475,8 @@ static int check_subprogs(struct bpf_verifier_env *env)
 		u8 code = insn[i].code;
 
 		if (code == (BPF_JMP | BPF_CALL) &&
-		    insn[i].imm == BPF_FUNC_tail_call &&
-		    insn[i].src_reg != BPF_PSEUDO_CALL)
+		    insn[i].src_reg == BPF_HELPER_CALL &&
+		    insn[i].imm == BPF_FUNC_tail_call)
 			subprog[cur_subprog].has_tail_call = true;
 		if (BPF_CLASS(code) == BPF_LD &&
 		    (BPF_MODE(code) == BPF_ABS || BPF_MODE(code) == BPF_IND))
-- 
2.39.1



* [PATCH RFC bpf-next v2 3/4] bpf, x86: Use bpf_jit_get_func_addr()
  2023-02-15 23:59 [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 2/4] bpf: Use BPF_HELPER_CALL in check_subprogs() Ilya Leoshkevich
@ 2023-02-15 23:59 ` Ilya Leoshkevich
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  3 siblings, 0 replies; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 23:59 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

Preparation for moving the kfunc address out of bpf_insn.imm.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/x86/net/bpf_jit_comp.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 1056bbf55b17..f722f431ba6f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -964,7 +964,8 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
 
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
-		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
+		  int oldproglen, struct jit_context *ctx, bool jmp_padding,
+		  bool extra_pass)
 {
 	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
@@ -1000,9 +1001,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
+		bool func_addr_fixed;
 		u8 b2 = 0, b3 = 0;
 		u8 *start_of_ldx;
 		s64 jmp_offset;
+		u64 func_addr;
 		s16 insn_off;
 		u8 jmp_cond;
 		u8 *func;
@@ -1536,7 +1539,12 @@ st:			if (is_imm8(insn->off))
 		case BPF_JMP | BPF_CALL: {
 			int offs;
 
-			func = (u8 *) __bpf_call_base + imm32;
+			err = bpf_jit_get_func_addr(bpf_prog, insn, extra_pass,
+						    &func_addr,
+						    &func_addr_fixed);
+			if (err < 0)
+				return err;
+			func = (u8 *)(unsigned long)func_addr;
 			if (tail_call_reachable) {
 				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 				EMIT3_off32(0x48, 0x8B, 0x85,
@@ -2518,7 +2526,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	for (pass = 0; pass < MAX_PASSES || image; pass++) {
 		if (!padding && pass >= PADDING_PASSES)
 			padding = true;
-		proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx, padding);
+		proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx,
+				 padding, extra_pass);
 		if (proglen <= 0) {
 out_image:
 			image = NULL;
-- 
2.39.1



* [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 23:59 [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (2 preceding siblings ...)
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 3/4] bpf, x86: Use bpf_jit_get_func_addr() Ilya Leoshkevich
@ 2023-02-15 23:59 ` Ilya Leoshkevich
  2023-02-17  9:40   ` Jiri Olsa
  3 siblings, 1 reply; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 23:59 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

test_ksyms_module fails to emit a kfunc call targeting a module on
s390x, because the verifier stores the difference between the kfunc
address and __bpf_call_base in bpf_insn.imm, which is an s32, while
modules are roughly (1 << 42) bytes away from the kernel on s390x.

Fix this by keeping the BTF id in bpf_insn.imm for
BPF_PSEUDO_KFUNC_CALLs, and storing the absolute address in
bpf_kfunc_desc, which JITs retrieve as usual by calling
bpf_jit_get_func_addr().

Introduce bpf_get_kfunc_addr() instead of exposing both
find_kfunc_desc() and struct bpf_kfunc_desc.

This also fixes the problem with XDP metadata functions outlined in
the description of commit 63d7b53ab59f ("s390/bpf: Implement
bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
id lookups. This eliminates the inconsistency between "abstract" XDP
metadata functions' BTF ids and their concrete addresses.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/linux/bpf.h   |  2 ++
 kernel/bpf/core.c     | 21 ++++++++++--
 kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
 3 files changed, 44 insertions(+), 58 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index be34f7deb6c3..83ce94d11484 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr);
 struct bpf_core_ctx {
 	struct bpf_verifier_log *log;
 	const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3390961c4e10..3644b90650b4 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
 	s16 off = insn->off;
 	s32 imm = insn->imm;
+	bool fixed;
 	u8 *addr;
+	int err;
 
-	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-	if (!*func_addr_fixed) {
+	switch (insn->src_reg) {
+	case BPF_PSEUDO_CALL:
 		/* Place-holder address till the last pass has collected
 		 * all addresses for JITed subprograms in which case we
 		 * can pick them up from prog->aux.
@@ -1200,15 +1202,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 			addr = (u8 *)prog->aux->func[off]->bpf_func;
 		else
 			return -EINVAL;
-	} else {
+		fixed = false;
+		break;
+	case BPF_HELPER_CALL:
 		/* Address of a BPF helper call. Since part of the core
 		 * kernel, it's always at a fixed location. __bpf_call_base
 		 * and the helper with imm relative to it are both in core
 		 * kernel.
 		 */
 		addr = (u8 *)__bpf_call_base + imm;
+		fixed = true;
+		break;
+	case BPF_PSEUDO_KFUNC_CALL:
+		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+		if (err)
+			return err;
+		fixed = true;
+		break;
+	default:
+		return -EINVAL;
 	}
 
+	*func_addr_fixed = fixed;
 	*func_addr = (unsigned long)addr;
 	return 0;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 71158a6786a1..47d390923610 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
 struct bpf_kfunc_desc {
 	struct btf_func_model func_model;
 	u32 func_id;
-	s32 imm;
 	u16 offset;
+	unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr)
+{
+	const struct bpf_kfunc_desc *desc;
+
+	desc = find_kfunc_desc(prog, func_id, offset);
+	if (!desc)
+		return -EFAULT;
+
+	*func_addr = (u8 *)desc->addr;
+	return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
 					 s16 offset)
 {
@@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
 	struct btf *desc_btf;
-	unsigned long call_imm;
 	unsigned long addr;
+	void *xdp_kfunc;
 	int err;
 
 	prog_aux = env->prog->aux;
@@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
-	call_imm = BPF_CALL_IMM(addr);
-	/* Check whether or not the relative offset overflows desc->imm */
-	if ((unsigned long)(s32)call_imm != call_imm) {
-		verbose(env, "address of kernel function %s is out of range\n",
-			func_name);
-		return -EINVAL;
-	}
-
 	if (bpf_dev_bound_kfunc_id(func_id)) {
 		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
 		if (err)
 			return err;
+
+		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+		if (xdp_kfunc)
+			addr = (unsigned long)xdp_kfunc;
+		/* fallback to default kfunc when not supported by netdev */
 	}
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = call_imm;
 	desc->offset = offset;
+	desc->addr = addr;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
@@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
-{
-	const struct bpf_kfunc_desc *d0 = a;
-	const struct bpf_kfunc_desc *d1 = b;
-
-	if (d0->imm > d1->imm)
-		return 1;
-	else if (d0->imm < d1->imm)
-		return -1;
-	return 0;
-}
-
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
-{
-	struct bpf_kfunc_desc_tab *tab;
-
-	tab = prog->aux->kfunc_tab;
-	if (!tab)
-		return;
-
-	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-	     kfunc_desc_cmp_by_imm, NULL);
-}
-
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
 {
 	return !!prog->aux->kfunc_tab;
@@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn)
 {
 	const struct bpf_kfunc_desc desc = {
-		.imm = insn->imm,
+		.func_id = insn->imm,
+		.offset = insn->off,
 	};
 	const struct bpf_kfunc_desc *res;
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	res = bsearch(&desc, tab->descs, tab->nr_descs,
-		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 
 	return res ? &res->func_model : NULL;
 }
@@ -16269,7 +16256,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			    struct bpf_insn *insn_buf, int insn_idx, int *cnt)
 {
 	const struct bpf_kfunc_desc *desc;
-	void *xdp_kfunc;
 
 	if (!insn->imm) {
 		verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
@@ -16277,20 +16263,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	*cnt = 0;
-
-	if (bpf_dev_bound_kfunc_id(insn->imm)) {
-		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
-		if (xdp_kfunc) {
-			insn->imm = BPF_CALL_IMM(xdp_kfunc);
-			return 0;
-		}
-
-		/* fallback to default kfunc when not supported by netdev */
-	}
-
-	/* insn->imm has the btf func_id. Replace it with
-	 * an address (relative to __bpf_call_base).
-	 */
 	desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
 	if (!desc) {
 		verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
@@ -16298,7 +16270,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EFAULT;
 	}
 
-	insn->imm = desc->imm;
 	if (insn->off)
 		return 0;
 	if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
@@ -16852,8 +16823,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
-	sort_kfunc_descs_by_imm(env->prog);
-
 	return 0;
 }
 
-- 
2.39.1



* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL Ilya Leoshkevich
@ 2023-02-16 16:37   ` Alexei Starovoitov
  2023-02-16 17:25     ` Stanislav Fomichev
  0 siblings, 1 reply; 14+ messages in thread
From: Alexei Starovoitov @ 2023-02-16 16:37 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev

On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> Make the code more readable by introducing a symbolic constant
> instead of using 0.
>
> Suggested-by: Stanislav Fomichev <sdf@google.com>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>  include/uapi/linux/bpf.h       |  4 ++++
>  kernel/bpf/disasm.c            |  2 +-
>  kernel/bpf/verifier.c          | 12 +++++++-----
>  tools/include/linux/filter.h   |  2 +-
>  tools/include/uapi/linux/bpf.h |  4 ++++
>  5 files changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 1503f61336b6..37f7588d5b2f 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1211,6 +1211,10 @@ enum bpf_link_type {
>   */
>  #define BPF_PSEUDO_FUNC                4
>
> +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
> + * helper function (see ___BPF_FUNC_MAPPER below for a full list)
> + */
> +#define BPF_HELPER_CALL                0

I don't like this "cleanup".
The code reads fine as-is.


* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-16 16:37   ` Alexei Starovoitov
@ 2023-02-16 17:25     ` Stanislav Fomichev
  2023-02-16 17:33       ` Alexei Starovoitov
  0 siblings, 1 reply; 14+ messages in thread
From: Stanislav Fomichev @ 2023-02-16 17:25 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Ilya Leoshkevich, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On 02/16, Alexei Starovoitov wrote:
> On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > Make the code more readable by introducing a symbolic constant
> > instead of using 0.
> >
> > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > ---
> >  include/uapi/linux/bpf.h       |  4 ++++
> >  kernel/bpf/disasm.c            |  2 +-
> >  kernel/bpf/verifier.c          | 12 +++++++-----
> >  tools/include/linux/filter.h   |  2 +-
> >  tools/include/uapi/linux/bpf.h |  4 ++++
> >  5 files changed, 17 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 1503f61336b6..37f7588d5b2f 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> >   */
> >  #define BPF_PSEUDO_FUNC                4
> >
> > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
> > + * helper function (see ___BPF_FUNC_MAPPER below for a full list)
> > + */
> > +#define BPF_HELPER_CALL                0

> I don't like this "cleanup".
> The code reads fine as-is.

Even in the context of patch 4? There would be the following switch
without BPF_HELPER_CALL:

switch (insn->src_reg) {
case 0:
	...
	break;

case BPF_PSEUDO_CALL:
	...
	break;

case BPF_PSEUDO_KFUNC_CALL:
	...
	break;
}

That 'case 0' feels like it deserves a name. But up to you, I'm fine
either way.


* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-16 17:25     ` Stanislav Fomichev
@ 2023-02-16 17:33       ` Alexei Starovoitov
  2023-02-16 18:03         ` Stanislav Fomichev
  0 siblings, 1 reply; 14+ messages in thread
From: Alexei Starovoitov @ 2023-02-16 17:33 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Ilya Leoshkevich, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On Thu, Feb 16, 2023 at 9:25 AM Stanislav Fomichev <sdf@google.com> wrote:
>
> On 02/16, Alexei Starovoitov wrote:
> > On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > wrote:
> > >
> > > Make the code more readable by introducing a symbolic constant
> > > instead of using 0.
> > >
> > > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > ---
> > >  include/uapi/linux/bpf.h       |  4 ++++
> > >  kernel/bpf/disasm.c            |  2 +-
> > >  kernel/bpf/verifier.c          | 12 +++++++-----
> > >  tools/include/linux/filter.h   |  2 +-
> > >  tools/include/uapi/linux/bpf.h |  4 ++++
> > >  5 files changed, 17 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > index 1503f61336b6..37f7588d5b2f 100644
> > > --- a/include/uapi/linux/bpf.h
> > > +++ b/include/uapi/linux/bpf.h
> > > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> > >   */
> > >  #define BPF_PSEUDO_FUNC                4
> > >
> > > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
> > > + * helper function (see ___BPF_FUNC_MAPPER below for a full list)
> > > + */
> > > +#define BPF_HELPER_CALL                0
>
> > I don't like this "cleanup".
> > The code reads fine as-is.
>
> Even in the context of patch 4? There would be the following switch
> without BPF_HELPER_CALL:
>
> switch (insn->src_reg) {
> case 0:
>         ...
>         break;
>
> case BPF_PSEUDO_CALL:
>         ...
>         break;
>
> case BPF_PSEUDO_KFUNC_CALL:
>         ...
>         break;
> }
>
> That 'case 0' feels like it deserves a name. But up to you, I'm fine
> either way.

It's philosophical.
Some people insist on if (ptr == NULL). I insist on if (!ptr).
That's why canonical bpf progs are written as:
val = bpf_map_lookup();
if (!val) ...
zero is zero. It doesn't need #define.


* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-16 17:33       ` Alexei Starovoitov
@ 2023-02-16 18:03         ` Stanislav Fomichev
  2023-02-17 10:57           ` Ilya Leoshkevich
  0 siblings, 1 reply; 14+ messages in thread
From: Stanislav Fomichev @ 2023-02-16 18:03 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Ilya Leoshkevich, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On 02/16, Alexei Starovoitov wrote:
> On Thu, Feb 16, 2023 at 9:25 AM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > On 02/16, Alexei Starovoitov wrote:
> > > On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > > wrote:
> > > >
> > > > Make the code more readable by introducing a symbolic constant
> > > > instead of using 0.
> > > >
> > > > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > ---
> > > >  include/uapi/linux/bpf.h       |  4 ++++
> > > >  kernel/bpf/disasm.c            |  2 +-
> > > >  kernel/bpf/verifier.c          | 12 +++++++-----
> > > >  tools/include/linux/filter.h   |  2 +-
> > > >  tools/include/uapi/linux/bpf.h |  4 ++++
> > > >  5 files changed, 17 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > index 1503f61336b6..37f7588d5b2f 100644
> > > > --- a/include/uapi/linux/bpf.h
> > > > +++ b/include/uapi/linux/bpf.h
> > > > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> > > >   */
> > > >  #define BPF_PSEUDO_FUNC                4
> > > >
> > > > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm == index of a bpf
> > > > + * helper function (see ___BPF_FUNC_MAPPER below for a full list)
> > > > + */
> > > > +#define BPF_HELPER_CALL                0
> >
> > > I don't like this "cleanup".
> > > The code reads fine as-is.
> >
> > Even in the context of patch 4? There would be the following switch
> > without BPF_HELPER_CALL:
> >
> > switch (insn->src_reg) {
> > case 0:
> >         ...
> >         break;
> >
> > case BPF_PSEUDO_CALL:
> >         ...
> >         break;
> >
> > case BPF_PSEUDO_KFUNC_CALL:
> >         ...
> >         break;
> > }
> >
> > That 'case 0' feels like it deserves a name. But up to you, I'm fine
> > either way.

> It's philosophical.
> Some people insist on if (ptr == NULL). I insist on if (!ptr).
> That's why canonical bpf progs are written as:
> val = bpf_map_lookup();
> if (!val) ...
> zero is zero. It doesn't need #define.

Are you sure we still want to apply the same logic here for src_reg? I
agree that doing src_reg vs !src_reg made sense when we had a "helper"
vs "non-helper" (bpf2bpf) situation. However now this src_reg feels more
like an enum. And since we have an enum value for 1 and 2, it feels
natural to have another one for 0?

That second patch from the series ([0]) might be a good example of why
we actually need it. I'm assuming at some point we've had:
#define BPF_PSEUDO_CALL 1

So we ended up writing `src_reg != BPF_PSEUDO_CALL` instead of actually
doing `src_reg == BPF_HELPER_CALL` (aka `src_reg == 0`).
Afterwards, we've added BPF_PSEUDO_KFUNC_CALL=2 which broke our previous
src_reg vs !src_reg assumptions...

[0]: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/T/#mf87a26ef48a909b62ce950639acfdf5b296b487b


* Re: [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 23:59 ` [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-17  9:40   ` Jiri Olsa
  2023-02-17 10:53     ` Ilya Leoshkevich
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2023-02-17  9:40 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Stanislav Fomichev

On Thu, Feb 16, 2023 at 12:59:31AM +0100, Ilya Leoshkevich wrote:

SNIP

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 71158a6786a1..47d390923610 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
>  struct bpf_kfunc_desc {
>  	struct btf_func_model func_model;
>  	u32 func_id;
> -	s32 imm;
>  	u16 offset;
> +	unsigned long addr;
>  };
>  
>  struct bpf_kfunc_btf {
> @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
>  		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
>  }
>  
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> +		       u8 **func_addr)
> +{
> +	const struct bpf_kfunc_desc *desc;
> +
> +	desc = find_kfunc_desc(prog, func_id, offset);
> +	if (!desc)
> +		return -EFAULT;

should we warn here? this should always succeed, right?

jirka

> +
> +	*func_addr = (u8 *)desc->addr;
> +	return 0;
> +}
> +

SNIP
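
[Editor's note: as a rough userspace model of the data-structure change in the quoted hunk -- not the kernel code itself; the names and the -EFAULT value are borrowed for illustration -- the 64-bit kfunc address can be kept in a sorted descriptor table keyed by BTF id and offset, instead of being squeezed into the 32-bit insn->imm field:]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch of the patch's bpf_kfunc_desc table: the JIT asks
 * for the full 64-bit target address by (BTF func_id, offset) instead
 * of reading a possibly truncated value out of the s32 imm field. */
struct kfunc_desc {
	uint32_t func_id;
	uint16_t offset;
	uint64_t addr;	/* the kernel uses unsigned long here */
};

/* Order descriptors by (func_id, offset), as the verifier does. */
static int desc_cmp(const void *a, const void *b)
{
	const struct kfunc_desc *da = a, *db = b;

	if (da->func_id != db->func_id)
		return da->func_id < db->func_id ? -1 : 1;
	if (da->offset != db->offset)
		return da->offset < db->offset ? -1 : 1;
	return 0;
}

/* Analogue of bpf_get_kfunc_addr(): 0 on success, -14 (EFAULT) when no
 * descriptor was recorded for the call site. `tab` must be sorted. */
int get_kfunc_addr(const struct kfunc_desc *tab, size_t n,
		   uint32_t func_id, uint16_t offset, uint64_t *func_addr)
{
	struct kfunc_desc key = { .func_id = func_id, .offset = offset };
	const struct kfunc_desc *desc;

	desc = bsearch(&key, tab, n, sizeof(*tab), desc_cmp);
	if (!desc)
		return -14;
	*func_addr = desc->addr;
	return 0;
}
```

A module kfunc on s390x can live at an address well above 4 GiB, which cannot round-trip through the signed 32-bit imm field; the descriptor's 64-bit slot holds it without loss.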


* Re: [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs
  2023-02-17  9:40   ` Jiri Olsa
@ 2023-02-17 10:53     ` Ilya Leoshkevich
  0 siblings, 0 replies; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-17 10:53 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Stanislav Fomichev

On Fri, 2023-02-17 at 10:40 +0100, Jiri Olsa wrote:
> On Thu, Feb 16, 2023 at 12:59:31AM +0100, Ilya Leoshkevich wrote:
> 
> SNIP
> 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 71158a6786a1..47d390923610 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
> >  struct bpf_kfunc_desc {
> >         struct btf_func_model func_model;
> >         u32 func_id;
> > -       s32 imm;
> >         u16 offset;
> > +       unsigned long addr;
> >  };
> >  
> >  struct bpf_kfunc_btf {
> > @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
> >                        sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
> >  }
> >  
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> > +                      u8 **func_addr)
> > +{
> > +       const struct bpf_kfunc_desc *desc;
> > +
> > +       desc = find_kfunc_desc(prog, func_id, offset);
> > +       if (!desc)
> > +               return -EFAULT;
> 
> should we warn here? this should always succeed, right?

Hi Jiri!

This was discussed here:

https://lore.kernel.org/bpf/20230214212809.242632-1-iii@linux.ibm.com/T/#m3a4748997d31f6840c50b0bf2ccafe9d24f9218f

The conclusion was that the existing code does not warn in situations
like this, so we should not warn here either.

Best regards,
Ilya

> 
> jirka
> 
> > +
> > +       *func_addr = (u8 *)desc->addr;
> > +       return 0;
> > +}
> > +
> 
> SNIP



* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-16 18:03         ` Stanislav Fomichev
@ 2023-02-17 10:57           ` Ilya Leoshkevich
  2023-02-17 16:19             ` Alexei Starovoitov
  0 siblings, 1 reply; 14+ messages in thread
From: Ilya Leoshkevich @ 2023-02-17 10:57 UTC (permalink / raw)
  To: Stanislav Fomichev, Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Thu, 2023-02-16 at 10:03 -0800, Stanislav Fomichev wrote:
> On 02/16, Alexei Starovoitov wrote:
> > On Thu, Feb 16, 2023 at 9:25 AM Stanislav Fomichev <sdf@google.com>
> > wrote:
> > > 
> > > On 02/16, Alexei Starovoitov wrote:
> > > > On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich
> > > > <iii@linux.ibm.com>
> > > > wrote:
> > > > > 
> > > > > Make the code more readable by introducing a symbolic
> > > > > constant
> > > > > instead of using 0.
> > > > > 
> > > > > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > ---
> > > > >  include/uapi/linux/bpf.h       |  4 ++++
> > > > >  kernel/bpf/disasm.c            |  2 +-
> > > > >  kernel/bpf/verifier.c          | 12 +++++++-----
> > > > >  tools/include/linux/filter.h   |  2 +-
> > > > >  tools/include/uapi/linux/bpf.h |  4 ++++
> > > > >  5 files changed, 17 insertions(+), 7 deletions(-)
> > > > > 
> > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > index 1503f61336b6..37f7588d5b2f 100644
> > > > > --- a/include/uapi/linux/bpf.h
> > > > > +++ b/include/uapi/linux/bpf.h
> > > > > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> > > > >   */
> > > > >  #define BPF_PSEUDO_FUNC                4
> > > > > 
> > > > > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm ==
> > > > > + * index of a bpf helper function (see ___BPF_FUNC_MAPPER below
> > > > > + * for a full list)
> > > > > + */
> > > > > +#define BPF_HELPER_CALL                0
> > > 
> > > > I don't like this "cleanup".
> > > > The code reads fine as-is.
> > > 
> > > Even in the context of patch 4? There would be the following
> > > switch
> > > without BPF_HELPER_CALL:
> > > 
> > > switch (insn->src_reg) {
> > > case 0:
> > >         ...
> > >         break;
> > > 
> > > case BPF_PSEUDO_CALL:
> > >         ...
> > >         break;
> > > 
> > > case BPF_PSEUDO_KFUNC_CALL:
> > >         ...
> > >         break;
> > > }
> > > 
> > > That 'case 0' feels like it deserves a name. But up to you, I'm
> > > fine
> > > either way.
> 
> > It's philosophical.
> > Some people insist on if (ptr == NULL). I insist on if (!ptr).
> > That's why canonical bpf progs are written as:
> > val = bpf_map_lookup();
> > if (!val) ...
> > zero is zero. It doesn't need #define.
> 
> Are you sure we still want to apply the same logic here for src_reg?
> I
> agree that doing src_reg vs !src_reg made sense when we had a
> "helper"
> vs "non-helper" (bpf2bpf) situation. However now this src_reg feels
> more
> like an enum. And since we have an enum value for 1 and 2, it feels
> natural to have another one for 0?
> 
> That second patch from the series ([0]) might be a good example on
> why
> we actually need it. I'm assuming at some point we've had:
> #define BPF_PSEUDO_CALL 1
> 
> So we ended up writing `src_reg != BPF_PSEUDO_CALL` instead of
> actually
> doing `src_reg == BPF_HELPER_CALL` (aka `src_reg == 0`).
> Afterwards, we've added BPF_PSEUDO_KFUNC_CALL=2 which broke our
> previous
> src_reg vs !src_reg assumptions...
> 
> [0]:  
> https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/T/#mf87a26ef48a909b62ce950639acfdf5b296b487b

FWIW the helper checks before this series had inconsistent style:

- !insn->src_reg
- insn->src_reg == 0
- insn->src_reg != BPF_REG_0
- insn[i].src_reg != BPF_PSEUDO_CALL

Now at least it's the same style everywhere, and also it's easy to
grep for "where do we check for helper calls".


* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-17 10:57           ` Ilya Leoshkevich
@ 2023-02-17 16:19             ` Alexei Starovoitov
  2023-02-17 17:08               ` Stanislav Fomichev
  0 siblings, 1 reply; 14+ messages in thread
From: Alexei Starovoitov @ 2023-02-17 16:19 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Stanislav Fomichev, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On Fri, Feb 17, 2023 at 2:57 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Thu, 2023-02-16 at 10:03 -0800, Stanislav Fomichev wrote:
> > On 02/16, Alexei Starovoitov wrote:
> > > On Thu, Feb 16, 2023 at 9:25 AM Stanislav Fomichev <sdf@google.com>
> > > wrote:
> > > >
> > > > On 02/16, Alexei Starovoitov wrote:
> > > > > On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich
> > > > > <iii@linux.ibm.com>
> > > > > wrote:
> > > > > >
> > > > > > Make the code more readable by introducing a symbolic
> > > > > > constant
> > > > > > instead of using 0.
> > > > > >
> > > > > > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > > ---
> > > > > >  include/uapi/linux/bpf.h       |  4 ++++
> > > > > >  kernel/bpf/disasm.c            |  2 +-
> > > > > >  kernel/bpf/verifier.c          | 12 +++++++-----
> > > > > >  tools/include/linux/filter.h   |  2 +-
> > > > > >  tools/include/uapi/linux/bpf.h |  4 ++++
> > > > > >  5 files changed, 17 insertions(+), 7 deletions(-)
> > > > > >
> > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > index 1503f61336b6..37f7588d5b2f 100644
> > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> > > > > >   */
> > > > > >  #define BPF_PSEUDO_FUNC                4
> > > > > >
> > > > > > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm ==
> > > > > > + * index of a bpf helper function (see ___BPF_FUNC_MAPPER
> > > > > > + * below for a full list)
> > > > > > + */
> > > > > > +#define BPF_HELPER_CALL                0
> > > >
> > > > > I don't like this "cleanup".
> > > > > The code reads fine as-is.
> > > >
> > > > Even in the context of patch 4? There would be the following
> > > > switch
> > > > without BPF_HELPER_CALL:
> > > >
> > > > switch (insn->src_reg) {
> > > > case 0:
> > > >         ...
> > > >         break;
> > > >
> > > > case BPF_PSEUDO_CALL:
> > > >         ...
> > > >         break;
> > > >
> > > > case BPF_PSEUDO_KFUNC_CALL:
> > > >         ...
> > > >         break;
> > > > }
> > > >
> > > > That 'case 0' feels like it deserves a name. But up to you, I'm
> > > > fine
> > > > either way.
> >
> > > It's philosophical.
> > > Some people insist on if (ptr == NULL). I insist on if (!ptr).
> > > That's why canonical bpf progs are written as:
> > > val = bpf_map_lookup();
> > > if (!val) ...
> > > zero is zero. It doesn't need #define.
> >
> > Are you sure we still want to apply the same logic here for src_reg?
> > I
> > agree that doing src_reg vs !src_reg made sense when we had a
> > "helper"
> > vs "non-helper" (bpf2bpf) situation. However now this src_reg feels
> > more
> > like an enum. And since we have an enum value for 1 and 2, it feels
> > natural to have another one for 0?
> >
> > That second patch from the series ([0]) might be a good example on
> > why
> > we actually need it. I'm assuming at some point we've had:
> > #define BPF_PSEUDO_CALL 1
> >
> > So we ended up writing `src_reg != BPF_PSEUDO_CALL` instead of
> > actually
> > doing `src_reg == BPF_HELPER_CALL` (aka `src_reg == 0`).
> > Afterwards, we've added BPF_PSEUDO_KFUNC_CALL=2 which broke our
> > previous
> > src_reg vs !src_reg assumptions...
> >
> > [0]:
> > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/T/#mf87a26ef48a909b62ce950639acfdf5b296b487b
>
> FWIW the helper checks before this series had inconsistent style:
>
> - !insn->src_reg
> - insn->src_reg == 0
> - insn->src_reg != BPF_REG_0
> - insn[i].src_reg != BPF_PSEUDO_CALL
>
> Now at least it's the same style everywhere, and also it's easy to
> grep for "where do we check for helper calls".

The above checks are not equivalent.
Comparing src_reg with BPF_REG_0 makes sense in one context
and doesn't in the other.
It's never ok to add stuff to uapi when it works as-is.
I also don't buy theoretical arguments about future additions
and how something will be cleaner in the future because
we predicted it so well today.


* Re: [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL
  2023-02-17 16:19             ` Alexei Starovoitov
@ 2023-02-17 17:08               ` Stanislav Fomichev
  0 siblings, 0 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2023-02-17 17:08 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Ilya Leoshkevich, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On Fri, Feb 17, 2023 at 8:19 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Feb 17, 2023 at 2:57 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > On Thu, 2023-02-16 at 10:03 -0800, Stanislav Fomichev wrote:
> > > On 02/16, Alexei Starovoitov wrote:
> > > > On Thu, Feb 16, 2023 at 9:25 AM Stanislav Fomichev <sdf@google.com>
> > > > wrote:
> > > > >
> > > > > On 02/16, Alexei Starovoitov wrote:
> > > > > > On Wed, Feb 15, 2023 at 3:59 PM Ilya Leoshkevich
> > > > > > <iii@linux.ibm.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > Make the code more readable by introducing a symbolic
> > > > > > > constant
> > > > > > > instead of using 0.
> > > > > > >
> > > > > > > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > > > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > > > ---
> > > > > > >  include/uapi/linux/bpf.h       |  4 ++++
> > > > > > >  kernel/bpf/disasm.c            |  2 +-
> > > > > > >  kernel/bpf/verifier.c          | 12 +++++++-----
> > > > > > >  tools/include/linux/filter.h   |  2 +-
> > > > > > >  tools/include/uapi/linux/bpf.h |  4 ++++
> > > > > > >  5 files changed, 17 insertions(+), 7 deletions(-)
> > > > > > >
> > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > index 1503f61336b6..37f7588d5b2f 100644
> > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > @@ -1211,6 +1211,10 @@ enum bpf_link_type {
> > > > > > >   */
> > > > > > >  #define BPF_PSEUDO_FUNC                4
> > > > > > >
> > > > > > > +/* when bpf_call->src_reg == BPF_HELPER_CALL, bpf_call->imm ==
> > > > > > > + * index of a bpf helper function (see ___BPF_FUNC_MAPPER
> > > > > > > + * below for a full list)
> > > > > > > + */
> > > > > > > +#define BPF_HELPER_CALL                0
> > > > >
> > > > > > I don't like this "cleanup".
> > > > > > The code reads fine as-is.
> > > > >
> > > > > Even in the context of patch 4? There would be the following
> > > > > switch
> > > > > without BPF_HELPER_CALL:
> > > > >
> > > > > switch (insn->src_reg) {
> > > > > case 0:
> > > > >         ...
> > > > >         break;
> > > > >
> > > > > case BPF_PSEUDO_CALL:
> > > > >         ...
> > > > >         break;
> > > > >
> > > > > case BPF_PSEUDO_KFUNC_CALL:
> > > > >         ...
> > > > >         break;
> > > > > }
> > > > >
> > > > > That 'case 0' feels like it deserves a name. But up to you, I'm
> > > > > fine
> > > > > either way.
> > >
> > > > It's philosophical.
> > > > Some people insist on if (ptr == NULL). I insist on if (!ptr).
> > > > That's why canonical bpf progs are written as:
> > > > val = bpf_map_lookup();
> > > > if (!val) ...
> > > > zero is zero. It doesn't need #define.
> > >
> > > Are you sure we still want to apply the same logic here for src_reg?
> > > I
> > > agree that doing src_reg vs !src_reg made sense when we had a
> > > "helper"
> > > vs "non-helper" (bpf2bpf) situation. However now this src_reg feels
> > > more
> > > like an enum. And since we have an enum value for 1 and 2, it feels
> > > natural to have another one for 0?
> > >
> > > That second patch from the series ([0]) might be a good example on
> > > why
> > > we actually need it. I'm assuming at some point we've had:
> > > #define BPF_PSEUDO_CALL 1
> > >
> > > So we ended up writing `src_reg != BPF_PSEUDO_CALL` instead of
> > > actually
> > > doing `src_reg == BPF_HELPER_CALL` (aka `src_reg == 0`).
> > > Afterwards, we've added BPF_PSEUDO_KFUNC_CALL=2 which broke our
> > > previous
> > > src_reg vs !src_reg assumptions...
> > >
> > > [0]:
> > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/T/#mf87a26ef48a909b62ce950639acfdf5b296b487b
> >
> > FWIW the helper checks before this series had inconsistent style:
> >
> > - !insn->src_reg
> > - insn->src_reg == 0
> > - insn->src_reg != BPF_REG_0
> > - insn[i].src_reg != BPF_PSEUDO_CALL
> >
> > Now at least it's the same style everywhere, and also it's easy to
> > grep for "where do we check for helper calls".
>
> The above checks are not equivalent.
> Comparing src_reg with BPF_REG_0 makes sense in one context
> and doesn't in the other.
> It's never ok to add stuff to uapi when it works as-is.
> I also don't buy theoretical arguments about future additions
> and how something will be cleaner in the future because
> we predicted it so well today.

SG! Then let's maybe respin without this part? I might have derailed
the conversation too much from the actual issue :-[


end of thread, other threads:[~2023-02-17 17:08 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-15 23:59 [PATCH RFC bpf-next v2 0/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-15 23:59 ` [PATCH RFC bpf-next v2 1/4] bpf: Introduce BPF_HELPER_CALL Ilya Leoshkevich
2023-02-16 16:37   ` Alexei Starovoitov
2023-02-16 17:25     ` Stanislav Fomichev
2023-02-16 17:33       ` Alexei Starovoitov
2023-02-16 18:03         ` Stanislav Fomichev
2023-02-17 10:57           ` Ilya Leoshkevich
2023-02-17 16:19             ` Alexei Starovoitov
2023-02-17 17:08               ` Stanislav Fomichev
2023-02-15 23:59 ` [PATCH RFC bpf-next v2 2/4] bpf: Use BPF_HELPER_CALL in check_subprogs() Ilya Leoshkevich
2023-02-15 23:59 ` [PATCH RFC bpf-next v2 3/4] bpf, x86: Use bpf_jit_get_func_addr() Ilya Leoshkevich
2023-02-15 23:59 ` [PATCH RFC bpf-next v2 4/4] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-17  9:40   ` Jiri Olsa
2023-02-17 10:53     ` Ilya Leoshkevich

This is a public inbox; see mirroring instructions for how to clone and mirror all data and code used for this inbox.