public inbox for bpf@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH bpf-next v2 0/2] bpf, x64: Fix tailcall infinite loop
@ 2023-08-18 15:12 Leon Hwang
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 1/2] " Leon Hwang
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 2/2] selftests/bpf: Add testcases for tailcall infinite loop fixing Leon Hwang
  0 siblings, 2 replies; 8+ messages in thread
From: Leon Hwang @ 2023-08-18 15:12 UTC (permalink / raw)
  To: ast, daniel, andrii, maciej.fijalkowski; +Cc: song, hffilwlqm, bpf

This patch series fixes a tailcall infinite loop.

Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
handling in JIT"), tailcalls on x64 work better than before.

Since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
for x64 JIT"), tailcalls are able to run in BPF subprograms on x64.

Since commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
to other BPF programs"), a BPF program is able to trace other BPF programs.

What happens when all of them are combined?

1. FENTRY/FEXIT on a BPF subprogram.
2. A tailcall runs in the BPF subprogram.
3. The tailcall calls itself.

As a result, a tailcall infinite loop comes up, and the loop halts the
machine.

As we know, in a tail call context, the tail_call_cnt is propagated via the
stack and the RAX register between BPF subprograms. So the same has to be
done in trampolines.

How did I discover the bug?

Since commit 7f6e4312e15a5c37 ("bpf: Limit caller's stack depth 256 for
subprogs with tailcalls"), the total stack size is limited to around 8KiB.
So I wrote some bpf progs to validate the stack consumption: tailcalls
running in bpf2bpf, with FENTRY/FEXIT tracing on the bpf2bpf calls[1].

At that time, I accidentally created a tailcall loop, and the loop halted
my VM. Without the loop, the bpf progs would consume over 8KiB of stack,
but that _stack-overflow_ did not halt my VM.

With bpf_printk(), I confirmed that the tail call count limit did not work
as expected. So I read the code and fixed it.

Unfortunately, I have only fixed it on x64, not on the other arches. As a
result, CI tests failed because this bug has not been fixed on s390x.

Help with the other arches is requested.

[1]: https://github.com/Asphaltt/learn-by-example/tree/main/ebpf/tailcall-stackoverflow

Leon Hwang (2):
  bpf, x64: Fix tailcall infinite loop
  selftests/bpf: Add testcases for tailcall infinite loop fixing

 arch/x86/net/bpf_jit_comp.c                   |  40 +++-
 include/linux/bpf.h                           |   5 +
 kernel/bpf/trampoline.c                       |   4 +-
 kernel/bpf/verifier.c                         |  31 ++-
 .../selftests/bpf/prog_tests/tailcalls.c      | 194 +++++++++++++++++-
 .../bpf/progs/tailcall_bpf2bpf_fentry.c       |  18 ++
 .../bpf/progs/tailcall_bpf2bpf_fexit.c        |  18 ++
 7 files changed, 292 insertions(+), 18 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c


base-commit: 9930e4af4b509bcf6f060b09b16884f26102d110
-- 
2.41.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-18 15:12 [RFC PATCH bpf-next v2 0/2] bpf, x64: Fix tailcall infinite loop Leon Hwang
@ 2023-08-18 15:12 ` Leon Hwang
  2023-08-18 15:25   ` Leon Hwang
  2023-08-21 22:33   ` Alexei Starovoitov
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 2/2] selftests/bpf: Add testcases for tailcall infinite loop fixing Leon Hwang
  1 sibling, 2 replies; 8+ messages in thread
From: Leon Hwang @ 2023-08-18 15:12 UTC (permalink / raw)
  To: ast, daniel, andrii, maciej.fijalkowski; +Cc: song, hffilwlqm, bpf

Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
handling in JIT"), tailcalls on x64 work better than before.

Since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
for x64 JIT"), tailcalls are able to run in BPF subprograms on x64.

Since commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
to other BPF programs"), a BPF program is able to trace other BPF programs.

What happens when all of them are combined?

1. FENTRY/FEXIT on a BPF subprogram.
2. A tailcall runs in the BPF subprogram.
3. The tailcall calls itself.

As a result, a tailcall infinite loop comes up, and the loop halts the
machine.

As we know, in a tail call context, the tail_call_cnt is propagated via the
stack and the RAX register between BPF subprograms. So the same has to be
done in trampolines.

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 40 +++++++++++++++++++++++++++++--------
 include/linux/bpf.h         |  5 +++++
 kernel/bpf/trampoline.c     |  4 ++--
 kernel/bpf/verifier.c       | 31 +++++++++++++++++++++-------
 4 files changed, 63 insertions(+), 17 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a5930042139d3..1ad17d7de5eee 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -303,8 +303,12 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 	prog += X86_PATCH_SIZE;
 	if (!ebpf_from_cbpf) {
 		if (tail_call_reachable && !is_subprog)
+			/* When it's the entry of the whole tailcall context,
+			 * zeroing rax means initialising tail_call_cnt.
+			 */
 			EMIT2(0x31, 0xC0); /* xor eax, eax */
 		else
+			// Keep the same instruction layout.
 			EMIT2(0x66, 0x90); /* nop2 */
 	}
 	EMIT1(0x55);             /* push rbp */
@@ -1018,6 +1022,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
 
 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
 
+/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+#define RESTORE_TAIL_CALL_CNT(stack)				\
+	EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
+
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
 		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
 {
@@ -1623,9 +1631,7 @@ st:			if (is_imm8(insn->off))
 
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
-				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
-				EMIT3_off32(0x48, 0x8B, 0x85,
-					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
+				RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
 				if (!imm32)
 					return -EINVAL;
 				offs = 7 + x86_call_depth_emit_accounting(&prog, func);
@@ -2298,7 +2304,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
  * push rbp
  * mov rbp, rsp
  * sub rsp, 16                     // space for skb and dev
- * push rbx                        // temp regs to pass start time
+ * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
+ * mov rax, 2                      // cache number of argument to rax
+ * mov qword ptr [rbp - 32], rax   // save number of argument to stack
  * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
  * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
  * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
@@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
  * push rbp
  * mov rbp, rsp
  * sub rsp, 24                     // space for skb, dev, return value
- * push rbx                        // temp regs to pass start time
+ * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
+ * mov rax, 2                      // cache number of argument to rax
+ * mov qword ptr [rbp - 32], rax   // save number of argument to stack
  * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
  * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
  * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
@@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	 *                     [ ...        ]
 	 *                     [ stack_arg2 ]
 	 * RBP - arg_stack_off [ stack_arg1 ]
+	 * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
 	 */
 
 	/* room for return value of orig_call or fentry prog */
@@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	else
 		/* sub rsp, stack_size */
 		EMIT4(0x48, 0x83, 0xEC, stack_size);
+	if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+		EMIT1(0x50);		/* push rax */
 	/* mov QWORD PTR [rbp - rbx_off], rbx */
 	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
 
@@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		restore_regs(m, &prog, regs_off);
 		save_args(m, &prog, arg_stack_off, true);
 
+		if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+			/* Before calling the original function, restore the
+			 * tail_call_cnt from stack to rax.
+			 */
+			RESTORE_TAIL_CALL_CNT(stack_size);
+
 		if (flags & BPF_TRAMP_F_ORIG_STACK) {
-			emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
-			EMIT2(0xff, 0xd0); /* call *rax */
+			emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
+			EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?
 		} else {
 			/* call original function */
 			if (emit_rsb_call(&prog, orig_call, prog)) {
@@ -2569,7 +2588,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 			ret = -EINVAL;
 			goto cleanup;
 		}
-	}
+	} else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+		/* Before running the original function, restore the
+		 * tail_call_cnt from stack to rax.
+		 */
+		RESTORE_TAIL_CALL_CNT(stack_size);
+
 	/* restore return value of orig_call or fentry prog back into RAX */
 	if (save_ret)
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cfabbcf47bdb8..c8df257ea435d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1028,6 +1028,11 @@ struct btf_func_model {
  */
 #define BPF_TRAMP_F_SHARE_IPMODIFY	BIT(6)
 
+/* Indicate that current trampoline is in a tail call context. Then, it has to
+ * cache and restore tail_call_cnt to avoid infinite tail call loop.
+ */
+#define BPF_TRAMP_F_TAIL_CALL_CTX	BIT(7)
+
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86.
  */
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 78acf28d48732..16ab5da7161f2 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -415,8 +415,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 		goto out;
 	}
 
-	/* clear all bits except SHARE_IPMODIFY */
-	tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
+	/* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
+	tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
 
 	if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
 	    tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4ccca1f6c9981..52ba9b043f16e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19246,6 +19246,21 @@ static int check_non_sleepable_error_inject(u32 btf_id)
 	return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
 }
 
+static inline int find_subprog_index(const struct bpf_prog *prog,
+				     u32 btf_id)
+{
+	struct bpf_prog_aux *aux = prog->aux;
+	int i, subprog = -1;
+
+	for (i = 0; i < aux->func_info_cnt; i++)
+		if (aux->func_info[i].type_id == btf_id) {
+			subprog = i;
+			break;
+		}
+
+	return subprog;
+}
+
 int bpf_check_attach_target(struct bpf_verifier_log *log,
 			    const struct bpf_prog *prog,
 			    const struct bpf_prog *tgt_prog,
@@ -19254,9 +19269,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 {
 	bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
 	const char prefix[] = "btf_trace_";
-	int ret = 0, subprog = -1, i;
 	const struct btf_type *t;
 	bool conservative = true;
+	int ret = 0, subprog;
 	const char *tname;
 	struct btf *btf;
 	long addr = 0;
@@ -19291,11 +19306,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 			return -EINVAL;
 		}
 
-		for (i = 0; i < aux->func_info_cnt; i++)
-			if (aux->func_info[i].type_id == btf_id) {
-				subprog = i;
-				break;
-			}
+		subprog = find_subprog_index(tgt_prog, btf_id);
 		if (subprog == -1) {
 			bpf_log(log, "Subprog %s doesn't exist\n", tname);
 			return -EINVAL;
@@ -19559,7 +19570,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	struct bpf_attach_target_info tgt_info = {};
 	u32 btf_id = prog->aux->attach_btf_id;
 	struct bpf_trampoline *tr;
-	int ret;
+	int ret, subprog;
 	u64 key;
 
 	if (prog->type == BPF_PROG_TYPE_SYSCALL) {
@@ -19629,6 +19640,12 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	if (!tr)
 		return -ENOMEM;
 
+	if (tgt_prog && tgt_prog->aux->tail_call_reachable) {
+		subprog = find_subprog_index(tgt_prog, btf_id);
+		tr->flags = subprog > 0 && tgt_prog->aux->func[subprog]->is_func ?
+			    BPF_TRAMP_F_TAIL_CALL_CTX : 0;
+	}
+
 	prog->aux->dst_trampoline = tr;
 	return 0;
 }
-- 
2.41.0



* [RFC PATCH bpf-next v2 2/2] selftests/bpf: Add testcases for tailcall infinite loop fixing
  2023-08-18 15:12 [RFC PATCH bpf-next v2 0/2] bpf, x64: Fix tailcall infinite loop Leon Hwang
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 1/2] " Leon Hwang
@ 2023-08-18 15:12 ` Leon Hwang
  1 sibling, 0 replies; 8+ messages in thread
From: Leon Hwang @ 2023-08-18 15:12 UTC (permalink / raw)
  To: ast, daniel, andrii, maciej.fijalkowski; +Cc: song, hffilwlqm, bpf

Add 3 test cases to confirm that the tailcall infinite loop bug has been
fixed.

Like the tailcall_bpf2bpf cases, these do fentry/fexit tracing on the
bpf2bpf call, and then check the final count result.

tools/testing/selftests/bpf/test_progs -t tailcalls
226/13  tailcalls/tailcall_bpf2bpf_fentry:OK
226/14  tailcalls/tailcall_bpf2bpf_fexit:OK
226/15  tailcalls/tailcall_bpf2bpf_fentry_fexit:OK
226     tailcalls:OK
Summary: 1/15 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 .../selftests/bpf/prog_tests/tailcalls.c      | 194 +++++++++++++++++-
 .../bpf/progs/tailcall_bpf2bpf_fentry.c       |  18 ++
 .../bpf/progs/tailcall_bpf2bpf_fexit.c        |  18 ++
 3 files changed, 229 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c

diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index 58fe2c586ed76..a47c2fd6b8d37 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -634,7 +634,7 @@ static void test_tailcall_bpf2bpf_2(void)
 		return;
 
 	data_fd = bpf_map__fd(data_map);
-	if (CHECK_FAIL(map_fd < 0))
+	if (CHECK_FAIL(data_fd < 0))
 		return;
 
 	i = 0;
@@ -884,6 +884,191 @@ static void test_tailcall_bpf2bpf_6(void)
 	tailcall_bpf2bpf6__destroy(obj);
 }
 
+static void __tailcall_bpf2bpf_fentry_fexit(bool test_fentry, bool test_fexit)
+{
+	struct bpf_object *tgt_obj = NULL, *fentry_obj = NULL, *fexit_obj = NULL;
+	struct bpf_link *fentry_link = NULL, *fexit_link = NULL;
+	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+	struct bpf_map *prog_array, *data_map;
+	struct bpf_program *prog;
+	char buff[128] = {};
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
+
+	err = bpf_prog_test_load("tailcall_bpf2bpf2.bpf.o",
+				 BPF_PROG_TYPE_SCHED_CLS,
+				 &tgt_obj, &prog_fd);
+	if (!ASSERT_OK(err, "load tgt_obj"))
+		return;
+
+	prog = bpf_object__find_program_by_name(tgt_obj, "entry");
+	if (!ASSERT_OK_PTR(prog, "find entry prog"))
+		goto out;
+
+	main_fd = bpf_program__fd(prog);
+	if (!ASSERT_FALSE(main_fd < 0, "find entry prog fd"))
+		goto out;
+
+	prog_array = bpf_object__find_map_by_name(tgt_obj, "jmp_table");
+	if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
+		goto out;
+
+	map_fd = bpf_map__fd(prog_array);
+	if (!ASSERT_FALSE(map_fd < 0, "find jmp_table map fd"))
+		goto out;
+
+	prog = bpf_object__find_program_by_name(tgt_obj, "classifier_0");
+	if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
+		goto out;
+
+	prog_fd = bpf_program__fd(prog);
+	if (!ASSERT_FALSE(prog_fd < 0, "find classifier_0 prog fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+	if (!ASSERT_OK(err, "update jmp_table"))
+		goto out;
+
+	if (test_fentry) {
+		fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
+						   NULL);
+		if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
+			goto out;
+
+		prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
+		if (!ASSERT_OK_PTR(prog, "find fentry prog"))
+			goto out;
+
+		err = bpf_program__set_attach_target(prog, prog_fd,
+						     "subprog_tail");
+		if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+			goto out;
+
+		err = bpf_object__load(fentry_obj);
+		if (!ASSERT_OK(err, "load fentry_obj"))
+			goto out;
+
+		fentry_link = bpf_program__attach_trace(prog);
+		if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
+			goto out;
+	}
+
+	if (test_fexit) {
+		fexit_obj = bpf_object__open_file("tailcall_bpf2bpf_fexit.bpf.o",
+						  NULL);
+		if (!ASSERT_OK_PTR(fexit_obj, "open fexit_obj file"))
+			goto out;
+
+		prog = bpf_object__find_program_by_name(fexit_obj, "fexit");
+		if (!ASSERT_OK_PTR(prog, "find fexit prog"))
+			goto out;
+
+		err = bpf_program__set_attach_target(prog, prog_fd,
+						     "subprog_tail");
+		if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+			goto out;
+
+		err = bpf_object__load(fexit_obj);
+		if (!ASSERT_OK(err, "load fexit_obj"))
+			goto out;
+
+		fexit_link = bpf_program__attach_trace(prog);
+		if (!ASSERT_OK_PTR(fexit_link, "attach_trace"))
+			goto out;
+	}
+
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+	data_map = bpf_object__find_map_by_name(tgt_obj, "tailcall.bss");
+	if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+			  "find tailcall.bss map"))
+		goto out;
+
+	data_fd = bpf_map__fd(data_map);
+	if (!ASSERT_FALSE(data_fd < 0, "find tailcall.bss map fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_lookup_elem(data_fd, &i, &val);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val, 33, "tailcall count");
+
+	if (test_fentry) {
+		data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
+		if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+				  "find tailcall_bpf2bpf_fentry.bss.bss map"))
+			goto out;
+
+		data_fd = bpf_map__fd(data_map);
+		if (!ASSERT_FALSE(data_fd < 0,
+				  "find tailcall_bpf2bpf_fentry.bss.bss map fd"))
+			goto out;
+
+		i = 0;
+		err = bpf_map_lookup_elem(data_fd, &i, &val);
+		ASSERT_OK(err, "fentry count");
+		ASSERT_EQ(val, 33, "fentry count");
+	}
+
+	if (test_fexit) {
+		data_map = bpf_object__find_map_by_name(fexit_obj, ".bss");
+		if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+				  "find tailcall_bpf2bpf_fexit.bss map"))
+			goto out;
+
+		data_fd = bpf_map__fd(data_map);
+		if (!ASSERT_FALSE(data_fd < 0,
+				  "find tailcall_bpf2bpf_fexit.bss map fd"))
+			goto out;
+
+		i = 0;
+		err = bpf_map_lookup_elem(data_fd, &i, &val);
+		ASSERT_OK(err, "fexit count");
+		ASSERT_EQ(val, 33, "fexit count");
+	}
+
+out:
+	bpf_link__destroy(fentry_link);
+	bpf_link__destroy(fexit_link);
+	bpf_object__close(fentry_obj);
+	bpf_object__close(fexit_obj);
+	bpf_object__close(tgt_obj);
+}
+
+/* test_tailcall_bpf2bpf_fentry checks that the count value of the tail call
+ * limit enforcement matches with expectations when tailcall is preceded with
+ * bpf2bpf call, and the bpf2bpf call is traced by fentry.
+ */
+static void test_tailcall_bpf2bpf_fentry(void)
+{
+	__tailcall_bpf2bpf_fentry_fexit(true, false);
+}
+
+/* test_tailcall_bpf2bpf_fexit checks that the count value of the tail call
+ * limit enforcement matches with expectations when tailcall is preceded with
+ * bpf2bpf call, and the bpf2bpf call is traced by fexit.
+ */
+static void test_tailcall_bpf2bpf_fexit(void)
+{
+	__tailcall_bpf2bpf_fentry_fexit(false, true);
+}
+
+/* test_tailcall_bpf2bpf_fentry_fexit checks that the count value of the tail call
+ * limit enforcement matches with expectations when tailcall is preceded with
+ * bpf2bpf call, and the bpf2bpf call is traced by both fentry and fexit.
+ */
+static void test_tailcall_bpf2bpf_fentry_fexit(void)
+{
+	__tailcall_bpf2bpf_fentry_fexit(true, true);
+}
+
 void test_tailcalls(void)
 {
 	if (test__start_subtest("tailcall_1"))
@@ -910,4 +1095,11 @@ void test_tailcalls(void)
 		test_tailcall_bpf2bpf_4(true);
 	if (test__start_subtest("tailcall_bpf2bpf_6"))
 		test_tailcall_bpf2bpf_6();
+	if (test__start_subtest("tailcall_bpf2bpf_fentry"))
+		test_tailcall_bpf2bpf_fentry();
+	if (test__start_subtest("tailcall_bpf2bpf_fexit"))
+		test_tailcall_bpf2bpf_fexit();
+	if (test__start_subtest("tailcall_bpf2bpf_fentry_fexit"))
+		test_tailcall_bpf2bpf_fentry_fexit();
 }
+
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
new file mode 100644
index 0000000000000..8436c6729167c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Leon Hwang */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+int count = 0;
+
+SEC("fentry/subprog_tail")
+int BPF_PROG(fentry, struct sk_buff *skb)
+{
+	count++;
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
new file mode 100644
index 0000000000000..fe16412c6e6e9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Leon Hwang */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+int count = 0;
+
+SEC("fexit/subprog_tail")
+int BPF_PROG(fexit, struct sk_buff *skb)
+{
+	count++;
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.41.0



* Re: [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 1/2] " Leon Hwang
@ 2023-08-18 15:25   ` Leon Hwang
  2023-08-21 22:33   ` Alexei Starovoitov
  1 sibling, 0 replies; 8+ messages in thread
From: Leon Hwang @ 2023-08-18 15:25 UTC (permalink / raw)
  To: ast, daniel, andrii, maciej.fijalkowski; +Cc: song, bpf



On 2023/8/18 23:12, Leon Hwang wrote:
> Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
> handling in JIT"), tailcalls on x64 work better than before.
> 
> Since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
> for x64 JIT"), tailcalls are able to run in BPF subprograms on x64.
> 
> Since commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
> to other BPF programs"), a BPF program is able to trace other BPF programs.
> 
> What happens when all of them are combined?
> 
> 1. FENTRY/FEXIT on a BPF subprogram.
> 2. A tailcall runs in the BPF subprogram.
> 3. The tailcall calls itself.
> 
> As a result, a tailcall infinite loop comes up, and the loop halts the
> machine.
> 
> As we know, in a tail call context, the tail_call_cnt is propagated via the
> stack and the RAX register between BPF subprograms. So the same has to be
> done in trampolines.
> 
> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
> ---
>  arch/x86/net/bpf_jit_comp.c | 40 +++++++++++++++++++++++++++++--------
>  include/linux/bpf.h         |  5 +++++
>  kernel/bpf/trampoline.c     |  4 ++--
>  kernel/bpf/verifier.c       | 31 +++++++++++++++++++++-------
>  4 files changed, 63 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index a5930042139d3..1ad17d7de5eee 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -303,8 +303,12 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
>  	prog += X86_PATCH_SIZE;
>  	if (!ebpf_from_cbpf) {
>  		if (tail_call_reachable && !is_subprog)
> +			/* When it's the entry of the whole tailcall context,
> +			 * zeroing rax means initialising tail_call_cnt.
> +			 */
>  			EMIT2(0x31, 0xC0); /* xor eax, eax */
>  		else
> +			// Keep the same instruction layout.
>  			EMIT2(0x66, 0x90); /* nop2 */
>  	}
>  	EMIT1(0x55);             /* push rbp */
> @@ -1018,6 +1022,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
>  
>  #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
>  
> +/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> +#define RESTORE_TAIL_CALL_CNT(stack)				\
> +	EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
> +
>  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
>  		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
>  {
> @@ -1623,9 +1631,7 @@ st:			if (is_imm8(insn->off))
>  
>  			func = (u8 *) __bpf_call_base + imm32;
>  			if (tail_call_reachable) {
> -				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> -				EMIT3_off32(0x48, 0x8B, 0x85,
> -					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
> +				RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
>  				if (!imm32)
>  					return -EINVAL;
>  				offs = 7 + x86_call_depth_emit_accounting(&prog, func);
> @@ -2298,7 +2304,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>   * push rbp
>   * mov rbp, rsp
>   * sub rsp, 16                     // space for skb and dev
> - * push rbx                        // temp regs to pass start time
> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> + * mov rax, 2                      // cache number of argument to rax
> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>   * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
>   * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> @@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>   * push rbp
>   * mov rbp, rsp
>   * sub rsp, 24                     // space for skb, dev, return value
> - * push rbx                        // temp regs to pass start time
> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> + * mov rax, 2                      // cache number of argument to rax
> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>   * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
>   * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> @@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>  	 *                     [ ...        ]
>  	 *                     [ stack_arg2 ]
>  	 * RBP - arg_stack_off [ stack_arg1 ]
> +	 * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
>  	 */
>  
>  	/* room for return value of orig_call or fentry prog */
> @@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>  	else
>  		/* sub rsp, stack_size */
>  		EMIT4(0x48, 0x83, 0xEC, stack_size);
> +	if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> +		EMIT1(0x50);		/* push rax */
>  	/* mov QWORD PTR [rbp - rbx_off], rbx */
>  	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
>  
> @@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>  		restore_regs(m, &prog, regs_off);
>  		save_args(m, &prog, arg_stack_off, true);
>  
> +		if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> +			/* Before calling the original function, restore the
> +			 * tail_call_cnt from stack to rax.
> +			 */
> +			RESTORE_TAIL_CALL_CNT(stack_size);
> +
>  		if (flags & BPF_TRAMP_F_ORIG_STACK) {
> -			emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> -			EMIT2(0xff, 0xd0); /* call *rax */
> +			emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
> +			EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?

To avoid an rax conflict with the tail call, the calling register is
changed from rax to rbx.

But I'm unable to confirm the opcode.

Then, I asked ChatGPT to list `call` instructions and their corresponding
opcodes:

Certainly! Here's a table that provides `call` instructions along with
their corresponding opcodes in x86-64 assembly:

| `call` Register | Opcode (Hex) | Opcode (Binary) |
|-----------------|--------------|-----------------|
| `rax`           | `FF D0`      | `11111111 11010000` |
| `rcx`           | `FF D1`      | `11111111 11010001` |
| `rdx`           | `FF D2`      | `11111111 11010010` |
| `rbx`           | `FF D3`      | `11111111 11010011` |
| `rsp`           | `FF D4`      | `11111111 11010100` |
| `rbp`           | `FF D5`      | `11111111 11010101` |
| `rsi`           | `FF D6`      | `11111111 11010110` |
| `rdi`           | `FF D7`      | `11111111 11010111` |
| `r8`            | `41 FF D0`   | `01000001 11111111 11010000` |
| `r9`            | `41 FF D1`   | `01000001 11111111 11010001` |
| `r10`           | `41 FF D2`   | `01000001 11111111 11010010` |
| `r11`           | `41 FF D3`   | `01000001 11111111 11010011` |
| `r12`           | `41 FF D4`   | `01000001 11111111 11010100` |
| `r13`           | `41 FF D5`   | `01000001 11111111 11010101` |
| `r14`           | `41 FF D6`   | `01000001 11111111 11010110` |
| `r15`           | `41 FF D7`   | `01000001 11111111 11010111` |

EMIT2(0xff, 0xd3); /* call *rbx */, is it right?

Thanks,
Leon

[...]


* Re: [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-18 15:12 ` [RFC PATCH bpf-next v2 1/2] " Leon Hwang
  2023-08-18 15:25   ` Leon Hwang
@ 2023-08-21 22:33   ` Alexei Starovoitov
  2023-08-22  3:17     ` Leon Hwang
  1 sibling, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2023-08-21 22:33 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Fijalkowski, Maciej, Song Liu, bpf

On Fri, Aug 18, 2023 at 8:12 AM Leon Hwang <hffilwlqm@gmail.com> wrote:
>
> Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
> handling in JIT"), tailcalls on x64 work better than before.
>
> Since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
> for x64 JIT"), tailcalls are able to run in BPF subprograms on x64.
>
> Since commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
> to other BPF programs"), a BPF program is able to trace other BPF programs.
>
> What happens when all of them are combined?
>
> 1. FENTRY/FEXIT on a BPF subprogram.
> 2. A tailcall runs in the BPF subprogram.
> 3. The tailcall calls itself.
>
> As a result, a tailcall infinite loop comes up, and the loop halts the
> machine.
>
> As we know, in a tail call context, the tail_call_cnt is propagated via the
> stack and the RAX register between BPF subprograms. So the same has to be
> done in trampolines.
>
> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
> ---
>  arch/x86/net/bpf_jit_comp.c | 40 +++++++++++++++++++++++++++++--------
>  include/linux/bpf.h         |  5 +++++
>  kernel/bpf/trampoline.c     |  4 ++--
>  kernel/bpf/verifier.c       | 31 +++++++++++++++++++++-------
>  4 files changed, 63 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index a5930042139d3..1ad17d7de5eee 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -303,8 +303,12 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
>         prog += X86_PATCH_SIZE;
>         if (!ebpf_from_cbpf) {
>                 if (tail_call_reachable && !is_subprog)
> +                       /* When it's the entry of the whole tailcall context,
> +                        * zeroing rax means initialising tail_call_cnt.
> +                        */
>                         EMIT2(0x31, 0xC0); /* xor eax, eax */
>                 else
> +                       // Keep the same instruction layout.

No c++ style comments please.

>                         EMIT2(0x66, 0x90); /* nop2 */
>         }
>         EMIT1(0x55);             /* push rbp */
> @@ -1018,6 +1022,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
>
>  #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
>
> +/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> +#define RESTORE_TAIL_CALL_CNT(stack)                           \
> +       EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
> +
>  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
>                   int oldproglen, struct jit_context *ctx, bool jmp_padding)
>  {
> @@ -1623,9 +1631,7 @@ st:                       if (is_imm8(insn->off))
>
>                         func = (u8 *) __bpf_call_base + imm32;
>                         if (tail_call_reachable) {
> -                               /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> -                               EMIT3_off32(0x48, 0x8B, 0x85,
> -                                           -round_up(bpf_prog->aux->stack_depth, 8) - 8);
> +                               RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
>                                 if (!imm32)
>                                         return -EINVAL;
>                                 offs = 7 + x86_call_depth_emit_accounting(&prog, func);
> @@ -2298,7 +2304,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>   * push rbp
>   * mov rbp, rsp
>   * sub rsp, 16                     // space for skb and dev
> - * push rbx                        // temp regs to pass start time
> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> + * mov rax, 2                      // cache number of argument to rax

What does it mean?

> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack

Here // is ok since it's inside /* */

>   * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
>   * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> @@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>   * push rbp
>   * mov rbp, rsp
>   * sub rsp, 24                     // space for skb, dev, return value
> - * push rbx                        // temp regs to pass start time
> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> + * mov rax, 2                      // cache number of argument to rax
> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>   * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
>   * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> @@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>          *                     [ ...        ]
>          *                     [ stack_arg2 ]
>          * RBP - arg_stack_off [ stack_arg1 ]
> +        * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
>          */
>
>         /* room for return value of orig_call or fentry prog */
> @@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>         else
>                 /* sub rsp, stack_size */
>                 EMIT4(0x48, 0x83, 0xEC, stack_size);
> +       if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> +               EMIT1(0x50);            /* push rax */
>         /* mov QWORD PTR [rbp - rbx_off], rbx */
>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
>
> @@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>                 restore_regs(m, &prog, regs_off);
>                 save_args(m, &prog, arg_stack_off, true);
>
> +               if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> +                       /* Before calling the original function, restore the
> +                        * tail_call_cnt from stack to rax.
> +                        */
> +                       RESTORE_TAIL_CALL_CNT(stack_size);
> +
>                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
> -                       emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> -                       EMIT2(0xff, 0xd0); /* call *rax */
> +                       emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
> +                       EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?

please no FIXME like comments.
You have to be confident in the code you're submitting.
llvm-mc -triple=x86_64 -show-encoding -x86-asm-syntax=intel
-output-asm-variant=1 <<< 'call rbx'

>                 } else {
>                         /* call original function */
>                         if (emit_rsb_call(&prog, orig_call, prog)) {
> @@ -2569,7 +2588,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>                         ret = -EINVAL;
>                         goto cleanup;
>                 }
> -       }
> +       } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> +               /* Before running the original function, restore the
> +                * tail_call_cnt from stack to rax.
> +                */
> +               RESTORE_TAIL_CALL_CNT(stack_size);
> +
>         /* restore return value of orig_call or fentry prog back into RAX */
>         if (save_ret)
>                 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index cfabbcf47bdb8..c8df257ea435d 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1028,6 +1028,11 @@ struct btf_func_model {
>   */
>  #define BPF_TRAMP_F_SHARE_IPMODIFY     BIT(6)
>
> +/* Indicate that current trampoline is in a tail call context. Then, it has to
> + * cache and restore tail_call_cnt to avoid infinite tail call loop.
> + */
> +#define BPF_TRAMP_F_TAIL_CALL_CTX      BIT(7)
> +
>  /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
>   * bytes on x86.
>   */
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 78acf28d48732..16ab5da7161f2 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -415,8 +415,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
>                 goto out;
>         }
>
> -       /* clear all bits except SHARE_IPMODIFY */
> -       tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
> +       /* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
> +       tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
>
>         if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
>             tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4ccca1f6c9981..52ba9b043f16e 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -19246,6 +19246,21 @@ static int check_non_sleepable_error_inject(u32 btf_id)
>         return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
>  }
>
> +static inline int find_subprog_index(const struct bpf_prog *prog,
> +                                    u32 btf_id)
> +{
> +       struct bpf_prog_aux *aux = prog->aux;
> +       int i, subprog = -1;
> +
> +       for (i = 0; i < aux->func_info_cnt; i++)
> +               if (aux->func_info[i].type_id == btf_id) {
> +                       subprog = i;
> +                       break;
> +               }
> +
> +       return subprog;
> +}
> +
>  int bpf_check_attach_target(struct bpf_verifier_log *log,
>                             const struct bpf_prog *prog,
>                             const struct bpf_prog *tgt_prog,
> @@ -19254,9 +19269,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>  {
>         bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
>         const char prefix[] = "btf_trace_";
> -       int ret = 0, subprog = -1, i;
>         const struct btf_type *t;
>         bool conservative = true;
> +       int ret = 0, subprog;
>         const char *tname;
>         struct btf *btf;
>         long addr = 0;
> @@ -19291,11 +19306,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>                         return -EINVAL;
>                 }
>
> -               for (i = 0; i < aux->func_info_cnt; i++)
> -                       if (aux->func_info[i].type_id == btf_id) {
> -                               subprog = i;
> -                               break;
> -                       }
> +               subprog = find_subprog_index(tgt_prog, btf_id);
>                 if (subprog == -1) {
>                         bpf_log(log, "Subprog %s doesn't exist\n", tname);
>                         return -EINVAL;
> @@ -19559,7 +19570,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>         struct bpf_attach_target_info tgt_info = {};
>         u32 btf_id = prog->aux->attach_btf_id;
>         struct bpf_trampoline *tr;
> -       int ret;
> +       int ret, subprog;
>         u64 key;
>
>         if (prog->type == BPF_PROG_TYPE_SYSCALL) {
> @@ -19629,6 +19640,12 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>         if (!tr)
>                 return -ENOMEM;
>
> +       if (tgt_prog && tgt_prog->aux->tail_call_reachable) {
> +               subprog = find_subprog_index(tgt_prog, btf_id);
> +               tr->flags = subprog > 0 && tgt_prog->aux->func[subprog]->is_func ?
> +                           BPF_TRAMP_F_TAIL_CALL_CTX : 0;

If prog has subprogs all of them will 'is_func', no?
What's the point of the search ?
Just tgt_prog->aux->tail_call_reachable and func_cnt > 0 would be enough?


* Re: [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-21 22:33   ` Alexei Starovoitov
@ 2023-08-22  3:17     ` Leon Hwang
  2023-08-22 21:29       ` Alexei Starovoitov
  0 siblings, 1 reply; 8+ messages in thread
From: Leon Hwang @ 2023-08-22  3:17 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Fijalkowski, Maciej, Song Liu, bpf



On 22/8/23 06:33, Alexei Starovoitov wrote:
> On Fri, Aug 18, 2023 at 8:12 AM Leon Hwang <hffilwlqm@gmail.com> wrote:
>>
>> From commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
>> handling in JIT"), the tailcall on x64 works better than before.
>>
>> From commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
>> for x64 JIT"), tailcall is able to run in BPF subprograms on x64.
>>
>> From commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
>> to other BPF programs"), BPF program is able to trace other BPF programs.
>>
>> How about combining them all together?
>>
>> 1. FENTRY/FEXIT on a BPF subprogram.
>> 2. A tailcall runs in the BPF subprogram.
>> 3. The tailcall calls itself.
>>
>> As a result, a tailcall infinite loop comes up. And the loop would halt
>> the machine.
>>
>> As we know, in tail call context, the tail_call_cnt propagates by stack
>> and RAX register between BPF subprograms. So do it in trampolines.
>>
>> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
>> ---
>>  arch/x86/net/bpf_jit_comp.c | 40 +++++++++++++++++++++++++++++--------
>>  include/linux/bpf.h         |  5 +++++
>>  kernel/bpf/trampoline.c     |  4 ++--
>>  kernel/bpf/verifier.c       | 31 +++++++++++++++++++++-------
>>  4 files changed, 63 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
>> index a5930042139d3..1ad17d7de5eee 100644
>> --- a/arch/x86/net/bpf_jit_comp.c
>> +++ b/arch/x86/net/bpf_jit_comp.c
>> @@ -303,8 +303,12 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
>>         prog += X86_PATCH_SIZE;
>>         if (!ebpf_from_cbpf) {
>>                 if (tail_call_reachable && !is_subprog)
>> +                       /* When it's the entry of the whole tailcall context,
>> +                        * zeroing rax means initialising tail_call_cnt.
>> +                        */
>>                         EMIT2(0x31, 0xC0); /* xor eax, eax */
>>                 else
>> +                       // Keep the same instruction layout.
> 
> No c++ style comments please.

Got it.

> 
>>                         EMIT2(0x66, 0x90); /* nop2 */
>>         }
>>         EMIT1(0x55);             /* push rbp */
>> @@ -1018,6 +1022,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
>>
>>  #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
>>
>> +/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
>> +#define RESTORE_TAIL_CALL_CNT(stack)                           \
>> +       EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
>> +
>>  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
>>                   int oldproglen, struct jit_context *ctx, bool jmp_padding)
>>  {
>> @@ -1623,9 +1631,7 @@ st:                       if (is_imm8(insn->off))
>>
>>                         func = (u8 *) __bpf_call_base + imm32;
>>                         if (tail_call_reachable) {
>> -                               /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
>> -                               EMIT3_off32(0x48, 0x8B, 0x85,
>> -                                           -round_up(bpf_prog->aux->stack_depth, 8) - 8);
>> +                               RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
>>                                 if (!imm32)
>>                                         return -EINVAL;
>>                                 offs = 7 + x86_call_depth_emit_accounting(&prog, func);
>> @@ -2298,7 +2304,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>>   * push rbp
>>   * mov rbp, rsp
>>   * sub rsp, 16                     // space for skb and dev
>> - * push rbx                        // temp regs to pass start time
>> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
>> + * mov rax, 2                      // cache number of argument to rax
> 
> What does it mean?

I think it corresponds to the following code snippet in
arch_prepare_bpf_trampoline().

	/* Store number of argument registers of the traced function:
	 *   mov rax, nr_regs
	 *   mov QWORD PTR [rbp - nregs_off], rax
	 */
	emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_regs);
	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -nregs_off);

> 
>> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
> 
> Here // is ok since it's inside /* */

Got it.

> 
>>   * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
>>   * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
>>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
>> @@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>>   * push rbp
>>   * mov rbp, rsp
>>   * sub rsp, 24                     // space for skb, dev, return value
>> - * push rbx                        // temp regs to pass start time
>> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
>> + * mov rax, 2                      // cache number of argument to rax
>> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>>   * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
>>   * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
>>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
>> @@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>          *                     [ ...        ]
>>          *                     [ stack_arg2 ]
>>          * RBP - arg_stack_off [ stack_arg1 ]
>> +        * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
>>          */
>>
>>         /* room for return value of orig_call or fentry prog */
>> @@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>         else
>>                 /* sub rsp, stack_size */
>>                 EMIT4(0x48, 0x83, 0xEC, stack_size);
>> +       if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>> +               EMIT1(0x50);            /* push rax */
>>         /* mov QWORD PTR [rbp - rbx_off], rbx */
>>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
>>
>> @@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>                 restore_regs(m, &prog, regs_off);
>>                 save_args(m, &prog, arg_stack_off, true);
>>
>> +               if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>> +                       /* Before calling the original function, restore the
>> +                        * tail_call_cnt from stack to rax.
>> +                        */
>> +                       RESTORE_TAIL_CALL_CNT(stack_size);
>> +
>>                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
>> -                       emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
>> -                       EMIT2(0xff, 0xd0); /* call *rax */
>> +                       emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
>> +                       EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?
> 
> please no FIXME like comments.
> You have to be confident in the code you're submitting.
> llvm-mc -triple=x86_64 -show-encoding -x86-asm-syntax=intel
> -output-asm-variant=1 <<< 'call rbx'

Got it. Thanks for the guide.

> 
>>                 } else {
>>                         /* call original function */
>>                         if (emit_rsb_call(&prog, orig_call, prog)) {
>> @@ -2569,7 +2588,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>                         ret = -EINVAL;
>>                         goto cleanup;
>>                 }
>> -       }
>> +       } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>> +               /* Before running the original function, restore the
>> +                * tail_call_cnt from stack to rax.
>> +                */
>> +               RESTORE_TAIL_CALL_CNT(stack_size);
>> +
>>         /* restore return value of orig_call or fentry prog back into RAX */
>>         if (save_ret)
>>                 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index cfabbcf47bdb8..c8df257ea435d 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1028,6 +1028,11 @@ struct btf_func_model {
>>   */
>>  #define BPF_TRAMP_F_SHARE_IPMODIFY     BIT(6)
>>
>> +/* Indicate that current trampoline is in a tail call context. Then, it has to
>> + * cache and restore tail_call_cnt to avoid infinite tail call loop.
>> + */
>> +#define BPF_TRAMP_F_TAIL_CALL_CTX      BIT(7)
>> +
>>  /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
>>   * bytes on x86.
>>   */
>> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
>> index 78acf28d48732..16ab5da7161f2 100644
>> --- a/kernel/bpf/trampoline.c
>> +++ b/kernel/bpf/trampoline.c
>> @@ -415,8 +415,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
>>                 goto out;
>>         }
>>
>> -       /* clear all bits except SHARE_IPMODIFY */
>> -       tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
>> +       /* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
>> +       tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
>>
>>         if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
>>             tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 4ccca1f6c9981..52ba9b043f16e 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -19246,6 +19246,21 @@ static int check_non_sleepable_error_inject(u32 btf_id)
>>         return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
>>  }
>>
>> +static inline int find_subprog_index(const struct bpf_prog *prog,
>> +                                    u32 btf_id)
>> +{
>> +       struct bpf_prog_aux *aux = prog->aux;
>> +       int i, subprog = -1;
>> +
>> +       for (i = 0; i < aux->func_info_cnt; i++)
>> +               if (aux->func_info[i].type_id == btf_id) {
>> +                       subprog = i;
>> +                       break;
>> +               }
>> +
>> +       return subprog;
>> +}
>> +
>>  int bpf_check_attach_target(struct bpf_verifier_log *log,
>>                             const struct bpf_prog *prog,
>>                             const struct bpf_prog *tgt_prog,
>> @@ -19254,9 +19269,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>>  {
>>         bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
>>         const char prefix[] = "btf_trace_";
>> -       int ret = 0, subprog = -1, i;
>>         const struct btf_type *t;
>>         bool conservative = true;
>> +       int ret = 0, subprog;
>>         const char *tname;
>>         struct btf *btf;
>>         long addr = 0;
>> @@ -19291,11 +19306,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>>                         return -EINVAL;
>>                 }
>>
>> -               for (i = 0; i < aux->func_info_cnt; i++)
>> -                       if (aux->func_info[i].type_id == btf_id) {
>> -                               subprog = i;
>> -                               break;
>> -                       }
>> +               subprog = find_subprog_index(tgt_prog, btf_id);
>>                 if (subprog == -1) {
>>                         bpf_log(log, "Subprog %s doesn't exist\n", tname);
>>                         return -EINVAL;
>> @@ -19559,7 +19570,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>>         struct bpf_attach_target_info tgt_info = {};
>>         u32 btf_id = prog->aux->attach_btf_id;
>>         struct bpf_trampoline *tr;
>> -       int ret;
>> +       int ret, subprog;
>>         u64 key;
>>
>>         if (prog->type == BPF_PROG_TYPE_SYSCALL) {
>> @@ -19629,6 +19640,12 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>>         if (!tr)
>>                 return -ENOMEM;
>>
>> +       if (tgt_prog && tgt_prog->aux->tail_call_reachable) {
>> +               subprog = find_subprog_index(tgt_prog, btf_id);
>> +               tr->flags = subprog > 0 && tgt_prog->aux->func[subprog]->is_func ?
>> +                           BPF_TRAMP_F_TAIL_CALL_CTX : 0;
> 
> If prog has subprogs all of them will 'is_func', no?
> What's the point of the search ?
> Just tgt_prog->aux->tail_call_reachable and func_cnt > 0 would be enough?

tgt_prog->aux->tail_call_reachable and subprog > 0 would be enough?
We have to confirm that the attaching target is a subprog of tgt_prog, not
tgt_prog itself.

In tail call context, when 'call' a func, tail_call_cnt will be restored to rax.

static int do_jit() {
			/* call */
		case BPF_JMP | BPF_CALL: {
			int offs;

			func = (u8 *) __bpf_call_base + imm32;
			if (tail_call_reachable) {
				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
				EMIT3_off32(0x48, 0x8B, 0x85,
					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
				/* ... */
			}
}

As a result, when 'call'-ing a subprog, tail_call_cnt is transferred via rax.
But are all subprogs invoked via 'call', including non-'is_func' subprogs?

The point of the search is to confirm that the attaching subprog is invoked
via 'call'.

Currently, I'm only sure that tgt_prog->aux->tail_call_reachable, subprog > 0
and tgt_prog->aux->func[subprog]->is_func together describe the case that
needs fixing.

Thanks,
Leon


* Re: [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-22  3:17     ` Leon Hwang
@ 2023-08-22 21:29       ` Alexei Starovoitov
  2023-08-23  1:49         ` Leon Hwang
  0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2023-08-22 21:29 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Fijalkowski, Maciej, Song Liu, bpf

On Mon, Aug 21, 2023 at 8:17 PM Leon Hwang <hffilwlqm@gmail.com> wrote:
>
>
>
> On 22/8/23 06:33, Alexei Starovoitov wrote:
> > On Fri, Aug 18, 2023 at 8:12 AM Leon Hwang <hffilwlqm@gmail.com> wrote:
> >>
> >> From commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
> >> handling in JIT"), the tailcall on x64 works better than before.
> >>
> >> From commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
> >> for x64 JIT"), tailcall is able to run in BPF subprograms on x64.
> >>
> >> From commit 5b92a28aae4dd0f8 ("bpf: Support attaching tracing BPF program
> >> to other BPF programs"), BPF program is able to trace other BPF programs.
> >>
> >> How about combining them all together?
> >>
> >> 1. FENTRY/FEXIT on a BPF subprogram.
> >> 2. A tailcall runs in the BPF subprogram.
> >> 3. The tailcall calls itself.
> >>
> >> As a result, a tailcall infinite loop comes up. And the loop would halt
> >> the machine.
> >>
> >> As we know, in tail call context, the tail_call_cnt propagates by stack
> >> and RAX register between BPF subprograms. So do it in trampolines.
> >>
> >> Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
> >> ---
> >>  arch/x86/net/bpf_jit_comp.c | 40 +++++++++++++++++++++++++++++--------
> >>  include/linux/bpf.h         |  5 +++++
> >>  kernel/bpf/trampoline.c     |  4 ++--
> >>  kernel/bpf/verifier.c       | 31 +++++++++++++++++++++-------
> >>  4 files changed, 63 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> >> index a5930042139d3..1ad17d7de5eee 100644
> >> --- a/arch/x86/net/bpf_jit_comp.c
> >> +++ b/arch/x86/net/bpf_jit_comp.c
> >> @@ -303,8 +303,12 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
> >>         prog += X86_PATCH_SIZE;
> >>         if (!ebpf_from_cbpf) {
> >>                 if (tail_call_reachable && !is_subprog)
> >> +                       /* When it's the entry of the whole tailcall context,
> >> +                        * zeroing rax means initialising tail_call_cnt.
> >> +                        */
> >>                         EMIT2(0x31, 0xC0); /* xor eax, eax */
> >>                 else
> >> +                       // Keep the same instruction layout.
> >
> > No c++ style comments please.
>
> Got it.
>
> >
> >>                         EMIT2(0x66, 0x90); /* nop2 */
> >>         }
> >>         EMIT1(0x55);             /* push rbp */
> >> @@ -1018,6 +1022,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
> >>
> >>  #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
> >>
> >> +/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> >> +#define RESTORE_TAIL_CALL_CNT(stack)                           \
> >> +       EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
> >> +
> >>  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
> >>                   int oldproglen, struct jit_context *ctx, bool jmp_padding)
> >>  {
> >> @@ -1623,9 +1631,7 @@ st:                       if (is_imm8(insn->off))
> >>
> >>                         func = (u8 *) __bpf_call_base + imm32;
> >>                         if (tail_call_reachable) {
> >> -                               /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> >> -                               EMIT3_off32(0x48, 0x8B, 0x85,
> >> -                                           -round_up(bpf_prog->aux->stack_depth, 8) - 8);
> >> +                               RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
> >>                                 if (!imm32)
> >>                                         return -EINVAL;
> >>                                 offs = 7 + x86_call_depth_emit_accounting(&prog, func);
> >> @@ -2298,7 +2304,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
> >>   * push rbp
> >>   * mov rbp, rsp
> >>   * sub rsp, 16                     // space for skb and dev
> >> - * push rbx                        // temp regs to pass start time
> >> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> >> + * mov rax, 2                      // cache number of argument to rax
> >
> > What does it mean?
>
> I think it corresponds to the following code snippet in
> arch_prepare_bpf_trampoline().
>
>         /* Store number of argument registers of the traced function:
>          *   mov rax, nr_regs
>          *   mov QWORD PTR [rbp - nregs_off], rax
>          */
>         emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_regs);
>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -nregs_off);

Ahh. I see.
The comment on top of arch_prepare_bpf_trampoline() is hopelessly obsolete.
Don't touch it in this patch set. We probably should delete it at some point
or make an effort to update it thoroughly.
Earlier recommendation to you was to update this comment:
/* Generated trampoline stack layout:

> >
> >> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
> >
> > Here // is ok since it's inside /* */
>
> Got it.
>
> >
> >>   * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
> >>   * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
> >>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> >> @@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
> >>   * push rbp
> >>   * mov rbp, rsp
> >>   * sub rsp, 24                     // space for skb, dev, return value
> >> - * push rbx                        // temp regs to pass start time
> >> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
> >> + * mov rax, 2                      // cache number of argument to rax
> >> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
> >>   * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
> >>   * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
> >>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
> >> @@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >>          *                     [ ...        ]
> >>          *                     [ stack_arg2 ]
> >>          * RBP - arg_stack_off [ stack_arg1 ]
> >> +        * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
> >>          */
> >>
> >>         /* room for return value of orig_call or fentry prog */
> >> @@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >>         else
> >>                 /* sub rsp, stack_size */
> >>                 EMIT4(0x48, 0x83, 0xEC, stack_size);
> >> +       if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> >> +               EMIT1(0x50);            /* push rax */
> >>         /* mov QWORD PTR [rbp - rbx_off], rbx */
> >>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
> >>
> >> @@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >>                 restore_regs(m, &prog, regs_off);
> >>                 save_args(m, &prog, arg_stack_off, true);
> >>
> >> +               if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> >> +                       /* Before calling the original function, restore the
> >> +                        * tail_call_cnt from stack to rax.
> >> +                        */
> >> +                       RESTORE_TAIL_CALL_CNT(stack_size);
> >> +
> >>                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
> >> -                       emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> >> -                       EMIT2(0xff, 0xd0); /* call *rax */
> >> +                       emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
> >> +                       EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?
> >
> > please no FIXME like comments.
> > You have to be confident in the code you're submitting.
> > llvm-mc -triple=x86_64 -show-encoding -x86-asm-syntax=intel
> > -output-asm-variant=1 <<< 'call rbx'
>
> Got it. Thanks for the guide.
>
> >
> >>                 } else {
> >>                         /* call original function */
> >>                         if (emit_rsb_call(&prog, orig_call, prog)) {
> >> @@ -2569,7 +2588,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >>                         ret = -EINVAL;
> >>                         goto cleanup;
> >>                 }
> >> -       }
> >> +       } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> >> +               /* Before running the original function, restore the
> >> +                * tail_call_cnt from stack to rax.
> >> +                */
> >> +               RESTORE_TAIL_CALL_CNT(stack_size);
> >> +
> >>         /* restore return value of orig_call or fentry prog back into RAX */
> >>         if (save_ret)
> >>                 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
> >> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> >> index cfabbcf47bdb8..c8df257ea435d 100644
> >> --- a/include/linux/bpf.h
> >> +++ b/include/linux/bpf.h
> >> @@ -1028,6 +1028,11 @@ struct btf_func_model {
> >>   */
> >>  #define BPF_TRAMP_F_SHARE_IPMODIFY     BIT(6)
> >>
> >> +/* Indicate that current trampoline is in a tail call context. Then, it has to
> >> + * cache and restore tail_call_cnt to avoid infinite tail call loop.
> >> + */
> >> +#define BPF_TRAMP_F_TAIL_CALL_CTX      BIT(7)
> >> +
> >>  /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
> >>   * bytes on x86.
> >>   */
> >> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> >> index 78acf28d48732..16ab5da7161f2 100644
> >> --- a/kernel/bpf/trampoline.c
> >> +++ b/kernel/bpf/trampoline.c
> >> @@ -415,8 +415,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
> >>                 goto out;
> >>         }
> >>
> >> -       /* clear all bits except SHARE_IPMODIFY */
> >> -       tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
> >> +       /* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
> >> +       tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
> >>
> >>         if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
> >>             tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
> >> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >> index 4ccca1f6c9981..52ba9b043f16e 100644
> >> --- a/kernel/bpf/verifier.c
> >> +++ b/kernel/bpf/verifier.c
> >> @@ -19246,6 +19246,21 @@ static int check_non_sleepable_error_inject(u32 btf_id)
> >>         return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
> >>  }
> >>
> >> +static inline int find_subprog_index(const struct bpf_prog *prog,
> >> +                                    u32 btf_id)
> >> +{
> >> +       struct bpf_prog_aux *aux = prog->aux;
> >> +       int i, subprog = -1;
> >> +
> >> +       for (i = 0; i < aux->func_info_cnt; i++)
> >> +               if (aux->func_info[i].type_id == btf_id) {
> >> +                       subprog = i;
> >> +                       break;
> >> +               }
> >> +
> >> +       return subprog;
> >> +}
> >> +
> >>  int bpf_check_attach_target(struct bpf_verifier_log *log,
> >>                             const struct bpf_prog *prog,
> >>                             const struct bpf_prog *tgt_prog,
> >> @@ -19254,9 +19269,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> >>  {
> >>         bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
> >>         const char prefix[] = "btf_trace_";
> >> -       int ret = 0, subprog = -1, i;
> >>         const struct btf_type *t;
> >>         bool conservative = true;
> >> +       int ret = 0, subprog;
> >>         const char *tname;
> >>         struct btf *btf;
> >>         long addr = 0;
> >> @@ -19291,11 +19306,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> >>                         return -EINVAL;
> >>                 }
> >>
> >> -               for (i = 0; i < aux->func_info_cnt; i++)
> >> -                       if (aux->func_info[i].type_id == btf_id) {
> >> -                               subprog = i;
> >> -                               break;
> >> -                       }
> >> +               subprog = find_subprog_index(tgt_prog, btf_id);
> >>                 if (subprog == -1) {
> >>                         bpf_log(log, "Subprog %s doesn't exist\n", tname);
> >>                         return -EINVAL;
> >> @@ -19559,7 +19570,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
> >>         struct bpf_attach_target_info tgt_info = {};
> >>         u32 btf_id = prog->aux->attach_btf_id;
> >>         struct bpf_trampoline *tr;
> >> -       int ret;
> >> +       int ret, subprog;
> >>         u64 key;
> >>
> >>         if (prog->type == BPF_PROG_TYPE_SYSCALL) {
> >> @@ -19629,6 +19640,12 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
> >>         if (!tr)
> >>                 return -ENOMEM;
> >>
> >> +       if (tgt_prog && tgt_prog->aux->tail_call_reachable) {
> >> +               subprog = find_subprog_index(tgt_prog, btf_id);
> >> +               tr->flags = subprog > 0 && tgt_prog->aux->func[subprog]->is_func ?
> >> +                           BPF_TRAMP_F_TAIL_CALL_CTX : 0;
> >
> > If prog has subprogs all of them will 'is_func', no?
> > What's the point of the search ?
> > Just tgt_prog->aux->tail_call_reachable and func_cnt > 0 would be enough?
>
> tgt_prog->aux->tail_call_reachable and subprog > 0 would be enough?
> We have to confirm that the attaching target is a subprog of tgt_prog rather
> than tgt_prog itself.
>
> In tail call context, when 'call' a func, tail_call_cnt will be restored to rax.
>
> static int do_jit() {
>                         /* call */
>                 case BPF_JMP | BPF_CALL: {
>                         int offs;
>
>                         func = (u8 *) __bpf_call_base + imm32;
>                         if (tail_call_reachable) {
>                                 /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
>                                 EMIT3_off32(0x48, 0x8B, 0x85,
>                                             -round_up(bpf_prog->aux->stack_depth, 8) - 8);
>                                 /* ... */
>                         }
> }
>
> As a result, when 'call'-ing a subprog, tail_call_cnt is transferred via rax.
> Are all subprogs invoked via 'call', including non-'is_func' subprogs?

Let me ask again. Do you see a subprog that has is_func==0?


* Re: [RFC PATCH bpf-next v2 1/2] bpf, x64: Fix tailcall infinite loop
  2023-08-22 21:29       ` Alexei Starovoitov
@ 2023-08-23  1:49         ` Leon Hwang
  0 siblings, 0 replies; 8+ messages in thread
From: Leon Hwang @ 2023-08-23  1:49 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Fijalkowski, Maciej, Song Liu, bpf



On 23/8/23 05:29, Alexei Starovoitov wrote:
> On Mon, Aug 21, 2023 at 8:17 PM Leon Hwang <hffilwlqm@gmail.com> wrote:
>>
>>
>>
>> On 22/8/23 06:33, Alexei Starovoitov wrote:
>>> On Fri, Aug 18, 2023 at 8:12 AM Leon Hwang <hffilwlqm@gmail.com> wrote:
>>>>

[SNIP]

>>>>   * sub rsp, 16                     // space for skb and dev
>>>> - * push rbx                        // temp regs to pass start time
>>>> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
>>>> + * mov rax, 2                      // cache number of argument to rax
>>>
>>> What does it mean?
>>
>> I think it corresponds to the following code snippet in
>> arch_prepare_bpf_trampoline().
>>
>>         /* Store number of argument registers of the traced function:
>>          *   mov rax, nr_regs
>>          *   mov QWORD PTR [rbp - nregs_off], rax
>>          */
>>         emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_regs);
>>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -nregs_off);
> 
> Ahh. I see.
> The comment on top of arch_prepare_bpf_trampoline() is hopelessly obsolete.
> Don't touch it in this patch set. We probably should delete it at some point
> or take an effort to update it thoroughly.

Got it.

> Earlier recommendation to you was to update this comment:
> /* Generated trampoline stack layout:
> 
>>>
>>>> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>>>
>>> Here // is ok since it's inside /* */
>>
>> Got it.
>>
>>>
>>>>   * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
>>>>   * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
>>>>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
>>>> @@ -2323,7 +2331,9 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
>>>>   * push rbp
>>>>   * mov rbp, rsp
>>>>   * sub rsp, 24                     // space for skb, dev, return value
>>>> - * push rbx                        // temp regs to pass start time
>>>> + * mov qword ptr [rbp - 40], rbx   // temp regs to pass start time
>>>> + * mov rax, 2                      // cache number of argument to rax
>>>> + * mov qword ptr [rbp - 32], rax   // save number of argument to stack
>>>>   * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
>>>>   * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
>>>>   * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
>>>> @@ -2400,6 +2410,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>>>          *                     [ ...        ]
>>>>          *                     [ stack_arg2 ]
>>>>          * RBP - arg_stack_off [ stack_arg1 ]
>>>> +        * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
>>>>          */
>>>>
>>>>         /* room for return value of orig_call or fentry prog */
>>>> @@ -2464,6 +2475,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>>>         else
>>>>                 /* sub rsp, stack_size */
>>>>                 EMIT4(0x48, 0x83, 0xEC, stack_size);
>>>> +       if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>>>> +               EMIT1(0x50);            /* push rax */
>>>>         /* mov QWORD PTR [rbp - rbx_off], rbx */
>>>>         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
>>>>
>>>> @@ -2516,9 +2529,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>>>>                 restore_regs(m, &prog, regs_off);
>>>>                 save_args(m, &prog, arg_stack_off, true);
>>>>
>>>> +               if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>>>> +                       /* Before calling the original function, restore the
>>>> +                        * tail_call_cnt from stack to rax.
>>>> +                        */
>>>> +                       RESTORE_TAIL_CALL_CNT(stack_size);
>>>> +
>>>>                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
>>>> -                       emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
>>>> -                       EMIT2(0xff, 0xd0); /* call *rax */
>>>> +                       emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
>>>> +                       EMIT2(0xff, 0xd3); /* call *rbx */ // FIXME: Confirm 0xd3?
>>>
>>> please no FIXME like comments.
>>> You have to be confident in the code you're submitting.
>>> llvm-mc -triple=x86_64 -show-encoding -x86-asm-syntax=intel
>>> -output-asm-variant=1 <<< 'call rbx'
>>
>> Got it. Thanks for the guide.
>>
>>>
>>>>                 } else {
>>>>                         /* call original function */
>>>>                         if (emit_rsb_call(&prog, orig_call, prog)) {

[SNIP]

>>>>
>>>>         if (prog->type == BPF_PROG_TYPE_SYSCALL) {
>>>> @@ -19629,6 +19640,12 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>>>>         if (!tr)
>>>>                 return -ENOMEM;
>>>>
>>>> +       if (tgt_prog && tgt_prog->aux->tail_call_reachable) {
>>>> +               subprog = find_subprog_index(tgt_prog, btf_id);
>>>> +               tr->flags = subprog > 0 && tgt_prog->aux->func[subprog]->is_func ?
>>>> +                           BPF_TRAMP_F_TAIL_CALL_CTX : 0;
>>>
>>> If prog has subprogs all of them will 'is_func', no?
>>> What's the point of the search ?
>>> Just tgt_prog->aux->tail_call_reachable and func_cnt > 0 would be enough?
>>
>> tgt_prog->aux->tail_call_reachable and subprog > 0 would be enough?
>> We have to confirm that the attaching target is a subprog of tgt_prog rather
>> than tgt_prog itself.
>>
>> In tail call context, when 'call' a func, tail_call_cnt will be restored to rax.
>>
>> static int do_jit() {
>>                         /* call */
>>                 case BPF_JMP | BPF_CALL: {
>>                         int offs;
>>
>>                         func = (u8 *) __bpf_call_base + imm32;
>>                         if (tail_call_reachable) {
>>                                 /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
>>                                 EMIT3_off32(0x48, 0x8B, 0x85,
>>                                             -round_up(bpf_prog->aux->stack_depth, 8) - 8);
>>                                 /* ... */
>>                         }
>> }
>>
>> As a result, when 'call'-ing a subprog, tail_call_cnt is transferred via rax.
>> Are all subprogs invoked via 'call', including non-'is_func' subprogs?
> 
> Let me ask again. Do you see a subprog that has is_func==0?

Oh, I get it.

In jit_subprogs(), all subprogs are marked 'is_func'.

So, it's unnecessary to check tgt_prog->aux->func[subprog]->is_func.

I'll submit a new RFC PATCH later.

Thanks,
Leon


Thread overview: 8 messages
2023-08-18 15:12 [RFC PATCH bpf-next v2 0/2] bpf, x64: Fix tailcall infinite loop Leon Hwang
2023-08-18 15:12 ` [RFC PATCH bpf-next v2 1/2] " Leon Hwang
2023-08-18 15:25   ` Leon Hwang
2023-08-21 22:33   ` Alexei Starovoitov
2023-08-22  3:17     ` Leon Hwang
2023-08-22 21:29       ` Alexei Starovoitov
2023-08-23  1:49         ` Leon Hwang
2023-08-18 15:12 ` [RFC PATCH bpf-next v2 2/2] selftests/bpf: Add testcases for tailcall infinite loop fixing Leon Hwang
