BPF List
* [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn.
@ 2024-06-19  1:18 Alexei Starovoitov
  2024-06-19  1:18 ` [PATCH v2 bpf 2/2] selftests/bpf: Tests with may_goto and jumps " Alexei Starovoitov
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Alexei Starovoitov @ 2024-06-19  1:18 UTC (permalink / raw)
  To: bpf; +Cc: daniel, andrii, martin.lau, memxor, eddyz87, zacecob, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

When the following program is processed by the verifier:
L1: may_goto L2
    goto L1
L2: w0 = 0
    exit

the may_goto insn is first converted to:
L1: r11 = *(u64 *)(r10 -8)
    if r11 == 0x0 goto L2
    r11 -= 1
    *(u64 *)(r10 -8) = r11
    goto L1
L2: w0 = 0
    exit

then later, as the last step, the verifier inserts:
  *(u64 *)(r10 -8) = BPF_MAX_LOOPS
as the first insn of the program to initialize the loop count.

When the first insn happens to be a branch target of some jmp, the
bpf_patch_insn_data() logic will produce:
L1: *(u64 *)(r10 -8) = BPF_MAX_LOOPS
    r11 = *(u64 *)(r10 -8)
    if r11 == 0x0 goto L2
    r11 -= 1
    *(u64 *)(r10 -8) = r11
    goto L1
L2: w0 = 0
    exit

because instruction patching adjusts all jmps and calls, but for this
particular corner case it is incorrect: the L1 label should be one
instruction down, like:
    *(u64 *)(r10 -8) = BPF_MAX_LOOPS
L1: r11 = *(u64 *)(r10 -8)
    if r11 == 0x0 goto L2
    r11 -= 1
    *(u64 *)(r10 -8) = r11
    goto L1
L2: w0 = 0
    exit

and that's what this patch is fixing:
after bpf_patch_insn_data(), call adjust_jmp_off() to adjust all jmps
that point to the newly inserted BPF_ST insn so they point to the insn
after it.

Note that bpf_patch_insn_data() cannot easily be changed to accommodate
this logic, since jumps that point before or after a sequence of patched
instructions have to be adjusted with the full length of the patch.

Conceptually it's somewhat similar to an "insert" of instructions between
other instructions, which has weird semantics: an "insert" before the 1st
insn would require adjusting CALL insns to point to the newly inserted 1st
insn, but not JMP insns that point to the 1st insn, while still adjusting
JMP insns that cross over the 1st insn (that point to an insn before or
after it). Hence use the simple adjust_jmp_off() logic to fix this corner
case. Ideally bpf_patch_insn_data() would carry auxiliary info saying
where the start of the newly inserted patch is, but that would be too
complex to backport.

Reported-by: Zac Ecob <zacecob@protonmail.com>
Closes: https://lore.kernel.org/bpf/CAADnVQJ_WWx8w4b=6Gc2EpzAjgv+6A0ridnMz2TvS2egj4r3Gw@mail.gmail.com/
Fixes: 011832b97b31 ("bpf: Introduce may_goto instruction")
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/verifier.c | 50 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e0a398a97d32..5586a571bf55 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12721,6 +12721,16 @@ static bool signed_add32_overflows(s32 a, s32 b)
 	return res < a;
 }
 
+static bool signed_add16_overflows(s16 a, s16 b)
+{
+	/* Do the add in u16, where overflow is well-defined */
+	s16 res = (s16)((u16)a + (u16)b);
+
+	if (b < 0)
+		return res > a;
+	return res < a;
+}
+
 static bool signed_sub_overflows(s64 a, s64 b)
 {
 	/* Do the sub in u64, where overflow is well-defined */
@@ -18732,6 +18742,39 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
 	return new_prog;
 }
 
+/*
+ * For all jmp insns in a given 'prog' that point to 'tgt_idx' insn adjust the
+ * jump offset by 'delta'.
+ */
+static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
+{
+	struct bpf_insn *insn = prog->insnsi;
+	u32 insn_cnt = prog->len, i;
+
+	for (i = 0; i < insn_cnt; i++, insn++) {
+		u8 code = insn->code;
+
+		if ((BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32) ||
+		    BPF_OP(code) == BPF_CALL || BPF_OP(code) == BPF_EXIT)
+			continue;
+
+		if (insn->code == (BPF_JMP32 | BPF_JA)) {
+			if (i + 1 + insn->imm != tgt_idx)
+				continue;
+			if (signed_add32_overflows(insn->imm, delta))
+				return -ERANGE;
+			insn->imm += delta;
+		} else {
+			if (i + 1 + insn->off != tgt_idx)
+				continue;
+			if (signed_add16_overflows(insn->imm, delta))
+				return -ERANGE;
+			insn->off += delta;
+		}
+	}
+	return 0;
+}
+
 static int adjust_subprog_starts_after_remove(struct bpf_verifier_env *env,
 					      u32 off, u32 cnt)
 {
@@ -20548,6 +20591,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		if (!new_prog)
 			return -ENOMEM;
 		env->prog = prog = new_prog;
+		/*
+		 * If may_goto is a first insn of a prog there could be a jmp
+		 * insn that points to it, hence adjust all such jmps to point
+		 * to insn after BPF_ST that inits may_goto count.
+		 * Adjustment will succeed because bpf_patch_insn_data() didn't fail.
+		 */
+		WARN_ON(adjust_jmp_off(env->prog, subprog_start, 1));
 	}
 
 	/* Since poke tab is now finalized, publish aux to tracker. */
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH v2 bpf 2/2] selftests/bpf: Tests with may_goto and jumps to the 1st insn
  2024-06-19  1:18 [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn Alexei Starovoitov
@ 2024-06-19  1:18 ` Alexei Starovoitov
  2024-06-21 18:20 ` [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump " patchwork-bot+netdevbpf
  2024-07-12  6:41 ` Shung-Hsi Yu
  2 siblings, 0 replies; 5+ messages in thread
From: Alexei Starovoitov @ 2024-06-19  1:18 UTC (permalink / raw)
  To: bpf; +Cc: daniel, andrii, martin.lau, memxor, eddyz87, zacecob, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Add a few tests with may_goto and jumps to the 1st insn.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 .../bpf/progs/verifier_iterating_callbacks.c  | 94 +++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
index bd676d7e615f..8885e5239d6b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
+++ b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
@@ -307,6 +307,100 @@ int iter_limit_bug(struct __sk_buff *skb)
 	return 0;
 }
 
+SEC("socket")
+__success __retval(0)
+__naked void ja_and_may_goto(void)
+{
+	asm volatile ("			\
+l0_%=:	.byte 0xe5; /* may_goto */	\
+	.byte 0; /* regs */		\
+	.short 1; /* off 1 */		\
+	.long 0; /* imm */		\
+	goto l0_%=;			\
+	r0 = 0;				\
+	exit;				\
+"	::: __clobber_common);
+}
+
+SEC("socket")
+__success __retval(0)
+__naked void ja_and_may_goto2(void)
+{
+	asm volatile ("			\
+l0_%=:	r0 = 0;				\
+	.byte 0xe5; /* may_goto */	\
+	.byte 0; /* regs */		\
+	.short 1; /* off 1 */		\
+	.long 0; /* imm */		\
+	goto l0_%=;			\
+	r0 = 0;				\
+	exit;				\
+"	::: __clobber_common);
+}
+
+SEC("socket")
+__success __retval(0)
+__naked void jlt_and_may_goto(void)
+{
+	asm volatile ("			\
+l0_%=:	call %[bpf_jiffies64];		\
+	.byte 0xe5; /* may_goto */	\
+	.byte 0; /* regs */		\
+	.short 1; /* off 1 */		\
+	.long 0; /* imm */		\
+	if r0 < 10 goto l0_%=;		\
+	r0 = 0;				\
+	exit;				\
+"	:: __imm(bpf_jiffies64)
+	: __clobber_all);
+}
+
+#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
+	(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+	defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
+	defined(__TARGET_ARCH_loongarch)) && \
+	__clang_major__ >= 18
+SEC("socket")
+__success __retval(0)
+__naked void gotol_and_may_goto(void)
+{
+	asm volatile ("			\
+l0_%=:	r0 = 0;				\
+	.byte 0xe5; /* may_goto */	\
+	.byte 0; /* regs */		\
+	.short 1; /* off 1 */		\
+	.long 0; /* imm */		\
+	gotol l0_%=;			\
+	r0 = 0;				\
+	exit;				\
+"	::: __clobber_common);
+}
+#endif
+
+SEC("socket")
+__success __retval(0)
+__naked void ja_and_may_goto_subprog(void)
+{
+	asm volatile ("			\
+	call subprog_with_may_goto;	\
+	exit;				\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __used
+void subprog_with_may_goto(void)
+{
+	asm volatile ("			\
+l0_%=:	.byte 0xe5; /* may_goto */	\
+	.byte 0; /* regs */		\
+	.short 1; /* off 1 */		\
+	.long 0; /* imm */		\
+	goto l0_%=;			\
+	r0 = 0;				\
+	exit;				\
+"	::: __clobber_all);
+}
+
 #define ARR_SZ 1000000
 int zero;
 char arr[ARR_SZ];
-- 
2.43.0



* Re: [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn.
  2024-06-19  1:18 [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn Alexei Starovoitov
  2024-06-19  1:18 ` [PATCH v2 bpf 2/2] selftests/bpf: Tests with may_goto and jumps " Alexei Starovoitov
@ 2024-06-21 18:20 ` patchwork-bot+netdevbpf
  2024-07-12  6:41 ` Shung-Hsi Yu
  2 siblings, 0 replies; 5+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-06-21 18:20 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, daniel, andrii, martin.lau, memxor, eddyz87, zacecob,
	kernel-team

Hello:

This series was applied to bpf/bpf.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 18 Jun 2024 18:18:58 -0700 you wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> When the following program is processed by the verifier:
> L1: may_goto L2
>     goto L1
> L2: w0 = 0
>     exit
> 
> [...]

Here is the summary with links:
  - [v2,bpf,1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn.
    https://git.kernel.org/bpf/bpf/c/5337ac4c9b80
  - [v2,bpf,2/2] selftests/bpf: Tests with may_goto and jumps to the 1st insn
    https://git.kernel.org/bpf/bpf/c/2673315947c9

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn.
  2024-06-19  1:18 [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn Alexei Starovoitov
  2024-06-19  1:18 ` [PATCH v2 bpf 2/2] selftests/bpf: Tests with may_goto and jumps " Alexei Starovoitov
  2024-06-21 18:20 ` [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump " patchwork-bot+netdevbpf
@ 2024-07-12  6:41 ` Shung-Hsi Yu
  2024-07-12 14:23   ` Alexei Starovoitov
  2 siblings, 1 reply; 5+ messages in thread
From: Shung-Hsi Yu @ 2024-07-12  6:41 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, daniel, andrii, martin.lau, memxor, eddyz87, zacecob,
	kernel-team

On Tue, Jun 18, 2024 at 06:18:58PM GMT, Alexei Starovoitov wrote:
...
> +static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
> +{
> +	struct bpf_insn *insn = prog->insnsi;
> +	u32 insn_cnt = prog->len, i;
> +
> +	for (i = 0; i < insn_cnt; i++, insn++) {
> +		u8 code = insn->code;
> +
> +		if ((BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32) ||
> +		    BPF_OP(code) == BPF_CALL || BPF_OP(code) == BPF_EXIT)
> +			continue;
> +
> +		if (insn->code == (BPF_JMP32 | BPF_JA)) {
> +			if (i + 1 + insn->imm != tgt_idx)
> +				continue;
> +			if (signed_add32_overflows(insn->imm, delta))
> +				return -ERANGE;
> +			insn->imm += delta;
> +		} else {
> +			if (i + 1 + insn->off != tgt_idx)
> +				continue;
> +			if (signed_add16_overflows(insn->imm, delta))

Looks like this should be signed_add16_overflows(insn->**off**, delta) instead?

I'll proceed assuming so, and include a fix for this in v3 of the
overflow-checker refactoring patch-set.

> +				return -ERANGE;
> +			insn->off += delta;
> +		}
> +	}
> +	return 0;
> +}
...


* Re: [PATCH v2 bpf 1/2] bpf: Fix the corner case with may_goto and jump to the 1st insn.
  2024-07-12  6:41 ` Shung-Hsi Yu
@ 2024-07-12 14:23   ` Alexei Starovoitov
  0 siblings, 0 replies; 5+ messages in thread
From: Alexei Starovoitov @ 2024-07-12 14:23 UTC (permalink / raw)
  To: Shung-Hsi Yu
  Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Kumar Kartikeya Dwivedi, Eddy Z, Zac Ecob, Kernel Team

On Thu, Jul 11, 2024 at 11:41 PM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> On Tue, Jun 18, 2024 at 06:18:58PM GMT, Alexei Starovoitov wrote:
> ...
> > +static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
> > +{
> > +     struct bpf_insn *insn = prog->insnsi;
> > +     u32 insn_cnt = prog->len, i;
> > +
> > +     for (i = 0; i < insn_cnt; i++, insn++) {
> > +             u8 code = insn->code;
> > +
> > +             if ((BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32) ||
> > +                 BPF_OP(code) == BPF_CALL || BPF_OP(code) == BPF_EXIT)
> > +                     continue;
> > +
> > +             if (insn->code == (BPF_JMP32 | BPF_JA)) {
> > +                     if (i + 1 + insn->imm != tgt_idx)
> > +                             continue;
> > +                     if (signed_add32_overflows(insn->imm, delta))
> > +                             return -ERANGE;
> > +                     insn->imm += delta;
> > +             } else {
> > +                     if (i + 1 + insn->off != tgt_idx)
> > +                             continue;
> > +                     if (signed_add16_overflows(insn->imm, delta))
>
> Looks like this be signed_add16_overflows(insn->**off**, delta) instead?
>
> I'll proceed assuming so, and include a fix for this in v3 of the
> overflow-checker refactoring patch-set.

Ohh. Good catch. Thanks!

