* [PATCH bpf-next 0/2] Use overflow.h helpers to check for overflows
@ 2024-06-23  7:03 Shung-Hsi Yu
  2024-06-23  7:03 ` [PATCH bpf-next 1/2] bpf: verifier: use check_add_overflow() to check for addition overflows Shung-Hsi Yu
  2024-06-23  7:03 ` [PATCH bpf-next 2/2] bpf: verifier: use check_sub_overflow() to check for subtraction overflows Shung-Hsi Yu

From: Shung-Hsi Yu @ 2024-06-23  7:03 UTC
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
      Yonghong Song, Shung-Hsi Yu

This patch set refactors kernel/bpf/verifier.c to use the type-agnostic,
generic overflow-check helpers defined in include/linux/overflow.h for
addition and subtraction overflow checks, and drops the
signed_*_overflows() helpers we currently have in kernel/bpf/verifier.c.
There should be no functional change in how the verifier works. The main
motivation is to make future refactoring[1] easier.

While check_mul_overflow() also exists and could potentially replace what
we have in scalar*_min_max_mul(), it does not help with refactoring and
would either change how the verifier works (e.g. lifting the restriction
on umax <= U32_MAX and u32_max <= U16_MAX) or make the code slightly
harder to read, so it is left for a future endeavour.

1: https://github.com/kernel-patches/bpf/pull/7205/commits

Shung-Hsi Yu (2):
  bpf: use check_add_overflow() to check for addition overflows
  bpf: use check_sub_overflow() to check for subtraction overflows

 kernel/bpf/verifier.c | 120 ++++++++++++++++--------------------------
 1 file changed, 44 insertions(+), 76 deletions(-)

-- 
2.45.2
* [PATCH bpf-next 1/2] bpf: verifier: use check_add_overflow() to check for addition overflows
  2024-06-23  7:03 ` Shung-Hsi Yu
  2024-06-24  3:38   ` Alexei Starovoitov

From: Shung-Hsi Yu @ 2024-06-23  7:03 UTC
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
      Yonghong Song, Shung-Hsi Yu

signed_add*_overflows() was added back when there was no overflow-check
helper. With the introduction of such helpers in commit f0907827a8a91
("compiler.h: enable builtin overflow checkers and add fallback code"), we
can drop signed_add*_overflows() in kernel/bpf/verifier.c and use the
generic check_add_overflow() instead.

This will make future refactoring easier, and may allow taking advantage
of compiler-emitted hardware instructions that efficiently implement
these checks.

Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
shung-hsi.yu: maybe there's a better name instead of {min,max}_cur, but
I couldn't come up with one.
---
 kernel/bpf/verifier.c | 74 ++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 44 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3f6be4923655..b1ad76c514f5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12720,26 +12720,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
-static bool signed_add_overflows(s64 a, s64 b)
-{
-	/* Do the add in u64, where overflow is well-defined */
-	s64 res = (s64)((u64)a + (u64)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
-static bool signed_add32_overflows(s32 a, s32 b)
-{
-	/* Do the add in u32, where overflow is well-defined */
-	s32 res = (s32)((u32)a + (u32)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
 static bool signed_sub_overflows(s64 a, s64 b)
 {
 	/* Do the sub in u64, where overflow is well-defined */
@@ -13134,6 +13114,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	struct bpf_sanitize_info info = {};
 	u8 opcode = BPF_OP(insn->code);
 	u32 dst = insn->dst_reg;
+	s64 smin_cur, smax_cur;
+	u64 umin_cur, umax_cur;
 	int ret;
 
 	dst_reg = &regs[dst];
@@ -13241,21 +13223,21 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		 * added into the variable offset, and we copy the fixed offset
 		 * from ptr_reg.
 		 */
-		if (signed_add_overflows(smin_ptr, smin_val) ||
-		    signed_add_overflows(smax_ptr, smax_val)) {
+		if (check_add_overflow(smin_ptr, smin_val, &smin_cur) ||
+		    check_add_overflow(smax_ptr, smax_val, &smax_cur)) {
 			dst_reg->smin_value = S64_MIN;
 			dst_reg->smax_value = S64_MAX;
 		} else {
-			dst_reg->smin_value = smin_ptr + smin_val;
-			dst_reg->smax_value = smax_ptr + smax_val;
+			dst_reg->smin_value = smin_cur;
+			dst_reg->smax_value = smax_cur;
 		}
-		if (umin_ptr + umin_val < umin_ptr ||
-		    umax_ptr + umax_val < umax_ptr) {
+		if (check_add_overflow(umin_ptr, umin_val, &umin_cur) ||
+		    check_add_overflow(umax_ptr, umax_val, &umax_cur)) {
 			dst_reg->umin_value = 0;
 			dst_reg->umax_value = U64_MAX;
 		} else {
-			dst_reg->umin_value = umin_ptr + umin_val;
-			dst_reg->umax_value = umax_ptr + umax_val;
+			dst_reg->umin_value = umin_cur;
+			dst_reg->umax_value = umax_cur;
 		}
 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
 		dst_reg->off = ptr_reg->off;
@@ -13362,22 +13344,24 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
 	s32 smax_val = src_reg->s32_max_value;
 	u32 umin_val = src_reg->u32_min_value;
 	u32 umax_val = src_reg->u32_max_value;
+	s32 smin_cur, smax_cur;
+	u32 umin_cur, umax_cur;
 
-	if (signed_add32_overflows(dst_reg->s32_min_value, smin_val) ||
-	    signed_add32_overflows(dst_reg->s32_max_value, smax_val)) {
+	if (check_add_overflow(dst_reg->s32_min_value, smin_val, &smin_cur) ||
+	    check_add_overflow(dst_reg->s32_max_value, smax_val, &smax_cur)) {
 		dst_reg->s32_min_value = S32_MIN;
 		dst_reg->s32_max_value = S32_MAX;
 	} else {
-		dst_reg->s32_min_value += smin_val;
-		dst_reg->s32_max_value += smax_val;
+		dst_reg->s32_min_value = smin_cur;
+		dst_reg->s32_max_value = smax_cur;
 	}
-	if (dst_reg->u32_min_value + umin_val < umin_val ||
-	    dst_reg->u32_max_value + umax_val < umax_val) {
+	if (check_add_overflow(dst_reg->u32_min_value, umin_val, &umin_cur) ||
+	    check_add_overflow(dst_reg->u32_max_value, umax_val, &umax_cur)) {
 		dst_reg->u32_min_value = 0;
 		dst_reg->u32_max_value = U32_MAX;
 	} else {
-		dst_reg->u32_min_value += umin_val;
-		dst_reg->u32_max_value += umax_val;
+		dst_reg->u32_min_value = umin_cur;
+		dst_reg->u32_max_value = umax_cur;
 	}
 }
 
@@ -13388,22 +13372,24 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
 	s64 smax_val = src_reg->smax_value;
 	u64 umin_val = src_reg->umin_value;
 	u64 umax_val = src_reg->umax_value;
+	s64 smin_cur, smax_cur;
+	u64 umin_cur, umax_cur;
 
-	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
-	    signed_add_overflows(dst_reg->smax_value, smax_val)) {
+	if (check_add_overflow(dst_reg->smin_value, smin_val, &smin_cur) ||
+	    check_add_overflow(dst_reg->smax_value, smax_val, &smax_cur)) {
 		dst_reg->smin_value = S64_MIN;
 		dst_reg->smax_value = S64_MAX;
 	} else {
-		dst_reg->smin_value += smin_val;
-		dst_reg->smax_value += smax_val;
+		dst_reg->smin_value = smin_cur;
+		dst_reg->smax_value = smax_cur;
 	}
-	if (dst_reg->umin_value + umin_val < umin_val ||
-	    dst_reg->umax_value + umax_val < umax_val) {
+	if (check_add_overflow(dst_reg->umin_value, umin_val, &umin_cur) ||
+	    check_add_overflow(dst_reg->umax_value, umax_val, &umax_cur)) {
 		dst_reg->umin_value = 0;
 		dst_reg->umax_value = U64_MAX;
 	} else {
-		dst_reg->umin_value += umin_val;
-		dst_reg->umax_value += umax_val;
+		dst_reg->umin_value = umin_cur;
+		dst_reg->umax_value = umax_cur;
 	}
 }
-- 
2.45.2
* Re: [PATCH bpf-next 1/2] bpf: verifier: use check_add_overflow() to check for addition overflows
  2024-06-24  3:38 ` Alexei Starovoitov

From: Alexei Starovoitov @ 2024-06-24  3:38 UTC
  To: Shung-Hsi Yu
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
      Andrii Nakryiko, Yonghong Song

On Sun, Jun 23, 2024 at 12:03 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> signed_add*_overflows() was added back when there was no overflow-check
> helper. With the introduction of such helpers in commit f0907827a8a91
> ("compiler.h: enable builtin overflow checkers and add fallback code"), we
> can drop signed_add*_overflows() in kernel/bpf/verifier.c and use the
> generic check_add_overflow() instead.
>
> This will make future refactoring easier, and possibly taking advantage of
> compiler-emitted hardware instructions that efficiently implement these
> checks.
>
> Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> ---
> shung-hsi.yu: maybe there's a better name instead of {min,max}_cur, but
> I coudln't come up with one.

Just smin/smax without _cur suffix ?

What is the asm before/after ?
* Re: [PATCH bpf-next 1/2] bpf: verifier: use check_add_overflow() to check for addition overflows
  2024-06-24 15:26 ` Shung-Hsi Yu

From: Shung-Hsi Yu @ 2024-06-24 15:26 UTC
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
      Andrii Nakryiko, Yonghong Song

On Sun, Jun 23, 2024 at 08:38:44PM GMT, Alexei Starovoitov wrote:
> On Sun, Jun 23, 2024 at 12:03 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
> > signed_add*_overflows() was added back when there was no overflow-check
> > helper. With the introduction of such helpers in commit f0907827a8a91
> > ("compiler.h: enable builtin overflow checkers and add fallback code"), we
> > can drop signed_add*_overflows() in kernel/bpf/verifier.c and use the
> > generic check_add_overflow() instead.
> >
> > This will make future refactoring easier, and possibly taking advantage of
> > compiler-emitted hardware instructions that efficiently implement these
> > checks.
> >
> > Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> > ---
> > shung-hsi.yu: maybe there's a better name instead of {min,max}_cur, but
> > I coudln't come up with one.
>
> Just smin/smax without _cur suffix ?

Going with Jiri's suggestion under patch 2 to drop the new variables
instead.

> What is the asm before/after ?

Tested with only this patch applied and compiled with GCC 13.3.0 for
x86_64. x86 reading is difficult for me, but I think the relevant changes
for signed addition are:

Before:

			s64 smin_val = src_reg->smin_value;
     675:	4c 8b 46 28          	mov    0x28(%rsi),%r8
		{
     679:	48 89 f8             	mov    %rdi,%rax
			s64 smax_val = src_reg->smax_value;
			u64 umin_val = src_reg->umin_value;
			u64 umax_val = src_reg->umax_value;

			if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     67c:	48 8b 7f 28          	mov    0x28(%rdi),%rdi
			u64 umin_val = src_reg->umin_value;
     680:	48 8b 56 38          	mov    0x38(%rsi),%rdx
			u64 umax_val = src_reg->umax_value;
     684:	48 8b 4e 40          	mov    0x40(%rsi),%rcx
			s64 res = (s64)((u64)a + (u64)b);
     688:	4d 8d 0c 38          	lea    (%r8,%rdi,1),%r9
			return res < a;
     68c:	4c 39 cf             	cmp    %r9,%rdi
     68f:	41 0f 9f c2          	setg   %r10b
			if (b < 0)
     693:	4d 85 c0             	test   %r8,%r8
     696:	0f 88 8f 00 00 00    	js     72b <scalar_min_max_add+0xbb>
			    signed_add_overflows(dst_reg->smax_value, smax_val)) {
				dst_reg->smin_value = S64_MIN;
				dst_reg->smax_value = S64_MAX;
     69c:	48 bf ff ff ff ff ff 	movabs $0x7fffffffffffffff,%rdi
     6a3:	ff ff 7f
			s64 smax_val = src_reg->smax_value;
     6a6:	4c 8b 46 30          	mov    0x30(%rsi),%r8
				dst_reg->smin_value = S64_MIN;
     6aa:	48 be 00 00 00 00 00 	movabs $0x8000000000000000,%rsi
     6b1:	00 00 80
			if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     6b4:	45 84 d2             	test   %r10b,%r10b
     6b7:	75 12                	jne    6cb <scalar_min_max_add+0x5b>
			    signed_add_overflows(dst_reg->smax_value, smax_val)) {
     6b9:	4c 8b 50 30          	mov    0x30(%rax),%r10
			s64 res = (s64)((u64)a + (u64)b);
     6bd:	4f 8d 1c 02          	lea    (%r10,%r8,1),%r11
			if (b < 0)
     6c1:	4d 85 c0             	test   %r8,%r8
     6c4:	78 58                	js     71e <scalar_min_max_add+0xae>
			if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     6c6:	4d 39 da             	cmp    %r11,%r10
     6c9:	7e 58                	jle    723 <scalar_min_max_add+0xb3>
     6cb:	48 89 70 28          	mov    %rsi,0x28(%rax)
	...
			if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     71e:	4d 39 da             	cmp    %r11,%r10
     721:	7c a8                	jl     6cb <scalar_min_max_add+0x5b>
				dst_reg->smin_value += smin_val;
     723:	4c 89 ce             	mov    %r9,%rsi
				dst_reg->smax_value += smax_val;
     726:	4c 89 df             	mov    %r11,%rdi
     729:	eb a0                	jmp    6cb <scalar_min_max_add+0x5b>
			return res > a;
     72b:	4c 39 cf             	cmp    %r9,%rdi
     72e:	41 0f 9c c2          	setl   %r10b
     732:	e9 65 ff ff ff       	jmp    69c <scalar_min_max_add+0x2c>
     737:	66 0f 1f 84 00 00 00 	nopw   0x0(%rax,%rax,1)
     73e:	00 00
     740:	90                   	nop

After:

			if (check_add_overflow(dst_reg->smin_value, smin_val, &smin) ||
   142d3:	4c 89 de             	mov    %r11,%rsi
   142d6:	49 03 74 24 28       	add    0x28(%r12),%rsi
   142db:	41 89 54 24 54       	mov    %edx,0x54(%r12)
				dst_reg->smax_value = S64_MAX;
   142e0:	48 ba ff ff ff ff ff 	movabs $0x7fffffffffffffff,%rdx
   142e7:	ff ff 7f
   142ea:	41 89 44 24 50       	mov    %eax,0x50(%r12)
				dst_reg->smin_value = S64_MIN;
   142ef:	48 b8 00 00 00 00 00 	movabs $0x8000000000000000,%rax
   142f6:	00 00 80
			if (check_add_overflow(dst_reg->smin_value, smin_val, &smin) ||
   142f9:	70 27                	jo     14322 <adjust_reg_min_max_vals+0xde2>
			    check_add_overflow(dst_reg->smax_value, smax_val, &smax)) {
   142fb:	49 03 4c 24 30       	add    0x30(%r12),%rcx
   14300:	0f 90 c0             	seto   %al
   14303:	48 89 ca             	mov    %rcx,%rdx
   14306:	0f b6 c0             	movzbl %al,%eax
				dst_reg->smax_value = S64_MAX;
   14309:	48 85 c0             	test   %rax,%rax
   1430c:	48 b8 ff ff ff ff ff 	movabs $0x7fffffffffffffff,%rax
   14313:	ff ff 7f
   14316:	48 0f 45 d0          	cmovne %rax,%rdx
   1431a:	48 8d 40 01          	lea    0x1(%rax),%rax
   1431e:	48 0f 44 c6          	cmove  %rsi,%rax

Also uploaded complete disassembly for before[1] and after[2].

On a brief look it seems before this change it does lea, cmp, setg with
a bit more branching, and after this change it uses add, seto,
cmove/cmovne. Will look a bit more into this.

1: https://pastebin.com/NPkG1Ydd
2: https://pastebin.com/v9itLFnp
* Re: [PATCH bpf-next 1/2] bpf: verifier: use check_add_overflow() to check for addition overflows
  2024-06-24 20:17 ` Alexei Starovoitov

From: Alexei Starovoitov @ 2024-06-24 20:17 UTC
  To: Shung-Hsi Yu
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
      Andrii Nakryiko, Yonghong Song

On Mon, Jun 24, 2024 at 8:26 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> On Sun, Jun 23, 2024 at 08:38:44PM GMT, Alexei Starovoitov wrote:
> > Just smin/smax without _cur suffix ?
>
> Going with Jiri's suggestion under patch 2 to drop the new variables
> instead.
>
> > What is the asm before/after ?
>
> Tested with only this patch applied and compiled with GCC 13.3.0 for
> x86_64. x86 reading is difficult for me, but I think the relevant changes
> for signed addition are:

SNIP (before/after disassembly quoted in full in the parent message)

> Also uploaded complete disassembly for before[1] and after[2].
>
> On a brief look it seems before this change it does lea, cmp, setg with
> a bit more branching, and after this change it uses add, seto,
> cmove/cmovne. Will look a bit more into this.

New code is better indeed. That alone is worth doing this change.
Pls include before/after asm in the commit log.
* [PATCH bpf-next 2/2] bpf: verifier: use check_sub_overflow() to check for subtraction overflows
  2024-06-23  7:03 ` Shung-Hsi Yu
  2024-06-24  7:15   ` Jiri Olsa

From: Shung-Hsi Yu @ 2024-06-23  7:03 UTC
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
      Yonghong Song, Shung-Hsi Yu

Similar to the previous patch, which drops signed_add*_overflows() in
favour of the (compiler) builtin-based check_add_overflow(), do the same
for signed_sub*_overflows() and replace them with the generic
check_sub_overflow() to make future refactoring easier.

The unsigned overflow checks for subtraction do not use helpers and are
simple enough already, so they're left untouched.

Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
 kernel/bpf/verifier.c | 46 +++++++++++++------------------------------
 1 file changed, 14 insertions(+), 32 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b1ad76c514f5..2c1657a26fdb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12720,26 +12720,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
-static bool signed_sub_overflows(s64 a, s64 b)
-{
-	/* Do the sub in u64, where overflow is well-defined */
-	s64 res = (s64)((u64)a - (u64)b);
-
-	if (b < 0)
-		return res < a;
-	return res > a;
-}
-
-static bool signed_sub32_overflows(s32 a, s32 b)
-{
-	/* Do the sub in u32, where overflow is well-defined */
-	s32 res = (s32)((u32)a - (u32)b);
-
-	if (b < 0)
-		return res < a;
-	return res > a;
-}
-
 static bool check_reg_sane_offset(struct bpf_verifier_env *env,
 				  const struct bpf_reg_state *reg,
 				  enum bpf_reg_type type)
@@ -13280,14 +13260,14 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		/* A new variable offset is created. If the subtrahend is known
 		 * nonnegative, then any reg->range we had before is still good.
 		 */
-		if (signed_sub_overflows(smin_ptr, smax_val) ||
-		    signed_sub_overflows(smax_ptr, smin_val)) {
+		if (check_sub_overflow(smin_ptr, smax_val, &smin_cur) ||
+		    check_sub_overflow(smax_ptr, smin_val, &smax_cur)) {
 			/* Overflow possible, we know nothing */
 			dst_reg->smin_value = S64_MIN;
 			dst_reg->smax_value = S64_MAX;
 		} else {
-			dst_reg->smin_value = smin_ptr - smax_val;
-			dst_reg->smax_value = smax_ptr - smin_val;
+			dst_reg->smin_value = smin_cur;
+			dst_reg->smax_value = smax_cur;
 		}
 		if (umin_ptr < umax_val) {
 			/* Overflow possible, we know nothing */
@@ -13400,15 +13380,16 @@ static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
 	s32 smax_val = src_reg->s32_max_value;
 	u32 umin_val = src_reg->u32_min_value;
 	u32 umax_val = src_reg->u32_max_value;
+	s32 smin_cur, smax_cur;
 
-	if (signed_sub32_overflows(dst_reg->s32_min_value, smax_val) ||
-	    signed_sub32_overflows(dst_reg->s32_max_value, smin_val)) {
+	if (check_sub_overflow(dst_reg->s32_min_value, smax_val, &smin_cur) ||
+	    check_sub_overflow(dst_reg->s32_max_value, smin_val, &smax_cur)) {
 		/* Overflow possible, we know nothing */
 		dst_reg->s32_min_value = S32_MIN;
 		dst_reg->s32_max_value = S32_MAX;
 	} else {
-		dst_reg->s32_min_value -= smax_val;
-		dst_reg->s32_max_value -= smin_val;
+		dst_reg->s32_min_value = smin_cur;
+		dst_reg->s32_max_value = smax_cur;
 	}
 	if (dst_reg->u32_min_value < umax_val) {
 		/* Overflow possible, we know nothing */
@@ -13428,15 +13409,16 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
 	s64 smax_val = src_reg->smax_value;
 	u64 umin_val = src_reg->umin_value;
 	u64 umax_val = src_reg->umax_value;
+	s64 smin_cur, smax_cur;
 
-	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
-	    signed_sub_overflows(dst_reg->smax_value, smin_val)) {
+	if (check_sub_overflow(dst_reg->smin_value, smax_val, &smin_cur) ||
+	    check_sub_overflow(dst_reg->smax_value, smin_val, &smax_cur)) {
 		/* Overflow possible, we know nothing */
 		dst_reg->smin_value = S64_MIN;
 		dst_reg->smax_value = S64_MAX;
 	} else {
-		dst_reg->smin_value = smin_cur;
-		dst_reg->smax_value = smax_cur;
+		dst_reg->smin_value = smin_cur;
+		dst_reg->smax_value = smax_cur;
 	}
 	if (dst_reg->umin_value < umax_val) {
 		/* Overflow possible, we know nothing */
-- 
2.45.2
* Re: [PATCH bpf-next 2/2] bpf: verifier: use check_sub_overflow() to check for subtraction overflows
  2024-06-24  7:15 ` Jiri Olsa

From: Jiri Olsa @ 2024-06-24  7:15 UTC
  To: Shung-Hsi Yu
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
      Andrii Nakryiko, Yonghong Song

On Sun, Jun 23, 2024 at 03:03:20PM +0800, Shung-Hsi Yu wrote:

SNIP

> @@ -13428,15 +13409,16 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
>  	s64 smax_val = src_reg->smax_value;
>  	u64 umin_val = src_reg->umin_value;
>  	u64 umax_val = src_reg->umax_value;
> +	s64 smin_cur, smax_cur;
>  
> -	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
> -	    signed_sub_overflows(dst_reg->smax_value, smin_val)) {
> +	if (check_sub_overflow(dst_reg->smin_value, smax_val, &smin_cur) ||
> +	    check_sub_overflow(dst_reg->smax_value, smin_val, &smax_cur)) {
>  		/* Overflow possible, we know nothing */
>  		dst_reg->smin_value = S64_MIN;
>  		dst_reg->smax_value = S64_MAX;
>  	} else {
> -		dst_reg->smin_value -= smax_val;
> -		dst_reg->smax_value -= smin_val;
> +		dst_reg->smin_value = smin_cur;
> +		dst_reg->smax_value = smax_cur;
>  	}
>  	if (dst_reg->umin_value < umax_val) {
>  		/* Overflow possible, we know nothing */
> --
> 2.45.2

could we use the dst_reg->smin_* pointers directly as the sum pointer
arguments in check_add_overflow ? ditto for the check_sub_overflow
in the other change

jirka

---
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3f6be4923655..dbb99818e938 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12720,26 +12720,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
-static bool signed_add_overflows(s64 a, s64 b)
-{
-	/* Do the add in u64, where overflow is well-defined */
-	s64 res = (s64)((u64)a + (u64)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
-static bool signed_add32_overflows(s32 a, s32 b)
-{
-	/* Do the add in u32, where overflow is well-defined */
-	s32 res = (s32)((u32)a + (u32)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
 static bool signed_sub_overflows(s64 a, s64 b)
 {
 	/* Do the sub in u64, where overflow is well-defined */
@@ -13241,21 +13221,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		 * added into the variable offset, and we copy the fixed offset
 		 * from ptr_reg.
 		 */
-		if (signed_add_overflows(smin_ptr, smin_val) ||
-		    signed_add_overflows(smax_ptr, smax_val)) {
+		if (check_add_overflow(smin_ptr, smin_val, &dst_reg->smin_value) ||
+		    check_add_overflow(smax_ptr, smax_val, &dst_reg->smax_value)) {
 			dst_reg->smin_value = S64_MIN;
 			dst_reg->smax_value = S64_MAX;
-		} else {
-			dst_reg->smin_value = smin_ptr + smin_val;
-			dst_reg->smax_value = smax_ptr + smax_val;
 		}
-		if (umin_ptr + umin_val < umin_ptr ||
-		    umax_ptr + umax_val < umax_ptr) {
+		if (check_add_overflow(umin_ptr, umin_val, &dst_reg->umin_value) ||
+		    check_add_overflow(umax_ptr, umax_val, &dst_reg->umax_value)) {
 			dst_reg->umin_value = 0;
 			dst_reg->umax_value = U64_MAX;
-		} else {
-			dst_reg->umin_value = umin_ptr + umin_val;
-			dst_reg->umax_value = umax_ptr + umax_val;
 		}
 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
 		dst_reg->off = ptr_reg->off;
@@ -13363,21 +13337,15 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
 	u32 umin_val = src_reg->u32_min_value;
 	u32 umax_val = src_reg->u32_max_value;
 
-	if (signed_add32_overflows(dst_reg->s32_min_value, smin_val) ||
-	    signed_add32_overflows(dst_reg->s32_max_value, smax_val)) {
+	if (check_add_overflow(dst_reg->s32_min_value, smin_val, &dst_reg->s32_min_value) ||
+	    check_add_overflow(dst_reg->s32_max_value, smax_val, &dst_reg->s32_max_value)) {
 		dst_reg->s32_min_value = S32_MIN;
 		dst_reg->s32_max_value = S32_MAX;
-	} else {
-		dst_reg->s32_min_value += smin_val;
-		dst_reg->s32_max_value += smax_val;
 	}
-	if (dst_reg->u32_min_value + umin_val < umin_val ||
-	    dst_reg->u32_max_value + umax_val < umax_val) {
+	if (check_add_overflow(dst_reg->u32_min_value, umin_val, &dst_reg->u32_min_value) ||
+	    check_add_overflow(dst_reg->u32_max_value, umax_val, &dst_reg->u32_max_value)) {
 		dst_reg->u32_min_value = 0;
 		dst_reg->u32_max_value = U32_MAX;
-	} else {
-		dst_reg->u32_min_value += umin_val;
-		dst_reg->u32_max_value += umax_val;
 	}
 }
 
@@ -13389,21 +13357,15 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
 	u64 umin_val = src_reg->umin_value;
 	u64 umax_val = src_reg->umax_value;
 
-	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
-	    signed_add_overflows(dst_reg->smax_value, smax_val)) {
+	if (check_add_overflow(dst_reg->smin_value, smin_val, &dst_reg->smin_value) ||
+	    check_add_overflow(dst_reg->smax_value, smax_val, &dst_reg->smax_value)) {
 		dst_reg->smin_value = S64_MIN;
 		dst_reg->smax_value = S64_MAX;
-	} else {
-		dst_reg->smin_value += smin_val;
-		dst_reg->smax_value += smax_val;
 	}
-	if (dst_reg->umin_value + umin_val < umin_val ||
-	    dst_reg->umax_value + umax_val < umax_val) {
+	if (check_add_overflow(dst_reg->umin_value, umin_val, &dst_reg->umin_value) ||
+	    check_add_overflow(dst_reg->umax_value, umax_val, &dst_reg->umax_value)) {
 		dst_reg->umin_value = 0;
 		dst_reg->umax_value = U64_MAX;
-	} else {
-		dst_reg->umin_value += umin_val;
-		dst_reg->umax_value += umax_val;
 	}
 }
* Re: [PATCH bpf-next 2/2] bpf: verifier: use check_sub_overflow() to check for subtraction overflows
  2024-06-24 12:28 ` Shung-Hsi Yu

From: Shung-Hsi Yu @ 2024-06-24 12:28 UTC
  To: Jiri Olsa, Alexei Starovoitov
  Cc: bpf, Daniel Borkmann, John Fastabend, Andrii Nakryiko, Yonghong Song

On Mon, Jun 24, 2024 at 09:15:48AM GMT, Jiri Olsa wrote:
> On Sun, Jun 23, 2024 at 03:03:20PM +0800, Shung-Hsi Yu wrote:
> SNIP
> > @@ -13428,15 +13409,16 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
> >  	s64 smax_val = src_reg->smax_value;
> >  	u64 umin_val = src_reg->umin_value;
> >  	u64 umax_val = src_reg->umax_value;
> > +	s64 smin_cur, smax_cur;
> >  
> > -	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
> > -	    signed_sub_overflows(dst_reg->smax_value, smin_val)) {
> > +	if (check_sub_overflow(dst_reg->smin_value, smax_val, &smin_cur) ||
> > +	    check_sub_overflow(dst_reg->smax_value, smin_val, &smax_cur)) {
> >  		/* Overflow possible, we know nothing */
> >  		dst_reg->smin_value = S64_MIN;
> >  		dst_reg->smax_value = S64_MAX;
> >  	} else {
> > -		dst_reg->smin_value -= smax_val;
> > -		dst_reg->smax_value -= smin_val;
> > +		dst_reg->smin_value = smin_cur;
> > +		dst_reg->smax_value = smax_cur;
> >  	}
> >  	if (dst_reg->umin_value < umax_val) {
> >  		/* Overflow possible, we know nothing */
>
> could we use the dst_reg->smin_* pointers directly as the sum pointer
> arguments in check_add_overflow ? ditto for the check_sub_overflow
> in the other change

Ah, yes. Didn't think of that.

	if (check_add_overflow(dst_reg->smin_value, smin_val, &dst_reg->smin_value) ||
	    check_add_overflow(dst_reg->smax_value, smax_val, &dst_reg->smax_value)) {
		dst_reg->smin_value = S64_MIN;
		dst_reg->smax_value = S64_MAX;
	}

Does look much cleaner, thanks. I'll go with this for v2 unless there's
an objection.

> ...