* [PATCH v3 0/2] bpf, verifier: Improve precision of BPF_ADD and BPF_SUB
@ 2025-06-23 4:03 Harishankar Vishwanathan
2025-06-23 4:03 ` [PATCH v3 1/2] bpf, verifier: Improve precision for " Harishankar Vishwanathan
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Harishankar Vishwanathan @ 2025-06-23 4:03 UTC (permalink / raw)
To: ast
Cc: m.shachnai, srinivas.narayana, santosh.nagarakatte,
Harishankar Vishwanathan, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
bpf, linux-kernel
This patchset improves the precision of BPF_ADD and BPF_SUB range
tracking. It also adds selftests that exercise the cases where precision
improvement occurs, and selftests for the cases where precise bounds
cannot be computed and the output register state values are set to
unbounded.
Changelog:
v3:
* Improve readability in selftests and commit message by using
more readable constants (suggested by Eduard Zingerman).
* Add four new selftests for the cases where precise output register
state bounds cannot be computed in scalar(32)_min_max_add/sub, so the
output register state must be set to unbounded, i.e., [0, U64_MAX]
or [0, U32_MAX].
* Add Suggested-by: Eduard Zingerman tag to the commit message for the
verifier_bounds.c changes.
v2:
* Add clearer example of precision improvement in the commit message for
verifier.c changes.
* Add selftests that exercise the precision improvement to
verifier_bounds.c (suggested by Eduard Zingerman).
v1:
https://lore.kernel.org/bpf/20250610221356.2663491-1-harishankar.vishwanathan@gmail.com/
Harishankar Vishwanathan (2):
bpf, verifier: Improve precision for BPF_ADD and BPF_SUB
selftests/bpf: Add testcases for BPF_ADD and BPF_SUB
kernel/bpf/verifier.c | 76 ++++++---
.../selftests/bpf/progs/verifier_bounds.c | 161 ++++++++++++++++++
2 files changed, 217 insertions(+), 20 deletions(-)
--
2.45.2
* [PATCH v3 1/2] bpf, verifier: Improve precision for BPF_ADD and BPF_SUB
2025-06-23 4:03 [PATCH v3 0/2] bpf, verifier: Improve precision of BPF_ADD and BPF_SUB Harishankar Vishwanathan
@ 2025-06-23 4:03 ` Harishankar Vishwanathan
2025-06-24 21:59 ` Eduard Zingerman
2025-06-23 4:03 ` [PATCH v3 2/2] selftests/bpf: Add testcases " Harishankar Vishwanathan
2025-06-25 2:00 ` [PATCH v3 0/2] bpf, verifier: Improve precision of " patchwork-bot+netdevbpf
2 siblings, 1 reply; 7+ messages in thread
From: Harishankar Vishwanathan @ 2025-06-23 4:03 UTC (permalink / raw)
To: ast
Cc: m.shachnai, srinivas.narayana, santosh.nagarakatte,
Harishankar Vishwanathan, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
bpf, linux-kernel
This patch improves the precision of the scalar(32)_min_max_add and
scalar(32)_min_max_sub functions, which update the u(32)_min/u(32)_max
ranges for the BPF_ADD and BPF_SUB instructions. We discovered this more
precise operator using a technique we are developing for automatically
synthesizing functions for updating tnums and ranges.
According to the BPF ISA [1], "Underflow and overflow are allowed during
arithmetic operations, meaning the 64-bit or 32-bit value will wrap".
Our patch leverages the wrap-around semantics of unsigned overflow and
underflow to improve precision.
Below is an example of our patch for scalar_min_max_add; the idea is
analogous for all four functions.
There are three cases to consider when adding two u64 ranges [dst_umin,
dst_umax] and [src_umin, src_umax]. Consider a value x in the range
[dst_umin, dst_umax] and another value y in the range [src_umin,
src_umax].
(a) No overflow: No addition x + y overflows. This occurs when even the
largest possible sum does not overflow, i.e., dst_umax + src_umax <= U64_MAX.
(b) Partial overflow: Some additions x + y overflow. This occurs when
the largest possible sum overflows (dst_umax + src_umax > U64_MAX), but
the smallest possible sum does not overflow (dst_umin + src_umin <=
U64_MAX).
(c) Full overflow: All additions x + y overflow. This occurs when both
the smallest possible sum and the largest possible sum overflow, i.e.,
both (dst_umin + src_umin) and (dst_umax + src_umax) are > U64_MAX.
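As an illustrative example (ours, not from the patch): with dst =
[U64_MAX - 1, U64_MAX] and src = [2, 3], both the smallest sum (U64_MAX + 1)
and the largest sum (U64_MAX + 3) exceed U64_MAX, so this is case (c); under
wrap-around semantics every possible sum lands in [0, 2], which is exactly
[dst_umin + src_umin, dst_umax + src_umax] computed modulo 2^64.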
The current implementation conservatively sets the output bounds to
unbounded, i.e., [umin=0, umax=U64_MAX], whenever there is *any*
possibility of overflow, i.e., in cases (b) and (c). Otherwise it
computes tight bounds as [dst_umin + src_umin, dst_umax + src_umax]:
  if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
      check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
          *dst_umin = 0;
          *dst_umax = U64_MAX;
  }
Our synthesis-based technique discovered a more precise operator.
In particular, in case (c), all possible additions x + y overflow and
wrap around according to eBPF semantics, and the computation of the
output range as [dst_umin + src_umin, dst_umax + src_umax] continues to
work. Only in case (b) do we need to set the output bounds to
unbounded, i.e., [0, U64_MAX].
Case (b) can be detected by checking whether the minimum possible sum does
*not* overflow while the maximum possible sum *does*; when that happens, we
set the output to unbounded:
  min_overflow = check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin);
  max_overflow = check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax);

  if (!min_overflow && max_overflow) {
          *dst_umin = 0;
          *dst_umax = U64_MAX;
  }
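As a quick sanity check, the following minimal user-space sketch mirrors the
same logic (ours, not part of the patch; it assumes the GCC/Clang
__builtin_add_overflow() builtin, which the kernel's check_add_overflow()
wraps):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Mirror of the patched unsigned-bound update for BPF_ADD. */
static void range_add(uint64_t *lo, uint64_t *hi,
                      uint64_t src_lo, uint64_t src_hi)
{
        bool min_overflow = __builtin_add_overflow(*lo, src_lo, lo);
        bool max_overflow = __builtin_add_overflow(*hi, src_hi, hi);

        /* Case (b): only some sums wrap, so nothing can be concluded. */
        if (!min_overflow && max_overflow) {
                *lo = 0;
                *hi = UINT64_MAX;
        }
}

int main(void)
{
        /* Case (c): both endpoints wrap; the wrapped bounds stay tight. */
        uint64_t lo = 0xa000000000000000ULL, hi = UINT64_MAX;

        range_add(&lo, &hi, 0xa000000000000000ULL, UINT64_MAX);
        /* Prints [0x4000000000000000, 0xfffffffffffffffe], matching the
         * verifier log for the r3 += r3 example below.
         */
        printf("[%#llx, %#llx]\n", (unsigned long long)lo,
               (unsigned long long)hi);
        return 0;
}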
Below is an example eBPF program and the corresponding log from the
verifier.
The current implementation of scalar_min_max_add() sets r3's bounds to
[0, U64_MAX] at instruction 5: (0f) r3 += r3, due to conservative
overflow handling.
0: R1=ctx() R10=fp0
0: (b7) r4 = 0 ; R4_w=0
1: (87) r4 = -r4 ; R4_w=scalar()
2: (18) r3 = 0xa000000000000000 ; R3_w=0xa000000000000000
4: (4f) r3 |= r4 ; R3_w=scalar(smin=0xa000000000000000,smax=-1,umin=0xa000000000000000,var_off=(0xa000000000000000; 0x5fffffffffffffff)) R4_w=scalar()
5: (0f) r3 += r3 ; R3_w=scalar()
6: (b7) r0 = 1 ; R0_w=1
7: (95) exit
With our patch, r3's bounds after instruction 5 are set to a much more
precise [0x4000000000000000,0xfffffffffffffffe].
...
5: (0f) r3 += r3 ; R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)
6: (b7) r0 = 1 ; R0_w=1
7: (95) exit
The logic for scalar32_min_max_add is analogous. For the
scalar(32)_min_max_sub functions, the reasoning is similar but applied
to detecting underflow instead of overflow.
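Again as an illustration (ours, not from the patch): with dst = [1, 2] and
src = [3, 4], both the smallest difference dst_umin - src_umax = 1 - 4 and
the largest difference dst_umax - src_umin = 2 - 3 underflow, so all
differences wrap and the tight wrapped bounds [U64_MAX - 2, U64_MAX] remain
valid; only when exactly one endpoint underflows (e.g., dst = [1, 5],
src = [3, 4]) must the bounds be set to [0, U64_MAX].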
We verified the correctness of the new implementations using Agni [3,4].
We have since discovered that a similar technique has been used to
calculate output ranges for unsigned interval addition and subtraction
in Hacker's Delight [2].
[1] https://docs.kernel.org/bpf/standardization/instruction-set.html
[2] Hacker's Delight Ch.4-2, Propagating Bounds through Add's and Subtract's
[3] https://github.com/bpfverif/agni
[4] https://people.cs.rutgers.edu/~sn349/papers/sas24-preprint.pdf
Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
---
kernel/bpf/verifier.c | 76 +++++++++++++++++++++++++++++++------------
1 file changed, 56 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 279a64933262..f403524bd215 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14605,14 +14605,25 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
s32 *dst_smax = &dst_reg->s32_max_value;
u32 *dst_umin = &dst_reg->u32_min_value;
u32 *dst_umax = &dst_reg->u32_max_value;
+ u32 umin_val = src_reg->u32_min_value;
+ u32 umax_val = src_reg->u32_max_value;
+ bool min_overflow, max_overflow;
if (check_add_overflow(*dst_smin, src_reg->s32_min_value, dst_smin) ||
check_add_overflow(*dst_smax, src_reg->s32_max_value, dst_smax)) {
*dst_smin = S32_MIN;
*dst_smax = S32_MAX;
}
- if (check_add_overflow(*dst_umin, src_reg->u32_min_value, dst_umin) ||
- check_add_overflow(*dst_umax, src_reg->u32_max_value, dst_umax)) {
+
+ /* If either all additions overflow or no additions overflow, then
+ * it is okay to set: dst_umin = dst_umin + src_umin, dst_umax =
+ * dst_umax + src_umax. Otherwise (some additions overflow), set
+ * the output bounds to unbounded.
+ */
+ min_overflow = check_add_overflow(*dst_umin, umin_val, dst_umin);
+ max_overflow = check_add_overflow(*dst_umax, umax_val, dst_umax);
+
+ if (!min_overflow && max_overflow) {
*dst_umin = 0;
*dst_umax = U32_MAX;
}
@@ -14625,14 +14636,25 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
s64 *dst_smax = &dst_reg->smax_value;
u64 *dst_umin = &dst_reg->umin_value;
u64 *dst_umax = &dst_reg->umax_value;
+ u64 umin_val = src_reg->umin_value;
+ u64 umax_val = src_reg->umax_value;
+ bool min_overflow, max_overflow;
if (check_add_overflow(*dst_smin, src_reg->smin_value, dst_smin) ||
check_add_overflow(*dst_smax, src_reg->smax_value, dst_smax)) {
*dst_smin = S64_MIN;
*dst_smax = S64_MAX;
}
- if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
- check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
+
+ /* If either all additions overflow or no additions overflow, then
+ * it is okay to set: dst_umin = dst_umin + src_umin, dst_umax =
+ * dst_umax + src_umax. Otherwise (some additions overflow), set
+ * the output bounds to unbounded.
+ */
+ min_overflow = check_add_overflow(*dst_umin, umin_val, dst_umin);
+ max_overflow = check_add_overflow(*dst_umax, umax_val, dst_umax);
+
+ if (!min_overflow && max_overflow) {
*dst_umin = 0;
*dst_umax = U64_MAX;
}
@@ -14643,8 +14665,11 @@ static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
{
s32 *dst_smin = &dst_reg->s32_min_value;
s32 *dst_smax = &dst_reg->s32_max_value;
+ u32 *dst_umin = &dst_reg->u32_min_value;
+ u32 *dst_umax = &dst_reg->u32_max_value;
u32 umin_val = src_reg->u32_min_value;
u32 umax_val = src_reg->u32_max_value;
+ bool min_underflow, max_underflow;
if (check_sub_overflow(*dst_smin, src_reg->s32_max_value, dst_smin) ||
check_sub_overflow(*dst_smax, src_reg->s32_min_value, dst_smax)) {
@@ -14652,14 +14677,18 @@ static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
*dst_smin = S32_MIN;
*dst_smax = S32_MAX;
}
- if (dst_reg->u32_min_value < umax_val) {
- /* Overflow possible, we know nothing */
- dst_reg->u32_min_value = 0;
- dst_reg->u32_max_value = U32_MAX;
- } else {
- /* Cannot overflow (as long as bounds are consistent) */
- dst_reg->u32_min_value -= umax_val;
- dst_reg->u32_max_value -= umin_val;
+
+ /* If either all subtractions underflow or no subtractions
+ * underflow, it is okay to set: dst_umin = dst_umin - src_umax,
+ * dst_umax = dst_umax - src_umin. Otherwise (some subtractions
+ * underflow), set the output bounds to unbounded.
+ */
+ min_underflow = check_sub_overflow(*dst_umin, umax_val, dst_umin);
+ max_underflow = check_sub_overflow(*dst_umax, umin_val, dst_umax);
+
+ if (min_underflow && !max_underflow) {
+ *dst_umin = 0;
+ *dst_umax = U32_MAX;
}
}
@@ -14668,8 +14697,11 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
{
s64 *dst_smin = &dst_reg->smin_value;
s64 *dst_smax = &dst_reg->smax_value;
+ u64 *dst_umin = &dst_reg->umin_value;
+ u64 *dst_umax = &dst_reg->umax_value;
u64 umin_val = src_reg->umin_value;
u64 umax_val = src_reg->umax_value;
+ bool min_underflow, max_underflow;
if (check_sub_overflow(*dst_smin, src_reg->smax_value, dst_smin) ||
check_sub_overflow(*dst_smax, src_reg->smin_value, dst_smax)) {
@@ -14677,14 +14709,18 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
*dst_smin = S64_MIN;
*dst_smax = S64_MAX;
}
- if (dst_reg->umin_value < umax_val) {
- /* Overflow possible, we know nothing */
- dst_reg->umin_value = 0;
- dst_reg->umax_value = U64_MAX;
- } else {
- /* Cannot overflow (as long as bounds are consistent) */
- dst_reg->umin_value -= umax_val;
- dst_reg->umax_value -= umin_val;
+
+ /* If either all subtractions underflow or no subtractions
+ * underflow, it is okay to set: dst_umin = dst_umin - src_umax,
+ * dst_umax = dst_umax - src_umin. Otherwise (some subtractions
+ * underflow), set the output bounds to unbounded.
+ */
+ min_underflow = check_sub_overflow(*dst_umin, umax_val, dst_umin);
+ max_underflow = check_sub_overflow(*dst_umax, umin_val, dst_umax);
+
+ if (min_underflow && !max_underflow) {
+ *dst_umin = 0;
+ *dst_umax = U64_MAX;
}
}
--
2.45.2
* [PATCH v3 2/2] selftests/bpf: Add testcases for BPF_ADD and BPF_SUB
2025-06-23 4:03 [PATCH v3 0/2] bpf, verifier: Improve precision of BPF_ADD and BPF_SUB Harishankar Vishwanathan
2025-06-23 4:03 ` [PATCH v3 1/2] bpf, verifier: Improve precision for " Harishankar Vishwanathan
@ 2025-06-23 4:03 ` Harishankar Vishwanathan
2025-06-24 22:45 ` Eduard Zingerman
2025-06-25 2:00 ` [PATCH v3 0/2] bpf, verifier: Improve precision of " patchwork-bot+netdevbpf
2 siblings, 1 reply; 7+ messages in thread
From: Harishankar Vishwanathan @ 2025-06-23 4:03 UTC (permalink / raw)
To: ast
Cc: m.shachnai, srinivas.narayana, santosh.nagarakatte,
Harishankar Vishwanathan, Eduard Zingerman, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, Matan Shachnai, Henriette Herzog,
Luis Gerhorst, Kumar Kartikeya Dwivedi, bpf, linux-kernel,
linux-kselftest
The previous commit improves the precision in scalar(32)_min_max_add
and scalar(32)_min_max_sub. The improvement in precision occurs in cases
when all outcomes overflow or underflow, respectively.
This commit adds selftests that exercise those cases.
This commit also adds selftests for cases where the output register
state bounds for u(32)_min/u(32)_max are conservatively set to unbounded
(when there is partial overflow or underflow).
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/progs/verifier_bounds.c | 161 ++++++++++++++++++
1 file changed, 161 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 30e16153fdf1..31986f6c609e 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -1371,4 +1371,165 @@ __naked void mult_sign_ovf(void)
__imm(bpf_skb_store_bytes)
: __clobber_all);
}
+
+SEC("socket")
+__description("64-bit addition, all outcomes overflow")
+__success __log_level(2)
+__msg("5: (0f) r3 += r3 {{.*}} R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)")
+__retval(0)
+__naked void add64_full_overflow(void)
+{
+ asm volatile (
+ "r4 = 0;"
+ "r4 = -r4;"
+ "r3 = 0xa000000000000000 ll;"
+ "r3 |= r4;"
+ "r3 += r3;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("64-bit addition, partial overflow, result in unbounded reg")
+__success __log_level(2)
+__msg("4: (0f) r3 += r3 {{.*}} R3_w=scalar()")
+__retval(0)
+__naked void add64_partial_overflow(void)
+{
+ asm volatile (
+ "r4 = 0;"
+ "r4 = -r4;"
+ "r3 = 2;"
+ "r3 |= r4;"
+ "r3 += r3;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit addition overflow, all outcomes overflow")
+__success __log_level(2)
+__msg("4: (0c) w3 += w3 {{.*}} R3_w=scalar(smin=umin=umin32=0x40000000,smax=umax=umax32=0xfffffffe,var_off=(0x0; 0xffffffff))")
+__retval(0)
+__naked void add32_full_overflow(void)
+{
+ asm volatile (
+ "w4 = 0;"
+ "w4 = -w4;"
+ "w3 = 0xa0000000;"
+ "w3 |= w4;"
+ "w3 += w3;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit addition, partial overflow, result in unbounded u32 bounds")
+__success __log_level(2)
+__msg("4: (0c) w3 += w3 {{.*}} R3_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))")
+__retval(0)
+__naked void add32_partial_overflow(void)
+{
+ asm volatile (
+ "w4 = 0;"
+ "w4 = -w4;"
+ "w3 = 2;"
+ "w3 |= w4;"
+ "w3 += w3;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("64-bit subtraction, all outcomes underflow")
+__success __log_level(2)
+__msg("6: (1f) r3 -= r1 {{.*}} R3_w=scalar(umin=1,umax=0x8000000000000000)")
+__retval(0)
+__naked void sub64_full_overflow(void)
+{
+ asm volatile (
+ "r1 = 0;"
+ "r1 = -r1;"
+ "r2 = 0x8000000000000000 ll;"
+ "r1 |= r2;"
+ "r3 = 0;"
+ "r3 -= r1;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("64-bit subtration, partial overflow, result in unbounded reg")
+__success __log_level(2)
+__msg("3: (1f) r3 -= r2 {{.*}} R3_w=scalar()")
+__retval(0)
+__naked void sub64_partial_overflow(void)
+{
+ asm volatile (
+ "r3 = 0;"
+ "r3 = -r3;"
+ "r2 = 1;"
+ "r3 -= r2;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit subtraction overflow, all outcomes underflow")
+__success __log_level(2)
+__msg("5: (1c) w3 -= w1 {{.*}} R3_w=scalar(smin=umin=umin32=1,smax=umax=umax32=0x80000000,var_off=(0x0; 0xffffffff))")
+__retval(0)
+__naked void sub32_full_overflow(void)
+{
+ asm volatile (
+ "w1 = 0;"
+ "w1 = -w1;"
+ "w2 = 0x80000000;"
+ "w1 |= w2;"
+ "w3 = 0;"
+ "w3 -= w1;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit subtration, partial overflow, result in unbounded u32 bounds")
+__success __log_level(2)
+__msg("3: (1c) w3 -= w2 {{.*}} R3_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))")
+__retval(0)
+__naked void sub32_partial_overflow(void)
+{
+ asm volatile (
+ "w3 = 0;"
+ "w3 = -w3;"
+ "w2 = 1;"
+ "w3 -= w2;"
+ "r0 = 0;"
+ "exit"
+ :
+ :
+ : __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.45.2
* Re: [PATCH v3 1/2] bpf, verifier: Improve precision for BPF_ADD and BPF_SUB
2025-06-23 4:03 ` [PATCH v3 1/2] bpf, verifier: Improve precision for " Harishankar Vishwanathan
@ 2025-06-24 21:59 ` Eduard Zingerman
0 siblings, 0 replies; 7+ messages in thread
From: Eduard Zingerman @ 2025-06-24 21:59 UTC (permalink / raw)
To: Harishankar Vishwanathan, ast
Cc: m.shachnai, srinivas.narayana, santosh.nagarakatte,
Daniel Borkmann, John Fastabend, Andrii Nakryiko,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf, linux-kernel
On Mon, 2025-06-23 at 00:03 -0400, Harishankar Vishwanathan wrote:
> This patch improves the precision of the scalar(32)_min_max_add and
> scalar(32)_min_max_sub functions, which update the u(32)_min/u(32)_max
> ranges for the BPF_ADD and BPF_SUB instructions. We discovered this more
> precise operator using a technique we are developing for automatically
> synthesizing functions for updating tnums and ranges.
>
> According to the BPF ISA [1], "Underflow and overflow are allowed during
> arithmetic operations, meaning the 64-bit or 32-bit value will wrap".
> Our patch leverages the wrap-around semantics of unsigned overflow and
> underflow to improve precision.
>
> Below is an example of our patch for scalar_min_max_add; the idea is
> analogous for all four functions.
>
> There are three cases to consider when adding two u64 ranges [dst_umin,
> dst_umax] and [src_umin, src_umax]. Consider a value x in the range
> [dst_umin, dst_umax] and another value y in the range [src_umin,
> src_umax].
>
> (a) No overflow: No addition x + y overflows. This occurs when even the
> largest possible sum does not overflow, i.e., dst_umax + src_umax <= U64_MAX.
>
> (b) Partial overflow: Some additions x + y overflow. This occurs when
> the largest possible sum overflows (dst_umax + src_umax > U64_MAX), but
> the smallest possible sum does not overflow (dst_umin + src_umin <=
> U64_MAX).
>
> (c) Full overflow: All additions x + y overflow. This occurs when both
> the smallest possible sum and the largest possible sum overflow, i.e.,
> both (dst_umin + src_umin) and (dst_umax + src_umax) are > U64_MAX.
>
[...]
> Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
> Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
> Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
> Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
> Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
(Please don't drop acks).
* Re: [PATCH v3 2/2] selftests/bpf: Add testcases for BPF_ADD and BPF_SUB
2025-06-23 4:03 ` [PATCH v3 2/2] selftests/bpf: Add testcases " Harishankar Vishwanathan
@ 2025-06-24 22:45 ` Eduard Zingerman
2025-06-25 1:50 ` Alexei Starovoitov
0 siblings, 1 reply; 7+ messages in thread
From: Eduard Zingerman @ 2025-06-24 22:45 UTC (permalink / raw)
To: Harishankar Vishwanathan, ast
Cc: m.shachnai, srinivas.narayana, santosh.nagarakatte,
Daniel Borkmann, John Fastabend, Andrii Nakryiko,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
Shuah Khan, Matan Shachnai, Henriette Herzog, Luis Gerhorst,
Kumar Kartikeya Dwivedi, bpf, linux-kernel, linux-kselftest
On Mon, 2025-06-23 at 00:03 -0400, Harishankar Vishwanathan wrote:
> The previous commit improves the precision in scalar(32)_min_max_add
> and scalar(32)_min_max_sub. The improvement in precision occurs in cases
> when all outcomes overflow or underflow, respectively.
>
> This commit adds selftests that exercise those cases.
>
> This commit also adds selftests for cases where the output register
> state bounds for u(32)_min/u(32)_max are conservatively set to unbounded
> (when there is partial overflow or underflow).
>
> Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
Thank you for adding these tests. Even with "human readable" numbers it
took me 15-20 minutes to verify them :)
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> .../selftests/bpf/progs/verifier_bounds.c | 161 ++++++++++++++++++
> 1 file changed, 161 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> index 30e16153fdf1..31986f6c609e 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> @@ -1371,4 +1371,165 @@ __naked void mult_sign_ovf(void)
> __imm(bpf_skb_store_bytes)
> : __clobber_all);
> }
> +
> +SEC("socket")
> +__description("64-bit addition, all outcomes overflow")
> +__success __log_level(2)
> +__msg("5: (0f) r3 += r3 {{.*}} R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)")
> +__retval(0)
> +__naked void add64_full_overflow(void)
> +{
> + asm volatile (
> + "r4 = 0;"
> + "r4 = -r4;"
Nit: there is a change in the works that would add range propagation
for the negation instruction; a better way to get an unbound scalar
here is e.g. a call to bpf_get_prandom_u32() or a read from a constant
global map.
Depending on the order in which the patches are accepted, this rework
would fall either on you or on the other patch-set author.
> + "r3 = 0xa000000000000000 ll;"
> + "r3 |= r4;"
> + "r3 += r3;"
> + "r0 = 0;"
> + "exit"
> + :
> + :
> + : __clobber_all);
> +}
[...]
* Re: [PATCH v3 2/2] selftests/bpf: Add testcases for BPF_ADD and BPF_SUB
2025-06-24 22:45 ` Eduard Zingerman
@ 2025-06-25 1:50 ` Alexei Starovoitov
0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2025-06-25 1:50 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Harishankar Vishwanathan, Alexei Starovoitov, Matan Shachnai,
Srinivas Narayana, Santosh Nagarakatte, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Mykola Lysenko, Shuah Khan, Matan Shachnai, Henriette Herzog,
Luis Gerhorst, Kumar Kartikeya Dwivedi, bpf, LKML,
open list:KERNEL SELFTEST FRAMEWORK
On Tue, Jun 24, 2025 at 3:45 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-06-23 at 00:03 -0400, Harishankar Vishwanathan wrote:
> > The previous commit improves the precision in scalar(32)_min_max_add
> > and scalar(32)_min_max_sub. The improvement in precision occurs in cases
> > when all outcomes overflow or underflow, respectively.
> >
> > This commit adds selftests that exercise those cases.
> >
> > This commit also adds selftests for cases where the output register
> > state bounds for u(32)_min/u(32)_max are conservatively set to unbounded
> > (when there is partial overflow or underflow).
> >
> > Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> > Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
>
> Thank you for adding these tests. Even with "human readable" numbers
> took me 15-20 minutes to verify the numbers :)
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> > .../selftests/bpf/progs/verifier_bounds.c | 161 ++++++++++++++++++
> > 1 file changed, 161 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > index 30e16153fdf1..31986f6c609e 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
> > @@ -1371,4 +1371,165 @@ __naked void mult_sign_ovf(void)
> > __imm(bpf_skb_store_bytes)
> > : __clobber_all);
> > }
> > +
> > +SEC("socket")
> > +__description("64-bit addition, all outcomes overflow")
> > +__success __log_level(2)
> > +__msg("5: (0f) r3 += r3 {{.*}} R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)")
> > +__retval(0)
> > +__naked void add64_full_overflow(void)
> > +{
> > + asm volatile (
> > + "r4 = 0;"
> > + "r4 = -r4;"
>
> Nit: there is a change in the works that would add range propagation
> for the negation instruction; a better way to get an unbound scalar
> here is e.g. a call to bpf_get_prandom_u32() or a read from a constant
> global map.
> Depending on the order in which the patches are accepted, this rework
> would fall either on you or on the other patch-set author.
Fixed them up like below while applying:
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 31986f6c609e..e52a24e15b34 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -1380,15 +1380,15 @@ __retval(0)
__naked void add64_full_overflow(void)
{
asm volatile (
- "r4 = 0;"
- "r4 = -r4;"
+ "call %[bpf_get_prandom_u32];"
+ "r4 = r0;"
"r3 = 0xa000000000000000 ll;"
"r3 |= r4;"
"r3 += r3;"
"r0 = 0;"
"exit"
:
- :
+ : __imm(bpf_get_prandom_u32)
: __clobber_all);
}
@@ -1400,15 +1400,15 @@ __retval(0)
__naked void add64_partial_overflow(void)
{
asm volatile (
- "r4 = 0;"
- "r4 = -r4;"
+ "call %[bpf_get_prandom_u32];"
+ "r4 = r0;"
"r3 = 2;"
"r3 |= r4;"
"r3 += r3;"
"r0 = 0;"
"exit"
:
- :
+ : __imm(bpf_get_prandom_u32)
: __clobber_all);
}
etc
* Re: [PATCH v3 0/2] bpf, verifier: Improve precision of BPF_ADD and BPF_SUB
2025-06-23 4:03 [PATCH v3 0/2] bpf, verifier: Improve precision of BPF_ADD and BPF_SUB Harishankar Vishwanathan
2025-06-23 4:03 ` [PATCH v3 1/2] bpf, verifier: Improve precision for " Harishankar Vishwanathan
2025-06-23 4:03 ` [PATCH v3 2/2] selftests/bpf: Add testcases " Harishankar Vishwanathan
@ 2025-06-25 2:00 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-06-25 2:00 UTC (permalink / raw)
To: Harishankar Vishwanathan
Cc: ast, m.shachnai, srinivas.narayana, santosh.nagarakatte, daniel,
john.fastabend, andrii, martin.lau, eddyz87, song, yonghong.song,
kpsingh, sdf, haoluo, jolsa, bpf, linux-kernel
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Mon, 23 Jun 2025 00:03:55 -0400 you wrote:
> This patchset improves the precision of BPF_ADD and BPF_SUB range
> tracking. It also adds selftests that exercise the cases where precision
> improvement occurs, and selftests for the cases where precise bounds
> cannot be computed and the output register state values are set to
> unbounded.
>
> Changelog:
>
> [...]
Here is the summary with links:
- [v3,1/2] bpf, verifier: Improve precision for BPF_ADD and BPF_SUB
https://git.kernel.org/bpf/bpf-next/c/7a998a731627
- [v3,2/2] selftests/bpf: Add testcases for BPF_ADD and BPF_SUB
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html