BPF List
* [PATCH bpf-next v2 0/2] bpf: Add value tracking for BPF_DIV
@ 2025-12-23  9:10 Yazhou Tang
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Yazhou Tang @ 2025-12-23  9:10 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye

From: Yazhou Tang <tangyazhou518@outlook.com>

Add value tracking (range and bitwise tracking) for BPF_DIV. Please
see the commit log of patch 1/2 for more details.

---

Changes v1 => v2:
1. Fixed two bugs in the sdiv32 analysis logic and corrected the
   associated selftest cases (reported by an AI reviewer).
2. Renamed `tnum_bottom` to `tnum_empty` for clarity, and updated the
   commit message to explain its role in signed BPF_DIV analysis.

v1:
https://lore.kernel.org/bpf/tencent_717092CD734D050CCD93401CA624BB3C8307@qq.com/
https://lore.kernel.org/bpf/tencent_7C98FAECA40C98489ACF4515CE346F031509@qq.com/

Yazhou Tang (2):
  bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  selftests/bpf: Add tests for BPF_DIV analysis

 include/linux/tnum.h                          |   4 +
 kernel/bpf/tnum.c                             | 159 ++++++-
 kernel/bpf/verifier.c                         | 225 ++++++++++
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_div_bounds.c | 404 ++++++++++++++++++
 .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
 6 files changed, 797 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_div_bounds.c

-- 
2.52.0



* [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23  9:10 [PATCH bpf-next v2 0/2] bpf: Add value tracking for BPF_DIV Yazhou Tang
@ 2025-12-23  9:10 ` Yazhou Tang
  2025-12-23 20:58   ` Yonghong Song
                     ` (3 more replies)
  2025-12-23  9:10 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add tests for BPF_DIV analysis Yazhou Tang
  2025-12-23 13:50 ` [syzbot ci] Re: bpf: Add value tracking for BPF_DIV syzbot ci
  2 siblings, 4 replies; 11+ messages in thread
From: Yazhou Tang @ 2025-12-23  9:10 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye

From: Yazhou Tang <tangyazhou518@outlook.com>

This patch introduces interval analysis (range tracking) and tnum
analysis (bitwise tracking) for both signed and unsigned division
operations in the BPF verifier.

The BPF verifier currently lacks support for value tracking on BPF_DIV
instructions, which can lead to false positives during verification of
BPF programs that utilize division instructions.

According to the BPF instruction set[1], the instruction's offset field
(`insn->off`) distinguishes signed division (`off == 1`) from unsigned
division (`off == 0`). We also follow the BPF division semantics for
special cases such as division by zero and signed division overflow:

- UDIV: dst = (src != 0) ? (dst / src) : 0
- SDIV: dst = (src == 0) ? 0 : ((src == -1 && dst == LLONG_MIN) ? LLONG_MIN : (dst / src))

Here is the overview of the changes made in this patch:

1. For interval analysis:
   - Added `scalar_min_max_udiv` and `scalar32_min_max_udiv` to update
     umin/umax bounds, which is straightforward.
   - Added `scalar_min_max_sdiv` and `scalar32_min_max_sdiv` to update
     smin/smax bounds. These handle non-monotonic intervals by decomposing
     the divisor range into negative, zero, and positive sub-ranges,
     computing the result range for each sub-range separately, and
     combining the results into the final smin/smax bounds.
2. For tnum analysis, we referred to LLVM's KnownBits implementation[2]
   and the recent research on abstract interpretation of division[3]:
   - Added `tnum_udiv` to compute the tnum for unsigned division. It
     calculates the maximum possible result from the maximum value of
     the dividend tnum and the minimum value of the divisor tnum, then
     constructs the resulting tnum accordingly. We have proven its
     soundness using the Rocq Prover[4].
   - Added `tnum_sdiv` to compute the tnum for signed division. It splits
     the operands into positive and negative components, performs the
     calculation on absolute values using `tnum_udiv`, and finally unions
     the results to ensure soundness.

     Introduced `tnum_empty` to represent the bottom element (⊥) of the
     tnum lattice, supporting the split-tnum analysis for signed division.
     Mathematically, tnum_empty represents an empty set of possible values,
     which is needed when a sub-tnum (e.g., the negative component of a
     strictly positive tnum) yields no valid values.
3. Also updated existing selftests based on the expected BPF_DIV behavior.

[1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst
[2] https://llvm.org/doxygen/KnownBits_8cpp_source.html
[3] https://dl.acm.org/doi/10.1145/3728905
[4] https://github.com/shenghaoyuan/open-verified-artifacts/tree/main/tnum

Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
Hello everyone,

Thanks for reviewing our patch! This patch adds interval and tnum analysis
for both signed and unsigned BPF_DIV instructions in the BPF verifier.

We have also implemented interval and tnum analysis for the BPF_MOD
instruction, which is closely related to division. However, to keep the
patch size manageable and ease review, we will submit the BPF_MOD
changes in a separate patch following this one.

Best,

Yazhou Tang

 include/linux/tnum.h                          |   4 +
 kernel/bpf/tnum.c                             | 159 ++++++++++++-
 kernel/bpf/verifier.c                         | 225 ++++++++++++++++++
 .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
 4 files changed, 391 insertions(+), 4 deletions(-)

diff --git a/include/linux/tnum.h b/include/linux/tnum.h
index c52b862dad45..fd00deb2cb88 100644
--- a/include/linux/tnum.h
+++ b/include/linux/tnum.h
@@ -50,6 +50,10 @@ struct tnum tnum_or(struct tnum a, struct tnum b);
 struct tnum tnum_xor(struct tnum a, struct tnum b);
 /* Multiply two tnums, return @a * @b */
 struct tnum tnum_mul(struct tnum a, struct tnum b);
+/* Unsigned division, return @a / @b */
+struct tnum tnum_udiv(struct tnum a, struct tnum b);
+/* Signed division, return @a / @b */
+struct tnum tnum_sdiv(struct tnum a, struct tnum b, bool alu32);
 
 /* Return true if the known bits of both tnums have the same value */
 bool tnum_overlap(struct tnum a, struct tnum b);
diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
index f8e70e9c3998..20cd023709bf 100644
--- a/kernel/bpf/tnum.c
+++ b/kernel/bpf/tnum.c
@@ -12,6 +12,13 @@
 #define TNUM(_v, _m)	(struct tnum){.value = _v, .mask = _m}
 /* A completely unknown value */
 const struct tnum tnum_unknown = { .value = 0, .mask = -1 };
+/* Not a well-formed tnum; its set of concrete values is empty. */
+const struct tnum tnum_empty = { .value = -1, .mask = -1 };
+
+static bool __tnum_eqb(struct tnum a, struct tnum b)
+{
+	return a.value == b.value && a.mask == b.mask;
+}
 
 struct tnum tnum_const(u64 value)
 {
@@ -83,9 +90,23 @@ struct tnum tnum_sub(struct tnum a, struct tnum b)
 	return TNUM(dv & ~mu, mu);
 }
 
+/* __tnum_neg_width: tnum negation with given bit width.
+ * @a: the tnum to be negated.
+ * @width: the bit width to perform negation, 32 or 64.
+ */
+static struct tnum __tnum_neg_width(struct tnum a, int width)
+{
+	if (width == 32)
+		return tnum_subreg(tnum_sub(TNUM(0, 0), tnum_subreg(a)));
+	else if (width == 64)
+		return tnum_sub(TNUM(0, 0), a);
+	else
+		return tnum_unknown;
+}
+
 struct tnum tnum_neg(struct tnum a)
 {
-	return tnum_sub(TNUM(0, 0), a);
+	return __tnum_neg_width(a, 64);
 }
 
 struct tnum tnum_and(struct tnum a, struct tnum b)
@@ -167,6 +188,138 @@ bool tnum_overlap(struct tnum a, struct tnum b)
 	return (a.value & mu) == (b.value & mu);
 }
 
+/* __get_mask: get a mask that covers all bits up to the highest set bit in x.
+ * For example:
+ *   x = 0b0000...0000 -> return 0b0000...0000
+ *   x = 0b0000...0001 -> return 0b0000...0001
+ *   x = 0b0000...1001 -> return 0b0000...1111
+ *   x = 0b1111...1111 -> return 0b1111...1111
+ */
+static u64 __get_mask(u64 x)
+{
+	int width = 0;
+
+	if (x > 0)
+		width = 64 - __builtin_clzll(x);
+	if (width == 0)
+		return 0;
+	else if (width == 64)
+		return U64_MAX;
+	else
+		return (1ULL << width) - 1;
+}
+
+struct tnum tnum_udiv(struct tnum a, struct tnum b)
+{
+	if (tnum_is_const(b)) {
+		/* BPF div specification: x / 0 = 0 */
+		if (b.value == 0)
+			return TNUM(0, 0);
+		if (tnum_is_const(a))
+			return TNUM(a.value / b.value, 0);
+	}
+
+	if (b.value == 0)
+		return tnum_unknown;
+
+	u64 a_max = a.value + a.mask;
+	u64 b_min = b.value;
+	u64 max_res = a_max / b_min;
+	return TNUM(0, __get_mask(max_res));
+}
+
+static u64 __msb(u64 x, int width)
+{
+	return x & (1ULL << (width - 1));
+}
+
+static struct tnum __tnum_get_positive(struct tnum x, int width)
+{
+	if (__msb(x.value, width))
+		return tnum_empty;
+	if (__msb(x.mask, width))
+		return TNUM(x.value, x.mask & ~(1ULL << (width - 1)));
+	return x;
+}
+
+static struct tnum __tnum_get_negative(struct tnum x, int width)
+{
+	if (__msb(x.value, width))
+		return x;
+	if (__msb(x.mask, width))
+		return TNUM(x.value | (1ULL << (width - 1)), x.mask & ~(1ULL << (width - 1)));
+	return tnum_empty;
+}
+
+static struct tnum __tnum_abs(struct tnum x, int width)
+{
+	if (__msb(x.value, width))
+		return __tnum_neg_width(x, width);
+	else
+		return x;
+}
+
+/* __tnum_sdiv, a helper for tnum_sdiv.
+ * @a: tnum a, a's sign is fixed, __msb(a.mask) == 0
+ * @b: tnum b, b's sign is fixed, __msb(b.mask) == 0
+ *
+ * This function reuses tnum_udiv by operating on the absolute values of a and b,
+ * and then adjusting the sign of the result based on C's division rules.
+ * Here we don't need to specially handle the case of [S64_MIN / -1], because
+ * after __tnum_abs, S64_MIN becomes (S64_MAX + 1), and the behavior of
+ * unsigned [(S64_MAX + 1) / 1] is normal.
+ */
+static struct tnum __tnum_sdiv(struct tnum a, struct tnum b, int width)
+{
+	if (__tnum_eqb(a, tnum_empty) || __tnum_eqb(b, tnum_empty))
+		return tnum_empty;
+
+	struct tnum a_abs = __tnum_abs(a, width);
+	struct tnum b_abs = __tnum_abs(b, width);
+	struct tnum res_abs = tnum_udiv(a_abs, b_abs);
+
+	if (__msb(a.value, width) == __msb(b.value, width))
+		return res_abs;
+	else
+		return __tnum_neg_width(res_abs, width);
+}
+
+struct tnum tnum_sdiv(struct tnum a, struct tnum b, bool alu32)
+{
+	if (tnum_is_const(b)) {
+		/* BPF div specification: x / 0 = 0 */
+		if (b.value == 0)
+			return TNUM(0, 0);
+		if (tnum_is_const(a)) {
+			/* BPF div specification: S32_MIN / -1 = S32_MIN */
+			if (alu32 && (u32)a.value == (u32)S32_MIN && (u32)b.value == (u32)-1)
+				return TNUM((u32)S32_MIN, 0);
+			/* BPF div specification: S64_MIN / -1 = S64_MIN */
+			if (!alu32 && a.value == S64_MIN && b.value == (u64)-1)
+				return TNUM((u64)S64_MIN, 0);
+			s64 sval = (s64)a.value / (s64)b.value;
+			return TNUM((u64)sval, 0);
+		}
+	}
+
+	if (b.value == 0)
+		return tnum_unknown;
+
+	int width = alu32 ? 32 : 64;
+	struct tnum a_pos = __tnum_get_positive(a, width);
+	struct tnum a_neg = __tnum_get_negative(a, width);
+	struct tnum b_pos = __tnum_get_positive(b, width);
+	struct tnum b_neg = __tnum_get_negative(b, width);
+
+	struct tnum res_pos = __tnum_sdiv(a_pos, b_pos, width);
+	struct tnum res_neg = __tnum_sdiv(a_neg, b_neg, width);
+	struct tnum res_mix1 = __tnum_sdiv(a_pos, b_neg, width);
+	struct tnum res_mix2 = __tnum_sdiv(a_neg, b_pos, width);
+
+	return tnum_union(tnum_union(res_pos, res_neg),
+					tnum_union(res_mix1, res_mix2));
+}
+
 /* Note that if a and b disagree - i.e. one has a 'known 1' where the other has
  * a 'known 0' - this will return a 'known 1' for that bit.
  */
@@ -186,6 +339,10 @@ struct tnum tnum_intersect(struct tnum a, struct tnum b)
  */
 struct tnum tnum_union(struct tnum a, struct tnum b)
 {
+	if (__tnum_eqb(a, tnum_empty))
+		return b;
+	if (__tnum_eqb(b, tnum_empty))
+		return a;
 	u64 v = a.value & b.value;
 	u64 mu = (a.value ^ b.value) | a.mask | b.mask;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d6b8a77fbe3b..89bc0e4c4a76 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15076,6 +15076,218 @@ static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
 	}
 }
 
+static void __scalar32_min_max_join(s32 *res_min, s32 *res_max, s32 x_min, s32 x_max)
+{
+	*res_min = min(*res_min, x_min);
+	*res_max = max(*res_max, x_max);
+}
+
+static void __scalar_min_max_join(s64 *res_min, s64 *res_max, s64 x_min, s64 x_max)
+{
+	*res_min = min(*res_min, x_min);
+	*res_max = max(*res_max, x_max);
+}
+
+static void scalar32_min_max_udiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	u32 *dst_umin = &dst_reg->u32_min_value;
+	u32 *dst_umax = &dst_reg->u32_max_value;
+	u32 umin_val = src_reg->u32_min_value;
+	u32 umax_val = src_reg->u32_max_value;
+
+	if (umin_val == 0) {
+		/* BPF div specification: x / 0 = 0
+		 * 1. If umin_val == umax_val == 0, i.e. divisor is certainly 0,
+		 * then the result must be 0, [a,b] / [0,0] = [0,0].
+		 * 2. If umin_val == 0 && umax_val != 0, then dst_umin = x / 0 = 0,
+		 * dst_umax = dst_umax / 1, remains unchanged, [a,b] / [0,x] = [0,b].
+		 */
+		*dst_umin = 0;
+		if (umax_val == 0)
+			*dst_umax = 0;
+	} else {
+		*dst_umin = *dst_umin / umax_val;
+		*dst_umax = *dst_umax / umin_val;
+	}
+
+	/* Reset signed interval to TOP. */
+	dst_reg->s32_min_value = S32_MIN;
+	dst_reg->s32_max_value = S32_MAX;
+}
+
+static void scalar_min_max_udiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	u64 *dst_umin = &dst_reg->umin_value;
+	u64 *dst_umax = &dst_reg->umax_value;
+	u64 umin_val = src_reg->umin_value;
+	u64 umax_val = src_reg->umax_value;
+
+	if (umin_val == 0) {
+		/* BPF div specification: x / 0 = 0
+		 * 1. If umin_val == umax_val == 0, i.e. divisor is certainly 0,
+		 * then the result must be 0, [a,b] / [0,0] = [0,0].
+		 * 2. If umin_val == 0 && umax_val != 0, then dst_umin = x / 0 = 0,
+		 * dst_umax = dst_umax / 1, remains unchanged, [a,b] / [0,x] = [0,b].
+		 */
+		*dst_umin = 0;
+		if (umax_val == 0)
+			*dst_umax = 0;
+	} else {
+		*dst_umin = *dst_umin / umax_val;
+		*dst_umax = *dst_umax / umin_val;
+	}
+
+	/* Reset signed interval to TOP. */
+	dst_reg->smin_value = S64_MIN;
+	dst_reg->smax_value = S64_MAX;
+}
+
+static s32 __bpf_sdiv32(s32 a, s32 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	/* BPF div specification: S32_MIN / -1 = S32_MIN */
+	if (unlikely(a == S32_MIN && b == -1))
+		return S32_MIN;
+	return a / b;
+}
+
+/* The divisor interval does not cross 0,
+ * i.e. src_min and src_max have same sign.
+ */
+static void __sdiv32_range(s32 dst_min, s32 dst_max, s32 src_min, s32 src_max,
+				s32 *res_min, s32 *res_max)
+{
+	s32 tmp_res[4] = {
+		__bpf_sdiv32(dst_min, src_min),
+		__bpf_sdiv32(dst_min, src_max),
+		__bpf_sdiv32(dst_max, src_min),
+		__bpf_sdiv32(dst_max, src_max)
+	};
+
+	*res_min = min_array(tmp_res, 4);
+	*res_max = max_array(tmp_res, 4);
+}
+
+static void scalar32_min_max_sdiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	s32 *dst_smin = &dst_reg->s32_min_value;
+	s32 *dst_smax = &dst_reg->s32_max_value;
+	s32 smin_val = src_reg->s32_min_value;
+	s32 smax_val = src_reg->s32_max_value;
+	s32 res_min, res_max, tmp_min, tmp_max;
+
+	if (smin_val <= 0 && smax_val >= 0) {
+		/* BPF div specification: x / 0 = 0
+		 * Set initial result to 0, as 0 is in divisor interval.
+		 */
+		res_min = 0;
+		res_max = 0;
+		/* negative divisor interval: [a_min,a_max] / [b_min,-1] */
+		if (smin_val < 0) {
+			__sdiv32_range(*dst_smin, *dst_smax, smin_val, -1,
+					&tmp_min, &tmp_max);
+			__scalar32_min_max_join(&res_min, &res_max, tmp_min, tmp_max);
+		}
+		/* positive divisor interval: [a_min,a_max] / [1,b_max] */
+		if (smax_val > 0) {
+			__sdiv32_range(*dst_smin, *dst_smax, 1, smax_val,
+					&tmp_min, &tmp_max);
+			__scalar32_min_max_join(&res_min, &res_max, tmp_min, tmp_max);
+		}
+	} else {
+		__sdiv32_range(*dst_smin, *dst_smax, smin_val, smax_val,
+			&res_min, &res_max);
+	}
+
+	/* BPF div specification: S32_MIN / -1 = S32_MIN */
+	if (*dst_smin == S32_MIN && smin_val <= -1 && smax_val >= -1)
+		res_min = S32_MIN;
+
+	*dst_smin = res_min;
+	*dst_smax = res_max;
+
+	/* Reset unsigned interval to TOP. */
+	dst_reg->u32_min_value = 0;
+	dst_reg->u32_max_value = U32_MAX;
+}
+
+static s64 __bpf_sdiv(s64 a, s64 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	/* BPF div specification: S64_MIN / -1 = S64_MIN */
+	if (unlikely(a == S64_MIN && b == -1))
+		return S64_MIN;
+	return a / b;
+}
+
+/* The divisor interval does not cross 0,
+ * i.e. src_min and src_max have same sign.
+ */
+static void __sdiv_range(s64 dst_min, s64 dst_max, s64 src_min, s64 src_max,
+				s64 *res_min, s64 *res_max)
+{
+	s64 tmp_res[4] = {
+		__bpf_sdiv(dst_min, src_min),
+		__bpf_sdiv(dst_min, src_max),
+		__bpf_sdiv(dst_max, src_min),
+		__bpf_sdiv(dst_max, src_max)
+	};
+
+	*res_min = min_array(tmp_res, 4);
+	*res_max = max_array(tmp_res, 4);
+}
+
+static void scalar_min_max_sdiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	s64 *dst_smin = &dst_reg->smin_value;
+	s64 *dst_smax = &dst_reg->smax_value;
+	s64 smin_val = src_reg->smin_value;
+	s64 smax_val = src_reg->smax_value;
+	s64 res_min, res_max, tmp_min, tmp_max;
+
+	if (smin_val <= 0 && smax_val >= 0) {
+		/* BPF div specification: x / 0 = 0
+		 * Set initial result to 0, as 0 is in divisor interval.
+		 */
+		res_min = 0;
+		res_max = 0;
+		/* negative divisor interval: [a_min,a_max] / [b_min,-1] */
+		if (smin_val < 0) {
+			__sdiv_range(*dst_smin, *dst_smax, smin_val, -1,
+					&tmp_min, &tmp_max);
+			__scalar_min_max_join(&res_min, &res_max, tmp_min, tmp_max);
+		}
+		/* positive divisor interval: [a_min,a_max] / [1,b_max] */
+		if (smax_val > 0) {
+			__sdiv_range(*dst_smin, *dst_smax, 1, smax_val,
+					&tmp_min, &tmp_max);
+			__scalar_min_max_join(&res_min, &res_max, tmp_min, tmp_max);
+		}
+	} else {
+		__sdiv_range(*dst_smin, *dst_smax, smin_val, smax_val,
+			&res_min, &res_max);
+	}
+
+	/* BPF div specification: S64_MIN / -1 = S64_MIN */
+	if (*dst_smin == S64_MIN && smin_val <= -1 && smax_val >= -1)
+		res_min = S64_MIN;
+
+	*dst_smin = res_min;
+	*dst_smax = res_max;
+
+	/* Reset unsigned interval to TOP. */
+	dst_reg->umin_value = 0;
+	dst_reg->umax_value = U64_MAX;
+}
+
 static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
 				 struct bpf_reg_state *src_reg)
 {
@@ -15479,6 +15691,7 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
 	case BPF_XOR:
 	case BPF_OR:
 	case BPF_MUL:
+	case BPF_DIV:
 		return true;
 
 	/* Shift operators range is only computable if shift dimension operand
@@ -15504,6 +15717,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 				      struct bpf_reg_state src_reg)
 {
 	u8 opcode = BPF_OP(insn->code);
+	s16 off = insn->off;
 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
 	int ret;
 
@@ -15555,6 +15769,17 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		scalar32_min_max_mul(dst_reg, &src_reg);
 		scalar_min_max_mul(dst_reg, &src_reg);
 		break;
+	case BPF_DIV:
+		if (off == 1) {
+			dst_reg->var_off = tnum_sdiv(dst_reg->var_off, src_reg.var_off, alu32);
+			scalar32_min_max_sdiv(dst_reg, &src_reg);
+			scalar_min_max_sdiv(dst_reg, &src_reg);
+		} else {
+			dst_reg->var_off = tnum_udiv(dst_reg->var_off, src_reg.var_off);
+			scalar32_min_max_udiv(dst_reg, &src_reg);
+			scalar_min_max_udiv(dst_reg, &src_reg);
+		}
+		break;
 	case BPF_AND:
 		dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
 		scalar32_min_max_and(dst_reg, &src_reg);
diff --git a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
index 2129e4353fd9..4d8273c258d5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
+++ b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
@@ -173,14 +173,15 @@ __naked void flow_keys_illegal_variable_offset_alu(void)
 	asm volatile("					\
 	r6 = r1;					\
 	r7 = *(u64*)(r6 + %[flow_keys_off]);		\
-	r8 = 8;						\
-	r8 /= 1;					\
+	call %[bpf_get_prandom_u32];			\
+	r8 = r0;					\
 	r8 &= 8;					\
 	r7 += r8;					\
 	r0 = *(u64*)(r7 + 0);				\
 	exit;						\
 "	:
-	: __imm_const(flow_keys_off, offsetof(struct __sk_buff, flow_keys))
+	: __imm_const(flow_keys_off, offsetof(struct __sk_buff, flow_keys)),
+	  __imm(bpf_get_prandom_u32)
 	: __clobber_all);
 }
 
-- 
2.52.0



* [PATCH bpf-next v2 2/2] selftests/bpf: Add tests for BPF_DIV analysis
  2025-12-23  9:10 [PATCH bpf-next v2 0/2] bpf: Add value tracking for BPF_DIV Yazhou Tang
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
@ 2025-12-23  9:10 ` Yazhou Tang
  2025-12-23 13:50 ` [syzbot ci] Re: bpf: Add value tracking for BPF_DIV syzbot ci
  2 siblings, 0 replies; 11+ messages in thread
From: Yazhou Tang @ 2025-12-23  9:10 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye

From: Yazhou Tang <tangyazhou518@outlook.com>

Now BPF_DIV has value tracking support via interval and tnum analysis.
This patch adds selftests to cover various cases of signed and
unsigned division operations, including edge cases like division by
zero and signed division overflow.

Specifically, these selftests rely on dead code elimination: if the BPF
verifier can precisely analyze the result of a division operation, it
can prune the path that leads to an error (here, an invalid memory
access), allowing the program to pass verification.

Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
Hello everyone,

Thanks for reviewing our patch! This patch adds the necessary selftests
for the BPF_DIV range tracking enhancements.

Regarding the test implementation: I noticed multiple patterns for BPF
selftests (e.g., out-of-bounds reads in `verifier_bounds.c`, illegal
return values in `verifier_mul.c` and `verifier_precision.c`). I have
opted for the invalid-memory-access approach with the `__msg` label, as
it is concise and straightforward.

If the community prefers these tests to be integrated into existing
files or follow a different pattern, please let me know and I will
gladly refactor them.

Best,

Yazhou Tang

 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_div_bounds.c | 404 ++++++++++++++++++
 2 files changed, 406 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_div_bounds.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 5829ffd70f8f..d892ad7b688e 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -33,6 +33,7 @@
 #include "verifier_direct_packet_access.skel.h"
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
+#include "verifier_div_bounds.skel.h"
 #include "verifier_div_overflow.skel.h"
 #include "verifier_global_subprogs.skel.h"
 #include "verifier_global_ptr_args.skel.h"
@@ -174,6 +175,7 @@ void test_verifier_d_path(void)               { RUN(verifier_d_path); }
 void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_access); }
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
+void test_verifier_div_bounds(void)           { RUN(verifier_div_bounds); }
 void test_verifier_div_overflow(void)         { RUN(verifier_div_overflow); }
 void test_verifier_global_subprogs(void)      { RUN(verifier_global_subprogs); }
 void test_verifier_global_ptr_args(void)      { RUN(verifier_global_ptr_args); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_div_bounds.c b/tools/testing/selftests/bpf/progs/verifier_div_bounds.c
new file mode 100644
index 000000000000..7a8b6905de56
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_div_bounds.c
@@ -0,0 +1,404 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <limits.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* This file contains unit tests for signed/unsigned division
+ * operations, focusing on verifying whether BPF verifier's
+ * tnum and interval analysis modules soundly and precisely
+ * compute the results.
+ */
+
+SEC("socket")
+__description("UDIV32, non-zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 /= w2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void udiv32_non_zero(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 &= 1;					\
+	w2 |= 2;					\
+	w1 /= w2;					\
+	if w1 <= 4 goto l0_%=;				\
+	/* Precise analysis will prune the path with error code */\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 /= w2 {{.*}}; R1=0 R2=0")
+__naked void udiv32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 = 0;						\
+	w1 /= w2;					\
+	if w1 == 0 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV64, non-zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 /= r2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void udiv64_non_zero(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 &= 1;					\
+	r2 |= 2;					\
+	r1 /= r2;					\
+	if r1 <= 4 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 /= r2 {{.*}}; R1=0 R2=0")
+__naked void udiv64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 = 0;						\
+	r1 /= r2;					\
+	if r1 == 0 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void sdiv32_non_zero(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 &= 1;					\
+	w2 |= 2;					\
+	w1 s/= w2;					\
+	if w1 s<= 4 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero divisor, negative dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-4,smax32=0,var_off=(0x0; 0xffffffff))")
+__naked void sdiv32_negative_dividend(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w1 = -w1;					\
+	w2 &= 1;					\
+	w2 |= 2;					\
+	w1 s/= w2;					\
+	if w1 s>= -4 goto l0_%=;			\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero divisor, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-4,smax32=0,var_off=(0x0; 0xffffffff))")
+__naked void sdiv32_negative_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 &= 1;					\
+	w2 |= 2;					\
+	w2 = -w2;					\
+	w1 s/= w2;					\
+	if w1 s>= -4 goto l0_%=;			\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero divisor, both negative")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void sdiv32_both_negative(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 &= 1;					\
+	w2 |= 2;					\
+	w1 = -w1;					\
+	w2 = -w2;					\
+	w1 s/= w2;					\
+	if w1 s<= 4 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=0 R2=0")
+__naked void sdiv32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 = 0;						\
+	w1 s/= w2;					\
+	if w1 == 0 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, S32_MIN/-1")
+__success __retval(0) __log_level(2)
+__msg("w2 s/= -1 {{.*}}; R2=0x80000000")
+__naked void sdiv32_overflow(void)
+{
+	asm volatile ("					\
+	w1 = %[int_min];				\
+	w2 = w1;					\
+	w2 s/= -1;					\
+	if w1 == w2 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(int_min, INT_MIN)
+	: __clobber_all);
+}
+
+
+SEC("socket")
+__description("SDIV64, non-zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void sdiv64_non_zero(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 &= 1;					\
+	r2 |= 2;					\
+	r1 s/= r2;					\
+	if r1 s<= 4 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero divisor, negative dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=scalar(smin=smin32=-4,smax=smax32=0)")
+__naked void sdiv64_negative_dividend(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r1 = -r1;					\
+	r2 &= 1;					\
+	r2 |= 2;					\
+	r1 s/= r2;					\
+	if r1 s>= -4 goto l0_%=;			\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero divisor, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=scalar(smin=smin32=-4,smax=smax32=0)")
+__naked void sdiv64_negative_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 &= 1;					\
+	r2 |= 2;					\
+	r2 = -r2;					\
+	r1 s/= r2;					\
+	if r1 s>= -4 goto l0_%=;			\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero divisor, both negative")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=4,var_off=(0x0; 0x7))")
+__naked void sdiv64_both_negative(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 &= 1;					\
+	r2 |= 2;					\
+	r1 = -r1;					\
+	r2 = -r2;					\
+	r1 s/= r2;					\
+	if r1 s<= 4 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=0 R2=0")
+__naked void sdiv64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 = 0;						\
+	r1 s/= r2;					\
+	if r1 == 0 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, S64_MIN/-1")
+__success __retval(0) __log_level(2)
+__msg("r2 s/= -1 {{.*}}; R2=0x8000000000000000")
+__naked void sdiv64_overflow(void)
+{
+	asm volatile ("					\
+	r1 = %[llong_min] ll;				\
+	r2 = r1;					\
+	r2 s/= -1;					\
+	if r1 == r2 goto l0_%=;				\
+	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(llong_min, LLONG_MIN)
+	: __clobber_all);
+}
\ No newline at end of file
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [syzbot ci] Re: bpf: Add value tracking for BPF_DIV
  2025-12-23  9:10 [PATCH bpf-next v2 0/2] bpf: Add value tracking for BPF_DIV Yazhou Tang
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
  2025-12-23  9:10 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add tests for BPF_DIV analysis Yazhou Tang
@ 2025-12-23 13:50 ` syzbot ci
  2 siblings, 0 replies; 11+ messages in thread
From: syzbot ci @ 2025-12-23 13:50 UTC (permalink / raw)
  To: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
	tangyazhou, yonghong.song, ziye
  Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v2] bpf: Add value tracking for BPF_DIV
https://lore.kernel.org/all/20251223091120.2413435-1-tangyazhou@zju.edu.cn
* [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
* [PATCH bpf-next v2 2/2] selftests/bpf: Add tests for BPF_DIV analysis

and found the following issue:
WARNING in reg_bounds_sanity_check

Full report is available here:
https://ci.syzbot.org/series/9bfb8ed3-0d6e-4ae4-93a1-e5d466326f9e

***

WARNING in reg_bounds_sanity_check

tree:      bpf-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next.git
base:      ec439c38013550420aecc15988ae6acb670838c1
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/67e90945-0cc5-47d7-a492-406b63cd4281/config
C repro:   https://ci.syzbot.org/findings/5808370d-6cee-4ff9-9a6f-ba4007533b78/c_repro
syz repro: https://ci.syzbot.org/findings/5808370d-6cee-4ff9-9a6f-ba4007533b78/syz_repro

------------[ cut here ]------------
verifier bug: REG INVARIANTS VIOLATION (alu): range bounds violation u64=[0x1, 0x0] s64=[0x1, 0x0] u32=[0x1, 0x0] s32=[0x1, 0x0] var_off=(0x1, 0x0)
WARNING: kernel/bpf/verifier.c:2748 at reg_bounds_sanity_check+0x201/0xc30 kernel/bpf/verifier.c:2742, CPU#0: syz.0.17/5999
Modules linked in:
CPU: 0 UID: 0 PID: 5999 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:reg_bounds_sanity_check+0x3e6/0xc30 kernel/bpf/verifier.c:2742
Code: 98 00 00 00 4c 8b 8c 24 88 00 00 00 41 ff 34 24 41 57 55 41 55 ff b4 24 f0 00 00 00 ff b4 24 a8 00 00 00 ff b4 24 c0 00 00 00 <67> 48 0f b9 3a 48 83 c4 38 49 bf 00 00 00 00 00 fc ff df 48 8b 84
RSP: 0018:ffffc90004277098 EFLAGS: 00010246
RAX: dffffc0000000000 RBX: 1ffff11022c96804 RCX: 0000000000000001
RDX: ffffffff8b71cd00 RSI: ffffffff8b71bae0 RDI: ffffffff8f861280
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
R10: 000000000000000c R11: 0000000000000000 R12: ffff8881164b4020
R13: 0000000000000001 R14: 1ffff11022c96803 R15: 0000000000000001
FS:  000055555eb0e500(0000) GS:ffff88818e835000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fad443e7dac CR3: 00000001bc680000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 check_alu_op kernel/bpf/verifier.c:16205 [inline]
 do_check_insn kernel/bpf/verifier.c:20533 [inline]
 do_check+0xa72c/0xeba0 kernel/bpf/verifier.c:20802
 do_check_common+0x19cc/0x25b0 kernel/bpf/verifier.c:24080
 do_check_main kernel/bpf/verifier.c:24163 [inline]
 bpf_check+0x5e7a/0x1c300 kernel/bpf/verifier.c:25470
 bpf_prog_load+0x13ba/0x1a10 kernel/bpf/syscall.c:3088
 __sys_bpf+0x5c3/0x8a0 kernel/bpf/syscall.c:6207
 __do_sys_bpf kernel/bpf/syscall.c:6320 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:6318 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:6318
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fad4418f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe64db1fd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fad443e5fa0 RCX: 00007fad4418f7c9
RDX: 0000000000000094 RSI: 0000200000000340 RDI: 0000000000000005
RBP: 00007fad441f297f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fad443e5fa0 R14: 00007fad443e5fa0 R15: 0000000000000003
 </TASK>
----------------
Code disassembly (best guess):
   0:	98                   	cwtl
   1:	00 00                	add    %al,(%rax)
   3:	00 4c 8b 8c          	add    %cl,-0x74(%rbx,%rcx,4)
   7:	24 88                	and    $0x88,%al
   9:	00 00                	add    %al,(%rax)
   b:	00 41 ff             	add    %al,-0x1(%rcx)
   e:	34 24                	xor    $0x24,%al
  10:	41 57                	push   %r15
  12:	55                   	push   %rbp
  13:	41 55                	push   %r13
  15:	ff b4 24 f0 00 00 00 	push   0xf0(%rsp)
  1c:	ff b4 24 a8 00 00 00 	push   0xa8(%rsp)
  23:	ff b4 24 c0 00 00 00 	push   0xc0(%rsp)
* 2a:	67 48 0f b9 3a       	ud1    (%edx),%rdi <-- trapping instruction
  2f:	48 83 c4 38          	add    $0x38,%rsp
  33:	49 bf 00 00 00 00 00 	movabs $0xdffffc0000000000,%r15
  3a:	fc ff df
  3d:	48                   	rex.W
  3e:	8b                   	.byte 0x8b
  3f:	84                   	.byte 0x84


***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
  Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.


* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
@ 2025-12-23 20:58   ` Yonghong Song
  2025-12-25  9:17     ` Yazhou Tang
  2025-12-24  0:32   ` kernel test robot
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Yonghong Song @ 2025-12-23 20:58 UTC (permalink / raw)
  To: Yazhou Tang, bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	kpsingh, sdf, haoluo, jolsa, tangyazhou518, shenghaoyuan0928,
	ziye



On 12/23/25 1:10 AM, Yazhou Tang wrote:
> From: Yazhou Tang <tangyazhou518@outlook.com>
>
> This patch introduces interval analysis (range tracking) and tnum
> analysis (bitwise tracking) for both signed and unsigned division
> operations in the BPF verifier.
>
> The BPF verifier currently lacks support for value tracking on BPF_DIV
> instructions, which can lead to false positives during verification of
> BPF programs that utilize division instructions.
>
> According to the BPF instruction set[1], the instruction's offset field
> (`insn->off`) is used to distinguish between signed (`off == 1`) and
> unsigned division (`off == 0`). Moreover, we also follow the BPF division
> semantics to handle special cases, such as division by zero and signed
> division overflow.
>
> - UDIV: dst = (src != 0) ? (dst / src) : 0
> - SDIV: dst = (src == 0) ? 0 : ((src == -1 && dst == LLONG_MIN) ? LLONG_MIN : (dst / src))
>
> Here is the overview of the changes made in this patch:
>
> 1. For interval analysis:
>     - Added `scalar_min_max_udiv` and `scalar32_min_max_udiv` to update
>       umin/umax bounds, which is straightforward.
>     - Added `scalar_min_max_sdiv` and `scalar32_min_max_sdiv` to update
>       smin/smax bounds. It handles non-monotonic intervals by decomposing
>       the divisor range into negative, zero, and positive sub-ranges, and
>       computing the result range for each sub-range separately. Finally,
>       it combines the results to get the final smin/smax bounds.
> 2. For tnum analysis, we referred to LLVM's KnownBits implementation[2]
>     and the recent research on abstract interpretation of division[3]:
>     - Added `tnum_udiv` to compute the tnum for unsigned division. It
>       calculates the maximum possible result from the maximum value
>       of the dividend tnum and the minimum value of the divisor tnum,
>       then constructs the resulting tnum accordingly. We have proven its
>       soundness using the Rocq Prover[4].
>     - Added `tnum_sdiv` to compute the tnum for signed division. It splits
>       the operands into positive and negative components, then performs
>       calculation on absolute values using `tnum_udiv`, finally unions
>       the results to ensure soundness.
>
>       Introduced `tnum_empty` to represent the bottom element (⊥) of the
>       tnum lattice to support the split-tnum analysis for signed division.
>       Mathematically, tnum_empty represents an empty set of possible values,
>       which is crucial when a sub-tnum (e.g., the negative component of a
>       strictly positive tnum) yields no valid values.
> 3. Also updated existing selftests based on the expected BPF_DIV behavior.

In my own experience, typical division (signed or unsigned) has remainder 0.
For example, (ptr1 - ptr2)/sizeof(*ptr1).

Do you have other production examples which need more complex div/sdiv
handling in tnum and verifier? For example see:
    https://lore.kernel.org/bpf/aRYSlGmmQM1kfF_b@mail.gmail.com/

Without digging into details, a few comments below.

>
> [1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst
> [2] https://llvm.org/doxygen/KnownBits_8cpp_source.html
> [3] https://dl.acm.org/doi/10.1145/3728905
> [4] https://github.com/shenghaoyuan/open-verified-artifacts/tree/main/tnum
>
> Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
> Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
> Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
> ---
> Hello everyone,
>
> Thanks for reviewing our patch! This patch adds interval and tnum analysis
> for both signed and unsigned BPF_DIV instructions in the BPF verifier.
>
> We also have implemented interval and tnum analysis for BPF_MOD
> instruction, which is closely related to division. However, to keep the
> patch size manageable and facilitate easier review, we have decided to
> submit the BPF_MOD related changes in a separate patch following this one.
>
> Best,
>
> Yazhou Tang
>
>   include/linux/tnum.h                          |   4 +
>   kernel/bpf/tnum.c                             | 159 ++++++++++++-
>   kernel/bpf/verifier.c                         | 225 ++++++++++++++++++
>   .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
>   4 files changed, 391 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/tnum.h b/include/linux/tnum.h
> index c52b862dad45..fd00deb2cb88 100644
> --- a/include/linux/tnum.h
> +++ b/include/linux/tnum.h
> @@ -50,6 +50,10 @@ struct tnum tnum_or(struct tnum a, struct tnum b);
>   struct tnum tnum_xor(struct tnum a, struct tnum b);
>   /* Multiply two tnums, return @a * @b */
>   struct tnum tnum_mul(struct tnum a, struct tnum b);
> +/* Unsigned division, return @a / @b */
> +struct tnum tnum_udiv(struct tnum a, struct tnum b);
> +/* Signed division, return @a / @b */
> +struct tnum tnum_sdiv(struct tnum a, struct tnum b, bool alu32);
>   
>   /* Return true if the known bits of both tnums have the same value */
>   bool tnum_overlap(struct tnum a, struct tnum b);
> diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
> index f8e70e9c3998..20cd023709bf 100644
> --- a/kernel/bpf/tnum.c
> +++ b/kernel/bpf/tnum.c
> @@ -12,6 +12,13 @@
>   #define TNUM(_v, _m)	(struct tnum){.value = _v, .mask = _m}
>   /* A completely unknown value */
>   const struct tnum tnum_unknown = { .value = 0, .mask = -1 };
> +/* Not well-formed Tnum, whose concrete value is empty set. */
> +const struct tnum tnum_empty = { .value = -1, .mask = -1 };

Maybe tnum_empty renamed to tnum_poison? This will make it
explicit that it is not well formed.

> +
> +static bool __tnum_eqb(struct tnum a, struct tnum b)
> +{
> +	return a.value == b.value && a.mask == b.mask;
> +}
>   
>   struct tnum tnum_const(u64 value)
>   {
> @@ -83,9 +90,23 @@ struct tnum tnum_sub(struct tnum a, struct tnum b)
>   	return TNUM(dv & ~mu, mu);
>   }
>   
> +/* __tnum_neg_width: tnum negation with given bit width.
> + * @a: the tnum to be negated.
> + * @width: the bit width to perform negation, 32 or 64.
> + */
> +static struct tnum __tnum_neg_width(struct tnum a, int width)
> +{
> +	if (width == 32)
> +		return tnum_subreg(tnum_sub(TNUM(0, 0), tnum_subreg(a)));
> +	else if (width == 64)
> +		return tnum_sub(TNUM(0, 0), a);
> +	else
> +		return tnum_unknown;
> +}
> +
>   struct tnum tnum_neg(struct tnum a)
>   {
> -	return tnum_sub(TNUM(0, 0), a);
> +	return __tnum_neg_width(a, 64);
>   }
>   
>   struct tnum tnum_and(struct tnum a, struct tnum b)
> @@ -167,6 +188,138 @@ bool tnum_overlap(struct tnum a, struct tnum b)
>   	return (a.value & mu) == (b.value & mu);
>   }
>   
> +/* __get_mask: get a mask that covers all bits up to the highest set bit in x.
> + * For example:
> + *   x = 0b0000...0000 -> return 0b0000...0000
> + *   x = 0b0000...0001 -> return 0b0000...0001
> + *   x = 0b0000...1001 -> return 0b0000...1111
> + *   x = 0b1111...1111 -> return 0b1111...1111
> + */
> +static u64 __get_mask(u64 x)
> +{
> +	int width = 0;
> +
> +	if (x > 0)
> +		width = 64 - __builtin_clzll(x);

Maybe 'width = fls64(x)'?

> +	if (width == 0)
> +		return 0;
> +	else if (width == 64)
> +		return U64_MAX;
> +	else
> +		return (1ULL << width) - 1;
> +}
> +
> +struct tnum tnum_udiv(struct tnum a, struct tnum b)
> +{
> +	if (tnum_is_const(b)) {
> +		/* BPF div specification: x / 0 = 0 */
> +		if (b.value == 0)
> +			return TNUM(0, 0);
> +		if (tnum_is_const(a))
> +			return TNUM(a.value / b.value, 0);
> +	}
> +
> +	if (b.value == 0)
> +		return tnum_unknown;
> +
> +	u64 a_max = a.value + a.mask;
> +	u64 b_min = b.value;
> +	u64 max_res = a_max / b_min;
> +	return TNUM(0, __get_mask(max_res));
> +}
> +
> +static u64 __msb(u64 x, int width)
> +{
> +	return x & (1ULL << (width - 1));
> +}
> +
> +static struct tnum __tnum_get_positive(struct tnum x, int width)
> +{
> +	if (__msb(x.value, width))
> +		return tnum_empty;
> +	if (__msb(x.mask, width))
> +		return TNUM(x.value, x.mask & ~(1ULL << (width - 1)));
> +	return x;
> +}
> +
> +static struct tnum __tnum_get_negative(struct tnum x, int width)
> +{
> +	if (__msb(x.value, width))
> +		return x;
> +	if (__msb(x.mask, width))
> +		return TNUM(x.value | (1ULL << (width - 1)), x.mask & ~(1ULL << (width - 1)));
> +	return tnum_empty;
> +}
> +
> +static struct tnum __tnum_abs(struct tnum x, int width)
> +{
> +	if (__msb(x.value, width))
> +		return __tnum_neg_width(x, width);
> +	else
> +		return x;
> +}
> +
> +/* __tnum_sdiv, a helper for tnum_sdiv.
> + * @a: tnum a, a's sign is fixed, __msb(a.mask) == 0
> + * @b: tnum b, b's sign is fixed, __msb(b.mask) == 0
> + *
> + * This function reuses tnum_udiv by operating on the absolute values of a and b,
> + * and then adjusting the sign of the result based on C's division rules.
> + * Here we don't need to specially handle the case of [S64_MIN / -1], because
> + * after __tnum_abs, S64_MIN becomes (S64_MAX + 1), and the behavior of
> + * unsigned [(S64_MAX + 1) / 1] is normal.
> + */
> +static struct tnum __tnum_sdiv(struct tnum a, struct tnum b, int width)
> +{
> +	if (__tnum_eqb(a, tnum_empty) || __tnum_eqb(b, tnum_empty))
> +		return tnum_empty;
> +
> +	struct tnum a_abs = __tnum_abs(a, width);
> +	struct tnum b_abs = __tnum_abs(b, width);
> +	struct tnum res_abs = tnum_udiv(a_abs, b_abs);
> +
> +	if (__msb(a.value, width) == __msb(b.value, width))
> +		return res_abs;
> +	else
> +		return __tnum_neg_width(res_abs, width);
> +}
> +
> +struct tnum tnum_sdiv(struct tnum a, struct tnum b, bool alu32)
> +{
> +	if (tnum_is_const(b)) {
> +		/* BPF div specification: x / 0 = 0 */
> +		if (b.value == 0)
> +			return TNUM(0, 0);
> +		if (tnum_is_const(a)) {
> +			/* BPF div specification: S32_MIN / -1 = S32_MIN */
> +			if (alu32 && (u32)a.value == (u32)S32_MIN && (u32)b.value == (u32)-1)
> +				return TNUM((u32)S32_MIN, 0);
> +			/* BPF div specification: S64_MIN / -1 = S64_MIN */
> +			if (!alu32 && a.value == S64_MIN && b.value == (u64)-1)
> +				return TNUM((u64)S64_MIN, 0);
> +			s64 sval = (s64)a.value / (s64)b.value;
> +			return TNUM((u64)sval, 0);
> +		}
> +	}
> +
> +	if (b.value == 0)
> +		return tnum_unknown;
> +
> +	int width = alu32 ? 32 : 64;
> +	struct tnum a_pos = __tnum_get_positive(a, width);
> +	struct tnum a_neg = __tnum_get_negative(a, width);
> +	struct tnum b_pos = __tnum_get_positive(b, width);
> +	struct tnum b_neg = __tnum_get_negative(b, width);
> +
> +	struct tnum res_pos = __tnum_sdiv(a_pos, b_pos, width);
> +	struct tnum res_neg = __tnum_sdiv(a_neg, b_neg, width);
> +	struct tnum res_mix1 = __tnum_sdiv(a_pos, b_neg, width);
> +	struct tnum res_mix2 = __tnum_sdiv(a_neg, b_pos, width);
> +
> +	return tnum_union(tnum_union(res_pos, res_neg),
> +					tnum_union(res_mix1, res_mix2));
> +}

The above is very complex. It would be great if you could add
some examples to show the logic. For example, see this tnum patch:

https://lore.kernel.org/bpf/20250826034524.2159515-1-nandakumar@nandakumar.co.in

> +
>   /* Note that if a and b disagree - i.e. one has a 'known 1' where the other has
>    * a 'known 0' - this will return a 'known 1' for that bit.
>    */
> @@ -186,6 +339,10 @@ struct tnum tnum_intersect(struct tnum a, struct tnum b)
>    */
>   struct tnum tnum_union(struct tnum a, struct tnum b)
>   {
> +	if (__tnum_eqb(a, tnum_empty))
> +		return b;
> +	if (__tnum_eqb(b, tnum_empty))
> +		return a;
>   	u64 v = a.value & b.value;
>   	u64 mu = (a.value ^ b.value) | a.mask | b.mask;

[...]



* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
  2025-12-23 20:58   ` Yonghong Song
@ 2025-12-24  0:32   ` kernel test robot
  2025-12-24 12:41   ` kernel test robot
  2025-12-24 15:30   ` kernel test robot
  3 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2025-12-24  0:32 UTC (permalink / raw)
  To: Yazhou Tang, bpf
  Cc: oe-kbuild-all, ast, daniel, john.fastabend, andrii, martin.lau,
	eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
	tangyazhou518, shenghaoyuan0928, ziye

Hi Yazhou,

kernel test robot noticed the following build warnings:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Yazhou-Tang/bpf-Add-interval-and-tnum-analysis-for-signed-and-unsigned-BPF_DIV/20251223-171652
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251223091120.2413435-2-tangyazhou%40zju.edu.cn
patch subject: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
config: alpha-randconfig-r131-20251224 (https://download.01.org/0day-ci/archive/20251224/202512240848.WegL0AOr-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251224/202512240848.WegL0AOr-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512240848.WegL0AOr-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> kernel/bpf/tnum.c:16:19: sparse: sparse: symbol 'tnum_empty' was not declared. Should it be static?

vim +/tnum_empty +16 kernel/bpf/tnum.c

    11	
    12	#define TNUM(_v, _m)	(struct tnum){.value = _v, .mask = _m}
    13	/* A completely unknown value */
    14	const struct tnum tnum_unknown = { .value = 0, .mask = -1 };
    15	/* Not well-formed Tnum, whose concrete value is empty set. */
  > 16	const struct tnum tnum_empty = { .value = -1, .mask = -1 };
    17	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
  2025-12-23 20:58   ` Yonghong Song
  2025-12-24  0:32   ` kernel test robot
@ 2025-12-24 12:41   ` kernel test robot
  2025-12-24 15:30   ` kernel test robot
  3 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2025-12-24 12:41 UTC (permalink / raw)
  To: Yazhou Tang, bpf
  Cc: oe-kbuild-all, ast, daniel, john.fastabend, andrii, martin.lau,
	eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
	tangyazhou518, shenghaoyuan0928, ziye

Hi Yazhou,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Yazhou-Tang/bpf-Add-interval-and-tnum-analysis-for-signed-and-unsigned-BPF_DIV/20251223-171652
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251223091120.2413435-2-tangyazhou%40zju.edu.cn
patch subject: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
config: m68k-hp300_defconfig (https://download.01.org/0day-ci/archive/20251224/202512242039.MsW4xvOa-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251224/202512242039.MsW4xvOa-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512242039.MsW4xvOa-lkp@intel.com/

All errors (new ones prefixed by >>):

   m68k-linux-ld: kernel/bpf/verifier.o: in function `__sdiv_range':
>> verifier.c:(.text+0xff6): undefined reference to `__divdi3'
>> m68k-linux-ld: verifier.c:(.text+0x103a): undefined reference to `__divdi3'
   m68k-linux-ld: verifier.c:(.text+0x108c): undefined reference to `__divdi3'
   m68k-linux-ld: verifier.c:(.text+0x10dc): undefined reference to `__divdi3'
   m68k-linux-ld: kernel/bpf/verifier.o: in function `adjust_scalar_min_max_vals':
>> verifier.c:(.text+0x11c2a): undefined reference to `__udivdi3'
   m68k-linux-ld: kernel/bpf/tnum.o: in function `tnum_udiv':
>> tnum.c:(.text+0x636): undefined reference to `__udivdi3'
>> m68k-linux-ld: tnum.c:(.text+0x678): undefined reference to `__udivdi3'
   m68k-linux-ld: kernel/bpf/tnum.o: in function `tnum_sdiv':
>> tnum.c:(.text+0xc92): undefined reference to `__divdi3'

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
                     ` (2 preceding siblings ...)
  2025-12-24 12:41   ` kernel test robot
@ 2025-12-24 15:30   ` kernel test robot
  3 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2025-12-24 15:30 UTC (permalink / raw)
  To: Yazhou Tang, bpf
  Cc: oe-kbuild-all, ast, daniel, john.fastabend, andrii, martin.lau,
	eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
	tangyazhou518, shenghaoyuan0928, ziye

Hi Yazhou,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Yazhou-Tang/bpf-Add-interval-and-tnum-analysis-for-signed-and-unsigned-BPF_DIV/20251223-171652
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20251223091120.2413435-2-tangyazhou%40zju.edu.cn
patch subject: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
config: i386-buildonly-randconfig-002-20251224 (https://download.01.org/0day-ci/archive/20251224/202512242332.vFPARGR1-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251224/202512242332.vFPARGR1-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512242332.vFPARGR1-lkp@intel.com/

All errors (new ones prefixed by >>):

   ld: kernel/bpf/verifier.o: in function `__bpf_sdiv':
   verifier.c:(.text+0x10b8): undefined reference to `__divdi3'
   ld: kernel/bpf/verifier.o: in function `adjust_scalar_min_max_vals':
   verifier.c:(.text+0x1195b): undefined reference to `__udivdi3'
>> ld: verifier.c:(.text+0x11974): undefined reference to `__udivdi3'
   ld: kernel/bpf/tnum.o: in function `tnum_udiv':
   tnum.c:(.text+0x63f): undefined reference to `__udivdi3'
>> ld: tnum.c:(.text+0x66c): undefined reference to `__udivdi3'
   ld: kernel/bpf/tnum.o: in function `tnum_sdiv':
   tnum.c:(.text+0xccd): undefined reference to `__divdi3'

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-23 20:58   ` Yonghong Song
@ 2025-12-25  9:17     ` Yazhou Tang
  2026-01-03  0:21       ` Alexei Starovoitov
  0 siblings, 1 reply; 11+ messages in thread
From: Yazhou Tang @ 2025-12-25  9:17 UTC (permalink / raw)
  To: yonghong.song
  Cc: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
	tangyazhou, ziye

Hi Yonghong,

Thank you for the review.

> In my own experience, typical division (signed or unsigned) has remainder 0.
> For example, (ptr1 - ptr2)/sizeof(*ptr1).
> 
> > Do you have other production examples which need more complex div/sdiv
> handling in tnum and verifier? For example see:
>     https://lore.kernel.org/bpf/aRYSlGmmQM1kfF_b@mail.gmail.com/

You are absolutely right that pointer arithmetic is the most common use
case for division in BPF today.

To be honest, our primary motivation for this patch is not driven by a
specific real-world problem or use case. Instead, we want to improve the
completeness of the BPF verifier.

While analyzing the verifier's logic, we noticed that unlike other ALU
operations, BPF_DIV (and BPF_MOD) lacks precise value tracking. This
creates a gap where valid BPF code using standard ISA instructions might
be rejected simply because the verifier falls back to "unknown scalar"
too easily. We believe that closing this gap makes the verifier more robust
for future developers who might use division in non-pointer contexts.

That said, I fully agree with your concern regarding code complexity. To
address this, I am working on a v3 which will:

1. Simplify and refactor the logic to be cleaner and more readable.
2. Add detailed in-code comments and commit message examples to explain
   the tnum/interval derivation steps.
3. Fix the ALU32/64 mismatch issue reported by syzbot ci (which I have
   already root-caused).

I hope this helps explain the goal of the patch.

> > +/* Not well-formed Tnum, whose concrete value is empty set. */
> > +const struct tnum tnum_empty = { .value = -1, .mask = -1 };
> 
> Maybe tnum_empty renamed to tnum_poison? This will make it
> explicit that it is not well formed.

> > +static u64 __get_mask(u64 x)
> > +{
> > +	int width = 0;
> > +
> > +	if (x > 0)
> > +		width = 64 - __builtin_clzll(x);
> 
> Maybe 'width = fls64(x)'?

Agreed. I will do these 2 changes in the next version.

Thank you for the suggestions!

Best regards,

Yazhou



* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2025-12-25  9:17     ` Yazhou Tang
@ 2026-01-03  0:21       ` Alexei Starovoitov
  2026-01-03  8:40         ` Yazhou Tang
  0 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2026-01-03  0:21 UTC (permalink / raw)
  To: Yazhou Tang
  Cc: Yonghong Song, Andrii Nakryiko, Alexei Starovoitov, bpf,
	Daniel Borkmann, Eduard, Hao Luo, John Fastabend, Jiri Olsa,
	KP Singh, Martin KaFai Lau, Stanislav Fomichev, Shenghao Yuan,
	Song Liu, Yazhou Tang, Tianci Cao

On Thu, Dec 25, 2025 at 1:17 AM Yazhou Tang <tangyazhou@zju.edu.cn> wrote:
>
> Hi Yonghong,
>
> Thank you for the review.
>
> > In my own experience, typical division (signed or unsigned) has remainder 0.
> > For example, (ptr1 - ptr2)/sizeof(*ptr1).
> >
> > Do you have other production examples which needs more complex div/sdiv
> > handling in tnum and verifier? For example see:
> >     https://lore.kernel.org/bpf/aRYSlGmmQM1kfF_b@mail.gmail.com/
>
> You are absolutely right that pointer arithmetic is the most common use
> case for division in BPF today.
>
> To be honest, our primary motivation for this patch is not driven by a
> specific real-world problem or use case. Instead, we want to improve the
> completeness of the BPF verifier.
>
> While analyzing the verifier's logic, we noticed that unlike other ALU
> operations, BPF_DIV (and BPF_MOD) lacks precise value tracking. This
> creates a gap where valid BPF code using standard ISA instructions might
> be rejected simply because the verifier falls back to "unknown scalar"
> too easily. We believe that closing this gap makes the verifier more robust
> for future developers who might use division in non-pointer contexts.
>
> That said, I fully agree with your concern regarding code complexity. To
> address this, I am working on a v3 which will:
>
> 1. Simplify and refactor the logic to be cleaner and more readable.
> 2. Add detailed in-code comments and commit message examples to explain
>    the tnum/interval derivation steps.
> 3. Fix the ALU32/64 mismatch issue reported by syzbot ci (which I have
>    already root-caused).

No. Drop all the tnum complexity and only support a single case where
divisor is a constant.
"Completeness" is not a goal for the verifier. It aims to track
real-world cases where precision is important.
So far I've seen only one case where the compiler couldn't optimize a
modulo operation into shifts and muls, and that caused verification issues.
The code was something like: array[idx % sizeof()].
The verifier can be improved for such specific case,
but not for "completeness".

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV
  2026-01-03  0:21       ` Alexei Starovoitov
@ 2026-01-03  8:40         ` Yazhou Tang
  0 siblings, 0 replies; 11+ messages in thread
From: Yazhou Tang @ 2026-01-03  8:40 UTC (permalink / raw)
  To: alexei.starovoitov
  Cc: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, martin.lau, sdf, shenghaoyuan0928, song, tangyazhou518,
	tangyazhou, yonghong.song, ziye

Hi Alexei,

Thank you for the review.

> No. Drop all the tnum complexity and only support a single case where
> divisor is a constant.
> "Completeness" is not a goal for the verifier. It aims to track
> real-world cases where precision is important.
> So far I've seen only one case where the compiler couldn't optimize a
> modulo operation into shifts and muls, and that caused verification issues.
> The code was something like: array[idx % sizeof()].
> The verifier can be improved for such specific case,
> but not for "completeness".
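For concreteness, the shape of that case looks roughly like this (all names
here are illustrative, not taken from the referenced report):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative shape of the problem case: a non-power-of-two modulus
 * that the compiler may lower to an actual modulo instruction, so the
 * verifier must learn that idx % N < N to accept the array access.
 */
#define N 12
static uint32_t array[N];

static uint32_t read_slot(uint64_t idx)
{
	return array[idx % N];	/* in-bounds iff the mod result is tracked */
}
```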

Understood. I will drop the general tnum complexity.

In v3, I will restrict the analysis to constant divisors only. I will
also include support for BPF_MOD alongside BPF_DIV in this update, since
the two share similar logic and use cases.
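As a sketch of the direction (hypothetical helpers, not the actual verifier
code): with a constant non-zero divisor C and a known unsigned dividend range
[umin, umax], the bounds follow directly:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of range derivation for unsigned BPF_DIV/BPF_MOD with a
 * constant, non-zero divisor c, given the dividend range [umin, umax]:
 *   - x / c is monotonic in x, so the result range is [umin/c, umax/c];
 *   - x % c always lands in [0, c-1]; when the whole dividend range
 *     shares one quotient, the tighter [umin%c, umax%c] applies.
 */
struct range { uint64_t umin, umax; };

static struct range div_range(struct range x, uint64_t c)
{
	return (struct range){ x.umin / c, x.umax / c };
}

static struct range mod_range(struct range x, uint64_t c)
{
	if (x.umin / c == x.umax / c)	/* no wrap across a multiple of c */
		return (struct range){ x.umin % c, x.umax % c };
	return (struct range){ 0, c - 1 };
}
```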

Best,

Yazhou


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2026-01-03  8:41 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-23  9:10 [PATCH bpf-next v2 0/2] bpf: Add value tracking for BPF_DIV Yazhou Tang
2025-12-23  9:10 ` [PATCH bpf-next v2 1/2] bpf: Add interval and tnum analysis for signed and unsigned BPF_DIV Yazhou Tang
2025-12-23 20:58   ` Yonghong Song
2025-12-25  9:17     ` Yazhou Tang
2026-01-03  0:21       ` Alexei Starovoitov
2026-01-03  8:40         ` Yazhou Tang
2025-12-24  0:32   ` kernel test robot
2025-12-24 12:41   ` kernel test robot
2025-12-24 15:30   ` kernel test robot
2025-12-23  9:10 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add tests for BPF_DIV analysis Yazhou Tang
2025-12-23 13:50 ` [syzbot ci] Re: bpf: Add value tracking for BPF_DIV syzbot ci

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox