public inbox for bpf@vger.kernel.org
* [PATCH bpf-next v3 0/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
@ 2026-01-13 10:35 Yazhou Tang
  2026-01-13 10:35 ` [PATCH bpf-next v3 1/2] " Yazhou Tang
  2026-01-13 10:35 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking Yazhou Tang
  0 siblings, 2 replies; 8+ messages in thread
From: Yazhou Tang @ 2026-01-13 10:35 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye

From: Yazhou Tang <tangyazhou518@outlook.com>

Add range tracking (interval analysis) for BPF_DIV and BPF_MOD when
the divisor is a constant. Please see the commit log of patch 1/2 for
more details.

---

Changes v2 => v3:
1. Fix a bug in the `adjust_scalar_min_max_vals` function that led to
   incorrect range results. (Syzbot)
2. Remove tnum analysis logic. (Alexei)
3. Only handle "constant divisor" case. (Alexei)
4. Add BPF_MOD range analysis logic.
5. Update selftests accordingly.
6. Add detailed code comments and improve commit messages. (Yonghong)

v2: https://lore.kernel.org/bpf/20251223091120.2413435-1-tangyazhou@zju.edu.cn/

Changes v1 => v2:
1. Fixed 2 bugs in sdiv32 analysis logic and corrected the associated
   selftest cases. (AI reviewer)
2. Renamed `tnum_bottom` to `tnum_empty` for better clarity, and updated
   commit message to explain its role in signed BPF_DIV analysis.

v1:
https://lore.kernel.org/bpf/tencent_717092CD734D050CCD93401CA624BB3C8307@qq.com/
https://lore.kernel.org/bpf/tencent_7C98FAECA40C98489ACF4515CE346F031509@qq.com/

Yazhou Tang (2):
  bpf: Add range tracking for BPF_DIV and BPF_MOD
  selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking

 kernel/bpf/verifier.c                         | 326 ++++++++
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_div_mod_bounds.c       | 781 ++++++++++++++++++
 .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
 .../testing/selftests/bpf/verifier/precise.c  |   4 +-
 5 files changed, 1115 insertions(+), 5 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_div_mod_bounds.c

-- 
2.52.0



* [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-13 10:35 [PATCH bpf-next v3 0/2] bpf: Add range tracking for BPF_DIV and BPF_MOD Yazhou Tang
@ 2026-01-13 10:35 ` Yazhou Tang
  2026-01-14  0:48   ` kernel test robot
  2026-01-14  3:08   ` Alexei Starovoitov
  2026-01-13 10:35 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking Yazhou Tang
  1 sibling, 2 replies; 8+ messages in thread
From: Yazhou Tang @ 2026-01-13 10:35 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye, syzbot

From: Yazhou Tang <tangyazhou518@outlook.com>

This patch implements range tracking (interval analysis) for BPF_DIV and
BPF_MOD operations when the divisor is a constant, covering both signed
and unsigned variants.

While LLVM typically optimizes integer division and modulo by constants
into multiplication and shift sequences, this optimization is less
effective for the BPF target when dealing with 64-bit arithmetic.

Currently, the verifier does not track bounds for scalar division or
modulo, treating the result as "unbounded". This leads to false positive
rejections for safe code patterns.

For example, the following code (compiled with -O2):

```c
int test(struct pt_regs *ctx) {
    char buffer[6] = {1};
    __u64 x = bpf_ktime_get_ns();
    __u64 res = x % sizeof(buffer);
    char value = buffer[res];
    bpf_printk("res = %llu, val = %d", res, value);
    return 0;
}
```

Generates a raw `BPF_MOD64` instruction:

```asm
;     __u64 res = x % sizeof(buffer);
       1:	97 00 00 00 06 00 00 00	r0 %= 0x6
;     char value = buffer[res];
       2:	18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00	r1 = 0x0 ll
       4:	0f 01 00 00 00 00 00 00	r1 += r0
       5:	91 14 00 00 00 00 00 00	r4 = *(s8 *)(r1 + 0x0)
```

Without this patch, the verifier fails with "math between map_value
pointer and register with unbounded min value is not allowed" because
it cannot deduce that `r0` is within [0, 5].

According to the BPF instruction set specification[1], the instruction's
offset field (`insn->off`) distinguishes signed (`off == 1`) from
unsigned (`off == 0`) division and modulo. We also follow the BPF
division and modulo runtime semantics when handling special cases such
as division by zero and signed division overflow:

- UDIV: dst = (src != 0) ? (dst / src) : 0
- SDIV: dst = (src == 0) ? 0 : ((src == -1 && dst == LLONG_MIN) ? LLONG_MIN : (dst / src))
- UMOD: dst = (src != 0) ? (dst % src) : dst
- SMOD: dst = (src == 0) ? dst : ((src == -1 && dst == LLONG_MIN) ? 0: (dst s% src))

Here is an overview of the changes made in this patch (see the code
comments for more details and examples):

For BPF_DIV:
1. Main analysis:
   - General cases and "division by zero" case: compute the new range by
     dividing max_dividend and min_dividend by the constant divisor.
     Use helper functions `__bpf_udiv32`, `__bpf_udiv`, `__bpf_sdiv32`,
     and `__bpf_sdiv` to encapsulate the division logic, including handling
     division by zero.
   - "SIGNED_MIN / -1" case in signed division: mark the result as unbounded
     if the dividend is not a single number.
2. Post-processing:
   - Domain reset: Signed analysis resets unsigned bounds to unbounded,
     and vice versa.
   - Width reset: 32-bit analysis resets 64-bit bounds to unbounded
     (and vice versa) to maintain consistency.
   - Tnum reset: reset `var_off` to unknown since precise bitwise tracking
     for division is not implemented.

For BPF_MOD:
1. Main analysis:
   - General case: For signed modulo, the result's sign matches the
     dividend's sign. The result's absolute value is strictly bounded
     by min(abs(dividend), abs(divisor) - 1).
     Special care is taken when the divisor is SIGNED_MIN. By casting
     to unsigned before negation and subtracting 1, we avoid signed
     overflow and correctly calculate the maximum possible magnitude
     (`res_max_abs` in the code).
   - "Division by zero" case: If the divisor is zero, the destination
     register remains unchanged (matching runtime behavior).
   - "Small dividend" case: If the dividend is already within the possible
     result range (e.g., [2, 5] % 10), the operation is an identity
     function, and the register state remains unchanged.
2. Post-processing (if the result is changed compared to the dividend):
   - Domain reset: Signed analysis resets unsigned bounds to unbounded,
     and vice versa.
   - Width reset: 32-bit analysis resets 64-bit bounds to unbounded
     (and vice versa) to maintain consistency.
   - Tnum reset: reset `var_off` to unknown since precise bitwise tracking
     for division is not implemented.

Also update existing selftests to match the expected BPF_DIV and
BPF_MOD behavior.

[1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst

Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
Tested-by: syzbot@syzkaller.appspotmail.com
---
 kernel/bpf/verifier.c                         | 326 ++++++++++++++++++
 .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
 .../testing/selftests/bpf/verifier/precise.c  |   4 +-
 3 files changed, 332 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 53635ea2e41b..f3b51751d990 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15077,6 +15077,283 @@ static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
 	}
 }
 
+static u32 __bpf_udiv32(u32 a, u32 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	return a / b;
+}
+
+static u64 __bpf_udiv(u64 a, u64 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	return a / b;
+}
+
+static s32 __bpf_sdiv32(s32 a, s32 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	return a / b;
+}
+
+static s64 __bpf_sdiv(s64 a, s64 b)
+{
+	/* BPF div specification: x / 0 = 0 */
+	if (unlikely(b == 0))
+		return 0;
+	return a / b;
+}
+
+static void scalar32_min_max_udiv(struct bpf_reg_state *dst_reg,
+				  struct bpf_reg_state *src_reg)
+{
+	u32 *dst_umin = &dst_reg->u32_min_value;
+	u32 *dst_umax = &dst_reg->u32_max_value;
+	u32 src_val = src_reg->u32_min_value; /* const divisor */
+
+	*dst_umin = __bpf_udiv32(*dst_umin, src_val);
+	*dst_umax = __bpf_udiv32(*dst_umax, src_val);
+
+	/* Reset signed range to unbounded. */
+	dst_reg->s32_min_value = S32_MIN;
+	dst_reg->s32_max_value = S32_MAX;
+}
+
+static void scalar_min_max_udiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	u64 *dst_umin = &dst_reg->umin_value;
+	u64 *dst_umax = &dst_reg->umax_value;
+	u64 src_val = src_reg->umin_value; /* const divisor */
+
+	*dst_umin = __bpf_udiv(*dst_umin, src_val);
+	*dst_umax = __bpf_udiv(*dst_umax, src_val);
+
+	/* Reset signed range to unbounded. */
+	dst_reg->smin_value = S64_MIN;
+	dst_reg->smax_value = S64_MAX;
+}
+
+static void scalar32_min_max_sdiv(struct bpf_reg_state *dst_reg,
+				  struct bpf_reg_state *src_reg)
+{
+	s32 *dst_smin = &dst_reg->s32_min_value;
+	s32 *dst_smax = &dst_reg->s32_max_value;
+	s32 src_val = src_reg->u32_min_value; /* const divisor */
+	s32 res1, res2;
+
+	/* BPF div specification: S32_MIN / -1 = S32_MIN */
+	if (*dst_smin == S32_MIN && src_val == -1) {
+		/* If dividend is not a single number, e.g., [S32_MIN, S32_MIN+10]/(-1),
+		 * the result = {S32_MIN} U [-(S32_MIN+10), -(S32_MIN+1)]
+		 *            = {S32_MIN} U [S32_MAX-9, S32_MAX] = [S32_MIN, S32_MAX]
+		 * which is unbounded.
+		 */
+		if (*dst_smax != S32_MIN)
+			__mark_reg32_unbounded(dst_reg);
+		return;
+	}
+
+	res1 = __bpf_sdiv32(*dst_smin, src_val);
+	res2 = __bpf_sdiv32(*dst_smax, src_val);
+	*dst_smin = min(res1, res2);
+	*dst_smax = max(res1, res2);
+
+	/* Reset unsigned range to unbounded. */
+	dst_reg->u32_min_value = 0;
+	dst_reg->u32_max_value = U32_MAX;
+}
+
+static void scalar_min_max_sdiv(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	s64 *dst_smin = &dst_reg->smin_value;
+	s64 *dst_smax = &dst_reg->smax_value;
+	s64 src_val = src_reg->umin_value; /* const divisor */
+	s64 res1, res2;
+
+	/* BPF div specification: S64_MIN / -1 = S64_MIN */
+	if (*dst_smin == S64_MIN && src_val == -1) {
+		/* If dividend is not a single number, e.g., [S64_MIN, S64_MIN+10]/(-1),
+		 * the result = {S64_MIN} U [-(S64_MIN+10), -(S64_MIN+1)]
+		 *            = {S64_MIN} U [S64_MAX-9, S64_MAX] = [S64_MIN, S64_MAX]
+		 * which is unbounded.
+		 */
+		if (*dst_smax != S64_MIN)
+			__mark_reg64_unbounded(dst_reg);
+		return;
+	}
+
+	res1 = __bpf_sdiv(*dst_smin, src_val);
+	res2 = __bpf_sdiv(*dst_smax, src_val);
+	*dst_smin = min(res1, res2);
+	*dst_smax = max(res1, res2);
+
+	/* Reset unsigned range to unbounded. */
+	dst_reg->umin_value = 0;
+	dst_reg->umax_value = U64_MAX;
+}
+
+static bool scalar32_min_max_umod(struct bpf_reg_state *dst_reg,
+				  struct bpf_reg_state *src_reg)
+{
+	u32 *dst_umin = &dst_reg->u32_min_value;
+	u32 *dst_umax = &dst_reg->u32_max_value;
+	u32 src_val = src_reg->u32_min_value; /* const divisor */
+	u32 res_max = src_val - 1;
+
+	/* 1. BPF mod specification: x % 0 = x
+	 *    If src_val == 0, i.e. divisor is certainly 0,
+	 *    then the result remains unchanged, [a,b] % [0,0] = [a,b].
+	 * 2. Optimization: If dst_umax <= res_max,
+	 *    then the result remains unchanged. e.g., [2, 5] % 10 = [2, 5].
+	 */
+	if (src_val == 0 || *dst_umax <= res_max)
+		return false;
+
+	*dst_umin = 0;
+	*dst_umax = min(*dst_umax, res_max);
+
+	/* Reset signed range to unbounded. */
+	dst_reg->s32_min_value = S32_MIN;
+	dst_reg->s32_max_value = S32_MAX;
+	return true;
+}
+
+static bool scalar_min_max_umod(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	u64 *dst_umin = &dst_reg->umin_value;
+	u64 *dst_umax = &dst_reg->umax_value;
+	u64 src_val = src_reg->umin_value; /* const divisor */
+	u64 res_max = src_val - 1;
+
+	/* 1. BPF mod specification: x % 0 = x
+	 *    If src_val == 0, i.e. divisor is certainly 0,
+	 *    then the result remains unchanged, [a,b] % [0,0] = [a,b].
+	 * 2. Optimization: If dst_umax <= res_max,
+	 *    then the result remains unchanged. e.g., [2, 5] % 10 = [2, 5].
+	 */
+	if (src_val == 0 || *dst_umax <= res_max)
+		return false;
+
+	*dst_umin = 0;
+	*dst_umax = min(*dst_umax, res_max);
+
+	/* Reset signed range to unbounded. */
+	dst_reg->smin_value = S64_MIN;
+	dst_reg->smax_value = S64_MAX;
+	return true;
+}
+
+static bool scalar32_min_max_smod(struct bpf_reg_state *dst_reg,
+				  struct bpf_reg_state *src_reg)
+{
+	s32 *dst_smin = &dst_reg->s32_min_value;
+	s32 *dst_smax = &dst_reg->s32_max_value;
+	s32 src_val = src_reg->s32_min_value; /* const divisor */
+	u32 src_abs; /* unsigned to avoid overflow */
+	s32 res_max_abs;
+
+	/* 1. BPF mod specification: x % 0 = x
+	 *    If src_val == 0, i.e. divisor is certainly 0,
+	 *    then the result remains unchanged, [a,b] % [0,0] = [a,b].
+	 */
+	if (src_val == 0)
+		return false;
+
+	/* Safe absolute value calculation:
+	 * If src_val == S32_MIN (-2147483648), src_abs becomes 2147483648.
+	 */
+	src_abs = (src_val > 0) ? (u32)src_val : -(u32)src_val;
+
+	/* Calculate the maximum possible absolute value of the result.
+	 * Even if src_abs is 2147483648 (S32_MIN), subtracting 1 gives
+	 * 2147483647 (S32_MAX), which fits perfectly in s32.
+	 */
+	res_max_abs = src_abs - 1;
+
+	/* 2. Optimization: If the dividend is already within the result range,
+	 *    then the result remains unchanged. e.g., [-2, 5] % 10 = [-2, 5].
+	 */
+	if (*dst_smin >= -res_max_abs && *dst_smax <= res_max_abs)
+		return false;
+
+	/* General case: result has the same sign as the dividend. */
+	if (*dst_smin >= 0) {
+		*dst_smin = 0;
+		*dst_smax = min(*dst_smax, res_max_abs);
+	} else if (*dst_smax <= 0) {
+		*dst_smax = 0;
+		*dst_smin = max(*dst_smin, -res_max_abs);
+	} else {
+		*dst_smin = -res_max_abs;
+		*dst_smax = res_max_abs;
+	}
+
+	/* Reset unsigned range to unbounded. */
+	dst_reg->u32_min_value = 0;
+	dst_reg->u32_max_value = U32_MAX;
+	return true;
+}
+
+static bool scalar_min_max_smod(struct bpf_reg_state *dst_reg,
+				struct bpf_reg_state *src_reg)
+{
+	s64 *dst_smin = &dst_reg->smin_value;
+	s64 *dst_smax = &dst_reg->smax_value;
+	s64 src_val = src_reg->smin_value; /* const divisor */
+	u64 src_abs; /* unsigned to avoid overflow */
+	s64 res_max_abs;
+
+	/* 1. BPF mod specification: x % 0 = x
+	 *    If src_val == 0, i.e. divisor is certainly 0,
+	 *    then the result remains unchanged, [a,b] % [0,0] = [a,b].
+	 */
+	if (src_val == 0)
+		return false;
+
+	/* Safe absolute value calculation:
+	 * If src_val == S64_MIN (-2^63), src_abs becomes 2^63.
+	 */
+	src_abs = (src_val > 0) ? (u64)src_val : -(u64)src_val;
+
+	/* Calculate the maximum possible absolute value of the result.
+	 * Even if src_abs is 2^63 (S64_MIN), subtracting 1 gives
+	 * 2^63 - 1 (S64_MAX), which fits perfectly in s64.
+	 */
+	res_max_abs = src_abs - 1;
+
+	/* Optimization: If the dividend is already within the result range,
+	 * then the result remains unchanged.
+	 * Example: [-2, 5] % 10 = [-2, 5].
+	 */
+	if (*dst_smin >= -res_max_abs && *dst_smax <= res_max_abs)
+		return false;
+
+	/* General case: result has the same sign as the dividend. */
+	if (*dst_smin >= 0) {
+		*dst_smin = 0;
+		*dst_smax = min(*dst_smax, res_max_abs);
+	} else if (*dst_smax <= 0) {
+		*dst_smax = 0;
+		*dst_smin = max(*dst_smin, -res_max_abs);
+	} else {
+		*dst_smin = -res_max_abs;
+		*dst_smax = res_max_abs;
+	}
+
+	/* Reset unsigned range to unbounded. */
+	dst_reg->umin_value = 0;
+	dst_reg->umax_value = U64_MAX;
+	return true;
+}
+
 static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
 				 struct bpf_reg_state *src_reg)
 {
@@ -15482,6 +15759,13 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
 	case BPF_MUL:
 		return true;
 
+	/* Division and modulo operators range is only safe to compute when the
+	 * divisor is a constant.
+	 */
+	case BPF_DIV:
+	case BPF_MOD:
+		return src_is_const;
+
 	/* Shift operators range is only computable if shift dimension operand
 	 * is a constant. Shifts greater than 31 or 63 are undefined. This
 	 * includes shifts by a negative number.
@@ -15505,6 +15789,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 				      struct bpf_reg_state src_reg)
 {
 	u8 opcode = BPF_OP(insn->code);
+	s16 off = insn->off;
 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
 	int ret;
 
@@ -15556,6 +15841,47 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		scalar32_min_max_mul(dst_reg, &src_reg);
 		scalar_min_max_mul(dst_reg, &src_reg);
 		break;
+	case BPF_DIV:
+		if (alu32) {
+			if (off == 1)
+				scalar32_min_max_sdiv(dst_reg, &src_reg);
+			else
+				scalar32_min_max_udiv(dst_reg, &src_reg);
+			__mark_reg64_unbounded(dst_reg);
+		} else {
+			if (off == 1)
+				scalar_min_max_sdiv(dst_reg, &src_reg);
+			else
+				scalar_min_max_udiv(dst_reg, &src_reg);
+			__mark_reg32_unbounded(dst_reg);
+		}
+		/* Since we don't have precise tnum analysis for division yet,
+		 * we must reset var_off to unknown to avoid inconsistency.
+		 * Subsequent reg_bounds_sync() will rebuild it from scalar bounds.
+		 */
+		dst_reg->var_off = tnum_unknown;
+		break;
+	case BPF_MOD: {
+		bool changed;
+
+		if (alu32)
+			changed = (off == 1) ? scalar32_min_max_smod(dst_reg, &src_reg)
+						: scalar32_min_max_umod(dst_reg, &src_reg);
+		else
+			changed = (off == 1) ? scalar_min_max_smod(dst_reg, &src_reg)
+						: scalar_min_max_umod(dst_reg, &src_reg);
+		/* As with BPF_DIV, reset var_off and the opposite-width range
+		 * to unknown, unless the result provably equals the dividend.
+		 */
+		if (changed) {
+			if (alu32)
+				__mark_reg64_unbounded(dst_reg);
+			else
+				__mark_reg32_unbounded(dst_reg);
+			dst_reg->var_off = tnum_unknown;
+		}
+		break;
+	}
 	case BPF_AND:
 		dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
 		scalar32_min_max_and(dst_reg, &src_reg);
diff --git a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
index 2129e4353fd9..4d8273c258d5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
+++ b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
@@ -173,14 +173,15 @@ __naked void flow_keys_illegal_variable_offset_alu(void)
 	asm volatile("					\
 	r6 = r1;					\
 	r7 = *(u64*)(r6 + %[flow_keys_off]);		\
-	r8 = 8;						\
-	r8 /= 1;					\
+	call %[bpf_get_prandom_u32];			\
+	r8 = r0;					\
 	r8 &= 8;					\
 	r7 += r8;					\
 	r0 = *(u64*)(r7 + 0);				\
 	exit;						\
 "	:
-	: __imm_const(flow_keys_off, offsetof(struct __sk_buff, flow_keys))
+	: __imm_const(flow_keys_off, offsetof(struct __sk_buff, flow_keys)),
+	  __imm(bpf_get_prandom_u32)
 	: __clobber_all);
 }
 
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 59a020c35647..061d98f6e9bb 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -229,11 +229,11 @@
 {
 	"precise: program doesn't prematurely prune branches",
 	.insns = {
-		BPF_ALU64_IMM(BPF_MOV, BPF_REG_6, 0x400),
+		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+		BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_0),
 		BPF_ALU64_IMM(BPF_MOV, BPF_REG_7, 0),
 		BPF_ALU64_IMM(BPF_MOV, BPF_REG_8, 0),
 		BPF_ALU64_IMM(BPF_MOV, BPF_REG_9, 0x80000000),
-		BPF_ALU64_IMM(BPF_MOD, BPF_REG_6, 0x401),
 		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
 		BPF_JMP_REG(BPF_JLE, BPF_REG_6, BPF_REG_9, 2),
 		BPF_ALU64_IMM(BPF_MOD, BPF_REG_6, 1),
-- 
2.52.0



* [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking
  2026-01-13 10:35 [PATCH bpf-next v3 0/2] bpf: Add range tracking for BPF_DIV and BPF_MOD Yazhou Tang
  2026-01-13 10:35 ` [PATCH bpf-next v3 1/2] " Yazhou Tang
@ 2026-01-13 10:35 ` Yazhou Tang
  1 sibling, 0 replies; 8+ messages in thread
From: Yazhou Tang @ 2026-01-13 10:35 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, tangyazhou518,
	shenghaoyuan0928, ziye

From: Yazhou Tang <tangyazhou518@outlook.com>

Now that BPF_DIV and BPF_MOD have range tracking support via interval
analysis, add selftests covering various cases of BPF_DIV and BPF_MOD
operations with a constant divisor, in both signed and unsigned variants.

This patch includes several types of tests in 32-bit and 64-bit variants:
1. For UDIV
   - positive divisor
   - zero divisor
2. For SDIV
   - positive divisor, dividend cross zero
   - negative divisor, dividend cross zero
   - zero divisor
   - overflow (SIGNED_MIN/-1), normal dividend
   - overflow (SIGNED_MIN/-1), constant dividend
3. For UMOD
   - positive divisor
   - positive divisor, small dividend
   - zero divisor
4. For SMOD
   - positive divisor, dividend cross zero
   - positive divisor, dividend cross zero, small dividend
   - negative divisor, dividend cross zero
   - negative divisor, dividend cross zero, small dividend
   - zero divisor
   - overflow (SIGNED_MIN/-1), normal dividend
   - overflow (SIGNED_MIN/-1), constant dividend

Specifically, these selftests rely on dead code elimination: if the
BPF verifier can precisely analyze the result of a BPF_DIV/BPF_MOD
instruction, it can prune the path that leads to an error (here we use
an invalid memory access as the error case), allowing the program to
pass verification.

Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
---
Hello everyone,

Thanks for reviewing our patch! This patch adds the necessary selftests
for the BPF_DIV and BPF_MOD range tracking enhancements.

Regarding the test implementation: I noticed multiple patterns for BPF
selftests (e.g., out-of-bounds reads in `verifier_bounds.c`, illegal
return values in `verifier_mul.c` and `verifier_precision.c`). I have
opted for the invalid-memory-access approach with the `__msg` label, as
it is concise and straightforward.

If the community prefers these tests to be integrated into existing
files or follow a different pattern, please let me know and I will
gladly refactor them.

Best,

Yazhou Tang

 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_div_mod_bounds.c       | 781 ++++++++++++++++++
 2 files changed, 783 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_div_mod_bounds.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 5829ffd70f8f..ceeda5cd1dd8 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -33,6 +33,7 @@
 #include "verifier_direct_packet_access.skel.h"
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
+#include "verifier_div_mod_bounds.skel.h"
 #include "verifier_div_overflow.skel.h"
 #include "verifier_global_subprogs.skel.h"
 #include "verifier_global_ptr_args.skel.h"
@@ -174,6 +175,7 @@ void test_verifier_d_path(void)               { RUN(verifier_d_path); }
 void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_access); }
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
+void test_verifier_div_mod_bounds(void)       { RUN(verifier_div_mod_bounds); }
 void test_verifier_div_overflow(void)         { RUN(verifier_div_overflow); }
 void test_verifier_global_subprogs(void)      { RUN(verifier_global_subprogs); }
 void test_verifier_global_ptr_args(void)      { RUN(verifier_global_ptr_args); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_div_mod_bounds.c b/tools/testing/selftests/bpf/progs/verifier_div_mod_bounds.c
new file mode 100644
index 000000000000..9367233a104f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_div_mod_bounds.c
@@ -0,0 +1,781 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <limits.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* This file contains unit tests for signed/unsigned division and modulo
+ * operations (with divisor as a constant), focusing on verifying whether
+ * BPF verifier's range tracking module soundly and precisely computes
+ * the results.
+ */
+
+SEC("socket")
+__description("UDIV32, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 /= 3 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=3,var_off=(0x0; 0x3))")
+__naked void udiv32_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w1 /= 3;					\
+	if w1 > 3 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 /= w2 {{.*}}; R1=0 R2=0")
+__naked void udiv32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 = 0;						\
+	w1 /= w2;					\
+	if w1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV64, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 /= 3 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=3,var_off=(0x0; 0x3))")
+__naked void udiv64_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r1 /= 3;					\
+	if r1 > 3 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UDIV64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 /= r2 {{.*}}; R1=0 R2=0")
+__naked void udiv64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 = 0;						\
+	r1 /= r2;					\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= 3 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-2,smax32=3,var_off=(0x0; 0xffffffff))")
+__naked void sdiv32_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s/= 3;					\
+	if w1 s< -2 goto l1_%=;				\
+	if w1 s> 3 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= -3 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-3,smax32=2,var_off=(0x0; 0xffffffff))")
+__naked void sdiv32_neg_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s/= -3;					\
+	if w1 s< -3 goto l1_%=;				\
+	if w1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= w2 {{.*}}; R1=0 R2=0")
+__naked void sdiv32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 = 0;						\
+	w1 s/= w2;					\
+	if w1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, overflow (S32_MIN/-1)")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= -1 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))")
+__naked void sdiv32_overflow_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = %[int_min];				\
+	w2 += 10;					\
+	if w1 s> w2 goto l0_%=;				\
+	w1 s/= -1;					\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(int_min, INT_MIN),
+	  __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, overflow (S32_MIN/-1), constant dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 s/= -1 {{.*}}; R1=0x80000000")
+__naked void sdiv32_overflow_2(void)
+{
+	asm volatile ("					\
+	w1 = %[int_min];				\
+	w1 s/= -1;					\
+	if w1 != %[int_min] goto l0_%=;			\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(int_min, INT_MIN)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= 3 {{.*}}; R1=scalar(smin=smin32=-2,smax=smax32=3)")
+__naked void sdiv64_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s/= 3;					\
+	if r1 s< -2 goto l1_%=;				\
+	if r1 s> 3 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= -3 {{.*}}; R1=scalar(smin=smin32=-3,smax=smax32=2)")
+__naked void sdiv64_neg_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s/= -3;					\
+	if r1 s< -3 goto l1_%=;				\
+	if r1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= r2 {{.*}}; R1=0 R2=0")
+__naked void sdiv64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 = 0;						\
+	r1 s/= r2;					\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, overflow (S64_MIN/-1)")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= -1 {{.*}}; R1=scalar()")
+__naked void sdiv64_overflow_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	r1 = r0;					\
+	r2 = %[llong_min] ll;				\
+	r2 += 10;					\
+	if r1 s> r2 goto l0_%=;				\
+	r1 s/= -1;					\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(llong_min, LLONG_MIN),
+	  __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, overflow (S64_MIN/-1), constant dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 s/= -1 {{.*}}; R1=0x8000000000000000")
+__naked void sdiv64_overflow_2(void)
+{
+	asm volatile ("					\
+	r1 = %[llong_min] ll;				\
+	r1 s/= -1;					\
+	r2 = %[llong_min] ll;				\
+	if r1 != r2 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(llong_min, LLONG_MIN)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD32, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 %= 3 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=2,var_off=(0x0; 0x3))")
+__naked void umod32_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w1 %%= 3;					\
+	if w1 > 3 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD32, positive divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 %= 10 {{.*}}; R1=scalar(smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=9,var_off=(0x1; 0x8))")
+__naked void umod32_pos_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w1 %%= 10;					\
+	if w1 < 1 goto l0_%=;				\
+	if w1 > 9 goto l0_%=;				\
+	if w1 & 1 != 1 goto l0_%=;			\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 %= w2 {{.*}}; R1=scalar(smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=9,var_off=(0x1; 0x8)) R2=0")
+__naked void umod32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w1 &= 8;					\
+	w1 |= 1;					\
+	w2 = 0;						\
+	w1 %%= w2;					\
+	if w1 < 1 goto l0_%=;				\
+	if w1 > 9 goto l0_%=;				\
+	if w1 & 1 != 1 goto l0_%=;			\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD64, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 %= 3 {{.*}}; R1=scalar(smin=smin32=0,smax=umax=smax32=umax32=2,var_off=(0x0; 0x3))")
+__naked void umod64_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r1 %%= 3;					\
+	if r1 > 3 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD64, positive divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 %= 10 {{.*}}; R1=scalar(smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=9,var_off=(0x1; 0x8))")
+__naked void umod64_pos_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r1 %%= 10;					\
+	if r1 < 1 goto l0_%=;				\
+	if r1 > 9 goto l0_%=;				\
+	if r1 & 1 != 1 goto l0_%=;			\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("UMOD64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 %= r2 {{.*}}; R1=scalar(smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=9,var_off=(0x1; 0x8)) R2=0")
+__naked void umod64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r1 &= 8;					\
+	r1 |= 1;					\
+	r2 = 0;						\
+	r1 %%= r2;					\
+	if r1 < 1 goto l0_%=;				\
+	if r1 > 9 goto l0_%=;				\
+	if r1 & 1 != 1 goto l0_%=;			\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= 3 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-2,smax32=2,var_off=(0x0; 0xffffffff))")
+__naked void smod32_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s%%= 3;					\
+	if w1 s< -2 goto l1_%=;				\
+	if w1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, positive divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= 11 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-8,smax32=10,var_off=(0x0; 0xffffffff))")
+__naked void smod32_pos_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s%%= 11;					\
+	if w1 s< -8 goto l1_%=;				\
+	if w1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= -3 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-2,smax32=2,var_off=(0x0; 0xffffffff))")
+__naked void smod32_neg_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s%%= -3;					\
+	if w1 s< -2 goto l1_%=;				\
+	if w1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, negative divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= -11 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-8,smax32=10,var_off=(0x0; 0xffffffff))")
+__naked void smod32_neg_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w1 s%%= -11;					\
+	if w1 s< -8 goto l1_%=;				\
+	if w1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= w2 {{.*}}; R1=scalar(smin=0,smax=umax=0xffffffff,smin32=-8,smax32=10,var_off=(0x0; 0xffffffff)) R2=0")
+__naked void smod32_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	if w1 s< -8 goto l0_%=;				\
+	if w1 s> 10 goto l0_%=;				\
+	w2 = 0;						\
+	w1 s%%= w2;					\
+	if w1 s< -8 goto l1_%=;				\
+	if w1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, overflow (S32_MIN%-1)")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= -1 {{.*}}; R1=0")
+__naked void smod32_overflow_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w1 = w0;					\
+	w2 = %[int_min];				\
+	w2 += 10;					\
+	if w1 s> w2 goto l0_%=;				\
+	w1 s%%= -1;					\
+	if w1 != 0 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(int_min, INT_MIN),
+	  __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, overflow (S32_MIN%-1), constant dividend")
+__success __retval(0) __log_level(2)
+__msg("w1 s%= -1 {{.*}}; R1=0")
+__naked void smod32_overflow_2(void)
+{
+	asm volatile ("					\
+	w1 = %[int_min];				\
+	w1 s%%= -1;					\
+	if w1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(int_min, INT_MIN)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, positive divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= 3 {{.*}}; R1=scalar(smin=smin32=-2,smax=smax32=2)")
+__naked void smod64_pos_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s%%= 3;					\
+	if r1 s< -2 goto l1_%=;				\
+	if r1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, positive divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= 11 {{.*}}; R1=scalar(smin=smin32=-8,smax=smax32=10)")
+__naked void smod64_pos_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s%%= 11;					\
+	if r1 s< -8 goto l1_%=;				\
+	if r1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, negative divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= -3 {{.*}}; R1=scalar(smin=smin32=-2,smax=smax32=2)")
+__naked void smod64_neg_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s%%= -3;					\
+	if r1 s< -2 goto l1_%=;				\
+	if r1 s> 2 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, negative divisor, small dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= -11 {{.*}}; R1=scalar(smin=smin32=-8,smax=smax32=10)")
+__naked void smod64_neg_divisor_unchanged(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r1 s%%= -11;					\
+	if r1 s< -8 goto l1_%=;				\
+	if r1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, zero divisor")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= r2 {{.*}}; R1=scalar(smin=smin32=-8,smax=smax32=10) R2=0")
+__naked void smod64_zero_divisor(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	if r1 s< -8 goto l0_%=;				\
+	if r1 s> 10 goto l0_%=;				\
+	r2 = 0;						\
+	r1 s%%= r2;					\
+	if r1 s< -8 goto l1_%=;				\
+	if r1 s> 10 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, overflow (S64_MIN%-1)")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= -1 {{.*}}; R1=0")
+__naked void smod64_overflow_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	r1 = r0;					\
+	r2 = %[llong_min] ll;				\
+	r2 += 10;					\
+	if r1 s> r2 goto l0_%=;				\
+	r1 s%%= -1;					\
+	if r1 != 0 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(llong_min, LLONG_MIN),
+	  __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, overflow (S64_MIN%-1), constant dividend")
+__success __retval(0) __log_level(2)
+__msg("r1 s%= -1 {{.*}}; R1=0")
+__naked void smod64_overflow_2(void)
+{
+	asm volatile ("					\
+	r1 = %[llong_min] ll;				\
+	r1 s%%= -1;					\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u64 *)(r1 + 0);				\
+	exit;						\
+"	:
+	: __imm_const(llong_min, LLONG_MIN)
+	: __clobber_all);
+}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-13 10:35 ` [PATCH bpf-next v3 1/2] " Yazhou Tang
@ 2026-01-14  0:48   ` kernel test robot
  2026-01-14  3:08   ` Alexei Starovoitov
  1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2026-01-14  0:48 UTC (permalink / raw)
  To: Yazhou Tang, bpf
  Cc: oe-kbuild-all, ast, daniel, john.fastabend, andrii, martin.lau,
	eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
	tangyazhou518, shenghaoyuan0928, ziye, syzbot

Hi Yazhou,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Yazhou-Tang/bpf-Add-range-tracking-for-BPF_DIV-and-BPF_MOD/20260113-184035
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20260113103552.3435695-2-tangyazhou%40zju.edu.cn
patch subject: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
config: riscv-randconfig-002-20260114 (https://download.01.org/0day-ci/archive/20260114/202601140812.rsul8sPI-lkp@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260114/202601140812.rsul8sPI-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601140812.rsul8sPI-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/bpf/verifier.c: In function 'adjust_scalar_min_max_vals':
>> kernel/bpf/verifier.c:15865:3: error: a label can only be part of a statement and a declaration is not a statement
   15865 |   bool changed = true;
         |   ^~~~

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for CAN_DEV
   Depends on [n]: NETDEVICES [=n] && CAN [=y]
   Selected by [y]:
   - CAN [=y] && NET [=y]


vim +15865 kernel/bpf/verifier.c

 15781	
 15782	/* WARNING: This function does calculations on 64-bit values, but the actual
 15783	 * execution may occur on 32-bit values. Therefore, things like bitshifts
 15784	 * need extra checks in the 32-bit case.
 15785	 */
 15786	static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 15787					      struct bpf_insn *insn,
 15788					      struct bpf_reg_state *dst_reg,
 15789					      struct bpf_reg_state src_reg)
 15790	{
 15791		u8 opcode = BPF_OP(insn->code);
 15792		s16 off = insn->off;
 15793		bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
 15794		int ret;
 15795	
 15796		if (!is_safe_to_compute_dst_reg_range(insn, &src_reg)) {
 15797			__mark_reg_unknown(env, dst_reg);
 15798			return 0;
 15799		}
 15800	
 15801		if (sanitize_needed(opcode)) {
 15802			ret = sanitize_val_alu(env, insn);
 15803			if (ret < 0)
 15804				return sanitize_err(env, insn, ret, NULL, NULL);
 15805		}
 15806	
 15807		/* Calculate sign/unsigned bounds and tnum for alu32 and alu64 bit ops.
 15808		 * There are two classes of instructions: The first class we track both
 15809		 * alu32 and alu64 sign/unsigned bounds independently this provides the
 15810		 * greatest amount of precision when alu operations are mixed with jmp32
 15811		 * operations. These operations are BPF_ADD, BPF_SUB, BPF_MUL, BPF_AND,
 15812		 * and BPF_OR. This is possible because these ops have fairly easy to
 15813		 * understand and calculate behavior in both 32-bit and 64-bit alu ops.
 15814		 * See alu32 verifier tests for examples. The second class of
 15815		 * operations, BPF_LSH, BPF_RSH, and BPF_ARSH, however are not so easy
 15816		 * with regards to tracking sign/unsigned bounds because the bits may
 15817		 * cross subreg boundaries in the alu64 case. When this happens we mark
 15818		 * the reg unbounded in the subreg bound space and use the resulting
 15819		 * tnum to calculate an approximation of the sign/unsigned bounds.
 15820		 */
 15821		switch (opcode) {
 15822		case BPF_ADD:
 15823			scalar32_min_max_add(dst_reg, &src_reg);
 15824			scalar_min_max_add(dst_reg, &src_reg);
 15825			dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
 15826			break;
 15827		case BPF_SUB:
 15828			scalar32_min_max_sub(dst_reg, &src_reg);
 15829			scalar_min_max_sub(dst_reg, &src_reg);
 15830			dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
 15831			break;
 15832		case BPF_NEG:
 15833			env->fake_reg[0] = *dst_reg;
 15834			__mark_reg_known(dst_reg, 0);
 15835			scalar32_min_max_sub(dst_reg, &env->fake_reg[0]);
 15836			scalar_min_max_sub(dst_reg, &env->fake_reg[0]);
 15837			dst_reg->var_off = tnum_neg(env->fake_reg[0].var_off);
 15838			break;
 15839		case BPF_MUL:
 15840			dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
 15841			scalar32_min_max_mul(dst_reg, &src_reg);
 15842			scalar_min_max_mul(dst_reg, &src_reg);
 15843			break;
 15844		case BPF_DIV:
 15845			if (alu32) {
 15846				if (off == 1)
 15847					scalar32_min_max_sdiv(dst_reg, &src_reg);
 15848				else
 15849					scalar32_min_max_udiv(dst_reg, &src_reg);
 15850				__mark_reg64_unbounded(dst_reg);
 15851			} else {
 15852				if (off == 1)
 15853					scalar_min_max_sdiv(dst_reg, &src_reg);
 15854				else
 15855					scalar_min_max_udiv(dst_reg, &src_reg);
 15856				__mark_reg32_unbounded(dst_reg);
 15857			}
 15858			/* Since we don't have precise tnum analysis for division yet,
 15859			 * we must reset var_off to unknown to avoid inconsistency.
 15860			 * Subsequent reg_bounds_sync() will rebuild it from scalar bounds.
 15861			 */
 15862			dst_reg->var_off = tnum_unknown;
 15863			break;
 15864		case BPF_MOD:
 15865			bool changed = true;
 15866			if (alu32)
 15867				changed = (off == 1) ? scalar32_min_max_smod(dst_reg, &src_reg)
 15868							: scalar32_min_max_umod(dst_reg, &src_reg);
 15869			else
 15870				changed = (off == 1) ? scalar_min_max_smod(dst_reg, &src_reg)
 15871							: scalar_min_max_umod(dst_reg, &src_reg);
 15872			/* Similar to BPF_DIV, we need to reset var_off and 32/64 range
 15873			 * to unknown (unbounded). But if the result is equal to dividend
 15874			 * (due to special cases in BPF_MOD analysis), we can also keep
 15875			 * them unchanged.
 15876			 */
 15877			if (changed) {
 15878				if (alu32)
 15879					__mark_reg64_unbounded(dst_reg);
 15880				else
 15881					__mark_reg32_unbounded(dst_reg);
 15882				dst_reg->var_off = tnum_unknown;
 15883			}
 15884			break;
 15885		case BPF_AND:
 15886			dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
 15887			scalar32_min_max_and(dst_reg, &src_reg);
 15888			scalar_min_max_and(dst_reg, &src_reg);
 15889			break;
 15890		case BPF_OR:
 15891			dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
 15892			scalar32_min_max_or(dst_reg, &src_reg);
 15893			scalar_min_max_or(dst_reg, &src_reg);
 15894			break;
 15895		case BPF_XOR:
 15896			dst_reg->var_off = tnum_xor(dst_reg->var_off, src_reg.var_off);
 15897			scalar32_min_max_xor(dst_reg, &src_reg);
 15898			scalar_min_max_xor(dst_reg, &src_reg);
 15899			break;
 15900		case BPF_LSH:
 15901			if (alu32)
 15902				scalar32_min_max_lsh(dst_reg, &src_reg);
 15903			else
 15904				scalar_min_max_lsh(dst_reg, &src_reg);
 15905			break;
 15906		case BPF_RSH:
 15907			if (alu32)
 15908				scalar32_min_max_rsh(dst_reg, &src_reg);
 15909			else
 15910				scalar_min_max_rsh(dst_reg, &src_reg);
 15911			break;
 15912		case BPF_ARSH:
 15913			if (alu32)
 15914				scalar32_min_max_arsh(dst_reg, &src_reg);
 15915			else
 15916				scalar_min_max_arsh(dst_reg, &src_reg);
 15917			break;
 15918		default:
 15919			break;
 15920		}
 15921	
 15922		/* ALU32 ops are zero extended into 64bit register */
 15923		if (alu32)
 15924			zext_32_to_64(dst_reg);
 15925		reg_bounds_sync(dst_reg);
 15926		return 0;
 15927	}
 15928	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-13 10:35 ` [PATCH bpf-next v3 1/2] " Yazhou Tang
  2026-01-14  0:48   ` kernel test robot
@ 2026-01-14  3:08   ` Alexei Starovoitov
  2026-01-14  5:47     ` Yazhou Tang
  1 sibling, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-01-14  3:08 UTC (permalink / raw)
  To: Yazhou Tang
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Yazhou Tang, Shenghao Yuan, Tianci Cao, syzbot

On Tue, Jan 13, 2026 at 2:36 AM Yazhou Tang <tangyazhou@zju.edu.cn> wrote:
>
> From: Yazhou Tang <tangyazhou518@outlook.com>
>
> This patch implements range tracking (interval analysis) for BPF_DIV and
> BPF_MOD operations when the divisor is a constant, covering both signed
> and unsigned variants.
>
> While LLVM typically optimizes integer division and modulo by constants
> into multiplication and shift sequences, this optimization is less
> effective for the BPF target when dealing with 64-bit arithmetic.
>
> Currently, the verifier does not track bounds for scalar division or
> modulo, treating the result as "unbounded". This leads to false positive
> rejections for safe code patterns.
>
> For example, the following code (compiled with -O2):
>
> ```c
> int test(struct pt_regs *ctx) {
>     char buffer[6] = {1};
>     __u64 x = bpf_ktime_get_ns();
>     __u64 res = x % sizeof(buffer);
>     char value = buffer[res];
>     bpf_printk("res = %llu, val = %d", res, value);
>     return 0;
> }
> ```
>
> Generates a raw `BPF_MOD64` instruction:
>
> ```asm
> ;     __u64 res = x % sizeof(buffer);
>        1:       97 00 00 00 06 00 00 00 r0 %= 0x6
> ;     char value = buffer[res];
>        2:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0x0 ll
>        4:       0f 01 00 00 00 00 00 00 r1 += r0
>        5:       91 14 00 00 00 00 00 00 r4 = *(s8 *)(r1 + 0x0)
> ```
>
> Without this patch, the verifier fails with "math between map_value
> pointer and register with unbounded min value is not allowed" because
> it cannot deduce that `r0` is within [0, 5].
>
> According to the BPF instruction set[1], the instruction's offset field
> (`insn->off`) is used to distinguish between signed (`off == 1`) and
> unsigned division (`off == 0`). Moreover, we also follow the BPF division
> and modulo semantics (runtime behavior) to handle special cases, such as
> division by zero and signed division overflow.
>
> - UDIV: dst = (src != 0) ? (dst / src) : 0
> - SDIV: dst = (src == 0) ? 0 : ((src == -1 && dst == LLONG_MIN) ? LLONG_MIN : (dst / src))
> - UMOD: dst = (src != 0) ? (dst % src) : dst
> - SMOD: dst = (src == 0) ? dst : ((src == -1 && dst == LLONG_MIN) ? 0: (dst s% src))
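>
> A minimal C sketch of the SDIV/SMOD runtime semantics listed above
> (helper names are illustrative, not part of the patch):
>
> ```c
> #include <limits.h>
>
> /* BPF SDIV64 semantics: x / 0 = 0, S64_MIN / -1 = S64_MIN */
> static long long bpf_sdiv64(long long dst, long long src)
> {
> 	if (src == 0)
> 		return 0;
> 	if (src == -1 && dst == LLONG_MIN)
> 		return LLONG_MIN;
> 	return dst / src;
> }
>
> /* BPF SMOD64 semantics: x % 0 = x, S64_MIN % -1 = 0 */
> static long long bpf_smod64(long long dst, long long src)
> {
> 	if (src == 0)
> 		return dst;
> 	if (src == -1 && dst == LLONG_MIN)
> 		return 0;
> 	return dst % src;
> }
> ```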
>
> Here is the overview of the changes made in this patch (See the code comments
> for more details and examples):
>
> For BPF_DIV:
> 1. Main analysis:
>    - General cases and "division by zero" case: compute the new range by
>      dividing max_dividend and min_dividend by the constant divisor.
>      Use helper functions `__bpf_udiv32`, `__bpf_udiv`, `__bpf_sdiv32`,
>      and `__bpf_sdiv` to encapsulate the division logic, including handling
>      division by zero.
>    - "SIGNED_MIN / -1" case in signed division: mark the result as unbounded
>      if the dividend is not a single number.
> 2. Post-processing:
>    - Domain reset: Signed analysis resets unsigned bounds to unbounded,
>      and vice versa.
>    - Width reset: 32-bit analysis resets 64-bit bounds to unbounded
>      (and vice versa) to maintain consistency.
>    - Tnum reset: reset `var_off` to unknown since precise bitwise tracking
>      for division is not implemented.
>
> For BPF_MOD:
> 1. Main analysis:
>    - General case: For signed modulo, the result's sign matches the
>      dividend's sign. The result's absolute value is strictly bounded
>      by min(abs(dividend), abs(divisor) - 1).
>      Special care is taken when the divisor is SIGNED_MIN. By casting
>      to unsigned before negation and subtracting 1, we avoid signed
>      overflow and correctly calculate the maximum possible magnitude
>      (`res_max_abs` in the code).
>    - "Division by zero" case: If the divisor is zero, the destination
>      register remains unchanged (matching runtime behavior).
>    - "Small dividend" case: If the dividend is already within the possible
>      result range (e.g., [2, 5] % 10), the operation is an identity
>      function, and the register state remains unchanged.
> 2. Post-processing (if the result is changed compared to the dividend):
>    - Domain reset: Signed analysis resets unsigned bounds to unbounded,
>      and vice versa.
>    - Width reset: 32-bit analysis resets 64-bit bounds to unbounded
>      (and vice versa) to maintain consistency.
>    - Tnum reset: reset `var_off` to unknown since precise bitwise tracking
>      for division is not implemented.
>
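> A hedged sketch of the magnitude bound described above for a nonzero
> signed constant divisor (standalone illustration; function name is not
> the patch's literal code). The divisor's absolute value is taken via an
> unsigned cast so that S64_MIN does not trigger signed-negation overflow:
>
> ```c
> /* |divisor| - 1 bound on |dst s% divisor| for divisor != 0.
>  * Casting to u64 before negation makes divisor == S64_MIN safe.
>  */
> static unsigned long long mod_res_max_abs(long long divisor)
> {
> 	unsigned long long abs_d = divisor < 0 ?
> 		-(unsigned long long)divisor : (unsigned long long)divisor;
> 	return abs_d - 1;
> }
> ```
>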
> Also updated existing selftests based on the expected BPF_DIV and
> BPF_MOD behavior.
>
> [1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst
>
> Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
> Co-developed-by: Tianci Cao <ziye@zju.edu.cn>
> Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
> Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
> Tested-by: syzbot@syzkaller.appspotmail.com
> ---
>  kernel/bpf/verifier.c                         | 326 ++++++++++++++++++
>  .../bpf/progs/verifier_value_illegal_alu.c    |   7 +-
>  .../testing/selftests/bpf/verifier/precise.c  |   4 +-
>  3 files changed, 332 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 53635ea2e41b..f3b51751d990 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15077,6 +15077,283 @@ static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
>         }
>  }
>
> +static u32 __bpf_udiv32(u32 a, u32 b)
> +{
> +       /* BPF div specification: x / 0 = 0 */
> +       if (unlikely(b == 0))
> +               return 0;
> +       return a / b;
> +}
> +
> +static u64 __bpf_udiv(u64 a, u64 b)
> +{
> +       /* BPF div specification: x / 0 = 0 */
> +       if (unlikely(b == 0))
> +               return 0;
> +       return a / b;
> +}
> +
> +static s32 __bpf_sdiv32(s32 a, s32 b)
> +{
> +       /* BPF div specification: x / 0 = 0 */
> +       if (unlikely(b == 0))
> +               return 0;
> +       return a / b;
> +}
> +
> +static s64 __bpf_sdiv(s64 a, s64 b)
> +{
> +       /* BPF div specification: x / 0 = 0 */
> +       if (unlikely(b == 0))
> +               return 0;
> +       return a / b;
> +}

The whole thing looks very much AI generated.
It can spit out a lot of code, but submitter (YOU) must
think it through before submitting.
In the above... 4 almost equivalent helpers don't bother you?!

and b==0 check... isn't it obvious that it might be
easier to read and less verbose to do it once before all that?

You need to step up the quality of this patchset.
AI is NOT your friend.

pw-bot: cr

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-14  3:08   ` Alexei Starovoitov
@ 2026-01-14  5:47     ` Yazhou Tang
  2026-01-15  3:53       ` Alexei Starovoitov
  0 siblings, 1 reply; 8+ messages in thread
From: Yazhou Tang @ 2026-01-14  5:47 UTC (permalink / raw)
  To: alexei.starovoitov
  Cc: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, martin.lau, sdf, shenghaoyuan0928, song, syzbot,
	tangyazhou518, tangyazhou, yonghong.song, ziye

Hi Alexei,

> The whole thing looks very much AI generated.
> It can spit out a lot of code, but submitter (YOU) must
> think it through before submitting.
> In the above... 4 almost equivalent helpers don't bother you?!
> 
> and b==0 check... isn't it obvious that might be
> easier to read and less verbose to do it once before all that?
> 
> You need to step up the quality of this patchset.
> AI is NOT your friend.
> 
> pw-bot: cr

I accept the criticism regarding the code verbosity and the redundant
helpers.

To clarify: I assure you that every line of code in this patch was
written by hand.

The verbosity and the redundant helpers were not generated by AI,
but were remnants from my v2 implementation where I handled full
interval-based divisors. When I simplified the logic for v3 to handle
constants only, I failed to clean up the structure sufficiently.
I apologize for this oversight.

I will clean this up in v4 by removing the helpers and hoisting the
zero-checks as suggested.

Before I submit v4, could you please let me know if there are any other
issues you see in the current logic or structure? I want to ensure the
next version meets the quality standards.

Best regards,

Yazhou


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-14  5:47     ` Yazhou Tang
@ 2026-01-15  3:53       ` Alexei Starovoitov
  2026-01-15  5:05         ` Yazhou Tang
  0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-01-15  3:53 UTC (permalink / raw)
  To: Yazhou Tang
  Cc: Andrii Nakryiko, Alexei Starovoitov, bpf, Daniel Borkmann, Eduard,
	Hao Luo, John Fastabend, Jiri Olsa, KP Singh, Martin KaFai Lau,
	Stanislav Fomichev, Shenghao Yuan, Song Liu, syzbot, Yazhou Tang,
	Yonghong Song, Tianci Cao

On Tue, Jan 13, 2026 at 9:48 PM Yazhou Tang <tangyazhou@zju.edu.cn> wrote:
>
> Hi Alexei,
>
> > The whole thing looks very much AI generated.
> > It can spit out a lot of code, but submitter (YOU) must
> > think it through before submitting.
> > In the above... 4 almost equivalent helpers don't bother you?!
> >
> > and b==0 check... isn't it obvious that might be
> > easier to read and less verbose to do it once before all that?
> >
> > You need to step up the quality of this patchset.
> > AI is NOT your friend.
> >
> > pw-bot: cr
>
> I accept the criticism regarding the code verbosity and the redundant
> helpers.
>
> To clarify: I assure you that every line of code in this patch was
> written by hand.
>
> The verbosity and the redundant helpers were not generated by AI,
> but were remnants from my v2 implementation where I handled full
> interval-based divisors. When I simplified the logic for v3 to handle
> constants only, I failed to clean up the structure sufficiently.
> I apologize for this oversight.
>
> I will clean this up in v4 by removing the helpers and hoisting the
> zero-checks as suggested.
>
> Before I submit v4, could you please let me know if there are any other
> issues you see in the current logic or structure? I want to ensure the
> next version meets the quality standards.

1.
Think of ways to reduce copy paste. Some of it is unavoidable,
but 'easy to copy' shouldn't be a default mode of operation.

2.
Use proper kernel comment style. We don't use the networking comment
style anymore for any new code.

3.
The 'bool changed' is a bit difficult to follow.
It seems you've tried to share the code, but doing a 'void'
return for BPF_MOD and using two helpers that do:
__mark_reg64/32_unbounded(dst_reg);
dst_reg->var_off = tnum_unknown;
would be easier to follow, since all assignments will be in one place,
instead of split between scalar[32|64]_min_max_[su]mod()
and adjust_scalar_min_max_vals().
Or even explicitly doing:
+       dst_reg->u32_min_value = 0;
+       dst_reg->u32_max_value = U32_MAX;
//+       return true;
+       __mark_reg64_unbounded(dst_reg); and tnum_unknown

would make the intent more clear.

4.
smod64/32 have 3 supported combinations depending on the sign of smin/smax,
I couldn't find where selftests cover them all.

It seems the tests do:
+       if w1 s< -8 goto l0_%=;                         \
+       if w1 s> 10 goto l0_%=;                         \
(or the same thing with r1)

so only this part is tested?

+       } else {
+               *dst_smin = -res_max_abs;
+               *dst_smax = res_max_abs;
+       }

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH bpf-next v3 1/2] bpf: Add range tracking for BPF_DIV and BPF_MOD
  2026-01-15  3:53       ` Alexei Starovoitov
@ 2026-01-15  5:05         ` Yazhou Tang
  0 siblings, 0 replies; 8+ messages in thread
From: Yazhou Tang @ 2026-01-15  5:05 UTC (permalink / raw)
  To: alexei.starovoitov
  Cc: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, martin.lau, sdf, shenghaoyuan0928, song, syzbot,
	tangyazhou518, tangyazhou, yonghong.song, ziye

Hi Alexei,

On 1/15/26 11:53 AM, Alexei Starovoitov wrote:
> 1.
> Think of ways to reduce copy paste. Some of it is unavoidable,
> but 'easy to copy' shouldn't be a default mode of operation.
>
> 2.
> Use proper kernel comment style. We don't use the networking comment
> style anymore for any new code.
>
> 3.
> The 'bool changed' is a bit difficult to follow.
> It seems you've tried to share the code, but doing a 'void'
> return for BPF_MOD and using two helpers that do:
> __mark_reg64/32_unbounded(dst_reg);
> dst_reg->var_off = tnum_unknown;
> would be easier to follow, since all assignments will be in one place,
> instead of split between scalar[32|64]_min_max_[su]mod()
> and adjust_scalar_min_max_vals().
> Or even explicitly doing:
> +       dst_reg->u32_min_value = 0;
> +       dst_reg->u32_max_value = U32_MAX;
> //+       return true;
> +       __mark_reg64_unbounded(dst_reg); and tnum_unknown
>
> would make the intent more clear.
>
> 4.
> smod64/32 have 3 supported combinations depending on the sign of smin/smax,
> I couldn't find where selftests cover them all.
>
> It seems the tests do:
> +       if w1 s< -8 goto l0_%=;                         \
> +       if w1 s> 10 goto l0_%=;                         \
> (or the same thing with r1)
>
> so only this part is tested?
>
> +       } else {
> +               *dst_smin = -res_max_abs;
> +               *dst_smax = res_max_abs;
> +       }

Thank you for the detailed feedback. I agree with all your points.

For v4, I have refactored the code to improve readability and coverage:

1. Refactoring: I changed the scalar*_min_max_*mod functions to return void.
   The logic for resetting var_off and marking the other bit-width as unbounded
   is now handled explicitly inside these helpers, rather than being split
   between the helper and the caller. This removes the confusing 'bool changed' logic.

2. Comments: I updated all multi-line comments to follow the standard kernel style.

3. Tests: You are right that the previous tests only covered the "mixed sign"
   case for SMOD. I have added new test cases to explicitly cover:
   - Strictly positive dividend (e.g., [10, 12] s% 3)
   - Strictly negative dividend (e.g., [-12, -10] s% 3)
   - Mixed sign dividend (existing tests)
   Additionally, for consistency, I have also added similar "strictly positive"
   and "strictly negative" test cases for SDIV.

I will send v4 shortly.

Best regards,

Yazhou


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-01-15  5:06 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-13 10:35 [PATCH bpf-next v3 0/2] bpf: Add range tracking for BPF_DIV and BPF_MOD Yazhou Tang
2026-01-13 10:35 ` [PATCH bpf-next v3 1/2] " Yazhou Tang
2026-01-14  0:48   ` kernel test robot
2026-01-14  3:08   ` Alexei Starovoitov
2026-01-14  5:47     ` Yazhou Tang
2026-01-15  3:53       ` Alexei Starovoitov
2026-01-15  5:05         ` Yazhou Tang
2026-01-13 10:35 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add tests for BPF_DIV and BPF_MOD range tracking Yazhou Tang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox