From: Eduard Zingerman <eddyz87@gmail.com>
To: sashiko@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH RFC bpf-next 1/4] bpf: representation and basic operations on circular numbers
Date: Thu, 23 Apr 2026 15:51:24 -0700 [thread overview]
Message-ID: <0c47b0b7ea476647746806c46fded4353be885f7.camel@gmail.com> (raw)
In-Reply-To: <20260421171830.3881BC2BCB5@smtp.kernel.org>
On Tue, 2026-04-21 at 17:18 +0000, sashiko-bot@kernel.org wrote:
[...]
> > diff --git a/kernel/bpf/cnum_defs.h b/kernel/bpf/cnum_defs.h
> > new file mode 100644
> > index 0000000000000..28dcb79ba6950
> > --- /dev/null
> > +++ b/kernel/bpf/cnum_defs.h
> [ ... ]
> > +#if T == 32
> > +#define ut2 u64
> > +#define st2 s64
> > +#define utt u32
> > +#define stt s32
> > +#define EMPTY CNUM32_EMPTY
> > +#elif T == 64
> > +#define ut2 unsigned __int128
> > +#define st2 signed __int128
>
> Will the unconditional use of unsigned __int128 and signed __int128 break
> compilation on 32-bit architectures? Since the BPF subsystem and verifier
> are built on 32-bit kernels when CONFIG_BPF_SYSCALL is enabled, this
> might cause widespread build failures.
Forgoing __int128 would require something along the following lines:
/* *result = c*d - a*b, if fits in u32; all operands unsigned */
static bool check_mul_u32_u32_sub(u32 a, u32 b, u32 c, u32 d, u32 *result)
{
	u64 size = (u64)c * d - (u64)a * b;

	if (size > U32_MAX)
		return false;
	*result = size;
	return true;
}
/* *result = c*d - a*b, if fits in u32; all operands signed */
static bool check_mul_s32_s32_sub(s32 a, s32 b, s32 c, s32 d, u32 *result)
{
	s64 size = (s64)c * d - (s64)a * b;

	if (size < 0 || size > U32_MAX)
		return false;
	*result = size;
	return true;
}
/* Return (s128)a * b >> shift */
static s64 mul_s64_s64_shr(s64 a, s64 b, unsigned int shift)
{
	return mul_s64_u64_shr(a, abs(b), shift) * (b < 0 ? -1 : 1);
}
/* *result = c*d - a*b, if fits in u64; all operands unsigned */
static bool check_mul_u64_u64_sub(u64 a, u64 b, u64 c, u64 d, u64 *result)
{
	u64 cd_hi = mul_u64_u64_shr(c, d, 64);
	u64 cd_lo = c * d;
	u64 ab_hi = mul_u64_u64_shr(a, b, 64);
	u64 ab_lo = a * b;
	u64 borrow = cd_lo < ab_lo;
	u64 hi = cd_hi - ab_hi - borrow;

	if (hi != 0)
		return false;
	*result = cd_lo - ab_lo;
	return true;
}
/* *result = c*d - a*b, if fits in u64; all operands signed */
static bool check_mul_s64_s64_sub(s64 a, s64 b, s64 c, s64 d, u64 *result)
{
	s64 cd_hi = mul_s64_s64_shr(c, d, 64);
	u64 cd_lo = (u64)c * (u64)d;
	s64 ab_hi = mul_s64_s64_shr(a, b, 64);
	u64 ab_lo = (u64)a * (u64)b;
	u64 borrow = cd_lo < ab_lo;
	s64 hi = cd_hi - ab_hi - borrow;

	if (hi != 0)
		return false;
	*result = cd_lo - ab_lo;
	return true;
}
These are for use in mk_mul_{u,s}, and sit on top of the following functions:
- cnum{32,64}_gap
- cnum{32,64}_extend
- cnum{32,64}_bigger
- cnum{32,64}_union
- cnum{32,64}_cut
- cnum{32,64}_mk_mul_{u,s}
- cnum{32,64}_mul_chunk
- cnum{32,64}_mul
Overall +230 lines of non-trivial code.
I did some work to consolidate the existing checks in [1], but was unable
to get union, cut and mul_chunk verified for the 32-bit and 64-bit
domains: cbmc does not converge to an answer. I'm also hesitant about
brute-force verification over an 8-bit domain: there were a few
non-trivial bugs in check_mul caused by C implicit cast rules, and 8-bit
testing did not reveal them. It looks like a theorem prover is needed indeed.
[1] https://github.com/eddyz87/cnum-verif/tree/consolidated-checks
I tried reinstating the old mul implementation:
static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
			       struct bpf_reg_state *src_reg)
{
	s64 smin = reg_smin(dst_reg);
	s64 smax = reg_smax(dst_reg);
	u64 umin = reg_umin(dst_reg);
	u64 umax = reg_umax(dst_reg);
	s64 tmp_prod[4];

	if (check_mul_overflow(umax, reg_umax(src_reg), &umax) ||
	    check_mul_overflow(umin, reg_umin(src_reg), &umin)) {
		/* Overflow possible, we know nothing */
		umin = 0;
		umax = U64_MAX;
	}
	if (check_mul_overflow(smin, reg_smin(src_reg), &tmp_prod[0]) ||
	    check_mul_overflow(smin, reg_smax(src_reg), &tmp_prod[1]) ||
	    check_mul_overflow(smax, reg_smin(src_reg), &tmp_prod[2]) ||
	    check_mul_overflow(smax, reg_smax(src_reg), &tmp_prod[3])) {
		/* Overflow possible, we know nothing */
		smin = S64_MIN;
		smax = S64_MAX;
	} else {
		smin = min_array(tmp_prod, 4);
		smax = max_array(tmp_prod, 4);
	}
	dst_reg->r64 = cnum64_intersect(cnum64_from_urange(umin, umax),
					cnum64_from_srange(smin, smax));
}
It still handles cases like [+-a, +-b] x [+-c, +-d] reasonably well
for bounded a, b, c, d. As a result:
- no tests failed
- no difference in veristat results.
Therefore, for v2 I'll drop cnum{32,64}_mul completely and defer to
the old code (or move it inside cnum_defs.h to avoid code duplication).
[...]