public inbox for bpf@vger.kernel.org
* [PATCH bpf 1/2] bpf: Fix incorrect pruning due to atomic fetch precision tracking
@ 2026-03-30 13:27 Daniel Borkmann
  2026-03-30 13:27 ` [PATCH bpf 2/2] selftests/bpf: Add more precision tracking tests for atomics Daniel Borkmann
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Daniel Borkmann @ 2026-03-30 13:27 UTC (permalink / raw)
  To: eddyz87; +Cc: ast, bpf, STAR Labs SG

For a BPF_STX instruction with BPF_ATOMIC mode and the BPF_FETCH flag
set, the src register (or r0 in the BPF_CMPXCHG case) does not only act
as a source but also as a destination: it receives the old value from
the memory location.

The current backtracking logic does not account for this: it treats
atomic fetch operations like regular stores where the src register is
only an input. As a result, backtrack_insn fails to propagate precision
to the stack location, which is then never marked as precise.

Later, the verifier's path pruning can incorrectly consider two states
equivalent even though they differ in their stack state. Two branches
can thus be treated as equivalent and get pruned when they should not
be.

Fix it as follows: when the fetch destination register is being tracked
for precision, clear it and propagate precision over to the stack slot
instead. This mirrors how BPF_LDX handles loads from the stack.

Before:

  0: (b7) r1 = 8                        ; R1=8
  1: (7b) *(u64 *)(r10 -8) = r1         ; R1=8 R10=fp0 fp-8=8
  2: (b7) r2 = 0                        ; R2=0
  3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)          ; R2=8 R10=fp0 fp-8=mmmmmmmm
  4: (bf) r3 = r10                      ; R3=fp0 R10=fp0
  5: (0f) r3 += r2
  mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
  mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10
  mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)
  mark_precise: frame0: regs=r2 stack= before 2: (b7) r2 = 0
  6: R2=8 R3=fp8
  6: (b7) r0 = 0                        ; R0=0
  7: (95) exit

After:

  0: (b7) r1 = 8                        ; R1=8
  1: (7b) *(u64 *)(r10 -8) = r1         ; R1=8 R10=fp0 fp-8=8
  2: (b7) r2 = 0                        ; R2=0
  3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)          ; R2=8 R10=fp0 fp-8=mmmmmmmm
  4: (bf) r3 = r10                      ; R3=fp0 R10=fp0
  5: (0f) r3 += r2
  mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
  mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10
  mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)
  mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0
  mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1
  mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8
  6: R2=8 R3=fp8
  6: (b7) r0 = 0                        ; R0=0
  7: (95) exit

Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
Fixes: 5ca419f2864a ("bpf: Add BPF_FETCH field / create atomic_fetch_add instruction")
Reported-by: STAR Labs SG <info@starlabs.sg>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 kernel/bpf/verifier.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f108c01ff6d0..293aa957a5ff 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4474,6 +4474,31 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
 			 * encountered a case of pointer subtraction.
 			 */
 			return -ENOTSUPP;
+
+		/* An atomic fetch operation writes the old value into a
+		 * register (sreg, or r0 for cmpxchg). If that register
+		 * was tracked for precision, clear it and propagate
+		 * precision over to the stack slot, like we do in ldx.
+		 */
+		if (class == BPF_STX && mode == BPF_ATOMIC &&
+		    (insn->imm & BPF_FETCH)) {
+			u32 load_reg = insn->imm == BPF_CMPXCHG ?
+				       BPF_REG_0 : sreg;
+
+			if (bt_is_reg_set(bt, load_reg)) {
+				bt_clear_reg(bt, load_reg);
+				/* atomic fetch from non-stack memory
+				 * can't be further backtracked, same
+				 * as for ldx.
+				 */
+				if (!hist || !(hist->flags & INSN_F_STACK_ACCESS))
+					return 0;
+				spi = insn_stack_access_spi(hist->flags);
+				fr = insn_stack_access_frameno(hist->flags);
+				bt_set_frame_slot(bt, fr, spi);
+				return 0;
+			}
+		}
 		/* scalars can only be spilled into stack */
 		if (!hist || !(hist->flags & INSN_F_STACK_ACCESS))
 			return 0;
-- 
2.43.0




Thread overview: 7+ messages
2026-03-30 13:27 [PATCH bpf 1/2] bpf: Fix incorrect pruning due to atomic fetch precision tracking Daniel Borkmann
2026-03-30 13:27 ` [PATCH bpf 2/2] selftests/bpf: Add more precision tracking tests for atomics Daniel Borkmann
2026-03-30 14:42   ` Puranjay Mohan
2026-03-30 14:41 ` [PATCH bpf 1/2] bpf: Fix incorrect pruning due to atomic fetch precision tracking Puranjay Mohan
2026-03-30 21:56   ` Daniel Borkmann
2026-03-30 14:45 ` Alexei Starovoitov
2026-03-30 22:02   ` Daniel Borkmann
