* [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
@ 2023-12-09 1:09 Andrii Nakryiko
From: Andrii Nakryiko @ 2023-12-09 1:09 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
When verifier validates BPF_ST_MEM instruction that stores known
constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills
a fake register with a constant (but initially imprecise) value to
a stack slot. Because read-side logic treats it as a proper register
fill from stack slot, we need to mark such stack slot initialization as
INSN_F_STACK_ACCESS instruction to stop precision backtracking from
missing it.
Fixes: 41f6f64e6999 ("bpf: support non-r10 register spill/fill to/from stack in precision tracking")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/verifier.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fb690539d5f6..727a59e4a647 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4498,7 +4498,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
__mark_reg_known(&fake_reg, insn->imm);
fake_reg.type = SCALAR_VALUE;
save_register_state(env, state, spi, &fake_reg, size);
- insn_flags = 0; /* not a register spill */
} else if (reg && is_spillable_regtype(reg->type)) {
/* register containing pointer is being spilled into stack */
if (size != BPF_REG_SIZE) {
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH bpf-next 2/2] selftests/bpf: validate fake register spill/fill precision backtracking logic
From: Andrii Nakryiko @ 2023-12-09 1:09 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Add two tests validating that the verifier's precision backtracking logic
handles BPF_ST_MEM instructions that produce a fake register spill into
a stack slot. This happens when a non-zero constant is written
directly to a stack slot, e.g., *(u64 *)(r10 -8) = 123.
Add both a full 64-bit register spill and a 32-bit "sub-spill".
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 154 ++++++++++++++++++
1 file changed, 154 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index df4920da3472..508f5d6c7347 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -577,4 +577,158 @@ __naked void partial_stack_load_preserves_zeros(void)
: __clobber_common);
}
+char two_byte_buf[2] SEC(".data.two_byte_buf");
+
+SEC("raw_tp")
+__log_level(2) __flag(BPF_F_TEST_STATE_FREQ)
+__success
+/* make sure fp-8 is IMPRECISE fake register spill */
+__msg("3: (7a) *(u64 *)(r10 -8) = 1 ; R10=fp0 fp-8_w=1")
+/* and fp-16 is spilled IMPRECISE const reg */
+__msg("5: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=1 R10=fp0 fp-16_w=1")
+/* validate load from fp-8, which was initialized using BPF_ST_MEM */
+__msg("8: (79) r2 = *(u64 *)(r10 -8) ; R2_w=1 R10=fp0 fp-8=1")
+__msg("9: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 9 first_idx 7 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 8: (79) r2 = *(u64 *)(r10 -8)")
+__msg("mark_precise: frame0: regs= stack=-8 before 7: (bf) r1 = r6")
+/* note, fp-8 is precise, fp-16 is not yet precise, we'll get there */
+__msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_rw=P1 fp-16_w=1")
+__msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
+__msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0")
+__msg("mark_precise: frame0: regs= stack=-8 before 5: (7b) *(u64 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1")
+__msg("mark_precise: frame0: regs= stack=-8 before 3: (7a) *(u64 *)(r10 -8) = 1")
+__msg("10: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1")
+/* validate load from fp-16, which was initialized using BPF_STX_MEM */
+__msg("12: (79) r2 = *(u64 *)(r10 -16) ; R2_w=1 R10=fp0 fp-16=1")
+__msg("13: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 13 first_idx 7 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 12: (79) r2 = *(u64 *)(r10 -16)")
+__msg("mark_precise: frame0: regs= stack=-16 before 11: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 10: (73) *(u8 *)(r1 +0) = r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 9: (0f) r1 += r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 8: (79) r2 = *(u64 *)(r10 -8)")
+__msg("mark_precise: frame0: regs= stack=-16 before 7: (bf) r1 = r6")
+/* now both fp-8 and fp-16 are precise, very good */
+__msg("mark_precise: frame0: parent state regs= stack=-16: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_rw=P1 fp-16_rw=P1")
+__msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
+__msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0")
+__msg("mark_precise: frame0: regs= stack=-16 before 5: (7b) *(u64 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1")
+__msg("14: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1")
+__naked void stack_load_preserves_const_precision(void)
+{
+ asm volatile (
+ /* establish checkpoint with state that has no stack slots;
+ * if we bubble up to this state without finding desired stack
+ * slot, then it's a bug and should be caught
+ */
+ "goto +0;"
+
+ /* fp-8 is const 1 *fake* register */
+ ".8byte %[fp8_st_one];" /* LLVM-18+: *(u64 *)(r10 -8) = 1; */
+
+ /* fp-16 is const 1 register */
+ "r0 = 1;"
+ "*(u64 *)(r10 -16) = r0;"
+
+ /* force checkpoint to check precision marks preserved in parent states */
+ "goto +0;"
+
+ /* load single U64 from aligned FAKE_REG=1 slot */
+ "r1 = %[two_byte_buf];"
+ "r2 = *(u64 *)(r10 -8);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ /* load single U64 from aligned REG=1 slot */
+ "r1 = %[two_byte_buf];"
+ "r2 = *(u64 *)(r10 -16);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm_ptr(two_byte_buf),
+ __imm_insn(fp8_st_one, BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 1))
+ : __clobber_common);
+}
+
+SEC("raw_tp")
+__log_level(2) __flag(BPF_F_TEST_STATE_FREQ)
+__success
+/* make sure fp-8 is 32-bit FAKE subregister spill */
+__msg("3: (62) *(u32 *)(r10 -8) = 1 ; R10=fp0 fp-8=????1")
+/* and fp-16 is a spilled IMPRECISE const reg */
+__msg("5: (63) *(u32 *)(r10 -16) = r0 ; R0_w=1 R10=fp0 fp-16=????1")
+/* validate load from fp-8, which was initialized using BPF_ST_MEM */
+__msg("8: (61) r2 = *(u32 *)(r10 -8) ; R2_w=1 R10=fp0 fp-8=????1")
+__msg("9: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 9 first_idx 7 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 8: (61) r2 = *(u32 *)(r10 -8)")
+__msg("mark_precise: frame0: regs= stack=-8 before 7: (bf) r1 = r6")
+__msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_r=????P1 fp-16=????1")
+__msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
+__msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0")
+__msg("mark_precise: frame0: regs= stack=-8 before 5: (63) *(u32 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1")
+__msg("mark_precise: frame0: regs= stack=-8 before 3: (62) *(u32 *)(r10 -8) = 1")
+__msg("10: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1")
+/* validate load from fp-16, which was initialized using BPF_STX_MEM */
+__msg("12: (61) r2 = *(u32 *)(r10 -16) ; R2_w=1 R10=fp0 fp-16=????1")
+__msg("13: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 13 first_idx 7 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 12: (61) r2 = *(u32 *)(r10 -16)")
+__msg("mark_precise: frame0: regs= stack=-16 before 11: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 10: (73) *(u8 *)(r1 +0) = r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 9: (0f) r1 += r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 8: (61) r2 = *(u32 *)(r10 -8)")
+__msg("mark_precise: frame0: regs= stack=-16 before 7: (bf) r1 = r6")
+__msg("mark_precise: frame0: parent state regs= stack=-16: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_r=????P1 fp-16_r=????P1")
+__msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7")
+__msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0")
+__msg("mark_precise: frame0: regs= stack=-16 before 5: (63) *(u32 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1")
+__msg("14: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1")
+__naked void stack_load_preserves_const_precision_subreg(void)
+{
+ asm volatile (
+ /* establish checkpoint with state that has no stack slots;
+ * if we bubble up to this state without finding desired stack
+ * slot, then it's a bug and should be caught
+ */
+ "goto +0;"
+
+ /* fp-8 is const 1 *fake* SUB-register */
+ ".8byte %[fp8_st_one];" /* LLVM-18+: *(u32 *)(r10 -8) = 1; */
+
+ /* fp-16 is const 1 SUB-register */
+ "r0 = 1;"
+ "*(u32 *)(r10 -16) = r0;"
+
+ /* force checkpoint to check precision marks preserved in parent states */
+ "goto +0;"
+
+ /* load single U32 from aligned FAKE_REG=1 slot */
+ "r1 = %[two_byte_buf];"
+ "r2 = *(u32 *)(r10 -8);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ /* load single U32 from aligned REG=1 slot */
+ "r1 = %[two_byte_buf];"
+ "r2 = *(u32 *)(r10 -16);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm_ptr(two_byte_buf),
+ __imm_insn(fp8_st_one, BPF_ST_MEM(BPF_W, BPF_REG_FP, -8, 1)) /* 32-bit spill */
+ : __clobber_common);
+}
+
char _license[] SEC("license") = "GPL";
--
2.34.1
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Eduard Zingerman @ 2023-12-09 2:01 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team
On Fri, 2023-12-08 at 17:09 -0800, Andrii Nakryiko wrote:
> When verifier validates BPF_ST_MEM instruction that stores known
> constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills
> a fake register with a constant (but initially imprecise) value to
> a stack slot. Because read-side logic treats it as a proper register
> fill from stack slot, we need to mark such stack slot initialization as
> INSN_F_STACK_ACCESS instruction to stop precision backtracking from
> missing it.
>
> Fixes: 41f6f64e6999 ("bpf: support non-r10 register spill/fill to/from stack in precision tracking")
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> kernel/bpf/verifier.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fb690539d5f6..727a59e4a647 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4498,7 +4498,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> __mark_reg_known(&fake_reg, insn->imm);
> fake_reg.type = SCALAR_VALUE;
> save_register_state(env, state, spi, &fake_reg, size);
> - insn_flags = 0; /* not a register spill */
> } else if (reg && is_spillable_regtype(reg->type)) {
> /* register containing pointer is being spilled into stack */
> if (size != BPF_REG_SIZE) {
So, the problem is that for some 'r5 = r10; *(u64 *)(r5 - 8) = 123'
backtracking won't reset precision mark for -8, right?
That's not a tragedy, we just get more precision marks than needed;
however, I think the same logic applies to the MISC/ZERO case below.
I'll look through the tests in the morning.
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Andrii Nakryiko @ 2023-12-09 2:15 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Fri, Dec 8, 2023 at 6:01 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Fri, 2023-12-08 at 17:09 -0800, Andrii Nakryiko wrote:
> > When verifier validates BPF_ST_MEM instruction that stores known
> > constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills
> > a fake register with a constant (but initially imprecise) value to
> > a stack slot. Because read-side logic treats it as a proper register
> > fill from stack slot, we need to mark such stack slot initialization as
> > INSN_F_STACK_ACCESS instruction to stop precision backtracking from
> > missing it.
> >
> > Fixes: 41f6f64e6999 ("bpf: support non-r10 register spill/fill to/from stack in precision tracking")
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > kernel/bpf/verifier.c | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index fb690539d5f6..727a59e4a647 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4498,7 +4498,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > __mark_reg_known(&fake_reg, insn->imm);
> > fake_reg.type = SCALAR_VALUE;
> > save_register_state(env, state, spi, &fake_reg, size);
> > - insn_flags = 0; /* not a register spill */
> > } else if (reg && is_spillable_regtype(reg->type)) {
> > /* register containing pointer is being spilled into stack */
> > if (size != BPF_REG_SIZE) {
>
> So, the problem is that for some 'r5 = r10; *(u64 *)(r5 - 8) = 123'
> backtracking won't reset precision mark for -8, right?
no, the problem is that we won't stop at the right instruction. Let's
say we have this sequence
1: *(u64 *)(r10 - 8) = 123;
2: r1 = *(u64 *)(r10 - 8);
3: if r1 == 123 goto +10;
...
At 3: we want to set r1 to precise. We go back, see that at 2: we set
r1 from fp-8 slot, so instead of looking for r1, we start looking for
what set fp-8 now. So we go to 1:, and because it actually is not
marked as INSN_F_STACK_ACCESS, we skip it, and keep looking further
for what set fp-8. At some point we can go to parent state that didn't
even have fp-8 stack slot allocated (or we can get out and then see
that we haven't cleared all stack slot bits in our masks). So this
patch makes it so that 1: is marked as setting fp-8 slot, and
precision propagation will clear fp-8 from the mask.
Now, the subtle thing here is that this doesn't happen with STACK_ZERO
or STACK_MISC. Let's look at STACK_MISC/STACK_INVALID case.
1: *(u8 *)(r10 -1) = 123; /* now fp-8=m??????? */
2: r1 = *(u64 *)(r10 - 8); /* STACK_MISC read, r1 is set to unknown scalar */
3: if r1 == 123 goto +10;
Let's do the analysis again. At 3: we mark r1 as precise and go back to 2:.
Here instruction 2: is not marked as INSN_F_STACK_ACCESS because it
wasn't a stack fill, due to STACK_MISC (that's handled in the
check_stack_read_fixed_off() logic). So mark_chain_precision() stops here
because that instruction is resetting r1, so we clear r1 from the
mask, but this instruction isn't STACK_ACCESS, so we don't look for
fp-8 here.
I think check_stack_write_fixed_off() can always set
INSN_F_STACK_ACCESS, actually, maybe that would be easier to follow.
Even when we write STACK_ZERO/STACK_MISC. It's only
check_stack_read_fixed_off() that has to be careful and drop
INSN_F_STACK_ACCESS if we didn't really fill the register state from
the stack slot.
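[Editor's note: the two backward walks described above can be sketched in a
rough Python model. This is an illustration only, not verifier code: the
INSN_F_STACK_ACCESS name mirrors the kernel flag, but the walk is heavily
simplified (hypothetical dict-based "instructions", no frames, no parent
states) and only tracks which precision-mask bits get cleared.]

```python
# Simplified model of mark_chain_precision() walking instructions backwards.
# Each "instruction" records what it writes and whether it is flagged as a
# stack access; a fill additionally records which slot it read from.
ST_MEM_FP8 = {"flags": set(), "writes": "fp-8"}              # *(u64 *)(r10 -8) = 123
LDX_R1_FP8 = {"flags": {"INSN_F_STACK_ACCESS"},              # r1 = *(u64 *)(r10 -8)
              "writes": "r1", "fill_from": "fp-8"}
JMP_R1     = {"flags": set(), "reads": "r1"}                 # if r1 == 123 goto +10

def backtrack(insns, start_reg):
    regs, stack = {start_reg}, set()                         # masks to resolve
    for insn in reversed(insns):
        tgt = insn.get("writes")
        if tgt in regs:
            regs.discard(tgt)                                # origin of reg found
            if "fill_from" in insn:
                stack.add(insn["fill_from"])                 # chase the slot instead
        elif "INSN_F_STACK_ACCESS" in insn["flags"] and tgt in stack:
            stack.discard(tgt)                               # origin of slot found
    return stack  # leftover bits mean backtracking "lost" the slot

# Without the fix the BPF_ST_MEM store is not flagged, so fp-8 never clears
# and we bubble past the instruction that actually defined the slot.
assert backtrack([ST_MEM_FP8, LDX_R1_FP8, JMP_R1], "r1") == {"fp-8"}

# With the fix the store carries the stack-access flag and the walk resolves.
ST_MEM_FP8_FIXED = dict(ST_MEM_FP8, flags={"INSN_F_STACK_ACCESS"})
assert backtrack([ST_MEM_FP8_FIXED, LDX_R1_FP8, JMP_R1], "r1") == set()
```

In the STACK_MISC variant the model never adds fp-8 to the stack mask in the
first place, because the read is not a fill (`fill_from` absent), which is why
that case is fine without the patch.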
> That's not a tragedy we just get more precision marks than needed,
> however, I think that same logic applies to the MISC/ZERO case below.
See above, MISC/ZERO is fine as is due to check_stack_read_fixed_off()
not setting the STACK_ACCESS bit, but I can also send a version that
unconditionally sets INSN_F_STACK_ACCESS in
check_stack_write_fixed_off().
> I'll look through the tests in the morning.
Thanks, no rush!
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Andrii Nakryiko @ 2023-12-09 2:16 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Fri, Dec 8, 2023 at 6:15 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Dec 8, 2023 at 6:01 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Fri, 2023-12-08 at 17:09 -0800, Andrii Nakryiko wrote:
> > > When verifier validates BPF_ST_MEM instruction that stores known
> > > constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills
> > > a fake register with a constant (but initially imprecise) value to
> > > a stack slot. Because read-side logic treats it as a proper register
> > > fill from stack slot, we need to mark such stack slot initialization as
> > > INSN_F_STACK_ACCESS instruction to stop precision backtracking from
> > > missing it.
> > >
> > > Fixes: 41f6f64e6999 ("bpf: support non-r10 register spill/fill to/from stack in precision tracking")
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > ---
> > > kernel/bpf/verifier.c | 1 -
> > > 1 file changed, 1 deletion(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index fb690539d5f6..727a59e4a647 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -4498,7 +4498,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > > __mark_reg_known(&fake_reg, insn->imm);
> > > fake_reg.type = SCALAR_VALUE;
> > > save_register_state(env, state, spi, &fake_reg, size);
> > > - insn_flags = 0; /* not a register spill */
> > > } else if (reg && is_spillable_regtype(reg->type)) {
> > > /* register containing pointer is being spilled into stack */
> > > if (size != BPF_REG_SIZE) {
> >
> > So, the problem is that for some 'r5 = r10; *(u64 *)(r5 - 8) = 123'
> > backtracking won't reset precision mark for -8, right?
>
> no, the problem is that we won't stop at the right instruction. Let's
> say we have this sequence
>
> 1: *(u64 *)(r10 - 8) = 123;
> 2: r1 = *(u64 *)(r10 - 8);
> 3: if r1 == 123 goto +10;
> ...
>
> At 3: we want to set r1 to precise. We go back, see that at 2: we set
> r1 from fp-8 slot, so instead of looking for r1, we start looking for
> what set fp-8 now. So we go to 1:, and because it actually is not
> marked as INSN_F_STACK_ACCESS, we skip it, and keep looking further
> for what set fp-8. At some point we can go to parent state that didn't
> even have fp-8 stack slot allocated (or we can get out and then see
> that we haven't cleared all stack slot bits in our masks). So this
> patch makes it so that 1: is marked as setting fp-8 slot, and
> precision propagation will clear fp-8 from the mask.
>
> Now, the subtle thing here is that this doesn't happen with STACK_ZERO
> or STACK_MISC. Let's look at STACK_MISC/STACK_INVALID case.
>
> 1: *(u8 *)(r10 -1) = 123; /* now fp-8=m??????? */
> 2: r1 = *(u64 *)(r10 - 8); /* STACK_MISC read, r1 is set to unknown scalar */
> 3: if r1 == 123 goto +10;
A small correction: with STACK_MISC the conditional jump won't mark_precise;
we'd need some other instruction to trigger precision, but hopefully
the point is still clear.
>
> Let's do analysis again. At 3: we mark r1 as precise, go back to 2:.
> Here 2: instruction is not marked as INSN_F_STACK_ACCESS because it
> wasn't stack fill due to STACK_MISC (that's handled in
> check_read_fixed_off logic). So mark_chain_precision() stops here
> because that instruction is resetting r1, so we clear r1 from the
> mask, but this instruction isn't STACK_ACCESS, so we don't look for
> fp-8 here.
>
> I think check_stack_write_fixed_off() can always set
> INSN_F_STACK_ACCESS, actually, maybe that would be easier to follow.
> Even when we write STACK_ZERO/STACK_MISC. It's only
> check_stack_read_fixed_off() that has to be careful and drop
> INSN_F_STACK_ACCESS if we didn't really fill the register state from
> the stack slot.
>
>
> > That's not a tragedy we just get more precision marks than needed,
> > however, I think that same logic applies to the MISC/ZERO case below.
>
> See above, MISC/ZERO is fine as is due to check_stack_read_fixed_off()
> not setting STACK_ACCESS bit, but I can also send a version that
> unconditionally sets INSN_F_STACK_ACCESS in
> check_stack_write_fixed_off().
>
> > I'll look through the tests in the morning.
>
> Thanks, no rush!
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Alexei Starovoitov @ 2023-12-09 2:28 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Eduard Zingerman, Andrii Nakryiko, bpf, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Kernel Team
On Fri, Dec 8, 2023 at 6:15 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> See above, MISC/ZERO is fine as is due to check_stack_read_fixed_off()
> not setting STACK_ACCESS bit, but I can also send a version that
> unconditionally sets INSN_F_STACK_ACCESS in
> check_stack_write_fixed_off().
but it will significantly increase jmp_history and make
backtracking slower?
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Andrii Nakryiko @ 2023-12-09 4:44 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Eduard Zingerman, Andrii Nakryiko, bpf, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Kernel Team
On Fri, Dec 8, 2023 at 6:28 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Dec 8, 2023 at 6:15 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > See above, MISC/ZERO is fine as is due to check_stack_read_fixed_off()
> > not setting STACK_ACCESS bit, but I can also send a version that
> > unconditionally sets INSN_F_STACK_ACCESS in
> > check_stack_write_fixed_off().
>
> but it will significantly increase jmp_history and make
> backtracking slower?
Good point about jmp_history length, so I guess we can leave the patch as is.
I don't think it will make backtracking slower, though; the algorithm
will process the same number of instructions regardless of jmp_history
length.
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: Eduard Zingerman @ 2023-12-09 17:05 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Fri, 2023-12-08 at 18:15 -0800, Andrii Nakryiko wrote:
[...]
> Now, the subtle thing here is that this doesn't happen with STACK_ZERO
> or STACK_MISC. Let's look at STACK_MISC/STACK_INVALID case.
>
> 1: *(u8 *)(r10 -1) = 123; /* now fp-8=m??????? */
> 2: r1 = *(u64 *)(r10 - 8); /* STACK_MISC read, r1 is set to unknown scalar */
> 3: if r1 == 123 goto +10;
>
> Let's do analysis again. At 3: we mark r1 as precise, go back to 2:.
> Here 2: instruction is not marked as INSN_F_STACK_ACCESS because it
> wasn't stack fill due to STACK_MISC (that's handled in
> check_read_fixed_off logic). So mark_chain_precision() stops here
> because that instruction is resetting r1, so we clear r1 from the
> mask, but this instruction isn't STACK_ACCESS, so we don't look for
> fp-8 here.
Ok, so STACK_MISC does not actually leak any information: when a misc
byte is read, the result is still a full-range scalar. Makes sense.
I think STACK_ZERO handling is fine as well; there is no need to remember it
as a stack access, as it marks precision right away.
Thank you for the explanation and sorry for the false alarm.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next 2/2] selftests/bpf: validate fake register spill/fill precision backtracking logic
From: Eduard Zingerman @ 2023-12-09 18:02 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team
On Fri, 2023-12-08 at 17:09 -0800, Andrii Nakryiko wrote:
> Add two tests validating that verifier's precision backtracking logic
> handles BPF_ST_MEM instructions that produce a fake register spill into
> a stack slot. This happens when a non-zero constant is written
> directly to a stack slot, e.g., *(u64 *)(r10 -8) = 123.
>
> Add both full 64-bit register spill, as well as 32-bit "sub-spill".
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
(imho, the goto +0 is overkill, as we should have tests for the behavior
of parent states, but it's not that dropping it would make the test cases
much simpler).
* Re: [PATCH bpf-next 1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
From: patchwork-bot+netdevbpf @ 2023-12-10 3:20 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Fri, 8 Dec 2023 17:09:57 -0800 you wrote:
> When verifier validates BPF_ST_MEM instruction that stores known
> constant to stack (e.g., *(u64 *)(r10 - 8) = 123), it effectively spills
> a fake register with a constant (but initially imprecise) value to
> a stack slot. Because read-side logic treats it as a proper register
> fill from stack slot, we need to mark such stack slot initialization as
> INSN_F_STACK_ACCESS instruction to stop precision backtracking from
> missing it.
>
> [...]
Here is the summary with links:
- [bpf-next,1/2] bpf: handle fake register spill to stack with BPF_ST_MEM instruction
https://git.kernel.org/bpf/bpf-next/c/482d548d40b0
- [bpf-next,2/2] selftests/bpf: validate fake register spill/fill precision backtracking logic
https://git.kernel.org/bpf/bpf-next/c/7d8ed51bcb32
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html