* [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
@ 2024-01-02 19:07 Yonghong Song
From: Yonghong Song @ 2024-01-02 19:07 UTC
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau, Kuniyuki Iwashima, Martin KaFai Lau
With patch set [1], precision backtracking supports register spill/fill
to/from the stack. Patch [2] allows an initial register spill with
content 0 to stay imprecise. This is a common case for cpuv3 and lower,
where stack variables are initialized with the pattern

  r1 = 0
  *(u64 *)(r10 - 8) = r1

and [2] has demonstrated a good verification improvement from it.

For cpuv4, the initialization could instead be

  *(u64 *)(r10 - 8) = 0

The current verifier marks the r10-8 contents with STACK_ZERO.
Similar to [2], let us permit the above insn to behave like an
imprecise register spill, which can reduce the number of verified
states.
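For illustration, here is a minimal sketch of C source (a hypothetical
program, not one of the selftests) that typically lowers to the two
patterns above; the emitted stores can be inspected with
llvm-objdump -d:

  /* clang -O2 --target=bpf -mcpu=v3 -c probe.c   (or -mcpu=v4) */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("raw_tp")
  int probe(void *ctx)
  {
          /* Zero-initializing a stack variable: -mcpu=v3 typically
           * emits
           *   r1 = 0
           *   *(u64 *)(r10 - 8) = r1
           * while -mcpu=v4 can emit the single insn
           *   *(u64 *)(r10 - 8) = 0
           */
          volatile __u64 val = 0;

          return (int)val;
  }

  char _license[] SEC("license") = "GPL";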
I checked cpuv3 and cpuv4 with and without this patch.
There is no change for cpuv3 since '*(u64 *)(r10 - 8) = 0'
is only generated with cpuv4.
For cpuv4:
$ ../veristat -C old.cpuv4.csv new.cpuv4.csv -e file,prog,insns,states -s '|insns_diff|'
File Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF)
----------------------------------------------------- ---------------------------------------------------- --------- --------- --------------- ---------- ---------- -------------
pyperf600_bpf_loop.bpf.linked3.o on_event 6066 4889 -1177 (-19.40%) 403 321 -82 (-20.35%)
xdp_synproxy_kern.bpf.linked3.o syncookie_tc 12412 11719 -693 (-5.58%) 345 330 -15 (-4.35%)
xdp_synproxy_kern.bpf.linked3.o syncookie_xdp 12478 11794 -684 (-5.48%) 346 331 -15 (-4.34%)
test_cls_redirect.bpf.linked3.o cls_redirect 35483 35387 -96 (-0.27%) 2179 2177 -2 (-0.09%)
local_storage_bench.bpf.linked3.o get_local 228 168 -60 (-26.32%) 17 14 -3 (-17.65%)
test_l4lb_noinline.bpf.linked3.o balancer_ingress 4494 4522 +28 (+0.62%) 217 219 +2 (+0.92%)
test_l4lb_noinline_dynptr.bpf.linked3.o balancer_ingress 1432 1455 +23 (+1.61%) 92 94 +2 (+2.17%)
verifier_iterating_callbacks.bpf.linked3.o widening 52 41 -11 (-21.15%) 4 3 -1 (-25.00%)
test_xdp_noinline.bpf.linked3.o balancer_ingress_v6 3462 3458 -4 (-0.12%) 216 216 +0 (+0.00%)
...
test_l4lb_noinline and test_l4lb_noinline_dynptr have a minor regression,
but pyperf600_bpf_loop and local_storage_bench get a pretty good improvement.
[1] https://lore.kernel.org/all/20231205184248.1502704-1-andrii@kernel.org/
[2] https://lore.kernel.org/all/20231205184248.1502704-9-andrii@kernel.org/
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 2 +-
tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a376eb609c41..17ad0228270e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4491,7 +4491,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
state->stack[spi].spilled_ptr.id = 0;
} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
- insn->imm != 0 && env->bpf_capable) {
+ env->bpf_capable) {
struct bpf_reg_state fake_reg = {};
__mark_reg_known(&fake_reg, insn->imm);
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 39fe3372e0e0..05de3de56e79 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -496,13 +496,13 @@ SEC("raw_tp")
__log_level(2)
__success
/* make sure fp-8 is all STACK_ZERO */
-__msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=00000000")
+__msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=0")
/* but fp-16 is spilled IMPRECISE zero const reg */
__msg("4: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=0 R10=fp0 fp-16_w=0")
/* validate that assigning R2 from STACK_ZERO doesn't mark register
* precise immediately; if necessary, it will be marked precise later
*/
-__msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8_w=00000000")
+__msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8_w=0")
/* similarly, when R2 is assigned from spilled register, it is initially
* imprecise, but will be marked precise later once it is used in precise context
*/
--
2.34.1
* Re: [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
@ 2024-01-02 21:06 ` Eduard Zingerman
From: Eduard Zingerman @ 2024-01-02 21:06 UTC
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau, Kuniyuki Iwashima, Martin KaFai Lau
On Tue, 2024-01-02 at 11:07 -0800, Yonghong Song wrote:
> [...]
>
> Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
> Cc: Martin KaFai Lau <kafai@fb.com>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
@ 2024-01-02 21:42 ` Andrii Nakryiko
From: Andrii Nakryiko @ 2024-01-02 21:42 UTC
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau, Kuniyuki Iwashima,
Martin KaFai Lau
On Tue, Jan 2, 2024 at 11:07 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> [...]
>
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4491,7 +4491,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
> state->stack[spi].spilled_ptr.id = 0;
> } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
> - insn->imm != 0 && env->bpf_capable) {
> + env->bpf_capable) {
the change makes sense; there is nothing special about the
insn->imm == 0 case, so LGTM
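Just to spell out the mechanism for other readers: with the imm != 0
check gone, the branch roughly reads as below (paraphrased, not the
verbatim kernel source):

  } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
             env->bpf_capable) {
          /* BPF_ST_MEM at a spill-aligned offset, with any imm value
           * (now including 0): synthesize a known-scalar fake register
           * and track the slot as a spill of it, instead of falling
           * through to byte-granular STACK_ZERO/STACK_MISC marking */
          struct bpf_reg_state fake_reg = {};

          __mark_reg_known(&fake_reg, insn->imm);
          fake_reg.type = SCALAR_VALUE;
          save_register_state(env, state, spi, &fake_reg, size);
  }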
> struct bpf_reg_state fake_reg = {};
>
> __mark_reg_known(&fake_reg, insn->imm);
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index 39fe3372e0e0..05de3de56e79 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -496,13 +496,13 @@ SEC("raw_tp")
> __log_level(2)
> __success
> /* make sure fp-8 is all STACK_ZERO */
but we should update the STACK_ZERO comments in this test.

and also, the STACK_ZERO situation is still possible, right? E.g., when
we spill a register at offset -4 instead of -8. So I'd either extend
this test or add another one to make sure we still validate that
STACK_ZERO slots return a precise zero. Can you add something like
this?
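Roughly something like this, perhaps (an untested sketch: the function
name is made up, and the exact __msg asserts on the verifier log would
still need to be worked out):

  SEC("raw_tp")
  __log_level(2)
  __success
  /* a 4-byte store at a non-8-byte-aligned offset can't be tracked as
   * a register spill, so these bytes should still be marked STACK_ZERO
   * and read back as known zero */
  __naked void stack_zero_misaligned_st(void)
  {
          asm volatile (
                  "*(u32 *)(r10 - 4) = 0;"
                  "r2 = *(u8 *)(r10 - 1);"
                  "r0 = 0;"
                  "exit;"
          ::: __clobber_all);
  }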
> -__msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=00000000")
> +__msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=0")
> /* but fp-16 is spilled IMPRECISE zero const reg */
> __msg("4: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=0 R10=fp0 fp-16_w=0")
> /* validate that assigning R2 from STACK_ZERO doesn't mark register
> * precise immediately; if necessary, it will be marked precise later
> */
> -__msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8_w=00000000")
> +__msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8_w=0")
> /* similarly, when R2 is assigned from spilled register, it is initially
> * imprecise, but will be marked precise later once it is used in precise context
> */
* Re: [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
2024-01-02 21:42 ` Andrii Nakryiko
@ 2024-01-02 22:22 ` Andrii Nakryiko
From: Andrii Nakryiko @ 2024-01-02 22:22 UTC
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau, Kuniyuki Iwashima,
Martin KaFai Lau
On Tue, Jan 2, 2024 at 1:42 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
> [...]
And it seems like a test_verifier test is failing now ([0]):
#114/p BPF_ST_MEM stack imm zero, variable offset FAIL
Failed to load prog 'Invalid argument'!
At program exit the register R0 has smin=0 smax=255 should have been in [0, 1]
verification time 19 usec
stack depth 32
processed 11 insns (limit 1000000) max_states_per_insn 0
total_states 0 peak_states 0 mark_read 0
[0] https://github.com/kernel-patches/bpf/actions/runs/7389645653/job/20103046755
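(For reproducing this locally, something like the following should
work, assuming a built selftests tree; note that the test numbering can
shift between trees:

  cd tools/testing/selftests/bpf
  make test_verifier
  sudo ./test_verifier 114
)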
* Re: [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
@ 2024-01-03 0:00 ` Yonghong Song
From: Yonghong Song @ 2024-01-03 0:00 UTC
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau, Kuniyuki Iwashima,
Martin KaFai Lau
On 1/2/24 1:42 PM, Andrii Nakryiko wrote:
> On Tue, Jan 2, 2024 at 11:07 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>> [...]
>> /* make sure fp-8 is all STACK_ZERO */
> but we should update the STACK_ZERO comments in this test.
Missed this. Will update comments.
>
> and also, the STACK_ZERO situation is still possible, right? E.g., when
> we spill a register at offset -4 instead of -8. So I'd either extend
> this test or add another one to make sure we still validate that
> STACK_ZERO slots return a precise zero. Can you add something like this?
Yes, if the offset is not 8-byte aligned, e.g., -4, it will be
STACK_ZERO. Will add another test case to capture this.
* Re: [PATCH bpf-next] bpf: Track aligned st store as imprecise spilled registers
@ 2024-01-03 0:03 ` Yonghong Song
From: Yonghong Song @ 2024-01-03 0:03 UTC
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
kernel-team, Martin KaFai Lau, Kuniyuki Iwashima,
Martin KaFai Lau
On 1/2/24 2:22 PM, Andrii Nakryiko wrote:
> On Tue, Jan 2, 2024 at 1:42 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
>> [...]
> And it seems like a test_verifier test is failing now ([0]):
>
> #114/p BPF_ST_MEM stack imm zero, variable offset FAIL
> Failed to load prog 'Invalid argument'!
> At program exit the register R0 has smin=0 smax=255 should have been in [0, 1]
> verification time 19 usec
> stack depth 32
> processed 11 insns (limit 1000000) max_states_per_insn 0
> total_states 0 peak_states 0 mark_read 0
>
>
> [0] https://github.com/kernel-patches/bpf/actions/runs/7389645653/job/20103046755
Ack. The CI also sent an email to me about this. Will investigate.