* [PATCH bpf-next 0/5] BPF verifier log improvements
@ 2023-10-11 22:37 Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness Andrii Nakryiko
` (7 more replies)
0 siblings, 8 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
This patch set fixes ambiguity in the BPF verifier log output of SCALAR
registers in the parts that emit umin/umax, smin/smax, etc. ranges. See
patch #4 for details.
Also, patch #5 fixes an issue with verifier log missing instruction context
(state) output for conditionals that trigger precision marking. See details in
the patch.
The first two patches improve two selftests that are very flaky locally
when run in parallel mode.
Patch #3 changes the 'align' selftest to be less strict about exact
verifier log output (which patch #4 changes, breaking lots of align tests
as written). The test now performs register substate checks, mostly around
expected var_off() values. The 'align' selftest is one of the more brittle
ones and requires constant adjustment when verifier log output changes,
without really catching any new issues. So hopefully these changes can
minimize future support efforts for this specific set of tests.
Andrii Nakryiko (5):
selftests/bpf: improve percpu_alloc test robustness
selftests/bpf: improve missed_kprobe_recursion test robustness
selftests/bpf: make align selftests more robust
bpf: disambiguate SCALAR register state output in verifier logs
bpf: ensure proper register state printing for cond jumps
kernel/bpf/verifier.c | 74 ++++--
.../testing/selftests/bpf/prog_tests/align.c | 241 +++++++++---------
.../testing/selftests/bpf/prog_tests/missed.c | 8 +-
.../selftests/bpf/prog_tests/percpu_alloc.c | 3 +
.../selftests/bpf/progs/exceptions_assert.c | 18 +-
.../selftests/bpf/progs/percpu_alloc_array.c | 7 +
.../progs/percpu_alloc_cgrp_local_storage.c | 4 +
.../selftests/bpf/progs/verifier_ldsx.c | 2 +-
8 files changed, 200 insertions(+), 157 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
@ 2023-10-11 22:37 ` Andrii Nakryiko
2023-10-12 6:04 ` Yafang Shao
2023-10-11 22:37 ` [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion " Andrii Nakryiko
` (6 subsequent siblings)
7 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Make these non-serial tests filter BPF programs by the PID of the intended
test-runner process. This isolates them from other parallel tests that
might accidentally interfere.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/testing/selftests/bpf/prog_tests/percpu_alloc.c | 3 +++
tools/testing/selftests/bpf/progs/percpu_alloc_array.c | 7 +++++++
.../selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c | 4 ++++
3 files changed, 14 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
index 9541e9b3a034..343da65864d6 100644
--- a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
@@ -19,6 +19,7 @@ static void test_array(void)
bpf_program__set_autoload(skel->progs.test_array_map_3, true);
bpf_program__set_autoload(skel->progs.test_array_map_4, true);
+ skel->bss->my_pid = getpid();
skel->rodata->nr_cpus = libbpf_num_possible_cpus();
err = percpu_alloc_array__load(skel);
@@ -51,6 +52,7 @@ static void test_array_sleepable(void)
bpf_program__set_autoload(skel->progs.test_array_map_10, true);
+ skel->bss->my_pid = getpid();
skel->rodata->nr_cpus = libbpf_num_possible_cpus();
err = percpu_alloc_array__load(skel);
@@ -85,6 +87,7 @@ static void test_cgrp_local_storage(void)
if (!ASSERT_OK_PTR(skel, "percpu_alloc_cgrp_local_storage__open"))
goto close_fd;
+ skel->bss->my_pid = getpid();
skel->rodata->nr_cpus = libbpf_num_possible_cpus();
err = percpu_alloc_cgrp_local_storage__load(skel);
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
index bbc45346e006..37c2d2608ec0 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
@@ -71,6 +71,7 @@ int BPF_PROG(test_array_map_2)
}
int cpu0_field_d, sum_field_c;
+int my_pid;
/* Summarize percpu data */
SEC("?fentry/bpf_fentry_test3")
@@ -81,6 +82,9 @@ int BPF_PROG(test_array_map_3)
struct val_t *v;
struct elem *e;
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
e = bpf_map_lookup_elem(&array, &index);
if (!e)
return 0;
@@ -130,6 +134,9 @@ int BPF_PROG(test_array_map_10)
struct val_t *v;
struct elem *e;
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
e = bpf_map_lookup_elem(&array, &index);
if (!e)
return 0;
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
index 1c36a241852c..a2acf9aa6c24 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
@@ -70,6 +70,7 @@ int BPF_PROG(test_cgrp_local_storage_2)
}
int cpu0_field_d, sum_field_c;
+int my_pid;
/* Summarize percpu data collection */
SEC("fentry/bpf_fentry_test3")
@@ -81,6 +82,9 @@ int BPF_PROG(test_cgrp_local_storage_3)
struct elem *e;
int i;
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
task = bpf_get_current_task_btf();
e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0, 0);
if (!e)
--
2.34.1
* [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion test robustness
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness Andrii Nakryiko
@ 2023-10-11 22:37 ` Andrii Nakryiko
2023-10-12 13:22 ` Jiri Olsa
2023-10-11 22:37 ` [PATCH bpf-next 3/5] selftests/bpf: make align selftests more robust Andrii Nakryiko
` (5 subsequent siblings)
7 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Given that missed_kprobe_recursion is non-serial and uses common testing
kfuncs to count the number of recursion misses, it's possible that some
other parallel test triggers extraneous recursion misses. So we can't
expect exactly 1 miss. Relax the conditions and expect at least one.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/testing/selftests/bpf/prog_tests/missed.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/missed.c b/tools/testing/selftests/bpf/prog_tests/missed.c
index 24ade11f5c05..70d90c43537c 100644
--- a/tools/testing/selftests/bpf/prog_tests/missed.c
+++ b/tools/testing/selftests/bpf/prog_tests/missed.c
@@ -81,10 +81,10 @@ static void test_missed_kprobe_recursion(void)
ASSERT_EQ(topts.retval, 0, "test_run");
ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test1)), 0, "test1_recursion_misses");
- ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
- ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
- ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
- ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test5)), 1, "test5_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test5)), 1, "test5_recursion_misses");
cleanup:
missed_kprobe_recursion__destroy(skel);
--
2.34.1
* [PATCH bpf-next 3/5] selftests/bpf: make align selftests more robust
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion " Andrii Nakryiko
@ 2023-10-11 22:37 ` Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs Andrii Nakryiko
` (4 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
The align subtest is very specific and finicky about expected verifier log
output and format. This is often completely unnecessary, as in a bunch of
situations the test actually only cares about the var_off part of register
state. But given how exact it is right now, any tiny verifier log change
can lead to align test failures, requiring constant adjustment.
This patch makes it a bit more robust by first searching for the specified
register and then allowing a match on only a portion of the register
state, not everything exactly. This will come in handy with follow-up
changes to SCALAR register output disambiguation.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
.../testing/selftests/bpf/prog_tests/align.c | 241 +++++++++---------
1 file changed, 121 insertions(+), 120 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
index b92770592563..465c1c3a3d3c 100644
--- a/tools/testing/selftests/bpf/prog_tests/align.c
+++ b/tools/testing/selftests/bpf/prog_tests/align.c
@@ -6,6 +6,7 @@
struct bpf_reg_match {
unsigned int line;
+ const char *reg;
const char *match;
};
@@ -39,13 +40,13 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=2"},
- {1, "R3_w=4"},
- {2, "R3_w=8"},
- {3, "R3_w=16"},
- {4, "R3_w=32"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "2"},
+ {1, "R3_w", "4"},
+ {2, "R3_w", "8"},
+ {3, "R3_w", "16"},
+ {4, "R3_w", "32"},
},
},
{
@@ -67,19 +68,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=1"},
- {1, "R3_w=2"},
- {2, "R3_w=4"},
- {3, "R3_w=8"},
- {4, "R3_w=16"},
- {5, "R3_w=1"},
- {6, "R4_w=32"},
- {7, "R4_w=16"},
- {8, "R4_w=8"},
- {9, "R4_w=4"},
- {10, "R4_w=2"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "1"},
+ {1, "R3_w", "2"},
+ {2, "R3_w", "4"},
+ {3, "R3_w", "8"},
+ {4, "R3_w", "16"},
+ {5, "R3_w", "1"},
+ {6, "R4_w", "32"},
+ {7, "R4_w", "16"},
+ {8, "R4_w", "8"},
+ {9, "R4_w", "4"},
+ {10, "R4_w", "2"},
},
},
{
@@ -96,14 +97,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=4"},
- {1, "R3_w=8"},
- {2, "R3_w=10"},
- {3, "R4_w=8"},
- {4, "R4_w=12"},
- {5, "R4_w=14"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "4"},
+ {1, "R3_w", "8"},
+ {2, "R3_w", "10"},
+ {3, "R4_w", "8"},
+ {4, "R4_w", "12"},
+ {5, "R4_w", "14"},
},
},
{
@@ -118,12 +119,12 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=7"},
- {1, "R3_w=7"},
- {2, "R3_w=14"},
- {3, "R3_w=56"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "7"},
+ {1, "R3_w", "7"},
+ {2, "R3_w", "14"},
+ {3, "R3_w", "56"},
},
},
@@ -161,19 +162,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R0_w=pkt(off=8,r=8,imm=0)"},
- {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {7, "R3_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
- {8, "R3_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {9, "R3_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {10, "R3_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
- {12, "R3_w=pkt_end(off=0,imm=0)"},
- {17, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {18, "R4_w=scalar(umax=8160,var_off=(0x0; 0x1fe0))"},
- {19, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
- {20, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {21, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {22, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+ {6, "R0_w", "pkt(off=8,r=8,imm=0)"},
+ {6, "R3_w", "var_off=(0x0; 0xff)"},
+ {7, "R3_w", "var_off=(0x0; 0x1fe)"},
+ {8, "R3_w", "var_off=(0x0; 0x3fc)"},
+ {9, "R3_w", "var_off=(0x0; 0x7f8)"},
+ {10, "R3_w", "var_off=(0x0; 0xff0)"},
+ {12, "R3_w", "pkt_end(off=0,imm=0)"},
+ {17, "R4_w", "var_off=(0x0; 0xff)"},
+ {18, "R4_w", "var_off=(0x0; 0x1fe0)"},
+ {19, "R4_w", "var_off=(0x0; 0xff0)"},
+ {20, "R4_w", "var_off=(0x0; 0x7f8)"},
+ {21, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {22, "R4_w", "var_off=(0x0; 0x1fe)"},
},
},
{
@@ -194,16 +195,16 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {7, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {8, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {9, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {10, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
- {11, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {12, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {13, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {14, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {15, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+ {6, "R3_w", "var_off=(0x0; 0xff)"},
+ {7, "R4_w", "var_off=(0x0; 0xff)"},
+ {8, "R4_w", "var_off=(0x0; 0xff)"},
+ {9, "R4_w", "var_off=(0x0; 0xff)"},
+ {10, "R4_w", "var_off=(0x0; 0x1fe)"},
+ {11, "R4_w", "var_off=(0x0; 0xff)"},
+ {12, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {13, "R4_w", "var_off=(0x0; 0xff)"},
+ {14, "R4_w", "var_off=(0x0; 0x7f8)"},
+ {15, "R4_w", "var_off=(0x0; 0xff0)"},
},
},
{
@@ -234,14 +235,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {2, "R5_w=pkt(off=0,r=0,imm=0)"},
- {4, "R5_w=pkt(off=14,r=0,imm=0)"},
- {5, "R4_w=pkt(off=14,r=0,imm=0)"},
- {9, "R2=pkt(off=0,r=18,imm=0)"},
- {10, "R5=pkt(off=14,r=18,imm=0)"},
- {10, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {13, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
- {14, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
+ {2, "R5_w", "pkt(off=0,r=0,imm=0)"},
+ {4, "R5_w", "pkt(off=14,r=0,imm=0)"},
+ {5, "R4_w", "pkt(off=14,r=0,imm=0)"},
+ {9, "R2", "pkt(off=0,r=18,imm=0)"},
+ {10, "R5", "pkt(off=14,r=18,imm=0)"},
+ {10, "R4_w", "var_off=(0x0; 0xff)"},
+ {13, "R4_w", "var_off=(0x0; 0xffff)"},
+ {14, "R4_w", "var_off=(0x0; 0xffff)"},
},
},
{
@@ -298,20 +299,20 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Offset is added to packet pointer R5, resulting in
* known fixed offset, and variable offset from R6.
*/
- {11, "R5_w=pkt(id=1,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {11, "R5_w", "pkt(id=1,off=14,"},
/* At the time the word size load is performed from R5,
* it's total offset is NET_IP_ALIGN + reg->off (0) +
* reg->aux_off (14) which is 16. Then the variable
* offset is considered using reg->aux_off_align which
* is 4 and meets the load's requirements.
*/
- {15, "R4=pkt(id=1,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
- {15, "R5=pkt(id=1,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {15, "R4", "var_off=(0x0; 0x3fc)"},
+ {15, "R5", "var_off=(0x0; 0x3fc)"},
/* Variable offset is added to R5 packet pointer,
* resulting in auxiliary alignment of 4. To avoid BPF
* verifier's precision backtracking logging
@@ -319,46 +320,46 @@ static struct bpf_align_test tests[] = {
* instruction to validate R5 state. We also check
* that R4 is what it should be in such case.
*/
- {18, "R4_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
- {18, "R5_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {18, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {18, "R5_w", "var_off=(0x0; 0x3fc)"},
/* Constant offset is added to R5, resulting in
* reg->off of 14.
*/
- {19, "R5_w=pkt(id=2,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {19, "R5_w", "pkt(id=2,off=14,"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off
* (14) which is 16. Then the variable offset is 4-byte
* aligned, so the total offset is 4-byte aligned and
* meets the load's requirements.
*/
- {24, "R4=pkt(id=2,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
- {24, "R5=pkt(id=2,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {24, "R4", "var_off=(0x0; 0x3fc)"},
+ {24, "R5", "var_off=(0x0; 0x3fc)"},
/* Constant offset is added to R5 packet pointer,
* resulting in reg->off value of 14.
*/
- {26, "R5_w=pkt(off=14,r=8"},
+ {26, "R5_w", "pkt(off=14,r=8,"},
/* Variable offset is added to R5, resulting in a
* variable offset of (4n). See comment for insn #18
* for R4 = R5 trick.
*/
- {28, "R4_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
- {28, "R5_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {28, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {28, "R5_w", "var_off=(0x0; 0x3fc)"},
/* Constant is added to R5 again, setting reg->off to 18. */
- {29, "R5_w=pkt(id=3,off=18,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {29, "R5_w", "pkt(id=3,off=18,"},
/* And once more we add a variable; resulting var_off
* is still (4n), fixed offset is not changed.
* Also, we create a new reg->id.
*/
- {31, "R4_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
- {31, "R5_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
+ {31, "R4_w", "var_off=(0x0; 0x7fc)"},
+ {31, "R5_w", "var_off=(0x0; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (18)
* which is 20. Then the variable offset is (4n), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {35, "R4=pkt(id=4,off=22,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
- {35, "R5=pkt(id=4,off=18,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
+ {35, "R4", "var_off=(0x0; 0x7fc)"},
+ {35, "R5", "var_off=(0x0; 0x7fc)"},
},
},
{
@@ -396,36 +397,36 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Adding 14 makes R6 be (4n+2) */
- {8, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
+ {8, "R6_w", "var_off=(0x2; 0x7fc)"},
/* Packet pointer has (4n+2) offset */
- {11, "R5_w=pkt(id=1,off=0,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
- {12, "R4=pkt(id=1,off=4,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+ {11, "R5_w", "var_off=(0x2; 0x7fc)"},
+ {12, "R4", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {15, "R5=pkt(id=1,off=0,r=4,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+ {15, "R5", "var_off=(0x2; 0x7fc)"},
/* Newly read value in R6 was shifted left by 2, so has
* known alignment of 4.
*/
- {17, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {17, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Added (4n) to packet pointer's (4n+2) var_off, giving
* another (4n+2).
*/
- {19, "R5_w=pkt(id=2,off=0,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
- {20, "R4=pkt(id=2,off=4,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+ {19, "R5_w", "var_off=(0x2; 0xffc)"},
+ {20, "R4", "var_off=(0x2; 0xffc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {23, "R5=pkt(id=2,off=0,r=4,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+ {23, "R5", "var_off=(0x2; 0xffc)"},
},
},
{
@@ -458,18 +459,18 @@ static struct bpf_align_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
.matches = {
- {3, "R5_w=pkt_end(off=0,imm=0)"},
+ {3, "R5_w", "pkt_end(off=0,imm=0)"},
/* (ptr - ptr) << 2 == unknown, (4n) */
- {5, "R5_w=scalar(smax=9223372036854775804,umax=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
+ {5, "R5_w", "var_off=(0x0; 0xfffffffffffffffc)"},
/* (4n) + 14 == (4n+2). We blow our bounds, because
* the add could overflow.
*/
- {6, "R5_w=scalar(smin=-9223372036854775806,smax=9223372036854775806,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {6, "R5_w", "var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>=0 */
- {9, "R5=scalar(umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {9, "R5", "var_off=(0x2; 0x7ffffffffffffffc)"},
/* packet pointer + nonnegative (4n+2) */
- {11, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
- {12, "R4_w=pkt(id=1,off=4,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {11, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
+ {12, "R4_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
* We checked the bounds, but it might have been able
* to overflow if the packet pointer started in the
@@ -477,7 +478,7 @@ static struct bpf_align_test tests[] = {
* So we did not get a 'range' on R6, and the access
* attempt will fail.
*/
- {15, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {15, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
}
},
{
@@ -512,24 +513,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {8, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {8, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Adding 14 makes R6 be (4n+2) */
- {9, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
+ {9, "R6_w", "var_off=(0x2; 0x7fc)"},
/* New unknown value in R7 is (4n) */
- {10, "R7_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {10, "R7_w", "var_off=(0x0; 0x3fc)"},
/* Subtracting it from R6 blows our unsigned bounds */
- {11, "R6=scalar(smin=-1006,smax=1034,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {11, "R6", "var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>= 0 */
- {14, "R6=scalar(umin=2,umax=1034,var_off=(0x2; 0x7fc))"},
+ {14, "R6", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=2,off=0,r=4,umin=2,umax=1034,var_off=(0x2; 0x7fc)"},
-
+ {20, "R5", "var_off=(0x2; 0x7fc)"},
},
},
{
@@ -566,23 +566,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {9, "R6_w=scalar(umax=60,var_off=(0x0; 0x3c))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {9, "R6_w", "var_off=(0x0; 0x3c)"},
/* Adding 14 makes R6 be (4n+2) */
- {10, "R6_w=scalar(umin=14,umax=74,var_off=(0x2; 0x7c))"},
+ {10, "R6_w", "var_off=(0x2; 0x7c)"},
/* Subtracting from packet pointer overflows ubounds */
- {13, "R5_w=pkt(id=2,off=0,r=8,umin=18446744073709551542,umax=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
+ {13, "R5_w", "var_off=(0xffffffffffffff82; 0x7c)"},
/* New unknown value in R7 is (4n), >= 76 */
- {14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"},
+ {14, "R7_w", "var_off=(0x0; 0x7fc)"},
/* Adding it to packet pointer gives nice bounds again */
- {16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ {16, "R5_w", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ {20, "R5", "var_off=(0x2; 0x7fc)"},
},
},
};
@@ -635,6 +635,7 @@ static int do_test_single(struct bpf_align_test *test)
line_ptr = strtok(bpf_vlog_copy, "\n");
for (i = 0; i < MAX_MATCHES; i++) {
struct bpf_reg_match m = test->matches[i];
+ const char *p;
int tmp;
if (!m.match)
@@ -649,8 +650,8 @@ static int do_test_single(struct bpf_align_test *test)
line_ptr = strtok(NULL, "\n");
}
if (!line_ptr) {
- printf("Failed to find line %u for match: %s\n",
- m.line, m.match);
+ printf("Failed to find line %u for match: %s=%s\n",
+ m.line, m.reg, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
@@ -667,15 +668,15 @@ static int do_test_single(struct bpf_align_test *test)
* 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
* 6: (71) r3 = *(u8 *)(r2 +0) ; R2_w=pkt(off=0,r=8,imm=0) R3_w=scalar(umax=255,var_off=(0x0; 0xff))
*/
- while (!strstr(line_ptr, m.match)) {
+ while (!(p = strstr(line_ptr, m.reg)) || !strstr(p, m.match)) {
cur_line = -1;
line_ptr = strtok(NULL, "\n");
sscanf(line_ptr ?: "", "%u: ", &cur_line);
if (!line_ptr || cur_line != m.line)
break;
}
- if (cur_line != m.line || !line_ptr || !strstr(line_ptr, m.match)) {
- printf("Failed to find match %u: %s\n", m.line, m.match);
+ if (cur_line != m.line || !line_ptr || !(p = strstr(line_ptr, m.reg)) || !strstr(p, m.match)) {
+ printf("Failed to find match %u: %s=%s\n", m.line, m.reg, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
--
2.34.1
* [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
` (2 preceding siblings ...)
2023-10-11 22:37 ` [PATCH bpf-next 3/5] selftests/bpf: make align selftests more robust Andrii Nakryiko
@ 2023-10-11 22:37 ` Andrii Nakryiko
2023-10-12 5:33 ` John Fastabend
2023-10-11 22:37 ` [PATCH bpf-next 5/5] bpf: ensure proper register state printing for cond jumps Andrii Nakryiko
` (3 subsequent siblings)
7 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Currently the way that verifier prints SCALAR_VALUE register state (and
PTR_TO_PACKET, which can have var_off and ranges info as well) is very
ambiguous.
In the name of brevity we are trying to eliminate "unnecessary" output of
umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if
possible. The current rules are that a bound is omitted if it has its
default value (which for mins is the minimal value of its respective type:
0, S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or
U64_MAX) *or* if another min/max value matches it. E.g., if smin=100 and
umin=100, we'll emit only umin=100, omitting smin altogether. This
approach has a few problems, being both ambiguous and sort-of incorrect in
some cases.
The ambiguity is that a missing value could be either the default value or
equal to another emitted umin/umax or smin/smax value. This is especially
confusing when we mix signed and unsigned ranges. Quite often umin=0 and
smin=0, and so we'll have only `umin=0`, leaving anyone reading the
verifier log to guess whether smin is actually 0 or it's actually
-9223372036854775808 (S64_MIN). And often it's important to know,
especially when debugging tricky issues.
"Sort-of incorrectness" comes from mixing negative and positive values.
E.g., if umin is some large positive number, it can be equal to smin,
which, interpreted as a signed value, is actually negative. Currently that
smin will be omitted and only umin will be emitted with a large positive
value, giving the impression that smin is also positive.
Anyway, the ambiguity is the biggest issue, making it impossible to have
an exact understanding of register state and preventing any sort of
automated testing of verifier state based on the verifier log. This patch
attempts to rectify the situation by removing the ambiguity, while
minimizing the verbosity of register state output.
The rules are straightforward:
- if some of the values are missing, then it definitely has a default
value. I.e., `umin=0` means that umin is zero, but smin is actually
S64_MIN;
- all the various boundaries that happen to have the same value are
emitted in one equality-separated sequence. E.g., if umin and smin are
both 100, we'll emit `smin=umin=100`, making this explicit;
- we do not mix negative and positive values together, and even if
they happen to have the same bit-level value, they will be emitted
separately with proper sign. I.e., if both umax and smax happen to be
0xffffffffffffffff, we'll emit them both separately as
`smax=-1,umax=18446744073709551615`;
- in the name of a bit more uniformity and consistency,
{u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to
improve readability.
The above means that if all 4 ranges are, say, the [50, 100] range, we'd
previously see the hugely ambiguous:
R1=scalar(umin=50,umax=100)
Now, we'll be more explicit:
R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)
This is slightly more verbose, but distinct from the case when we don't
know anything about signed boundaries and 32-bit boundaries, which under
new rules will match the old case:
R1=scalar(umin=50,umax=100)
Also, in the name of implementation simplicity and consistency,
{s,u}{min,max}32 bounds are emitted *before* var_off. Previously they were
emitted afterwards, for unclear reasons.
This patch also includes a few fixes to selftests that expect exact
register state to accommodate slight changes to verifier format. You can
see that the changes are pretty minimal in common cases.
Note, the special case when SCALAR_VALUE register is a known constant
isn't changed, we'll emit constant value once, interpreted as signed
value.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/verifier.c | 67 +++++++++++++------
.../selftests/bpf/progs/exceptions_assert.c | 18 ++---
.../selftests/bpf/progs/verifier_ldsx.c | 2 +-
3 files changed, 55 insertions(+), 32 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eed7350e15f4..059f8e930499 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1342,6 +1342,50 @@ static void scrub_spilled_slot(u8 *stype)
*stype = STACK_MISC;
}
+static void print_scalar_ranges(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ const char **sep)
+{
+ struct {
+ const char *name;
+ u64 val;
+ bool omit;
+ } minmaxs[] = {
+ {"smin", reg->smin_value, reg->smin_value == S64_MIN},
+ {"smax", reg->smax_value, reg->smax_value == S64_MAX},
+ {"umin", reg->umin_value, reg->umin_value == 0},
+ {"umax", reg->umax_value, reg->umax_value == U64_MAX},
+ {"smin32", (s64)reg->s32_min_value, reg->s32_min_value == S32_MIN},
+ {"smax32", (s64)reg->s32_max_value, reg->s32_max_value == S32_MAX},
+ {"umin32", reg->u32_min_value, reg->u32_min_value == 0},
+ {"umax32", reg->u32_max_value, reg->u32_max_value == U32_MAX},
+ }, *m1, *m2, *mend = &minmaxs[ARRAY_SIZE(minmaxs)];
+ bool neg1, neg2;
+
+ for (m1 = &minmaxs[0]; m1 < mend; m1++) {
+ if (m1->omit)
+ continue;
+
+ neg1 = m1->name[0] == 's' && (s64)m1->val < 0;
+
+ verbose(env, "%s%s=", *sep, m1->name);
+ *sep = ",";
+
+ for (m2 = m1 + 2; m2 < mend; m2 += 2) {
+ if (m2->omit || m2->val != m1->val)
+ continue;
+ /* don't mix negatives with positives */
+ neg2 = m2->name[0] == 's' && (s64)m2->val < 0;
+ if (neg2 != neg1)
+ continue;
+ m2->omit = true;
+ verbose(env, "%s=", m2->name);
+ }
+
+ verbose(env, m1->name[0] == 's' ? "%lld" : "%llu", m1->val);
+ }
+}
+
static void print_verifier_state(struct bpf_verifier_env *env,
const struct bpf_func_state *state,
bool print_all)
@@ -1405,34 +1449,13 @@ static void print_verifier_state(struct bpf_verifier_env *env,
*/
verbose_a("imm=%llx", reg->var_off.value);
} else {
- if (reg->smin_value != reg->umin_value &&
- reg->smin_value != S64_MIN)
- verbose_a("smin=%lld", (long long)reg->smin_value);
- if (reg->smax_value != reg->umax_value &&
- reg->smax_value != S64_MAX)
- verbose_a("smax=%lld", (long long)reg->smax_value);
- if (reg->umin_value != 0)
- verbose_a("umin=%llu", (unsigned long long)reg->umin_value);
- if (reg->umax_value != U64_MAX)
- verbose_a("umax=%llu", (unsigned long long)reg->umax_value);
+ print_scalar_ranges(env, reg, &sep);
if (!tnum_is_unknown(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose_a("var_off=%s", tn_buf);
}
- if (reg->s32_min_value != reg->smin_value &&
- reg->s32_min_value != S32_MIN)
- verbose_a("s32_min=%d", (int)(reg->s32_min_value));
- if (reg->s32_max_value != reg->smax_value &&
- reg->s32_max_value != S32_MAX)
- verbose_a("s32_max=%d", (int)(reg->s32_max_value));
- if (reg->u32_min_value != reg->umin_value &&
- reg->u32_min_value != U32_MIN)
- verbose_a("u32_min=%d", (int)(reg->u32_min_value));
- if (reg->u32_max_value != reg->umax_value &&
- reg->u32_max_value != U32_MAX)
- verbose_a("u32_max=%d", (int)(reg->u32_max_value));
}
#undef verbose_a
diff --git a/tools/testing/selftests/bpf/progs/exceptions_assert.c b/tools/testing/selftests/bpf/progs/exceptions_assert.c
index fa35832e6748..e1e5c54a6a11 100644
--- a/tools/testing/selftests/bpf/progs/exceptions_assert.c
+++ b/tools/testing/selftests/bpf/progs/exceptions_assert.c
@@ -31,35 +31,35 @@ check_assert(s64, eq, llong_max, LLONG_MAX);
__msg(": R0_w=scalar(smax=2147483646) R10=fp0")
check_assert(s64, lt, pos, INT_MAX);
-__msg(": R0_w=scalar(umin=9223372036854775808,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smax=-1,umin=9223372036854775808,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
check_assert(s64, lt, zero, 0);
-__msg(": R0_w=scalar(umin=9223372036854775808,umax=18446744071562067967,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smax=-2147483649,umin=9223372036854775808,umax=18446744071562067967,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
check_assert(s64, lt, neg, INT_MIN);
__msg(": R0_w=scalar(smax=2147483647) R10=fp0")
check_assert(s64, le, pos, INT_MAX);
__msg(": R0_w=scalar(smax=0) R10=fp0")
check_assert(s64, le, zero, 0);
-__msg(": R0_w=scalar(umin=9223372036854775808,umax=18446744071562067968,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smax=-2147483648,umin=9223372036854775808,umax=18446744071562067968,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
check_assert(s64, le, neg, INT_MIN);
-__msg(": R0_w=scalar(umin=2147483648,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smin=umin=2147483648,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
check_assert(s64, gt, pos, INT_MAX);
-__msg(": R0_w=scalar(umin=1,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smin=umin=1,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
check_assert(s64, gt, zero, 0);
__msg(": R0_w=scalar(smin=-2147483647) R10=fp0")
check_assert(s64, gt, neg, INT_MIN);
-__msg(": R0_w=scalar(umin=2147483647,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+__msg(": R0_w=scalar(smin=umin=2147483647,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
check_assert(s64, ge, pos, INT_MAX);
-__msg(": R0_w=scalar(umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R10=fp0")
+__msg(": R0_w=scalar(smin=0,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R10=fp0")
check_assert(s64, ge, zero, 0);
__msg(": R0_w=scalar(smin=-2147483648) R10=fp0")
check_assert(s64, ge, neg, INT_MIN);
SEC("?tc")
__log_level(2) __failure
-__msg(": R0=0 R1=ctx(off=0,imm=0) R2=scalar(smin=-2147483646,smax=2147483645) R10=fp0")
+__msg(": R0=0 R1=ctx(off=0,imm=0) R2=scalar(smin=smin32=-2147483646,smax=smax32=2147483645) R10=fp0")
int check_assert_range_s64(struct __sk_buff *ctx)
{
struct bpf_sock *sk = ctx->sk;
@@ -75,7 +75,7 @@ int check_assert_range_s64(struct __sk_buff *ctx)
SEC("?tc")
__log_level(2) __failure
-__msg(": R1=ctx(off=0,imm=0) R2=scalar(umin=4096,umax=8192,var_off=(0x0; 0x3fff))")
+__msg(": R1=ctx(off=0,imm=0) R2=scalar(smin=umin=smin32=umin32=4096,smax=umax=smax32=umax32=8192,var_off=(0x0; 0x3fff))")
int check_assert_range_u64(struct __sk_buff *ctx)
{
u64 num = ctx->len;
diff --git a/tools/testing/selftests/bpf/progs/verifier_ldsx.c b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
index f90016a57eec..375525329637 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ldsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
@@ -64,7 +64,7 @@ __naked void ldsx_s32(void)
SEC("socket")
__description("LDSX, S8 range checking, privileged")
__log_level(2) __success __retval(1)
-__msg("R1_w=scalar(smin=-128,smax=127)")
+__msg("R1_w=scalar(smin=smin32=-128,smax=smax32=127)")
__naked void ldsx_s8_range_priv(void)
{
asm volatile (
--
2.34.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next 5/5] bpf: ensure proper register state printing for cond jumps
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
` (3 preceding siblings ...)
2023-10-11 22:37 ` [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs Andrii Nakryiko
@ 2023-10-11 22:37 ` Andrii Nakryiko
2023-10-12 5:40 ` [PATCH bpf-next 0/5] BPF verifier log improvements John Fastabend
` (2 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-11 22:37 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
The verifier emits the register state relevant to any given instruction
next to it, after `;` to the right, if possible. Or, in the worst case,
on a separate line repeating the instruction index.
E.g., a nice and simple case would be:
2: (d5) if r0 s<= 0x0 goto pc+1 ; R0_w=0
But if there is some intervening extra output (e.g., precision
backtracking log) involved, we are supposed to see the state after the
precision backtrack log:
4: (75) if r0 s>= 0x0 goto pc+1
mark_precise: frame0: last_idx 4 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r0 stack= before 2: (d5) if r0 s<= 0x0 goto pc+1
mark_precise: frame0: regs=r0 stack= before 1: (b7) r0 = 0
6: R0_w=0
First off, note that in `6: R0_w=0` the instruction index corresponds to
the next instruction, not to the conditional jump instruction itself, which
is wrong; we'll get to that.
But besides that, the above is a happy case that does work today. Yet,
if it so happens that precision backtracking had to traverse some of the
parent states, this `6: R0_w=0` state output would be missing.
This is due to a quirk of the print_verifier_state() routine, which
performs mark_verifier_state_clean(env) at the end. This marks all
registers as "non-scratched", so the subsequent logic that prints only
*relevant* (that is, "scratched") registers doesn't see anything to
print and skips the output altogether.
print_verifier_state() is used both to print instruction context and to
print an **entire** verifier state indiscriminately, e.g., during
precision backtracking (and in a few other situations, like entering or
exiting a subprogram). This means that if we have to print an entire
parent state before getting to the instruction context state, the
instruction context is marked as clean and is omitted.
Long story short, this is definitely not intentional. So we fix this
behavior in this patch by teaching print_verifier_state() to clear
scratch state only if it was used to print instruction state, not the
parent/callback state. This is determined by the print_all option, so if
it's not set, we don't clear scratch state. This fixes missing
instruction state for these cases.
As for the mismatched instruction index, we fix that by making sure we
call print_insn_state() early inside check_cond_jmp_op(), before we
adjust insn_idx based on the taken-branch logic. And with that we get the
desired correct information:
9: (16) if w4 == 0x1 goto pc+9
mark_precise: frame0: last_idx 9 first_idx 9 subseq_idx -1
mark_precise: frame0: parent state regs=r4 stack=: R2_w=1944 R4_rw=P1 R10=fp0
mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx 9
mark_precise: frame0: regs=r4 stack= before 8: (66) if w4 s> 0x3 goto pc+5
mark_precise: frame0: regs=r4 stack= before 7: (b7) r4 = 1
9: R4=1
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/verifier.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 059f8e930499..fdcae52b76b4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1539,7 +1539,8 @@ static void print_verifier_state(struct bpf_verifier_env *env,
if (state->in_async_callback_fn)
verbose(env, " async_cb");
verbose(env, "\n");
- mark_verifier_state_clean(env);
+ if (!print_all)
+ mark_verifier_state_clean(env);
}
static inline u32 vlog_alignment(u32 pos)
@@ -14408,6 +14409,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
!sanitize_speculative_path(env, insn, *insn_idx + 1,
*insn_idx))
return -EFAULT;
+ if (env->log.level & BPF_LOG_LEVEL)
+ print_insn_state(env, this_branch->frame[this_branch->curframe]);
*insn_idx += insn->off;
return 0;
} else if (pred == 0) {
@@ -14420,6 +14423,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
*insn_idx + insn->off + 1,
*insn_idx))
return -EFAULT;
+ if (env->log.level & BPF_LOG_LEVEL)
+ print_insn_state(env, this_branch->frame[this_branch->curframe]);
return 0;
}
--
2.34.1
* RE: [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs
2023-10-11 22:37 ` [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs Andrii Nakryiko
@ 2023-10-12 5:33 ` John Fastabend
2023-10-12 16:22 ` Andrii Nakryiko
0 siblings, 1 reply; 16+ messages in thread
From: John Fastabend @ 2023-10-12 5:33 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Andrii Nakryiko wrote:
> Currently the way the verifier prints SCALAR_VALUE register state (and
> PTR_TO_PACKET, which can have var_off and ranges info as well) is very
> ambiguous.
>
> In the name of brevity we are trying to eliminate "unnecessary" output
> of umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if
> possible. Current rules are that if any of those have their default
> value (which for mins is the minimal value of its respective types: 0,
> S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or
> U64_MAX) *OR* if there is another min/max value that has a matching value.
> E.g., if smin=100 and umin=100, we'll emit only umin=100, omitting smin
> altogether. This approach has a few problems, being both ambiguous and
> sort-of incorrect in some cases.
>
> Ambiguity is due to the fact that a missing value could be either the
> default value or the value of umin/umax or smin/smax. This is especially
> confusing when we mix
> signed and unsigned ranges. Quite often, umin=0 and smin=0, and so we'll
> have only `umin=0` leaving anyone reading verifier log to guess whether
> smin is actually 0 or it's actually -9223372036854775808 (S64_MIN). And
> often times it's important to know, especially when debugging tricky
> issues.
+1
>
> "Sort-of incorrectness" comes from mixing negative and positive values.
> E.g., if umin is some large positive number, it can be equal to smin,
> which, interpreted as a signed value, is actually some negative value.
> Currently, that smin will be omitted and only umin will be emitted with
> a large positive value, giving an impression that smin is also positive.
>
> Anyway, ambiguity is the biggest issue making it impossible to have an
> exact understanding of register state, preventing any sort of automated
> testing of verifier state based on verifier log. This patch is
> attempting to rectify the situation by removing ambiguity, while
> minimizing the verbosity of the register state output.
>
> The rules are straightforward:
> - if some value is missing, then it definitely has its default
> value. I.e., `umin=0` means that umin is zero, but smin is actually
> S64_MIN;
> - all the various boundaries that happen to have the same value are
> emitted in one equality-separated sequence. E.g., if umin and smin are
> both 100, we'll emit `smin=umin=100`, making this explicit;
> - we do not mix negative and positive values together, and even if
> they happen to have the same bit-level value, they will be emitted
> separately with proper sign. I.e., if both umax and smax happen to be
> 0xffffffffffffffff, we'll emit them both separately as
> `smax=-1,umax=18446744073709551615`;
> - in the name of a bit more uniformity and consistency,
> {u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to
> improve readability.
agree.
>
> The above means that in case of all 4 ranges being, say, [50, 100] range,
> we'd previously see hugely ambiguous:
>
> R1=scalar(umin=50,umax=100)
>
> Now, we'll be more explicit:
>
> R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)
>
> This is slightly more verbose, but distinct from the case when we don't
> know anything about signed boundaries and 32-bit boundaries, which under
> new rules will match the old case:
>
> R1=scalar(umin=50,umax=100)
Did you consider perhaps just always printing the entire set? It's overly
verbose I guess but I find it easier to track state across multiple
steps this way.
Otherwise patch LGTM.
* RE: [PATCH bpf-next 0/5] BPF verifier log improvements
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
` (4 preceding siblings ...)
2023-10-11 22:37 ` [PATCH bpf-next 5/5] bpf: ensure proper register state printing for cond jumps Andrii Nakryiko
@ 2023-10-12 5:40 ` John Fastabend
2023-10-12 15:00 ` Eduard Zingerman
2023-10-16 12:00 ` patchwork-bot+netdevbpf
7 siblings, 0 replies; 16+ messages in thread
From: John Fastabend @ 2023-10-12 5:40 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Andrii Nakryiko wrote:
> This patch set fixes ambiguity in BPF verifier log output of SCALAR register
> in the parts that emit umin/umax, smin/smax, etc ranges. See patch #4 for
> details.
>
> Also, patch #5 fixes an issue with verifier log missing instruction context
> (state) output for conditionals that trigger precision marking. See details in
> the patch.
>
> First two patches are just improvements to two selftests that are very flaky
> locally when run in parallel mode.
>
> Patch #3 changes 'align' selftest to be less strict about exact verifier log
> output (which patch #4 changes, breaking lots of align tests as written). Now
> test does more of a register substate checks, mostly around expected var_off()
> values. This 'align' selftests is one of the more brittle ones and requires
> constant adjustment when verifier log output changes, without really catching
> any new issues. So hopefully these changes can minimize future support efforts
> for this specific set of tests.
LGTM. I had one question/comment in there, but I don't think it's too
important; feel free to ignore it if you like.
Acked-by: John Fastabend <john.fastabend@gmail.com>
* Re: [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness
2023-10-11 22:37 ` [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness Andrii Nakryiko
@ 2023-10-12 6:04 ` Yafang Shao
2023-10-12 16:37 ` Andrii Nakryiko
0 siblings, 1 reply; 16+ messages in thread
From: Yafang Shao @ 2023-10-12 6:04 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
On Thu, Oct 12, 2023 at 6:37 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Make these non-serial tests filter BPF programs by the intended PID of
> the test runner process. This isolates them from other parallel tests
> that might interfere accidentally.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> tools/testing/selftests/bpf/prog_tests/percpu_alloc.c | 3 +++
> tools/testing/selftests/bpf/progs/percpu_alloc_array.c | 7 +++++++
> .../selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c | 4 ++++
> 3 files changed, 14 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> index 9541e9b3a034..343da65864d6 100644
> --- a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> +++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> @@ -19,6 +19,7 @@ static void test_array(void)
> bpf_program__set_autoload(skel->progs.test_array_map_3, true);
> bpf_program__set_autoload(skel->progs.test_array_map_4, true);
>
> + skel->bss->my_pid = getpid();
> skel->rodata->nr_cpus = libbpf_num_possible_cpus();
>
> err = percpu_alloc_array__load(skel);
> @@ -51,6 +52,7 @@ static void test_array_sleepable(void)
>
> bpf_program__set_autoload(skel->progs.test_array_map_10, true);
>
> + skel->bss->my_pid = getpid();
> skel->rodata->nr_cpus = libbpf_num_possible_cpus();
>
> err = percpu_alloc_array__load(skel);
> @@ -85,6 +87,7 @@ static void test_cgrp_local_storage(void)
> if (!ASSERT_OK_PTR(skel, "percpu_alloc_cgrp_local_storage__open"))
> goto close_fd;
>
> + skel->bss->my_pid = getpid();
> skel->rodata->nr_cpus = libbpf_num_possible_cpus();
>
> err = percpu_alloc_cgrp_local_storage__load(skel);
> diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> index bbc45346e006..37c2d2608ec0 100644
> --- a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> +++ b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> @@ -71,6 +71,7 @@ int BPF_PROG(test_array_map_2)
> }
>
> int cpu0_field_d, sum_field_c;
> +int my_pid;
>
> /* Summarize percpu data */
> SEC("?fentry/bpf_fentry_test3")
> @@ -81,6 +82,9 @@ int BPF_PROG(test_array_map_3)
> struct val_t *v;
> struct elem *e;
>
> + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> + return 0;
> +
> e = bpf_map_lookup_elem(&array, &index);
> if (!e)
> return 0;
> @@ -130,6 +134,9 @@ int BPF_PROG(test_array_map_10)
> struct val_t *v;
> struct elem *e;
>
> + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> + return 0;
> +
> e = bpf_map_lookup_elem(&array, &index);
> if (!e)
> return 0;
> diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> index 1c36a241852c..a2acf9aa6c24 100644
> --- a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> +++ b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> @@ -70,6 +70,7 @@ int BPF_PROG(test_cgrp_local_storage_2)
> }
>
> int cpu0_field_d, sum_field_c;
> +int my_pid;
>
> /* Summarize percpu data collection */
> SEC("fentry/bpf_fentry_test3")
> @@ -81,6 +82,9 @@ int BPF_PROG(test_cgrp_local_storage_3)
> struct elem *e;
> int i;
>
> + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> + return 0;
> +
> task = bpf_get_current_task_btf();
A small nit: we have already got the current task. Would it be better to use
if (task->pid != my_pid)
instead?
> e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0, 0);
> if (!e)
> --
> 2.34.1
>
>
--
Regards
Yafang
* Re: [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion test robustness
2023-10-11 22:37 ` [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion " Andrii Nakryiko
@ 2023-10-12 13:22 ` Jiri Olsa
0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2023-10-12 13:22 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
On Wed, Oct 11, 2023 at 03:37:25PM -0700, Andrii Nakryiko wrote:
> Given missed_kprobe_recursion is non-serial and uses common testing
> kfuncs to count the number of recursion misses, it's possible that some other
> parallel test can trigger extraneous recursion misses. So we can't
> expect exactly 1 miss. Relax conditions and expect at least one.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
jirka
> ---
> tools/testing/selftests/bpf/prog_tests/missed.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/missed.c b/tools/testing/selftests/bpf/prog_tests/missed.c
> index 24ade11f5c05..70d90c43537c 100644
> --- a/tools/testing/selftests/bpf/prog_tests/missed.c
> +++ b/tools/testing/selftests/bpf/prog_tests/missed.c
> @@ -81,10 +81,10 @@ static void test_missed_kprobe_recursion(void)
> ASSERT_EQ(topts.retval, 0, "test_run");
>
> ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test1)), 0, "test1_recursion_misses");
> - ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
> - ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
> - ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
> - ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test5)), 1, "test5_recursion_misses");
> + ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
> + ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
> + ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
> + ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test5)), 1, "test5_recursion_misses");
>
> cleanup:
> missed_kprobe_recursion__destroy(skel);
> --
> 2.34.1
>
>
* Re: [PATCH bpf-next 0/5] BPF verifier log improvements
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
` (5 preceding siblings ...)
2023-10-12 5:40 ` [PATCH bpf-next 0/5] BPF verifier log improvements John Fastabend
@ 2023-10-12 15:00 ` Eduard Zingerman
2023-10-16 12:00 ` patchwork-bot+netdevbpf
7 siblings, 0 replies; 16+ messages in thread
From: Eduard Zingerman @ 2023-10-12 15:00 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team
On Wed, 2023-10-11 at 15:37 -0700, Andrii Nakryiko wrote:
> This patch set fixes ambiguity in BPF verifier log output of SCALAR register
> in the parts that emit umin/umax, smin/smax, etc ranges. See patch #4 for
> details.
>
> Also, patch #5 fixes an issue with verifier log missing instruction context
> (state) output for conditionals that trigger precision marking. See details in
> the patch.
>
> First two patches are just improvements to two selftests that are very flaky
> locally when run in parallel mode.
>
> Patch #3 changes 'align' selftest to be less strict about exact verifier log
> output (which patch #4 changes, breaking lots of align tests as written). Now
> test does more of a register substate checks, mostly around expected var_off()
> values. This 'align' selftests is one of the more brittle ones and requires
> constant adjustment when verifier log output changes, without really catching
> any new issues. So hopefully these changes can minimize future support efforts
> for this specific set of tests.
All seems reasonable and passes local testing.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> Andrii Nakryiko (5):
> selftests/bpf: improve percpu_alloc test robustness
> selftests/bpf: improve missed_kprobe_recursion test robustness
> selftests/bpf: make align selftests more robust
> bpf: disambiguate SCALAR register state output in verifier logs
> bpf: ensure proper register state printing for cond jumps
>
> kernel/bpf/verifier.c | 74 ++++--
> .../testing/selftests/bpf/prog_tests/align.c | 241 +++++++++---------
> .../testing/selftests/bpf/prog_tests/missed.c | 8 +-
> .../selftests/bpf/prog_tests/percpu_alloc.c | 3 +
> .../selftests/bpf/progs/exceptions_assert.c | 18 +-
> .../selftests/bpf/progs/percpu_alloc_array.c | 7 +
> .../progs/percpu_alloc_cgrp_local_storage.c | 4 +
> .../selftests/bpf/progs/verifier_ldsx.c | 2 +-
> 8 files changed, 200 insertions(+), 157 deletions(-)
>
* Re: [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs
2023-10-12 5:33 ` John Fastabend
@ 2023-10-12 16:22 ` Andrii Nakryiko
2023-10-12 16:59 ` John Fastabend
0 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-12 16:22 UTC (permalink / raw)
To: John Fastabend; +Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Wed, Oct 11, 2023 at 10:33 PM John Fastabend
<john.fastabend@gmail.com> wrote:
>
> Andrii Nakryiko wrote:
> > Currently the way the verifier prints SCALAR_VALUE register state (and
> > PTR_TO_PACKET, which can have var_off and ranges info as well) is very
> > ambiguous.
> >
> > In the name of brevity we are trying to eliminate "unnecessary" output
> > of umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if
> > possible. Current rules are that if any of those have their default
> > value (which for mins is the minimal value of its respective types: 0,
> > S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or
> > U64_MAX) *OR* if there is another min/max value that has a matching value.
> > E.g., if smin=100 and umin=100, we'll emit only umin=100, omitting smin
> > altogether. This approach has a few problems, being both ambiguous and
> > sort-of incorrect in some cases.
> >
> > Ambiguity is due to the fact that a missing value could be either the
> > default value or the value of umin/umax or smin/smax. This is especially
> > confusing when we mix
> > signed and unsigned ranges. Quite often, umin=0 and smin=0, and so we'll
> > have only `umin=0` leaving anyone reading verifier log to guess whether
> > smin is actually 0 or it's actually -9223372036854775808 (S64_MIN). And
> > often times it's important to know, especially when debugging tricky
> > issues.
>
> +1
>
> >
> > "Sort-of incorrectness" comes from mixing negative and positive values.
> > E.g., if umin is some large positive number, it can be equal to smin,
> > which, interpreted as a signed value, is actually some negative value.
> > Currently, that smin will be omitted and only umin will be emitted with
> > a large positive value, giving an impression that smin is also positive.
> >
> > Anyway, ambiguity is the biggest issue making it impossible to have an
> > exact understanding of register state, preventing any sort of automated
> > testing of verifier state based on verifier log. This patch is
> > attempting to rectify the situation by removing ambiguity, while
> > minimizing the verbosity of the register state output.
> >
> > The rules are straightforward:
> > - if some value is missing, then it definitely has its default
> > value. I.e., `umin=0` means that umin is zero, but smin is actually
> > S64_MIN;
> > - all the various boundaries that happen to have the same value are
> > emitted in one equality-separated sequence. E.g., if umin and smin are
> > both 100, we'll emit `smin=umin=100`, making this explicit;
> > - we do not mix negative and positive values together, and even if
> > they happen to have the same bit-level value, they will be emitted
> > separately with proper sign. I.e., if both umax and smax happen to be
> > 0xffffffffffffffff, we'll emit them both separately as
> > `smax=-1,umax=18446744073709551615`;
> > - in the name of a bit more uniformity and consistency,
> > {u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to
> > improve readability.
>
> agree.
>
> >
> > The above means that in case of all 4 ranges being, say, [50, 100] range,
> > we'd previously see hugely ambiguous:
> >
> > R1=scalar(umin=50,umax=100)
> >
> > Now, we'll be more explicit:
> >
> > R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)
> >
> > This is slightly more verbose, but distinct from the case when we don't
> > know anything about signed boundaries and 32-bit boundaries, which under
> > new rules will match the old case:
> >
> > R1=scalar(umin=50,umax=100)
>
> Did you consider perhaps just always printing the entire set? It's overly
> verbose I guess but I find it easier to track state across multiple
> steps this way.
I didn't consider that because it's way too distracting and verbose
(IMO) in practice. For one, those default values represent the idea
"we don't know anything", so whether we see umax=18446744073709551615
or just don't see umax makes little difference in practice (though
perhaps one has to come to the realization that those two things are
equivalent). But also think about seeing this:
smin=-9223372036854775807,smax=9223372036854775807,umin=0,umax=18446744073709551615,smin32=-2147483648,smax32=2147483647,umin32=0,umax32=4294967295
How verbose and distracting that is, and how much time would it take
you to notice that this is not just "we don't know anything about this
register", but that actually smin is not a default, it's S64_MIN+1.
This is of course an extreme example (I mostly wanted to show how verbose
default output will be), but I think the point stands that omitting
defaults brings out what extra information we have much better.
It's an option to do it for LOG_LEVEL_2, but I would still not do
that; I'd find it too noisy even for level 2.
>
> Otherwise patch LGTM.
* Re: [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness
2023-10-12 6:04 ` Yafang Shao
@ 2023-10-12 16:37 ` Andrii Nakryiko
0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-12 16:37 UTC (permalink / raw)
To: Yafang Shao; +Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Wed, Oct 11, 2023 at 11:04 PM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> On Thu, Oct 12, 2023 at 6:37 AM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > Make these non-serial tests filter BPF programs by the intended PID of
> > the test runner process. This isolates them from other parallel tests
> > that might interfere accidentally.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > tools/testing/selftests/bpf/prog_tests/percpu_alloc.c | 3 +++
> > tools/testing/selftests/bpf/progs/percpu_alloc_array.c | 7 +++++++
> > .../selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c | 4 ++++
> > 3 files changed, 14 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> > index 9541e9b3a034..343da65864d6 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
> > @@ -19,6 +19,7 @@ static void test_array(void)
> > bpf_program__set_autoload(skel->progs.test_array_map_3, true);
> > bpf_program__set_autoload(skel->progs.test_array_map_4, true);
> >
> > + skel->bss->my_pid = getpid();
> > skel->rodata->nr_cpus = libbpf_num_possible_cpus();
> >
> > err = percpu_alloc_array__load(skel);
> > @@ -51,6 +52,7 @@ static void test_array_sleepable(void)
> >
> > bpf_program__set_autoload(skel->progs.test_array_map_10, true);
> >
> > + skel->bss->my_pid = getpid();
> > skel->rodata->nr_cpus = libbpf_num_possible_cpus();
> >
> > err = percpu_alloc_array__load(skel);
> > @@ -85,6 +87,7 @@ static void test_cgrp_local_storage(void)
> > if (!ASSERT_OK_PTR(skel, "percpu_alloc_cgrp_local_storage__open"))
> > goto close_fd;
> >
> > + skel->bss->my_pid = getpid();
> > skel->rodata->nr_cpus = libbpf_num_possible_cpus();
> >
> > err = percpu_alloc_cgrp_local_storage__load(skel);
> > diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> > index bbc45346e006..37c2d2608ec0 100644
> > --- a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> > +++ b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
> > @@ -71,6 +71,7 @@ int BPF_PROG(test_array_map_2)
> > }
> >
> > int cpu0_field_d, sum_field_c;
> > +int my_pid;
> >
> > /* Summarize percpu data */
> > SEC("?fentry/bpf_fentry_test3")
> > @@ -81,6 +82,9 @@ int BPF_PROG(test_array_map_3)
> > struct val_t *v;
> > struct elem *e;
> >
> > + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> > + return 0;
> > +
> > e = bpf_map_lookup_elem(&array, &index);
> > if (!e)
> > return 0;
> > @@ -130,6 +134,9 @@ int BPF_PROG(test_array_map_10)
> > struct val_t *v;
> > struct elem *e;
> >
> > + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> > + return 0;
> > +
> > e = bpf_map_lookup_elem(&array, &index);
> > if (!e)
> > return 0;
> > diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> > index 1c36a241852c..a2acf9aa6c24 100644
> > --- a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> > +++ b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
> > @@ -70,6 +70,7 @@ int BPF_PROG(test_cgrp_local_storage_2)
> > }
> >
> > int cpu0_field_d, sum_field_c;
> > +int my_pid;
> >
> > /* Summarize percpu data collection */
> > SEC("fentry/bpf_fentry_test3")
> > @@ -81,6 +82,9 @@ int BPF_PROG(test_cgrp_local_storage_3)
> > struct elem *e;
> > int i;
> >
> > + if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
> > + return 0;
> > +
> > task = bpf_get_current_task_btf();
>
> a small nit.
>
> We already have the current task. Would it be better to use
>
> if (task->pid != my_pid)
>
> instead?
>
It doesn't matter; it's the standard snippet of code we have in all the
tests that might see interference from parallel tests.
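For anyone reading along, the `>> 32` is there because bpf_get_current_pid_tgid() packs the thread group ID into the upper 32 bits and the thread-level PID into the lower 32 bits, so the shift recovers what userspace getpid() returns. A trivial userspace sketch of the packing:

```c
#include <stdint.h>

/* bpf_get_current_pid_tgid() returns (u64)task->tgid << 32 | task->pid;
 * modeled here in plain userspace C for illustration.
 */
static uint64_t pack_pid_tgid(uint32_t tgid, uint32_t pid)
{
	return ((uint64_t)tgid << 32) | (uint64_t)pid;
}

/* What the tests' `bpf_get_current_pid_tgid() >> 32` computes */
static uint32_t tgid_of(uint64_t pid_tgid)
{
	return (uint32_t)(pid_tgid >> 32);
}
```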
> > e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0, 0);
> > if (!e)
> > --
> > 2.34.1
> >
> >
>
>
> --
> Regards
> Yafang
* Re: [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs
2023-10-12 16:22 ` Andrii Nakryiko
@ 2023-10-12 16:59 ` John Fastabend
2023-10-12 17:45 ` Andrii Nakryiko
0 siblings, 1 reply; 16+ messages in thread
From: John Fastabend @ 2023-10-12 16:59 UTC (permalink / raw)
To: Andrii Nakryiko, John Fastabend
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
Andrii Nakryiko wrote:
> On Wed, Oct 11, 2023 at 10:33 PM John Fastabend
> <john.fastabend@gmail.com> wrote:
> >
> > Andrii Nakryiko wrote:
> > > Currently the way that verifier prints SCALAR_VALUE register state (and
> > > PTR_TO_PACKET, which can have var_off and ranges info as well) is very
> > > ambiguous.
> > >
> > > In the name of brevity we are trying to eliminate "unnecessary" output
> > > of umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if
> > > possible. Current rules are that if any of those have their default
> > > value (which for mins is the minimal value of its respective types: 0,
> > > S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or
> > > U64_MAX) *OR* if there is another min/max value that has a matching value.
> > > E.g., if smin=100 and umin=100, we'll emit only umin=100, omitting smin
> > > altogether. This approach has a few problems, being both ambiguous and
> > > sort-of incorrect in some cases.
> > >
> > > The ambiguity is that a missing value could be either the default value or
> > > equal to umin/umax or smin/smax. This is especially confusing when we mix
> > > signed and unsigned ranges. Quite often, umin=0 and smin=0, and so we'll
> > > have only `umin=0` leaving anyone reading verifier log to guess whether
> > > smin is actually 0 or it's actually -9223372036854775808 (S64_MIN). And
> > > oftentimes it's important to know, especially when debugging tricky
> > > issues.
> >
> > +1
> >
> > >
> > > "Sort-of incorrectness" comes from mixing negative and positive values.
> > > E.g., if umin is some large positive number, it can be equal to smin
> > > which, interpreted as a signed value, is actually some negative value.
> > > Currently, that smin will be omitted and only umin will be emitted with
> > > a large positive value, giving an impression that smin is also positive.
> > >
> > > Anyway, ambiguity is the biggest issue making it impossible to have an
> > > exact understanding of register state, preventing any sort of automated
> > > testing of verifier state based on verifier log. This patch is
> > > attempting to rectify the situation by removing ambiguity, while
> > > minimizing the verboseness of register state output.
> > >
> > > The rules are straightforward:
> > > - if a value is missing, then it is definitely at its default
> > > value. I.e., `umin=0` means that umin is zero, but smin is actually
> > > S64_MIN;
> > > - all the various boundaries that happen to have the same value are
> > > emitted in one equality-separated sequence. E.g., if umin and smin are
> > > both 100, we'll emit `smin=umin=100`, making this explicit;
> > > - we do not mix negative and positive values together, and even if
> > > they happen to have the same bit-level value, they will be emitted
> > > separately with proper sign. I.e., if both umax and smax happen to be
> > > 0xffffffffffffffff, we'll emit them both separately as
> > > `smax=-1,umax=18446744073709551615`;
> > > - in the name of a bit more uniformity and consistency,
> > > {u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to
> > > improve readability.
> >
> > agree.
> >
> > >
> > > The above means that in the case of all 4 ranges being, say, [50, 100],
> > > we'd previously see hugely ambiguous:
> > >
> > > R1=scalar(umin=50,umax=100)
> > >
> > > Now, we'll be more explicit:
> > >
> > > R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)
> > >
> > > This is slightly more verbose, but distinct from the case when we don't
> > > know anything about signed boundaries and 32-bit boundaries, which under
> > > new rules will match the old case:
> > >
> > > R1=scalar(umin=50,umax=100)
> >
> > Did you consider perhaps just always printing the entire set? It's overly
> > verbose I guess but I find it easier to track state across multiple
> > steps this way.
>
> I didn't consider that because it's way too distracting and verbose
> (IMO) in practice. For one, those default values represent the idea
> "we don't know anything", so whether we see umax=18446744073709551615
> or just don't see umax makes little difference in practice (though
> perhaps one has to come to the realization that those two things are
> equivalent). But also think about seeing this:
>
> smin=-9223372036854775807,smax=9223372036854775807,umin=0,umax=18446744073709551615,smin32=-2147483648,smax32=2147483647,umin32=0,umax32=4294967295
you could do,
smin=SMIN,smax=SMAX,umin=0,umax=UMAX,smin32=SMIN32,smax32=SMAX32,umin32=0,umax32=UMAX
but I see your point.
>
> How verbose and distracting that is, and how much time would it take
> you to notice that this is not just "we don't know anything about this
> register", but that actually smin is not a default, it's S64_MIN+1.
> This is of course an extreme example (I mostly wanted to show how verbose
> the default output would be), but I think the point stands that omitting
> defaults brings out what extra information we have much better.
>
> It's an option to do it for LOG_LEVEL_2, but I would still not do
> that, I'd find it too noisy even for level 2.
My $.02 just leave it as you have it here.
>
> >
> > Otherwise patch LGTM.
* Re: [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs
2023-10-12 16:59 ` John Fastabend
@ 2023-10-12 17:45 ` Andrii Nakryiko
0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2023-10-12 17:45 UTC (permalink / raw)
To: John Fastabend; +Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Thu, Oct 12, 2023 at 9:59 AM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Andrii Nakryiko wrote:
> > On Wed, Oct 11, 2023 at 10:33 PM John Fastabend
> > <john.fastabend@gmail.com> wrote:
> > >
> > > Andrii Nakryiko wrote:
> > > > Currently the way that verifier prints SCALAR_VALUE register state (and
> > > > PTR_TO_PACKET, which can have var_off and ranges info as well) is very
> > > > ambiguous.
> > > >
> > > > In the name of brevity we are trying to eliminate "unnecessary" output
> > > > of umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if
> > > > possible. Current rules are that if any of those have their default
> > > > value (which for mins is the minimal value of its respective types: 0,
> > > > S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or
> > > > U64_MAX) *OR* if there is another min/max value that has a matching value.
> > > > E.g., if smin=100 and umin=100, we'll emit only umin=100, omitting smin
> > > > altogether. This approach has a few problems, being both ambiguous and
> > > > sort-of incorrect in some cases.
> > > >
> > > > The ambiguity is that a missing value could be either the default value or
> > > > equal to umin/umax or smin/smax. This is especially confusing when we mix
> > > > signed and unsigned ranges. Quite often, umin=0 and smin=0, and so we'll
> > > > have only `umin=0` leaving anyone reading verifier log to guess whether
> > > > smin is actually 0 or it's actually -9223372036854775808 (S64_MIN). And
> > > > oftentimes it's important to know, especially when debugging tricky
> > > > issues.
> > >
> > > +1
> > >
> > > >
> > > > "Sort-of incorrectness" comes from mixing negative and positive values.
> > > > E.g., if umin is some large positive number, it can be equal to smin
> > > > which, interpreted as a signed value, is actually some negative value.
> > > > Currently, that smin will be omitted and only umin will be emitted with
> > > > a large positive value, giving an impression that smin is also positive.
> > > >
> > > > Anyway, ambiguity is the biggest issue making it impossible to have an
> > > > exact understanding of register state, preventing any sort of automated
> > > > testing of verifier state based on verifier log. This patch is
> > > > attempting to rectify the situation by removing ambiguity, while
> > > > minimizing the verboseness of register state output.
> > > >
> > > > The rules are straightforward:
> > > > - if a value is missing, then it is definitely at its default
> > > > value. I.e., `umin=0` means that umin is zero, but smin is actually
> > > > S64_MIN;
> > > > - all the various boundaries that happen to have the same value are
> > > > emitted in one equality-separated sequence. E.g., if umin and smin are
> > > > both 100, we'll emit `smin=umin=100`, making this explicit;
> > > > - we do not mix negative and positive values together, and even if
> > > > they happen to have the same bit-level value, they will be emitted
> > > > separately with proper sign. I.e., if both umax and smax happen to be
> > > > 0xffffffffffffffff, we'll emit them both separately as
> > > > `smax=-1,umax=18446744073709551615`;
> > > > - in the name of a bit more uniformity and consistency,
> > > > {u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to
> > > > improve readability.
> > >
> > > agree.
> > >
> > > >
> > > > The above means that in the case of all 4 ranges being, say, [50, 100],
> > > > we'd previously see hugely ambiguous:
> > > >
> > > > R1=scalar(umin=50,umax=100)
> > > >
> > > > Now, we'll be more explicit:
> > > >
> > > > R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)
> > > >
> > > > This is slightly more verbose, but distinct from the case when we don't
> > > > know anything about signed boundaries and 32-bit boundaries, which under
> > > > new rules will match the old case:
> > > >
> > > > R1=scalar(umin=50,umax=100)
> > >
> > > Did you consider perhaps just always printing the entire set? It's overly
> > > verbose I guess but I find it easier to track state across multiple
> > > steps this way.
> >
> > I didn't consider that because it's way too distracting and verbose
> > (IMO) in practice. For one, those default values represent the idea
> > "we don't know anything", so whether we see umax=18446744073709551615
> > or just don't see umax makes little difference in practice (though
> > perhaps one has to come to the realization that those two things are
> > equivalent). But also think about seeing this:
> >
> > smin=-9223372036854775807,smax=9223372036854775807,umin=0,umax=18446744073709551615,smin32=-2147483648,smax32=2147483647,umin32=0,umax32=4294967295
>
> you could do,
>
> smin=SMIN,smax=SMAX,umin=0,umax=UMAX,smin32=SMIN32,smax32=SMAX32,umin32=0,umax32=UMAX
>
> but I see your point.
Heh, SMIN/SMAX and other "symbolic constants" are not a bad idea, yes.
I actually have a rather big patch set coming up for testing range
bounds tracking, and there I went even further, both having
S64_MIN/MAX, etc., but also printing in hex vs decimal depending on
how big or small the actual value is. It seems super useful and
intuitive in practice (I did it in the selftests, not in the kernel,
but the point stands).
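Sketching what such a formatter could look like (purely hypothetical; the symbolic names mirror common kernel constants, but the hex/decimal cutoff is an arbitrary invention, not the actual selftest code):

```c
#include <limits.h>
#include <stdio.h>

/* Hypothetical bound formatter: well-known extremes print as symbolic
 * names, "large" magnitudes print in hex, small ones in plain decimal.
 * The 65536 cutoff is an arbitrary choice for illustration.
 */
static void fmt_s64(char *out, size_t sz, long long v)
{
	if (v == LLONG_MIN)
		snprintf(out, sz, "S64_MIN");
	else if (v == LLONG_MAX)
		snprintf(out, sz, "S64_MAX");
	else if (v == INT_MIN)
		snprintf(out, sz, "S32_MIN");
	else if (v == INT_MAX)
		snprintf(out, sz, "S32_MAX");
	else if (v < -65536 || v > 65536)
		snprintf(out, sz, "%#llx", (unsigned long long)v);
	else
		snprintf(out, sz, "%lld", v);
}
```

With something like this, S64_MIN+1 shows up as 0x8000000000000001 rather than a 19-digit decimal, which is much easier to recognize at a glance.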
But I think this is a completely separate discussion, and with my
changes making range printing a bit more generic it should now be
even easier to do. But let's not do too much at the same time.
>
> >
> > How verbose and distracting that is, and how much time would it take
> > you to notice that this is not just "we don't know anything about this
> > register", but that actually smin is not a default, it's S64_MIN+1.
> > This is of course an extreme example (I mostly wanted to show how verbose
> > the default output would be), but I think the point stands that omitting
> > defaults brings out what extra information we have much better.
> >
> > It's an option to do it for LOG_LEVEL_2, but I would still not do
> > that, I'd find it too noisy even for level 2.
>
> My $.02 just leave it as you have it here.
Sounds good :)
>
> >
> > >
> > > Otherwise patch LGTM.
>
>
* Re: [PATCH bpf-next 0/5] BPF verifier log improvements
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
` (6 preceding siblings ...)
2023-10-12 15:00 ` Eduard Zingerman
@ 2023-10-16 12:00 ` patchwork-bot+netdevbpf
7 siblings, 0 replies; 16+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-10-16 12:00 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
Hello:
This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:
On Wed, 11 Oct 2023 15:37:23 -0700 you wrote:
> This patch set fixes ambiguity in BPF verifier log output of SCALAR register
> in the parts that emit umin/umax, smin/smax, etc ranges. See patch #4 for
> details.
>
> Also, patch #5 fixes an issue with verifier log missing instruction context
> (state) output for conditionals that trigger precision marking. See details in
> the patch.
>
> [...]
Here is the summary with links:
- [bpf-next,1/5] selftests/bpf: improve percpu_alloc test robustness
https://git.kernel.org/bpf/bpf-next/c/2d78928c9cf7
- [bpf-next,2/5] selftests/bpf: improve missed_kprobe_recursion test robustness
https://git.kernel.org/bpf/bpf-next/c/08a7078feacf
- [bpf-next,3/5] selftests/bpf: make align selftests more robust
https://git.kernel.org/bpf/bpf-next/c/cde785142885
- [bpf-next,4/5] bpf: disambiguate SCALAR register state output in verifier logs
https://git.kernel.org/bpf/bpf-next/c/72f8a1de4a7e
- [bpf-next,5/5] bpf: ensure proper register state printing for cond jumps
https://git.kernel.org/bpf/bpf-next/c/1a8a315f008a
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2023-10-16 12:00 UTC | newest]
Thread overview: 16+ messages
2023-10-11 22:37 [PATCH bpf-next 0/5] BPF verifier log improvements Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 1/5] selftests/bpf: improve percpu_alloc test robustness Andrii Nakryiko
2023-10-12 6:04 ` Yafang Shao
2023-10-12 16:37 ` Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 2/5] selftests/bpf: improve missed_kprobe_recursion " Andrii Nakryiko
2023-10-12 13:22 ` Jiri Olsa
2023-10-11 22:37 ` [PATCH bpf-next 3/5] selftests/bpf: make align selftests more robust Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 4/5] bpf: disambiguate SCALAR register state output in verifier logs Andrii Nakryiko
2023-10-12 5:33 ` John Fastabend
2023-10-12 16:22 ` Andrii Nakryiko
2023-10-12 16:59 ` John Fastabend
2023-10-12 17:45 ` Andrii Nakryiko
2023-10-11 22:37 ` [PATCH bpf-next 5/5] bpf: ensure proper register state printing for cond jumps Andrii Nakryiko
2023-10-12 5:40 ` [PATCH bpf-next 0/5] BPF verifier log improvements John Fastabend
2023-10-12 15:00 ` Eduard Zingerman
2023-10-16 12:00 ` patchwork-bot+netdevbpf