From: Eduard Zingerman <eddyz87@gmail.com>
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
kernel-team@fb.com, yonghong.song@linux.dev,
Eduard Zingerman <eddyz87@gmail.com>
Subject: [RFC bpf-next v1 3/7] bpf: don't do clean_live_states when state->loop_entry->branches > 0
Date: Wed, 22 Jan 2025 04:04:38 -0800
Message-ID: <20250122120442.3536298-4-eddyz87@gmail.com>
In-Reply-To: <20250122120442.3536298-1-eddyz87@gmail.com>
verifier.c:is_state_visited() uses RANGE_WITHIN states comparison rules
for cached states that have a loop_entry with a non-zero branches count
(meaning that loop_entry's verification is not yet done).

The RANGE_WITHIN rules in regsafe()/stacksafe() require register and
stack object types to be identical in the current and old states.
verifier.c:clean_live_states() replaces registers and stack spills
with NOT_INIT/STACK_INVALID marks, if these registers/stack spills are
not read in any child state. This means that clean_live_states() works
against the loop convergence logic under some conditions. See the
selftest in the next patch for a specific example.
Mitigate this by prohibiting clean_verifier_state() when
state->loop_entry->branches > 0.

This undoes the negative verification performance impact of the
copy_verifier_state() fix from the previous patch.
Below is a comparison between master and this patch.
selftests:
File Program Insns (DIFF) States (DIFF)
-------------------- ---------------------------- ----------------- ----------------
arena_htab.bpf.o arena_htab_llvm -294 (-41.00%) -20 (-35.09%)
arena_htab_asm.bpf.o arena_htab_asm -152 (-25.46%) -10 (-21.28%)
arena_list.bpf.o arena_list_add +329 (+22.04%) +7 (+23.33%)
arena_list.bpf.o arena_list_del -51 (-16.50%) -8 (-34.78%)
iters.bpf.o checkpoint_states_deletion -8297 (-45.78%) -451 (-55.13%)
iters.bpf.o clean_live_states -998653 (-99.87%) -90126 (-99.85%)
iters.bpf.o iter_nested_deeply_iters -226 (-38.11%) -24 (-35.82%)
iters.bpf.o iter_subprog_check_stacksafe -20 (-12.90%) -1 (-6.67%)
iters.bpf.o iter_subprog_iters -286 (-26.14%) -20 (-22.73%)
iters.bpf.o loop_state_deps2 -123 (-25.68%) -11 (-23.91%)
iters.bpf.o triple_continue -4 (-11.43%) +0 (+0.00%)
mptcp_subflow.bpf.o _getsockopt_subflow -55 (-10.98%) -2 (-8.00%)
pyperf600_iter.bpf.o on_event -6025 (-48.83%) -160 (-36.28%)
(arena_list_add requires further investigation)
sched_ext:
Program Insns (DIFF) States (DIFF)
---------------------- ----------------- ----------------
layered_dispatch -3570 (-31.08%) -227 (-26.77%)
layered_dump -2746 (-37.00%) -411 (-60.35%)
layered_enqueue -3781 (-28.93%) -341 (-28.95%)
layered_init -994488 (-99.45%) -84153 (-99.39%)
layered_runnable -1467 (-45.59%) -160 (-54.24%)
refresh_layer_cpumasks -15202 (-92.21%) -1650 (-93.22%)
rusty_select_cpu -647 (-30.84%) -53 (-29.28%)
rusty_set_cpumask -15934 (-78.67%) -1359 (-81.62%)
central_init -330 (-36.18%) -10 (-20.83%)
pair_dispatch -998092 (-99.81%) -58249 (-99.76%)
'layered_init' and 'pair_dispatch' hit the 1M instructions limit on
master, but are verified ok with this patch.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/verifier.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c7ceb59d3a19..1c2199a3f38f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17801,12 +17801,16 @@ static void clean_verifier_state(struct bpf_verifier_env *env,
 static void clean_live_states(struct bpf_verifier_env *env, int insn,
 			      struct bpf_verifier_state *cur)
 {
+	struct bpf_verifier_state *loop_entry;
 	struct bpf_verifier_state_list *sl;
 
 	sl = *explored_state(env, insn);
 	while (sl) {
 		if (sl->state.branches)
 			goto next;
+		loop_entry = get_loop_entry(&sl->state);
+		if (loop_entry && loop_entry->branches)
+			goto next;
 		if (sl->state.insn_idx != insn ||
 		    !same_callsites(&sl->state, cur))
 			goto next;
--
2.47.1