* [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes
@ 2023-11-08 23:11 Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg() Andrii Nakryiko
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-08 23:11 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
A few small-ish fixes to the BPF verifier's CFG logic around handling and
reporting of ldimm64 instructions, and also around too eagerly reported back-edges.
Patch #1 was previously submitted separately ([0]), and so this patch set
supersedes that patch.
Fixing the above CFG issues uncovered one interesting edge case in the precision
backtracking logic, which patch #2 fixes as well. See the patch for details.
All of these fixes cover quite obscure corner cases that don't come up often in
practice, and they are all applicable only to privileged BPF mode, so targeting
bpf-next seems appropriate. Also note that [1] also touches the
get_prev_insn_idx() function, so the two would conflict if they land in
different trees.
[0] https://patchwork.kernel.org/project/netdevbpf/patch/20231101205626.119243-1-andrii@kernel.org/
[1] https://patchwork.kernel.org/project/netdevbpf/list/?series=797781&state=*
Andrii Nakryiko (4):
bpf: handle ldimm64 properly in check_cfg()
bpf: fix precision backtracking instruction iteration
bpf: fix control-flow graph checking in privileged mode
selftests/bpf: add more test cases for check_cfg()
include/linux/bpf.h | 8 +-
kernel/bpf/verifier.c | 85 ++++++++++++-------
.../selftests/bpf/progs/verifier_cfg.c | 66 +++++++++++++-
.../selftests/bpf/progs/verifier_loops1.c | 9 +-
.../testing/selftests/bpf/verifier/ld_imm64.c | 8 +-
5 files changed, 136 insertions(+), 40 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg()
2023-11-08 23:11 [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes Andrii Nakryiko
@ 2023-11-08 23:11 ` Andrii Nakryiko
2023-11-09 22:25 ` Eduard Zingerman
2023-11-08 23:11 ` [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration Andrii Nakryiko
` (2 subsequent siblings)
3 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-08 23:11 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Hao Sun
ldimm64 instructions are 16 bytes long, and so have to be handled
appropriately in check_cfg(), just like the rest of the BPF verifier does.
This has implications in three places:
- when determining next instruction for non-jump instructions;
- when determining next instruction for callback address ldimm64
instructions (in visit_func_call_insn());
- when checking for unreachable instructions, where the second half of
ldimm64 is expected to be unreachable.
We also take this as an opportunity to report jumps into the middle of an
ldimm64 instruction, and adjust a few test_verifier tests accordingly.
Reported-by: Hao Sun <sunhao.th@gmail.com>
Fixes: 475fb78fbf48 ("bpf: verifier (add branch/goto checks)")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
include/linux/bpf.h | 8 ++++--
kernel/bpf/verifier.c | 27 ++++++++++++++-----
.../testing/selftests/bpf/verifier/ld_imm64.c | 8 +++---
3 files changed, 30 insertions(+), 13 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index eb84caf133df..222275232453 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -909,10 +909,14 @@ bpf_ctx_record_field_size(struct bpf_insn_access_aux *aux, u32 size)
aux->ctx_field_size = size;
}
+static bool bpf_is_ldimm64(const struct bpf_insn *insn)
+{
+ return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
+}
+
static inline bool bpf_pseudo_func(const struct bpf_insn *insn)
{
- return insn->code == (BPF_LD | BPF_IMM | BPF_DW) &&
- insn->src_reg == BPF_PSEUDO_FUNC;
+ return bpf_is_ldimm64(insn) && insn->src_reg == BPF_PSEUDO_FUNC;
}
struct bpf_prog_ops {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bf94ba50c6ee..9d2af05e37a2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15504,15 +15504,16 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
struct bpf_verifier_env *env,
bool visit_callee)
{
- int ret;
+ int ret, insn_sz;
- ret = push_insn(t, t + 1, FALLTHROUGH, env, false);
+ insn_sz = bpf_is_ldimm64(&insns[t]) ? 2 : 1;
+ ret = push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
if (ret)
return ret;
- mark_prune_point(env, t + 1);
+ mark_prune_point(env, t + insn_sz);
/* when we exit from subprog, we need to record non-linear history */
- mark_jmp_point(env, t + 1);
+ mark_jmp_point(env, t + insn_sz);
if (visit_callee) {
mark_prune_point(env, t);
@@ -15534,15 +15535,17 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
static int visit_insn(int t, struct bpf_verifier_env *env)
{
struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
- int ret, off;
+ int ret, off, insn_sz;
if (bpf_pseudo_func(insn))
return visit_func_call_insn(t, insns, env, true);
/* All non-branch instructions have a single fall-through edge. */
if (BPF_CLASS(insn->code) != BPF_JMP &&
- BPF_CLASS(insn->code) != BPF_JMP32)
- return push_insn(t, t + 1, FALLTHROUGH, env, false);
+ BPF_CLASS(insn->code) != BPF_JMP32) {
+ insn_sz = bpf_is_ldimm64(insn) ? 2 : 1;
+ return push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
+ }
switch (BPF_OP(insn->code)) {
case BPF_EXIT:
@@ -15672,11 +15675,21 @@ static int check_cfg(struct bpf_verifier_env *env)
}
for (i = 0; i < insn_cnt; i++) {
+ struct bpf_insn *insn = &env->prog->insnsi[i];
+
if (insn_state[i] != EXPLORED) {
verbose(env, "unreachable insn %d\n", i);
ret = -EINVAL;
goto err_free;
}
+ if (bpf_is_ldimm64(insn)) {
+ if (insn_state[i + 1] != 0) {
+ verbose(env, "jump into the middle of ldimm64 insn %d\n", i);
+ ret = -EINVAL;
+ goto err_free;
+ }
+ i++; /* skip second half of ldimm64 */
+ }
}
ret = 0; /* cfg looks good */
diff --git a/tools/testing/selftests/bpf/verifier/ld_imm64.c b/tools/testing/selftests/bpf/verifier/ld_imm64.c
index f9297900cea6..78f19c255f20 100644
--- a/tools/testing/selftests/bpf/verifier/ld_imm64.c
+++ b/tools/testing/selftests/bpf/verifier/ld_imm64.c
@@ -9,8 +9,8 @@
BPF_MOV64_IMM(BPF_REG_0, 2),
BPF_EXIT_INSN(),
},
- .errstr = "invalid BPF_LD_IMM insn",
- .errstr_unpriv = "R1 pointer comparison",
+ .errstr = "jump into the middle of ldimm64 insn 1",
+ .errstr_unpriv = "jump into the middle of ldimm64 insn 1",
.result = REJECT,
},
{
@@ -23,8 +23,8 @@
BPF_LD_IMM64(BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
- .errstr = "invalid BPF_LD_IMM insn",
- .errstr_unpriv = "R1 pointer comparison",
+ .errstr = "jump into the middle of ldimm64 insn 1",
+ .errstr_unpriv = "jump into the middle of ldimm64 insn 1",
.result = REJECT,
},
{
--
2.34.1
* [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration
2023-11-08 23:11 [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg() Andrii Nakryiko
@ 2023-11-08 23:11 ` Andrii Nakryiko
2023-11-09 17:20 ` Eduard Zingerman
2023-11-08 23:11 ` [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg() Andrii Nakryiko
3 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-08 23:11 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Fix an edge case in __mark_chain_precision() which prematurely stops
backtracking instructions in a state if it happens that the state's first
and last instruction indexes are the same. This situation doesn't
necessarily mean that there were no instructions simulated in a state,
but rather that we started from that instruction, jumped around a bit,
and then ended up at the same instruction before checkpointing or
marking precision.
To distinguish between these two possible situations, we need to consult
the jump history. If it's empty or contains a single record "bridging" the
parent state and the first instruction of the processed state, then we
indeed backtracked all instructions in this state. But if the history is
not empty, we are definitely not done yet.
Move this logic inside get_prev_insn_idx() to contain it more nicely.
Use -ENOENT return code to denote "we are out of instructions"
situation.
This bug was exposed by verifier_cfg.c's bounded_recursion subtest, once
the next fix in this patch set is applied.
Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/verifier.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9d2af05e37a2..edca7f1ad335 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3662,12 +3662,29 @@ static int push_jmp_history(struct bpf_verifier_env *env,
/* Backtrack one insn at a time. If idx is not at the top of recorded
* history then previous instruction came from straight line execution.
+ * Return -ENOENT if we exhausted all instructions within given state.
+ *
+ * It's legal to have a bit of a looping with the same starting and ending
+ * insn index within the same state, e.g.: 3->4->5->3, so just because current
+ * instruction index is the same as state's first_idx doesn't mean we are
+ * done. If there is still some jump history left, we should keep going. We
+ * need to take into account that we might have a jump history between given
+ * state's parent and itself, due to checkpointing. In this case, we'll have
+ * history entry recording a jump from last instruction of parent state and
+ * first instruction of given state.
*/
static int get_prev_insn_idx(struct bpf_verifier_state *st, int i,
u32 *history)
{
u32 cnt = *history;
+ if (i == st->first_insn_idx) {
+ if (cnt == 0)
+ return -ENOENT;
+ if (cnt == 1 && st->jmp_history[0].idx == i)
+ return -ENOENT;
+ }
+
if (cnt && st->jmp_history[cnt - 1].idx == i) {
i = st->jmp_history[cnt - 1].prev_idx;
(*history)--;
@@ -4542,10 +4559,10 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
* Nothing to be tracked further in the parent state.
*/
return 0;
- if (i == first_idx)
- break;
subseq_idx = i;
i = get_prev_insn_idx(st, i, &history);
+ if (i == -ENOENT)
+ break;
if (i >= env->prog->len) {
/* This can happen if backtracking reached insn 0
* and there are still reg_mask or stack_mask
--
2.34.1
* [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-08 23:11 [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg() Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration Andrii Nakryiko
@ 2023-11-08 23:11 ` Andrii Nakryiko
2023-11-09 22:00 ` Eduard Zingerman
2023-11-10 1:26 ` Alexei Starovoitov
2023-11-08 23:11 ` [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg() Andrii Nakryiko
3 siblings, 2 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-08 23:11 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Hao Sun
When a BPF program is verified in privileged mode, the BPF verifier allows
bounded loops. This means that from the CFG point of view there are
definitely some back-edges. The original commit adjusted check_cfg() logic
to not detect back-edges in the control flow graph if they result from
conditional jumps, with the idea that the subsequent full BPF verification
process will determine whether such loops are bounded or not, and either
accept or reject the BPF program. At least that's my reading of the intent.
Unfortunately, the implementation of this idea doesn't work correctly in
all possible situations. A conditional jump might not result in an
immediate back-edge; just a few unconditional instructions later we can
arrive at a back-edge. In such situations check_cfg() would reject the BPF
program even in privileged mode, despite it possibly being a bounded loop.
The next patch adds one simple program demonstrating such a scenario.
So, this patch fixes this limitation by tracking not just the immediate
conditional jump, but also all subsequent instructions reached through such
a conditional branch. For that we store a new flag, CONDITIONAL, along with
the current DISCOVERED, EXPLORED, BRANCH, and FALLTHROUGH flags.
Conditional jump instructions force the CONDITIONAL flag, and in all other
situations we "inherit" this flag based on whether we arrived at a given
instruction with the CONDITIONAL flag during the discovery step.
Note, this approach doesn't detect some obvious infinite loops during
check_cfg() if they happen inside a conditional code path. This could be
fixed with a more sophisticated DFS state implementation, where we'd
remember some sort of "conditional epoch", so that if a sequence of jumps
or sequential instructions leads to a back-edge within the same epoch,
that's a loop within the same branch.
But I didn't add that, for two reasons. First, subsequent BPF verifier
logic will detect and prevent this anyway, and it's easy to do the same
with just conditional jumps, so there isn't much of a difference in
supporting this.
Second, this conditional jump branch might never be taken and thus be dead
code. And it seems desirable to be able to express "this shall not be
executed, otherwise we'll while(true){}" logic as a kind of unreachable
guard. So we keep things simple and allow this dead code elimination
approach to work.
Note a few test changes. For unknown reasons, we have a few tests that are
specified to detect a back-edge in privileged mode, but looking at their
code it seems like the right outcome is passing check_cfg() and letting
subsequent verification make a decision about bounded or unbounded looping.
The bounded recursion case is also interesting. The example should pass, as
recursion is limited to just a few levels and so we never reach the maximum
number of nested frames and never exhaust the maximum stack depth. But the
way the max stack depth logic works today, it falsely detects this as
exceeding the max nested frame count. This patch series doesn't attempt to
fix this orthogonal problem, so we just adjust the expected verifier failure.
Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
Reported-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/verifier.c | 45 +++++++++----------
.../selftests/bpf/progs/verifier_cfg.c | 4 +-
.../selftests/bpf/progs/verifier_loops1.c | 9 ++--
3 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index edca7f1ad335..35065cae98b7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15433,8 +15433,9 @@ static int check_return_code(struct bpf_verifier_env *env, int regno)
enum {
DISCOVERED = 0x10,
EXPLORED = 0x20,
- FALLTHROUGH = 1,
- BRANCH = 2,
+ CONDITIONAL = 0x01,
+ FALLTHROUGH = 0x02,
+ BRANCH = 0x04,
};
static void mark_prune_point(struct bpf_verifier_env *env, int idx)
@@ -15468,16 +15469,15 @@ enum {
* w - next instruction
* e - edge
*/
-static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
- bool loop_ok)
+static int push_insn(int t, int w, int e, struct bpf_verifier_env *env)
{
int *insn_stack = env->cfg.insn_stack;
int *insn_state = env->cfg.insn_state;
- if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
+ if ((e & FALLTHROUGH) && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
return DONE_EXPLORING;
- if (e == BRANCH && insn_state[t] >= (DISCOVERED | BRANCH))
+ if ((e & BRANCH) && insn_state[t] >= (DISCOVERED | BRANCH))
return DONE_EXPLORING;
if (w < 0 || w >= env->prog->len) {
@@ -15486,7 +15486,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
return -EINVAL;
}
- if (e == BRANCH) {
+ if (e & BRANCH) {
/* mark branch target for state pruning */
mark_prune_point(env, w);
mark_jmp_point(env, w);
@@ -15495,13 +15495,13 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
if (insn_state[w] == 0) {
/* tree-edge */
insn_state[t] = DISCOVERED | e;
- insn_state[w] = DISCOVERED;
+ insn_state[w] = DISCOVERED | (e & CONDITIONAL);
if (env->cfg.cur_stack >= env->prog->len)
return -E2BIG;
insn_stack[env->cfg.cur_stack++] = w;
return KEEP_EXPLORING;
- } else if ((insn_state[w] & 0xF0) == DISCOVERED) {
- if (loop_ok && env->bpf_capable)
+ } else if (insn_state[w] & DISCOVERED) {
+ if ((e & CONDITIONAL) && env->bpf_capable)
return DONE_EXPLORING;
verbose_linfo(env, t, "%d: ", t);
verbose_linfo(env, w, "%d: ", w);
@@ -15521,10 +15521,11 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
struct bpf_verifier_env *env,
bool visit_callee)
{
- int ret, insn_sz;
+ int ret, insn_sz, cond;
+ cond = env->cfg.insn_state[t] & CONDITIONAL;
insn_sz = bpf_is_ldimm64(&insns[t]) ? 2 : 1;
- ret = push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
+ ret = push_insn(t, t + insn_sz, FALLTHROUGH | cond, env);
if (ret)
return ret;
@@ -15534,12 +15535,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
if (visit_callee) {
mark_prune_point(env, t);
- ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env,
- /* It's ok to allow recursion from CFG point of
- * view. __check_func_call() will do the actual
- * check.
- */
- bpf_pseudo_func(insns + t));
+ ret = push_insn(t, t + insns[t].imm + 1, BRANCH | cond, env);
}
return ret;
}
@@ -15552,16 +15548,18 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
static int visit_insn(int t, struct bpf_verifier_env *env)
{
struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
- int ret, off, insn_sz;
+ int ret, off, insn_sz, cond;
if (bpf_pseudo_func(insn))
return visit_func_call_insn(t, insns, env, true);
+ cond = env->cfg.insn_state[t] & CONDITIONAL;
+
/* All non-branch instructions have a single fall-through edge. */
if (BPF_CLASS(insn->code) != BPF_JMP &&
BPF_CLASS(insn->code) != BPF_JMP32) {
insn_sz = bpf_is_ldimm64(insn) ? 2 : 1;
- return push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
+ return push_insn(t, t + insn_sz, FALLTHROUGH | cond, env);
}
switch (BPF_OP(insn->code)) {
@@ -15608,8 +15606,7 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
off = insn->imm;
/* unconditional jump with single edge */
- ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
- true);
+ ret = push_insn(t, t + off + 1, FALLTHROUGH | cond, env);
if (ret)
return ret;
@@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
/* conditional jump with two edges */
mark_prune_point(env, t);
- ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
+ ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
if (ret)
return ret;
- return push_insn(t, t + insn->off + 1, BRANCH, env, true);
+ return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
}
}
diff --git a/tools/testing/selftests/bpf/progs/verifier_cfg.c b/tools/testing/selftests/bpf/progs/verifier_cfg.c
index df7697b94007..65d205474f33 100644
--- a/tools/testing/selftests/bpf/progs/verifier_cfg.c
+++ b/tools/testing/selftests/bpf/progs/verifier_cfg.c
@@ -57,7 +57,7 @@ __naked void out_of_range_jump2(void)
SEC("socket")
__description("loop (back-edge)")
-__failure __msg("unreachable insn 1")
+__failure __msg("back-edge")
__msg_unpriv("back-edge")
__naked void loop_back_edge(void)
{
@@ -69,7 +69,7 @@ l0_%=: goto l0_%=; \
SEC("socket")
__description("loop2 (back-edge)")
-__failure __msg("unreachable insn 4")
+__failure __msg("back-edge")
__msg_unpriv("back-edge")
__naked void loop2_back_edge(void)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
index 5bc86af80a9a..71735dbf33d4 100644
--- a/tools/testing/selftests/bpf/progs/verifier_loops1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -75,9 +75,10 @@ l0_%=: r0 += 1; \
" ::: __clobber_all);
}
-SEC("tracepoint")
+SEC("socket")
__description("bounded loop, start in the middle")
-__failure __msg("back-edge")
+__success
+__failure_unpriv __msg_unpriv("back-edge")
__naked void loop_start_in_the_middle(void)
{
asm volatile (" \
@@ -136,7 +137,9 @@ l0_%=: exit; \
SEC("tracepoint")
__description("bounded recursion")
-__failure __msg("back-edge")
+__failure
+/* verifier limitation in detecting max stack depth */
+__msg("the call stack of 8 frames is too deep !")
__naked void bounded_recursion(void)
{
asm volatile (" \
--
2.34.1
* [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg()
2023-11-08 23:11 [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes Andrii Nakryiko
` (2 preceding siblings ...)
2023-11-08 23:11 ` [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode Andrii Nakryiko
@ 2023-11-08 23:11 ` Andrii Nakryiko
2023-11-09 22:21 ` Eduard Zingerman
3 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-08 23:11 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Hao Sun
Add a few more simple cases to validate proper privileged vs unprivileged
loop detection behavior. conditional_loop2 is the one reported by Hao
Sun that triggered this set of fixes.
Suggested-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
.../selftests/bpf/progs/verifier_cfg.c | 62 +++++++++++++++++++
1 file changed, 62 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_cfg.c b/tools/testing/selftests/bpf/progs/verifier_cfg.c
index 65d205474f33..bba622814123 100644
--- a/tools/testing/selftests/bpf/progs/verifier_cfg.c
+++ b/tools/testing/selftests/bpf/progs/verifier_cfg.c
@@ -97,4 +97,66 @@ l0_%=: r2 = r0; \
" ::: __clobber_all);
}
+SEC("socket")
+__description("conditional loop (2)")
+__success
+__failure_unpriv __msg_unpriv("back-edge from insn 10 to 11")
+__naked void conditional_loop2(void)
+{
+ asm volatile (" \
+ r9 = 2 ll; \
+ r3 = 0x20 ll; \
+ r4 = 0x35 ll; \
+ r8 = r4; \
+ goto l1_%=; \
+l0_%=: r9 -= r3; \
+ r9 -= r4; \
+ r9 -= r8; \
+l1_%=: r8 += r4; \
+ if r8 < 0x64 goto l0_%=; \
+ r0 = r9; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("socket")
+__description("unconditional loop after conditional jump")
+__failure __msg("infinite loop detected")
+__failure_unpriv __msg_unpriv("back-edge from insn 3 to 2")
+__naked void uncond_loop_after_cond_jmp(void)
+{
+ asm volatile (" \
+ r0 = 0; \
+ if r0 > 0 goto l1_%=; \
+l0_%=: r0 = 1; \
+ goto l0_%=; \
+l1_%=: exit; \
+" ::: __clobber_all);
+}
+
+
+__naked __noinline __used
+static unsigned long never_ending_subprog()
+{
+ asm volatile (" \
+ r0 = r1; \
+ goto -1; \
+" ::: __clobber_all);
+}
+
+SEC("socket")
+__description("unconditional loop after conditional jump")
+/* infinite loop is detected *after* check_cfg() */
+__failure __msg("infinite loop detected")
+__naked void uncond_loop_in_subprog_after_cond_jmp(void)
+{
+ asm volatile (" \
+ r0 = 0; \
+ if r0 > 0 goto l1_%=; \
+l0_%=: r0 += 1; \
+ call never_ending_subprog; \
+l1_%=: exit; \
+" ::: __clobber_all);
+}
+
char _license[] SEC("license") = "GPL";
--
2.34.1
* Re: [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration
2023-11-08 23:11 ` [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration Andrii Nakryiko
@ 2023-11-09 17:20 ` Eduard Zingerman
2023-11-09 23:18 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2023-11-09 17:20 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team
On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
> Fix an edge case in __mark_chain_precision() which prematurely stops
> backtracking instructions in a state if it happens that the state's first
> and last instruction indexes are the same. This situation doesn't
> necessarily mean that there were no instructions simulated in a state,
> but rather that we started from that instruction, jumped around a bit,
> and then ended up at the same instruction before checkpointing or
> marking precision.
>
> To distinguish between these two possible situations, we need to consult
> the jump history. If it's empty or contains a single record "bridging" the
> parent state and the first instruction of the processed state, then we
> indeed backtracked all instructions in this state. But if the history is
> not empty, we are definitely not done yet.
>
> Move this logic inside get_prev_insn_idx() to contain it more nicely.
> Use -ENOENT return code to denote "we are out of instructions"
> situation.
>
> This bug was exposed by verifier_cfg.c's bounded_recursion subtest, once
Note: verifier_cfg.c should be verifier_loops1.c
> the next fix in this patch set is applied.
>
> Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Funny how nobody noticed this bug for so long, I looked at exactly
this code today while going through your other patch-set and no alarm
bells rang in my head.
I think that this case needs a dedicated test case that would check
precision tracking log.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-08 23:11 ` [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode Andrii Nakryiko
@ 2023-11-09 22:00 ` Eduard Zingerman
2023-11-09 23:25 ` Andrii Nakryiko
2023-11-10 1:26 ` Alexei Starovoitov
1 sibling, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2023-11-09 22:00 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Hao Sun
On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
(given that I understood check in push_insn correctly).
[...]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index edca7f1ad335..35065cae98b7 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15433,8 +15433,9 @@ static int check_return_code(struct bpf_verifier_env *env, int regno)
Nitpick: there is a comment right above this enum which has to be
updated after changes to the enum.
> enum {
> DISCOVERED = 0x10,
> EXPLORED = 0x20,
> - FALLTHROUGH = 1,
> - BRANCH = 2,
> + CONDITIONAL = 0x01,
> + FALLTHROUGH = 0x02,
> + BRANCH = 0x04,
> };
>
> static void mark_prune_point(struct bpf_verifier_env *env, int idx)
> @@ -15468,16 +15469,15 @@ enum {
> * w - next instruction
> * e - edge
> */
> -static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> - bool loop_ok)
> +static int push_insn(int t, int w, int e, struct bpf_verifier_env *env)
> {
> int *insn_stack = env->cfg.insn_stack;
> int *insn_state = env->cfg.insn_state;
>
> - if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> + if ((e & FALLTHROUGH) && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> return DONE_EXPLORING;
This is not related to your changes, but '>=' here is so confusing.
The intent is to check:
((insn_state[t] & (DISCOVERED | FALLTHROUGH)) == (DISCOVERED | FALLTHROUGH))
i.e. DONE_EXPLORING if fall-through branch of 't' had been explored already,
right?
[...]
* Re: [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg()
2023-11-08 23:11 ` [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg() Andrii Nakryiko
@ 2023-11-09 22:21 ` Eduard Zingerman
0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2023-11-09 22:21 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Hao Sun
On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
> Add a few more simple cases to validate proper privileged vs unprivileged
> loop detection behavior. conditional_loop2 is the one reported by Hao
> Sun that triggered this set of fixes.
>
> Suggested-by: Hao Sun <sunhao.th@gmail.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg()
2023-11-08 23:11 ` [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg() Andrii Nakryiko
@ 2023-11-09 22:25 ` Eduard Zingerman
0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2023-11-09 22:25 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Hao Sun
On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
> ldimm64 instructions are 16 bytes long, and so have to be handled
> appropriately in check_cfg(), just like the rest of the BPF verifier does.
>
> This has implications in three places:
> - when determining next instruction for non-jump instructions;
> - when determining next instruction for callback address ldimm64
> instructions (in visit_func_call_insn());
> - when checking for unreachable instructions, where the second half of
> ldimm64 is expected to be unreachable.
>
> We also take this as an opportunity to report jumps into the middle of an
> ldimm64 instruction, and adjust a few test_verifier tests accordingly.
>
> Reported-by: Hao Sun <sunhao.th@gmail.com>
> Fixes: 475fb78fbf48 ("bpf: verifier (add branch/goto checks)")
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration
2023-11-09 17:20 ` Eduard Zingerman
@ 2023-11-09 23:18 ` Andrii Nakryiko
2023-11-09 23:28 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-09 23:18 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Thu, Nov 9, 2023 at 9:20 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
> > Fix an edge case in __mark_chain_precision() which prematurely stops
> > backtracking instructions in a state if it happens that the state's first
> > and last instruction indexes are the same. This situation doesn't
> > necessarily mean that there were no instructions simulated in a state,
> > but rather that we started from that instruction, jumped around a bit,
> > and then ended up at the same instruction before checkpointing or
> > marking precision.
> >
> > To distinguish between these two possible situations, we need to consult
> > the jump history. If it's empty or contains a single record "bridging" the
> > parent state and the first instruction of the processed state, then we
> > indeed backtracked all instructions in this state. But if the history is
> > not empty, we are definitely not done yet.
> >
> > Move this logic inside get_prev_insn_idx() to contain it more nicely.
> > Use -ENOENT return code to denote "we are out of instructions"
> > situation.
> >
> > This bug was exposed by verifier_cfg.c's bounded_recursion subtest, once
>
> Note: verifier_cfg.c should be verifier_loops1.c
fixed
>
> > the next fix in this patch set is applied.
> >
> > Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>
> Funny how nobody noticed this bug for so long, I looked at exactly
> this code today while going through your other patch-set and no alarm
> bells rang in my head.
tell me about it, I spent 3 hours with gdb and tons of extra verifier
logging before I found the issue
>
> I think that this case needs a dedicated test case that would check
> precision tracking log.
ok, will add
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-09 22:00 ` Eduard Zingerman
@ 2023-11-09 23:25 ` Andrii Nakryiko
0 siblings, 0 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-09 23:25 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team,
Hao Sun
On Thu, Nov 9, 2023 at 2:00 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> (given that I understood check in push_insn correctly).
>
> [...]
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index edca7f1ad335..35065cae98b7 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -15433,8 +15433,9 @@ static int check_return_code(struct bpf_verifier_env *env, int regno)
>
> Nitpick: there is a comment right above this enum which has to be
> updated after changes to the enum.
yep, I also mentioned the conditional bit there. Now it's 0x12 or 0x13 for
the FALLTHROUGH case, and 0x14 or 0x15 for BRANCH.
>
> > enum {
> > DISCOVERED = 0x10,
> > EXPLORED = 0x20,
> > - FALLTHROUGH = 1,
> > - BRANCH = 2,
> > + CONDITIONAL = 0x01,
> > + FALLTHROUGH = 0x02,
> > + BRANCH = 0x04,
> > };
> >
> > static void mark_prune_point(struct bpf_verifier_env *env, int idx)
> > @@ -15468,16 +15469,15 @@ enum {
> > * w - next instruction
> > * e - edge
> > */
> > -static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> > - bool loop_ok)
> > +static int push_insn(int t, int w, int e, struct bpf_verifier_env *env)
> > {
> > int *insn_stack = env->cfg.insn_stack;
> > int *insn_state = env->cfg.insn_state;
> >
> > - if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> > + if ((e & FALLTHROUGH) && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> > return DONE_EXPLORING;
>
> This not related to your changes, but '>=' here is so confusing.
Agreed. It's a real head-scratcher, how check_cfg() is implemented.
> The intent is to check:
> ((insn_state[t] & (DISCOVERED | FALLTHROUGH)) == (DISCOVERED | FALLTHROUGH))
> i.e. DONE_EXPLORING if fall-through branch of 't' had been explored already,
> right?
I think the intent is to distinguish pure DISCOVERED vs (DISCOVERED |
e) (where e is either FALLTHROUGH or BRANCH). Pure DISCOVERED means
that node is enqueued (in a stack, enstacked ;), but hasn't been
processed yet, while FALLTHROUGH|BRANCH means we are processing it
(but it's not yet fully EXPLORED).
CONDITIONAL is oblivious to FALLTHROUGH|BRANCH, so I put it as 1 so
that this >= check ignores it.
>
> [...]
* Re: [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration
2023-11-09 23:18 ` Andrii Nakryiko
@ 2023-11-09 23:28 ` Andrii Nakryiko
2023-11-09 23:37 ` Eduard Zingerman
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-09 23:28 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Thu, Nov 9, 2023 at 3:18 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Nov 9, 2023 at 9:20 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Wed, 2023-11-08 at 15:11 -0800, Andrii Nakryiko wrote:
> > > Fix an edge case in __mark_chain_precision() which prematurely stops
> > > backtracking instructions in a state if it happens that state's first
> > > and last instruction indexes are the same. This situation doesn't
> > > necessarily mean that no instructions were simulated in a state,
> > > but rather that we started from the instruction, jumped around a bit,
> > > and then ended up at the same instruction before checkpointing or
> > > marking precision.
> > >
> > > To distinguish between these two possible situations, we need to consult
> > > jump history. If it's empty or contains a single record "bridging" the
> > > parent state and the first instruction of the processed state, then we indeed
> > > backtracked all instructions in this state. But if history is not empty,
> > > we are definitely not done yet.
> > >
> > > Move this logic inside get_prev_insn_idx() to contain it more nicely.
> > > Use -ENOENT return code to denote "we are out of instructions"
> > > situation.
> > >
> > > This bug was exposed by verifier_cfg.c's bounded_recursion subtest, once
> >
> > Note: verifier_cfg.c should be verifier_loops1.c
>
> fixed
>
> >
> > > the next fix in this patch set is applied.
> > >
> > > Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> >
> > Funny how nobody noticed this bug for so long, I looked at exactly
> > this code today while going through your other patch-set and no alarm
> > bells rang in my head.
>
> tell me about it, I spent 3 hours with gdb and tons of extra verifier
> logging before I found the issue
>
> >
> > I think that this case needs a dedicated test case that would check
> > precision tracking log.
>
> ok, will add
>
But I will say that it would be much better if verifier/precise.c was
converted to embedded assembly... Let's see if we can somehow
negotiate completing test_verifier conversion? ;)
> >
> > Acked-by: Eduard Zingerman <eddyz87@gmail.com>
* Re: [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration
2023-11-09 23:28 ` Andrii Nakryiko
@ 2023-11-09 23:37 ` Eduard Zingerman
0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2023-11-09 23:37 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team
On Thu, 2023-11-09 at 15:28 -0800, Andrii Nakryiko wrote:
[...]
> > > I think that this case needs a dedicated test case that would check
> > > precision tracking log.
> >
> > ok, will add
> >
>
> But I will say that it would be much better if verifier/precise.c was
> converted to embedded assembly... Let's see if we can somehow
> negotiate completing test_verifier conversion? ;)
I'll take a look at what can be done for precise.c over the weekend.
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-08 23:11 ` [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode Andrii Nakryiko
2023-11-09 22:00 ` Eduard Zingerman
@ 2023-11-10 1:26 ` Alexei Starovoitov
2023-11-10 3:41 ` Andrii Nakryiko
1 sibling, 1 reply; 18+ messages in thread
From: Alexei Starovoitov @ 2023-11-10 1:26 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau,
Kernel Team, Hao Sun
On Wed, Nov 8, 2023 at 3:12 PM Andrii Nakryiko <andrii@kernel.org> wrote:
>
>
> @@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
> /* conditional jump with two edges */
> mark_prune_point(env, t);
>
> - ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
> + ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
> if (ret)
> return ret;
>
> - return push_insn(t, t + insn->off + 1, BRANCH, env, true);
> + return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
If I'm reading this correctly, after the first conditional jmp
both fallthrough and branch become sticky with the CONDITIONAL flag.
So all processing after first 'if a == b' insn is running
with loop_ok==true.
If so, all this complexity is not worth it. Let's just remove 'loop_ok' flag.
Also
if (ret) return ret;
in the above looks like an existing bug.
It probably should be if (ret < 0) return ret;
Otherwise it's probably possible to craft a prog where fallthrough
is explored and in such case the branch target will be ignored.
Not a safety issue, since the verifier will walk that path anyway,
but a bug in check_cfg nevertheless.
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-10 1:26 ` Alexei Starovoitov
@ 2023-11-10 3:41 ` Andrii Nakryiko
2023-11-10 4:08 ` Alexei Starovoitov
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-10 3:41 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team, Hao Sun
On Thu, Nov 9, 2023 at 5:26 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Nov 8, 2023 at 3:12 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> >
> > @@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
> > /* conditional jump with two edges */
> > mark_prune_point(env, t);
> >
> > - ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
> > + ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
> > if (ret)
> > return ret;
> >
> > - return push_insn(t, t + insn->off + 1, BRANCH, env, true);
> > + return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
>
> If I'm reading this correctly, after the first conditional jmp
> both fallthrough and branch become sticky with the CONDITIONAL flag.
> So all processing after first 'if a == b' insn is running
> with loop_ok==true.
> If so, all this complexity is not worth it. Let's just remove 'loop_ok' flag.
So you mean just always assume loop_ok, right?
>
> Also
> if (ret) return ret;
> in the above looks like an existing bug.
> It probably should be if (ret < 0) return ret;
yeah, probably should be error-handling case
> Otherwise it's probably possible to craft a prog where fallthrough
> is explored and in such case the branch target will be ignored.
> Not a safety issue, since the verifier will walk that path anyway,
> but a bug in check_cfg nevertheless.
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-10 3:41 ` Andrii Nakryiko
@ 2023-11-10 4:08 ` Alexei Starovoitov
2023-11-10 5:31 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Alexei Starovoitov @ 2023-11-10 4:08 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team, Hao Sun
On Thu, Nov 9, 2023 at 7:41 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Nov 9, 2023 at 5:26 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Wed, Nov 8, 2023 at 3:12 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > >
> > >
> > > @@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
> > > /* conditional jump with two edges */
> > > mark_prune_point(env, t);
> > >
> > > - ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
> > > + ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
> > > if (ret)
> > > return ret;
> > >
> > > - return push_insn(t, t + insn->off + 1, BRANCH, env, true);
> > > + return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
> >
> > If I'm reading this correctly, after the first conditional jmp
> > both fallthrough and branch become sticky with the CONDITIONAL flag.
> > So all processing after first 'if a == b' insn is running
> > with loop_ok==true.
> > If so, all this complexity is not worth it. Let's just remove 'loop_ok' flag.
>
> So you mean just always assume loop_ok, right?
yes.
Since not detecting the loop early only adds more cycles later
that states_maybe_looping() should catch quickly enough.
> >
> > Also
> > if (ret) return ret;
> > in the above looks like an existing bug.
> > It probably should be if (ret < 0) return ret;
>
> yeah, probably should be error-handling case
I thought that refactoring
commit 59e2e27d227a ("bpf: Refactor check_cfg to use a structured loop.")
is responsible...
but it looks to be an older bug.
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-10 4:08 ` Alexei Starovoitov
@ 2023-11-10 5:31 ` Andrii Nakryiko
2023-11-10 5:33 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-10 5:31 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team, Hao Sun
On Thu, Nov 9, 2023 at 8:08 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Nov 9, 2023 at 7:41 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Nov 9, 2023 at 5:26 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Wed, Nov 8, 2023 at 3:12 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > > >
> > > >
> > > > @@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
> > > > /* conditional jump with two edges */
> > > > mark_prune_point(env, t);
> > > >
> > > > - ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
> > > > + ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
> > > > if (ret)
> > > > return ret;
> > > >
> > > > - return push_insn(t, t + insn->off + 1, BRANCH, env, true);
> > > > + return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
> > >
> > > If I'm reading this correctly, after the first conditional jmp
> > > both fallthrough and branch become sticky with the CONDITIONAL flag.
> > > So all processing after first 'if a == b' insn is running
> > > with loop_ok==true.
> > > If so, all this complexity is not worth it. Let's just remove 'loop_ok' flag.
> >
> > So you mean just always assume loop_ok, right?
>
> yes.
> Since not detecting the loop early only adds more cycles later
> that states_maybe_looping() should catch quickly enough.
>
> > >
> > > Also
> > > if (ret) return ret;
> > > in the above looks like an existing bug.
> > > It probably should be if (ret < 0) return ret;
> >
> > yeah, probably should be error-handling case
>
> I thought that refactoring
> commit 59e2e27d227a ("bpf: Refactor check_cfg to use a structured loop.")
> is responsible...
> but it looks to be an older bug.
No, I think it was indeed 59e2e27d227a that changed this. Previously we had
if (ret == 1)
...
if (ret < 0)
goto err;
/* ret == 0 */
I'll add the fix on top of another fix.
* Re: [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode
2023-11-10 5:31 ` Andrii Nakryiko
@ 2023-11-10 5:33 ` Andrii Nakryiko
0 siblings, 0 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2023-11-10 5:33 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team, Hao Sun
On Thu, Nov 9, 2023 at 9:31 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Nov 9, 2023 at 8:08 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Nov 9, 2023 at 7:41 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Thu, Nov 9, 2023 at 5:26 PM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Wed, Nov 8, 2023 at 3:12 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > > > >
> > > > >
> > > > > @@ -15622,11 +15619,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
> > > > > /* conditional jump with two edges */
> > > > > mark_prune_point(env, t);
> > > > >
> > > > > - ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
> > > > > + ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
> > > > > if (ret)
> > > > > return ret;
> > > > >
> > > > > - return push_insn(t, t + insn->off + 1, BRANCH, env, true);
> > > > > + return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
> > > >
> > > > If I'm reading this correctly, after the first conditional jmp
> > > > both fallthrough and branch become sticky with the CONDITIONAL flag.
> > > > So all processing after first 'if a == b' insn is running
> > > > with loop_ok==true.
> > > > If so, all this complexity is not worth it. Let's just remove 'loop_ok' flag.
> > >
> > > So you mean just always assume loop_ok, right?
> >
> > yes.
> > Since not detecting the loop early only adds more cycles later
> > that states_maybe_looping() should catch quickly enough.
> >
> > > >
> > > > Also
> > > > if (ret) return ret;
> > > > in the above looks like an existing bug.
> > > > It probably should be if (ret < 0) return ret;
> > >
> > > yeah, probably should be error-handling case
> >
> > I thought that refactoring
> > commit 59e2e27d227a ("bpf: Refactor check_cfg to use a structured loop.")
> > is responsible...
> > but it looks to be an older bug.
>
> No, I think it was indeed 59e2e27d227a that changed this. Previously we had
>
> if (ret == 1)
> ...
> if (ret < 0)
> goto err;
> /* ret == 0 */
>
> I'll add the fix on top of another fix.
I take it back:
Summary: 748 PASSED, 0 SKIPPED, 43 FAILED
I'm not touching this. Some other time, or maybe someone else.
end of thread, other threads:[~2023-11-10 7:16 UTC | newest]
Thread overview: 18+ messages
2023-11-08 23:11 [PATCH bpf-next 0/4] BPF control flow graph and precision backtrack fixes Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 1/4] bpf: handle ldimm64 properly in check_cfg() Andrii Nakryiko
2023-11-09 22:25 ` Eduard Zingerman
2023-11-08 23:11 ` [PATCH bpf-next 2/4] bpf: fix precision backtracking instruction iteration Andrii Nakryiko
2023-11-09 17:20 ` Eduard Zingerman
2023-11-09 23:18 ` Andrii Nakryiko
2023-11-09 23:28 ` Andrii Nakryiko
2023-11-09 23:37 ` Eduard Zingerman
2023-11-08 23:11 ` [PATCH bpf-next 3/4] bpf: fix control-flow graph checking in privileged mode Andrii Nakryiko
2023-11-09 22:00 ` Eduard Zingerman
2023-11-09 23:25 ` Andrii Nakryiko
2023-11-10 1:26 ` Alexei Starovoitov
2023-11-10 3:41 ` Andrii Nakryiko
2023-11-10 4:08 ` Alexei Starovoitov
2023-11-10 5:31 ` Andrii Nakryiko
2023-11-10 5:33 ` Andrii Nakryiko
2023-11-08 23:11 ` [PATCH bpf-next 4/4] selftests/bpf: add more test cases for check_cfg() Andrii Nakryiko
2023-11-09 22:21 ` Eduard Zingerman