* [PATCH bpf 0/3] bpf: improve verifier resilience
@ 2018-12-04 6:46 Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 1/3] bpf: check pending signals while verifying programs Alexei Starovoitov
From: Alexei Starovoitov @ 2018-12-04 6:46 UTC
To: David S . Miller; +Cc: daniel, ecree, anatoly.trosinenko, netdev, kernel-team
Three patches to improve the verifier's ability to handle pathological BPF programs
with a lot of branches:
- make sure prog_load syscall can be aborted
- improve branch taken analysis
- introduce per-insn complexity limit for unprivileged programs
Alexei Starovoitov (3):
bpf: check pending signals while verifying programs
bpf: improve verifier branch analysis
bpf: add per-insn complexity limit
kernel/bpf/verifier.c | 103 +++++++++++++++++---
tools/testing/selftests/bpf/test_verifier.c | 4 +-
2 files changed, 91 insertions(+), 16 deletions(-)
--
2.17.1
* [PATCH bpf 1/3] bpf: check pending signals while verifying programs
2018-12-04 6:46 [PATCH bpf 0/3] bpf: improve verifier resilience Alexei Starovoitov
@ 2018-12-04 6:46 ` Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 2/3] bpf: improve verifier branch analysis Alexei Starovoitov
From: Alexei Starovoitov @ 2018-12-04 6:46 UTC
To: David S . Miller; +Cc: daniel, ecree, anatoly.trosinenko, netdev, kernel-team
Malicious user space may try to force the verifier to use as much CPU time
and memory as possible, so check for pending signals while verifying the program.
Note that suspending the sys_bpf(PROG_LOAD) syscall will lead to -EAGAIN,
since the kernel has to release the resources used for program verification.
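
As context, a minimal user-space sketch (a hypothetical helper, not part of
this patch or of any existing library) of how a loader could react to the
new -EAGAIN:

#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int bpf_prog_load_retry(union bpf_attr *attr)
{
	int fd;

	do {
		/* the verifier now bails out with -EAGAIN if a signal
		 * arrived (e.g. the loading task was suspended) while
		 * the program was being verified
		 */
		fd = syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
	} while (fd < 0 && errno == EAGAIN);

	return fd;
}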
Reported-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Edward Cree <ecree@solarflare.com>
---
kernel/bpf/verifier.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6dd419550aba..751bb30b7c5c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5148,6 +5148,9 @@ static int do_check(struct bpf_verifier_env *env)
 				goto process_bpf_exit;
 			}
 
+		if (signal_pending(current))
+			return -EAGAIN;
+
 		if (need_resched())
 			cond_resched();
 
--
2.17.1
* [PATCH bpf 2/3] bpf: improve verifier branch analysis
2018-12-04 6:46 [PATCH bpf 0/3] bpf: improve verifier resilience Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 1/3] bpf: check pending signals while verifying programs Alexei Starovoitov
@ 2018-12-04 6:46 ` Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 3/3] bpf: add per-insn complexity limit Alexei Starovoitov
2018-12-04 16:23 ` [PATCH bpf 0/3] bpf: improve verifier resilience Daniel Borkmann
From: Alexei Starovoitov @ 2018-12-04 6:46 UTC
To: David S . Miller; +Cc: daniel, ecree, anatoly.trosinenko, netdev, kernel-team
Pathological BPF programs may try to force the verifier to explode in
the number of branch states, for example:
20: (d5) if r1 s<= 0x24000028 goto pc+0
21: (b5) if r0 <= 0xe1fa20 goto pc+2
22: (d5) if r1 s<= 0x7e goto pc+0
23: (b5) if r0 <= 0xe880e000 goto pc+0
24: (c5) if r0 s< 0x2100ecf4 goto pc+0
25: (d5) if r1 s<= 0xe880e000 goto pc+1
26: (c5) if r0 s< 0xf4041810 goto pc+0
27: (d5) if r1 s<= 0x1e007e goto pc+0
28: (b5) if r0 <= 0xe86be000 goto pc+0
29: (07) r0 += 16614
30: (c5) if r0 s< 0x6d0020da goto pc+0
31: (35) if r0 >= 0x2100ecf4 goto pc+0
Teach the verifier to recognize always-taken and never-taken branches.
This analysis is already done for the == and != comparisons; expand it
to all other conditional branches.
It also helps real BPF programs verify faster (verifier steps):
before after
bpf_lb-DLB_L3.o 2003 1940
bpf_lb-DLB_L4.o 3173 3089
bpf_lb-DUNKNOWN.o 1080 1065
bpf_lxc-DDROP_ALL.o 29584 28052
bpf_lxc-DUNKNOWN.o 36916 35487
bpf_netdev.o 11188 10864
bpf_overlay.o 6679 6643
bpf_lxc_jit.o 39555 38437
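
The core idea, as a minimal stand-alone sketch (simplified here to the
unsigned BPF_JLE case; the actual is_branch_taken() added below covers all
opcodes, plus constant equality via tnums):

#include <stdint.h>

/* simplified model of the verifier's knowledge about a scalar register */
struct ureg_bounds { uint64_t umin, umax; };

/* predict "if (reg <= val)": 1 = always taken, 0 = never taken,
 * -1 = could go either way, so both branch states must be explored
 */
static int branch_taken_jle(struct ureg_bounds reg, uint64_t val)
{
	if (reg.umax <= val)	/* every possible value satisfies reg <= val */
		return 1;
	if (reg.umin > val)	/* no possible value satisfies reg <= val */
		return 0;
	return -1;
}

With a hypothetical r0 range of [0, 0xe1fa20], insn 21 above ("if r0 <=
0xe1fa20") becomes known always-taken, so the verifier follows only the goto
instead of forking a second state for the fall-through.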
Reported-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Edward Cree <ecree@solarflare.com>
---
kernel/bpf/verifier.c | 93 ++++++++++++++++++---
tools/testing/selftests/bpf/test_verifier.c | 4 +-
2 files changed, 82 insertions(+), 15 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 751bb30b7c5c..55a49703f423 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3751,6 +3751,79 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
 	}
 }
 
+/* compute branch direction of the expression "if (reg opcode val) goto target;"
+ * and return:
+ *  1 - branch will be taken and "goto target" will be executed
+ *  0 - branch will not be taken and fall-through to next insn
+ * -1 - unknown. Example: "if (reg < 5)" is unknown when register value range [0,10]
+ */
+static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
+{
+	if (__is_pointer_value(false, reg))
+		return -1;
+
+	switch (opcode) {
+	case BPF_JEQ:
+		if (tnum_is_const(reg->var_off))
+			return !!tnum_equals_const(reg->var_off, val);
+		break;
+	case BPF_JNE:
+		if (tnum_is_const(reg->var_off))
+			return !tnum_equals_const(reg->var_off, val);
+		break;
+	case BPF_JGT:
+		if (reg->umin_value > val)
+			return 1;
+		else if (reg->umax_value <= val)
+			return 0;
+		break;
+	case BPF_JSGT:
+		if (reg->smin_value > (s64)val)
+			return 1;
+		else if (reg->smax_value < (s64)val)
+			return 0;
+		break;
+	case BPF_JLT:
+		if (reg->umax_value < val)
+			return 1;
+		else if (reg->umin_value >= val)
+			return 0;
+		break;
+	case BPF_JSLT:
+		if (reg->smax_value < (s64)val)
+			return 1;
+		else if (reg->smin_value >= (s64)val)
+			return 0;
+		break;
+	case BPF_JGE:
+		if (reg->umin_value >= val)
+			return 1;
+		else if (reg->umax_value < val)
+			return 0;
+		break;
+	case BPF_JSGE:
+		if (reg->smin_value >= (s64)val)
+			return 1;
+		else if (reg->smax_value < (s64)val)
+			return 0;
+		break;
+	case BPF_JLE:
+		if (reg->umax_value <= val)
+			return 1;
+		else if (reg->umin_value > val)
+			return 0;
+		break;
+	case BPF_JSLE:
+		if (reg->smax_value <= (s64)val)
+			return 1;
+		else if (reg->smin_value > (s64)val)
+			return 0;
+		break;
+	}
+
+	return -1;
+}
+
 /* Adjusts the register min/max values in the case that the dst_reg is the
  * variable register that we are working on, and src_reg is a constant or we're
  * simply doing a BPF_K check.
@@ -4152,21 +4225,15 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 
 	dst_reg = &regs[insn->dst_reg];
 
-	/* detect if R == 0 where R was initialized to zero earlier */
-	if (BPF_SRC(insn->code) == BPF_K &&
-	    (opcode == BPF_JEQ || opcode == BPF_JNE) &&
-	    dst_reg->type == SCALAR_VALUE &&
-	    tnum_is_const(dst_reg->var_off)) {
-		if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) ||
-		    (opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) {
-			/* if (imm == imm) goto pc+off;
-			 * only follow the goto, ignore fall-through
-			 */
+	if (BPF_SRC(insn->code) == BPF_K) {
+		int pred = is_branch_taken(dst_reg, insn->imm, opcode);
+
+		if (pred == 1) {
+			/* only follow the goto, ignore fall-through */
 			*insn_idx += insn->off;
 			return 0;
-		} else {
-			/* if (imm != imm) goto pc+off;
-			 * only follow fall-through branch, since
+		} else if (pred == 0) {
+			/* only follow fall-through branch, since
 			 * that's where the program will go
 			 */
 			return 0;
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 5dd4410a716c..df6f751cc1e8 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -8576,7 +8576,7 @@ static struct bpf_test tests[] = {
 			BPF_JMP_IMM(BPF_JA, 0, 0, -7),
 		},
 		.fixup_map_hash_8b = { 4 },
-		.errstr = "R0 invalid mem access 'inv'",
+		.errstr = "unbounded min value",
 		.result = REJECT,
 	},
 	{
@@ -10547,7 +10547,7 @@ static struct bpf_test tests[] = {
 		"check deducing bounds from const, 5",
 		.insns = {
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
 			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 			BPF_EXIT_INSN(),
 		},
--
2.17.1
* [PATCH bpf 3/3] bpf: add per-insn complexity limit
2018-12-04 6:46 [PATCH bpf 0/3] bpf: improve verifier resilience Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 1/3] bpf: check pending signals while verifying programs Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 2/3] bpf: improve verifier branch analysis Alexei Starovoitov
@ 2018-12-04 6:46 ` Alexei Starovoitov
2018-12-04 16:23 ` [PATCH bpf 0/3] bpf: improve verifier resilience Daniel Borkmann
From: Alexei Starovoitov @ 2018-12-04 6:46 UTC
To: David S . Miller; +Cc: daniel, ecree, anatoly.trosinenko, netdev, kernel-team
A malicious BPF program may try to force the verifier to remember
a lot of distinct verifier states.
Put a limit on the number of per-insn 'struct bpf_verifier_state'.
Note that hitting the limit doesn't reject the program; it potentially
makes the verifier take more steps to analyze the program, which means
malicious programs will hit BPF_COMPLEXITY_LIMIT_INSNS sooner
instead of spending CPU time walking a long linked list.
The limit of BPF_COMPLEXITY_LIMIT_STATES==64 affects Cilium programs
with a slight increase in the number of "steps" it takes to successfully
verify them:
before after
bpf_lb-DLB_L3.o 1940 1940
bpf_lb-DLB_L4.o 3089 3089
bpf_lb-DUNKNOWN.o 1065 1065
bpf_lxc-DDROP_ALL.o 28052 28162
bpf_lxc-DUNKNOWN.o 35487 35541
bpf_netdev.o 10864 10864
bpf_overlay.o 6643 6643
bpf_lxc_jit.o 38437 38437
But it also makes a malicious program be rejected in 0.4 seconds instead of 6.5.
Hence apply this limit to unprivileged programs only.
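
The policy, as a minimal stand-alone sketch (names and types here are
illustrative simplifications of the is_state_visited() change below):

/* illustrative stand-in for the per-insn list of remembered states */
struct state_list {
	struct state_list *next;
	/* ... verifier state used for pruning equivalence checks ... */
};

#define STATES_LIMIT 64		/* mirrors BPF_COMPLEXITY_LIMIT_STATES */

/* after the current state was compared against every remembered state at
 * this insn, decide whether to append it to the list; once an unprivileged
 * program has STATES_LIMIT states here, stop growing the list so that
 * future visits to this insn don't walk an ever longer chain
 */
static int should_remember_state(const struct state_list *sl, int privileged)
{
	int states_cnt = 0;

	for (; sl; sl = sl->next)
		states_cnt++;

	return privileged || states_cnt <= STATES_LIMIT;
}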
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Edward Cree <ecree@solarflare.com>
---
kernel/bpf/verifier.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 55a49703f423..fc760d00a38c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -175,6 +175,7 @@ struct bpf_verifier_stack_elem {
 
 #define BPF_COMPLEXITY_LIMIT_INSNS	131072
 #define BPF_COMPLEXITY_LIMIT_STACK	1024
+#define BPF_COMPLEXITY_LIMIT_STATES	64
 
 #define BPF_MAP_PTR_UNPRIV	1UL
 #define BPF_MAP_PTR_POISON	((void *)((0xeB9FUL << 1) +	\
@@ -5047,7 +5048,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	struct bpf_verifier_state_list *new_sl;
 	struct bpf_verifier_state_list *sl;
 	struct bpf_verifier_state *cur = env->cur_state, *new;
-	int i, j, err;
+	int i, j, err, states_cnt = 0;
 
 	sl = env->explored_states[insn_idx];
 	if (!sl)
@@ -5074,8 +5075,12 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 			return 1;
 		}
 		sl = sl->next;
+		states_cnt++;
 	}
 
+	if (!env->allow_ptr_leaks && states_cnt > BPF_COMPLEXITY_LIMIT_STATES)
+		return 0;
+
 	/* there were no equivalent states, remember current one.
 	 * technically the current state is not proven to be safe yet,
 	 * but it will either reach outer most bpf_exit (which means it's safe)
--
2.17.1
* Re: [PATCH bpf 0/3] bpf: improve verifier resilience
2018-12-04 6:46 [PATCH bpf 0/3] bpf: improve verifier resilience Alexei Starovoitov
2018-12-04 6:46 ` [PATCH bpf 3/3] bpf: add per-insn complexity limit Alexei Starovoitov
@ 2018-12-04 16:23 ` Daniel Borkmann
From: Daniel Borkmann @ 2018-12-04 16:23 UTC
To: Alexei Starovoitov, David S . Miller
Cc: ecree, anatoly.trosinenko, netdev, kernel-team
On 12/04/2018 07:46 AM, Alexei Starovoitov wrote:
> Three patches to improve the verifier's ability to handle pathological BPF programs
> with a lot of branches:
> - make sure prog_load syscall can be aborted
> - improve branch taken analysis
> - introduce per-insn complexity limit for unprivileged programs
>
> Alexei Starovoitov (3):
> bpf: check pending signals while verifying programs
> bpf: improve verifier branch analysis
> bpf: add per-insn complexity limit
>
> kernel/bpf/verifier.c | 103 +++++++++++++++++---
> tools/testing/selftests/bpf/test_verifier.c | 4 +-
> 2 files changed, 91 insertions(+), 16 deletions(-)
>
Applied to bpf, thanks!