* [PATCH stable/4.14 00/14] BPF stable patches for 4.14
From: Daniel Borkmann @ 2017-12-22 15:22 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
Hi Greg,
please consider the following backported fixes for 4.14 inclusion.
Thanks a lot!
Alexei Starovoitov (2):
bpf: fix branch pruning logic
bpf: fix integer overflows
Daniel Borkmann (4):
bpf: fix corruption on concurrent perf_event_output calls
bpf, s390x: do not reload skb pointers in non-skb context
bpf, ppc64: do not reload skb pointers in non-skb context
bpf, sparc: fix usage of wrong reg for load_skb_regs after call
Edward Cree (1):
bpf/verifier: fix bounds calculation on BPF_RSH
Jann Horn (7):
bpf: fix incorrect sign extension in check_alu_op()
bpf: fix incorrect tracking of register size truncation
bpf: fix 32-bit ALU op verification
bpf: fix missing error return in check_stack_boundary()
bpf: force strict alignment checks for stack pointers
bpf: don't prune branches when a scalar is replaced with a pointer
selftests/bpf: add tests for recent bugfixes
arch/powerpc/net/bpf_jit_comp64.c | 6 +-
arch/s390/net/bpf_jit_comp.c | 11 +-
arch/sparc/net/bpf_jit_comp_64.c | 6 +-
include/linux/bpf_verifier.h | 6 +-
kernel/bpf/verifier.c | 202 +++++++---
kernel/trace/bpf_trace.c | 19 +-
tools/testing/selftests/bpf/test_verifier.c | 549 +++++++++++++++++++++++++++-
7 files changed, 714 insertions(+), 85 deletions(-)
--
2.9.5
* [PATCH stable/4.14 01/14] bpf: fix branch pruning logic
From: Daniel Borkmann @ 2017-12-22 15:22 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable, Alexei Starovoitov
From: Alexei Starovoitov <ast@fb.com>
[ Upstream commit c131187db2d3fa2f8bf32fdf4e9a4ef805168467 ]
When the verifier detects that a register contains a runtime constant
and it is compared with another constant, it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but a malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such a path through the program will not be explored
by the verifier. It won't be taken at run time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code; llvm simply doesn't do as much data flow
analysis as the verifier does.
Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
include/linux/bpf_verifier.h | 2 +-
kernel/bpf/verifier.c | 27 +++++++++++++++++++++++++++
2 files changed, 28 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b8d200f..5d6de3b 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -110,7 +110,7 @@ struct bpf_insn_aux_data {
struct bpf_map *map_ptr; /* pointer for call insn into lookup_elem */
};
int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
- int converted_op_size; /* the valid value width after perceived conversion */
+ bool seen; /* this insn was processed by the verifier */
};
#define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c48ca2a..201d6b9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3665,6 +3665,7 @@ static int do_check(struct bpf_verifier_env *env)
if (err)
return err;
+ env->insn_aux_data[insn_idx].seen = true;
if (class == BPF_ALU || class == BPF_ALU64) {
err = check_alu_op(env, insn);
if (err)
@@ -3855,6 +3856,7 @@ static int do_check(struct bpf_verifier_env *env)
return err;
insn_idx++;
+ env->insn_aux_data[insn_idx].seen = true;
} else {
verbose("invalid BPF_LD mode\n");
return -EINVAL;
@@ -4035,6 +4037,7 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len,
u32 off, u32 cnt)
{
struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
+ int i;
if (cnt == 1)
return 0;
@@ -4044,6 +4047,8 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len,
memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
memcpy(new_data + off + cnt - 1, old_data + off,
sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+ for (i = off; i < off + cnt - 1; i++)
+ new_data[i].seen = true;
env->insn_aux_data = new_data;
vfree(old_data);
return 0;
@@ -4062,6 +4067,25 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
return new_prog;
}
+/* The verifier does more data flow analysis than llvm and will not explore
+ * branches that are dead at run time. Malicious programs can have dead code
+ * too. Therefore replace all dead at-run-time code with nops.
+ */
+static void sanitize_dead_code(struct bpf_verifier_env *env)
+{
+ struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+ struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+ struct bpf_insn *insn = env->prog->insnsi;
+ const int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++) {
+ if (aux_data[i].seen)
+ continue;
+ memcpy(insn + i, &nop, sizeof(nop));
+ }
+}
+
/* convert load instructions that access fields of 'struct __sk_buff'
* into sequence of instructions that access fields of 'struct sk_buff'
*/
@@ -4379,6 +4403,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
free_states(env);
if (ret == 0)
+ sanitize_dead_code(env);
+
+ if (ret == 0)
/* program is valid, convert *(u32*)(ctx + off) accesses */
ret = convert_ctx_accesses(env);
--
2.9.5
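To make the mechanism concrete, here is a minimal user-space sketch of the
dead-code sanitizing idea, not the kernel code itself; the toy instruction
and aux structures and the opcode values are purely illustrative:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* toy stand-ins for struct bpf_insn / struct bpf_insn_aux_data */
struct toy_insn { unsigned char code; int imm; };
struct toy_aux  { bool seen; };

/* replace every instruction the verifier walk never reached with a NOP
 * (the kernel uses "mov64 r0, r0" for this) instead of rejecting the program
 */
static void sanitize_dead_code(struct toy_insn *insns, struct toy_aux *aux,
			       int insn_cnt)
{
	const struct toy_insn nop = { .code = 0xbf, .imm = 0 };
	int i;

	for (i = 0; i < insn_cnt; i++) {
		if (aux[i].seen)
			continue;
		memcpy(&insns[i], &nop, sizeof(nop));
	}
}

int main(void)
{
	struct toy_insn prog[] = { {0xb7, 1}, {0x15, 2}, {0xb7, 3}, {0x95, 0} };
	struct toy_aux aux[]   = { {true},   {true},    {false},   {true} };
	int i;

	sanitize_dead_code(prog, aux, 4);
	for (i = 0; i < 4; i++)
		printf("insn %d: code=0x%02x imm=%d%s\n", i, prog[i].code,
		       prog[i].imm, aux[i].seen ? "" : "  <- NOPed out");
	return 0;
}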
* [PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
[ Upstream commit 283ca526a9bd75aed7350220d7b1f8027d99c3fd ]
When tracing and networking programs are both attached in the
system and both use event-output helpers that eventually call
into perf_event_output(), then we could end up in a situation
where the tracing-attached program runs in user context while
a cls_bpf program is triggered on that same CPU out of softirq
context.
Since both rely on the same per-cpu perf_sample_data, we could
potentially corrupt it. This can only ever happen in a combination
of the two types; all tracing programs use a bpf_prog_active
counter to bail out in case a program is already running on
that CPU out of a different context. XDP and cls_bpf programs
by themselves don't have this issue as they only ever run in the
same context. Therefore, split the per-cpu perf_sample_data into
two instances so the two contexts cannot corrupt each other's data.
Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
Reported-by: Alexei Starovoitov <ast@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Song Liu <songliubraving@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
kernel/trace/bpf_trace.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index dc498b6..6350f64 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -293,14 +293,13 @@ static const struct bpf_func_proto bpf_perf_event_read_proto = {
.arg2_type = ARG_ANYTHING,
};
-static DEFINE_PER_CPU(struct perf_sample_data, bpf_sd);
+static DEFINE_PER_CPU(struct perf_sample_data, bpf_trace_sd);
static __always_inline u64
__bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
- u64 flags, struct perf_raw_record *raw)
+ u64 flags, struct perf_sample_data *sd)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
- struct perf_sample_data *sd = this_cpu_ptr(&bpf_sd);
unsigned int cpu = smp_processor_id();
u64 index = flags & BPF_F_INDEX_MASK;
struct bpf_event_entry *ee;
@@ -323,8 +322,6 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
if (unlikely(event->oncpu != cpu))
return -EOPNOTSUPP;
- perf_sample_data_init(sd, 0, 0);
- sd->raw = raw;
perf_event_output(event, sd, regs);
return 0;
}
@@ -332,6 +329,7 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
u64, flags, void *, data, u64, size)
{
+ struct perf_sample_data *sd = this_cpu_ptr(&bpf_trace_sd);
struct perf_raw_record raw = {
.frag = {
.size = size,
@@ -342,7 +340,10 @@ BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
return -EINVAL;
- return __bpf_perf_event_output(regs, map, flags, &raw);
+ perf_sample_data_init(sd, 0, 0);
+ sd->raw = &raw;
+
+ return __bpf_perf_event_output(regs, map, flags, sd);
}
static const struct bpf_func_proto bpf_perf_event_output_proto = {
@@ -357,10 +358,12 @@ static const struct bpf_func_proto bpf_perf_event_output_proto = {
};
static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs);
+static DEFINE_PER_CPU(struct perf_sample_data, bpf_misc_sd);
u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
{
+ struct perf_sample_data *sd = this_cpu_ptr(&bpf_misc_sd);
struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs);
struct perf_raw_frag frag = {
.copy = ctx_copy,
@@ -378,8 +381,10 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
};
perf_fetch_caller_regs(regs);
+ perf_sample_data_init(sd, 0, 0);
+ sd->raw = &raw;
- return __bpf_perf_event_output(regs, map, flags, &raw);
+ return __bpf_perf_event_output(regs, map, flags, sd);
}
BPF_CALL_0(bpf_get_current_task)
--
2.9.5
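As a rough user-space sketch of the clash being fixed (all names here are
made up, no kernel APIs involved): two code paths that can nest on one CPU
previously shared a single scratch sample, so the nested path could clobber
the outer one's data; giving each path its own slot avoids that.

#include <stdio.h>

struct sample { int raw; };

static struct sample shared_sd;		/* old scheme: one shared slot */
static struct sample trace_sd, misc_sd;	/* new scheme: one slot per context */

/* stands in for a cls_bpf/XDP program firing from softirq on the same CPU */
static void softirq_event_output(struct sample *sd)
{
	sd->raw = 2222;
}

int main(void)
{
	/* old scheme: task-context tracing program fills the shared slot ... */
	shared_sd.raw = 1111;
	/* ... and a softirq lands before the data is consumed */
	softirq_event_output(&shared_sd);
	printf("shared slot: %d (tracing data lost)\n", shared_sd.raw);

	/* new scheme: each context owns its slot, nesting cannot corrupt it */
	trace_sd.raw = 1111;
	softirq_event_output(&misc_sd);
	printf("trace slot: %d, misc slot: %d\n", trace_sd.raw, misc_sd.raw);
	return 0;
}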
* [PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable, Michael Holzheu
[ Upstream commit 6d59b7dbf72ed20d0138e2f9b75ca3d4a9d4faca ]
The assumption of unconditionally reloading skb pointers on
BPF helper calls where bpf_helper_changes_pkt_data() holds
true is wrong. There can be different contexts where the
BPF helper would enforce a reload, such as in the case of XDP:
here, we have a struct xdp_buff instead of a struct sk_buff
as context, thus this will access garbage.
JITs only ever need to deal with the cached skb pointer reload
when ld_abs/ind was seen; therefore guard the reload behind
SEEN_SKB only. Tested on s390x.
Fixes: 9db7f2b81880 ("s390/bpf: recache skb->data/hlen for skb_vlan_push/pop")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
arch/s390/net/bpf_jit_comp.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index b15cd2f..33e2785 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -55,8 +55,7 @@ struct bpf_jit {
#define SEEN_LITERAL 8 /* code uses literals */
#define SEEN_FUNC 16 /* calls C functions */
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
-#define SEEN_SKB_CHANGE 64 /* code changes skb data */
-#define SEEN_REG_AX 128 /* code uses constant blinding */
+#define SEEN_REG_AX 64 /* code uses constant blinding */
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
/*
@@ -448,12 +447,12 @@ static void bpf_jit_prologue(struct bpf_jit *jit)
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
REG_15, 152);
}
- if (jit->seen & SEEN_SKB)
+ if (jit->seen & SEEN_SKB) {
emit_load_skb_data_hlen(jit);
- if (jit->seen & SEEN_SKB_CHANGE)
/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_1, REG_0, REG_15,
STK_OFF_SKBP);
+ }
}
/*
@@ -983,8 +982,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
EMIT2(0x0d00, REG_14, REG_W1);
/* lgr %b0,%r2: load return value into %b0 */
EMIT4(0xb9040000, BPF_REG_0, REG_2);
- if (bpf_helper_changes_pkt_data((void *)func)) {
- jit->seen |= SEEN_SKB_CHANGE;
+ if ((jit->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data((void *)func)) {
/* lg %b1,ST_OFF_SKBP(%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0004, BPF_REG_1, REG_0,
REG_15, STK_OFF_SKBP);
--
2.9.5
* [PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
[ Upstream commit 87338c8e2cbb317b5f757e6172f94e2e3799cd20 ]
The assumption of unconditionally reloading skb pointers on
BPF helper calls where bpf_helper_changes_pkt_data() holds
true is wrong. There can be different contexts where the helper
would enforce a reload, such as in the case of XDP: here, we
have a struct xdp_buff instead of a struct sk_buff as context,
thus this will access garbage.
JITs only ever need to deal with the cached skb pointer reload
when ld_abs/ind was seen; therefore guard the reload behind
SEEN_SKB.
Fixes: 156d0e290e96 ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
arch/powerpc/net/bpf_jit_comp64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index a66e64b..5d115bd 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -762,7 +762,8 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
func = (u8 *) __bpf_call_base + imm;
/* Save skb pointer if we need to re-cache skb data */
- if (bpf_helper_changes_pkt_data(func))
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func))
PPC_BPF_STL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_func_call(image, ctx, (u64)func);
@@ -771,7 +772,8 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
PPC_MR(b2p[BPF_REG_0], 3);
/* refresh skb cache */
- if (bpf_helper_changes_pkt_data(func)) {
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func)) {
/* reload skb pointer to r3 */
PPC_BPF_LL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_skb_loads(image, ctx);
--
2.9.5
* [PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
[ Upstream commit 07aee94394547721ac168cbf4e1c09c14a5fe671 ]
When LD_ABS/IND is used in the program, and we have a BPF helper
call that changes packet data (bpf_helper_changes_pkt_data() returns
true), then in the case of the sparc JIT we try to reload the cached
skb data from bpf2sparc[BPF_REG_6]. However, there is no guarantee or
assumption that the skb sits in R6 at this point; all helpers changing
skb data only guarantee that the skb sits in R1. Therefore, store BPF
R1 in L7 temporarily and after the procedure call use L7 to reload the
cached skb data. The skb sitting in R6 is only guaranteed at the time
when LD_ABS/IND is executed.
Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
arch/sparc/net/bpf_jit_comp_64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 5765e7e..ff5f9cb 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1245,14 +1245,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
u8 *func = ((u8 *)__bpf_call_base) + imm;
ctx->saw_call = true;
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ emit_reg_move(bpf2sparc[BPF_REG_1], L7, ctx);
emit_call((u32 *)func, ctx);
emit_nop(ctx);
emit_reg_move(O0, bpf2sparc[BPF_REG_0], ctx);
- if (bpf_helper_changes_pkt_data(func) && ctx->saw_ld_abs_ind)
- load_skb_regs(ctx, bpf2sparc[BPF_REG_6]);
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ load_skb_regs(ctx, L7);
break;
}
--
2.9.5
* [PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable, Edward Cree
From: Edward Cree <ecree@solarflare.com>
[ Upstream commit 4374f256ce8182019353c0c639bb8d0695b4c941 ]
Incorrect signed bounds were being computed.
If the old upper signed bound was positive and the old lower signed bound was
negative, this could cause the new upper signed bound to be too low,
leading to security issues.
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[jannh@google.com: changed description to reflect bug impact]
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 30 ++++++++++++++++--------------
1 file changed, 16 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 201d6b9..c6ec04f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2157,20 +2157,22 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
mark_reg_unknown(regs, insn->dst_reg);
break;
}
- /* BPF_RSH is an unsigned shift, so make the appropriate casts */
- if (dst_reg->smin_value < 0) {
- if (umin_val) {
- /* Sign bit will be cleared */
- dst_reg->smin_value = 0;
- } else {
- /* Lost sign bit information */
- dst_reg->smin_value = S64_MIN;
- dst_reg->smax_value = S64_MAX;
- }
- } else {
- dst_reg->smin_value =
- (u64)(dst_reg->smin_value) >> umax_val;
- }
+ /* BPF_RSH is an unsigned shift. If the value in dst_reg might
+ * be negative, then either:
+ * 1) src_reg might be zero, so the sign bit of the result is
+ * unknown, so we lose our signed bounds
+ * 2) it's known negative, thus the unsigned bounds capture the
+ * signed bounds
+ * 3) the signed bounds cross zero, so they tell us nothing
+ * about the result
+ * If the value in dst_reg is known nonnegative, then again the
+ * unsigned bounts capture the signed bounds.
+ * Thus, in all cases it suffices to blow away our signed bounds
+ * and rely on inferring new ones from the unsigned bounds and
+ * var_off of the result.
+ */
+ dst_reg->smin_value = S64_MIN;
+ dst_reg->smax_value = S64_MAX;
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
--
2.9.5
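A small user-space arithmetic example (not kernel code) of why the signed
bounds have to be discarded here: when the old signed bounds cross zero,
logically shifting the two endpoints gives values in the wrong order, so
any upper signed bound derived from them can end up too low.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t smin = -8, smax = 7;	/* signed bounds that cross zero */
	unsigned int shift = 1;

	uint64_t lo = (uint64_t)smin >> shift;	/* 0x7ffffffffffffffc */
	uint64_t hi = (uint64_t)smax >> shift;	/* 0x3 */

	printf("(u64)smin >> %u = %#llx\n", shift, (unsigned long long)lo);
	printf("(u64)smax >> %u = %#llx\n", shift, (unsigned long long)hi);
	/* the shifted "minimum" is far larger than the shifted "maximum",
	 * so new signed bounds computed from the old ones are meaningless;
	 * resetting to [S64_MIN, S64_MAX] and re-deriving from the unsigned
	 * bounds, as the patch does, is the safe choice
	 */
	return 0;
}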
* [PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op()
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit 95a762e2c8c942780948091f8f2a4f32fce1ac6f ]
Distinguish between
BPF_ALU64|BPF_MOV|BPF_K (load 32-bit immediate, sign-extended to 64-bit)
and BPF_ALU|BPF_MOV|BPF_K (load 32-bit immediate, zero-padded to 64-bit);
only perform sign extension in the first case.
Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.
Debian assigned CVE-2017-16995 for this issue.
v3:
- add CVE number (Ben Hutchings)
Fixes: 484611357c19 ("bpf: allow access into map value arrays")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c6ec04f..2811ced 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2374,7 +2374,13 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
* remember the value we stored into this reg
*/
regs[insn->dst_reg].type = SCALAR_VALUE;
- __mark_reg_known(regs + insn->dst_reg, insn->imm);
+ if (BPF_CLASS(insn->code) == BPF_ALU64) {
+ __mark_reg_known(regs + insn->dst_reg,
+ insn->imm);
+ } else {
+ __mark_reg_known(regs + insn->dst_reg,
+ (u32)insn->imm);
+ }
}
} else if (opcode > BPF_END) {
--
2.9.5
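A quick user-space illustration (not kernel code) of the distinction the
fix enforces, for an immediate with the sign bit set:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int32_t imm = -1;	/* a 32-bit immediate with the sign bit set */

	/* BPF_ALU64|BPF_MOV|BPF_K: the immediate is sign-extended to 64 bit */
	uint64_t alu64_val = (uint64_t)(int64_t)imm;
	/* BPF_ALU|BPF_MOV|BPF_K: the immediate is zero-extended to 64 bit */
	uint64_t alu32_val = (uint64_t)(uint32_t)imm;

	printf("ALU64 MOV K -> %#018llx\n", (unsigned long long)alu64_val);
	printf("ALU32 MOV K -> %#018llx\n", (unsigned long long)alu32_val);
	/* 0xffffffffffffffff vs 0x00000000ffffffff: before the fix the
	 * verifier tracked both cases as the sign-extended value
	 */
	return 0;
}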
* [PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit 0c17d1d2c61936401f4702e1846e2c19b200f958 ]
Properly handle register truncation to a smaller size.
The old code first mirrors the clearing of the high 32 bits in the bitwise
tristate representation, which is correct. But then, it computes the new
arithmetic bounds as the intersection between the old arithmetic bounds and
the bounds resulting from the bitwise tristate representation. Therefore,
when coerce_reg_to_32() is called on a number with bounds
[0xffff'fff8, 0x1'0000'0007], the verifier computes
[0xffff'fff8, 0xffff'ffff] as bounds of the truncated number.
This is incorrect: The truncated number could also be in the range [0, 7],
and no meaningful arithmetic bounds can be computed in that case apart from
the obvious [0, 0xffff'ffff].
Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.
Debian assigned CVE-2017-16996 for this issue.
v2:
- flip the mask during arithmetic bounds calculation (Ben Hutchings)
v3:
- add CVE number (Ben Hutchings)
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2811ced..1e2afb2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1068,6 +1068,29 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
return check_generic_ptr_alignment(reg, pointer_desc, off, size, strict);
}
+/* truncate register to smaller size (in bytes)
+ * must be called with size < BPF_REG_SIZE
+ */
+static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
+{
+ u64 mask;
+
+ /* clear high bits in bit representation */
+ reg->var_off = tnum_cast(reg->var_off, size);
+
+ /* fix arithmetic bounds */
+ mask = ((u64)1 << (size * 8)) - 1;
+ if ((reg->umin_value & ~mask) == (reg->umax_value & ~mask)) {
+ reg->umin_value &= mask;
+ reg->umax_value &= mask;
+ } else {
+ reg->umin_value = 0;
+ reg->umax_value = mask;
+ }
+ reg->smin_value = reg->umin_value;
+ reg->smax_value = reg->umax_value;
+}
+
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
* if t==read, value_regno is a register which will receive the value from memory
@@ -1200,9 +1223,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
if (!err && size < BPF_REG_SIZE && value_regno >= 0 && t == BPF_READ &&
state->regs[value_regno].type == SCALAR_VALUE) {
/* b/h/w load zero-extends, mark upper bits as known 0 */
- state->regs[value_regno].var_off = tnum_cast(
- state->regs[value_regno].var_off, size);
- __update_reg_bounds(&state->regs[value_regno]);
+ coerce_reg_to_size(&state->regs[value_regno], size);
}
return err;
}
@@ -1742,14 +1763,6 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
return 0;
}
-static void coerce_reg_to_32(struct bpf_reg_state *reg)
-{
- /* clear high 32 bits */
- reg->var_off = tnum_cast(reg->var_off, 4);
- /* Update bounds */
- __update_reg_bounds(reg);
-}
-
static bool signed_add_overflows(s64 a, s64 b)
{
/* Do the add in u64, where overflow is well-defined */
@@ -1984,8 +1997,8 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->64 */
- coerce_reg_to_32(dst_reg);
- coerce_reg_to_32(&src_reg);
+ coerce_reg_to_size(dst_reg, 4);
+ coerce_reg_to_size(&src_reg, 4);
}
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
@@ -2364,10 +2377,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
return -EACCES;
}
mark_reg_unknown(regs, insn->dst_reg);
- /* high 32 bits are known zero. */
- regs[insn->dst_reg].var_off = tnum_cast(
- regs[insn->dst_reg].var_off, 4);
- __update_reg_bounds(&regs[insn->dst_reg]);
+ coerce_reg_to_size(&regs[insn->dst_reg], 4);
}
} else {
/* case: R = imm
--
2.9.5
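For reference, a user-space sketch of the arithmetic-bounds logic that
coerce_reg_to_size() introduces, run on the range from the commit message
(this is illustrative code, not the kernel function itself):

#include <stdint.h>
#include <stdio.h>

static void coerce_bounds(uint64_t *umin, uint64_t *umax, int size)
{
	uint64_t mask = ((uint64_t)1 << (size * 8)) - 1;

	if ((*umin & ~mask) == (*umax & ~mask)) {
		/* the bits above the truncated width agree, so truncation
		 * preserves the ordering of the bounds
		 */
		*umin &= mask;
		*umax &= mask;
	} else {
		/* truncation can wrap, only [0, mask] is safe */
		*umin = 0;
		*umax = mask;
	}
}

int main(void)
{
	uint64_t umin = 0xfffffff8ULL, umax = 0x100000007ULL;

	coerce_bounds(&umin, &umax, 4);
	printf("bounds after 32-bit truncation: [%#llx, %#llx]\n",
	       (unsigned long long)umin, (unsigned long long)umax);
	/* prints [0, 0xffffffff]; the old code claimed
	 * [0xfffffff8, 0xffffffff] and missed the values 0..7
	 */
	return 0;
}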
* [PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit 468f6eafa6c44cb2c5d8aad35e12f06c240a812a ]
32-bit ALU ops operate on 32-bit values and have 32-bit outputs.
Adjust the verifier accordingly.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1e2afb2..0c7e4c8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1984,6 +1984,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
return 0;
}
+/* WARNING: This function does calculations on 64-bit values, but the actual
+ * execution may occur on 32-bit values. Therefore, things like bitshifts
+ * need extra checks in the 32-bit case.
+ */
static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
@@ -1994,12 +1998,8 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
+ u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
- if (BPF_CLASS(insn->code) != BPF_ALU64) {
- /* 32-bit ALU ops are (32,32)->64 */
- coerce_reg_to_size(dst_reg, 4);
- coerce_reg_to_size(&src_reg, 4);
- }
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
@@ -2135,9 +2135,9 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
- if (umax_val > 63) {
- /* Shifts greater than 63 are undefined. This includes
- * shifts by a negative number.
+ if (umax_val >= insn_bitness) {
+ /* Shifts greater than 31 or 63 are undefined.
+ * This includes shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
@@ -2163,9 +2163,9 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
- if (umax_val > 63) {
- /* Shifts greater than 63 are undefined. This includes
- * shifts by a negative number.
+ if (umax_val >= insn_bitness) {
+ /* Shifts greater than 31 or 63 are undefined.
+ * This includes shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
@@ -2201,6 +2201,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
break;
}
+ if (BPF_CLASS(insn->code) != BPF_ALU64) {
+ /* 32-bit ALU ops are (32,32)->32 */
+ coerce_reg_to_size(dst_reg, 4);
+ coerce_reg_to_size(&src_reg, 4);
+ }
+
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
--
2.9.5
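The shift-count part of this change can be seen with plain C (user space,
illustrative only): a count of 32 is already out of range for a 32-bit
operation even though it is fine for the 64-bit variant.

#include <stdint.h>
#include <stdio.h>

/* mirrors the patched condition: a shift is only well-defined when the
 * count is strictly below the width of the instruction (32 or 64)
 */
static int shift_is_defined(uint64_t umax_count, unsigned int insn_bitness)
{
	return umax_count < insn_bitness;
}

int main(void)
{
	printf("count 32 on a 32-bit ALU op: %s\n",
	       shift_is_defined(32, 32) ? "defined" : "undefined");
	printf("count 32 on a 64-bit ALU op: %s\n",
	       shift_is_defined(32, 64) ? "defined" : "undefined");
	printf("count 63 on a 32-bit ALU op: %s\n",
	       shift_is_defined(63, 32) ? "defined" : "undefined");
	return 0;
}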
* [PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary()
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
Prevent indirect stack accesses at non-constant addresses, which would
permit reading and corrupting spilled pointers.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0c7e4c8..8aa98a0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1303,6 +1303,7 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
tnum_strn(tn_buf, sizeof(tn_buf), regs[regno].var_off);
verbose("invalid variable stack read R%d var_off=%s\n",
regno, tn_buf);
+ return -EACCES;
}
off = regs[regno].off + regs[regno].var_off.value;
if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
--
2.9.5
* [PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit a5ec6ae161d72f01411169a938fa5f8baea16e8f ]
Force strict alignment checks for stack pointers because the tracking of
stack spills relies on it; unaligned stack accesses can lead to corruption
of spilled registers, which is exploitable.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8aa98a0..8c35355 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1061,6 +1061,11 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
break;
case PTR_TO_STACK:
pointer_desc = "stack ";
+ /* The stack spill tracking logic in check_stack_write()
+ * and check_stack_read() relies on stack accesses being
+ * aligned.
+ */
+ strict = true;
break;
default:
break;
--
2.9.5
* [PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit 179d1c5602997fef5a940c6ddcf31212cbfebd14 ]
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
kernel/bpf/verifier.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8c35355..5a30eda 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3337,15 +3337,14 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
- /* if we knew anything about the old value, we're not
- * equal, because we can't know anything about the
- * scalar value of the pointer in the new value.
+ /* We're trying to use a pointer in place of a scalar.
+ * Even if the scalar was unbounded, this could lead to
+ * pointer leaks because scalars are allowed to leak
+ * while pointers are not. We could make this safe in
+ * special cases if root is calling us, but it's
+ * probably not worth the hassle.
*/
- return rold->umin_value == 0 &&
- rold->umax_value == U64_MAX &&
- rold->smin_value == S64_MIN &&
- rold->smax_value == S64_MAX &&
- tnum_is_unknown(rold->var_off);
+ return false;
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
--
2.9.5
* [PATCH stable/4.14 13/14] bpf: fix integer overflows
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Alexei Starovoitov <ast@kernel.org>
[ Upstream commit bb7f0f989ca7de1153bd128a40a71709e339fa03 ]
There were various issues related to the limited size of integers used in
the verifier:
- `off + size` overflow in __check_map_access()
- `off + reg->off` overflow in check_mem_access()
- `off + reg->var_off.value` overflow or 32-bit truncation of
`reg->var_off.value` in check_mem_access()
- 32-bit truncation in check_stack_boundary()
Make sure that any integer math cannot overflow by not allowing
pointer math with large values.
Also reduce the scope of "scalar op scalar" tracking.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
include/linux/bpf_verifier.h | 4 ++--
kernel/bpf/verifier.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 50 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5d6de3b..73bec75 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -15,11 +15,11 @@
* In practice this is far bigger than any realistic pointer offset; this limit
* ensures that umax_value + (int)off + (int)size cannot overflow a u64.
*/
-#define BPF_MAX_VAR_OFF (1ULL << 31)
+#define BPF_MAX_VAR_OFF (1 << 29)
/* Maximum variable size permitted for ARG_CONST_SIZE[_OR_ZERO]. This ensures
* that converting umax_value to int cannot overflow.
*/
-#define BPF_MAX_VAR_SIZ INT_MAX
+#define BPF_MAX_VAR_SIZ (1 << 29)
/* Liveness marks, used for registers and spilled-regs (in stack slots).
* Read marks propagate upwards until they find a write mark; they record that
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5a30eda..c5ff809 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1789,6 +1789,41 @@ static bool signed_sub_overflows(s64 a, s64 b)
return res > a;
}
+static bool check_reg_sane_offset(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ enum bpf_reg_type type)
+{
+ bool known = tnum_is_const(reg->var_off);
+ s64 val = reg->var_off.value;
+ s64 smin = reg->smin_value;
+
+ if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
+ verbose("math between %s pointer and %lld is not allowed\n",
+ reg_type_str[type], val);
+ return false;
+ }
+
+ if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
+ verbose("%s pointer offset %d is not allowed\n",
+ reg_type_str[type], reg->off);
+ return false;
+ }
+
+ if (smin == S64_MIN) {
+ verbose("math between %s pointer and register with unbounded min value is not allowed\n",
+ reg_type_str[type]);
+ return false;
+ }
+
+ if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
+ verbose("value %lld makes %s pointer be out of bounds\n",
+ smin, reg_type_str[type]);
+ return false;
+ }
+
+ return true;
+}
+
/* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
* Caller should also handle BPF_MOV case separately.
* If we return -EACCES, caller may want to try again treating pointer as a
@@ -1854,6 +1889,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
dst_reg->type = ptr_reg->type;
dst_reg->id = ptr_reg->id;
+ if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) ||
+ !check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
+ return -EINVAL;
+
switch (opcode) {
case BPF_ADD:
/* We can take a fixed offset as long as it doesn't overflow
@@ -1984,6 +2023,9 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
return -EACCES;
}
+ if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
+ return -EINVAL;
+
__update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
@@ -2013,6 +2055,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
+ if (!src_known &&
+ opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) {
+ __mark_reg_unknown(dst_reg);
+ return 0;
+ }
+
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
--
2.9.5
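As a back-of-the-envelope check (plain user-space C with illustrative
values) of why the limits are tightened to 1 << 29: with offsets close to
2^31 still allowed, summing a couple of them with an access size in 32-bit
arithmetic can wrap, while components capped at 2^29 keep any such sum
well inside an int.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t var_off = (1LL << 31) - 1;	/* permitted by the old limit */
	int64_t reg_off = 1LL << 30;
	int64_t size = 8;

	int64_t exact = var_off + reg_off + size;
	int32_t wrapped = (int32_t)exact;	/* what 32-bit math would see */

	printf("64-bit sum: %lld\n", (long long)exact);
	printf("32-bit sum: %d\n", wrapped);	/* negative: bounds check fooled */

	/* with the new cap, even three maximal components still fit easily */
	int64_t capped = 3 * ((1LL << 29) - 1);
	printf("3 * (2^29 - 1) = %lld, INT32_MAX = %d\n",
	       (long long)capped, INT32_MAX);
	return 0;
}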
* [PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes
From: Daniel Borkmann @ 2017-12-22 15:23 UTC (permalink / raw)
To: gregkh; +Cc: ast, daniel, jannh, stable
From: Jann Horn <jannh@google.com>
[ Upstream commit 2255f8d520b0a318fc6d387d0940854b2f522a7f ]
These tests should cover the following cases:
- MOV with both zero-extended and sign-extended immediates
- implicit truncation of register contents via ALU32/MOV32
- implicit 32-bit truncation of ALU32 output
- oversized register source operand for ALU32 shift
- right-shift of a number that could be positive or negative
- map access where adding the operation size to the offset causes signed
32-bit overflow
- direct stack access at a ~4GiB offset
Also remove the F_LOAD_WITH_STRICT_ALIGNMENT flag from a bunch of tests
that should fail independent of what flags userspace passes.
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
tools/testing/selftests/bpf/test_verifier.c | 549 +++++++++++++++++++++++++++-
1 file changed, 533 insertions(+), 16 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 64ae21f..7a2d221 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -606,7 +606,6 @@ static struct bpf_test tests[] = {
},
.errstr = "misaligned stack access",
.result = REJECT,
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"invalid map_fd for function call",
@@ -1797,7 +1796,6 @@ static struct bpf_test tests[] = {
},
.result = REJECT,
.errstr = "misaligned stack access off (0x0; 0x0)+-8+2 size 8",
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"PTR_TO_STACK store/load - bad alignment on reg",
@@ -1810,7 +1808,6 @@ static struct bpf_test tests[] = {
},
.result = REJECT,
.errstr = "misaligned stack access off (0x0; 0x0)+-10+8 size 8",
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"PTR_TO_STACK store/load - out of bounds low",
@@ -6115,7 +6112,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6139,7 +6136,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6165,7 +6162,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R8 invalid mem access 'inv'",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6190,7 +6187,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R8 invalid mem access 'inv'",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6238,7 +6235,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6309,7 +6306,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6360,7 +6357,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6387,7 +6384,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6413,7 +6410,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6442,7 +6439,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6472,7 +6469,7 @@ static struct bpf_test tests[] = {
BPF_JMP_IMM(BPF_JA, 0, 0, -7),
},
.fixup_map1 = { 4 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6500,8 +6497,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr_unpriv = "R0 pointer comparison prohibited",
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
.result_unpriv = REJECT,
},
@@ -6557,6 +6553,462 @@ static struct bpf_test tests[] = {
.result = REJECT,
},
{
+ "bounds check based on zero-extended MOV",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0x0000'0000'ffff'ffff */
+ BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0 */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check based on sign-extended MOV. test1",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0xffff'ffff'ffff'ffff */
+ BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0xffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
+ /* r0 = <oob pointer> */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access to OOB pointer */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 4294967295",
+ .result = REJECT
+ },
+ {
+ "bounds check based on sign-extended MOV. test2",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0xffff'ffff'ffff'ffff */
+ BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0xfff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36),
+ /* r0 = <oob pointer> */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access to OOB pointer */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 min value is outside of the array range",
+ .result = REJECT
+ },
+ {
+ "bounds check based on reg_off + var_off + insn_off. test1",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+ offsetof(struct __sk_buff, mark)),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 4 },
+ .errstr = "value_size=8 off=1073741825",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+ "bounds check based on reg_off + var_off + insn_off. test2",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+ offsetof(struct __sk_buff, mark)),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 4 },
+ .errstr = "value 1073741823",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+ "bounds check after truncation of non-boundary-crossing range",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_MOV64_IMM(BPF_REG_2, 1),
+ /* r2 = 0x10'0000'0000 */
+ BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 36),
+ /* r1 = [0x10'0000'0000, 0x10'0000'00ff] */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
+ /* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ /* r1 = [0x00, 0xff] */
+ BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 0x7fffffff),
+ /* r1 = 0 */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check after truncation of boundary-crossing range (1)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0x1'0000'007f] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0xffff'ffff] or
+ * [0x0000'0000, 0x0000'007f]
+ */
+ BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 0),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0x00, 0xff] or
+ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = 0 or
+ * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op or OOB pointer computation */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ /* not actually fully unbounded, but the bound is very high */
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check after truncation of boundary-crossing range (2)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0x1'0000'007f] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0xffff'ffff] or
+ * [0x0000'0000, 0x0000'007f]
+ * difference to previous test: truncation via MOV32
+ * instead of ALU32.
+ */
+ BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0x00, 0xff] or
+ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = 0 or
+ * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op or OOB pointer computation */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ /* not actually fully unbounded, but the bound is very high */
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check after wrapping 32-bit addition",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ /* r1 = 0x7fff'ffff */
+ BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
+ /* r1 = 0xffff'fffe */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ /* r1 = 0 */
+ BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check after shift with oversized count operand",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_IMM(BPF_REG_2, 32),
+ BPF_MOV64_IMM(BPF_REG_1, 1),
+ /* r1 = (u32)1 << (u32)32 = ? */
+ BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),
+ /* r1 = [0x0000, 0xffff] */
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xffff),
+ /* computes unknown pointer, potentially OOB */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 max value is outside of the array range",
+ .result = REJECT
+ },
+ {
+ "bounds check after right shift of maybe-negative number",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ /* r1 = [-0x01, 0xfe] */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+ /* r1 = 0 or 0xff'ffff'ffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* r1 = 0 or 0xffff'ffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* computes unknown pointer, potentially OOB */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test1",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 2147483646",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test2",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "pointer offset 1073741822",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test3",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "pointer offset -1073741822",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test4",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_MOV64_IMM(BPF_REG_1, 1000000),
+ BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 1000000000000",
+ .result = REJECT
+ },
+ {
+ "pointer/scalar confusion in state equality check (way 1)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(1),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R0 leaks addr as return value"
+ },
+ {
+ "pointer/scalar confusion in state equality check (way 2)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+ BPF_JMP_A(1),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R0 leaks addr as return value"
+ },
+ {
"variable-offset ctx access",
.insns = {
/* Get an unknown value */
@@ -6598,6 +7050,71 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_LWT_IN,
},
{
+ "indirect variable-offset stack access",
+ .insns = {
+ /* Fill the top 8 bytes of the stack */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ /* Get an unknown value */
+ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
+ /* Make it small and 4-byte aligned */
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8),
+ /* add it to fp. We now have either fp-4 or fp-8, but
+ * we don't know which
+ */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+ /* dereference it indirectly */
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 5 },
+ .errstr = "variable stack read R2",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+ {
+ "direct stack access with 32-bit wraparound. test1",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer and 2147483647",
+ .result = REJECT
+ },
+ {
+ "direct stack access with 32-bit wraparound. test2",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer and 1073741823",
+ .result = REJECT
+ },
+ {
+ "direct stack access with 32-bit wraparound. test3",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer offset 1073741822",
+ .result = REJECT
+ },
+ {
"liveness pruning and write screening",
.insns = {
/* Get an unknown value */
--
2.9.5
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH stable/4.14 00/14] BPF stable patches for 4.14
2017-12-22 15:22 [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Daniel Borkmann
` (13 preceding siblings ...)
2017-12-22 15:23 ` [PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes Daniel Borkmann
@ 2017-12-22 15:45 ` Greg KH
2017-12-22 15:48 ` Greg KH
2017-12-22 15:51 ` Greg KH
15 siblings, 1 reply; 32+ messages in thread
From: Greg KH @ 2017-12-22 15:45 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, jannh, stable
On Fri, Dec 22, 2017 at 04:22:58PM +0100, Daniel Borkmann wrote:
> Hi Greg,
>
> please consider the following backported fixes for 4.14 inclusion.
>
> Thanks a lot!
Nice!
Should I take any of these for 4.9? Or just the one that Jann sent
separately?
thanks,
greg k-h
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:10 +0100
Subject: [PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-13-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit 179d1c5602997fef5a940c6ddcf31212cbfebd14 ]
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
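For illustration, a minimal sketch of the pattern this guards against, mirroring
the "pointer/scalar confusion in state equality check" selftests from patch 14/14
(not code from this patch): two paths meet with r0 holding a scalar on one and a
stack pointer on the other; if the pointer state were still judged safe against
the old unknown-scalar state, the pointer path would be pruned and r0 could escape
as a return value.
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),        /* branch on map lookup result */
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), /* path A: r0 = scalar from map value */
BPF_JMP_A(1),
BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),         /* path B: r0 = frame pointer */
BPF_JMP_A(0),                                 /* paths meet; states compared here */
BPF_EXIT_INSN(),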
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3337,15 +3337,14 @@ static bool regsafe(struct bpf_reg_state
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
- /* if we knew anything about the old value, we're not
- * equal, because we can't know anything about the
- * scalar value of the pointer in the new value.
+ /* We're trying to use a pointer in place of a scalar.
+ * Even if the scalar was unbounded, this could lead to
+ * pointer leaks because scalars are allowed to leak
+ * while pointers are not. We could make this safe in
+ * special cases if root is calling us, but it's
+ * probably not worth the hassle.
*/
- return rold->umin_value == 0 &&
- rold->umax_value == U64_MAX &&
- rold->smin_value == S64_MIN &&
- rold->smax_value == S64_MAX &&
- tnum_is_unknown(rold->var_off);
+ return false;
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-32-bit-alu-op-verification.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:07 +0100
Subject: [PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-10-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit 468f6eafa6c44cb2c5d8aad35e12f06c240a812a ]
32-bit ALU ops operate on 32-bit values and have 32-bit outputs.
Adjust the verifier accordingly.
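As a quick illustration (a sketch mirroring the "bounds check after wrapping
32-bit addition" selftest from patch 14/14, not code from this patch): under
32-bit ALU semantics the last addition below wraps to 0, so the verifier has to
derive bounds from a 32-bit result rather than a 64-bit one.
BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),          /* r1 = 0x7fff'ffff */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff), /* r1 = 0xffff'fffe */
BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2),          /* 32-bit wrap: r1 = 0 */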
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1984,6 +1984,10 @@ static int adjust_ptr_min_max_vals(struc
return 0;
}
+/* WARNING: This function does calculations on 64-bit values, but the actual
+ * execution may occur on 32-bit values. Therefore, things like bitshifts
+ * need extra checks in the 32-bit case.
+ */
static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
@@ -1994,12 +1998,8 @@ static int adjust_scalar_min_max_vals(st
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
+ u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
- if (BPF_CLASS(insn->code) != BPF_ALU64) {
- /* 32-bit ALU ops are (32,32)->64 */
- coerce_reg_to_size(dst_reg, 4);
- coerce_reg_to_size(&src_reg, 4);
- }
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
@@ -2135,9 +2135,9 @@ static int adjust_scalar_min_max_vals(st
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
- if (umax_val > 63) {
- /* Shifts greater than 63 are undefined. This includes
- * shifts by a negative number.
+ if (umax_val >= insn_bitness) {
+ /* Shifts greater than 31 or 63 are undefined.
+ * This includes shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
@@ -2163,9 +2163,9 @@ static int adjust_scalar_min_max_vals(st
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
- if (umax_val > 63) {
- /* Shifts greater than 63 are undefined. This includes
- * shifts by a negative number.
+ if (umax_val >= insn_bitness) {
+ /* Shifts greater than 31 or 63 are undefined.
+ * This includes shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
@@ -2201,6 +2201,12 @@ static int adjust_scalar_min_max_vals(st
break;
}
+ if (BPF_CLASS(insn->code) != BPF_ALU64) {
+ /* 32-bit ALU ops are (32,32)->32 */
+ coerce_reg_to_size(dst_reg, 4);
+ coerce_reg_to_size(&src_reg, 4);
+ }
+
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 01/14] bpf: fix branch pruning logic" has been added to the 4.14-stable tree
2017-12-22 15:22 ` [PATCH stable/4.14 01/14] bpf: fix branch pruning logic Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, ast, gregkh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 01/14] bpf: fix branch pruning logic
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-branch-pruning-logic.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:22:59 +0100
Subject: [PATCH stable/4.14 01/14] bpf: fix branch pruning logic
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org, Alexei Starovoitov <ast@fb.com>
Message-ID: <20171222152312.2945-2-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Alexei Starovoitov <ast@fb.com>
[ Upstream commit c131187db2d3fa2f8bf32fdf4e9a4ef805168467 ]
when the verifier detects that register contains a runtime constant
and it's compared with another constant it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code, since it doesn't do as much data flow
analysis as the verifier does.
Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/bpf_verifier.h | 2 +-
kernel/bpf/verifier.c | 27 +++++++++++++++++++++++++++
2 files changed, 28 insertions(+), 1 deletion(-)
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -110,7 +110,7 @@ struct bpf_insn_aux_data {
struct bpf_map *map_ptr; /* pointer for call insn into lookup_elem */
};
int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
- int converted_op_size; /* the valid value width after perceived conversion */
+ bool seen; /* this insn was processed by the verifier */
};
#define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3665,6 +3665,7 @@ static int do_check(struct bpf_verifier_
if (err)
return err;
+ env->insn_aux_data[insn_idx].seen = true;
if (class == BPF_ALU || class == BPF_ALU64) {
err = check_alu_op(env, insn);
if (err)
@@ -3855,6 +3856,7 @@ process_bpf_exit:
return err;
insn_idx++;
+ env->insn_aux_data[insn_idx].seen = true;
} else {
verbose("invalid BPF_LD mode\n");
return -EINVAL;
@@ -4035,6 +4037,7 @@ static int adjust_insn_aux_data(struct b
u32 off, u32 cnt)
{
struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
+ int i;
if (cnt == 1)
return 0;
@@ -4044,6 +4047,8 @@ static int adjust_insn_aux_data(struct b
memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
memcpy(new_data + off + cnt - 1, old_data + off,
sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+ for (i = off; i < off + cnt - 1; i++)
+ new_data[i].seen = true;
env->insn_aux_data = new_data;
vfree(old_data);
return 0;
@@ -4062,6 +4067,25 @@ static struct bpf_prog *bpf_patch_insn_d
return new_prog;
}
+/* The verifier does more data flow analysis than llvm and will not explore
+ * branches that are dead at run time. Malicious programs can have dead code
+ * too. Therefore replace all dead at-run-time code with nops.
+ */
+static void sanitize_dead_code(struct bpf_verifier_env *env)
+{
+ struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+ struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+ struct bpf_insn *insn = env->prog->insnsi;
+ const int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++) {
+ if (aux_data[i].seen)
+ continue;
+ memcpy(insn + i, &nop, sizeof(nop));
+ }
+}
+
/* convert load instructions that access fields of 'struct __sk_buff'
* into sequence of instructions that access fields of 'struct sk_buff'
*/
@@ -4379,6 +4403,9 @@ skip_full_check:
free_states(env);
if (ret == 0)
+ sanitize_dead_code(env);
+
+ if (ret == 0)
/* program is valid, convert *(u32*)(ctx + off) accesses */
ret = convert_ctx_accesses(env);
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, ast, gregkh, songliubraving; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:00 +0100
Subject: [PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-3-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
[ Upstream commit 283ca526a9bd75aed7350220d7b1f8027d99c3fd ]
When tracing and networking programs are both attached in the
system and both use event-output helpers that eventually call
into perf_event_output(), then we could end up in a situation
where the tracing attached program runs in user context while
a cls_bpf program is triggered on that same CPU out of softirq
context.
Since both rely on the same per-cpu perf_sample_data, we could
potentially corrupt it. This can only ever happen in a combination
of the two types; all tracing programs use a bpf_prog_active
counter to bail out in case a program is already running on
that CPU out of a different context. XDP and cls_bpf programs
by themselves don't have this issue as they run in the same
context only. Therefore, split both perf_sample_data so they
cannot be accessed from each other.
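For illustration, a rough per-CPU timeline of the hazard before the split
(a sketch with invented names, not code from the patch):
/*
 * task context:   sd = this_cpu_ptr(&bpf_sd);
 *                 perf_sample_data_init(sd, 0, 0);
 *                 sd->raw = &tracing_raw;
 *   -- softirq: cls_bpf runs bpf_event_output() on the same CPU --
 *                 sd = this_cpu_ptr(&bpf_sd);          // same per-cpu slot
 *                 sd->raw = &networking_raw;           // clobbers tracing state
 *   -- back in task context --
 *                 perf_event_output(event, sd, regs);  // sees the wrong ->raw
 */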
Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
Reported-by: Alexei Starovoitov <ast@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Song Liu <songliubraving@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/trace/bpf_trace.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -293,14 +293,13 @@ static const struct bpf_func_proto bpf_p
.arg2_type = ARG_ANYTHING,
};
-static DEFINE_PER_CPU(struct perf_sample_data, bpf_sd);
+static DEFINE_PER_CPU(struct perf_sample_data, bpf_trace_sd);
static __always_inline u64
__bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
- u64 flags, struct perf_raw_record *raw)
+ u64 flags, struct perf_sample_data *sd)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
- struct perf_sample_data *sd = this_cpu_ptr(&bpf_sd);
unsigned int cpu = smp_processor_id();
u64 index = flags & BPF_F_INDEX_MASK;
struct bpf_event_entry *ee;
@@ -323,8 +322,6 @@ __bpf_perf_event_output(struct pt_regs *
if (unlikely(event->oncpu != cpu))
return -EOPNOTSUPP;
- perf_sample_data_init(sd, 0, 0);
- sd->raw = raw;
perf_event_output(event, sd, regs);
return 0;
}
@@ -332,6 +329,7 @@ __bpf_perf_event_output(struct pt_regs *
BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
u64, flags, void *, data, u64, size)
{
+ struct perf_sample_data *sd = this_cpu_ptr(&bpf_trace_sd);
struct perf_raw_record raw = {
.frag = {
.size = size,
@@ -342,7 +340,10 @@ BPF_CALL_5(bpf_perf_event_output, struct
if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
return -EINVAL;
- return __bpf_perf_event_output(regs, map, flags, &raw);
+ perf_sample_data_init(sd, 0, 0);
+ sd->raw = &raw;
+
+ return __bpf_perf_event_output(regs, map, flags, sd);
}
static const struct bpf_func_proto bpf_perf_event_output_proto = {
@@ -357,10 +358,12 @@ static const struct bpf_func_proto bpf_p
};
static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs);
+static DEFINE_PER_CPU(struct perf_sample_data, bpf_misc_sd);
u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
{
+ struct perf_sample_data *sd = this_cpu_ptr(&bpf_misc_sd);
struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs);
struct perf_raw_frag frag = {
.copy = ctx_copy,
@@ -378,8 +381,10 @@ u64 bpf_event_output(struct bpf_map *map
};
perf_fetch_caller_regs(regs);
+ perf_sample_data_init(sd, 0, 0);
+ sd->raw = &raw;
- return __bpf_perf_event_output(regs, map, flags, &raw);
+ return __bpf_perf_event_output(regs, map, flags, sd);
}
BPF_CALL_0(bpf_get_current_task)
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op()" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op() Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, ecree, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:05 +0100
Subject: [PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op()
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-8-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit 95a762e2c8c942780948091f8f2a4f32fce1ac6f ]
Distinguish between
BPF_ALU64|BPF_MOV|BPF_K (load 32-bit immediate, sign-extended to 64-bit)
and BPF_ALU|BPF_MOV|BPF_K (load 32-bit immediate, zero-padded to 64-bit);
only perform sign extension in the first case.
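A minimal illustration of the distinction, assuming imm = -1 (not code from the
patch):
BPF_MOV64_IMM(BPF_REG_0, -1), /* BPF_ALU64|BPF_MOV|BPF_K: r0 = 0xffff'ffff'ffff'ffff */
BPF_MOV32_IMM(BPF_REG_0, -1), /* BPF_ALU|BPF_MOV|BPF_K:   r0 = 0x0000'0000'ffff'ffff */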
Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.
Debian assigned CVE-2017-16995 for this issue.
v3:
- add CVE number (Ben Hutchings)
Fixes: 484611357c19 ("bpf: allow access into map value arrays")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2374,7 +2374,13 @@ static int check_alu_op(struct bpf_verif
* remember the value we stored into this reg
*/
regs[insn->dst_reg].type = SCALAR_VALUE;
- __mark_reg_known(regs + insn->dst_reg, insn->imm);
+ if (BPF_CLASS(insn->code) == BPF_ALU64) {
+ __mark_reg_known(regs + insn->dst_reg,
+ insn->imm);
+ } else {
+ __mark_reg_known(regs + insn->dst_reg,
+ (u32)insn->imm);
+ }
}
} else if (opcode > BPF_END) {
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, ecree, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-incorrect-tracking-of-register-size-truncation.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:06 +0100
Subject: [PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-9-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit 0c17d1d2c61936401f4702e1846e2c19b200f958 ]
Properly handle register truncation to a smaller size.
The old code first mirrors the clearing of the high 32 bits in the bitwise
tristate representation, which is correct. But then, it computes the new
arithmetic bounds as the intersection between the old arithmetic bounds and
the bounds resulting from the bitwise tristate representation. Therefore,
when coerce_reg_to_32() is called on a number with bounds
[0xffff'fff8, 0x1'0000'0007], the verifier computes
[0xffff'fff8, 0xffff'ffff] as bounds of the truncated number.
This is incorrect: The truncated number could also be in the range [0, 7],
and no meaningful arithmetic bounds can be computed in that case apart from
the obvious [0, 0xffff'ffff].
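For the example above, the corrected computation (following the mask logic of
coerce_reg_to_size() in the diff below, with the values plugged in by hand)
collapses the bounds instead of keeping the misleading intersection:
u64 mask = ((u64)1 << 32) - 1;                  /* size = 4 bytes */
u64 umin = 0xfffffff8ULL, umax = 0x100000007ULL;
if ((umin & ~mask) == (umax & ~mask)) {         /* 0 vs 0x1'0000'0000: false */
	umin &= mask;
	umax &= mask;
} else {
	umin = 0;                               /* high bits differ, so only   */
	umax = mask;                            /* [0, 0xffff'ffff] is safe    */
}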
Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.
Debian assigned CVE-2017-16996 for this issue.
v2:
- flip the mask during arithmetic bounds calculation (Ben Hutchings)
v3:
- add CVE number (Ben Hutchings)
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1068,6 +1068,29 @@ static int check_ptr_alignment(struct bp
return check_generic_ptr_alignment(reg, pointer_desc, off, size, strict);
}
+/* truncate register to smaller size (in bytes)
+ * must be called with size < BPF_REG_SIZE
+ */
+static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
+{
+ u64 mask;
+
+ /* clear high bits in bit representation */
+ reg->var_off = tnum_cast(reg->var_off, size);
+
+ /* fix arithmetic bounds */
+ mask = ((u64)1 << (size * 8)) - 1;
+ if ((reg->umin_value & ~mask) == (reg->umax_value & ~mask)) {
+ reg->umin_value &= mask;
+ reg->umax_value &= mask;
+ } else {
+ reg->umin_value = 0;
+ reg->umax_value = mask;
+ }
+ reg->smin_value = reg->umin_value;
+ reg->smax_value = reg->umax_value;
+}
+
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
* if t==read, value_regno is a register which will receive the value from memory
@@ -1200,9 +1223,7 @@ static int check_mem_access(struct bpf_v
if (!err && size < BPF_REG_SIZE && value_regno >= 0 && t == BPF_READ &&
state->regs[value_regno].type == SCALAR_VALUE) {
/* b/h/w load zero-extends, mark upper bits as known 0 */
- state->regs[value_regno].var_off = tnum_cast(
- state->regs[value_regno].var_off, size);
- __update_reg_bounds(&state->regs[value_regno]);
+ coerce_reg_to_size(&state->regs[value_regno], size);
}
return err;
}
@@ -1742,14 +1763,6 @@ static int check_call(struct bpf_verifie
return 0;
}
-static void coerce_reg_to_32(struct bpf_reg_state *reg)
-{
- /* clear high 32 bits */
- reg->var_off = tnum_cast(reg->var_off, 4);
- /* Update bounds */
- __update_reg_bounds(reg);
-}
-
static bool signed_add_overflows(s64 a, s64 b)
{
/* Do the add in u64, where overflow is well-defined */
@@ -1984,8 +1997,8 @@ static int adjust_scalar_min_max_vals(st
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->64 */
- coerce_reg_to_32(dst_reg);
- coerce_reg_to_32(&src_reg);
+ coerce_reg_to_size(dst_reg, 4);
+ coerce_reg_to_size(&src_reg, 4);
}
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
@@ -2364,10 +2377,7 @@ static int check_alu_op(struct bpf_verif
return -EACCES;
}
mark_reg_unknown(regs, insn->dst_reg);
- /* high 32 bits are known zero. */
- regs[insn->dst_reg].var_off = tnum_cast(
- regs[insn->dst_reg].var_off, 4);
- __update_reg_bounds(&regs[insn->dst_reg]);
+ coerce_reg_to_size(&regs[insn->dst_reg], 4);
}
} else {
/* case: R = imm
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 13/14] bpf: fix integer overflows" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 13/14] bpf: fix integer overflows Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 13/14] bpf: fix integer overflows
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-integer-overflows.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:11 +0100
Subject: [PATCH stable/4.14 13/14] bpf: fix integer overflows
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-14-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Alexei Starovoitov <ast@kernel.org>
[ Upstream commit bb7f0f989ca7de1153bd128a40a71709e339fa03 ]
There were various issues related to the limited size of integers used in
the verifier:
- `off + size` overflow in __check_map_access()
- `off + reg->off` overflow in check_mem_access()
- `off + reg->var_off.value` overflow or 32-bit truncation of
`reg->var_off.value` in check_mem_access()
- 32-bit truncation in check_stack_boundary()
Make sure that any integer math cannot overflow by not allowing
pointer math with large values.
Also reduce the scope of "scalar op scalar" tracking.
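As an illustration of the new limit (mirroring the "off+size signed 32bit
overflow" selftests from patch 14/14, not code from this patch), a large
constant offset on a map value pointer is now refused up front:
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe), /* r0 = map_value + 2147483646 */
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),  /* rejected: "math between map_value pointer and 2147483646 is not allowed" */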
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/bpf_verifier.h | 4 +--
kernel/bpf/verifier.c | 48 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 50 insertions(+), 2 deletions(-)
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -15,11 +15,11 @@
* In practice this is far bigger than any realistic pointer offset; this limit
* ensures that umax_value + (int)off + (int)size cannot overflow a u64.
*/
-#define BPF_MAX_VAR_OFF (1ULL << 31)
+#define BPF_MAX_VAR_OFF (1 << 29)
/* Maximum variable size permitted for ARG_CONST_SIZE[_OR_ZERO]. This ensures
* that converting umax_value to int cannot overflow.
*/
-#define BPF_MAX_VAR_SIZ INT_MAX
+#define BPF_MAX_VAR_SIZ (1 << 29)
/* Liveness marks, used for registers and spilled-regs (in stack slots).
* Read marks propagate upwards until they find a write mark; they record that
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1789,6 +1789,41 @@ static bool signed_sub_overflows(s64 a,
return res > a;
}
+static bool check_reg_sane_offset(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ enum bpf_reg_type type)
+{
+ bool known = tnum_is_const(reg->var_off);
+ s64 val = reg->var_off.value;
+ s64 smin = reg->smin_value;
+
+ if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
+ verbose("math between %s pointer and %lld is not allowed\n",
+ reg_type_str[type], val);
+ return false;
+ }
+
+ if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
+ verbose("%s pointer offset %d is not allowed\n",
+ reg_type_str[type], reg->off);
+ return false;
+ }
+
+ if (smin == S64_MIN) {
+ verbose("math between %s pointer and register with unbounded min value is not allowed\n",
+ reg_type_str[type]);
+ return false;
+ }
+
+ if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
+ verbose("value %lld makes %s pointer be out of bounds\n",
+ smin, reg_type_str[type]);
+ return false;
+ }
+
+ return true;
+}
+
/* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
* Caller should also handle BPF_MOV case separately.
* If we return -EACCES, caller may want to try again treating pointer as a
@@ -1854,6 +1889,10 @@ static int adjust_ptr_min_max_vals(struc
dst_reg->type = ptr_reg->type;
dst_reg->id = ptr_reg->id;
+ if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) ||
+ !check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
+ return -EINVAL;
+
switch (opcode) {
case BPF_ADD:
/* We can take a fixed offset as long as it doesn't overflow
@@ -1984,6 +2023,9 @@ static int adjust_ptr_min_max_vals(struc
return -EACCES;
}
+ if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
+ return -EINVAL;
+
__update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
@@ -2013,6 +2055,12 @@ static int adjust_scalar_min_max_vals(st
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
+ if (!src_known &&
+ opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) {
+ __mark_reg_unknown(dst_reg);
+ return 0;
+ }
+
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary()" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary() Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-fix-missing-error-return-in-check_stack_boundary.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:08 +0100
Subject: [PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary()
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-11-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
Prevent indirect stack accesses at non-constant addresses, which would
permit reading and corrupting spilled pointers.
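For illustration, a sketch mirroring the "indirect variable-offset stack access"
selftest from patch 14/14: without the added return, the verifier printed the
"invalid variable stack read" message but still accepted a map lookup whose key
pointer is fp-4 or fp-8 depending on run-time data.
BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),          /* r2 in {0, 4}, from unknown input */
BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8),          /* r2 in {-8, -4} */
BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), /* r2 = fp-8 or fp-4, unknown which */
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem),                     /* indirect use of r2 as key, now rejected */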
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 1 +
1 file changed, 1 insertion(+)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1303,6 +1303,7 @@ static int check_stack_boundary(struct b
tnum_strn(tn_buf, sizeof(tn_buf), regs[regno].var_off);
verbose("invalid variable stack read R%d var_off=%s\n",
regno, tn_buf);
+ return -EACCES;
}
off = regs[regno].off + regs[regno].var_off.value;
if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-force-strict-alignment-checks-for-stack-pointers.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:09 +0100
Subject: [PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-12-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit a5ec6ae161d72f01411169a938fa5f8baea16e8f ]
Force strict alignment checks for stack pointers because the tracking of
stack spills relies on it; unaligned stack accesses can lead to corruption
of spilled registers, which is exploitable.
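For illustration only (offsets invented for the sketch, not taken from the
patch): an unaligned store through the stack pointer can land inside an 8-byte
spill slot, which is the kind of access the strict check now refuses.
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), /* spill a register to fp-8 */
BPF_ST_MEM(BPF_W, BPF_REG_10, -6, 0),           /* 4-byte store at fp-6: misaligned, overlaps the spill slot */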
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1061,6 +1061,11 @@ static int check_ptr_alignment(struct bp
break;
case PTR_TO_STACK:
pointer_desc = "stack ";
+ /* The stack spill tracking logic in check_stack_write()
+ * and check_stack_read() relies on stack accesses being
+ * aligned.
+ */
+ strict = true;
break;
default:
break;
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, naveen.n.rao, sandipan; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:02 +0100
Subject: [PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-5-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
[ Upstream commit 87338c8e2cbb317b5f757e6172f94e2e3799cd20 ]
The assumption of unconditionally reloading skb pointers on
BPF helper calls where bpf_helper_changes_pkt_data() holds
true is wrong. There can be different contexts where the helper
would enforce a reload such as in case of XDP. Here, we do
have a struct xdp_buff instead of struct sk_buff as context,
thus this will access garbage.
JITs only ever need to deal with cached skb pointer reload
when ld_abs/ind was seen; therefore, guard the reload behind
SEEN_SKB.
Fixes: 156d0e290e96 ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/powerpc/net/bpf_jit_comp64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -762,7 +762,8 @@ emit_clear:
func = (u8 *) __bpf_call_base + imm;
/* Save skb pointer if we need to re-cache skb data */
- if (bpf_helper_changes_pkt_data(func))
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func))
PPC_BPF_STL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_func_call(image, ctx, (u64)func);
@@ -771,7 +772,8 @@ emit_clear:
PPC_MR(b2p[BPF_REG_0], 3);
/* refresh skb cache */
- if (bpf_helper_changes_pkt_data(func)) {
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func)) {
/* reload skb pointer to r3 */
PPC_BPF_LL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_skb_loads(image, ctx);
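Purely as an illustration of the common pattern (a compilable sketch, not the actual ppc64 JIT code; every name below is invented for the example), the shape the affected JITs converge on is: save and re-cache the skb pointer around a packet-data-changing helper call only when the program actually used LD_ABS/IND:
	#include <stdbool.h>

	struct jit_ctx {
		bool seen_skb;	/* program contained LD_ABS/IND */
	};

	/* stand-ins for the real instruction emitters and helper check */
	static bool helper_changes_pkt_data(void *func) { (void)func; return true; }
	static void emit_save_skb_ptr(void) { }
	static void emit_call_insn(void *func) { (void)func; }
	static void emit_reload_skb_cache(void) { }

	static void emit_helper_call(struct jit_ctx *ctx, void *func)
	{
		bool recache = ctx->seen_skb && helper_changes_pkt_data(func);

		/* save the cached skb pointer only when it really is an skb */
		if (recache)
			emit_save_skb_ptr();
		emit_call_insn(func);
		/* never emitted for e.g. XDP programs, which cannot use
		 * LD_ABS/IND, so the xdp_buff context is never treated as skb
		 */
		if (recache)
			emit_reload_skb_cache();
	}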
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, holzheu; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:01 +0100
Subject: [PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org, Michael Holzheu <holzheu@linux.vnet.ibm.com>
Message-ID: <20171222152312.2945-4-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
[ Upstream commit 6d59b7dbf72ed20d0138e2f9b75ca3d4a9d4faca ]
The assumption of unconditionally reloading skb pointers on
BPF helper calls where bpf_helper_changes_pkt_data() holds
true is wrong. There can be different contexts where the
BPF helper would enforce a reload such as in case of XDP.
Here, we do have a struct xdp_buff instead of struct sk_buff
as context, thus this will access garbage.
JITs only ever need to deal with cached skb pointer reload
when ld_abs/ind was seen; therefore, guard the reload behind
SEEN_SKB only. Tested on s390x.
Fixes: 9db7f2b81880 ("s390/bpf: recache skb->data/hlen for skb_vlan_push/pop")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/s390/net/bpf_jit_comp.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -55,8 +55,7 @@ struct bpf_jit {
#define SEEN_LITERAL 8 /* code uses literals */
#define SEEN_FUNC 16 /* calls C functions */
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
-#define SEEN_SKB_CHANGE 64 /* code changes skb data */
-#define SEEN_REG_AX 128 /* code uses constant blinding */
+#define SEEN_REG_AX 64 /* code uses constant blinding */
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
/*
@@ -448,12 +447,12 @@ static void bpf_jit_prologue(struct bpf_
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
REG_15, 152);
}
- if (jit->seen & SEEN_SKB)
+ if (jit->seen & SEEN_SKB) {
emit_load_skb_data_hlen(jit);
- if (jit->seen & SEEN_SKB_CHANGE)
/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_1, REG_0, REG_15,
STK_OFF_SKBP);
+ }
}
/*
@@ -983,8 +982,8 @@ static noinline int bpf_jit_insn(struct
EMIT2(0x0d00, REG_14, REG_W1);
/* lgr %b0,%r2: load return value into %b0 */
EMIT4(0xb9040000, BPF_REG_0, REG_2);
- if (bpf_helper_changes_pkt_data((void *)func)) {
- jit->seen |= SEEN_SKB_CHANGE;
+ if ((jit->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data((void *)func)) {
/* lg %b1,ST_OFF_SKBP(%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0004, BPF_REG_1, REG_0,
REG_15, STK_OFF_SKBP);
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, davem, gregkh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:03 +0100
Subject: [PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-6-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
[ Upstream commit 07aee94394547721ac168cbf4e1c09c14a5fe671 ]
When LD_ABS/IND is used in the program, and we have a BPF helper
call that changes packet data (bpf_helper_changes_pkt_data() returns
true), then in case of sparc JIT, we try to reload cached skb data
from bpf2sparc[BPF_REG_6]. However, there is no such guarantee or
assumption that skb sits in R6 at this point; all helpers changing
skb data only have a guarantee that skb sits in R1. Therefore,
store BPF R1 in L7 temporarily and after procedure call use L7 to
reload cached skb data. skb sitting in R6 is only true at the time
when LD_ABS/IND is executed.
Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/sparc/net/bpf_jit_comp_64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1245,14 +1245,16 @@ static int build_insn(const struct bpf_i
u8 *func = ((u8 *)__bpf_call_base) + imm;
ctx->saw_call = true;
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ emit_reg_move(bpf2sparc[BPF_REG_1], L7, ctx);
emit_call((u32 *)func, ctx);
emit_nop(ctx);
emit_reg_move(O0, bpf2sparc[BPF_REG_0], ctx);
- if (bpf_helper_changes_pkt_data(func) && ctx->saw_ld_abs_ind)
- load_skb_regs(ctx, bpf2sparc[BPF_REG_6]);
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ load_skb_regs(ctx, L7);
break;
}
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, ecree, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
>From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:04 +0100
Subject: [PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org, Edward Cree <ecree@solarflare.com>
Message-ID: <20171222152312.2945-7-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Edward Cree <ecree@solarflare.com>
[ Upstream commit 4374f256ce8182019353c0c639bb8d0695b4c941 ]
Incorrect signed bounds were being computed.
If the old upper signed bound was positive and the old lower signed bound was
negative, this could cause the new upper signed bound to be too low,
leading to security issues.
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[jannh@google.com: changed description to reflect bug impact]
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/bpf/verifier.c | 30 ++++++++++++++++--------------
1 file changed, 16 insertions(+), 14 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2157,20 +2157,22 @@ static int adjust_scalar_min_max_vals(st
mark_reg_unknown(regs, insn->dst_reg);
break;
}
- /* BPF_RSH is an unsigned shift, so make the appropriate casts */
- if (dst_reg->smin_value < 0) {
- if (umin_val) {
- /* Sign bit will be cleared */
- dst_reg->smin_value = 0;
- } else {
- /* Lost sign bit information */
- dst_reg->smin_value = S64_MIN;
- dst_reg->smax_value = S64_MAX;
- }
- } else {
- dst_reg->smin_value =
- (u64)(dst_reg->smin_value) >> umax_val;
- }
+ /* BPF_RSH is an unsigned shift. If the value in dst_reg might
+ * be negative, then either:
+ * 1) src_reg might be zero, so the sign bit of the result is
+ * unknown, so we lose our signed bounds
+ * 2) it's known negative, thus the unsigned bounds capture the
+ * signed bounds
+ * 3) the signed bounds cross zero, so they tell us nothing
+ * about the result
+ * If the value in dst_reg is known nonnegative, then again the
+ * unsigned bounds capture the signed bounds.
+ * Thus, in all cases it suffices to blow away our signed bounds
+ * and rely on inferring new ones from the unsigned bounds and
+ * var_off of the result.
+ */
+ dst_reg->smin_value = S64_MIN;
+ dst_reg->smax_value = S64_MAX;
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
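As a worked example of why deriving new signed bounds from the old ones is unsafe here (a standalone userspace sketch, not kernel code):
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* a register whose signed bounds cross zero: [-2, 5] */
		int64_t smin = -2, smax = 5;

		/* BPF_RSH treats the 64-bit value as unsigned, so shifting
		 * the signed minimum gives a huge positive result
		 */
		uint64_t from_min = (uint64_t)smin >> 1;	/* 0x7fffffffffffffff */
		uint64_t from_max = (uint64_t)smax >> 1;	/* 2 */

		printf("%#llx %#llx\n",
		       (unsigned long long)from_min,
		       (unsigned long long)from_max);

		/* no signed bound computed from the old smin/smax is valid,
		 * hence the patch blows the signed bounds away and re-derives
		 * them from the unsigned bounds and var_off of the result
		 */
		return 0;
	}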
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Patch "[PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes" has been added to the 4.14-stable tree
2017-12-22 15:23 ` [PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes Daniel Borkmann
@ 2017-12-22 15:47 ` gregkh
0 siblings, 0 replies; 32+ messages in thread
From: gregkh @ 2017-12-22 15:47 UTC (permalink / raw)
To: daniel, ast, gregkh, jannh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
[PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
selftests-bpf-add-tests-for-recent-bugfixes.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri Dec 22 16:47:02 CET 2017
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 16:23:12 +0100
Subject: [PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, daniel@iogearbox.net, jannh@google.com, stable@vger.kernel.org
Message-ID: <20171222152312.2945-15-daniel@iogearbox.net>
From: Daniel Borkmann <daniel@iogearbox.net>
From: Jann Horn <jannh@google.com>
[ Upstream commit 2255f8d520b0a318fc6d387d0940854b2f522a7f ]
These tests should cover the following cases:
- MOV with both zero-extended and sign-extended immediates
- implicit truncation of register contents via ALU32/MOV32
- implicit 32-bit truncation of ALU32 output
- oversized register source operand for ALU32 shift
- right-shift of a number that could be positive or negative
- map access where adding the operation size to the offset causes signed
32-bit overflow
- direct stack access at a ~4GiB offset
Also remove the F_LOAD_WITH_STRICT_ALIGNMENT flag from a bunch of tests
that should fail independent of what flags userspace passes.
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
tools/testing/selftests/bpf/test_verifier.c | 549 +++++++++++++++++++++++++++-
1 file changed, 533 insertions(+), 16 deletions(-)
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -606,7 +606,6 @@ static struct bpf_test tests[] = {
},
.errstr = "misaligned stack access",
.result = REJECT,
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"invalid map_fd for function call",
@@ -1797,7 +1796,6 @@ static struct bpf_test tests[] = {
},
.result = REJECT,
.errstr = "misaligned stack access off (0x0; 0x0)+-8+2 size 8",
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"PTR_TO_STACK store/load - bad alignment on reg",
@@ -1810,7 +1808,6 @@ static struct bpf_test tests[] = {
},
.result = REJECT,
.errstr = "misaligned stack access off (0x0; 0x0)+-10+8 size 8",
- .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
},
{
"PTR_TO_STACK store/load - out of bounds low",
@@ -6115,7 +6112,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6139,7 +6136,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6165,7 +6162,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R8 invalid mem access 'inv'",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6190,7 +6187,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R8 invalid mem access 'inv'",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6238,7 +6235,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6309,7 +6306,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6360,7 +6357,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6387,7 +6384,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6413,7 +6410,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6442,7 +6439,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6472,7 +6469,7 @@ static struct bpf_test tests[] = {
BPF_JMP_IMM(BPF_JA, 0, 0, -7),
},
.fixup_map1 = { 4 },
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
},
{
@@ -6500,8 +6497,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
- .errstr_unpriv = "R0 pointer comparison prohibited",
- .errstr = "R0 min value is negative",
+ .errstr = "unbounded min value",
.result = REJECT,
.result_unpriv = REJECT,
},
@@ -6557,6 +6553,462 @@ static struct bpf_test tests[] = {
.result = REJECT,
},
{
+ "bounds check based on zero-extended MOV",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0x0000'0000'ffff'ffff */
+ BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0 */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check based on sign-extended MOV. test1",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0xffff'ffff'ffff'ffff */
+ BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0xffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
+ /* r0 = <oob pointer> */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access to OOB pointer */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 4294967295",
+ .result = REJECT
+ },
+ {
+ "bounds check based on sign-extended MOV. test2",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ /* r2 = 0xffff'ffff'ffff'ffff */
+ BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
+ /* r2 = 0xfff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36),
+ /* r0 = <oob pointer> */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
+ /* access to OOB pointer */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 min value is outside of the array range",
+ .result = REJECT
+ },
+ {
+ "bounds check based on reg_off + var_off + insn_off. test1",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+ offsetof(struct __sk_buff, mark)),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 4 },
+ .errstr = "value_size=8 off=1073741825",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+ "bounds check based on reg_off + var_off + insn_off. test2",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+ offsetof(struct __sk_buff, mark)),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 4 },
+ .errstr = "value 1073741823",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+ {
+ "bounds check after truncation of non-boundary-crossing range",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_MOV64_IMM(BPF_REG_2, 1),
+ /* r2 = 0x10'0000'0000 */
+ BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 36),
+ /* r1 = [0x10'0000'0000, 0x10'0000'00ff] */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
+ /* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ /* r1 = [0x00, 0xff] */
+ BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 0x7fffffff),
+ /* r1 = 0 */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check after truncation of boundary-crossing range (1)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0x1'0000'007f] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0xffff'ffff] or
+ * [0x0000'0000, 0x0000'007f]
+ */
+ BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 0),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0x00, 0xff] or
+ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = 0 or
+ * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op or OOB pointer computation */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ /* not actually fully unbounded, but the bound is very high */
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check after truncation of boundary-crossing range (2)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0x1'0000'007f] */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0xffff'ff80, 0xffff'ffff] or
+ * [0x0000'0000, 0x0000'007f]
+ * difference to previous test: truncation via MOV32
+ * instead of ALU32.
+ */
+ BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = [0x00, 0xff] or
+ * [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
+ /* r1 = 0 or
+ * [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
+ */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* no-op or OOB pointer computation */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ /* not actually fully unbounded, but the bound is very high */
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check after wrapping 32-bit addition",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ /* r1 = 0x7fff'ffff */
+ BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
+ /* r1 = 0xffff'fffe */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ /* r1 = 0 */
+ BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2),
+ /* no-op */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* access at offset 0 */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT
+ },
+ {
+ "bounds check after shift with oversized count operand",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_IMM(BPF_REG_2, 32),
+ BPF_MOV64_IMM(BPF_REG_1, 1),
+ /* r1 = (u32)1 << (u32)32 = ? */
+ BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),
+ /* r1 = [0x0000, 0xffff] */
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xffff),
+ /* computes unknown pointer, potentially OOB */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 max value is outside of the array range",
+ .result = REJECT
+ },
+ {
+ "bounds check after right shift of maybe-negative number",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ /* r1 = [0x00, 0xff] */
+ BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ /* r1 = [-0x01, 0xfe] */
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+ /* r1 = 0 or 0xff'ffff'ffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* r1 = 0 or 0xffff'ffff'ffff */
+ BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
+ /* computes unknown pointer, potentially OOB */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ /* potentially OOB access */
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ /* exit */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "R0 unbounded memory access",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test1",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 2147483646",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test2",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "pointer offset 1073741822",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test3",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "pointer offset -1073741822",
+ .result = REJECT
+ },
+ {
+ "bounds check map access with off+size signed 32bit overflow. test4",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+ BPF_EXIT_INSN(),
+ BPF_MOV64_IMM(BPF_REG_1, 1000000),
+ BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .errstr = "map_value pointer and 1000000000000",
+ .result = REJECT
+ },
+ {
+ "pointer/scalar confusion in state equality check (way 1)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_JMP_A(1),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+ BPF_JMP_A(0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R0 leaks addr as return value"
+ },
+ {
+ "pointer/scalar confusion in state equality check (way 2)",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+ BPF_JMP_A(1),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R0 leaks addr as return value"
+ },
+ {
"variable-offset ctx access",
.insns = {
/* Get an unknown value */
@@ -6598,6 +7050,71 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_LWT_IN,
},
{
+ "indirect variable-offset stack access",
+ .insns = {
+ /* Fill the top 8 bytes of the stack */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ /* Get an unknown value */
+ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
+ /* Make it small and 4-byte aligned */
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8),
+ /* add it to fp. We now have either fp-4 or fp-8, but
+ * we don't know which
+ */
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+ /* dereference it indirectly */
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 5 },
+ .errstr = "variable stack read R2",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_LWT_IN,
+ },
+ {
+ "direct stack access with 32-bit wraparound. test1",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer and 2147483647",
+ .result = REJECT
+ },
+ {
+ "direct stack access with 32-bit wraparound. test2",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer and 1073741823",
+ .result = REJECT
+ },
+ {
+ "direct stack access with 32-bit wraparound. test3",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+ BPF_EXIT_INSN()
+ },
+ .errstr = "fp pointer offset 1073741822",
+ .result = REJECT
+ },
+ {
"liveness pruning and write screening",
.insns = {
/* Get an unknown value */
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.14/bpf-fix-integer-overflows.patch
queue-4.14/bpf-fix-branch-pruning-logic.patch
queue-4.14/bpf-s390x-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-sparc-fix-usage-of-wrong-reg-for-load_skb_regs-after-call.patch
queue-4.14/bpf-fix-incorrect-tracking-of-register-size-truncation.patch
queue-4.14/bpf-don-t-prune-branches-when-a-scalar-is-replaced-with-a-pointer.patch
queue-4.14/bpf-verifier-fix-bounds-calculation-on-bpf_rsh.patch
queue-4.14/selftests-bpf-add-tests-for-recent-bugfixes.patch
queue-4.14/bpf-fix-corruption-on-concurrent-perf_event_output-calls.patch
queue-4.14/bpf-fix-incorrect-sign-extension-in-check_alu_op.patch
queue-4.14/bpf-ppc64-do-not-reload-skb-pointers-in-non-skb-context.patch
queue-4.14/bpf-fix-missing-error-return-in-check_stack_boundary.patch
queue-4.14/bpf-force-strict-alignment-checks-for-stack-pointers.patch
queue-4.14/bpf-fix-32-bit-alu-op-verification.patch
queue-4.14/bpf-fix-build-issues-on-um-due-to-mising-bpf_perf_event.h.patch
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH stable/4.14 00/14] BPF stable patches for 4.14
2017-12-22 15:45 ` [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Greg KH
@ 2017-12-22 15:48 ` Greg KH
0 siblings, 0 replies; 32+ messages in thread
From: Greg KH @ 2017-12-22 15:48 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, jannh, stable
On Fri, Dec 22, 2017 at 04:45:56PM +0100, Greg KH wrote:
> On Fri, Dec 22, 2017 at 04:22:58PM +0100, Daniel Borkmann wrote:
> > Hi Greg,
> >
> > please consider the following backported fixes for 4.14 inclusion.
> >
> > Thanks a lot!
>
> Nice!
>
> Should I take any of these for 4.9? Or just the one that Jann sent
> separately?
Ah, nevermind, I see your patch series now, thanks for those!
greg k-h
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH stable/4.14 00/14] BPF stable patches for 4.14
2017-12-22 15:22 [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Daniel Borkmann
` (14 preceding siblings ...)
2017-12-22 15:45 ` [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Greg KH
@ 2017-12-22 15:51 ` Greg KH
15 siblings, 0 replies; 32+ messages in thread
From: Greg KH @ 2017-12-22 15:51 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, jannh, stable
On Fri, Dec 22, 2017 at 04:22:58PM +0100, Daniel Borkmann wrote:
> Hi Greg,
>
> please consider the following backported fixes for 4.14 inclusion.
All now queued up, thanks!
greg k-h
^ permalink raw reply [flat|nested] 32+ messages in thread
end of thread, other threads:[~2017-12-22 15:51 UTC | newest]
Thread overview: 32+ messages
2017-12-22 15:22 [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Daniel Borkmann
2017-12-22 15:22 ` [PATCH stable/4.14 01/14] bpf: fix branch pruning logic Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 01/14] bpf: fix branch pruning logic" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 02/14] bpf: fix corruption on concurrent perf_event_output calls" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 03/14] bpf, s390x: do not reload skb pointers in non-skb context" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 04/14] bpf, ppc64: do not reload skb pointers in non-skb context" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 05/14] bpf, sparc: fix usage of wrong reg for load_skb_regs after call" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 06/14] bpf/verifier: fix bounds calculation on BPF_RSH" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op() Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 07/14] bpf: fix incorrect sign extension in check_alu_op()" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 08/14] bpf: fix incorrect tracking of register size truncation" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 09/14] bpf: fix 32-bit ALU op verification" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary() Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 10/14] bpf: fix missing error return in check_stack_boundary()" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 11/14] bpf: force strict alignment checks for stack pointers" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 12/14] bpf: don't prune branches when a scalar is replaced with a pointer" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 13/14] bpf: fix integer overflows Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 13/14] bpf: fix integer overflows" has been added to the 4.14-stable tree gregkh
2017-12-22 15:23 ` [PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes Daniel Borkmann
2017-12-22 15:47 ` Patch "[PATCH stable/4.14 14/14] selftests/bpf: add tests for recent bugfixes" has been added to the 4.14-stable tree gregkh
2017-12-22 15:45 ` [PATCH stable/4.14 00/14] BPF stable patches for 4.14 Greg KH
2017-12-22 15:48 ` Greg KH
2017-12-22 15:51 ` Greg KH