* [PATCH bpf v3 0/2] Fix BPF verifier bypass on scalar spill
@ 2023-06-06 21:42 Maxim Mikityanskiy
  2023-06-06 21:42 ` [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill Maxim Mikityanskiy
  2023-06-06 21:42 ` [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking " Maxim Mikityanskiy
  0 siblings, 2 replies; 7+ messages in thread
From: Maxim Mikityanskiy @ 2023-06-06 21:42 UTC (permalink / raw)
  To: bpf
  Cc: netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer

From: Maxim Mikityanskiy <maxim@isovalent.com>

See the details in the commit message. (TL;DR: under CAP_BPF, the
verifier can be fooled into thinking that a scalar is zero while in fact
it holds an attacker-chosen number.)

v1 and v2 were sent off-list.

v2 changes:

Added more tests, migrated them to inline asm, started using
bpf_get_prandom_u32, switched to a more bulletproof dead-branch check,
and modified the failing spill test scenarios so that an unauthorized
access attempt is performed in both branches.

v3 changes:

Dropped an improvement not necessary for the fix, changed the Fixes tag.

Maxim Mikityanskiy (2):
  bpf: Fix verifier tracking scalars on spill
  selftests/bpf: Add test cases to assert proper ID tracking on spill

 kernel/bpf/verifier.c                         |   7 +
 .../selftests/bpf/progs/verifier_spill_fill.c | 198 ++++++++++++++++++
 2 files changed, 205 insertions(+)

-- 
2.40.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill
  2023-06-06 21:42 [PATCH bpf v3 0/2] Fix BPF verifier bypass on scalar spill Maxim Mikityanskiy
@ 2023-06-06 21:42 ` Maxim Mikityanskiy
  2023-06-07  1:32   ` Yonghong Song
  2023-06-06 21:42 ` [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking " Maxim Mikityanskiy
  1 sibling, 1 reply; 7+ messages in thread
From: Maxim Mikityanskiy @ 2023-06-06 21:42 UTC (permalink / raw)
  To: bpf
  Cc: netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer

From: Maxim Mikityanskiy <maxim@isovalent.com>

The following scenario describes a verifier bypass in privileged mode
(CAP_BPF or CAP_SYS_ADMIN):

1. Prepare a 32-bit rogue number.
2. Put the rogue number into the upper half of a 64-bit register, and
   roll a random (unknown to the verifier) bit in the lower half. The
   rest of the bits should be zero (although variations are possible).
3. Assign an ID to the register by MOVing it to another arbitrary
   register.
4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
   another register. Due to a bug in the verifier, the ID will be
   preserved, although the new register will contain only the lower 32
   bits, i.e. all zeros except one random bit.

At this point there are two registers with different values but the same
ID, which means the integrity of the verifier state has been corrupted.
Next steps show the actual bypass:

5. Compare the new 32-bit register with 0. In the branch where it's
   equal to 0, the verifier will believe that the original 64-bit
   register is also 0, because it has the same ID, but its actual value
   still contains the rogue number in the upper half.
   Some optimizations of the verifier prevent the actual bypass, so
   extra care is needed: the comparison must be between two registers,
   and both branches must be reachable (this is why one random bit is
   needed). Both branches are still suitable for the bypass.
6. Right shift the original register by 32 bits to pop the rogue number.
7. Use the rogue number as an offset with any pointer. The verifier will
   believe that the offset is 0, while in reality it's the given number.

The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
SCALAR_VALUE: if the spill narrows the actual register value, don't keep
the ID; make sure it's reset to 0.

Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
 kernel/bpf/verifier.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5871aa78d01a..7be23eced561 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    !register_is_null(reg) && env->bpf_capable) {
+		bool reg_value_fits;
+
 		if (dst_reg != BPF_REG_FP) {
 			/* The backtracking logic can only recognize explicit
 			 * stack slot address like [fp - 8]. Other spill of
@@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 			if (err)
 				return err;
 		}
+
+		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
 		save_register_state(state, spi, reg, size);
+		/* Break the relation on a narrowing spill. */
+		if (!reg_value_fits)
+			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
 		struct bpf_reg_state fake_reg = {};
-- 
2.40.1



* [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking on spill
  2023-06-06 21:42 [PATCH bpf v3 0/2] Fix BPF verifier bypass on scalar spill Maxim Mikityanskiy
  2023-06-06 21:42 ` [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill Maxim Mikityanskiy
@ 2023-06-06 21:42 ` Maxim Mikityanskiy
  2023-06-07  1:40   ` Yonghong Song
  2023-06-07  1:43   ` Alexei Starovoitov
  1 sibling, 2 replies; 7+ messages in thread
From: Maxim Mikityanskiy @ 2023-06-06 21:42 UTC (permalink / raw)
  To: bpf
  Cc: netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer

From: Maxim Mikityanskiy <maxim@isovalent.com>

The previous commit fixed a verifier bypass by ensuring that ID is not
preserved on narrowing spills. Add the test cases to check the
problematic patterns.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 198 ++++++++++++++++++
 1 file changed, 198 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 136e5530b72c..999677acc8ae 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -371,4 +371,202 @@ __naked void and_then_at_fp_8(void)
 "	::: __clobber_all);
 }
 
+SEC("xdp")
+__description("32-bit spill of 64-bit reg should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void spill_32bit_of_64bit_fail(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	r1 = 0xffffffff;				\
+	r1 <<= 32;					\
+	r1 += r0;					\
+	/* Assign an ID to r1. */			\
+	r2 = r1;					\
+	/* 32-bit spill r1 to stack - should clear the ID! */\
+	*(u32*)(r10 - 8) = r1;				\
+	/* 32-bit fill r2 from stack. */		\
+	r2 = *(u32*)(r10 - 8);				\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here; otherwise the verifier cuts\
+	 * corners. If the ID were mistakenly preserved on spill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight in the other.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 32;					\
+	/* At this point, if the verifier thinks that r1 is 0, an out-of-bounds\
+	 * read will happen, because it actually contains 0xffffffff.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 64-bit reg should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void spill_16bit_of_64bit_fail(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	r1 = 0xffffffff;				\
+	r1 <<= 32;					\
+	r1 += r0;					\
+	/* Assign an ID to r1. */			\
+	r2 = r1;					\
+	/* 16-bit spill r1 to stack - should clear the ID! */\
+	*(u16*)(r10 - 8) = r1;				\
+	/* 16-bit fill r2 from stack. */		\
+	r2 = *(u16*)(r10 - 8);				\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here; otherwise the verifier cuts\
+	 * corners. If the ID were mistakenly preserved on spill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight in the other.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 32;					\
+	/* At this point, if the verifier thinks that r1 is 0, an out-of-bounds\
+	 * read will happen, because it actually contains 0xffffffff.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 64-bit reg should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void spill_8bit_of_64bit_fail(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	r1 = 0xffffffff;				\
+	r1 <<= 32;					\
+	r1 += r0;					\
+	/* Assign an ID to r1. */			\
+	r2 = r1;					\
+	/* 8-bit spill r1 to stack - should clear the ID! */\
+	*(u8*)(r10 - 8) = r1;				\
+	/* 8-bit fill r2 from stack. */		\
+	r2 = *(u8*)(r10 - 8);				\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here; otherwise the verifier cuts\
+	 * corners. If the ID were mistakenly preserved on spill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight in the other.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 32;					\
+	/* At this point, if the verifier thinks that r1 is 0, an out-of-bounds\
+	 * read will happen, because it actually contains 0xffffffff.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 32-bit reg should clear ID")
+__failure __msg("dereference of modified ctx ptr R6 off=65535 disallowed")
+__naked void spill_16bit_of_32bit_fail(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	w1 = 0xffff0000;				\
+	r1 += r0;					\
+	/* Assign an ID to r1. */			\
+	r2 = r1;					\
+	/* 16-bit spill r1 to stack - should clear the ID! */\
+	*(u16*)(r10 - 8) = r1;				\
+	/* 16-bit fill r2 from stack. */		\
+	r2 = *(u16*)(r10 - 8);				\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here; otherwise the verifier cuts\
+	 * corners. If the ID were mistakenly preserved on spill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight in the other.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 16;					\
+	/* At this point, if the verifier thinks that r1 is 0, an out-of-bounds\
+	 * read will happen, because it actually contains 0xffff.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 32-bit reg should clear ID")
+__failure __msg("dereference of modified ctx ptr R6 off=65535 disallowed")
+__naked void spill_8bit_of_32bit_fail(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	w1 = 0xffff0000;				\
+	r1 += r0;					\
+	/* Assign an ID to r1. */			\
+	r2 = r1;					\
+	/* 8-bit spill r1 to stack - should clear the ID! */\
+	*(u8*)(r10 - 8) = r1;				\
+	/* 8-bit fill r2 from stack. */			\
+	r2 = *(u8*)(r10 - 8);				\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here; otherwise the verifier cuts\
+	 * corners. If the ID were mistakenly preserved on spill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight in the other.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 16;					\
+	/* At this point, if the verifier thinks that r1 is 0, an out-of-bounds\
+	 * read will happen, because it actually contains 0xffff.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.40.1



* Re: [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill
  2023-06-06 21:42 ` [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill Maxim Mikityanskiy
@ 2023-06-07  1:32   ` Yonghong Song
  2023-06-07  7:36     ` Maxim Mikityanskiy
  0 siblings, 1 reply; 7+ messages in thread
From: Yonghong Song @ 2023-06-07  1:32 UTC (permalink / raw)
  To: Maxim Mikityanskiy, bpf
  Cc: netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer



On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> From: Maxim Mikityanskiy <maxim@isovalent.com>
> 
> The following scenario describes a verifier bypass in privileged mode
> (CAP_BPF or CAP_SYS_ADMIN):
> 
> 1. Prepare a 32-bit rogue number.
> 2. Put the rogue number into the upper half of a 64-bit register, and
>     roll a random (unknown to the verifier) bit in the lower half. The
>     rest of the bits should be zero (although variations are possible).
> 3. Assign an ID to the register by MOVing it to another arbitrary
>     register.
> 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
>     another register. Due to a bug in the verifier, the ID will be
>     preserved, although the new register will contain only the lower 32
>     bits, i.e. all zeros except one random bit.
> 
> At this point there are two registers with different values but the same
> ID, which means the integrity of the verifier state has been corrupted.
> Next steps show the actual bypass:
> 
> 5. Compare the new 32-bit register with 0. In the branch where it's
>     equal to 0, the verifier will believe that the original 64-bit
>     register is also 0, because it has the same ID, but its actual value
>     still contains the rogue number in the upper half.
>     Some optimizations of the verifier prevent the actual bypass, so
>     extra care is needed: the comparison must be between two registers,
>     and both branches must be reachable (this is why one random bit is
>     needed). Both branches are still suitable for the bypass.
> 6. Right shift the original register by 32 bits to pop the rogue number.
> 7. Use the rogue number as an offset with any pointer. The verifier will
>     believe that the offset is 0, while in reality it's the given number.
> 
> The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> SCALAR_VALUE. If the spill is narrowing the actual register value, don't
> keep the ID, make sure it's reset to 0.
> 
> Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>

LGTM with a small nit below.

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   kernel/bpf/verifier.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5871aa78d01a..7be23eced561 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   	mark_stack_slot_scratched(env, spi);
>   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>   	    !register_is_null(reg) && env->bpf_capable) {
> +		bool reg_value_fits;
> +
>   		if (dst_reg != BPF_REG_FP) {
>   			/* The backtracking logic can only recognize explicit
>   			 * stack slot address like [fp - 8]. Other spill of
> @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   			if (err)
>   				return err;
>   		}
> +
> +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
>   		save_register_state(state, spi, reg, size);
> +		/* Break the relation on a narrowing spill. */
> +		if (!reg_value_fits)
> +			state->stack[spi].spilled_ptr.id = 0;

I think the code can be simplified like below:

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
                                 return err;
                 }
                 save_register_state(state, spi, reg, size);
+               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+                       state->stack[spi].spilled_ptr.id = 0;
         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
                    insn->imm != 0 && env->bpf_capable) {
                 struct bpf_reg_state fake_reg = {};

>   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>   		   insn->imm != 0 && env->bpf_capable) {
>   		struct bpf_reg_state fake_reg = {};


* Re: [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking on spill
  2023-06-06 21:42 ` [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking " Maxim Mikityanskiy
@ 2023-06-07  1:40   ` Yonghong Song
  2023-06-07  1:43   ` Alexei Starovoitov
  1 sibling, 0 replies; 7+ messages in thread
From: Yonghong Song @ 2023-06-07  1:40 UTC (permalink / raw)
  To: Maxim Mikityanskiy, bpf
  Cc: netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer



On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> From: Maxim Mikityanskiy <maxim@isovalent.com>
> 
> The previous commit fixed a verifier bypass by ensuring that ID is not
> preserved on narrowing spills. Add the test cases to check the
> problematic patterns.
> 
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking on spill
  2023-06-06 21:42 ` [PATCH bpf v3 2/2] selftests/bpf: Add test cases to assert proper ID tracking " Maxim Mikityanskiy
  2023-06-07  1:40   ` Yonghong Song
@ 2023-06-07  1:43   ` Alexei Starovoitov
  1 sibling, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2023-06-07  1:43 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: bpf, Network Development, open list:KERNEL SELFTEST FRAMEWORK,
	Daniel Borkmann, John Fastabend, Alexei Starovoitov,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
	Maxim Mikityanskiy, Song Liu, Yonghong Song, KP Singh,
	Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer

On Tue, Jun 6, 2023 at 2:43 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@isovalent.com>
>
> The previous commit fixed a verifier bypass by ensuring that ID is not
> preserved on narrowing spills. Add the test cases to check the
> problematic patterns.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> ---
>  .../selftests/bpf/progs/verifier_spill_fill.c | 198 ++++++++++++++++++
>  1 file changed, 198 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index 136e5530b72c..999677acc8ae 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -371,4 +371,202 @@ __naked void and_then_at_fp_8(void)
>  "      ::: __clobber_all);
>  }
>
> +SEC("xdp")
> +__description("32-bit spill of 64-bit reg should clear ID")
> +__failure __msg("math between ctx pointer and 4294967295 is not allowed")
> +__naked void spill_32bit_of_64bit_fail(void)

It's overkill to test all possible combinations; 32_of_64 and 16_of_32
would be enough.


* Re: [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill
  2023-06-07  1:32   ` Yonghong Song
@ 2023-06-07  7:36     ` Maxim Mikityanskiy
  0 siblings, 0 replies; 7+ messages in thread
From: Maxim Mikityanskiy @ 2023-06-07  7:36 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, netdev, linux-kselftest, Daniel Borkmann, John Fastabend,
	Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Maxim Mikityanskiy, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer

On Tue, 06 Jun 2023 at 18:32:37 -0700, Yonghong Song wrote:
> 
> 
> On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> > From: Maxim Mikityanskiy <maxim@isovalent.com>
> > 
> > The following scenario describes a verifier bypass in privileged mode
> > (CAP_BPF or CAP_SYS_ADMIN):
> > 
> > 1. Prepare a 32-bit rogue number.
> > 2. Put the rogue number into the upper half of a 64-bit register, and
> >     roll a random (unknown to the verifier) bit in the lower half. The
> >     rest of the bits should be zero (although variations are possible).
> > 3. Assign an ID to the register by MOVing it to another arbitrary
> >     register.
> > 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
> >     another register. Due to a bug in the verifier, the ID will be
> >     preserved, although the new register will contain only the lower 32
> >     bits, i.e. all zeros except one random bit.
> > 
> > At this point there are two registers with different values but the same
> > ID, which means the integrity of the verifier state has been corrupted.
> > Next steps show the actual bypass:
> > 
> > 5. Compare the new 32-bit register with 0. In the branch where it's
> >     equal to 0, the verifier will believe that the original 64-bit
> >     register is also 0, because it has the same ID, but its actual value
> >     still contains the rogue number in the upper half.
> >     Some optimizations of the verifier prevent the actual bypass, so
> >     extra care is needed: the comparison must be between two registers,
> >     and both branches must be reachable (this is why one random bit is
> >     needed). Both branches are still suitable for the bypass.
> > 6. Right shift the original register by 32 bits to pop the rogue number.
> > 7. Use the rogue number as an offset with any pointer. The verifier will
> >     believe that the offset is 0, while in reality it's the given number.
> > 
> > The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> > SCALAR_VALUE. If the spill is narrowing the actual register value, don't
> > keep the ID, make sure it's reset to 0.
> > 
> > Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> > Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
> 
> LGTM with a small nit below.
> 
> Acked-by: Yonghong Song <yhs@fb.com>
> 
> > ---
> >   kernel/bpf/verifier.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 5871aa78d01a..7be23eced561 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   	mark_stack_slot_scratched(env, spi);
> >   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> >   	    !register_is_null(reg) && env->bpf_capable) {
> > +		bool reg_value_fits;
> > +
> >   		if (dst_reg != BPF_REG_FP) {
> >   			/* The backtracking logic can only recognize explicit
> >   			 * stack slot address like [fp - 8]. Other spill of
> > @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   			if (err)
> >   				return err;
> >   		}
> > +
> > +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
> >   		save_register_state(state, spi, reg, size);
> > +		/* Break the relation on a narrowing spill. */
> > +		if (!reg_value_fits)
> > +			state->stack[spi].spilled_ptr.id = 0;
> 
> I think the code can be simplied like below:
> 
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>                                 return err;
>                 }
>                 save_register_state(state, spi, reg, size);
> +               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
> +                       state->stack[spi].spilled_ptr.id = 0;
>         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>                    insn->imm != 0 && env->bpf_capable) {
>                 struct bpf_reg_state fake_reg = {};
> 

That's true, I kept the variable to avoid churn when I send a follow-up
improvement:

+               /* Make sure that reg had an ID to build a relation on spill. */
+               if (reg_value_fits && !reg->id)
+                       reg->id = ++env->id_gen;
                save_register_state(state, spi, reg, size);

But yeah, I agree, let's simplify it for now; there is no guarantee that
the follow-up patch will be accepted as is. Thanks for the review!

> >   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
> >   		   insn->imm != 0 && env->bpf_capable) {
> >   		struct bpf_reg_state fake_reg = {};

