public inbox for bpf@vger.kernel.org
From: Eduard Zingerman <eddyz87@gmail.com>
To: Andrii Nakryiko <andrii@kernel.org>,
	bpf@vger.kernel.org, ast@kernel.org,  daniel@iogearbox.net,
	martin.lau@kernel.org
Cc: kernel-team@meta.com
Subject: Re: [PATCH v5 bpf-next 09/23] bpf: drop knowledge-losing __reg_combine_{32,64}_into_{64,32} logic
Date: Tue, 31 Oct 2023 17:38:07 +0200	[thread overview]
Message-ID: <3bf9132b948f35c6626fa5bd84f7c864b2e52677.camel@gmail.com> (raw)
In-Reply-To: <20231027181346.4019398-10-andrii@kernel.org>

On Fri, 2023-10-27 at 11:13 -0700, Andrii Nakryiko wrote:
> > When a 32-bit conditional operation operates on the lower 32 bits of
> > a full 64-bit register, the full register value isn't changed. We just
> > potentially gain new knowledge about that register's lower 32 bits.
> > 
> > Unfortunately, the __reg_combine_{32,64}_into_{64,32} logic that
> > reg_set_min_max() performs as a last step can lose information in some
> > cases due to __mark_reg64_unbounded() and __reg_assign_32_into_64().
> > That's bad and completely unnecessary. __reg_assign_32_into_64() in
> > particular looks completely out of place here, because we are not
> > performing a zero-extending subregister assignment during a conditional jump.
> > 
> > So this patch replaces __reg_combine_* with a plain reg_bounds_sync(),
> > which does a proper job of deriving u64/s64 bounds from u32/s32 bounds,
> > and vice versa (among all other combinations).
> > 
> > __reg_combine_64_into_32() is also used in one more place,
> > coerce_reg_to_size(), while handling 1- and 2-byte register loads.
> > Looking into this, it seems like besides marking the subregister as
> > unbounded before performing reg_bounds_sync(), we were also performing
> > deduction of smin32/smax32 and umin32/umax32 bounds from the respective
> > smin/smax and umin/umax bounds. That deduction is now redundant, as
> > reg_bounds_sync() performs all the same logic more generically (e.g.,
> > without the unnecessary assumption that the upper 32 bits of the full
> > register should be zero).
> > 
> > Long story short, we remove __reg_combine_64_into_32() completely, and
> > coerce_reg_to_size() now just resets the subreg to unbounded and then
> > performs reg_bounds_sync() to recover as much information as possible
> > from the 64-bit umin/umax and smin/smax bounds set explicitly in
> > coerce_reg_to_size() earlier.
> > 
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

> > ---
> >  kernel/bpf/verifier.c | 60 ++++++-------------------------------------
> >  1 file changed, 8 insertions(+), 52 deletions(-)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6b0736c04ebe..f5fcb7fb2c67 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2641,51 +2641,6 @@ static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
> >  	}
> >  }
> >  
> > -static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
> > -{
> > -	/* special case when 64-bit register has upper 32-bit register
> > -	 * zeroed. Typically happens after zext or <<32, >>32 sequence
> > -	 * allowing us to use 32-bit bounds directly,
> > -	 */
> > -	if (tnum_equals_const(tnum_clear_subreg(reg->var_off), 0)) {
> > -		__reg_assign_32_into_64(reg);
> > -	} else {
> > -		/* Otherwise the best we can do is push lower 32bit known and
> > -		 * unknown bits into register (var_off set from jmp logic)
> > -		 * then learn as much as possible from the 64-bit tnum
> > -		 * known and unknown bits. The previous smin/smax bounds are
> > -		 * invalid here because of jmp32 compare so mark them unknown
> > -		 * so they do not impact tnum bounds calculation.
> > -		 */
> > -		__mark_reg64_unbounded(reg);
> > -	}
> > -	reg_bounds_sync(reg);
> > -}
> > -
> > -static bool __reg64_bound_s32(s64 a)
> > -{
> > -	return a >= S32_MIN && a <= S32_MAX;
> > -}
> > -
> > -static bool __reg64_bound_u32(u64 a)
> > -{
> > -	return a >= U32_MIN && a <= U32_MAX;
> > -}
> > -
> > -static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
> > -{
> > -	__mark_reg32_unbounded(reg);
> > -	if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
> > -		reg->s32_min_value = (s32)reg->smin_value;
> > -		reg->s32_max_value = (s32)reg->smax_value;
> > -	}
> > -	if (__reg64_bound_u32(reg->umin_value) && __reg64_bound_u32(reg->umax_value)) {
> > -		reg->u32_min_value = (u32)reg->umin_value;
> > -		reg->u32_max_value = (u32)reg->umax_value;
> > -	}
> > -	reg_bounds_sync(reg);
> > -}
> > -
> >  /* Mark a register as having a completely unknown (scalar) value. */
> >  static void __mark_reg_unknown(const struct bpf_verifier_env *env,
> >  			       struct bpf_reg_state *reg)
> > @@ -6382,9 +6337,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
> >  	 * values are also truncated so we push 64-bit bounds into
> >  	 * 32-bit bounds. Above were truncated < 32-bits already.
> >  	 */
> > -	if (size >= 4)
> > -		return;
> > -	__reg_combine_64_into_32(reg);
> > +	if (size < 4) {
> > +		__mark_reg32_unbounded(reg);
> > +		reg_bounds_sync(reg);
> > +	}
> >  }
> >  
> >  static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
> > @@ -14623,13 +14579,13 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
> >  					     tnum_subreg(false_32off));
> >  		true_reg->var_off = tnum_or(tnum_clear_subreg(true_64off),
> >  					    tnum_subreg(true_32off));
> > -		__reg_combine_32_into_64(false_reg);
> > -		__reg_combine_32_into_64(true_reg);
> > +		reg_bounds_sync(false_reg);
> > +		reg_bounds_sync(true_reg);
> >  	} else {
> >  		false_reg->var_off = false_64off;
> >  		true_reg->var_off = true_64off;
> > -		__reg_combine_64_into_32(false_reg);
> > -		__reg_combine_64_into_32(true_reg);
> > +		reg_bounds_sync(false_reg);
> > +		reg_bounds_sync(true_reg);
> >  	}
> >  }
> >  



Thread overview: 77+ messages
2023-10-27 18:13 [PATCH v5 bpf-next 00/23] BPF register bounds logic and testing improvements Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 01/23] selftests/bpf: fix RELEASE=1 build for tc_opts Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 02/23] selftests/bpf: satisfy compiler by having explicit return in btf test Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 03/23] bpf: derive smin/smax from umin/max bounds Andrii Nakryiko
2023-10-31 15:37   ` Eduard Zingerman
2023-10-31 17:30     ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 04/23] bpf: derive smin32/smax32 from umin32/umax32 bounds Andrii Nakryiko
2023-10-31 15:37   ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 05/23] bpf: derive subreg bounds from full bounds when upper 32 bits are constant Andrii Nakryiko
2023-10-31 15:37   ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 06/23] bpf: add special smin32/smax32 derivation from 64-bit bounds Andrii Nakryiko
2023-10-31 15:37   ` Eduard Zingerman
2023-10-31 17:39     ` Andrii Nakryiko
2023-10-31 18:41       ` Alexei Starovoitov
2023-10-31 18:49         ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 07/23] bpf: improve deduction of 64-bit bounds from 32-bit bounds Andrii Nakryiko
2023-10-31 15:37   ` Eduard Zingerman
2023-10-31 20:26   ` Alexei Starovoitov
2023-10-31 20:33     ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 08/23] bpf: try harder to deduce register bounds from different numeric domains Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 09/23] bpf: drop knowledge-losing __reg_combine_{32,64}_into_{64,32} logic Andrii Nakryiko
2023-10-31 15:38   ` Eduard Zingerman [this message]
2023-10-27 18:13 ` [PATCH v5 bpf-next 10/23] selftests/bpf: BPF register range bounds tester Andrii Nakryiko
2023-11-08 22:08   ` Eduard Zingerman
2023-11-08 23:23     ` Andrii Nakryiko
2023-11-09  0:30       ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 11/23] bpf: rename is_branch_taken reg arguments to prepare for the second one Andrii Nakryiko
2023-10-30 19:39   ` Alexei Starovoitov
2023-10-31  5:19     ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 12/23] bpf: generalize is_branch_taken() to work with two registers Andrii Nakryiko
2023-10-31 15:38   ` Eduard Zingerman
2023-10-31 17:41     ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 13/23] bpf: move is_branch_taken() down Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 14/23] bpf: generalize is_branch_taken to handle all conditional jumps in one place Andrii Nakryiko
2023-10-31 15:38   ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 15/23] bpf: unify 32-bit and 64-bit is_branch_taken logic Andrii Nakryiko
2023-10-30 19:52   ` Alexei Starovoitov
2023-10-31  5:28     ` Andrii Nakryiko
2023-10-31 17:35   ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 16/23] bpf: prepare reg_set_min_max for second set of registers Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 17/23] bpf: generalize reg_set_min_max() to handle two sets of two registers Andrii Nakryiko
2023-10-31  2:02   ` Alexei Starovoitov
2023-10-31  6:03     ` Andrii Nakryiko
2023-10-31 16:23       ` Alexei Starovoitov
2023-10-31 17:50         ` Andrii Nakryiko
2023-10-31 17:56           ` Andrii Nakryiko
2023-10-31 18:04             ` Alexei Starovoitov
2023-10-31 18:06               ` Andrii Nakryiko
2023-10-31 18:14   ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 18/23] bpf: generalize reg_set_min_max() to handle non-const register comparisons Andrii Nakryiko
2023-10-31 23:25   ` Eduard Zingerman
2023-11-01 16:35     ` Andrii Nakryiko
2023-11-01 17:12       ` Eduard Zingerman
2023-10-27 18:13 ` [PATCH v5 bpf-next 19/23] bpf: generalize is_scalar_branch_taken() logic Andrii Nakryiko
2023-10-31  2:12   ` Alexei Starovoitov
2023-10-31  6:12     ` Andrii Nakryiko
2023-10-31 16:34       ` Alexei Starovoitov
2023-10-31 18:01         ` Andrii Nakryiko
2023-10-31 20:53           ` Andrii Nakryiko
2023-10-31 20:55             ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 20/23] bpf: enhance BPF_JEQ/BPF_JNE is_branch_taken logic Andrii Nakryiko
2023-10-31  2:20   ` Alexei Starovoitov
2023-10-31  6:16     ` Andrii Nakryiko
2023-10-31 16:36       ` Alexei Starovoitov
2023-10-31 18:04         ` Andrii Nakryiko
2023-10-31 18:06           ` Alexei Starovoitov
2023-10-27 18:13 ` [PATCH v5 bpf-next 21/23] selftests/bpf: adjust OP_EQ/OP_NE handling to use subranges for branch taken Andrii Nakryiko
2023-11-08 18:22   ` Eduard Zingerman
2023-11-08 19:59     ` Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 22/23] selftests/bpf: add range x range test to reg_bounds Andrii Nakryiko
2023-10-27 18:13 ` [PATCH v5 bpf-next 23/23] selftests/bpf: add iter test requiring range x range logic Andrii Nakryiko
2023-10-30 17:55 ` [PATCH v5 bpf-next 00/23] BPF register bounds logic and testing improvements Alexei Starovoitov
2023-10-31  5:19   ` Andrii Nakryiko
2023-11-01 12:37     ` Paul Chaignon
2023-11-01 17:13       ` Andrii Nakryiko
2023-11-07  6:37         ` Harishankar Vishwanathan
2023-11-07 16:38           ` Paul Chaignon
