From: Eduard Zingerman <eddyz87@gmail.com>
To: Andrii Nakryiko <andrii@kernel.org>,
bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
martin.lau@kernel.org
Cc: kernel-team@meta.com
Subject: Re: [PATCH v5 bpf-next 18/23] bpf: generalize reg_set_min_max() to handle non-const register comparisons
Date: Wed, 01 Nov 2023 01:25:43 +0200
Message-ID: <2b4d9d4728b77bd5781cd1bd7110c12af2aefc35.camel@gmail.com>
In-Reply-To: <20231027181346.4019398-19-andrii@kernel.org>
On Fri, 2023-10-27 at 11:13 -0700, Andrii Nakryiko wrote:
> Generalize bounds adjustment logic of reg_set_min_max() to handle not
> just register vs constant case, but in general any register vs any
> register cases. For most of the operations it's trivial extension based
> on range vs range comparison logic, we just need to properly pick
> min/max of a range to compare against min/max of the other range.
>
> For BPF_JSET we keep the original capabilities, just make sure JSET is
> integrated in the common framework. This is manifested in the
> internal-only BPF_JSET + BPF_X "opcode" to allow for simpler and more
> uniform rev_opcode() handling. See the code for details. This allows us to
> reuse the same code exactly both for TRUE and FALSE branches without
> explicitly handling both conditions with custom code.
>
> Note also that we no longer need special handling of the BPF_JEQ/BPF_JNE
> case in which none of the registers are constants. This is now just a
> normal generic case handled by reg_set_min_max().
>
> To make tnum handling cleaner, tnum_with_subreg() helper is added, as
> that's a common operator when dealing with 32-bit subregister bounds.
> This keeps the overall logic much less noisy when it comes to tnums.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> include/linux/tnum.h | 4 +
> kernel/bpf/tnum.c | 7 +-
> kernel/bpf/verifier.c | 321 +++++++++++++++++++-----------------------
> 3 files changed, 157 insertions(+), 175 deletions(-)
>
> diff --git a/include/linux/tnum.h b/include/linux/tnum.h
> index 1c3948a1d6ad..3c13240077b8 100644
> --- a/include/linux/tnum.h
> +++ b/include/linux/tnum.h
> @@ -106,6 +106,10 @@ int tnum_sbin(char *str, size_t size, struct tnum a);
> struct tnum tnum_subreg(struct tnum a);
> /* Returns the tnum with the lower 32-bit subreg cleared */
> struct tnum tnum_clear_subreg(struct tnum a);
> +/* Returns the tnum with the lower 32-bit subreg in *reg* set to the lower
> + * 32-bit subreg in *subreg*
> + */
> +struct tnum tnum_with_subreg(struct tnum reg, struct tnum subreg);
> /* Returns the tnum with the lower 32-bit subreg set to value */
> struct tnum tnum_const_subreg(struct tnum a, u32 value);
> /* Returns true if 32-bit subreg @a is a known constant*/
> diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
> index 3d7127f439a1..f4c91c9b27d7 100644
> --- a/kernel/bpf/tnum.c
> +++ b/kernel/bpf/tnum.c
> @@ -208,7 +208,12 @@ struct tnum tnum_clear_subreg(struct tnum a)
> return tnum_lshift(tnum_rshift(a, 32), 32);
> }
>
> +struct tnum tnum_with_subreg(struct tnum reg, struct tnum subreg)
> +{
> + return tnum_or(tnum_clear_subreg(reg), tnum_subreg(subreg));
> +}
> +
> struct tnum tnum_const_subreg(struct tnum a, u32 value)
> {
> - return tnum_or(tnum_clear_subreg(a), tnum_const(value));
> + return tnum_with_subreg(a, tnum_const(value));
> }
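As a sanity check on the new helper's semantics, here is a minimal userspace model of tnum (names mirror include/linux/tnum.h, but this is my own sketch, not the kernel code) showing that tnum_with_subreg() splices the low 32 bits of @subreg into @reg while keeping @reg's upper 32 bits, for both known and unknown bits:

```c
#include <assert.h>
#include <stdint.h>

/* toy tnum: @value holds known-1 bits, @mask holds unknown bits */
struct tnum { uint64_t value; uint64_t mask; };

static struct tnum tnum_or(struct tnum a, struct tnum b)
{
	uint64_t v = a.value | b.value;
	uint64_t mu = a.mask | b.mask;

	return (struct tnum){ .value = v, .mask = mu & ~v };
}

static struct tnum tnum_lshift(struct tnum a, uint8_t shift)
{
	return (struct tnum){ a.value << shift, a.mask << shift };
}

static struct tnum tnum_rshift(struct tnum a, uint8_t shift)
{
	return (struct tnum){ a.value >> shift, a.mask >> shift };
}

/* keep only the lower 32 bits */
static struct tnum tnum_subreg(struct tnum a)
{
	return (struct tnum){ (uint32_t)a.value, (uint32_t)a.mask };
}

/* zero the lower 32 bits, keep the upper ones */
static struct tnum tnum_clear_subreg(struct tnum a)
{
	return tnum_lshift(tnum_rshift(a, 32), 32);
}

/* upper 32 bits from @reg, lower 32 bits from @subreg */
static struct tnum tnum_with_subreg(struct tnum reg, struct tnum subreg)
{
	return tnum_or(tnum_clear_subreg(reg), tnum_subreg(subreg));
}
```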
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 522566699fbe..4c974296127b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14381,217 +14381,201 @@ static int is_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_state *reg
> return is_scalar_branch_taken(reg1, reg2, opcode, is_jmp32);
> }
>
> -/* Adjusts the register min/max values in the case that the dst_reg is the
> - * variable register that we are working on, and src_reg is a constant or we're
> - * simply doing a BPF_K check.
> - * In JEQ/JNE cases we also adjust the var_off values.
> +/* Opcode that corresponds to a *false* branch condition.
> + * E.g., if r1 < r2, then reverse (false) condition is r1 >= r2
> */
> -static void reg_set_min_max(struct bpf_reg_state *true_reg1,
> - struct bpf_reg_state *true_reg2,
> - struct bpf_reg_state *false_reg1,
> - struct bpf_reg_state *false_reg2,
> - u8 opcode, bool is_jmp32)
> +static u8 rev_opcode(u8 opcode)
Note: this duplicates flip_opcode() (modulo BPF_JSET).
> {
> - struct tnum false_32off, false_64off;
> - struct tnum true_32off, true_64off;
> - u64 val;
> - u32 val32;
> - s64 sval;
> - s32 sval32;
> -
> - /* If either register is a pointer, we can't learn anything about its
> - * variable offset from the compare (unless they were a pointer into
> - * the same object, but we don't bother with that).
> + switch (opcode) {
> + case BPF_JEQ: return BPF_JNE;
> + case BPF_JNE: return BPF_JEQ;
> + /* JSET doesn't have its reverse opcode in BPF, so add
> + * BPF_X flag to denote the reverse of that operation
> */
> - if (false_reg1->type != SCALAR_VALUE || false_reg2->type != SCALAR_VALUE)
> - return;
> -
> - /* we expect right-hand registers (src ones) to be constants, for now */
> - if (!is_reg_const(false_reg2, is_jmp32)) {
> - opcode = flip_opcode(opcode);
> - swap(true_reg1, true_reg2);
> - swap(false_reg1, false_reg2);
> + case BPF_JSET: return BPF_JSET | BPF_X;
> + case BPF_JSET | BPF_X: return BPF_JSET;
> + case BPF_JGE: return BPF_JLT;
> + case BPF_JGT: return BPF_JLE;
> + case BPF_JLE: return BPF_JGT;
> + case BPF_JLT: return BPF_JGE;
> + case BPF_JSGE: return BPF_JSLT;
> + case BPF_JSGT: return BPF_JSLE;
> + case BPF_JSLE: return BPF_JSGT;
> + case BPF_JSLT: return BPF_JSGE;
> + default: return 0;
> }
> - if (!is_reg_const(false_reg2, is_jmp32))
> - return;
> +}
>
> - false_32off = tnum_subreg(false_reg1->var_off);
> - false_64off = false_reg1->var_off;
> - true_32off = tnum_subreg(true_reg1->var_off);
> - true_64off = true_reg1->var_off;
> - val = false_reg2->var_off.value;
> - val32 = (u32)tnum_subreg(false_reg2->var_off).value;
> - sval = (s64)val;
> - sval32 = (s32)val32;
> +/* Refine range knowledge for <reg1> <op> <reg2> conditional operation. */
> +static void regs_refine_cond_op(struct bpf_reg_state *reg1, struct bpf_reg_state *reg2,
> + u8 opcode, bool is_jmp32)
> +{
> + struct tnum t;
>
> switch (opcode) {
> - /* JEQ/JNE comparison doesn't change the register equivalence.
> - *
> - * r1 = r2;
> - * if (r1 == 42) goto label;
> - * ...
> - * label: // here both r1 and r2 are known to be 42.
> - *
> - * Hence when marking register as known preserve it's ID.
> - */
> case BPF_JEQ:
> if (is_jmp32) {
> - __mark_reg32_known(true_reg1, val32);
> - true_32off = tnum_subreg(true_reg1->var_off);
> + reg1->u32_min_value = max(reg1->u32_min_value, reg2->u32_min_value);
> + reg1->u32_max_value = min(reg1->u32_max_value, reg2->u32_max_value);
> + reg1->s32_min_value = max(reg1->s32_min_value, reg2->s32_min_value);
> + reg1->s32_max_value = min(reg1->s32_max_value, reg2->s32_max_value);
> + reg2->u32_min_value = reg1->u32_min_value;
> + reg2->u32_max_value = reg1->u32_max_value;
> + reg2->s32_min_value = reg1->s32_min_value;
> + reg2->s32_max_value = reg1->s32_max_value;
> +
> + t = tnum_intersect(tnum_subreg(reg1->var_off), tnum_subreg(reg2->var_off));
> + reg1->var_off = tnum_with_subreg(reg1->var_off, t);
> + reg2->var_off = tnum_with_subreg(reg2->var_off, t);
> } else {
> - ___mark_reg_known(true_reg1, val);
> - true_64off = true_reg1->var_off;
> + reg1->umin_value = max(reg1->umin_value, reg2->umin_value);
> + reg1->umax_value = min(reg1->umax_value, reg2->umax_value);
> + reg1->smin_value = max(reg1->smin_value, reg2->smin_value);
> + reg1->smax_value = min(reg1->smax_value, reg2->smax_value);
> + reg2->umin_value = reg1->umin_value;
> + reg2->umax_value = reg1->umax_value;
> + reg2->smin_value = reg1->smin_value;
> + reg2->smax_value = reg1->smax_value;
> +
> + reg1->var_off = tnum_intersect(reg1->var_off, reg2->var_off);
> + reg2->var_off = reg1->var_off;
> }
> break;
> case BPF_JNE:
> + /* we don't derive any new information for inequality yet */
> + break;
> + case BPF_JSET:
> + case BPF_JSET | BPF_X: { /* BPF_JSET and its reverse, see rev_opcode() */
> + u64 val;
> +
> + if (!is_reg_const(reg2, is_jmp32))
> + swap(reg1, reg2);
> + if (!is_reg_const(reg2, is_jmp32))
> + break;
> +
> + val = reg_const_value(reg2, is_jmp32);
> + /* BPF_JSET requires single bit to learn something useful */
> + if (!(opcode & BPF_X) && !is_power_of_2(val))
Could you please extend comment a bit, e.g. as follows:
/* For BPF_JSET true branch (!(opcode & BPF_X)) a single bit
* is needed to learn something useful.
*/
For some reason it took me a while to understand this condition :(
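To illustrate the condition discussed above: on the JSET true branch, "r1 & val != 0" pins a bit of r1 only when val is a single bit; with several bits set we only learn that *some* bit matched, not which one. A brute force over 8-bit values makes this concrete (helper names are mine):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool is_power_of_2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* true iff "x & val != 0" implies every bit of val is set in x,
 * i.e. the true branch may soundly mark val's bits as known-1
 */
static bool jset_true_pins_bits(uint8_t val)
{
	unsigned int x;

	for (x = 0; x < 256; x++)
		if ((x & val) && (x & val) != val)
			return false;
	return true;
}
```

The false branch (BPF_JSET | BPF_X) needs no such restriction: "x & val == 0" clears all of val's bits in x regardless of how many bits val has.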
> + break;
> +
> if (is_jmp32) {
> - __mark_reg32_known(false_reg1, val32);
> - false_32off = tnum_subreg(false_reg1->var_off);
> + if (opcode & BPF_X)
> + t = tnum_and(tnum_subreg(reg1->var_off), tnum_const(~val));
> + else
> + t = tnum_or(tnum_subreg(reg1->var_off), tnum_const(val));
> + reg1->var_off = tnum_with_subreg(reg1->var_off, t);
> } else {
> - ___mark_reg_known(false_reg1, val);
> - false_64off = false_reg1->var_off;
> + if (opcode & BPF_X)
> + reg1->var_off = tnum_and(reg1->var_off, tnum_const(~val));
> + else
> + reg1->var_off = tnum_or(reg1->var_off, tnum_const(val));
> }
> break;
> - case BPF_JSET:
> + }
> + case BPF_JGE:
> if (is_jmp32) {
> - false_32off = tnum_and(false_32off, tnum_const(~val32));
> - if (is_power_of_2(val32))
> - true_32off = tnum_or(true_32off,
> - tnum_const(val32));
> + reg1->u32_min_value = max(reg1->u32_min_value, reg2->u32_min_value);
> + reg2->u32_max_value = min(reg1->u32_max_value, reg2->u32_max_value);
> } else {
> - false_64off = tnum_and(false_64off, tnum_const(~val));
> - if (is_power_of_2(val))
> - true_64off = tnum_or(true_64off,
> - tnum_const(val));
> + reg1->umin_value = max(reg1->umin_value, reg2->umin_value);
> + reg2->umax_value = min(reg1->umax_value, reg2->umax_value);
> }
> break;
> - case BPF_JGE:
> case BPF_JGT:
> - {
> if (is_jmp32) {
> - u32 false_umax = opcode == BPF_JGT ? val32 : val32 - 1;
> - u32 true_umin = opcode == BPF_JGT ? val32 + 1 : val32;
> -
> - false_reg1->u32_max_value = min(false_reg1->u32_max_value,
> - false_umax);
> - true_reg1->u32_min_value = max(true_reg1->u32_min_value,
> - true_umin);
> + reg1->u32_min_value = max(reg1->u32_min_value, reg2->u32_min_value + 1);
Question: This branch means that reg1 > reg2, right?
If so, why not use reg2->u32_MAX_value, e.g.:
reg1->u32_min_value = max(reg1->u32_min_value, reg2->u32_max_value + 1);
Do I miss something?
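(Partially answering my own question with a brute force over small ranges: at runtime reg2 may sit anywhere in its range, including its minimum, so only reg2's *minimum* plus one is a sound lower bound for reg1; reg2->u32_max_value + 1 would exclude feasible values of reg1. Userspace sketch, helper name is mine:)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* with r2 anywhere in [l2, h2], can the r1 > r2 branch be taken
 * while r1 == cand? (small ranges only, h2 must be < UINT32_MAX)
 */
static bool feasible_r1(uint32_t cand, uint32_t l2, uint32_t h2)
{
	uint32_t r2;

	for (r2 = l2; r2 <= h2; r2++)
		if (cand > r2)
			return true;
	return false;
}
```

With r2 in [4, 7], r1 values 5..7 are all feasible on the true branch, so the bound must be l2 + 1 = 5, not h2 + 1 = 8.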
> + reg2->u32_max_value = min(reg1->u32_max_value - 1, reg2->u32_max_value);
> } else {
> - u64 false_umax = opcode == BPF_JGT ? val : val - 1;
> - u64 true_umin = opcode == BPF_JGT ? val + 1 : val;
> -
> - false_reg1->umax_value = min(false_reg1->umax_value, false_umax);
> - true_reg1->umin_value = max(true_reg1->umin_value, true_umin);
> + reg1->umin_value = max(reg1->umin_value, reg2->umin_value + 1);
> + reg2->umax_value = min(reg1->umax_value - 1, reg2->umax_value);
> }
> break;
> - }
> case BPF_JSGE:
> + if (is_jmp32) {
> + reg1->s32_min_value = max(reg1->s32_min_value, reg2->s32_min_value);
> + reg2->s32_max_value = min(reg1->s32_max_value, reg2->s32_max_value);
> + } else {
> + reg1->smin_value = max(reg1->smin_value, reg2->smin_value);
> + reg2->smax_value = min(reg1->smax_value, reg2->smax_value);
> + }
> + break;
> case BPF_JSGT:
> - {
> if (is_jmp32) {
> - s32 false_smax = opcode == BPF_JSGT ? sval32 : sval32 - 1;
> - s32 true_smin = opcode == BPF_JSGT ? sval32 + 1 : sval32;
> -
> - false_reg1->s32_max_value = min(false_reg1->s32_max_value, false_smax);
> - true_reg1->s32_min_value = max(true_reg1->s32_min_value, true_smin);
> + reg1->s32_min_value = max(reg1->s32_min_value, reg2->s32_min_value + 1);
> + reg2->s32_max_value = min(reg1->s32_max_value - 1, reg2->s32_max_value);
> } else {
> - s64 false_smax = opcode == BPF_JSGT ? sval : sval - 1;
> - s64 true_smin = opcode == BPF_JSGT ? sval + 1 : sval;
> -
> - false_reg1->smax_value = min(false_reg1->smax_value, false_smax);
> - true_reg1->smin_value = max(true_reg1->smin_value, true_smin);
> + reg1->smin_value = max(reg1->smin_value, reg2->smin_value + 1);
> + reg2->smax_value = min(reg1->smax_value - 1, reg2->smax_value);
> }
> break;
> - }
> case BPF_JLE:
> + if (is_jmp32) {
> + reg1->u32_max_value = min(reg1->u32_max_value, reg2->u32_max_value);
> + reg2->u32_min_value = max(reg1->u32_min_value, reg2->u32_min_value);
> + } else {
> + reg1->umax_value = min(reg1->umax_value, reg2->umax_value);
> + reg2->umin_value = max(reg1->umin_value, reg2->umin_value);
> + }
> + break;
> case BPF_JLT:
> - {
> if (is_jmp32) {
> - u32 false_umin = opcode == BPF_JLT ? val32 : val32 + 1;
> - u32 true_umax = opcode == BPF_JLT ? val32 - 1 : val32;
> -
> - false_reg1->u32_min_value = max(false_reg1->u32_min_value,
> - false_umin);
> - true_reg1->u32_max_value = min(true_reg1->u32_max_value,
> - true_umax);
> + reg1->u32_max_value = min(reg1->u32_max_value, reg2->u32_max_value - 1);
> + reg2->u32_min_value = max(reg1->u32_min_value + 1, reg2->u32_min_value);
> } else {
> - u64 false_umin = opcode == BPF_JLT ? val : val + 1;
> - u64 true_umax = opcode == BPF_JLT ? val - 1 : val;
> -
> - false_reg1->umin_value = max(false_reg1->umin_value, false_umin);
> - true_reg1->umax_value = min(true_reg1->umax_value, true_umax);
> + reg1->umax_value = min(reg1->umax_value, reg2->umax_value - 1);
> + reg2->umin_value = max(reg1->umin_value + 1, reg2->umin_value);
> }
> break;
> - }
> case BPF_JSLE:
> + if (is_jmp32) {
> + reg1->s32_max_value = min(reg1->s32_max_value, reg2->s32_max_value);
> + reg2->s32_min_value = max(reg1->s32_min_value, reg2->s32_min_value);
> + } else {
> + reg1->smax_value = min(reg1->smax_value, reg2->smax_value);
> + reg2->smin_value = max(reg1->smin_value, reg2->smin_value);
> + }
> + break;
> case BPF_JSLT:
> - {
> if (is_jmp32) {
> - s32 false_smin = opcode == BPF_JSLT ? sval32 : sval32 + 1;
> - s32 true_smax = opcode == BPF_JSLT ? sval32 - 1 : sval32;
> -
> - false_reg1->s32_min_value = max(false_reg1->s32_min_value, false_smin);
> - true_reg1->s32_max_value = min(true_reg1->s32_max_value, true_smax);
> + reg1->s32_max_value = min(reg1->s32_max_value, reg2->s32_max_value - 1);
> + reg2->s32_min_value = max(reg1->s32_min_value + 1, reg2->s32_min_value);
> } else {
> - s64 false_smin = opcode == BPF_JSLT ? sval : sval + 1;
> - s64 true_smax = opcode == BPF_JSLT ? sval - 1 : sval;
> -
> - false_reg1->smin_value = max(false_reg1->smin_value, false_smin);
> - true_reg1->smax_value = min(true_reg1->smax_value, true_smax);
> + reg1->smax_value = min(reg1->smax_value, reg2->smax_value - 1);
> + reg2->smin_value = max(reg1->smin_value + 1, reg2->smin_value);
> }
> break;
> - }
> default:
> return;
> }
> -
> - if (is_jmp32) {
> - false_reg1->var_off = tnum_or(tnum_clear_subreg(false_64off),
> - tnum_subreg(false_32off));
> - true_reg1->var_off = tnum_or(tnum_clear_subreg(true_64off),
> - tnum_subreg(true_32off));
> - reg_bounds_sync(false_reg1);
> - reg_bounds_sync(true_reg1);
> - } else {
> - false_reg1->var_off = false_64off;
> - true_reg1->var_off = true_64off;
> - reg_bounds_sync(false_reg1);
> - reg_bounds_sync(true_reg1);
> - }
> -}
> -
> -/* Regs are known to be equal, so intersect their min/max/var_off */
> -static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
> - struct bpf_reg_state *dst_reg)
> -{
> - src_reg->umin_value = dst_reg->umin_value = max(src_reg->umin_value,
> - dst_reg->umin_value);
> - src_reg->umax_value = dst_reg->umax_value = min(src_reg->umax_value,
> - dst_reg->umax_value);
> - src_reg->smin_value = dst_reg->smin_value = max(src_reg->smin_value,
> - dst_reg->smin_value);
> - src_reg->smax_value = dst_reg->smax_value = min(src_reg->smax_value,
> - dst_reg->smax_value);
> - src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
> - dst_reg->var_off);
> - reg_bounds_sync(src_reg);
> - reg_bounds_sync(dst_reg);
> }
>
> -static void reg_combine_min_max(struct bpf_reg_state *true_src,
> - struct bpf_reg_state *true_dst,
> - struct bpf_reg_state *false_src,
> - struct bpf_reg_state *false_dst,
> - u8 opcode)
> +/* Adjusts the register min/max values in the case that the dst_reg is the
> + * variable register that we are working on, and src_reg is a constant or we're
> + * simply doing a BPF_K check.
> + * In JEQ/JNE cases we also adjust the var_off values.
> + */
> +static void reg_set_min_max(struct bpf_reg_state *true_reg1,
> + struct bpf_reg_state *true_reg2,
> + struct bpf_reg_state *false_reg1,
> + struct bpf_reg_state *false_reg2,
> + u8 opcode, bool is_jmp32)
> {
> - switch (opcode) {
> - case BPF_JEQ:
> - __reg_combine_min_max(true_src, true_dst);
> - break;
> - case BPF_JNE:
> - __reg_combine_min_max(false_src, false_dst);
> - break;
> - }
> + /* If either register is a pointer, we can't learn anything about its
> + * variable offset from the compare (unless they were a pointer into
> + * the same object, but we don't bother with that).
> + */
> + if (false_reg1->type != SCALAR_VALUE || false_reg2->type != SCALAR_VALUE)
> + return;
> +
> + /* fallthrough (FALSE) branch */
> + regs_refine_cond_op(false_reg1, false_reg2, rev_opcode(opcode), is_jmp32);
> + reg_bounds_sync(false_reg1);
> + reg_bounds_sync(false_reg2);
> +
> + /* jump (TRUE) branch */
> + regs_refine_cond_op(true_reg1, true_reg2, opcode, is_jmp32);
> + reg_bounds_sync(true_reg1);
> + reg_bounds_sync(true_reg2);
> }
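To convince myself the unified flow composes correctly, a toy unsigned-interval model of the TRUE/FALSE branch refinement for BPF_JGT, mirroring the shape of regs_refine_cond_op() above (my own sketch, not the verifier code; assumes the branch in question is actually takeable, so no underflow in umax - 1):

```c
#include <assert.h>
#include <stdint.h>

struct range { uint64_t umin, umax; };

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

/* TRUE branch of JGT: r1 > r2 held */
static void refine_jgt(struct range *r1, struct range *r2)
{
	r1->umin = max_u64(r1->umin, r2->umin + 1);
	r2->umax = min_u64(r1->umax - 1, r2->umax);
}

/* FALSE branch, rev_opcode(JGT) = JLE: r1 <= r2 held */
static void refine_jle(struct range *r1, struct range *r2)
{
	r1->umax = min_u64(r1->umax, r2->umax);
	r2->umin = max_u64(r1->umin, r2->umin);
}
```

E.g. with r1 in [0, 10] and r2 in [4, 7]: the true branch tightens r1 to [5, 10] and leaves r2 at [4, 7]; the false branch tightens r1 to [0, 7].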
>
> static void mark_ptr_or_null_reg(struct bpf_func_state *state,
> @@ -14895,21 +14879,10 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
> reg_set_min_max(&other_branch_regs[insn->dst_reg],
> &other_branch_regs[insn->src_reg],
> dst_reg, src_reg, opcode, is_jmp32);
> -
> - if (dst_reg->type == SCALAR_VALUE &&
> - src_reg->type == SCALAR_VALUE &&
> - !is_jmp32 && (opcode == BPF_JEQ || opcode == BPF_JNE)) {
> - /* Comparing for equality, we can combine knowledge */
> - reg_combine_min_max(&other_branch_regs[insn->src_reg],
> - &other_branch_regs[insn->dst_reg],
> - src_reg, dst_reg, opcode);
> - }
> } else if (dst_reg->type == SCALAR_VALUE) {
> - reg_set_min_max(&other_branch_regs[insn->dst_reg], src_reg, /* fake one */
> - dst_reg, src_reg /* same fake one */,
> - opcode, is_jmp32);
> + reg_set_min_max(&other_branch_regs[insn->dst_reg], src_reg /* fake */,
> + dst_reg, src_reg, opcode, is_jmp32);
> }
> -
> if (BPF_SRC(insn->code) == BPF_X &&
> src_reg->type == SCALAR_VALUE && src_reg->id &&
> !WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) {