bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast()
@ 2025-06-25  0:05 Eduard Zingerman
  2025-06-25  0:05 ` [PATCH bpf-next v1 1/3] bpf: add bpf_features enum Eduard Zingerman
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:05 UTC (permalink / raw)
  To: bpf, ast, andrii; +Cc: daniel, martin.lau, kernel-team, yonghong.song, eddyz87

Currently, pointers returned by `bpf_rdonly_cast()` have a type of
"pointer to btf id", and only casts to structure types are allowed.
Access to memory pointed to by these pointers is done through
`BPF_PROBE_{MEM,MEMSX}` instructions and does not produce errors on
invalid memory access.

This patch set extends `bpf_rdonly_cast()` to allow casts to an
equivalent of 'void *', effectively replacing
`bpf_probe_read_kernel()` calls in situations where access to
individual bytes or integers is necessary.
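For example, reading an integer through an arbitrary kernel address, which today requires a helper call, could instead be written as a direct load. A hypothetical fragment assuming a BPF program context (`src` is an arbitrary kernel address, not a name from this series):

```c
/* Before: copy the bytes out through a helper. */
u64 val = 0;
bpf_probe_read_kernel(&val, sizeof(val), src);

/* After: cast to an untyped read-only pointer and load directly;
 * an invalid address reads as zero rather than faulting. */
u64 *p = bpf_rdonly_cast(src, 0);
u64 val2 = *p;
```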

The mechanism was suggested and explored by Andrii Nakryiko in [1].

To help with detecting support for this feature, an
`enum bpf_features` is added with intended usage as follows:

  if (bpf_core_enum_value_exists(enum bpf_features,
                                 BPF_FEAT_RDONLY_CAST_TO_VOID))
    ...

[1] https://github.com/anakryiko/linux/tree/bpf-mem-cast

Changelog:
v1: https://lore.kernel.org/bpf/20250624191009.902874-1-eddyz87@gmail.com/
v1 -> v2:
- renamed BPF_FEAT_TOTAL to __MAX_BPF_FEAT and moved patch introducing
  bpf_features enum to the start of the series (Alexei);
- dropped patch #3, which allowed opting out from the CAP_SYS_ADMIN
  drop in prog_tests/verifier.c; a separate runner in prog_tests/*
  is used instead.

Eduard Zingerman (3):
  bpf: add bpf_features enum
  bpf: allow void* cast using bpf_rdonly_cast()
  selftests/bpf: check operations on untrusted ro pointers to mem

 kernel/bpf/verifier.c                         |  79 ++++++++--
 .../bpf/prog_tests/mem_rdonly_untrusted.c     |   9 ++
 .../bpf/progs/mem_rdonly_untrusted.c          | 136 ++++++++++++++++++
 3 files changed, 212 insertions(+), 12 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c
 create mode 100644 tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c

-- 
2.47.1



* [PATCH bpf-next v1 1/3] bpf: add bpf_features enum
  2025-06-25  0:05 [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
@ 2025-06-25  0:05 ` Eduard Zingerman
  2025-06-25  0:05 ` [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:05 UTC (permalink / raw)
  To: bpf, ast, andrii; +Cc: daniel, martin.lau, kernel-team, yonghong.song, eddyz87

This commit adds a kernel-side enum for use in conjunction with the
BTF CO-RE bpf_core_enum_value_exists macro. The goal of the enum is
to assist with detection of available BPF features. Intended usage
looks as follows:

  if (bpf_core_enum_value_exists(enum bpf_features, BPF_FEAT_<f>))
     ... use feature f ...

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/verifier.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 279a64933262..71de4c9487d5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -44,6 +44,10 @@ static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
 #undef BPF_LINK_TYPE
 };
 
+enum bpf_features {
+	__MAX_BPF_FEAT = 0,
+};
+
 struct bpf_mem_alloc bpf_global_percpu_ma;
 static bool bpf_global_percpu_ma_set;
 
@@ -24388,6 +24392,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	u32 log_true_size;
 	bool is_priv;
 
+	BTF_TYPE_EMIT(enum bpf_features);
+
 	/* no program is valid */
 	if (ARRAY_SIZE(bpf_verifier_ops) == 0)
 		return -EINVAL;
-- 
2.47.1



* [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:05 [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
  2025-06-25  0:05 ` [PATCH bpf-next v1 1/3] bpf: add bpf_features enum Eduard Zingerman
@ 2025-06-25  0:05 ` Eduard Zingerman
  2025-06-25  0:11   ` Alexei Starovoitov
  2025-06-25  0:05 ` [PATCH bpf-next v1 3/3] selftests/bpf: check operations on untrusted ro pointers to mem Eduard Zingerman
  2025-06-25  0:07 ` [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
  3 siblings, 1 reply; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:05 UTC (permalink / raw)
  To: bpf, ast, andrii
  Cc: daniel, martin.lau, kernel-team, yonghong.song, eddyz87,
	Alexei Starovoitov, Andrii Nakryiko

Introduce support for `bpf_rdonly_cast(v, 0)`, which casts the value
`v` to an untyped, untrusted pointer, logically similar to a `void *`.
The memory pointed to by such a pointer is treated as read-only.
As with other untrusted pointers, memory access violations on loads
return zero instead of causing a fault.

Technically:
- The resulting pointer is represented as a register of type
  `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED` with size zero.
- Offsets within such pointers are not tracked.
- The same load instruction is allowed to have both
  `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED` and `PTR_TO_BTF_ID`
  as base pointer types.
  In such cases, `bpf_insn_aux_data->ptr_type` is set to the
  weaker of the two: `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED`.

The following constraints apply to the new pointer type:
- can be used as a base for LDX instructions;
- can't be used as a base for ST/STX or atomic instructions;
- can't be passed as a parameter to kfuncs or helpers.

These constraints are enforced by the existing handling of the
`MEM_RDONLY` flag and of `PTR_TO_MEM` regions with size zero.

Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/verifier.c | 75 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 62 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 71de4c9487d5..6b2c38b7a7b6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -45,7 +45,8 @@ static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
 };
 
 enum bpf_features {
-	__MAX_BPF_FEAT = 0,
+	BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
+	__MAX_BPF_FEAT = 1,
 };
 
 struct bpf_mem_alloc bpf_global_percpu_ma;
@@ -7539,6 +7540,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		}
 	} else if (base_type(reg->type) == PTR_TO_MEM) {
 		bool rdonly_mem = type_is_rdonly_mem(reg->type);
+		bool rdonly_untrusted = rdonly_mem && (reg->type & PTR_UNTRUSTED);
 
 		if (type_may_be_null(reg->type)) {
 			verbose(env, "R%d invalid mem access '%s'\n", regno,
@@ -7558,8 +7560,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_mem_region_access(env, regno, off, size,
-					      reg->mem_size, false);
+		/*
+		 * Accesses to untrusted PTR_TO_MEM are done through probe
+		 * instructions, hence no need to check bounds in that case.
+		 */
+		if (!rdonly_untrusted)
+			err = check_mem_region_access(env, regno, off, size,
+						      reg->mem_size, false);
 		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_CTX) {
@@ -13606,16 +13613,24 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
 		regs[BPF_REG_0].btf_id = meta->ret_btf_id;
 	} else if (meta->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
 		ret_t = btf_type_by_id(desc_btf, meta->arg_constant.value);
-		if (!ret_t || !btf_type_is_struct(ret_t)) {
+		if (!ret_t) {
+			verbose(env, "Unknown type ID %lld passed to kfunc bpf_rdonly_cast\n",
+				meta->arg_constant.value);
+			return -EINVAL;
+		} else if (btf_type_is_struct(ret_t)) {
+			mark_reg_known_zero(env, regs, BPF_REG_0);
+			regs[BPF_REG_0].type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
+			regs[BPF_REG_0].btf = desc_btf;
+			regs[BPF_REG_0].btf_id = meta->arg_constant.value;
+		} else if (btf_type_is_void(ret_t)) {
+			mark_reg_known_zero(env, regs, BPF_REG_0);
+			regs[BPF_REG_0].type = PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED;
+			regs[BPF_REG_0].mem_size = 0;
+		} else {
 			verbose(env,
-				"kfunc bpf_rdonly_cast type ID argument must be of a struct\n");
+				"kfunc bpf_rdonly_cast type ID argument must be of a struct or void\n");
 			return -EINVAL;
 		}
-
-		mark_reg_known_zero(env, regs, BPF_REG_0);
-		regs[BPF_REG_0].type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
-		regs[BPF_REG_0].btf = desc_btf;
-		regs[BPF_REG_0].btf_id = meta->arg_constant.value;
 	} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_slice] ||
 		   meta->func_id == special_kfunc_list[KF_bpf_dynptr_slice_rdwr]) {
 		enum bpf_type_flag type_flag = get_dynptr_type_flag(meta->initialized_dynptr.type);
@@ -14414,6 +14429,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
+	/*
+	 * Accesses to untrusted PTR_TO_MEM are done through probe
+	 * instructions, hence no need to track offsets.
+	 */
+	if (base_type(ptr_reg->type) == PTR_TO_MEM && (ptr_reg->type & PTR_UNTRUSTED))
+		return 0;
+
 	switch (base_type(ptr_reg->type)) {
 	case PTR_TO_CTX:
 	case PTR_TO_MAP_VALUE:
@@ -19571,10 +19593,27 @@ static bool reg_type_mismatch(enum bpf_reg_type src, enum bpf_reg_type prev)
 			       !reg_type_mismatch_ok(prev));
 }
 
+static bool is_ptr_to_mem_or_btf_id(enum bpf_reg_type type)
+{
+	switch (base_type(type)) {
+	case PTR_TO_MEM:
+	case PTR_TO_BTF_ID:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static bool is_ptr_to_mem(enum bpf_reg_type type)
+{
+	return base_type(type) == PTR_TO_MEM;
+}
+
 static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
 			     bool allow_trust_mismatch)
 {
 	enum bpf_reg_type *prev_type = &env->insn_aux_data[env->insn_idx].ptr_type;
+	enum bpf_reg_type merged_type;
 
 	if (*prev_type == NOT_INIT) {
 		/* Saw a valid insn
@@ -19591,15 +19630,24 @@ static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type typ
 		 * Reject it.
 		 */
 		if (allow_trust_mismatch &&
-		    base_type(type) == PTR_TO_BTF_ID &&
-		    base_type(*prev_type) == PTR_TO_BTF_ID) {
+		    is_ptr_to_mem_or_btf_id(type) &&
+		    is_ptr_to_mem_or_btf_id(*prev_type)) {
 			/*
 			 * Have to support a use case when one path through
 			 * the program yields TRUSTED pointer while another
 			 * is UNTRUSTED. Fallback to UNTRUSTED to generate
 			 * BPF_PROBE_MEM/BPF_PROBE_MEMSX.
+			 * Same behavior of MEM_RDONLY flag.
 			 */
-			*prev_type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
+			if (is_ptr_to_mem(type) || is_ptr_to_mem(*prev_type))
+				merged_type = PTR_TO_MEM;
+			else
+				merged_type = PTR_TO_BTF_ID;
+			if ((type & PTR_UNTRUSTED) || (*prev_type & PTR_UNTRUSTED))
+				merged_type |= PTR_UNTRUSTED;
+			if ((type & MEM_RDONLY) || (*prev_type & MEM_RDONLY))
+				merged_type |= MEM_RDONLY;
+			*prev_type = merged_type;
 		} else {
 			verbose(env, "same insn cannot be used with different pointers\n");
 			return -EINVAL;
@@ -21207,6 +21255,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		 * for this case.
 		 */
 		case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
+		case PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED:
 			if (type == BPF_READ) {
 				if (BPF_MODE(insn->code) == BPF_MEM)
 					insn->code = BPF_LDX | BPF_PROBE_MEM |
-- 
2.47.1



* [PATCH bpf-next v1 3/3] selftests/bpf: check operations on untrusted ro pointers to mem
  2025-06-25  0:05 [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
  2025-06-25  0:05 ` [PATCH bpf-next v1 1/3] bpf: add bpf_features enum Eduard Zingerman
  2025-06-25  0:05 ` [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
@ 2025-06-25  0:05 ` Eduard Zingerman
  2025-06-25  0:07 ` [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
  3 siblings, 0 replies; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:05 UTC (permalink / raw)
  To: bpf, ast, andrii; +Cc: daniel, martin.lau, kernel-team, yonghong.song, eddyz87

The following cases are tested:
- it is ok to load memory at any offset from rdonly_untrusted_mem;
- rdonly_untrusted_mem offset/bounds are not tracked;
- writes into rdonly_untrusted_mem are forbidden;
- atomic operations on rdonly_untrusted_mem are forbidden;
- rdonly_untrusted_mem can't be passed as a memory argument of a
  helper or kfunc;
- it is ok to use PTR_TO_MEM and PTR_TO_BTF_ID in the same load
  instruction.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../bpf/prog_tests/mem_rdonly_untrusted.c     |   9 ++
 .../bpf/progs/mem_rdonly_untrusted.c          | 136 ++++++++++++++++++
 2 files changed, 145 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c
 create mode 100644 tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c

diff --git a/tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c b/tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c
new file mode 100644
index 000000000000..40d4f687bd9c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <test_progs.h>
+#include "mem_rdonly_untrusted.skel.h"
+
+void test_mem_rdonly_untrusted(void)
+{
+	RUN_TESTS(mem_rdonly_untrusted);
+}
diff --git a/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
new file mode 100644
index 000000000000..00604755e698
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+SEC("socket")
+__success
+__retval(0)
+int ldx_is_ok_bad_addr(void *ctx)
+{
+	char *p;
+
+	if (!bpf_core_enum_value_exists(enum bpf_features, BPF_FEAT_RDONLY_CAST_TO_VOID))
+		return 42;
+
+	p = bpf_rdonly_cast(0, 0);
+	return p[0x7fff];
+}
+
+SEC("socket")
+__success
+__retval(1)
+int ldx_is_ok_good_addr(void *ctx)
+{
+	int v, *p;
+
+	v = 1;
+	p = bpf_rdonly_cast(&v, 0);
+	return *p;
+}
+
+SEC("socket")
+__success
+int offset_not_tracked(void *ctx)
+{
+	int *p, i, s;
+
+	p = bpf_rdonly_cast(0, 0);
+	s = 0;
+	bpf_for(i, 0, 1000 * 1000 * 1000) {
+		p++;
+		s += *p;
+	}
+	return s;
+}
+
+SEC("socket")
+__failure
+__msg("cannot write into rdonly_untrusted_mem")
+int stx_not_ok(void *ctx)
+{
+	int v, *p;
+
+	v = 1;
+	p = bpf_rdonly_cast(&v, 0);
+	*p = 1;
+	return 0;
+}
+
+SEC("socket")
+__failure
+__msg("cannot write into rdonly_untrusted_mem")
+int atomic_not_ok(void *ctx)
+{
+	int v, *p;
+
+	v = 1;
+	p = bpf_rdonly_cast(&v, 0);
+	__sync_fetch_and_add(p, 1);
+	return 0;
+}
+
+SEC("socket")
+__failure
+__msg("cannot write into rdonly_untrusted_mem")
+int atomic_rmw_not_ok(void *ctx)
+{
+	long v, *p;
+
+	v = 1;
+	p = bpf_rdonly_cast(&v, 0);
+	return __sync_val_compare_and_swap(p, 0, 42);
+}
+
+SEC("socket")
+__failure
+__msg("invalid access to memory, mem_size=0 off=0 size=4")
+__msg("R1 min value is outside of the allowed memory range")
+int kfunc_param_not_ok(void *ctx)
+{
+	int *p;
+
+	p = bpf_rdonly_cast(0, 0);
+	bpf_kfunc_trusted_num_test(p);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure
+__msg("R1 type=rdonly_untrusted_mem expected=")
+int helper_param_not_ok(void *ctx)
+{
+	char *p;
+
+	p = bpf_rdonly_cast(0, 0);
+	/*
+	 * Any helper with ARG_CONST_SIZE_OR_ZERO constraint will do,
+	 * the most permissive constraint
+	 */
+	bpf_copy_from_user(p, 0, (void *)42);
+	return 0;
+}
+
+static __noinline u64 *get_some_addr(void)
+{
+	if (bpf_get_prandom_u32())
+		return bpf_rdonly_cast(0, bpf_core_type_id_kernel(struct sock));
+	else
+		return bpf_rdonly_cast(0, 0);
+}
+
+SEC("socket")
+__success
+__retval(0)
+int mixed_mem_type(void *ctx)
+{
+	u64 *p;
+
+	/* Try to avoid compiler hoisting load to if branches by using __noinline func. */
+	p = get_some_addr();
+	return *p;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.47.1



* Re: [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:05 [PATCH bpf-next v1 0/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
                   ` (2 preceding siblings ...)
  2025-06-25  0:05 ` [PATCH bpf-next v1 3/3] selftests/bpf: check operations on untrusted ro pointers to mem Eduard Zingerman
@ 2025-06-25  0:07 ` Eduard Zingerman
  3 siblings, 0 replies; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:07 UTC (permalink / raw)
  To: bpf, ast, andrii; +Cc: daniel, martin.lau, kernel-team, yonghong.song

Messed up the subject. This should be v2.
Can resend, if necessary.


* Re: [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:05 ` [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast() Eduard Zingerman
@ 2025-06-25  0:11   ` Alexei Starovoitov
  2025-06-25  0:15     ` Eduard Zingerman
  0 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2025-06-25  0:11 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song, Andrii Nakryiko

On Tue, Jun 24, 2025 at 5:05 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Introduce support for `bpf_rdonly_cast(v, 0)`, which casts the value
> `v` to an untyped, untrusted pointer, logically similar to a `void *`.
> The memory pointed to by such a pointer is treated as read-only.
> As with other untrusted pointers, memory access violations on loads
> return zero instead of causing a fault.
>
> Technically:
> - The resulting pointer is represented as a register of type
>   `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED` with size zero.
> - Offsets within such pointers are not tracked.
> - Same load instructions are allowed to have both
>   `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED` and `PTR_TO_BTF_ID`
>   as the base pointer types.
>   In such cases, `bpf_insn_aux_data->ptr_type` is considered the
>   weaker of the two: `PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED`.
>
> The following constraints apply to the new pointer type:
> - can be used as a base for LDX instructions;
> - can't be used as a base for ST/STX or atomic instructions;
> - can't be used as parameter for kfuncs or helpers.
>
> These constraints are enforced by existing handling of `MEM_RDONLY`
> flag and `PTR_TO_MEM` of size zero.
>
> Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
> Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  kernel/bpf/verifier.c | 75 +++++++++++++++++++++++++++++++++++--------
>  1 file changed, 62 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 71de4c9487d5..6b2c38b7a7b6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -45,7 +45,8 @@ static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
>  };
>
>  enum bpf_features {
> -       __MAX_BPF_FEAT = 0,
> +       BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
> +       __MAX_BPF_FEAT = 1,

and the idea is to manually adjust it every time?!
That's way too much churn.
Either remove it or keep it without assignment.
Just as __MAX_BPF_FEAT. Like similar thing in enum bpf_link_type.

--
pw-bot: cr


* Re: [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:11   ` Alexei Starovoitov
@ 2025-06-25  0:15     ` Eduard Zingerman
  2025-06-25  0:21       ` Alexei Starovoitov
  0 siblings, 1 reply; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:15 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song, Andrii Nakryiko

On Tue, 2025-06-24 at 17:11 -0700, Alexei Starovoitov wrote:

[...]

> >  enum bpf_features {
> > -       __MAX_BPF_FEAT = 0,
> > +       BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
> > +       __MAX_BPF_FEAT = 1,
> 
> and the idea is to manually adjust it every time?!
> That's way too much churn.
> Either remove it or keep it without assignment.
> Just as __MAX_BPF_FEAT. Like similar thing in enum bpf_link_type.

I probably did not understand your previous message:

   > > +enum bpf_features {
   > > +       BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
   > > +       BPF_FEAT_TOTAL,
   > 
   > I don't see the value of 'total', but not strongly against it.
   > But pls be consistent with __MAX_BPF_CMD, __MAX_BPF_MAP_TYPE, ...
   > Say, __MAX_BPF_FEAT ?
   > 
   > 
   > Also it's better to introduce this enum in some earlier patch,
   > and then always add BTF_FEAT_... to this enum
   > in the same patch that adds the feature to make
   > sure backports won't screw it up.
   > Another rule should be to always assign a number to it.


Specifically: "Another rule should be to always assign a number to it."
The BPF_FEAT_RDONLY_CAST_TO_VOID already had a number, so I assumed
you were talking about __MAX_BPF_FEAT.
What did you mean?


* Re: [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:15     ` Eduard Zingerman
@ 2025-06-25  0:21       ` Alexei Starovoitov
  2025-06-25  0:24         ` Eduard Zingerman
  0 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2025-06-25  0:21 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song, Andrii Nakryiko

On Tue, Jun 24, 2025 at 5:15 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2025-06-24 at 17:11 -0700, Alexei Starovoitov wrote:
>
> [...]
>
> > >  enum bpf_features {
> > > -       __MAX_BPF_FEAT = 0,
> > > +       BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
> > > +       __MAX_BPF_FEAT = 1,
> >
> > and the idea is to manually adjust it every time?!
> > That's way too much churn.
> > Either remove it or keep it without assignment.
> > Just as __MAX_BPF_FEAT. Like similar thing in enum bpf_link_type.
>
> I probably did not understand your previous message:
>
>    > > +enum bpf_features {
>    > > +       BPF_FEAT_RDONLY_CAST_TO_VOID = 0,
>    > > +       BPF_FEAT_TOTAL,
>    >
>    > I don't see the value of 'total', but not strongly against it.
>    > But pls be consistent with __MAX_BPF_CMD, __MAX_BPF_MAP_TYPE, ...
>    > Say, __MAX_BPF_FEAT ?
>    >
>    >
>    > Also it's better to introduce this enum in some earlier patch,
>    > and then always add BTF_FEAT_... to this enum
>    > in the same patch that adds the feature to make
>    > sure backports won't screw it up.
>    > Another rule should be to always assign a number to it.
>
>
> Specifically: "Another rule should be to always assign a number to it."
> The BPF_FEAT_RDONLY_CAST_TO_VOID already had a number, so I assumed
> you were talking about __MAX_BPF_FEAT.
> What did you mean?

I mean to add " = 123," to actual features, so when they're
backported the number stays the same.
Not to __MAX_BPF_FEAT.

I doubt it matters though,
since bpf progs suppose to use
bpf_core_enum_value_exists(enum bpf_features, name)
that doesn't care about the actual id.

In bpf helpers we got burned by broken backports and added
constants to ___BPF_FUNC_MAPPER macro.
Here I don't see it ever matter.
Just like I don't think __MAX_BPF_FEAT is needed,
but if we follow old steps, then let's do both __MAX_BPF_FEAT
without number and every feature with the number.
The end result will look like bpf_link_type.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next v1 2/3] bpf: allow void* cast using bpf_rdonly_cast()
  2025-06-25  0:21       ` Alexei Starovoitov
@ 2025-06-25  0:24         ` Eduard Zingerman
  0 siblings, 0 replies; 9+ messages in thread
From: Eduard Zingerman @ 2025-06-25  0:24 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song, Andrii Nakryiko

On Tue, 2025-06-24 at 17:21 -0700, Alexei Starovoitov wrote:

[...]

> I mean to add " = 123," to actual features, so when they're
> backported the number stays the same.
> Not to __MAX_BPF_FEAT.
> 
> I doubt it matters though,
> since bpf progs suppose to use
> bpf_core_enum_value_exists(enum bpf_features, name)
> that doesn't care about the actual id.
> 
> In bpf helpers we got burned by broken backports and added
> constants to ___BPF_FUNC_MAPPER macro.
> Here I don't see it ever matter.
> Just like I don't think __MAX_BPF_FEAT is needed,
> but if we follow old steps, then let's do both __MAX_BPF_FEAT
> without number and every feature with the number.
> The end result will look like bpf_link_type.

Understood, thank you, will respin.

