* [PATCH v2 bpf-next 0/5] Dynptr helpers
@ 2023-04-20 7:14 Joanne Koong
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
` (5 more replies)
0 siblings, 6 replies; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
This patchset is the 3rd in the dynptr series. The 1st (dynptr
fundamentals) can be found here [0] and the 2nd (skb + xdp dynptrs)
can be found here [1].
This patchset adds the following helpers for interacting with
dynptrs:
int bpf_dynptr_adjust(struct bpf_dynptr *ptr, __u32 start, __u32 end) __ksym;
int bpf_dynptr_is_null(const struct bpf_dynptr *ptr) __ksym;
int bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;
__u32 bpf_dynptr_size(const struct bpf_dynptr *ptr) __ksym;
int bpf_dynptr_clone(const struct bpf_dynptr *ptr, struct bpf_dynptr *clone__init) __ksym;
[0] https://lore.kernel.org/bpf/20220523210712.3641569-1-joannelkoong@gmail.com/
[1] https://lore.kernel.org/bpf/20230301154953.641654-1-joannelkoong@gmail.com/
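As a rough end-to-end sketch of how these helpers compose (a sketch
only, assuming a ringbuf map named "ringbuf" is defined elsewhere and
using arbitrary sizes):
	struct bpf_dynptr ptr, clone;
	__u32 size;

	if (bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr))
		goto out;

	if (!bpf_dynptr_is_null(&ptr) && !bpf_dynptr_is_rdonly(&ptr)) {
		/* keep ptr as the full view, narrow the clone to [8, 32) */
		bpf_dynptr_clone(&ptr, &clone);
		bpf_dynptr_adjust(&clone, 8, 32);
		size = bpf_dynptr_size(&clone);	/* 24 */
	}
out:
	bpf_ringbuf_discard_dynptr(&ptr, 0);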
v1 -> v2:
v1: https://lore.kernel.org/bpf/20230409033431.3992432-1-joannelkoong@gmail.com/
* change bpf_dynptr_advance/trim to bpf_dynptr_adjust
* rename bpf_dynptr_get_size to bpf_dynptr_size
* refactor handling clone for process_dynptr_func, maintain unique ids
for parent and clone
* remove bpf_dynptr_get_offset()
Joanne Koong (5):
bpf: Add bpf_dynptr_adjust
bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly
bpf: Add bpf_dynptr_size
bpf: Add bpf_dynptr_clone
selftests/bpf: add tests for dynptr convenience helpers
include/linux/bpf.h | 2 +-
kernel/bpf/helpers.c | 76 ++++-
kernel/bpf/verifier.c | 105 ++++--
kernel/trace/bpf_trace.c | 4 +-
tools/testing/selftests/bpf/bpf_kfuncs.h | 6 +
.../testing/selftests/bpf/prog_tests/dynptr.c | 6 +
.../testing/selftests/bpf/progs/dynptr_fail.c | 287 +++++++++++++++++
.../selftests/bpf/progs/dynptr_success.c | 298 ++++++++++++++++++
8 files changed, 755 insertions(+), 29 deletions(-)
--
2.34.1
* [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
@ 2023-04-20 7:14 ` Joanne Koong
2023-04-20 18:38 ` Alexei Starovoitov
` (2 more replies)
2023-04-20 7:14 ` [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly Joanne Koong
` (4 subsequent siblings)
5 siblings, 3 replies; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
Add a new kfunc
int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
which adjusts the dynptr to reflect the new [start, end) interval.
In particular, it advances the offset of the dynptr by "start" bytes,
and if end is less than the size of the dynptr, then this will trim the
dynptr accordingly.
Adjusting the dynptr interval may be useful in certain situations.
For example, with a hashing helper that takes in a generic dynptr: if
the dynptr points to a struct but only a certain memory region inside
the struct should be hashed, adjust can be used to narrow in on the
specific region to hash.
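As a rough usage sketch (assuming some local struct "pkt";
my_hash_dynptr is a hypothetical consumer, not part of this patch):
	struct bpf_dynptr ptr;
	__u32 hash = 0;

	bpf_dynptr_from_mem(&pkt, sizeof(pkt), 0, &ptr);

	/* hash only bytes [4, 20) of the struct */
	if (!bpf_dynptr_adjust(&ptr, 4, 20))
		hash = my_hash_dynptr(&ptr);	/* hypothetical */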
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 00e5fb0682ac..7ddf63ac93ce 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
return ptr->size & DYNPTR_SIZE_MASK;
}
+static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
+{
+ u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
+
+ ptr->size = new_size | metadata;
+}
+
int bpf_dynptr_check_size(u32 size)
{
return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
@@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
}
+__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
+{
+ u32 size;
+
+ if (!ptr->data || start > end)
+ return -EINVAL;
+
+ size = bpf_dynptr_get_size(ptr);
+
+ if (start > size || end > size)
+ return -ERANGE;
+
+ ptr->offset += start;
+ bpf_dynptr_set_size(ptr, end - start);
+
+ return 0;
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
@@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_dynptr_adjust)
BTF_SET8_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
--
2.34.1
* [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
@ 2023-04-20 7:14 ` Joanne Koong
2023-04-24 19:49 ` John Fastabend
2023-04-20 7:14 ` [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size Joanne Koong
` (3 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
bpf_dynptr_is_null returns true if the dynptr is null / invalid
(determined by whether ptr->data is NULL), and false if the dynptr
is valid.
bpf_dynptr_is_rdonly returns true if the dynptr is read-only, and
false if it is read-writable. If the dynptr is null / invalid,
false is returned by default.
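A minimal sketch of the intended usage (the surrounding skb program
and buffer are illustrative):
	struct bpf_dynptr ptr;
	char buf[4] = {};

	bpf_dynptr_from_skb(skb, 0, &ptr);

	if (bpf_dynptr_is_null(&ptr))
		return 0;	/* construction failed */

	/* writes are only legal on a read-writable dynptr */
	if (!bpf_dynptr_is_rdonly(&ptr))
		bpf_dynptr_write(&ptr, 0, buf, sizeof(buf), 0);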
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/helpers.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 7ddf63ac93ce..566ed89f173b 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1423,7 +1423,7 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
#define DYNPTR_SIZE_MASK 0xFFFFFF
#define DYNPTR_RDONLY_BIT BIT(31)
-static bool bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
+static bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
{
return ptr->size & DYNPTR_RDONLY_BIT;
}
@@ -1570,7 +1570,7 @@ BPF_CALL_5(bpf_dynptr_write, const struct bpf_dynptr_kern *, dst, u32, offset, v
enum bpf_dynptr_type type;
int err;
- if (!dst->data || bpf_dynptr_is_rdonly(dst))
+ if (!dst->data || __bpf_dynptr_is_rdonly(dst))
return -EINVAL;
err = bpf_dynptr_check_off_len(dst, offset, len);
@@ -1626,7 +1626,7 @@ BPF_CALL_3(bpf_dynptr_data, const struct bpf_dynptr_kern *, ptr, u32, offset, u3
if (err)
return 0;
- if (bpf_dynptr_is_rdonly(ptr))
+ if (__bpf_dynptr_is_rdonly(ptr))
return 0;
type = bpf_dynptr_get_type(ptr);
@@ -2276,7 +2276,7 @@ __bpf_kfunc void *bpf_dynptr_slice(const struct bpf_dynptr_kern *ptr, u32 offset
__bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 offset,
void *buffer, u32 buffer__szk)
{
- if (!ptr->data || bpf_dynptr_is_rdonly(ptr))
+ if (!ptr->data || __bpf_dynptr_is_rdonly(ptr))
return NULL;
/* bpf_dynptr_slice_rdwr is the same logic as bpf_dynptr_slice.
@@ -2322,6 +2322,19 @@ __bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 en
return 0;
}
+__bpf_kfunc bool bpf_dynptr_is_null(struct bpf_dynptr_kern *ptr)
+{
+ return !ptr->data;
+}
+
+__bpf_kfunc bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)
+{
+ if (!ptr->data)
+ return false;
+
+ return __bpf_dynptr_is_rdonly(ptr);
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
@@ -2395,6 +2408,8 @@ BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_dynptr_adjust)
+BTF_ID_FLAGS(func, bpf_dynptr_is_null)
+BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
BTF_SET8_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
--
2.34.1
* [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
2023-04-20 7:14 ` [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly Joanne Koong
@ 2023-04-20 7:14 ` Joanne Koong
2023-04-24 19:52 ` John Fastabend
2023-04-20 7:14 ` [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone Joanne Koong
` (2 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
bpf_dynptr_size returns the number of usable bytes in a dynptr.
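A short sketch of the expected behavior, combined with
bpf_dynptr_adjust from patch 1 (assuming a ringbuf map named
"ringbuf"):
	struct bpf_dynptr ptr;
	__u32 size;

	if (bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr))
		return 0;

	size = bpf_dynptr_size(&ptr);	/* 64 */

	bpf_dynptr_adjust(&ptr, 8, 24);
	size = bpf_dynptr_size(&ptr);	/* now 16 */

	bpf_ringbuf_discard_dynptr(&ptr, 0);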
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
include/linux/bpf.h | 2 +-
kernel/bpf/helpers.c | 15 ++++++++++++---
kernel/trace/bpf_trace.c | 4 ++--
3 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 18b592fde896..a4429dac3a1b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1197,7 +1197,7 @@ enum bpf_dynptr_type {
};
int bpf_dynptr_check_size(u32 size);
-u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr);
+u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
#ifdef CONFIG_BPF_JIT
int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 566ed89f173b..9018646b86db 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1443,7 +1443,7 @@ static enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *pt
return (ptr->size & ~(DYNPTR_RDONLY_BIT)) >> DYNPTR_TYPE_SHIFT;
}
-u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
+u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr)
{
return ptr->size & DYNPTR_SIZE_MASK;
}
@@ -1476,7 +1476,7 @@ void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
static int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr, u32 offset, u32 len)
{
- u32 size = bpf_dynptr_get_size(ptr);
+ u32 size = __bpf_dynptr_size(ptr);
if (len > size || offset > size - len)
return -E2BIG;
@@ -2311,7 +2311,7 @@ __bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 en
if (!ptr->data || start > end)
return -EINVAL;
- size = bpf_dynptr_get_size(ptr);
+ size = __bpf_dynptr_size(ptr);
if (start > size || end > size)
return -ERANGE;
@@ -2335,6 +2335,14 @@ __bpf_kfunc bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)
return __bpf_dynptr_is_rdonly(ptr);
}
+__bpf_kfunc __u32 bpf_dynptr_size(const struct bpf_dynptr_kern *ptr)
+{
+ if (!ptr->data)
+ return -EINVAL;
+
+ return __bpf_dynptr_size(ptr);
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
@@ -2410,6 +2418,7 @@ BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_dynptr_adjust)
BTF_ID_FLAGS(func, bpf_dynptr_is_null)
BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
+BTF_ID_FLAGS(func, bpf_dynptr_size)
BTF_SET8_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index bcf91bc7bf71..8deb22a99abe 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1349,9 +1349,9 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
}
return verify_pkcs7_signature(data_ptr->data,
- bpf_dynptr_get_size(data_ptr),
+ __bpf_dynptr_size(data_ptr),
sig_ptr->data,
- bpf_dynptr_get_size(sig_ptr),
+ __bpf_dynptr_size(sig_ptr),
trusted_keyring->key,
VERIFYING_UNSPECIFIED_SIGNATURE, NULL,
NULL);
--
2.34.1
* [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
` (2 preceding siblings ...)
2023-04-20 7:14 ` [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size Joanne Koong
@ 2023-04-20 7:14 ` Joanne Koong
2023-04-26 18:16 ` Andrii Nakryiko
2023-04-20 7:14 ` [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers Joanne Koong
2023-04-26 18:20 ` [PATCH v2 bpf-next 0/5] Dynptr helpers patchwork-bot+netdevbpf
5 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
The cloned dynptr will point to the same data as its parent dynptr,
with the same type, offset, size, and read-only property.
Any writes to a dynptr will be reflected across all instances
(an 'instance' being any dynptr that points to the same underlying
data).
Please note that data slice and dynptr invalidations will affect all
instances as well. For example, if bpf_dynptr_write() is called on an
skb-type dynptr, all data slices of dynptr instances to that skb will
be invalidated as well (e.g. data slices of any clones, parents,
grandparents, ...). Another example: if a ringbuf dynptr is submitted,
every instance of that dynptr will be invalidated.
Changing the view of the dynptr (e.g. advancing the offset or
trimming the size) will only affect that dynptr and not any
other instances.
One example use case where cloning may be helpful is for hashing or
iterating through dynptr data. Cloning will allow the user to maintain
the original view of the dynptr for future use, while also allowing
views into smaller subsets of the data after the offset is advanced or
the size is trimmed.
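A minimal sketch of that pattern for an skb program (the header
length is illustrative):
	struct bpf_dynptr ptr, hdr;

	bpf_dynptr_from_skb(skb, 0, &ptr);

	/* keep ptr as the full-packet view, work through the clone */
	bpf_dynptr_clone(&ptr, &hdr);
	bpf_dynptr_adjust(&hdr, 0, sizeof(struct ethhdr));

	/* ptr still covers the whole packet; hdr only the eth header */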
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
kernel/bpf/helpers.c | 14 ++++++
kernel/bpf/verifier.c | 105 ++++++++++++++++++++++++++++++++++--------
2 files changed, 99 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 9018646b86db..1ebdc7f1a574 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2343,6 +2343,19 @@ __bpf_kfunc __u32 bpf_dynptr_size(const struct bpf_dynptr_kern *ptr)
return __bpf_dynptr_size(ptr);
}
+__bpf_kfunc int bpf_dynptr_clone(struct bpf_dynptr_kern *ptr,
+ struct bpf_dynptr_kern *clone__uninit)
+{
+ if (!ptr->data) {
+ bpf_dynptr_set_null(clone__uninit);
+ return -EINVAL;
+ }
+
+ *clone__uninit = *ptr;
+
+ return 0;
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
@@ -2419,6 +2432,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_adjust)
BTF_ID_FLAGS(func, bpf_dynptr_is_null)
BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
BTF_ID_FLAGS(func, bpf_dynptr_size)
+BTF_ID_FLAGS(func, bpf_dynptr_clone)
BTF_SET8_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1e05355facdc..164726673086 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -309,6 +309,7 @@ struct bpf_kfunc_call_arg_meta {
struct {
enum bpf_dynptr_type type;
u32 id;
+ u32 ref_obj_id;
} initialized_dynptr;
struct {
u8 spi;
@@ -847,11 +848,11 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
struct bpf_func_state *state, int spi);
static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- enum bpf_arg_type arg_type, int insn_idx)
+ enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
{
struct bpf_func_state *state = func(env, reg);
enum bpf_dynptr_type type;
- int spi, i, id, err;
+ int spi, i, err;
spi = dynptr_get_spi(env, reg);
if (spi < 0)
@@ -887,7 +888,13 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
if (dynptr_type_refcounted(type)) {
/* The id is used to track proper releasing */
- id = acquire_reference_state(env, insn_idx);
+ int id;
+
+ if (clone_ref_obj_id)
+ id = clone_ref_obj_id;
+ else
+ id = acquire_reference_state(env, insn_idx);
+
if (id < 0)
return id;
@@ -901,24 +908,15 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
return 0;
}
-static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state, int spi)
{
- struct bpf_func_state *state = func(env, reg);
- int spi, i;
-
- spi = dynptr_get_spi(env, reg);
- if (spi < 0)
- return spi;
+ int i;
for (i = 0; i < BPF_REG_SIZE; i++) {
state->stack[spi].slot_type[i] = STACK_INVALID;
state->stack[spi - 1].slot_type[i] = STACK_INVALID;
}
- /* Invalidate any slices associated with this dynptr */
- if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type))
- WARN_ON_ONCE(release_reference(env, state->stack[spi].spilled_ptr.ref_obj_id));
-
__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
@@ -945,6 +943,52 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
*/
state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
+}
+
+static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+ struct bpf_func_state *state = func(env, reg);
+ int spi;
+
+ spi = dynptr_get_spi(env, reg);
+ if (spi < 0)
+ return spi;
+
+ if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
+ int ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
+ int i;
+
+ /* If the dynptr has a ref_obj_id, then we need to invalidate
+ * two things:
+ *
+ * 1) Any dynptrs with a matching ref_obj_id (clones)
+ * 2) Any slices derived from this dynptr.
+ */
+
+ /* Invalidate any slices associated with this dynptr */
+ WARN_ON_ONCE(release_reference(env, ref_obj_id));
+
+ /* Invalidate any dynptr clones */
+ for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
+ if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
+ continue;
+
+ /* it should always be the case that if the ref obj id
+ * matches then the stack slot also belongs to a
+ * dynptr
+ */
+ if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
+ verbose(env, "verifier internal error: misconfigured ref_obj_id\n");
+ return -EFAULT;
+ }
+ if (state->stack[i].spilled_ptr.dynptr.first_slot)
+ invalidate_dynptr(env, state, i);
+ }
+
+ return 0;
+ }
+
+ invalidate_dynptr(env, state, spi);
return 0;
}
@@ -6662,7 +6706,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* type, and declare it as 'const struct bpf_dynptr *' in their prototype.
*/
static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
- enum bpf_arg_type arg_type)
+ enum bpf_arg_type arg_type, int clone_ref_obj_id)
{
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
int err;
@@ -6706,7 +6750,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
return err;
}
- err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx);
+ err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, clone_ref_obj_id);
} else /* MEM_RDONLY and None case from above */ {
/* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
@@ -7616,7 +7660,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
err = check_mem_size_reg(env, reg, regno, true, meta);
break;
case ARG_PTR_TO_DYNPTR:
- err = process_dynptr_func(env, regno, insn_idx, arg_type);
+ err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
if (err)
return err;
break;
@@ -9580,6 +9624,7 @@ enum special_kfunc_type {
KF_bpf_dynptr_from_xdp,
KF_bpf_dynptr_slice,
KF_bpf_dynptr_slice_rdwr,
+ KF_bpf_dynptr_clone,
};
BTF_SET_START(special_kfunc_set)
@@ -9599,6 +9644,7 @@ BTF_ID(func, bpf_dynptr_from_skb)
BTF_ID(func, bpf_dynptr_from_xdp)
BTF_ID(func, bpf_dynptr_slice)
BTF_ID(func, bpf_dynptr_slice_rdwr)
+BTF_ID(func, bpf_dynptr_clone)
BTF_SET_END(special_kfunc_set)
BTF_ID_LIST(special_kfunc_list)
@@ -9620,6 +9666,7 @@ BTF_ID(func, bpf_dynptr_from_skb)
BTF_ID(func, bpf_dynptr_from_xdp)
BTF_ID(func, bpf_dynptr_slice)
BTF_ID(func, bpf_dynptr_slice_rdwr)
+BTF_ID(func, bpf_dynptr_clone)
static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta)
{
@@ -10315,6 +10362,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_DYNPTR:
{
enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
+ int clone_ref_obj_id = 0;
if (reg->type != PTR_TO_STACK &&
reg->type != CONST_PTR_TO_DYNPTR) {
@@ -10328,12 +10376,28 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (is_kfunc_arg_uninit(btf, &args[i]))
dynptr_arg_type |= MEM_UNINIT;
- if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb])
+ if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb]) {
dynptr_arg_type |= DYNPTR_TYPE_SKB;
- else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_xdp])
+ } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_xdp]) {
dynptr_arg_type |= DYNPTR_TYPE_XDP;
+ } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_clone] &&
+ (dynptr_arg_type & MEM_UNINIT)) {
+ enum bpf_dynptr_type parent_type = meta->initialized_dynptr.type;
+
+ if (parent_type == BPF_DYNPTR_TYPE_INVALID) {
+ verbose(env, "verifier internal error: no dynptr type for parent of clone\n");
+ return -EFAULT;
+ }
+
+ dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
+ clone_ref_obj_id = meta->initialized_dynptr.ref_obj_id;
+ if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
+ verbose(env, "verifier internal error: missing ref obj id for parent of clone\n");
+ return -EFAULT;
+ }
+ }
- ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type);
+ ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
if (ret < 0)
return ret;
@@ -10346,6 +10410,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
meta->initialized_dynptr.id = id;
meta->initialized_dynptr.type = dynptr_get_type(env, reg);
+ meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
}
break;
--
2.34.1
* [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
` (3 preceding siblings ...)
2023-04-20 7:14 ` [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone Joanne Koong
@ 2023-04-20 7:14 ` Joanne Koong
2023-04-26 18:17 ` Andrii Nakryiko
2023-04-26 18:20 ` [PATCH v2 bpf-next 0/5] Dynptr helpers patchwork-bot+netdevbpf
5 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-20 7:14 UTC (permalink / raw)
To: bpf; +Cc: andrii, ast, daniel, Joanne Koong
Add various tests for the added dynptr convenience helpers.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
tools/testing/selftests/bpf/bpf_kfuncs.h | 6 +
.../testing/selftests/bpf/prog_tests/dynptr.c | 6 +
.../testing/selftests/bpf/progs/dynptr_fail.c | 287 +++++++++++++++++
.../selftests/bpf/progs/dynptr_success.c | 298 ++++++++++++++++++
4 files changed, 597 insertions(+)
diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
index 8c993ec8ceea..f3c41f8902a0 100644
--- a/tools/testing/selftests/bpf/bpf_kfuncs.h
+++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
@@ -35,4 +35,10 @@ extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
extern void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr *ptr, __u32 offset,
void *buffer, __u32 buffer__szk) __ksym;
+extern int bpf_dynptr_adjust(const struct bpf_dynptr *ptr, __u32 start, __u32 end) __ksym;
+extern int bpf_dynptr_is_null(const struct bpf_dynptr *ptr) __ksym;
+extern int bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;
+extern __u32 bpf_dynptr_size(const struct bpf_dynptr *ptr) __ksym;
+extern int bpf_dynptr_clone(const struct bpf_dynptr *ptr, struct bpf_dynptr *clone__init) __ksym;
+
#endif
diff --git a/tools/testing/selftests/bpf/prog_tests/dynptr.c b/tools/testing/selftests/bpf/prog_tests/dynptr.c
index d176c34a7d2e..0478916aff37 100644
--- a/tools/testing/selftests/bpf/prog_tests/dynptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/dynptr.c
@@ -20,6 +20,12 @@ static struct {
{"test_ringbuf", SETUP_SYSCALL_SLEEP},
{"test_skb_readonly", SETUP_SKB_PROG},
{"test_dynptr_skb_data", SETUP_SKB_PROG},
+ {"test_adjust", SETUP_SYSCALL_SLEEP},
+ {"test_adjust_err", SETUP_SYSCALL_SLEEP},
+ {"test_zero_size_dynptr", SETUP_SYSCALL_SLEEP},
+ {"test_dynptr_is_null", SETUP_SYSCALL_SLEEP},
+ {"test_dynptr_is_rdonly", SETUP_SKB_PROG},
+ {"test_dynptr_clone", SETUP_SKB_PROG},
};
static void verify_success(const char *prog_name, enum test_setup_type setup_type)
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 759eb5c245cd..efe4ce72d00e 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -1378,3 +1378,290 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
return 0;
}
+
+/* bpf_dynptr_adjust can only be called on initialized dynptrs */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #1")
+int dynptr_adjust_invalid(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_adjust(&ptr, 1, 2);
+
+ return 0;
+}
+
+/* bpf_dynptr_is_null can only be called on initialized dynptrs */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #1")
+int dynptr_is_null_invalid(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_is_null(&ptr);
+
+ return 0;
+}
+
+/* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #1")
+int dynptr_is_rdonly_invalid(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_is_rdonly(&ptr);
+
+ return 0;
+}
+
+/* bpf_dynptr_size can only be called on initialized dynptrs */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #1")
+int dynptr_size_invalid(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_size(&ptr);
+
+ return 0;
+}
+
+/* Only initialized dynptrs can be cloned */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #1")
+int clone_invalid1(void *ctx)
+{
+ struct bpf_dynptr ptr1;
+ struct bpf_dynptr ptr2;
+
+ /* this should fail */
+ bpf_dynptr_clone(&ptr1, &ptr2);
+
+ return 0;
+}
+
+/* Can't overwrite an existing dynptr when cloning */
+SEC("?xdp")
+__failure __msg("cannot overwrite referenced dynptr")
+int clone_invalid2(struct xdp_md *xdp)
+{
+ struct bpf_dynptr ptr1;
+ struct bpf_dynptr clone;
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr1);
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &clone);
+
+ /* this should fail */
+ bpf_dynptr_clone(&ptr1, &clone);
+
+ bpf_ringbuf_submit_dynptr(&clone, 0);
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate its clones */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #3")
+int clone_invalidate1(void *ctx)
+{
+ struct bpf_dynptr clone;
+ struct bpf_dynptr ptr;
+ char read_data[64];
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone);
+
+ bpf_ringbuf_submit_dynptr(&ptr, 0);
+
+ /* this should fail */
+ bpf_dynptr_read(read_data, sizeof(read_data), &clone, 0, 0);
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate its parent */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #3")
+int clone_invalidate2(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ struct bpf_dynptr clone;
+ char read_data[64];
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone);
+
+ bpf_ringbuf_submit_dynptr(&clone, 0);
+
+ /* this should fail */
+ bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0, 0);
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate its siblings */
+SEC("?raw_tp")
+__failure __msg("Expected an initialized dynptr as arg #3")
+int clone_invalidate3(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ struct bpf_dynptr clone1;
+ struct bpf_dynptr clone2;
+ char read_data[64];
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone1);
+
+ bpf_dynptr_clone(&ptr, &clone2);
+
+ bpf_ringbuf_submit_dynptr(&clone2, 0);
+
+ /* this should fail */
+ bpf_dynptr_read(read_data, sizeof(read_data), &clone1, 0, 0);
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate any data slices
+ * of its clones
+ */
+SEC("?raw_tp")
+__failure __msg("invalid mem access 'scalar'")
+int clone_invalidate4(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ struct bpf_dynptr clone;
+ int *data;
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone);
+ data = bpf_dynptr_data(&clone, 0, sizeof(val));
+ if (!data)
+ return 0;
+
+ bpf_ringbuf_submit_dynptr(&ptr, 0);
+
+ /* this should fail */
+ *data = 123;
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate any data slices
+ * of its parent
+ */
+SEC("?raw_tp")
+__failure __msg("invalid mem access 'scalar'")
+int clone_invalidate5(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ struct bpf_dynptr clone;
+ int *data;
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+ data = bpf_dynptr_data(&ptr, 0, sizeof(val));
+ if (!data)
+ return 0;
+
+ bpf_dynptr_clone(&ptr, &clone);
+
+ bpf_ringbuf_submit_dynptr(&clone, 0);
+
+ /* this should fail */
+ *data = 123;
+
+ return 0;
+}
+
+/* Invalidating a dynptr should invalidate any data slices
+ * of its sibling
+ */
+SEC("?raw_tp")
+__failure __msg("invalid mem access 'scalar'")
+int clone_invalidate6(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ struct bpf_dynptr clone1;
+ struct bpf_dynptr clone2;
+ int *data;
+
+ bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone1);
+
+ bpf_dynptr_clone(&ptr, &clone2);
+
+ data = bpf_dynptr_data(&clone1, 0, sizeof(val));
+ if (!data)
+ return 0;
+
+ bpf_ringbuf_submit_dynptr(&clone2, 0);
+
+ /* this should fail */
+ *data = 123;
+
+ return 0;
+}
+
+/* A skb clone's data slices should be invalid anytime packet data changes */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int clone_skb_packet_data(struct __sk_buff *skb)
+{
+ char buffer[sizeof(__u32)] = {};
+ struct bpf_dynptr clone;
+ struct bpf_dynptr ptr;
+ __u32 *data;
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone);
+ data = bpf_dynptr_slice_rdwr(&clone, 0, buffer, sizeof(buffer));
+ if (!data)
+ return XDP_DROP;
+
+ if (bpf_skb_pull_data(skb, skb->len))
+ return SK_DROP;
+
+ /* this should fail */
+ *data = 123;
+
+ return 0;
+}
+
+/* A xdp clone's data slices should be invalid anytime packet data changes */
+SEC("?xdp")
+__failure __msg("invalid mem access 'scalar'")
+int clone_xdp_packet_data(struct xdp_md *xdp)
+{
+ char buffer[sizeof(__u32)] = {};
+ struct bpf_dynptr clone;
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ __u32 *data;
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &clone);
+ data = bpf_dynptr_slice_rdwr(&clone, 0, buffer, sizeof(buffer));
+ if (!data)
+ return XDP_DROP;
+
+ if (bpf_xdp_adjust_head(xdp, 0 - (int)sizeof(*hdr)))
+ return XDP_DROP;
+
+ /* this should fail */
+ *data = 123;
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
index b2fa6c47ecc0..be7de62de045 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_success.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
@@ -207,3 +207,301 @@ int test_dynptr_skb_data(struct __sk_buff *skb)
return 1;
}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_adjust(void *ctx)
+{
+ struct bpf_dynptr ptr;
+ __u32 bytes = 64;
+ __u32 off = 10;
+ __u32 trim = 15;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ err = bpf_ringbuf_reserve_dynptr(&ringbuf, bytes, 0, &ptr);
+ if (err) {
+ err = 1;
+ goto done;
+ }
+
+ if (bpf_dynptr_size(&ptr) != bytes) {
+ err = 2;
+ goto done;
+ }
+
+ /* Advance the dynptr by off */
+ err = bpf_dynptr_adjust(&ptr, off, bpf_dynptr_size(&ptr));
+ if (err) {
+ err = 3;
+ goto done;
+ }
+
+ if (bpf_dynptr_size(&ptr) != bytes - off) {
+ err = 4;
+ goto done;
+ }
+
+ /* Trim the dynptr */
+ err = bpf_dynptr_adjust(&ptr, off, trim);
+ if (err) {
+ err = 5;
+ goto done;
+ }
+
+ /* Check that the size was adjusted correctly */
+ if (bpf_dynptr_size(&ptr) != trim - off) {
+ err = 6;
+ goto done;
+ }
+
+done:
+ bpf_ringbuf_discard_dynptr(&ptr, 0);
+ return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_adjust_err(void *ctx)
+{
+ char write_data[45] = "hello there, world!!";
+ struct bpf_dynptr ptr;
+ __u32 size = 64;
+ __u32 off = 20;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ if (bpf_ringbuf_reserve_dynptr(&ringbuf, size, 0, &ptr)) {
+ err = 1;
+ goto done;
+ }
+
+ /* Check that start can't be greater than end */
+ if (bpf_dynptr_adjust(&ptr, 5, 1) != -EINVAL) {
+ err = 2;
+ goto done;
+ }
+
+ /* Check that start can't be greater than size */
+ if (bpf_dynptr_adjust(&ptr, size + 1, size + 1) != -ERANGE) {
+ err = 3;
+ goto done;
+ }
+
+ /* Check that end can't be greater than size */
+ if (bpf_dynptr_adjust(&ptr, 0, size + 1) != -ERANGE) {
+ err = 4;
+ goto done;
+ }
+
+ if (bpf_dynptr_adjust(&ptr, off, size)) {
+ err = 5;
+ goto done;
+ }
+
+ /* Check that you can't write more bytes than available into the dynptr
+ * after you've adjusted it
+ */
+ if (bpf_dynptr_write(&ptr, 0, &write_data, sizeof(write_data), 0) != -E2BIG) {
+ err = 6;
+ goto done;
+ }
+
+ /* Check that even after adjusting, submitting/discarding
+ * a ringbuf dynptr works
+ */
+ bpf_ringbuf_submit_dynptr(&ptr, 0);
+ return 0;
+
+done:
+ bpf_ringbuf_discard_dynptr(&ptr, 0);
+ return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_zero_size_dynptr(void *ctx)
+{
+ char write_data = 'x', read_data;
+ struct bpf_dynptr ptr;
+ __u32 size = 64;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ if (bpf_ringbuf_reserve_dynptr(&ringbuf, size, 0, &ptr)) {
+ err = 1;
+ goto done;
+ }
+
+ /* After this, the dynptr has a size of 0 */
+ if (bpf_dynptr_adjust(&ptr, size, size)) {
+ err = 2;
+ goto done;
+ }
+
+ /* Test that reading + writing non-zero bytes is not ok */
+ if (bpf_dynptr_read(&read_data, sizeof(read_data), &ptr, 0, 0) != -E2BIG) {
+ err = 3;
+ goto done;
+ }
+
+ if (bpf_dynptr_write(&ptr, 0, &write_data, sizeof(write_data), 0) != -E2BIG) {
+ err = 4;
+ goto done;
+ }
+
+ /* Test that reading + writing 0 bytes from a 0-size dynptr is ok */
+ if (bpf_dynptr_read(&read_data, 0, &ptr, 0, 0)) {
+ err = 5;
+ goto done;
+ }
+
+ if (bpf_dynptr_write(&ptr, 0, &write_data, 0, 0)) {
+ err = 6;
+ goto done;
+ }
+
+ err = 0;
+
+done:
+ bpf_ringbuf_discard_dynptr(&ptr, 0);
+ return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_dynptr_is_null(void *ctx)
+{
+ struct bpf_dynptr ptr1;
+ struct bpf_dynptr ptr2;
+ __u64 size = 4;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ /* Pass in invalid flags, get back an invalid dynptr */
+ if (bpf_ringbuf_reserve_dynptr(&ringbuf, size, 123, &ptr1) != -EINVAL) {
+ err = 1;
+ goto exit_early;
+ }
+
+ /* Test that the invalid dynptr is null */
+ if (!bpf_dynptr_is_null(&ptr1)) {
+ err = 2;
+ goto exit_early;
+ }
+
+ /* Get a valid dynptr */
+ if (bpf_ringbuf_reserve_dynptr(&ringbuf, size, 0, &ptr2)) {
+ err = 3;
+ goto exit;
+ }
+
+ /* Test that the valid dynptr is not null */
+ if (bpf_dynptr_is_null(&ptr2)) {
+ err = 4;
+ goto exit;
+ }
+
+exit:
+ bpf_ringbuf_discard_dynptr(&ptr2, 0);
+exit_early:
+ bpf_ringbuf_discard_dynptr(&ptr1, 0);
+ return 0;
+}
+
+SEC("cgroup_skb/egress")
+int test_dynptr_is_rdonly(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr1;
+ struct bpf_dynptr ptr2;
+ struct bpf_dynptr ptr3;
+
+ /* Pass in invalid flags, get back an invalid dynptr */
+ if (bpf_dynptr_from_skb(skb, 123, &ptr1) != -EINVAL) {
+ err = 1;
+ return 0;
+ }
+
+ /* Test that an invalid dynptr is_rdonly returns false */
+ if (bpf_dynptr_is_rdonly(&ptr1)) {
+ err = 2;
+ return 0;
+ }
+
+ /* Get a read-only dynptr */
+ if (bpf_dynptr_from_skb(skb, 0, &ptr2)) {
+ err = 3;
+ return 0;
+ }
+
+ /* Test that the dynptr is read-only */
+ if (!bpf_dynptr_is_rdonly(&ptr2)) {
+ err = 4;
+ return 0;
+ }
+
+ /* Get a read-writeable dynptr */
+ if (bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr3)) {
+ err = 5;
+ goto done;
+ }
+
+ /* Test that the dynptr is not read-only */
+ if (bpf_dynptr_is_rdonly(&ptr3)) {
+ err = 6;
+ goto done;
+ }
+
+done:
+ bpf_ringbuf_discard_dynptr(&ptr3, 0);
+ return 0;
+}
+
+SEC("cgroup_skb/egress")
+int test_dynptr_clone(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr1;
+ struct bpf_dynptr ptr2;
+ __u32 off = 2, size;
+
+ /* Get a dynptr */
+ if (bpf_dynptr_from_skb(skb, 0, &ptr1)) {
+ err = 1;
+ return 0;
+ }
+
+ if (bpf_dynptr_adjust(&ptr1, off, bpf_dynptr_size(&ptr1))) {
+ err = 2;
+ return 0;
+ }
+
+ /* Clone the dynptr */
+ if (bpf_dynptr_clone(&ptr1, &ptr2)) {
+ err = 3;
+ return 0;
+ }
+
+ size = bpf_dynptr_size(&ptr1);
+
+ /* Check that the clone has the same size and rd-only */
+ if (bpf_dynptr_size(&ptr2) != size) {
+ err = 4;
+ return 0;
+ }
+
+ if (bpf_dynptr_is_rdonly(&ptr2) != bpf_dynptr_is_rdonly(&ptr1)) {
+ err = 5;
+ return 0;
+ }
+
+ /* Advance and trim the original dynptr */
+ bpf_dynptr_adjust(&ptr1, 5, 5);
+
+ /* Check that only original dynptr was affected, and the clone wasn't */
+ if (bpf_dynptr_size(&ptr2) != size) {
+ err = 6;
+ return 0;
+ }
+
+ return 0;
+}
--
2.34.1
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
@ 2023-04-20 18:38 ` Alexei Starovoitov
2023-04-21 3:46 ` Joanne Koong
2023-04-24 14:31 ` Eduard Zingerman
2023-04-24 19:46 ` John Fastabend
2 siblings, 1 reply; 23+ messages in thread
From: Alexei Starovoitov @ 2023-04-20 18:38 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, andrii, ast, daniel
On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> return obj;
> @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
I've missed this earlier.
Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
Otherwise when people start passing bpf_dynptr-s from kernel code
(like fuse-bpf is planning to do)
the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
It's probably not possible right now, so not a high-pri issue, but still.
Or something in the verifier makes sure that dynptr-s are all trusted?
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-20 18:38 ` Alexei Starovoitov
@ 2023-04-21 3:46 ` Joanne Koong
2023-04-22 23:44 ` Alexei Starovoitov
0 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-21 3:46 UTC (permalink / raw)
To: Alexei Starovoitov; +Cc: bpf, andrii, ast, daniel
On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > return obj;
> > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
>
> I've missed this earlier.
> Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> Otherwise when people start passing bpf_dynptr-s from kernel code
> (like fuse-bpf is planning to do)
> the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> It's probably not possible right now, so not a high-pri issue, but still.
> Or something in the verifier makes sure that dynptr-s are all trusted?
In my understanding, the checks the verifier enforces for
KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
null. The verifier logic does this for dynptrs currently, it enforces
that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
check_kfunc_args() for KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
good idea to add the KF_TRUSTED_ARGS flag anyways in case more safety
checks are added to KF_TRUSTED_ARGS in the future?
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-21 3:46 ` Joanne Koong
@ 2023-04-22 23:44 ` Alexei Starovoitov
2023-04-25 5:05 ` Joanne Koong
0 siblings, 1 reply; 23+ messages in thread
From: Alexei Starovoitov @ 2023-04-22 23:44 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann
On Thu, Apr 20, 2023 at 8:46 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > > return obj;
> > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> >
> > I've missed this earlier.
> > Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> > Otherwise when people start passing bpf_dynptr-s from kernel code
> > (like fuse-bpf is planning to do)
> > the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> > It's probably not possible right now, so not a high-pri issue, but still.
> > Or something in the verifier makes sure that dynptr-s are all trusted?
>
> In my understanding, the checks the verifier enforces for
> KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
> null. The verifier logic does this for dynptrs currently, it enforces
> that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
> reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
> check_kfunc_args() for KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
> good idea to add the KF_TRUSTED_ARGS flag anyways in case more safety
> checks are added to KF_TRUSTED_ARGS in the future?
Yeah. You're right.
The verifier is doing the same checks for dynptr and for trusted ptrs.
So adding KF_TRUSTED_ARGS to bpf_dynptr_adjust is not mandatory.
Maybe an opportunity to generalize the checks between
KF_ARG_PTR_TO_BTF_ID and KF_ARG_PTR_TO_DYNPTR.
But KF_TRUSTED_ARGS is necessary for bpf_dynptr_from_skb
otherwise old style ptr_to_btf_id skb can be passed in.
For example the following passes test_progs:
diff --git a/net/core/filter.c b/net/core/filter.c
index d9ce04ca22ce..abb14036b455 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -11718,6 +11718,7 @@ static int __init bpf_kfunc_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_kfunc_set_skb);
return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
}
late_initcall(bpf_kfunc_init);
diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
index b2fa6c47ecc0..bd8fbc3e04ea 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_success.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
@@ -4,6 +4,7 @@
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
#include "bpf_misc.h"
#include "bpf_kfuncs.h"
#include "errno.h"
@@ -187,6 +188,15 @@ int test_skb_readonly(struct __sk_buff *skb)
return 1;
}
+SEC("fentry/__kfree_skb")
+int BPF_PROG(test_skb, struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+ return 0;
+}
but shouldn't. skb in fentry is not trusted.
It's not an issue right now, because bpf_dynptr_from_skb()
is enabled for networking prog types only,
but BPF_PROG_TYPE_NETFILTER is already blending the boundary.
It's more networking than tracing and normal tracing should
be able to examine skb. dynptr allows to do it nicely.
Not a blocker for this set. Just something to follow up.
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
2023-04-20 18:38 ` Alexei Starovoitov
@ 2023-04-24 14:31 ` Eduard Zingerman
2023-04-24 14:33 ` Eduard Zingerman
2023-04-24 19:46 ` John Fastabend
2 siblings, 1 reply; 23+ messages in thread
From: Eduard Zingerman @ 2023-04-24 14:31 UTC (permalink / raw)
To: Joanne Koong, bpf; +Cc: andrii, ast, daniel
On Thu, 2023-04-20 at 00:14 -0700, Joanne Koong wrote:
> Add a new kfunc
>
> int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
>
> which adjusts the dynptr to reflect the new [start, end) interval.
> In particular, it advances the offset of the dynptr by "start" bytes,
> and if end is less than the size of the dynptr, then this will trim the
> dynptr accordingly.
>
> Adjusting the dynptr interval may be useful in certain situations.
> For example, with a hashing helper that takes in a generic dynptr: if
> the dynptr points to a struct but only a certain memory region inside
> the struct should be hashed, adjust can be used to narrow in on the
> specific region to hash.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 00e5fb0682ac..7ddf63ac93ce 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
> return ptr->size & DYNPTR_SIZE_MASK;
> }
>
> +static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
> +{
> + u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
> +
> + ptr->size = new_size | metadata;
> +}
> +
> int bpf_dynptr_check_size(u32 size)
> {
> return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> @@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
> return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
> }
>
> +__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
> +{
> + u32 size;
> +
> + if (!ptr->data || start > end)
> + return -EINVAL;
> +
> + size = bpf_dynptr_get_size(ptr);
> +
> + if (start > size || end > size)
> + return -ERANGE;
If the new size is computed as "end - start", should
the check above be "end >= size"?
> +
> + ptr->offset += start;
> + bpf_dynptr_set_size(ptr, end - start);
> +
> + return 0;
> +}
> +
> __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> {
> return obj;
> @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> BTF_SET8_END(common_btf_ids)
>
> static const struct btf_kfunc_id_set common_kfunc_set = {
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-24 14:31 ` Eduard Zingerman
@ 2023-04-24 14:33 ` Eduard Zingerman
0 siblings, 0 replies; 23+ messages in thread
From: Eduard Zingerman @ 2023-04-24 14:33 UTC (permalink / raw)
To: Joanne Koong, bpf; +Cc: andrii, ast, daniel
On Mon, 2023-04-24 at 17:31 +0300, Eduard Zingerman wrote:
> On Thu, 2023-04-20 at 00:14 -0700, Joanne Koong wrote:
> > Add a new kfunc
> >
> > int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
> >
> > which adjusts the dynptr to reflect the new [start, end) interval.
> > In particular, it advances the offset of the dynptr by "start" bytes,
> > and if end is less than the size of the dynptr, then this will trim the
> > dynptr accordingly.
> >
> > Adjusting the dynptr interval may be useful in certain situations.
> > For example, with a hashing helper that takes in a generic dynptr: if
> > the dynptr points to a struct but only a certain memory region inside
> > the struct should be hashed, adjust can be used to narrow in on the
> > specific region to hash.
> >
> > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > ---
> > kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
> > 1 file changed, 26 insertions(+)
> >
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 00e5fb0682ac..7ddf63ac93ce 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
> > return ptr->size & DYNPTR_SIZE_MASK;
> > }
> >
> > +static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
> > +{
> > + u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
> > +
> > + ptr->size = new_size | metadata;
> > +}
> > +
> > int bpf_dynptr_check_size(u32 size)
> > {
> > return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> > @@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
> > return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
> > }
> >
> > +__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
> > +{
> > + u32 size;
> > +
> > + if (!ptr->data || start > end)
> > + return -EINVAL;
> > +
> > + size = bpf_dynptr_get_size(ptr);
> > +
> > + if (start > size || end > size)
> > + return -ERANGE;
>
> If the new size is computed as "end - start", should
> the check above be "end >= size"?
Please ignore this comment, I got confused.
>
> > +
> > + ptr->offset += start;
> > + bpf_dynptr_set_size(ptr, end - start);
> > +
> > + return 0;
> > +}
> > +
> > __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> > {
> > return obj;
> > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > BTF_SET8_END(common_btf_ids)
> >
> > static const struct btf_kfunc_id_set common_kfunc_set = {
>
* RE: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
2023-04-20 18:38 ` Alexei Starovoitov
2023-04-24 14:31 ` Eduard Zingerman
@ 2023-04-24 19:46 ` John Fastabend
2023-04-25 5:29 ` Joanne Koong
2 siblings, 1 reply; 23+ messages in thread
From: John Fastabend @ 2023-04-24 19:46 UTC (permalink / raw)
To: Joanne Koong, bpf; +Cc: andrii, ast, daniel, Joanne Koong
Joanne Koong wrote:
> Add a new kfunc
>
> int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
>
> which adjusts the dynptr to reflect the new [start, end) interval.
> In particular, it advances the offset of the dynptr by "start" bytes,
> and if end is less than the size of the dynptr, then this will trim the
> dynptr accordingly.
>
> Adjusting the dynptr interval may be useful in certain situations.
> For example, with a hashing helper that takes in a generic dynptr: if
> the dynptr points to a struct but only a certain memory region inside
> the struct should be hashed, adjust can be used to narrow in on the
> specific region to hash.
Would you want to prohibit creating an empty dynptr with [start, start)?
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 00e5fb0682ac..7ddf63ac93ce 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
> return ptr->size & DYNPTR_SIZE_MASK;
> }
>
> +static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
> +{
> + u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
> +
> + ptr->size = new_size | metadata;
> +}
> +
> int bpf_dynptr_check_size(u32 size)
> {
> return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> @@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
> return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
> }
>
> +__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
> +{
> + u32 size;
> +
> + if (!ptr->data || start > end)
> + return -EINVAL;
> +
> + size = bpf_dynptr_get_size(ptr);
> +
> + if (start > size || end > size)
> + return -ERANGE;
maybe 'start >= size'? OTOH if the verifier doesn't mind I guess it's OK
to create the thing even if it doesn't make much sense.
> +
> + ptr->offset += start;
> + bpf_dynptr_set_size(ptr, end - start);
> +
> + return 0;
> +}
> +
> __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> {
> return obj;
> @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> BTF_SET8_END(common_btf_ids)
>
> static const struct btf_kfunc_id_set common_kfunc_set = {
> --
> 2.34.1
>
* RE: [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly
2023-04-20 7:14 ` [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly Joanne Koong
@ 2023-04-24 19:49 ` John Fastabend
0 siblings, 0 replies; 23+ messages in thread
From: John Fastabend @ 2023-04-24 19:49 UTC (permalink / raw)
To: Joanne Koong, bpf; +Cc: andrii, ast, daniel, Joanne Koong
Joanne Koong wrote:
> bpf_dynptr_is_null returns true if the dynptr is null / invalid
> (determined by whether ptr->data is NULL), and false if the dynptr
> is valid.
>
> bpf_dynptr_is_rdonly returns true if the dynptr is read-only, and
> false if it is read-writable. If the dynptr is null / invalid,
> false is returned by default.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> ---
Acked-by: John Fastabend <john.fastabend@gmail.com>
* RE: [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size
2023-04-20 7:14 ` [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size Joanne Koong
@ 2023-04-24 19:52 ` John Fastabend
0 siblings, 0 replies; 23+ messages in thread
From: John Fastabend @ 2023-04-24 19:52 UTC (permalink / raw)
To: Joanne Koong, bpf; +Cc: andrii, ast, daniel, Joanne Koong
Joanne Koong wrote:
> bpf_dynptr_size returns the number of usable bytes in a dynptr.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
Acked-by: John Fastabend <john.fastabend@gmail.com>
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-22 23:44 ` Alexei Starovoitov
@ 2023-04-25 5:05 ` Joanne Koong
2023-04-25 23:46 ` Alexei Starovoitov
0 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-25 5:05 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann
On Sat, Apr 22, 2023 at 4:44 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Apr 20, 2023 at 8:46 PM Joanne Koong <joannelkoong@gmail.com> wrote:
> >
> > On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > > > return obj;
> > > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > > > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > > > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > >
> > > I've missed this earlier.
> > > Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> > > Otherwise when people start passing bpf_dynptr-s from kernel code
> > > (like fuse-bpf is planning to do)
> > > the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> > > It's probably not possible right now, so not a high-pri issue, but still.
> > > Or something in the verifier makes sure that dynptr-s are all trusted?
> >
> > In my understanding, the checks the verifier enforces for
> > KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
> > null. The verifier logic already does this for dynptrs: it enforces
> > that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
> > reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
> > check_kfunc_args() for the KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
> > good idea to add the KF_TRUSTED_ARGS flag anyway in case more safety
> > checks are added to KF_TRUSTED_ARGS in the future?
>
> Yeah. You're right.
> The verifier is doing the same checks for dynptr and for trusted ptrs.
> So adding KF_TRUSTED_ARGS to bpf_dynptr_adjust is not mandatory.
> Maybe an opportunity to generalize the checks between
> KF_ARG_PTR_TO_BTF_ID and KF_ARG_PTR_TO_DYNPTR.
> But KF_TRUSTED_ARGS is necessary for bpf_dynptr_from_skb,
> otherwise an old-style ptr_to_btf_id skb can be passed in.
>
> For example the following passes test_progs:
> diff --git a/net/core/filter.c b/net/core/filter.c
> index d9ce04ca22ce..abb14036b455 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -11718,6 +11718,7 @@ static int __init bpf_kfunc_init(void)
> 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
> 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
> 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
> +	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_kfunc_set_skb);
> 	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
> }
> late_initcall(bpf_kfunc_init);
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
> index b2fa6c47ecc0..bd8fbc3e04ea 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> @@ -4,6 +4,7 @@
> #include <string.h>
> #include <linux/bpf.h>
> #include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> #include "bpf_misc.h"
> #include "bpf_kfuncs.h"
> #include "errno.h"
> @@ -187,6 +188,15 @@ int test_skb_readonly(struct __sk_buff *skb)
> return 1;
> }
>
> +SEC("fentry/__kfree_skb")
> +int BPF_PROG(test_skb, struct __sk_buff *skb)
> +{
> + struct bpf_dynptr ptr;
> +
> + bpf_dynptr_from_skb(skb, 0, &ptr);
> + return 0;
> +}
>
> but shouldn't. skb in fentry is not trusted.
> It's not an issue right now, because bpf_dynptr_from_skb()
> is enabled for networking prog types only,
> but BPF_PROG_TYPE_NETFILTER is already blurring the boundary.
> It's more networking than tracing, and normal tracing should
> be able to examine an skb. dynptr allows doing it nicely.
> Not a blocker for this set. Just something to follow up.
Ahh I see, thanks for the explanation. I'm trying to find where this
happens in the code. I see the check in the verifier for
is_trusted_reg() (when we call check_kfunc_args() for the
KF_ARG_PTR_TO_BTF_ID case), so it seems like the skb ctx reg is trusted
if it's been marked as either MEM_ALLOC or PTR_TRUSTED, and it's
untrusted if it's not. But where does the reg get marked as PTR_TRUSTED
for networking prog types?
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-24 19:46 ` John Fastabend
@ 2023-04-25 5:29 ` Joanne Koong
2023-04-26 17:46 ` Andrii Nakryiko
0 siblings, 1 reply; 23+ messages in thread
From: Joanne Koong @ 2023-04-25 5:29 UTC (permalink / raw)
To: John Fastabend; +Cc: bpf, andrii, ast, daniel
On Mon, Apr 24, 2023 at 12:46 PM John Fastabend
<john.fastabend@gmail.com> wrote:
>
> Joanne Koong wrote:
> > Add a new kfunc
> >
> > int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
> >
> > which adjusts the dynptr to reflect the new [start, end) interval.
> > In particular, it advances the offset of the dynptr by "start" bytes,
> > and if end is less than the size of the dynptr, then this will trim the
> > dynptr accordingly.
> >
> > Adjusting the dynptr interval may be useful in certain situations.
> > For example, when hashing which takes in generic dynptrs, if the dynptr
> > points to a struct but only a certain memory region inside the struct
> > should be hashed, adjust can be used to narrow in on the
> > specific region to hash.
>
> Would you want to prohibit creating an empty dynptr with [start, start)?
I'm open to either :) I don't really see a use case for creating an
empty dynptr, but I think the concept of an empty dynptr might be
useful in general, so maybe we should let this be okay as well?
>
> >
> > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > ---
> > kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
> > 1 file changed, 26 insertions(+)
> >
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 00e5fb0682ac..7ddf63ac93ce 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
> > return ptr->size & DYNPTR_SIZE_MASK;
> > }
> >
> > +static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
> > +{
> > + u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
> > +
> > + ptr->size = new_size | metadata;
> > +}
> > +
> > int bpf_dynptr_check_size(u32 size)
> > {
> > return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> > @@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
> > return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
> > }
> >
> > +__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
> > +{
> > + u32 size;
> > +
> > + if (!ptr->data || start > end)
> > + return -EINVAL;
> > +
> > + size = bpf_dynptr_get_size(ptr);
> > +
> > + if (start > size || end > size)
> > + return -ERANGE;
>
> maybe 'start >= size'? OTOH if the verifier doesn't mind I guess it's OK
> to create the thing even if it doesn't make much sense.
I think there might be use cases where this is useful even though the
zero-sized dynptr can't do anything. For example, if there's a helper
function in a program that takes in a dynptr, parses some fixed-size
chunk of its data, and then advances it, it might be useful to have
the concept of a zero-sized dynptr, so that if we're parsing the last
chunk of the data, the last call to bpf_dynptr_adjust() will
still succeed and the dynptr will be of size 0, which signals
completion.
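A minimal sketch of that parse-and-advance pattern (the chunk size and
the parse_chunk helper are illustrative, not anything from this series):

#define CHUNK_SZ 16

static int parse_chunk(struct bpf_dynptr *ptr)
{
	char chunk[CHUNK_SZ];
	__u32 size = bpf_dynptr_size(ptr);

	if (size == 0)
		return 1;	/* zero-sized dynptr: all chunks consumed */

	if (bpf_dynptr_read(chunk, sizeof(chunk), ptr, 0, 0))
		return -1;

	/* ... process chunk ... */

	/* advance past the chunk; on the last chunk this leaves a
	 * zero-sized dynptr rather than returning an error
	 */
	return bpf_dynptr_adjust(ptr, CHUNK_SZ, size);
}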
>
> > +
> > + ptr->offset += start;
> > + bpf_dynptr_set_size(ptr, end - start);
> > +
> > + return 0;
> > +}
> > +
> > __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> > {
> > return obj;
> > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > BTF_SET8_END(common_btf_ids)
> >
> > static const struct btf_kfunc_id_set common_kfunc_set = {
> > --
> > 2.34.1
> >
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-25 5:05 ` Joanne Koong
@ 2023-04-25 23:46 ` Alexei Starovoitov
2023-04-26 17:50 ` Andrii Nakryiko
0 siblings, 1 reply; 23+ messages in thread
From: Alexei Starovoitov @ 2023-04-25 23:46 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann
On Mon, Apr 24, 2023 at 10:05:32PM -0700, Joanne Koong wrote:
> On Sat, Apr 22, 2023 at 4:44 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Apr 20, 2023 at 8:46 PM Joanne Koong <joannelkoong@gmail.com> wrote:
> > >
> > > On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > > > > return obj;
> > > > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > > > > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > > > > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > > >
> > > > I've missed this earlier.
> > > > Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> > > > Otherwise when people start passing bpf_dynptr-s from kernel code
> > > > (like fuse-bpf is planning to do)
> > > > the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> > > > It's probably not possible right now, so not a high-pri issue, but still.
> > > > Or something in the verifier makes sure that dynptr-s are all trusted?
> > >
> > > In my understanding, the checks the verifier enforces for
> > > KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
> > > null. The verifier logic already does this for dynptrs: it enforces
> > > that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
> > > reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
> > > check_kfunc_args() for the KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
> > > good idea to add the KF_TRUSTED_ARGS flag anyway in case more safety
> > > checks are added to KF_TRUSTED_ARGS in the future?
> >
> > Yeah. You're right.
> > The verifier is doing the same checks for dynptr and for trusted ptrs.
> > So adding KF_TRUSTED_ARGS to bpf_dynptr_adjust is not mandatory.
> > Maybe an opportunity to generalize the checks between
> > KF_ARG_PTR_TO_BTF_ID and KF_ARG_PTR_TO_DYNPTR.
> > But KF_TRUSTED_ARGS is necessary for bpf_dynptr_from_skb,
> > otherwise an old-style ptr_to_btf_id skb can be passed in.
> >
> > For example the following passes test_progs:
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index d9ce04ca22ce..abb14036b455 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -11718,6 +11718,7 @@ static int __init bpf_kfunc_init(void)
> > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
> > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
> > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
> > +	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_kfunc_set_skb);
> > 	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
> > }
> > late_initcall(bpf_kfunc_init);
> > diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > index b2fa6c47ecc0..bd8fbc3e04ea 100644
> > --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> > +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > @@ -4,6 +4,7 @@
> > #include <string.h>
> > #include <linux/bpf.h>
> > #include <bpf/bpf_helpers.h>
> > +#include <bpf/bpf_tracing.h>
> > #include "bpf_misc.h"
> > #include "bpf_kfuncs.h"
> > #include "errno.h"
> > @@ -187,6 +188,15 @@ int test_skb_readonly(struct __sk_buff *skb)
> > return 1;
> > }
> >
> > +SEC("fentry/__kfree_skb")
> > +int BPF_PROG(test_skb, struct __sk_buff *skb)
> > +{
> > + struct bpf_dynptr ptr;
> > +
> > + bpf_dynptr_from_skb(skb, 0, &ptr);
> > + return 0;
> > +}
> >
> > but shouldn't. skb in fentry is not trusted.
> > It's not an issue right now, because bpf_dynptr_from_skb()
> > is enabled for networking prog types only,
> > but BPF_PROG_TYPE_NETFILTER is already blurring the boundary.
> > It's more networking than tracing, and normal tracing should
> > be able to examine an skb. dynptr allows doing it nicely.
> > Not a blocker for this set. Just something to follow up.
>
> Ahh I see, thanks for the explanation. I'm trying to find where this
> happens in the code. I see the check in the verifier for
> is_trusted_reg() (when we call check_kfunc_args() for the
> KF_ARG_PTR_TO_BTF_ID case), so it seems like the skb ctx reg is trusted
> if it's been marked as either MEM_ALLOC or PTR_TRUSTED, and it's
> untrusted if it's not. But where does the reg get marked as PTR_TRUSTED
> for networking prog types?
is_trusted_reg() applies to PTR_TO_BTF_ID pointers.
For networking progs the skb comes in as PTR_TO_CTX, which is implicitly
trusted and, from a safety POV, equivalent to PTR_TO_BTF_ID | PTR_TRUSTED.
But tracing progs are different. Arguments of tp_btf progs
are also trusted, but fexit args are not. They're old legacy PTR_TO_BTF_ID
without flags: neither PTR_TRUSTED nor PTR_UNTRUSTED.
The purpose of KF_TRUSTED_ARGS is to filter out such pointers.
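A sketch of the three flavors side by side (program bodies elided; the
attach points are only examples):

SEC("tc")
int ctx_skb(struct __sk_buff *skb)
{
	/* PTR_TO_CTX: implicitly trusted */
	return 0;
}

SEC("tp_btf/kfree_skb")
int BPF_PROG(trusted_skb, struct sk_buff *skb)
{
	/* PTR_TO_BTF_ID | PTR_TRUSTED: passes KF_TRUSTED_ARGS */
	return 0;
}

SEC("fexit/__kfree_skb")
int BPF_PROG(legacy_skb, struct sk_buff *skb)
{
	/* legacy PTR_TO_BTF_ID without flags: KF_TRUSTED_ARGS rejects it */
	return 0;
}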
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-25 5:29 ` Joanne Koong
@ 2023-04-26 17:46 ` Andrii Nakryiko
2023-04-28 21:08 ` John Fastabend
0 siblings, 1 reply; 23+ messages in thread
From: Andrii Nakryiko @ 2023-04-26 17:46 UTC (permalink / raw)
To: Joanne Koong; +Cc: John Fastabend, bpf, andrii, ast, daniel
On Mon, Apr 24, 2023 at 10:29 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 12:46 PM John Fastabend
> <john.fastabend@gmail.com> wrote:
> >
> > Joanne Koong wrote:
> > > Add a new kfunc
> > >
> > > int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
> > >
> > > which adjusts the dynptr to reflect the new [start, end) interval.
> > > In particular, it advances the offset of the dynptr by "start" bytes,
> > > and if end is less than the size of the dynptr, then this will trim the
> > > dynptr accordingly.
> > >
> > > Adjusting the dynptr interval may be useful in certain situations.
> > > For example, when hashing which takes in generic dynptrs, if the dynptr
> > > points to a struct but only a certain memory region inside the struct
> > > should be hashed, adjust can be used to narrow in on the
> > > specific region to hash.
> >
> > Would you want to prohibit creating an empty dynptr with [start, start)?
>
> I'm open to either :) I don't really see a use case for creating an
> empty dynptr, but I think the concept of an empty dynptr might be
> useful in general, so maybe we should let this be okay as well?
Yes, there is no need to artificially enforce a non-empty range. We
already use pointers to zero-sized memory regions in the verifier (e.g.,
Alexei's recent kfunc existence check changes). In general, an empty
range is a valid range, and we should strive to have that working
without assumptions about who would use it and how. As long as it's
conceptually safe, we should support it.
>
> >
> > >
> > > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > > ---
> > > kernel/bpf/helpers.c | 26 ++++++++++++++++++++++++++
> > > 1 file changed, 26 insertions(+)
> > >
> > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > index 00e5fb0682ac..7ddf63ac93ce 100644
> > > --- a/kernel/bpf/helpers.c
> > > +++ b/kernel/bpf/helpers.c
> > > @@ -1448,6 +1448,13 @@ u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
> > > return ptr->size & DYNPTR_SIZE_MASK;
> > > }
> > >
> > > +static void bpf_dynptr_set_size(struct bpf_dynptr_kern *ptr, u32 new_size)
> > > +{
> > > + u32 metadata = ptr->size & ~DYNPTR_SIZE_MASK;
> > > +
> > > + ptr->size = new_size | metadata;
> > > +}
> > > +
> > > int bpf_dynptr_check_size(u32 size)
> > > {
> > > return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> > > @@ -2297,6 +2304,24 @@ __bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 o
> > > return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
> > > }
> > >
> > > +__bpf_kfunc int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end)
> > > +{
> > > + u32 size;
> > > +
> > > + if (!ptr->data || start > end)
> > > + return -EINVAL;
> > > +
> > > + size = bpf_dynptr_get_size(ptr);
> > > +
> > > + if (start > size || end > size)
> > > + return -ERANGE;
> >
> > maybe 'start >= size'? OTOH if the verifier doesn't mind I guess it's OK
> > to create the thing even if it doesn't make much sense.
>
> I think there might be use cases where this is useful even though the
> zero-sized dynptr can't do anything. For example, if there's a helper
> function in a program that takes in a dynptr, parses some fixed-size
> chunk of its data, and then advances it, it might be useful to have
> the concept of a zero-sized dynptr, so that if we're parsing the last
> chunk of the data, the last call to bpf_dynptr_adjust() will
> still succeed and the dynptr will be of size 0, which signals
> completion.
>
+1, empty ranges do happen in practice naturally, and having to
special-case them is a hindrance. Let's keep them possible.
> >
> > > +
> > > + ptr->offset += start;
> > > + bpf_dynptr_set_size(ptr, end - start);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> > > {
> > > return obj;
> > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > > BTF_SET8_END(common_btf_ids)
> > >
> > > static const struct btf_kfunc_id_set common_kfunc_set = {
> > > --
> > > 2.34.1
> > >
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-25 23:46 ` Alexei Starovoitov
@ 2023-04-26 17:50 ` Andrii Nakryiko
0 siblings, 0 replies; 23+ messages in thread
From: Andrii Nakryiko @ 2023-04-26 17:50 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Joanne Koong, bpf, Andrii Nakryiko, Alexei Starovoitov,
Daniel Borkmann
On Tue, Apr 25, 2023 at 4:46 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 10:05:32PM -0700, Joanne Koong wrote:
> > On Sat, Apr 22, 2023 at 4:44 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Thu, Apr 20, 2023 at 8:46 PM Joanne Koong <joannelkoong@gmail.com> wrote:
> > > >
> > > > On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
> > > > <alexei.starovoitov@gmail.com> wrote:
> > > > >
> > > > > On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > > > > > return obj;
> > > > > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > > > > > BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > > > > > BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > > > BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > > > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> > > > >
> > > > > I've missed this earlier.
> > > > > Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> > > > > Otherwise when people start passing bpf_dynptr-s from kernel code
> > > > > (like fuse-bpf is planning to do)
> > > > > the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> > > > > It's probably not possible right now, so not a high-pri issue, but still.
> > > > > Or something in the verifier makes sure that dynptr-s are all trusted?
> > > >
> > > > In my understanding, the checks the verifier enforces for
> > > > KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
> > > > null. The verifier logic already does this for dynptrs: it enforces
> > > > that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
> > > > reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
> > > > check_kfunc_args() for the KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
> > > > good idea to add the KF_TRUSTED_ARGS flag anyway in case more safety
> > > > checks are added to KF_TRUSTED_ARGS in the future?
> > >
> > > Yeah. You're right.
> > > The verifier is doing the same checks for dynptr and for trusted ptrs.
> > > So adding KF_TRUSTED_ARGS to bpf_dynptr_adjust is not mandatory.
> > > Maybe an opportunity to generalize the checks between
> > > KF_ARG_PTR_TO_BTF_ID and KF_ARG_PTR_TO_DYNPTR.
> > > But KF_TRUSTED_ARGS is necessary for bpf_dynptr_from_skb,
> > > otherwise an old-style ptr_to_btf_id skb can be passed in.
> > >
> > > For example the following passes test_progs:
> > > diff --git a/net/core/filter.c b/net/core/filter.c
> > > index d9ce04ca22ce..abb14036b455 100644
> > > --- a/net/core/filter.c
> > > +++ b/net/core/filter.c
> > > @@ -11718,6 +11718,7 @@ static int __init bpf_kfunc_init(void)
> > > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
> > > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
> > > 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
> > > +	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_kfunc_set_skb);
> > > 	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
> > > }
> > > late_initcall(bpf_kfunc_init);
> > > diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > > index b2fa6c47ecc0..bd8fbc3e04ea 100644
> > > --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> > > +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > > @@ -4,6 +4,7 @@
> > > #include <string.h>
> > > #include <linux/bpf.h>
> > > #include <bpf/bpf_helpers.h>
> > > +#include <bpf/bpf_tracing.h>
> > > #include "bpf_misc.h"
> > > #include "bpf_kfuncs.h"
> > > #include "errno.h"
> > > @@ -187,6 +188,15 @@ int test_skb_readonly(struct __sk_buff *skb)
> > > return 1;
> > > }
> > >
> > > +SEC("fentry/__kfree_skb")
> > > +int BPF_PROG(test_skb, struct __sk_buff *skb)
> > > +{
> > > + struct bpf_dynptr ptr;
> > > +
> > > + bpf_dynptr_from_skb(skb, 0, &ptr);
> > > + return 0;
> > > +}
> > >
> > > but shouldn't. skb in fentry is not trusted.
> > > It's not an issue right now, because bpf_dynptr_from_skb()
> > > is enabled for networking prog types only,
> > > but BPF_PROG_TYPE_NETFILTER is already blurring the boundary.
> > > It's more networking than tracing, and normal tracing should
> > > be able to examine an skb. dynptr allows doing it nicely.
> > > Not a blocker for this set. Just something to follow up.
> >
> > Ahh I see, thanks for the explanation. I'm trying to find where this
> > happens in the code. I see the check in the verifier for
> > is_trusted_reg() (when we call check_kfunc_args() for the
> > KF_ARG_PTR_TO_BTF_ID case), so it seems like the skb ctx reg is trusted
> > if it's been marked as either MEM_ALLOC or PTR_TRUSTED, and it's
> > untrusted if it's not. But where does the reg get marked as PTR_TRUSTED
> > for networking prog types?
>
> is_trusted_reg() applies to PTR_TO_BTF_ID pointers.
> For networking progs the skb comes in as PTR_TO_CTX, which is implicitly
> trusted and, from a safety POV, equivalent to PTR_TO_BTF_ID | PTR_TRUSTED.
> But tracing progs are different. Arguments of tp_btf progs
> are also trusted, but fexit args are not. They're old legacy PTR_TO_BTF_ID
> without flags: neither PTR_TRUSTED nor PTR_UNTRUSTED.
> The purpose of KF_TRUSTED_ARGS is to filter out such pointers.
I need to take a thorough look at how FUSE BPF is using bpf_dynptr, but
I'd rather we start strict and not allow passing bpf_dynptr as
PTR_TO_BTF_ID, at least right now, until we can think this through
thoroughly. One fundamental aspect of the current on-the-stack dynptr, or
of a dynptr passed from user_ringbuf's drain helper, is that we have a
guarantee that no one else besides the program (on a single current
thread) can modify its state.
Anyway, I'd expect that if some kfunc accepts `struct bpf_dynptr`, we
force the argument to be a CONST_PTR_TO_DYNPTR or PTR_TO_STACK register,
at least for now. If that's not the case right now, let's make sure it is.
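A condensed sketch of that enforcement as check_kfunc_args() does it for
KF_ARG_PTR_TO_DYNPTR args (paraphrased, not verbatim kernel source; the
wrapper function name is illustrative):

static int check_dynptr_arg_reg(struct bpf_verifier_env *env,
				struct bpf_reg_state *reg, int argno)
{
	/* dynptr args must live on the prog's stack or be a
	 * verifier-created CONST_PTR_TO_DYNPTR (e.g. the callback
	 * arg of bpf_user_ringbuf_drain)
	 */
	if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
		verbose(env, "arg#%d expected pointer to stack or dynptr_ptr\n",
			argno);
		return -EINVAL;
	}
	return 0;
}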
* Re: [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone
2023-04-20 7:14 ` [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone Joanne Koong
@ 2023-04-26 18:16 ` Andrii Nakryiko
0 siblings, 0 replies; 23+ messages in thread
From: Andrii Nakryiko @ 2023-04-26 18:16 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, andrii, ast, daniel
On Thu, Apr 20, 2023 at 12:15 AM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> The cloned dynptr will point to the same data as its parent dynptr,
> with the same type, offset, size and read-only properties.
>
> Any writes to a dynptr will be reflected across all instances
> (by 'instance', this means any dynptrs that point to the same
> underlying data).
>
> Please note that data slice and dynptr invalidations will affect all
> instances as well. For example, if bpf_dynptr_write() is called on an
> skb-type dynptr, all data slices of dynptr instances to that skb
> will be invalidated as well (e.g. data slices of any clones, parents,
> grandparents, ...). Another example is if a ringbuf dynptr is submitted,
> any instance of that dynptr will be invalidated.
>
> Changing the view of the dynptr (e.g. advancing the offset or
> trimming the size) will only affect that dynptr and not affect any
> other instances.
>
> One example use case where cloning may be helpful is for hashing or
> iterating through dynptr data. Cloning will allow the user to maintain
> the original view of the dynptr for future use, while also allowing
> views to smaller subsets of the data after the offset is advanced or the
> size is trimmed.
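A minimal sketch of that clone-then-narrow pattern (it assumes a
BPF_MAP_TYPE_RINGBUF map named "ringbuf"; the offsets are illustrative):

SEC("?tp/syscalls/sys_enter_nanosleep")
int clone_then_narrow(void *ctx)
{
	struct bpf_dynptr ptr, view;

	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr);

	bpf_dynptr_clone(&ptr, &view);

	/* narrowing the clone's view does not affect the parent */
	bpf_dynptr_adjust(&view, 8, 24);

	/* ... hash or parse 'view' while 'ptr' keeps the full view ... */

	bpf_ringbuf_submit_dynptr(&ptr, 0);
	return 0;
}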
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> kernel/bpf/helpers.c | 14 ++++++
> kernel/bpf/verifier.c | 105 ++++++++++++++++++++++++++++++++++--------
> 2 files changed, 99 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 9018646b86db..1ebdc7f1a574 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2343,6 +2343,19 @@ __bpf_kfunc __u32 bpf_dynptr_size(const struct bpf_dynptr_kern *ptr)
> return __bpf_dynptr_size(ptr);
> }
>
> +__bpf_kfunc int bpf_dynptr_clone(struct bpf_dynptr_kern *ptr,
> + struct bpf_dynptr_kern *clone__uninit)
> +{
> + if (!ptr->data) {
> + bpf_dynptr_set_null(clone__uninit);
> + return -EINVAL;
> + }
> +
> + *clone__uninit = *ptr;
> +
> + return 0;
> +}
> +
> __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
> {
> return obj;
> @@ -2419,6 +2432,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> BTF_ID_FLAGS(func, bpf_dynptr_is_null)
> BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
> BTF_ID_FLAGS(func, bpf_dynptr_size)
> +BTF_ID_FLAGS(func, bpf_dynptr_clone)
> BTF_SET8_END(common_btf_ids)
>
> static const struct btf_kfunc_id_set common_kfunc_set = {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1e05355facdc..164726673086 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -309,6 +309,7 @@ struct bpf_kfunc_call_arg_meta {
> struct {
> enum bpf_dynptr_type type;
> u32 id;
> + u32 ref_obj_id;
> } initialized_dynptr;
> struct {
> u8 spi;
> @@ -847,11 +848,11 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> struct bpf_func_state *state, int spi);
>
> static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> - enum bpf_arg_type arg_type, int insn_idx)
> + enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
> {
> struct bpf_func_state *state = func(env, reg);
> enum bpf_dynptr_type type;
> - int spi, i, id, err;
> + int spi, i, err;
>
> spi = dynptr_get_spi(env, reg);
> if (spi < 0)
> @@ -887,7 +888,13 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>
> if (dynptr_type_refcounted(type)) {
> /* The id is used to track proper releasing */
> - id = acquire_reference_state(env, insn_idx);
> + int id;
> +
> + if (clone_ref_obj_id)
> + id = clone_ref_obj_id;
> + else
> + id = acquire_reference_state(env, insn_idx);
> +
> if (id < 0)
> return id;
>
> @@ -901,24 +908,15 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> return 0;
> }
>
> -static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state, int spi)
> {
> - struct bpf_func_state *state = func(env, reg);
> - int spi, i;
> -
> - spi = dynptr_get_spi(env, reg);
> - if (spi < 0)
> - return spi;
> + int i;
>
> for (i = 0; i < BPF_REG_SIZE; i++) {
> state->stack[spi].slot_type[i] = STACK_INVALID;
> state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> }
>
> - /* Invalidate any slices associated with this dynptr */
> - if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type))
> - WARN_ON_ONCE(release_reference(env, state->stack[spi].spilled_ptr.ref_obj_id));
> -
> __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
>
> @@ -945,6 +943,52 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> */
> state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +}
> +
> +static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> + struct bpf_func_state *state = func(env, reg);
> + int spi;
> +
> + spi = dynptr_get_spi(env, reg);
> + if (spi < 0)
> + return spi;
> +
> + if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
I inverted this condition to call invalidate_dynptr(spi) and return 0,
which reduced the nesting of the refcounted-case handling code below.
The rest looks great!
> + int ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> + int i;
> +
> + /* If the dynptr has a ref_obj_id, then we need to invalidate
> + * two things:
> + *
> + * 1) Any dynptrs with a matching ref_obj_id (clones)
> + * 2) Any slices derived from this dynptr.
> + */
> +
[...]
* Re: [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers
2023-04-20 7:14 ` [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers Joanne Koong
@ 2023-04-26 18:17 ` Andrii Nakryiko
0 siblings, 0 replies; 23+ messages in thread
From: Andrii Nakryiko @ 2023-04-26 18:17 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, andrii, ast, daniel
On Thu, Apr 20, 2023 at 12:15 AM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> Add various tests for the added dynptr convenience helpers.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> tools/testing/selftests/bpf/bpf_kfuncs.h | 6 +
> .../testing/selftests/bpf/prog_tests/dynptr.c | 6 +
> .../testing/selftests/bpf/progs/dynptr_fail.c | 287 +++++++++++++++++
> .../selftests/bpf/progs/dynptr_success.c | 298 ++++++++++++++++++
> 4 files changed, 597 insertions(+)
>
Very nice and thorough set of tests, thanks a lot! I left a comment
below; please follow up with the requested improvement.
Applied the patch set to bpf-next. Let's continue the discussion about
TRUSTED_ARGS and PTR_TO_BTF_ID for dynptr and do the necessary follow-ups
separately.
[...]
> +/* Invalidating a dynptr should invalidate any data slices
> + * of its clones
> + */
> +SEC("?raw_tp")
> +__failure __msg("invalid mem access 'scalar'")
> +int clone_invalidate4(void *ctx)
> +{
> + struct bpf_dynptr ptr;
> + struct bpf_dynptr clone;
> + int *data;
> +
> + bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
> +
> + bpf_dynptr_clone(&ptr, &clone);
> + data = bpf_dynptr_data(&clone, 0, sizeof(val));
> + if (!data)
you'd need bpf_ringbuf_discard_dynptr() here to make sure that
compiler code-generation changes don't suddenly make this start failing
due to a missing dynptr release operation. Please send a small follow-up
making this more robust (see the sketch after the quoted test below).
> + return 0;
> +
> + bpf_ringbuf_submit_dynptr(&ptr, 0);
> +
> + /* this should fail */
> + *data = 123;
> +
> + return 0;
> +}
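A sketch of that requested follow-up, adding the discard on the
early-exit path so the test violates only the rule it is checking:

SEC("?raw_tp")
__failure __msg("invalid mem access 'scalar'")
int clone_invalidate4(void *ctx)
{
	struct bpf_dynptr ptr;
	struct bpf_dynptr clone;
	int *data;

	bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);

	bpf_dynptr_clone(&ptr, &clone);
	data = bpf_dynptr_data(&clone, 0, sizeof(val));
	if (!data) {
		/* release the reservation on the early-exit path */
		bpf_ringbuf_discard_dynptr(&ptr, 0);
		return 0;
	}

	bpf_ringbuf_submit_dynptr(&ptr, 0);

	/* this should fail */
	*data = 123;

	return 0;
}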
> +
> +/* Invalidating a dynptr should invalidate any data slices
> + * of its parent
> + */
> +SEC("?raw_tp")
> +__failure __msg("invalid mem access 'scalar'")
> +int clone_invalidate5(void *ctx)
> +{
> + struct bpf_dynptr ptr;
> + struct bpf_dynptr clone;
> + int *data;
> +
> + bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
> + data = bpf_dynptr_data(&ptr, 0, sizeof(val));
> + if (!data)
ditto, we need to make sure we violate only one rule (using
invalidated slices), not the overall dynptr usage pattern.
> + return 0;
> +
> + bpf_dynptr_clone(&ptr, &clone);
> +
> + bpf_ringbuf_submit_dynptr(&clone, 0);
> +
> + /* this should fail */
> + *data = 123;
> +
> + return 0;
> +}
> +
[...]
* Re: [PATCH v2 bpf-next 0/5] Dynptr helpers
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
` (4 preceding siblings ...)
2023-04-20 7:14 ` [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers Joanne Koong
@ 2023-04-26 18:20 ` patchwork-bot+netdevbpf
5 siblings, 0 replies; 23+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-26 18:20 UTC (permalink / raw)
To: Joanne Koong; +Cc: bpf, andrii, ast, daniel
Hello:
This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Thu, 20 Apr 2023 00:14:09 -0700 you wrote:
> This patchset is the 3rd in the dynptr series. The 1st (dynptr
> fundamentals) can be found here [0] and the second (skb + xdp dynptrs)
> can be found here [1].
>
> This patchset adds the following helpers for interacting with
> dynptrs:
>
> [...]
Here is the summary with links:
- [v2,bpf-next,1/5] bpf: Add bpf_dynptr_adjust
https://git.kernel.org/bpf/bpf-next/c/a18ce61a6c1f
- [v2,bpf-next,2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly
https://git.kernel.org/bpf/bpf-next/c/8365e486d6b1
- [v2,bpf-next,3/5] bpf: Add bpf_dynptr_size
https://git.kernel.org/bpf/bpf-next/c/8acc1b16a56f
- [v2,bpf-next,4/5] bpf: Add bpf_dynptr_clone
https://git.kernel.org/bpf/bpf-next/c/cdb12a71426d
- [v2,bpf-next,5/5] selftests/bpf: add tests for dynptr convenience helpers
https://git.kernel.org/bpf/bpf-next/c/a63fcb5ba776
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust
2023-04-26 17:46 ` Andrii Nakryiko
@ 2023-04-28 21:08 ` John Fastabend
0 siblings, 0 replies; 23+ messages in thread
From: John Fastabend @ 2023-04-28 21:08 UTC (permalink / raw)
To: Andrii Nakryiko, Joanne Koong; +Cc: John Fastabend, bpf, andrii, ast, daniel
Andrii Nakryiko wrote:
> On Mon, Apr 24, 2023 at 10:29 PM Joanne Koong <joannelkoong@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 12:46 PM John Fastabend
> > <john.fastabend@gmail.com> wrote:
> > >
> > > Joanne Koong wrote:
> > > > Add a new kfunc
> > > >
> > > > int bpf_dynptr_adjust(struct bpf_dynptr_kern *ptr, u32 start, u32 end);
> > > >
> > > > which adjusts the dynptr to reflect the new [start, end) interval.
> > > > In particular, it advances the offset of the dynptr by "start" bytes,
> > > > and if end is less than the size of the dynptr, then this will trim the
> > > > dynptr accordingly.
> > > >
> > > > Adjusting the dynptr interval may be useful in certain situations.
> > > > For example, when hashing which takes in generic dynptrs, if the dynptr
> > > > points to a struct but only a certain memory region inside the struct
> > > > should be hashed, adjust can be used to narrow in on the
> > > > specific region to hash.
> > >
> > > Would you want to prohibit creating an empty dynptr with [start, start)?
> >
> > I'm open to either :) I don't really see a use case for creating an
> > empty dynptr, but I think the concept of an empty dynptr might be
> > useful in general, so maybe we should let this be okay as well?
>
> Yes, there is no need to artificially enforce a non-empty range. We
> already use pointers to zero-sized memory regions in the verifier (e.g.,
> Alexei's recent kfunc existence check changes). In general, an empty
> range is a valid range, and we should strive to have that working
> without assumptions about who would use it and how. As long as it's
> conceptually safe, we should support it.
Ack. Agreed, sounds good to me.
Thread overview: 23+ messages (newest: 2023-04-28 21:08 UTC)
2023-04-20 7:14 [PATCH v2 bpf-next 0/5] Dynptr helpers Joanne Koong
2023-04-20 7:14 ` [PATCH v2 bpf-next 1/5] bpf: Add bpf_dynptr_adjust Joanne Koong
2023-04-20 18:38 ` Alexei Starovoitov
2023-04-21 3:46 ` Joanne Koong
2023-04-22 23:44 ` Alexei Starovoitov
2023-04-25 5:05 ` Joanne Koong
2023-04-25 23:46 ` Alexei Starovoitov
2023-04-26 17:50 ` Andrii Nakryiko
2023-04-24 14:31 ` Eduard Zingerman
2023-04-24 14:33 ` Eduard Zingerman
2023-04-24 19:46 ` John Fastabend
2023-04-25 5:29 ` Joanne Koong
2023-04-26 17:46 ` Andrii Nakryiko
2023-04-28 21:08 ` John Fastabend
2023-04-20 7:14 ` [PATCH v2 bpf-next 2/5] bpf: Add bpf_dynptr_is_null and bpf_dynptr_is_rdonly Joanne Koong
2023-04-24 19:49 ` John Fastabend
2023-04-20 7:14 ` [PATCH v2 bpf-next 3/5] bpf: Add bpf_dynptr_size Joanne Koong
2023-04-24 19:52 ` John Fastabend
2023-04-20 7:14 ` [PATCH v2 bpf-next 4/5] bpf: Add bpf_dynptr_clone Joanne Koong
2023-04-26 18:16 ` Andrii Nakryiko
2023-04-20 7:14 ` [PATCH v2 bpf-next 5/5] selftests/bpf: add tests for dynptr convenience helpers Joanne Koong
2023-04-26 18:17 ` Andrii Nakryiko
2023-04-26 18:20 ` [PATCH v2 bpf-next 0/5] Dynptr helpers patchwork-bot+netdevbpf