* [PATCH v1 bpf-next 1/6] bpf: verifier: Rename kernel_type_name helper to btf_type_name
2023-03-09 18:01 [PATCH v1 bpf-next 0/6] Support stashing local kptrs with bpf_kptr_xchg Dave Marchevsky
@ 2023-03-09 18:01 ` Dave Marchevsky
2023-03-09 18:01 ` [PATCH v1 bpf-next 2/6] bpf: btf: Remove unused btf_field_info_type enum Dave Marchevsky
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
kernel_type_name was introduced in commit 9e15db66136a ("bpf: Implement accurate raw_tp context access via BTF")
with type signature:
const char *kernel_type_name(u32 id)
At that time the function used global btf_vmlinux BTF for all id lookups. Later,
in commit 22dc4a0f5ed1 ("bpf: Remove hard-coded btf_vmlinux assumption from BPF verifier"),
the type signature was changed to:
static const char *kernel_type_name(const struct btf* btf, u32 id)
with the btf parameter used for lookups instead of the global btf_vmlinux.
The helper works as expected for type name lookups in non-kernel BTFs as well,
and later patches in this series will use it that way. Rename it so the current
name doesn't suggest it is limited to kernel BTF.
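As a rough user-space sketch of what the renamed helper does (toy struct layouts, not the kernel's actual BTF representation): a type id indexes into a BTF's type array, and the type's name_off indexes into that same BTF's string section. Nothing in the chain is specific to kernel BTF, which is the point of the rename.

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Toy stand-ins for the kernel's struct btf / struct btf_type; the real
 * layouts differ, but the lookup chain is the same:
 * id -> btf_type_by_id() -> name_off -> btf_name_by_offset() -> string.
 */
struct toy_btf_type {
	unsigned int name_off; /* offset into the owning BTF's string section */
};

struct toy_btf {
	const char *strings;              /* concatenated NUL-terminated names */
	const struct toy_btf_type *types; /* indexed by type id */
};

static const char *toy_btf_type_name(const struct toy_btf *btf, unsigned int id)
{
	/* mirrors: btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off) */
	return btf->strings + btf->types[id].name_off;
}
```

The lookup only ever dereferences the btf argument it was handed, so it behaves identically whether that BTF is vmlinux, module, or program BTF.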
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
kernel/bpf/verifier.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 45a082284464..cdf1ba65821b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -752,7 +752,7 @@ static int iter_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
return stack_slot_obj_get_spi(env, reg, "iter", nr_slots);
}
-static const char *kernel_type_name(const struct btf* btf, u32 id)
+static const char *btf_type_name(const struct btf *btf, u32 id)
{
return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off);
}
@@ -782,7 +782,7 @@ static const char *iter_type_str(const struct btf *btf, u32 btf_id)
return "<invalid>";
/* we already validated that type is valid and has conforming name */
- return kernel_type_name(btf, btf_id) + sizeof(ITER_PREFIX) - 1;
+ return btf_type_name(btf, btf_id) + sizeof(ITER_PREFIX) - 1;
}
static const char *iter_state_str(enum bpf_iter_state state)
@@ -1349,7 +1349,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
verbose(env, "%s", reg_type_str(env, t));
if (base_type(t) == PTR_TO_BTF_ID)
- verbose(env, "%s", kernel_type_name(reg->btf, reg->btf_id));
+ verbose(env, "%s", btf_type_name(reg->btf, reg->btf_id));
verbose(env, "(");
/*
* _a stands for append, was shortened to avoid multiline statements below.
@@ -4518,7 +4518,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
struct btf_field *kptr_field,
struct bpf_reg_state *reg, u32 regno)
{
- const char *targ_name = kernel_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id);
+ const char *targ_name = btf_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id);
int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU;
const char *reg_name = "";
@@ -4534,7 +4534,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
return -EINVAL;
}
/* We need to verify reg->type and reg->btf, before accessing reg->btf */
- reg_name = kernel_type_name(reg->btf, reg->btf_id);
+ reg_name = btf_type_name(reg->btf, reg->btf_id);
/* For ref_ptr case, release function check should ensure we get one
* referenced PTR_TO_BTF_ID, and that its fixed offset is 0. For the
@@ -7177,8 +7177,8 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
btf_vmlinux, *arg_btf_id,
strict_type_match)) {
verbose(env, "R%d is of type %s but %s is expected\n",
- regno, kernel_type_name(reg->btf, reg->btf_id),
- kernel_type_name(btf_vmlinux, *arg_btf_id));
+ regno, btf_type_name(reg->btf, reg->btf_id),
+ btf_type_name(btf_vmlinux, *arg_btf_id));
return -EACCES;
}
}
@@ -7248,7 +7248,7 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
verbose(env, "R%d must have zero offset when passed to release func\n",
regno);
verbose(env, "No graph node or root found at R%d type:%s off:%d\n", regno,
- kernel_type_name(reg->btf, reg->btf_id), reg->off);
+ btf_type_name(reg->btf, reg->btf_id), reg->off);
return -EINVAL;
}
--
2.34.1
* [PATCH v1 bpf-next 2/6] bpf: btf: Remove unused btf_field_info_type enum
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
This enum was added and used in commit aa3496accc41 ("bpf: Refactor kptr_off_tab
into btf_record"). Later refactoring in commit db559117828d ("bpf: Consolidate
spin_lock, timer management into btf_record") resulted in the enum
values no longer being used anywhere.
Let's remove them.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
kernel/bpf/btf.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 71758cd15b07..37779ceefd09 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3231,12 +3231,6 @@ static void btf_struct_log(struct btf_verifier_env *env,
btf_verifier_log(env, "size=%u vlen=%u", t->size, btf_type_vlen(t));
}
-enum btf_field_info_type {
- BTF_FIELD_SPIN_LOCK,
- BTF_FIELD_TIMER,
- BTF_FIELD_KPTR,
-};
-
enum {
BTF_FIELD_IGNORE = 0,
BTF_FIELD_FOUND = 1,
--
2.34.1
* [PATCH v1 bpf-next 3/6] bpf: Change btf_record_find enum parameter to field_mask
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
btf_record_find's third parameter can be multiple enum btf_field_type values
masked together. The function is called with BPF_KPTR (itself a mask of
multiple field types) in two places in verifier.c, so it already works with
masked values; change the parameter's type and name to reflect that.
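A hedged user-space sketch of the resulting lookup semantics (toy names and flag values, not the kernel's encoding): a field is a match when its offset matches and its type intersects the caller's mask.

```c
#include <stddef.h>
#include <assert.h>

/* Toy flag values; the kernel's enum btf_field_type encoding differs. */
enum {
	TOY_KPTR_UNREF = 1 << 0,
	TOY_KPTR_REF   = 1 << 1,
	TOY_KPTR       = TOY_KPTR_UNREF | TOY_KPTR_REF, /* masked value, like BPF_KPTR */
};

struct toy_field {
	unsigned int offset;
	unsigned int type;
};

static const struct toy_field *
toy_record_find(const struct toy_field *fields, size_t cnt,
		unsigned int offset, unsigned int field_mask)
{
	size_t i;

	/* the kernel bsearches a sorted array; a linear scan is enough here */
	for (i = 0; i < cnt; i++)
		if (fields[i].offset == offset && (fields[i].type & field_mask))
			return &fields[i];
	return NULL;
}
```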
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
include/linux/bpf.h | 2 +-
kernel/bpf/syscall.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index e64ff1e89fb2..3a38db315f7f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1925,7 +1925,7 @@ void bpf_prog_free_id(struct bpf_prog *prog);
void bpf_map_free_id(struct bpf_map *map);
struct btf_field *btf_record_find(const struct btf_record *rec,
- u32 offset, enum btf_field_type type);
+ u32 offset, u32 field_mask);
void btf_record_free(struct btf_record *rec);
void bpf_map_free_record(struct bpf_map *map);
struct btf_record *btf_record_dup(const struct btf_record *rec);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index f406dfa13792..cc4b7684910c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -520,14 +520,14 @@ static int btf_field_cmp(const void *a, const void *b)
}
struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset,
- enum btf_field_type type)
+ u32 field_mask)
{
struct btf_field *field;
- if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & type))
+ if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & field_mask))
return NULL;
field = bsearch(&offset, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
- if (!field || !(field->type & type))
+ if (!field || !(field->type & field_mask))
return NULL;
return field;
}
--
2.34.1
* [PATCH v1 bpf-next 4/6] bpf: Support __kptr to local kptrs
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
If a PTR_TO_BTF_ID type comes from program BTF - not vmlinux or module
BTF - it must have been allocated by bpf_obj_new and therefore must be
freed with bpf_obj_drop. Such a PTR_TO_BTF_ID is considered a "local
kptr" and is tagged with MEM_ALLOC type tag by bpf_obj_new.
This patch adds support for treating __kptr-tagged pointers to "local
kptrs" as having an implicit bpf_obj_drop destructor for referenced kptr
acquire / release semantics. Consider the following example:
struct node_data {
long key;
long data;
struct bpf_rb_node node;
};
struct map_value {
struct node_data __kptr *node;
};
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__type(key, int);
__type(value, struct map_value);
__uint(max_entries, 1);
} some_nodes SEC(".maps");
If struct node_data had a matching definition in kernel BTF, the verifier would
expect a destructor for the type to be registered. Since struct node_data does
not match any type in kernel BTF, the verifier knows that there is no kfunc
that provides a PTR_TO_BTF_ID to this type, and that such a PTR_TO_BTF_ID can
only come from bpf_obj_new. So instead of searching for a registered dtor,
a bpf_obj_drop dtor can be assumed.
This allows the runtime to properly destruct such kptrs in
bpf_obj_free_fields, which enables maps to clean up map values containing such
kptrs when the map goes away.
Implementation notes:
* "kernel_btf" variable is renamed to "kptr_btf" in btf_parse_kptr.
Before this patch, the variable would only ever point to vmlinux or
module BTFs, but now it can point to some program BTF for local kptr
type. It's later used to populate the (btf, btf_id) pair in kptr btf
field.
* It's necessary to btf_get the program BTF when populating btf_field
for local kptr. btf_record_free later does a btf_put.
* Behavior for non-local referenced kptrs is not modified, as
bpf_find_btf_id helper only searches vmlinux and module BTFs for
matching BTF type. If such a type is found, btf_field_kptr's btf will
pass btf_is_kernel check, and the associated release function is
some one-argument dtor. If btf_is_kernel check fails, associated
release function is the two-argument bpf_obj_drop_impl. Before this patch,
only btf_field_kptrs with kernel or module BTFs were created.
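The destructor dispatch described in the notes above can be sketched as the following user-space toy (names and types are simplified stand-ins, not the kernel's): a kernel-BTF kptr keeps its registered one-argument dtor, while a program-BTF ("local") kptr is dropped through a two-argument obj_drop that also receives the pointee's btf_record.

```c
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

struct toy_record;
typedef void (*toy_dtor_t)(void *);
typedef void (*toy_obj_drop_t)(void *, const struct toy_record *);

struct toy_kptr_field {
	bool btf_is_kernel;
	union {
		toy_dtor_t dtor;         /* used if btf_is_kernel */
		toy_obj_drop_t obj_drop; /* used for program-BTF kptrs */
	};
};

static int dtor_calls, obj_drop_calls;

static void toy_dtor(void *p) { (void)p; dtor_calls++; }

static void toy_obj_drop(void *p, const struct toy_record *rec)
{
	(void)p; (void)rec;
	obj_drop_calls++;
}

/* mirrors the BPF_KPTR_REF arm of bpf_obj_free_fields after this patch */
static void toy_free_kptr(const struct toy_kptr_field *f, void *p)
{
	if (!f->btf_is_kernel)
		f->obj_drop(p, NULL); /* the kernel passes the pointee's btf_record */
	else
		f->dtor(p);
}
```

The union is safe because btf_is_kernel(field->kptr.btf) fully determines which member was populated at btf_parse_kptr time.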
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
include/linux/bpf.h | 11 ++++++++++-
include/linux/btf.h | 2 --
kernel/bpf/btf.c | 34 +++++++++++++++++++++++++---------
kernel/bpf/helpers.c | 11 ++++++++---
kernel/bpf/syscall.c | 14 +++++++++++++-
5 files changed, 56 insertions(+), 16 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3a38db315f7f..756b85f0d0d3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -189,10 +189,19 @@ enum btf_field_type {
BPF_RB_NODE | BPF_RB_ROOT,
};
+typedef void (*btf_dtor_kfunc_t)(void *);
+typedef void (*btf_dtor_obj_drop)(void *, const struct btf_record *);
+
struct btf_field_kptr {
struct btf *btf;
struct module *module;
- btf_dtor_kfunc_t dtor;
+ union {
+ /* dtor used if btf_is_kernel(btf), otherwise the type
+ * is program-allocated and obj_drop is used
+ */
+ btf_dtor_kfunc_t dtor;
+ btf_dtor_obj_drop obj_drop;
+ };
u32 btf_id;
};
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 1bba0827e8c4..d53b10cc55f2 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -121,8 +121,6 @@ struct btf_struct_metas {
struct btf_struct_meta types[];
};
-typedef void (*btf_dtor_kfunc_t)(void *);
-
extern const struct file_operations btf_fops;
void btf_get(struct btf *btf);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 37779ceefd09..165a8ef038f5 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3551,12 +3551,17 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t,
return -EINVAL;
}
+extern void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
+
static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
struct btf_field_info *info)
{
struct module *mod = NULL;
const struct btf_type *t;
- struct btf *kernel_btf;
+ /* If a matching btf type is found in kernel or module BTFs, kptr_btf
+ * is that BTF, otherwise it's program BTF
+ */
+ struct btf *kptr_btf;
int ret;
s32 id;
@@ -3565,7 +3570,17 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
*/
t = btf_type_by_id(btf, info->kptr.type_id);
id = bpf_find_btf_id(__btf_name_by_offset(btf, t->name_off), BTF_INFO_KIND(t->info),
- &kernel_btf);
+ &kptr_btf);
+ if (id == -ENOENT && !btf_is_kernel(btf)) {
+ /* Type exists only in program BTF. Assume that it's a MEM_ALLOC
+ * kptr allocated via bpf_obj_new
+ */
+ field->kptr.dtor = (void *)&__bpf_obj_drop_impl;
+ id = info->kptr.type_id;
+ kptr_btf = (struct btf *)btf;
+ btf_get(kptr_btf);
+ goto found_dtor;
+ }
if (id < 0)
return id;
@@ -3582,20 +3597,20 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
* can be used as a referenced pointer and be stored in a map at
* the same time.
*/
- dtor_btf_id = btf_find_dtor_kfunc(kernel_btf, id);
+ dtor_btf_id = btf_find_dtor_kfunc(kptr_btf, id);
if (dtor_btf_id < 0) {
ret = dtor_btf_id;
goto end_btf;
}
- dtor_func = btf_type_by_id(kernel_btf, dtor_btf_id);
+ dtor_func = btf_type_by_id(kptr_btf, dtor_btf_id);
if (!dtor_func) {
ret = -ENOENT;
goto end_btf;
}
- if (btf_is_module(kernel_btf)) {
- mod = btf_try_get_module(kernel_btf);
+ if (btf_is_module(kptr_btf)) {
+ mod = btf_try_get_module(kptr_btf);
if (!mod) {
ret = -ENXIO;
goto end_btf;
@@ -3605,7 +3620,7 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
/* We already verified dtor_func to be btf_type_is_func
* in register_btf_id_dtor_kfuncs.
*/
- dtor_func_name = __btf_name_by_offset(kernel_btf, dtor_func->name_off);
+ dtor_func_name = __btf_name_by_offset(kptr_btf, dtor_func->name_off);
addr = kallsyms_lookup_name(dtor_func_name);
if (!addr) {
ret = -EINVAL;
@@ -3614,14 +3629,15 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
field->kptr.dtor = (void *)addr;
}
+found_dtor:
field->kptr.btf_id = id;
- field->kptr.btf = kernel_btf;
+ field->kptr.btf = kptr_btf;
field->kptr.module = mod;
return 0;
end_mod:
module_put(mod);
end_btf:
- btf_put(kernel_btf);
+ btf_put(kptr_btf);
return ret;
}
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index f9b7eeedce08..77d64b6951b9 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1896,14 +1896,19 @@ __bpf_kfunc void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
return p;
}
+void __bpf_obj_drop_impl(void *p, const struct btf_record *rec)
+{
+ if (rec)
+ bpf_obj_free_fields(rec, p);
+ bpf_mem_free(&bpf_global_ma, p);
+}
+
__bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
{
struct btf_struct_meta *meta = meta__ign;
void *p = p__alloc;
- if (meta)
- bpf_obj_free_fields(meta->record, p);
- bpf_mem_free(&bpf_global_ma, p);
+ __bpf_obj_drop_impl(p, meta ? meta->record : NULL);
}
static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head, bool tail)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index cc4b7684910c..0684febc447a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -659,8 +659,10 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
return;
fields = rec->fields;
for (i = 0; i < rec->cnt; i++) {
+ struct btf_struct_meta *pointee_struct_meta;
const struct btf_field *field = &fields[i];
void *field_ptr = obj + field->offset;
+ void *xchgd_field;
switch (fields[i].type) {
case BPF_SPIN_LOCK:
@@ -672,7 +674,17 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
WRITE_ONCE(*(u64 *)field_ptr, 0);
break;
case BPF_KPTR_REF:
- field->kptr.dtor((void *)xchg((unsigned long *)field_ptr, 0));
+ xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
+ if (!btf_is_kernel(field->kptr.btf)) {
+ pointee_struct_meta = btf_find_struct_meta(field->kptr.btf,
+ field->kptr.btf_id);
+ WARN_ON_ONCE(!pointee_struct_meta);
+ field->kptr.obj_drop(xchgd_field, pointee_struct_meta ?
+ pointee_struct_meta->record :
+ NULL);
+ } else {
+ field->kptr.dtor(xchgd_field);
+ }
break;
case BPF_LIST_HEAD:
if (WARN_ON_ONCE(rec->spin_lock_off < 0))
--
2.34.1
* Re: [PATCH v1 bpf-next 4/6] bpf: Support __kptr to local kptrs
From: Alexei Starovoitov @ 2023-03-10 20:07 UTC (permalink / raw)
To: Dave Marchevsky
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Kernel Team, tj
On Thu, Mar 09, 2023 at 10:01:09AM -0800, Dave Marchevsky wrote:
> If a PTR_TO_BTF_ID type comes from program BTF - not vmlinux or module
> BTF - it must have been allocated by bpf_obj_new and therefore must be
> free'd with bpf_obj_drop. Such a PTR_TO_BTF_ID is considered a "local
> kptr" and is tagged with MEM_ALLOC type tag by bpf_obj_new.
>
> This patch adds support for treating __kptr-tagged pointers to "local
> kptrs" as having an implicit bpf_obj_drop destructor for referenced kptr
> acquire / release semantics. Consider the following example:
>
> struct node_data {
> long key;
> long data;
> struct bpf_rb_node node;
> };
>
> struct map_value {
> struct node_data __kptr *node;
> };
>
> struct {
> __uint(type, BPF_MAP_TYPE_ARRAY);
> __type(key, int);
> __type(value, struct map_value);
> __uint(max_entries, 1);
> } some_nodes SEC(".maps");
>
> If struct node_data had a matching definition in kernel BTF, the verifier would
> expect a destructor for the type to be registered. Since struct node_data does
> not match any type in kernel BTF, the verifier knows that there is no kfunc
> that provides a PTR_TO_BTF_ID to this type, and that such a PTR_TO_BTF_ID can
> only come from bpf_obj_new. So instead of searching for a registered dtor,
> a bpf_obj_drop dtor can be assumed.
>
> This allows the runtime to properly destruct such kptrs in
> bpf_obj_free_fields, which enables maps to clean up map_vals w/ such
> kptrs when going away.
>
> Implementation notes:
> * "kernel_btf" variable is renamed to "kptr_btf" in btf_parse_kptr.
> Before this patch, the variable would only ever point to vmlinux or
> module BTFs, but now it can point to some program BTF for local kptr
> type. It's later used to populate the (btf, btf_id) pair in kptr btf
> field.
> * It's necessary to btf_get the program BTF when populating btf_field
> for local kptr. btf_record_free later does a btf_put.
> * Behavior for non-local referenced kptrs is not modified, as
> bpf_find_btf_id helper only searches vmlinux and module BTFs for
> matching BTF type. If such a type is found, btf_field_kptr's btf will
> pass btf_is_kernel check, and the associated release function is
> some one-argument dtor. If btf_is_kernel check fails, associated
> release function is two-arg bpf_obj_drop_impl. Before this patch
> only btf_field_kptr's w/ kernel or module BTFs were created.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
> include/linux/bpf.h | 11 ++++++++++-
> include/linux/btf.h | 2 --
> kernel/bpf/btf.c | 34 +++++++++++++++++++++++++---------
> kernel/bpf/helpers.c | 11 ++++++++---
> kernel/bpf/syscall.c | 14 +++++++++++++-
> 5 files changed, 56 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3a38db315f7f..756b85f0d0d3 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -189,10 +189,19 @@ enum btf_field_type {
> BPF_RB_NODE | BPF_RB_ROOT,
> };
>
> +typedef void (*btf_dtor_kfunc_t)(void *);
> +typedef void (*btf_dtor_obj_drop)(void *, const struct btf_record *);
> +
> struct btf_field_kptr {
> struct btf *btf;
> struct module *module;
> - btf_dtor_kfunc_t dtor;
> + union {
> + /* dtor used if btf_is_kernel(btf), otherwise the type
> + * is program-allocated and obj_drop is used
> + */
> + btf_dtor_kfunc_t dtor;
> + btf_dtor_obj_drop obj_drop;
> + };
> u32 btf_id;
> };
>
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index 1bba0827e8c4..d53b10cc55f2 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -121,8 +121,6 @@ struct btf_struct_metas {
> struct btf_struct_meta types[];
> };
>
> -typedef void (*btf_dtor_kfunc_t)(void *);
> -
> extern const struct file_operations btf_fops;
>
> void btf_get(struct btf *btf);
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 37779ceefd09..165a8ef038f5 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3551,12 +3551,17 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t,
> return -EINVAL;
> }
>
> +extern void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
> +
> static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
> struct btf_field_info *info)
> {
> struct module *mod = NULL;
> const struct btf_type *t;
> - struct btf *kernel_btf;
> + /* If a matching btf type is found in kernel or module BTFs, kptr_ref
> + * is that BTF, otherwise it's program BTF
> + */
> + struct btf *kptr_btf;
> int ret;
> s32 id;
>
> @@ -3565,7 +3570,17 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field,
> */
> t = btf_type_by_id(btf, info->kptr.type_id);
> id = bpf_find_btf_id(__btf_name_by_offset(btf, t->name_off), BTF_INFO_KIND(t->info),
> - &kernel_btf);
> + &kptr_btf);
> + if (id == -ENOENT && !btf_is_kernel(btf)) {
btf_is_kernel(btf) is confusing.
btf_parse()->btf_parse_struct_metas()->btf_parse_fields() is only called for user loaded BTFs.
We can do WARN_ON_ONCE(btf_is_kernel(btf)); here as a way to document it,
but checking it looks wrong.
[...]
* [PATCH v1 bpf-next 5/6] bpf: Allow local kptrs to be exchanged via bpf_kptr_xchg
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
The previous patch added necessary plumbing for verifier and runtime to
know what to do with non-kernel PTR_TO_BTF_IDs in map values, but didn't
provide any way to get such local kptrs into a map value. This patch
modifies verifier handling of bpf_kptr_xchg to allow MEM_ALLOC kptr
types.
check_reg_type is modified to accept MEM_ALLOC-flagged input to
bpf_kptr_xchg despite such types not being in btf_ptr_types. This could
have been done with a MAYBE_MEM_ALLOC equivalent to MAYBE_NULL, but
bpf_kptr_xchg is the only helper that I can foresee using
MAYBE_MEM_ALLOC, so keep it special-cased for now.
The verifier tags bpf_kptr_xchg retval MEM_ALLOC if and only if the BTF
associated with the retval is not kernel BTF.
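A hedged sketch of the flag handling in the two hunks of this patch (toy flag values, not the kernel's type-flag encoding): the argument side strips MEM_ALLOC before base-type matching only for bpf_kptr_xchg, and the return side re-adds it when the exchanged kptr's BTF is program BTF.

```c
#include <assert.h>

enum {
	TOY_PTR_TO_BTF_ID  = 1,
	TOY_MEM_ALLOC      = 1 << 8,
	TOY_FUNC_kptr_xchg = 42,
};

/* check_reg_type: strip MEM_ALLOC before matching against compatible
 * types, but only for bpf_kptr_xchg */
static unsigned int toy_arg_type_for_match(int func_id, unsigned int type)
{
	if (func_id == TOY_FUNC_kptr_xchg && (type & TOY_MEM_ALLOC))
		type &= ~TOY_MEM_ALLOC;
	return type;
}

/* check_helper_call: tag the retval MEM_ALLOC iff the kptr field's BTF
 * is not kernel BTF */
static unsigned int toy_kptr_xchg_retval_type(int btf_is_kernel)
{
	unsigned int t = TOY_PTR_TO_BTF_ID;

	if (!btf_is_kernel)
		t |= TOY_MEM_ALLOC; /* local kptr: must eventually be bpf_obj_drop'd */
	return t;
}
```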
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
kernel/bpf/verifier.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cdf1ba65821b..86291f44bb34 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7123,6 +7123,9 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
if (arg_type & PTR_MAYBE_NULL)
type &= ~PTR_MAYBE_NULL;
+ if (meta->func_id == BPF_FUNC_kptr_xchg && type & MEM_ALLOC)
+ type &= ~MEM_ALLOC;
+
for (i = 0; i < ARRAY_SIZE(compatible->types); i++) {
expected = compatible->types[i];
if (expected == NOT_INIT)
@@ -7185,7 +7188,8 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
break;
}
case PTR_TO_BTF_ID | MEM_ALLOC:
- if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock) {
+ if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock &&
+ meta->func_id != BPF_FUNC_kptr_xchg) {
verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
return -EFAULT;
}
@@ -9151,6 +9155,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
if (func_id == BPF_FUNC_kptr_xchg) {
ret_btf = meta.kptr_field->kptr.btf;
ret_btf_id = meta.kptr_field->kptr.btf_id;
+ if (!btf_is_kernel(ret_btf))
+ regs[BPF_REG_0].type |= MEM_ALLOC;
} else {
if (fn->ret_btf_id == BPF_PTR_POISON) {
verbose(env, "verifier internal error:");
--
2.34.1
* [PATCH v1 bpf-next 6/6] selftests/bpf: Add local kptr stashing test
From: Dave Marchevsky @ 2023-03-09 18:01 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
tj, Dave Marchevsky
Add a new selftest, local_kptr_stash, which uses bpf_kptr_xchg to stash
a bpf_obj_new-allocated object in a map.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
.../bpf/prog_tests/local_kptr_stash.c | 33 +++++++
.../selftests/bpf/progs/local_kptr_stash.c | 85 +++++++++++++++++++
2 files changed, 118 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c
create mode 100644 tools/testing/selftests/bpf/progs/local_kptr_stash.c
diff --git a/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c b/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c
new file mode 100644
index 000000000000..98353e602741
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "local_kptr_stash.skel.h"
+static void test_local_kptr_stash_simple(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ struct local_kptr_stash *skel;
+ int ret;
+
+ skel = local_kptr_stash__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "local_kptr_stash__open_and_load"))
+ return;
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.stash_rb_node), &opts);
+ ASSERT_OK(ret, "local_kptr_stash_add_nodes run");
+ ASSERT_OK(opts.retval, "local_kptr_stash_add_nodes retval");
+
+ local_kptr_stash__destroy(skel);
+}
+
+void test_local_kptr_stash_success(void)
+{
+ if (test__start_subtest("rbtree_add_nodes"))
+ test_local_kptr_stash_simple();
+}
diff --git a/tools/testing/selftests/bpf/progs/local_kptr_stash.c b/tools/testing/selftests/bpf/progs/local_kptr_stash.c
new file mode 100644
index 000000000000..df7b419f3dc3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/local_kptr_stash.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_experimental.h"
+
+struct node_data {
+ long key;
+ long data;
+ struct bpf_rb_node node;
+};
+
+struct map_value {
+ struct prog_test_ref_kfunc *not_kptr;
+ struct prog_test_ref_kfunc __kptr *val;
+ struct node_data __kptr *node;
+};
+
+/* This is necessary so that LLVM generates BTF for node_data struct
+ * If it's not included, a fwd reference for node_data will be generated but
+ * no struct. Example BTF of "node" field in map_value when not included:
+ *
+ * [10] PTR '(anon)' type_id=35
+ * [34] FWD 'node_data' fwd_kind=struct
+ * [35] TYPE_TAG 'kptr_ref' type_id=34
+ *
+ * (with no node_data struct defined)
+ * Had to do the same w/ bpf_kfunc_call_test_release below
+ */
+struct node_data *just_here_because_btf_bug;
+
+extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 1);
+} some_nodes SEC(".maps");
+
+SEC("tc")
+long stash_rb_node(void *ctx)
+{
+ struct map_value *mapval;
+ struct node_data *res;
+ int key = 0;
+
+ res = bpf_obj_new(typeof(*res));
+ if (!res)
+ return 1;
+ res->key = 42;
+
+ mapval = bpf_map_lookup_elem(&some_nodes, &key);
+ if (!mapval) {
+ bpf_obj_drop(res);
+ return 1;
+ }
+
+ res = bpf_kptr_xchg(&mapval->node, res);
+ if (res)
+ bpf_obj_drop(res);
+ return 0;
+}
+
+SEC("tc")
+long stash_test_ref_kfunc(void *ctx)
+{
+ struct prog_test_ref_kfunc *res;
+ struct map_value *mapval;
+ int key = 0;
+
+ mapval = bpf_map_lookup_elem(&some_nodes, &key);
+ if (!mapval)
+ return 1;
+
+ res = bpf_kptr_xchg(&mapval->val, NULL);
+ if (res)
+ bpf_kfunc_call_test_release(res);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.34.1
* Re: [PATCH v1 bpf-next 6/6] selftests/bpf: Add local kptr stashing test
2023-03-09 18:01 ` [PATCH v1 bpf-next 6/6] selftests/bpf: Add local kptr stashing test Dave Marchevsky
@ 2023-03-10 20:12 ` Alexei Starovoitov
0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2023-03-10 20:12 UTC (permalink / raw)
To: Dave Marchevsky
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Kernel Team, tj
On Thu, Mar 09, 2023 at 10:01:11AM -0800, Dave Marchevsky wrote:
> Add a new selftest, local_kptr_stash, which uses bpf_kptr_xchg to stash
> a bpf_obj_new-allocated object in a map.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> [...]
> +SEC("tc")
> +long stash_rb_node(void *ctx)
> +{
> + struct map_value *mapval;
> + struct node_data *res;
> + int key = 0;
> +
> + res = bpf_obj_new(typeof(*res));
> + if (!res)
> + return 1;
> + res->key = 42;
> +
> + mapval = bpf_map_lookup_elem(&some_nodes, &key);
> + if (!mapval) {
> + bpf_obj_drop(res);
> + return 1;
> + }
> +
> + res = bpf_kptr_xchg(&mapval->node, res);
> + if (res)
> + bpf_obj_drop(res);
Maybe add another tc prog with a 2nd bpf_prog_test_run_opts that does:
res = bpf_kptr_xchg(&mapval->node, NULL);
and bpf_obj_drop-s it for real?
The first stash_rb_node() can allocate two objs into key=0 and key=1;
the 2nd prog can bpf_kptr_xchg only one of them,
so we test both dtor on map free and explicit xchg+obj_drop.
wdyt?
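[Editor's sketch: one way the suggested second prog could look. This is hypothetical code, not part of the series — the name unstash_rb_node is invented, and it assumes the map_value and some_nodes definitions from local_kptr_stash.c above.]

```c
/* Hypothetical follow-up prog (not from the patch): take the stashed
 * node back out of the map and free it explicitly, so the test covers
 * both explicit xchg+obj_drop here and dtor-on-map-free for the node
 * left stashed at the other key.
 */
SEC("tc")
long unstash_rb_node(void *ctx)
{
	struct map_value *mapval;
	struct node_data *res;
	int key = 1;

	mapval = bpf_map_lookup_elem(&some_nodes, &key);
	if (!mapval)
		return 1;

	/* Swap NULL in: ownership of the stashed node transfers back
	 * to the program, which must then bpf_obj_drop it.
	 */
	res = bpf_kptr_xchg(&mapval->node, NULL);
	if (res)
		bpf_obj_drop(res);
	return 0;
}
```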
* Re: [PATCH v1 bpf-next 0/6] Support stashing local kptrs with bpf_kptr_xchg
2023-03-09 18:01 [PATCH v1 bpf-next 0/6] Support stashing local kptrs with bpf_kptr_xchg Dave Marchevsky
` (5 preceding siblings ...)
2023-03-09 18:01 ` [PATCH v1 bpf-next 6/6] selftests/bpf: Add local kptr stashing test Dave Marchevsky
@ 2023-03-10 20:30 ` patchwork-bot+netdevbpf
6 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-03-10 20:30 UTC (permalink / raw)
To: Dave Marchevsky; +Cc: bpf, ast, daniel, andrii, kernel-team, tj
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Thu, 9 Mar 2023 10:01:05 -0800 you wrote:
> Local kptrs are kptrs allocated via bpf_obj_new with a type specified in program
> BTF. A BPF program which creates a local kptr has exclusive control of the
> lifetime of the kptr, and, prior to terminating, must:
>
> * free the kptr via bpf_obj_drop
> * If the kptr is a {list,rbtree} node, add the node to a {list, rbtree},
> thereby passing control of the lifetime to the collection
>
> [...]
Here is the summary with links:
- [v1,bpf-next,1/6] bpf: verifier: Rename kernel_type_name helper to btf_type_name
https://git.kernel.org/bpf/bpf-next/c/b32a5dae44cc
- [v1,bpf-next,2/6] bpf: btf: Remove unused btf_field_info_type enum
https://git.kernel.org/bpf/bpf-next/c/a4aa38897b6a
- [v1,bpf-next,3/6] bpf: Change btf_record_find enum parameter to field_mask
https://git.kernel.org/bpf/bpf-next/c/74843b57ec70
- [v1,bpf-next,4/6] bpf: Support __kptr to local kptrs
(no matching commit)
- [v1,bpf-next,5/6] bpf: Allow local kptrs to be exchanged via bpf_kptr_xchg
(no matching commit)
- [v1,bpf-next,6/6] selftests/bpf: Add local kptr stashing test
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html