* [PATCH v1 bpf-next 0/4] Support bpf_kptr_xchg into local kptr
@ 2024-07-28 3:01 Amery Hung
From: Amery Hung @ 2024-07-28 3:01 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, ameryhung, Amery Hung
This series allows stashing kptrs into local kptrs. Currently, kptrs are
only allowed to be stashed into map values with bpf_kptr_xchg(). A
motivating use case of this series is to enable adding a referenced kptr to
a bpf_rbtree or bpf_list by using an allocated object as both the graph node
and the storage for the referenced kptr. For example, a bpf qdisc [0]
enqueuing a referenced kptr to a struct sk_buff into a bpf_list serving as a
fifo:
struct skb_node {
        struct sk_buff __kptr *skb;
        struct bpf_list_node node;
};

private(A) struct bpf_spin_lock fifo_lock;
private(A) struct bpf_list_head fifo __contains(skb_node, node);

/* In Qdisc_ops.enqueue */
struct skb_node *skbn;

skbn = bpf_obj_new(typeof(*skbn));
if (!skbn)
        goto drop;

/* skb is a referenced kptr to struct sk_buff acquired earlier
 * but not shown in this code snippet.
 */
skb = bpf_kptr_xchg(&skbn->skb, skb);
if (skb)
        /* should not happen; do something below releasing skb to
         * satisfy the verifier
         */
        ...

bpf_spin_lock(&fifo_lock);
bpf_list_push_back(&fifo, &skbn->node);
bpf_spin_unlock(&fifo_lock);
The implementation first searches for BPF_KPTR fields when generating
program BTF. Then, we teach the verifier that the destination argument of
bpf_kptr_xchg() can be a local kptr, and use the btf_record in program BTF
to check against the source argument.
This series was mostly developed by Dave, who kindly helped and sent me
the patchset. The selftests in bpf qdisc (WIP) rely on this series to
work.
[0] https://lore.kernel.org/netdev/20240714175130.4051012-10-amery.hung@bytedance.com/
Dave Marchevsky (4):
bpf: Search for kptrs in prog BTF structs
bpf: Rename ARG_PTR_TO_KPTR -> ARG_KPTR_XCHG_DEST
bpf: Support bpf_kptr_xchg into local kptr
selftests/bpf: Test bpf_kptr_xchg stashing into local kptr
include/linux/bpf.h | 2 +-
kernel/bpf/btf.c | 6 ++-
kernel/bpf/helpers.c | 2 +-
kernel/bpf/verifier.c | 47 ++++++++++++-------
.../selftests/bpf/progs/local_kptr_stash.c | 22 ++++++++-
5 files changed, 57 insertions(+), 22 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v1 bpf-next 1/4] bpf: Search for kptrs in prog BTF structs
From: Amery Hung @ 2024-07-28 3:01 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, ameryhung, Amery Hung
From: Dave Marchevsky <davemarchevsky@fb.com>
Currently btf_parse_fields is used in two places to create struct
btf_record's for structs: when looking at mapval type, and when looking
at any struct in program BTF. The former looks for kptr fields while the
latter does not. This patch modifies the btf_parse_fields call made when
looking at prog BTF struct types to search for kptrs as well.
Before this series there was no reason to search for kptrs in non-mapval
types: a referenced kptr needs some owner to guarantee resource cleanup,
and map values were the only owner that supported this. If a struct with
a kptr field were to have some non-kptr-aware owner, the kptr field
might not be properly cleaned up and result in resources leaking. Only
searching for kptr fields in mapval was a simple way to avoid this
problem.
In practice, though, searching for BPF_KPTR when populating
struct_meta_tab does not expose us to this risk, as struct_meta_tab is
only accessed through btf_find_struct_meta helper, and that helper is
only called in contexts where recognizing the kptr field is safe:
* PTR_TO_BTF_ID reg w/ MEM_ALLOC flag
* Such a reg is a local kptr and must be free'd via bpf_obj_drop,
which will correctly handle kptr field
* When handling specific kfuncs which either expect MEM_ALLOC input or
return MEM_ALLOC output (obj_{new,drop}, percpu_obj_{new,drop},
list+rbtree funcs, refcount_acquire)
* Will correctly handle kptr field for same reasons as above
* When looking at kptr pointee type
* Called by functions which implement "correct kptr resource
handling"
* In btf_check_and_fixup_fields
* Helper that ensures no ownership loops for lists and rbtrees,
doesn't care about kptr field existence
So we should be able to find BPF_KPTR fields in all prog BTF structs
without leaking resources.
Further patches in the series will build on this change to support
kptr_xchg into non-mapval local kptr. Without this change there would be
no kptr field found in such a type.
On a side note, when building program BTF, the refcount of program BTF
is now initialized before btf_parse_struct_metas() to prevent a
refcount_inc() on zero warning. This happens when BPF_KPTR is present
in program BTF: btf_parse_struct_metas() -> btf_parse_fields()
-> btf_parse_kptr() -> btf_get(). This should be okay as the program BTF
is not available yet outside btf_parse().
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
---
kernel/bpf/btf.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 95426d5b634e..7b8275e3e500 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5585,7 +5585,8 @@ btf_parse_struct_metas(struct bpf_verifier_log *log, struct btf *btf)
type = &tab->types[tab->cnt];
type->btf_id = i;
record = btf_parse_fields(btf, t, BPF_SPIN_LOCK | BPF_LIST_HEAD | BPF_LIST_NODE |
- BPF_RB_ROOT | BPF_RB_NODE | BPF_REFCOUNT, t->size);
+ BPF_RB_ROOT | BPF_RB_NODE | BPF_REFCOUNT |
+ BPF_KPTR, t->size);
/* The record cannot be unset, treat it as an error if so */
if (IS_ERR_OR_NULL(record)) {
ret = PTR_ERR_OR_ZERO(record) ?: -EFAULT;
@@ -5737,6 +5738,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
if (err)
goto errout;
+ refcount_set(&btf->refcnt, 1);
+
struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
if (IS_ERR(struct_meta_tab)) {
err = PTR_ERR(struct_meta_tab);
@@ -5759,7 +5762,6 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
goto errout_free;
btf_verifier_env_free(env);
- refcount_set(&btf->refcnt, 1);
return btf;
errout_meta:
--
2.20.1
* [PATCH v1 bpf-next 2/4] bpf: Rename ARG_PTR_TO_KPTR -> ARG_KPTR_XCHG_DEST
From: Amery Hung @ 2024-07-28 3:01 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, ameryhung, Amery Hung
From: Dave Marchevsky <davemarchevsky@fb.com>
ARG_PTR_TO_KPTR is currently only used by the bpf_kptr_xchg helper.
Although it limits reg types for that helper's first arg to
PTR_TO_MAP_VALUE, any arbitrary mapval won't do: further custom
verification logic ensures that the mapval reg being xchgd-into is
pointing to a kptr field. If this is not the case, it's not safe to xchg
into that reg's pointee.
Let's rename the bpf_arg_type to more accurately describe the fairly
specific expectations that this arg type encodes.
This is a nonfunctional change.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
---
include/linux/bpf.h | 2 +-
kernel/bpf/helpers.c | 2 +-
kernel/bpf/verifier.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7ad37cbdc815..f853e350c057 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -744,7 +744,7 @@ enum bpf_arg_type {
ARG_PTR_TO_STACK, /* pointer to stack */
ARG_PTR_TO_CONST_STR, /* pointer to a null terminated read-only string */
ARG_PTR_TO_TIMER, /* pointer to bpf_timer */
- ARG_PTR_TO_KPTR, /* pointer to referenced kptr */
+ ARG_KPTR_XCHG_DEST, /* pointer to destination that kptrs are bpf_kptr_xchg'd into */
ARG_PTR_TO_DYNPTR, /* pointer to bpf_dynptr. See bpf_type_flag for dynptr type */
__BPF_ARG_TYPE_MAX,
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index d02ae323996b..8ecd8dc95f16 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1636,7 +1636,7 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
.gpl_only = false,
.ret_type = RET_PTR_TO_BTF_ID_OR_NULL,
.ret_btf_id = BPF_PTR_POISON,
- .arg1_type = ARG_PTR_TO_KPTR,
+ .arg1_type = ARG_KPTR_XCHG_DEST,
.arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL | OBJ_RELEASE,
.arg2_btf_id = BPF_PTR_POISON,
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1f5302fb0957..9f2964b13b46 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8399,7 +8399,7 @@ static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
static const struct bpf_reg_types stack_ptr_types = { .types = { PTR_TO_STACK } };
static const struct bpf_reg_types const_str_ptr_types = { .types = { PTR_TO_MAP_VALUE } };
static const struct bpf_reg_types timer_types = { .types = { PTR_TO_MAP_VALUE } };
-static const struct bpf_reg_types kptr_types = { .types = { PTR_TO_MAP_VALUE } };
+static const struct bpf_reg_types kptr_xchg_dest_types = { .types = { PTR_TO_MAP_VALUE } };
static const struct bpf_reg_types dynptr_types = {
.types = {
PTR_TO_STACK,
@@ -8431,7 +8431,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_STACK] = &stack_ptr_types,
[ARG_PTR_TO_CONST_STR] = &const_str_ptr_types,
[ARG_PTR_TO_TIMER] = &timer_types,
- [ARG_PTR_TO_KPTR] = &kptr_types,
+ [ARG_KPTR_XCHG_DEST] = &kptr_xchg_dest_types,
[ARG_PTR_TO_DYNPTR] = &dynptr_types,
};
@@ -9031,7 +9031,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return err;
break;
}
- case ARG_PTR_TO_KPTR:
+ case ARG_KPTR_XCHG_DEST:
err = process_kptr_func(env, regno, meta);
if (err)
return err;
--
2.20.1
* [PATCH v1 bpf-next 3/4] bpf: Support bpf_kptr_xchg into local kptr
From: Amery Hung @ 2024-07-28 3:01 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, ameryhung, Amery Hung
From: Dave Marchevsky <davemarchevsky@fb.com>
Currently, users can only stash kptrs into map values with bpf_kptr_xchg().
This patch further supports stashing kptrs into local kptrs by adding local
kptr as a valid destination type.

When stashing into a local kptr, the btf_record in program BTF is used
instead of the btf_record in the map to search for the btf_field of the
local kptr.

The local kptr specific checks in check_reg_type() only apply when the
source argument of bpf_kptr_xchg() is a local kptr. Therefore, we make the
scope of the check explicit, as the destination can now also be a local kptr.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
---
kernel/bpf/verifier.c | 43 +++++++++++++++++++++++++++++--------------
1 file changed, 29 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9f2964b13b46..20094b35053a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7803,29 +7803,38 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
struct bpf_call_arg_meta *meta)
{
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
- struct bpf_map *map_ptr = reg->map_ptr;
struct btf_field *kptr_field;
+ struct bpf_map *map_ptr;
+ struct btf_record *rec;
u32 kptr_off;
+ if (type_is_ptr_alloc_obj(reg->type)) {
+ rec = reg_btf_record(reg);
+ } else { /* PTR_TO_MAP_VALUE */
+ map_ptr = reg->map_ptr;
+ if (!map_ptr->btf) {
+ verbose(env, "map '%s' has to have BTF in order to use bpf_kptr_xchg\n",
+ map_ptr->name);
+ return -EINVAL;
+ }
+ rec = map_ptr->record;
+ meta->map_ptr = map_ptr;
+ }
+
if (!tnum_is_const(reg->var_off)) {
verbose(env,
"R%d doesn't have constant offset. kptr has to be at the constant offset\n",
regno);
return -EINVAL;
}
- if (!map_ptr->btf) {
- verbose(env, "map '%s' has to have BTF in order to use bpf_kptr_xchg\n",
- map_ptr->name);
- return -EINVAL;
- }
- if (!btf_record_has_field(map_ptr->record, BPF_KPTR)) {
- verbose(env, "map '%s' has no valid kptr\n", map_ptr->name);
+
+ if (!btf_record_has_field(rec, BPF_KPTR)) {
+ verbose(env, "R%d has no valid kptr\n", regno);
return -EINVAL;
}
- meta->map_ptr = map_ptr;
kptr_off = reg->off + reg->var_off.value;
- kptr_field = btf_record_find(map_ptr->record, kptr_off, BPF_KPTR);
+ kptr_field = btf_record_find(rec, kptr_off, BPF_KPTR);
if (!kptr_field) {
verbose(env, "off=%d doesn't point to kptr\n", kptr_off);
return -EACCES;
@@ -8399,7 +8408,12 @@ static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
static const struct bpf_reg_types stack_ptr_types = { .types = { PTR_TO_STACK } };
static const struct bpf_reg_types const_str_ptr_types = { .types = { PTR_TO_MAP_VALUE } };
static const struct bpf_reg_types timer_types = { .types = { PTR_TO_MAP_VALUE } };
-static const struct bpf_reg_types kptr_xchg_dest_types = { .types = { PTR_TO_MAP_VALUE } };
+static const struct bpf_reg_types kptr_xchg_dest_types = {
+ .types = {
+ PTR_TO_MAP_VALUE,
+ PTR_TO_BTF_ID | MEM_ALLOC
+ }
+};
static const struct bpf_reg_types dynptr_types = {
.types = {
PTR_TO_STACK,
@@ -8470,7 +8484,8 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
if (base_type(arg_type) == ARG_PTR_TO_MEM)
type &= ~DYNPTR_TYPE_FLAG_MASK;
- if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type)) {
+ /* local kptr types are allowed as the source argument of bpf_kptr_xchg */
+ if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
type &= ~MEM_ALLOC;
type &= ~MEM_PERCPU;
}
@@ -8563,7 +8578,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
return -EFAULT;
}
- if (meta->func_id == BPF_FUNC_kptr_xchg) {
+ if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
return -EACCES;
}
@@ -8874,7 +8889,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
meta->release_regno = regno;
}
- if (reg->ref_obj_id) {
+ if (reg->ref_obj_id && base_type(arg_type) != ARG_KPTR_XCHG_DEST) {
if (meta->ref_obj_id) {
verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
regno, reg->ref_obj_id,
--
2.20.1
* [PATCH v1 bpf-next 4/4] selftests/bpf: Test bpf_kptr_xchg stashing into local kptr
From: Amery Hung @ 2024-07-28 3:01 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, ameryhung, Amery Hung
From: Dave Marchevsky <davemarchevsky@fb.com>
Test stashing a referenced kptr into a local kptr.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
---
.../selftests/bpf/progs/local_kptr_stash.c | 22 +++++++++++++++++--
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/local_kptr_stash.c b/tools/testing/selftests/bpf/progs/local_kptr_stash.c
index 75043ffc5dad..a0d784e8a05b 100644
--- a/tools/testing/selftests/bpf/progs/local_kptr_stash.c
+++ b/tools/testing/selftests/bpf/progs/local_kptr_stash.c
@@ -11,6 +11,7 @@
struct node_data {
long key;
long data;
+ struct prog_test_ref_kfunc __kptr *stashed_in_node;
struct bpf_rb_node node;
};
@@ -85,18 +86,35 @@ static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
static int create_and_stash(int idx, int val)
{
+ struct prog_test_ref_kfunc *inner;
struct map_value *mapval;
struct node_data *res;
+ unsigned long dummy;
mapval = bpf_map_lookup_elem(&some_nodes, &idx);
if (!mapval)
return 1;
+ dummy = 0;
+ inner = bpf_kfunc_call_test_acquire(&dummy);
+ if (!inner)
+ return 2;
+
res = bpf_obj_new(typeof(*res));
- if (!res)
- return 1;
+ if (!res) {
+ bpf_kfunc_call_test_release(inner);
+ return 3;
+ }
res->key = val;
+ inner = bpf_kptr_xchg(&res->stashed_in_node, inner);
+ if (inner) {
+ /* Should never happen, we just obj_new'd res */
+ bpf_kfunc_call_test_release(inner);
+ bpf_obj_drop(res);
+ return 4;
+ }
+
res = bpf_kptr_xchg(&mapval->node, res);
if (res)
bpf_obj_drop(res);
--
2.20.1
* Re: [PATCH v1 bpf-next 3/4] bpf: Support bpf_kptr_xchg into local kptr
From: Martin KaFai Lau @ 2024-07-31 23:38 UTC (permalink / raw)
To: Amery Hung
Cc: bpf, daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, Amery Hung
On 7/27/24 8:01 PM, Amery Hung wrote:
> @@ -8399,7 +8408,12 @@ static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
> static const struct bpf_reg_types stack_ptr_types = { .types = { PTR_TO_STACK } };
> static const struct bpf_reg_types const_str_ptr_types = { .types = { PTR_TO_MAP_VALUE } };
> static const struct bpf_reg_types timer_types = { .types = { PTR_TO_MAP_VALUE } };
> -static const struct bpf_reg_types kptr_xchg_dest_types = { .types = { PTR_TO_MAP_VALUE } };
> +static const struct bpf_reg_types kptr_xchg_dest_types = {
> + .types = {
> + PTR_TO_MAP_VALUE,
> + PTR_TO_BTF_ID | MEM_ALLOC
> + }
> +};
> static const struct bpf_reg_types dynptr_types = {
> .types = {
> PTR_TO_STACK,
> @@ -8470,7 +8484,8 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
> if (base_type(arg_type) == ARG_PTR_TO_MEM)
> type &= ~DYNPTR_TYPE_FLAG_MASK;
>
> - if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type)) {
> + /* local kptr types are allowed as the source argument of bpf_kptr_xchg */
> + if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
> type &= ~MEM_ALLOC;
> type &= ~MEM_PERCPU;
> }
> @@ -8563,7 +8578,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
> verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
> return -EFAULT;
> }
> - if (meta->func_id == BPF_FUNC_kptr_xchg) {
> + if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
I think this BPF_REG_2 check is because the dst (BPF_REG_1) can be MEM_ALLOC
now. Just want to ensure I understand it correctly.
One nit. Please also update the document for bpf_kptr_xchg in uapi/linux/bpf.h:
==== >8888 ====
* void *bpf_kptr_xchg(void *map_value, void *ptr)
* Description
* Exchange kptr at pointer *map_value* with *ptr*, and return the
==== 8888< ====
* Re: [PATCH v1 bpf-next 0/4] Support bpf_kptr_xchg into local kptr
From: Martin KaFai Lau @ 2024-08-01 0:11 UTC (permalink / raw)
To: Amery Hung
Cc: bpf, daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, Amery Hung
On 7/27/24 8:01 PM, Amery Hung wrote:
> This series allows stashing kptr into local kptr. Currently, kptrs are
> only allowed to be stashed into map value with bpf_kptr_xchg(). A
> motivating use case of this series is to enable adding referenced kptr to
> bpf_rbtree or bpf_list by using allocated object as graph node and the
> storage of referenced kptr. For example, a bpf qdisc [0] enqueuing a
> referenced kptr to a struct sk_buff* to a bpf_list serving as a fifo:
>
> struct skb_node {
> struct sk_buff __kptr *skb;
> struct bpf_list_node node;
> };
>
> private(A) struct bpf_spin_lock fifo_lock;
> private(A) struct bpf_list_head fifo __contains(skb_node, node);
>
> /* In Qdisc_ops.enqueue */
> struct skb_node *skbn;
>
> skbn = bpf_obj_new(typeof(*skbn));
> if (!skbn)
> goto drop;
>
> /* skb is a referenced kptr to struct sk_buff acquired earilier
> * but not shown in this code snippet.
> */
> skb = bpf_kptr_xchg(&skbn->skb, skb);
> if (skb)
> /* should not happen; do something below releasing skb to
> * satisfy the verifier */
> ...
>
> bpf_spin_lock(&fifo_lock);
> bpf_list_push_back(&fifo, &skbn->node);
> bpf_spin_unlock(&fifo_lock);
>
> The implementation first searches for BPF_KPTR when generating program
> BTF. Then, we teach the verifier that the detination argument of
> bpf_kptr_xchg() can be local kptr, and use the btf_record in program BTF
> to check against the source argument.
>
> This series is mostly developed by Dave, who kindly helped and sent me
> the patchset. The selftests in bpf qdisc (WIP) relies on this series to
> work.
The set lgtm. With the doc fix in the bpf_kptr_xchg, you can carry my Ack.
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
* Re: [PATCH v1 bpf-next 3/4] bpf: Support bpf_kptr_xchg into local kptr
From: Amery Hung @ 2024-08-01 4:07 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: bpf, daniel, andrii, alexei.starovoitov, martin.lau, sinquersw,
davemarchevsky, Amery Hung
On Wed, Jul 31, 2024 at 4:38 PM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 7/27/24 8:01 PM, Amery Hung wrote:
> > @@ -8399,7 +8408,12 @@ static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
> > static const struct bpf_reg_types stack_ptr_types = { .types = { PTR_TO_STACK } };
> > static const struct bpf_reg_types const_str_ptr_types = { .types = { PTR_TO_MAP_VALUE } };
> > static const struct bpf_reg_types timer_types = { .types = { PTR_TO_MAP_VALUE } };
> > -static const struct bpf_reg_types kptr_xchg_dest_types = { .types = { PTR_TO_MAP_VALUE } };
> > +static const struct bpf_reg_types kptr_xchg_dest_types = {
> > + .types = {
> > + PTR_TO_MAP_VALUE,
> > + PTR_TO_BTF_ID | MEM_ALLOC
> > + }
> > +};
> > static const struct bpf_reg_types dynptr_types = {
> > .types = {
> > PTR_TO_STACK,
> > @@ -8470,7 +8484,8 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
> > if (base_type(arg_type) == ARG_PTR_TO_MEM)
> > type &= ~DYNPTR_TYPE_FLAG_MASK;
> >
> > - if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type)) {
> > + /* local kptr types are allowed as the source argument of bpf_kptr_xchg */
> > + if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
> > type &= ~MEM_ALLOC;
> > type &= ~MEM_PERCPU;
> > }
> > @@ -8563,7 +8578,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
> > verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
> > return -EFAULT;
> > }
> > - if (meta->func_id == BPF_FUNC_kptr_xchg) {
> > + if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
>
> I think this BPF_REG_2 check is because the dst (BPF_REG_1) can be MEM_ALLOC
> now. Just want to ensure I understand it correctly.
>
Right. I can add a comment for this too if that helps:
/* Check if local kptr in src arg matches kptr in dst arg */
> One nit. Please also update the document for bpf_kptr_xchg in uapi/linux/bpf.h:
>
> ==== >8888 ====
> * void *bpf_kptr_xchg(void *map_value, void *ptr)
> * Description
> * Exchange kptr at pointer *map_value* with *ptr*, and return the
> ==== 8888< ====
>
Got it. I will change the first argument from "map_value" to "dst" and
the description to reflect what this series does.
Thanks,
Amery