* [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head.
@ 2024-05-01 20:47 Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 1/7] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
` (6 more replies)
0 siblings, 7 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Some types, such as kptr, bpf_rb_root, and bpf_list_head, are treated in
a special way. Previously, these types could not be the type of a field
in a struct type used as the type of a global variable, nor the type of
a field in a struct type used as the type of a field in the value type
of a map. They could not even be the type of array elements. In other
words, they could only be declared as global variables themselves or as
direct fields in the value type of a map.
This patch set enables the use of these special types in arrays and
struct fields, providing more flexibility. It recursively examines the
types of global variables and the value types of maps, walking into
array and struct types, to identify these special types and generate
field information for them.
For example,
...
struct task_struct __kptr *ptr[3];
...
it will create 3 instances of "struct btf_field" in the "btf_record" of
the data section.
[...,
btf_field(offset=0x100, type=BPF_KPTR_REF),
btf_field(offset=0x108, type=BPF_KPTR_REF),
btf_field(offset=0x110, type=BPF_KPTR_REF),
...
]
One record is created for each of the three elements. The three records
are identical except for their offsets.
Another example is
...
struct A {
...
struct task_struct __kptr *task;
struct bpf_rb_root root;
...
};
struct A foo[2];
it will create 4 records.
[...,
btf_field(offset=0x7100, type=BPF_KPTR_REF),
btf_field(offset=0x7108, type=BPF_RB_ROOT),
btf_field(offset=0x7200, type=BPF_KPTR_REF),
btf_field(offset=0x7208, type=BPF_RB_ROOT),
...
]
Assuming that the size of an element (struct A) is 0x100 and that the
"task" field of "foo[0]" happens to land at 0x7100, the record list
includes two kptr records at 0x7100 and 0x7200 and two rbtree root
records at 0x7108 and 0x7208; the records for "foo[1]" are simply the
records for "foo[0]" shifted by the element size, 0x100.
All of this field information is flattened (for struct types) and
repeated (for arrays).
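As a concrete illustration of what becomes possible, a BPF program can
now declare something like the following (a sketch only; the type and
variable names are made up for this example, and private() is the
section helper macro used by the selftests in this series):

  #define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))

  struct cpumask_holder {
          struct bpf_cpumask __kptr *mask;
  };

  /* a kptr array as a global variable */
  private(MASKS) struct bpf_cpumask __kptr *mask_array[2];
  /* kptrs reached through an array of nested structs */
  private(MASKS) struct cpumask_holder holders[2];

Each slot can then be exchanged with bpf_kptr_xchg(&mask_array[i], ...)
or bpf_kptr_xchg(&holders[i].mask, ...) just like a plain global kptr.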
---
Changes from v2:
- Support fields in nested struct type.
- Remove nelems and duplicate field information with offset
adjustments for arrays.
Changes from v1:
- Move the check of element alignment out of btf_field_cmp() to
btf_record_find().
- Change the order of the previous patch 4 "bpf:
check_map_kptr_access() compute the offset from the reg state" as
the patch 7 now.
- Reject BPF_RB_NODE and BPF_LIST_NODE with nelems > 1.
- Rephrase the commit log of the patch "bpf: check_map_access() with
the knowledge of arrays" to clarify the alignment on elements.
v2: https://lore.kernel.org/all/20240412210814.603377-1-thinker.li@gmail.com/
v1: https://lore.kernel.org/bpf/20240410004150.2917641-1-thinker.li@gmail.com/
Kui-Feng Lee (7):
bpf: Remove unnecessary checks on the offset of btf_field.
bpf: Remove unnecessary call to btf_field_type_size().
bpf: create repeated fields for arrays.
bpf: look into the types of the fields of a struct type recursively.
selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
selftests/bpf: Test global bpf_rb_root arrays and fields in nested
struct types.
selftests/bpf: Test global bpf_list_head arrays.
kernel/bpf/btf.c | 161 +++++++++++++++++-
kernel/bpf/verifier.c | 4 +-
.../selftests/bpf/prog_tests/cpumask.c | 5 +
.../selftests/bpf/prog_tests/linked_list.c | 12 ++
.../testing/selftests/bpf/prog_tests/rbtree.c | 47 +++++
.../selftests/bpf/progs/cpumask_success.c | 133 +++++++++++++++
.../testing/selftests/bpf/progs/linked_list.c | 42 +++++
tools/testing/selftests/bpf/progs/rbtree.c | 77 +++++++++
8 files changed, 473 insertions(+), 8 deletions(-)
--
2.34.1
* [PATCH bpf-next v3 1/7] bpf: Remove unnecessary checks on the offset of btf_field.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 2/7] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
` (5 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman
reg_find_field_offset() always returns a btf_field whose offset matches
the requested value, so checking the offset of the returned btf_field
again is unnecessary.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5d42db05315e..86d20433c10d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11638,7 +11638,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
node_off = reg->off + reg->var_off.value;
field = reg_find_field_offset(reg, node_off, node_field_type);
- if (!field || field->offset != node_off) {
+ if (!field) {
verbose(env, "%s not found at offset=%u\n", node_type_name, node_off);
return -EINVAL;
}
--
2.34.1
* [PATCH bpf-next v3 2/7] bpf: Remove unnecessary call to btf_field_type_size().
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 1/7] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays Kui-Feng Lee
` (4 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman
field->size has already been initialized by btf_parse_fields() with the
value returned by btf_field_type_size(). Use it instead of calling
btf_field_type_size() again.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 2 +-
kernel/bpf/verifier.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 8291fbfd27b1..d4c3342e2027 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6692,7 +6692,7 @@ int btf_struct_access(struct bpf_verifier_log *log,
for (i = 0; i < rec->cnt; i++) {
struct btf_field *field = &rec->fields[i];
u32 offset = field->offset;
- if (off < offset + btf_field_type_size(field->type) && offset < off + size) {
+ if (off < offset + field->size && offset < off + size) {
bpf_log(log,
"direct access to %s is disallowed\n",
btf_field_type_name(field->type));
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 86d20433c10d..43144a6be45c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5448,7 +5448,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
* this program. To check that [x1, x2) overlaps with [y1, y2),
* it is sufficient to check x1 < y2 && y1 < x2.
*/
- if (reg->smin_value + off < p + btf_field_type_size(field->type) &&
+ if (reg->smin_value + off < p + field->size &&
p < reg->umax_value + off + size) {
switch (field->type) {
case BPF_KPTR_UNREF:
--
2.34.1
* [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 1/7] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 2/7] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-02 17:20 ` Eduard Zingerman
2024-05-01 20:47 ` [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
` (3 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
The verifier uses field information for certain special types, such as
kptr, rbtree root, and list head, which are treated differently from
ordinary fields. However, these types were not previously supported in
arrays. This patch examines array types and, if the element type is one
of the special types, duplicates the field information as many times as
the (flattened) array has elements, adjusting the offset of each copy.
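As an illustration (the declaration below is made up for this example),
a global variable declared as

  struct task_struct __kptr *ptrs[2][3];

is flattened to nelems = 6 with an element size of 8 bytes. One field
record is created for the element type and then repeated five more
times, producing kptr records at offsets off + 0x0, off + 0x8, ...,
off + 0x28, where off is the offset of the variable in its data section.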
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 81 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 76 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index d4c3342e2027..4a78cd28fab0 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3493,6 +3493,41 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
#undef field_mask_test_name
+/* Repeat a field for a specified number of times.
+ *
+ * Copy and repeat a field for repeat_cnt
+ * times. The field is repeated by adding the offset of each field
+ * with
+ * (i + 1) * elem_size
+ * where i is the repeat index and elem_size is the size of the element.
+ */
+static int btf_repeat_field(struct btf_field_info *info, u32 field,
+ u32 repeat_cnt, u32 elem_size)
+{
+ u32 i;
+ u32 cur;
+
+ /* Ensure not repeating fields that should not be repeated. */
+ switch (info[field].type) {
+ case BPF_KPTR_UNREF:
+ case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
+ case BPF_LIST_HEAD:
+ case BPF_RB_ROOT:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ cur = field + 1;
+ for (i = 0; i < repeat_cnt; i++) {
+ memcpy(&info[cur], &info[field], sizeof(info[0]));
+ info[cur++].off += (i + 1) * elem_size;
+ }
+
+ return 0;
+}
+
static int btf_find_struct_field(const struct btf *btf,
const struct btf_type *t, u32 field_mask,
struct btf_field_info *info, int info_cnt)
@@ -3505,6 +3540,19 @@ static int btf_find_struct_field(const struct btf *btf,
for_each_member(i, t, member) {
const struct btf_type *member_type = btf_type_by_id(btf,
member->type);
+ const struct btf_array *array;
+ u32 j, nelems = 1;
+
+ /* Walk into array types to find the element type and the
+ * number of elements in the (flattened) array.
+ */
+ for (j = 0; j < MAX_RESOLVE_DEPTH && btf_type_is_array(member_type); j++) {
+ array = btf_array(member_type);
+ nelems *= array->nelems;
+ member_type = btf_type_by_id(btf, array->type);
+ }
+ if (nelems == 0)
+ continue;
field_type = btf_get_field_type(__btf_name_by_offset(btf, member_type->name_off),
field_mask, &seen_mask, &align, &sz);
@@ -3556,9 +3604,14 @@ static int btf_find_struct_field(const struct btf *btf,
if (ret == BTF_FIELD_IGNORE)
continue;
- if (idx >= info_cnt)
+ if (idx + nelems > info_cnt)
return -E2BIG;
- ++idx;
+ if (nelems > 1) {
+ ret = btf_repeat_field(info, idx, nelems - 1, sz);
+ if (ret < 0)
+ return ret;
+ }
+ idx += nelems;
}
return idx;
}
@@ -3575,6 +3628,19 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
for_each_vsi(i, t, vsi) {
const struct btf_type *var = btf_type_by_id(btf, vsi->type);
const struct btf_type *var_type = btf_type_by_id(btf, var->type);
+ const struct btf_array *array;
+ u32 j, nelems = 1;
+
+ /* Walk into array types to find the element type and the
+ * number of elements in the (flattened) array.
+ */
+ for (j = 0; j < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); j++) {
+ array = btf_array(var_type);
+ nelems *= array->nelems;
+ var_type = btf_type_by_id(btf, array->type);
+ }
+ if (nelems == 0)
+ continue;
field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
field_mask, &seen_mask, &align, &sz);
@@ -3584,7 +3650,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
return field_type;
off = vsi->offset;
- if (vsi->size != sz)
+ if (vsi->size != sz * nelems)
continue;
if (off % align)
continue;
@@ -3624,9 +3690,14 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
if (ret == BTF_FIELD_IGNORE)
continue;
- if (idx >= info_cnt)
+ if (idx + nelems > info_cnt)
return -E2BIG;
- ++idx;
+ if (nelems > 1) {
+ ret = btf_repeat_field(info, idx, nelems - 1, sz);
+ if (ret < 0)
+ return ret;
+ }
+ idx += nelems;
}
return idx;
}
--
2.34.1
* [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (2 preceding siblings ...)
2024-05-01 20:47 ` [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-02 19:34 ` Eduard Zingerman
2024-05-01 20:47 ` [PATCH bpf-next v3 5/7] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
` (2 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
The verifier keeps field information for certain special types, such as
kptr, rbtree root, and list head, which are handled differently from
ordinary fields. However, the types of the fields of a struct-typed
variable were not previously examined, so field information records were
not generated for the kptrs, rbtree roots, and list heads that are not
located in the outermost struct type of a variable.
For example,
struct A {
struct task_struct __kptr * task;
};
struct B {
struct A mem_a;
};
struct B var_b;
Previously, "struct A" was not examined, so no field information was
generated for the kptr in "struct A" when declaring "var_b".
This patch enables BPF programs to declare fields of these special types
in a struct type that is not the direct type of a variable, as well as
in a struct type that is the type of a field in the value type of a map.
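With this change, such a nested kptr can be used in the same way as a
top-level kptr variable. A minimal sketch, reusing the declarations
above and assuming the task kfuncs (bpf_task_acquire() and
bpf_task_release()), that "var_b" is placed where global kptrs are
allowed (e.g. a private() data section as in the selftests), and the
usual vmlinux.h/libbpf includes and kfunc declarations:

  SEC("tp_btf/task_newtask")
  int BPF_PROG(store_task, struct task_struct *task, u64 clone_flags)
  {
          struct task_struct *acquired, *old;

          acquired = bpf_task_acquire(task);
          if (!acquired)
                  return 0;
          /* exchange the kptr that lives in the nested struct field */
          old = bpf_kptr_xchg(&var_b.mem_a.task, acquired);
          if (old)
                  bpf_task_release(old);
          return 0;
  }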
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 118 +++++++++++++++++++++++++++++++++++++++--------
1 file changed, 98 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4a78cd28fab0..2ceff77b7e71 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3493,41 +3493,83 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
#undef field_mask_test_name
-/* Repeat a field for a specified number of times.
+/* Repeat a number of fields for a specified number of times.
*
- * Copy and repeat a field for repeat_cnt
- * times. The field is repeated by adding the offset of each field
+ * Copy the fields starting from first_field and repeat them for repeat_cnt
+ * times. The fields are repeated by adding the offset of each field
* with
* (i + 1) * elem_size
* where i is the repeat index and elem_size is the size of the element.
*/
-static int btf_repeat_field(struct btf_field_info *info, u32 field,
- u32 repeat_cnt, u32 elem_size)
+static int btf_repeat_fields(struct btf_field_info *info, u32 first_field,
+ u32 field_cnt, u32 repeat_cnt, u32 elem_size)
{
- u32 i;
+ u32 i, j;
u32 cur;
/* Ensure not repeating fields that should not be repeated. */
- switch (info[field].type) {
- case BPF_KPTR_UNREF:
- case BPF_KPTR_REF:
- case BPF_KPTR_PERCPU:
- case BPF_LIST_HEAD:
- case BPF_RB_ROOT:
- break;
- default:
- return -EINVAL;
+ for (i = 0; i < field_cnt; i++) {
+ switch (info[first_field + i].type) {
+ case BPF_KPTR_UNREF:
+ case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
+ case BPF_LIST_HEAD:
+ case BPF_RB_ROOT:
+ break;
+ default:
+ return -EINVAL;
+ }
}
- cur = field + 1;
+ cur = first_field + field_cnt;
for (i = 0; i < repeat_cnt; i++) {
- memcpy(&info[cur], &info[field], sizeof(info[0]));
- info[cur++].off += (i + 1) * elem_size;
+ memcpy(&info[cur], &info[first_field], field_cnt * sizeof(info[0]));
+ for (j = 0; j < field_cnt; j++)
+ info[cur++].off += (i + 1) * elem_size;
}
return 0;
}
+static int btf_find_struct_field(const struct btf *btf,
+ const struct btf_type *t, u32 field_mask,
+ struct btf_field_info *info, int info_cnt);
+
+/* Find special fields in the struct type of a field.
+ *
+ * This function is used to find fields of special types that is not a
+ * global variable or a direct field of a struct type. It also handles the
+ * repetition if it is the element type of an array.
+ */
+static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
+ u32 off, u32 nelems,
+ u32 field_mask, struct btf_field_info *info,
+ int info_cnt)
+{
+ int ret, err, i;
+
+ ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+
+ if (ret <= 0)
+ return ret;
+
+ /* Shift the offsets of the nested struct fields to the offsets
+ * related to the container.
+ */
+ for (i = 0; i < ret; i++)
+ info[i].off += off;
+
+ if (nelems > 1) {
+ err = btf_repeat_fields(info, 0, ret, nelems - 1, t->size);
+ if (err == 0)
+ ret *= nelems;
+ else
+ ret = err;
+ }
+
+ return ret;
+}
+
static int btf_find_struct_field(const struct btf *btf,
const struct btf_type *t, u32 field_mask,
struct btf_field_info *info, int info_cnt)
@@ -3556,6 +3598,27 @@ static int btf_find_struct_field(const struct btf *btf,
field_type = btf_get_field_type(__btf_name_by_offset(btf, member_type->name_off),
field_mask, &seen_mask, &align, &sz);
+ /* Look into fields of struct types */
+ if ((field_type == BPF_KPTR_REF || !field_type) &&
+ __btf_type_is_struct(member_type)) {
+ /* For field_type == BPF_KPTR_REF, it is not
+ * necessary a kptr type. It can also be other
+ * types not special types handled here. However,
+ * it can not be a struct type if it is a kptr.
+ */
+ off = __btf_member_bit_offset(t, member);
+ if (off % 8)
+ /* valid C code cannot generate such BTF */
+ return -EINVAL;
+ off /= 8;
+ ret = btf_find_nested_struct(btf, member_type, off, nelems, field_mask,
+ &info[idx], info_cnt - idx);
+ if (ret < 0)
+ return ret;
+ idx += ret;
+ continue;
+ }
+
if (field_type == 0)
continue;
if (field_type < 0)
@@ -3607,7 +3670,7 @@ static int btf_find_struct_field(const struct btf *btf,
if (idx + nelems > info_cnt)
return -E2BIG;
if (nelems > 1) {
- ret = btf_repeat_field(info, idx, nelems - 1, sz);
+ ret = btf_repeat_fields(info, idx, 1, nelems - 1, sz);
if (ret < 0)
return ret;
}
@@ -3644,6 +3707,21 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
field_mask, &seen_mask, &align, &sz);
+ /* Look into variables of struct types */
+ if ((field_type == BPF_KPTR_REF || !field_type) &&
+ __btf_type_is_struct(var_type)) {
+ sz = var_type->size;
+ if (vsi->size != sz * nelems)
+ continue;
+ off = vsi->offset;
+ ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
+ &info[idx], info_cnt - idx);
+ if (ret < 0)
+ return ret;
+ idx += ret;
+ continue;
+ }
+
if (field_type == 0)
continue;
if (field_type < 0)
@@ -3693,7 +3771,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
if (idx + nelems > info_cnt)
return -E2BIG;
if (nelems > 1) {
- ret = btf_repeat_field(info, idx, nelems - 1, sz);
+ ret = btf_repeat_fields(info, idx, 1, nelems - 1, sz);
if (ret < 0)
return ret;
}
--
2.34.1
* [PATCH bpf-next v3 5/7] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (3 preceding siblings ...)
2024-05-01 20:47 ` [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 6/7] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 7/7] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
6 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure that BPF programs can declare global kptr arrays and kptr
fields in struct types that are the type of a global variable or the
type of a nested descendant field in a global variable.
An array with only one element is a special case: its element is treated
just like a non-array kptr field. Nested arrays are also tested to
ensure they are handled properly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../selftests/bpf/prog_tests/cpumask.c | 5 +
.../selftests/bpf/progs/cpumask_success.c | 133 ++++++++++++++++++
2 files changed, 138 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c
index ecf89df78109..2570bd4b0cb2 100644
--- a/tools/testing/selftests/bpf/prog_tests/cpumask.c
+++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c
@@ -18,6 +18,11 @@ static const char * const cpumask_success_testcases[] = {
"test_insert_leave",
"test_insert_remove_release",
"test_global_mask_rcu",
+ "test_global_mask_array_one_rcu",
+ "test_global_mask_array_rcu",
+ "test_global_mask_array_l2_rcu",
+ "test_global_mask_nested_rcu",
+ "test_global_mask_nested_deep_rcu",
"test_cpumask_weight",
};
diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c
index 7a1e64c6c065..0b6383fa9958 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_success.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_success.c
@@ -12,6 +12,25 @@ char _license[] SEC("license") = "GPL";
int pid, nr_cpus;
+struct kptr_nested {
+ struct bpf_cpumask __kptr * mask;
+};
+
+struct kptr_nested_mid {
+ int dummy;
+ struct kptr_nested m;
+};
+
+struct kptr_nested_deep {
+ struct kptr_nested_mid ptrs[2];
+};
+
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1];
+private(MASK) static struct kptr_nested global_mask_nested[2];
+private(MASK) static struct kptr_nested_deep global_mask_nested_deep;
+
static bool is_test_task(void)
{
int cur_pid = bpf_get_current_pid_tgid() >> 32;
@@ -460,6 +479,120 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
return 0;
}
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_one_rcu, struct task_struct *task, u64 clone_flags)
+{
+ struct bpf_cpumask *local, *prev;
+
+ if (!is_test_task())
+ return 0;
+
+ /* Kptr arrays with one element are special cased, being treated
+ * just like a single pointer.
+ */
+
+ local = create_cpumask();
+ if (!local)
+ return 0;
+
+ prev = bpf_kptr_xchg(&global_mask_array_one[0], local);
+ if (prev) {
+ bpf_cpumask_release(prev);
+ err = 3;
+ return 0;
+ }
+
+ bpf_rcu_read_lock();
+ local = global_mask_array_one[0];
+ if (!local) {
+ err = 4;
+ bpf_rcu_read_unlock();
+ return 0;
+ }
+
+ bpf_rcu_read_unlock();
+
+ return 0;
+}
+
+static int _global_mask_array_rcu(struct bpf_cpumask **mask0,
+ struct bpf_cpumask **mask1)
+{
+ struct bpf_cpumask *local;
+
+ if (!is_test_task())
+ return 0;
+
+ /* Check if two kptrs in the array work and independently */
+
+ local = create_cpumask();
+ if (!local)
+ return 0;
+
+ bpf_rcu_read_lock();
+
+ local = bpf_kptr_xchg(mask0, local);
+ if (local) {
+ err = 1;
+ goto err_exit;
+ }
+
+ /* [<mask 0>, NULL] */
+ if (!*mask0 || *mask1) {
+ err = 2;
+ goto err_exit;
+ }
+
+ local = create_cpumask();
+ if (!local) {
+ err = 9;
+ goto err_exit;
+ }
+
+ local = bpf_kptr_xchg(mask1, local);
+ if (local) {
+ err = 10;
+ goto err_exit;
+ }
+
+ /* [<mask 0>, <mask 1>] */
+ if (!*mask0 || !*mask1 || *mask0 == *mask1) {
+ err = 11;
+ goto err_exit;
+ }
+
+err_exit:
+ if (local)
+ bpf_cpumask_release(local);
+ bpf_rcu_read_unlock();
+ return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_rcu, struct task_struct *task, u64 clone_flags)
+{
+ return _global_mask_array_rcu(&global_mask_array[0], &global_mask_array[1]);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_l2_rcu, struct task_struct *task, u64 clone_flags)
+{
+ return _global_mask_array_rcu(&global_mask_array_l2[0][0], &global_mask_array_l2[1][0]);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_nested_rcu, struct task_struct *task, u64 clone_flags)
+{
+ return _global_mask_array_rcu(&global_mask_nested[0].mask, &global_mask_nested[1].mask);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_nested_deep_rcu, struct task_struct *task, u64 clone_flags)
+{
+ return _global_mask_array_rcu(&global_mask_nested_deep.ptrs[0].m.mask,
+ &global_mask_nested_deep.ptrs[1].m.mask);
+}
+
SEC("tp_btf/task_newtask")
int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
{
--
2.34.1
* [PATCH bpf-next v3 6/7] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (4 preceding siblings ...)
2024-05-01 20:47 ` [PATCH bpf-next v3 5/7] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
2024-05-01 20:47 ` [PATCH bpf-next v3 7/7] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
6 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure global arrays of bpf_rb_root and fields of bpf_rb_root in nested
struct types work correctly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../testing/selftests/bpf/prog_tests/rbtree.c | 47 +++++++++++
tools/testing/selftests/bpf/progs/rbtree.c | 77 +++++++++++++++++++
2 files changed, 124 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/rbtree.c b/tools/testing/selftests/bpf/prog_tests/rbtree.c
index e9300c96607d..9818f06c97c5 100644
--- a/tools/testing/selftests/bpf/prog_tests/rbtree.c
+++ b/tools/testing/selftests/bpf/prog_tests/rbtree.c
@@ -31,6 +31,28 @@ static void test_rbtree_add_nodes(void)
rbtree__destroy(skel);
}
+static void test_rbtree_add_nodes_nested(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ struct rbtree *skel;
+ int ret;
+
+ skel = rbtree__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "rbtree__open_and_load"))
+ return;
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.rbtree_add_nodes_nested), &opts);
+ ASSERT_OK(ret, "rbtree_add_nodes_nested run");
+ ASSERT_OK(opts.retval, "rbtree_add_nodes_nested retval");
+ ASSERT_EQ(skel->data->less_callback_ran, 1, "rbtree_add_nodes_nested less_callback_ran");
+
+ rbtree__destroy(skel);
+}
+
static void test_rbtree_add_and_remove(void)
{
LIBBPF_OPTS(bpf_test_run_opts, opts,
@@ -53,6 +75,27 @@ static void test_rbtree_add_and_remove(void)
rbtree__destroy(skel);
}
+static void test_rbtree_add_and_remove_array(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ struct rbtree *skel;
+ int ret;
+
+ skel = rbtree__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "rbtree__open_and_load"))
+ return;
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.rbtree_add_and_remove_array), &opts);
+ ASSERT_OK(ret, "rbtree_add_and_remove_array");
+ ASSERT_OK(opts.retval, "rbtree_add_and_remove_array retval");
+
+ rbtree__destroy(skel);
+}
+
static void test_rbtree_first_and_remove(void)
{
LIBBPF_OPTS(bpf_test_run_opts, opts,
@@ -104,8 +147,12 @@ void test_rbtree_success(void)
{
if (test__start_subtest("rbtree_add_nodes"))
test_rbtree_add_nodes();
+ if (test__start_subtest("rbtree_add_nodes_nested"))
+ test_rbtree_add_nodes_nested();
if (test__start_subtest("rbtree_add_and_remove"))
test_rbtree_add_and_remove();
+ if (test__start_subtest("rbtree_add_and_remove_array"))
+ test_rbtree_add_and_remove_array();
if (test__start_subtest("rbtree_first_and_remove"))
test_rbtree_first_and_remove();
if (test__start_subtest("rbtree_api_release_aliasing"))
diff --git a/tools/testing/selftests/bpf/progs/rbtree.c b/tools/testing/selftests/bpf/progs/rbtree.c
index b09f4fffe57c..a3620c15c136 100644
--- a/tools/testing/selftests/bpf/progs/rbtree.c
+++ b/tools/testing/selftests/bpf/progs/rbtree.c
@@ -13,6 +13,15 @@ struct node_data {
struct bpf_rb_node node;
};
+struct root_nested_inner {
+ struct bpf_spin_lock glock;
+ struct bpf_rb_root root __contains(node_data, node);
+};
+
+struct root_nested {
+ struct root_nested_inner inner;
+};
+
long less_callback_ran = -1;
long removed_key = -1;
long first_data[2] = {-1, -1};
@@ -20,6 +29,9 @@ long first_data[2] = {-1, -1};
#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
private(A) struct bpf_spin_lock glock;
private(A) struct bpf_rb_root groot __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array[2] __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array_one[1] __contains(node_data, node);
+private(B) struct root_nested groot_nested;
static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
@@ -71,6 +83,12 @@ long rbtree_add_nodes(void *ctx)
return __add_three(&groot, &glock);
}
+SEC("tc")
+long rbtree_add_nodes_nested(void *ctx)
+{
+ return __add_three(&groot_nested.inner.root, &groot_nested.inner.glock);
+}
+
SEC("tc")
long rbtree_add_and_remove(void *ctx)
{
@@ -109,6 +127,65 @@ long rbtree_add_and_remove(void *ctx)
return 1;
}
+SEC("tc")
+long rbtree_add_and_remove_array(void *ctx)
+{
+ struct bpf_rb_node *res1 = NULL, *res2 = NULL, *res3 = NULL;
+ struct node_data *nodes[3][2] = {{NULL, NULL}, {NULL, NULL}, {NULL, NULL}};
+ struct node_data *n;
+ long k1 = -1, k2 = -1, k3 = -1;
+ int i, j;
+
+ for (i = 0; i < 3; i++) {
+ for (j = 0; j < 2; j++) {
+ nodes[i][j] = bpf_obj_new(typeof(*nodes[i][j]));
+ if (!nodes[i][j])
+ goto err_out;
+ nodes[i][j]->key = i * 2 + j;
+ }
+ }
+
+ bpf_spin_lock(&glock);
+ for (i = 0; i < 2; i++)
+ for (j = 0; j < 2; j++)
+ bpf_rbtree_add(&groot_array[i], &nodes[i][j]->node, less);
+ for (j = 0; j < 2; j++)
+ bpf_rbtree_add(&groot_array_one[0], &nodes[2][j]->node, less);
+ res1 = bpf_rbtree_remove(&groot_array[0], &nodes[0][0]->node);
+ res2 = bpf_rbtree_remove(&groot_array[1], &nodes[1][0]->node);
+ res3 = bpf_rbtree_remove(&groot_array_one[0], &nodes[2][0]->node);
+ bpf_spin_unlock(&glock);
+
+ if (res1) {
+ n = container_of(res1, struct node_data, node);
+ k1 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (res2) {
+ n = container_of(res2, struct node_data, node);
+ k2 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (res3) {
+ n = container_of(res3, struct node_data, node);
+ k3 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (k1 != 0 || k2 != 2 || k3 != 4)
+ return 2;
+
+ return 0;
+
+err_out:
+ for (i = 0; i < 3; i++) {
+ for (j = 0; j < 2; j++) {
+ if (nodes[i][j])
+ bpf_obj_drop(nodes[i][j]);
+ }
+ }
+ return 1;
+}
+
SEC("tc")
long rbtree_first_and_remove(void *ctx)
{
--
2.34.1
* [PATCH bpf-next v3 7/7] selftests/bpf: Test global bpf_list_head arrays.
2024-05-01 20:47 [PATCH bpf-next v3 0/7] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (5 preceding siblings ...)
2024-05-01 20:47 ` [PATCH bpf-next v3 6/7] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types Kui-Feng Lee
@ 2024-05-01 20:47 ` Kui-Feng Lee
6 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-01 20:47 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure that global arrays of bpf_list_head and bpf_list_head fields
in nested struct types work correctly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../selftests/bpf/prog_tests/linked_list.c | 12 ++++++
.../testing/selftests/bpf/progs/linked_list.c | 42 +++++++++++++++++++
2 files changed, 54 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 2fb89de63bd2..77d07e0a4a55 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -183,6 +183,18 @@ static void test_linked_list_success(int mode, bool leave_in_map)
if (!leave_in_map)
clear_fields(skel->maps.bss_A);
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_push_pop_nested), &opts);
+ ASSERT_OK(ret, "global_list_push_pop_nested");
+ ASSERT_OK(opts.retval, "global_list_push_pop_nested retval");
+ if (!leave_in_map)
+ clear_fields(skel->maps.bss_A);
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_array_push_pop), &opts);
+ ASSERT_OK(ret, "global_list_array_push_pop");
+ ASSERT_OK(opts.retval, "global_list_array_push_pop retval");
+ if (!leave_in_map)
+ clear_fields(skel->maps.bss_A);
+
if (mode == PUSH_POP)
goto end;
diff --git a/tools/testing/selftests/bpf/progs/linked_list.c b/tools/testing/selftests/bpf/progs/linked_list.c
index 26205ca80679..f69bf3e30321 100644
--- a/tools/testing/selftests/bpf/progs/linked_list.c
+++ b/tools/testing/selftests/bpf/progs/linked_list.c
@@ -11,6 +11,22 @@
#include "linked_list.h"
+struct head_nested_inner {
+ struct bpf_spin_lock lock;
+ struct bpf_list_head head __contains(foo, node2);
+};
+
+struct head_nested {
+ int dummy;
+ struct head_nested_inner inner;
+};
+
+private(C) struct bpf_spin_lock glock_c;
+private(C) struct bpf_list_head ghead_array[2] __contains(foo, node2);
+private(C) struct bpf_list_head ghead_array_one[1] __contains(foo, node2);
+
+private(D) struct head_nested ghead_nested;
+
static __always_inline
int list_push_pop(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map)
{
@@ -309,6 +325,32 @@ int global_list_push_pop(void *ctx)
return test_list_push_pop(&glock, &ghead);
}
+SEC("tc")
+int global_list_push_pop_nested(void *ctx)
+{
+ return test_list_push_pop(&ghead_nested.inner.lock, &ghead_nested.inner.head);
+}
+
+SEC("tc")
+int global_list_array_push_pop(void *ctx)
+{
+ int r;
+
+ r = test_list_push_pop(&glock_c, &ghead_array[0]);
+ if (r)
+ return r;
+
+ r = test_list_push_pop(&glock_c, &ghead_array[1]);
+ if (r)
+ return r;
+
+ /* Arrays with only one element is a special case, being treated
+ * just like a bpf_list_head variable by the verifier, not an
+ * array.
+ */
+ return test_list_push_pop(&glock_c, &ghead_array_one[0]);
+}
+
SEC("tc")
int map_list_push_pop_multiple(void *ctx)
{
--
2.34.1
* Re: [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays.
2024-05-01 20:47 ` [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays Kui-Feng Lee
@ 2024-05-02 17:20 ` Eduard Zingerman
2024-05-03 18:02 ` Kui-Feng Lee
0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2024-05-02 17:20 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Wed, 2024-05-01 at 13:47 -0700, Kui-Feng Lee wrote:
I think this looks good for repeating fields of nested arrays
(w/o visiting nested structures), two nits below.
> @@ -3575,6 +3628,19 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
> for_each_vsi(i, t, vsi) {
> const struct btf_type *var = btf_type_by_id(btf, vsi->type);
> const struct btf_type *var_type = btf_type_by_id(btf, var->type);
> + const struct btf_array *array;
> + u32 j, nelems = 1;
> +
> + /* Walk into array types to find the element type and the
> + * number of elements in the (flattened) array.
> + */
> + for (j = 0; j < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); j++) {
> + array = btf_array(var_type);
> + nelems *= array->nelems;
> + var_type = btf_type_by_id(btf, array->type);
> + }
> + if (nelems == 0)
> + continue;
Nit: Should this return an error if j == MAX_RESOLVE_DEPTH after the loop?
>
> field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
> field_mask, &seen_mask, &align, &sz);
> @@ -3584,7 +3650,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
> return field_type;
>
> off = vsi->offset;
> - if (vsi->size != sz)
> + if (vsi->size != sz * nelems)
> continue;
> if (off % align)
> continue;
> @@ -3624,9 +3690,14 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>
> if (ret == BTF_FIELD_IGNORE)
> continue;
> - if (idx >= info_cnt)
> + if (idx + nelems > info_cnt)
> return -E2BIG;
Nit: This is bounded by BTF_FIELDS_MAX which has value of 11,
would that be enough?
> - ++idx;
> + if (nelems > 1) {
> + ret = btf_repeat_field(info, idx, nelems - 1, sz);
> + if (ret < 0)
> + return ret;
> + }
> + idx += nelems;
> }
> return idx;
> }
* Re: [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively.
2024-05-01 20:47 ` [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
@ 2024-05-02 19:34 ` Eduard Zingerman
2024-05-03 18:07 ` Kui-Feng Lee
0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2024-05-02 19:34 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Wed, 2024-05-01 at 13:47 -0700, Kui-Feng Lee wrote:
> The verifier has field information for specific special types, such as
> kptr, rbtree root, and list head. These types are handled
> differently. However, we did not previously examine the types of fields of
> a struct type variable. Field information records were not generated for
> the kptrs, rbtree roots, and linked_list heads that are not located at the
> outermost struct type of a variable.
>
> For example,
>
> struct A {
> struct task_struct __kptr * task;
> };
>
> struct B {
> struct A mem_a;
> }
>
> struct B var_b;
>
> It did not examine "struct A" so as not to generate field information for
> the kptr in "struct A" for "var_b".
>
> This patch enables BPF programs to define fields of these special types in
> a struct type other than the direct type of a variable or in a struct type
> that is the type of a field in the value type of a map.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
I think that the main logic of this commit is fine.
A few nitpicks about code organization below.
> kernel/bpf/btf.c | 118 +++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 98 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 4a78cd28fab0..2ceff77b7e71 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3493,41 +3493,83 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
[...]
> +static int btf_find_struct_field(const struct btf *btf,
> + const struct btf_type *t, u32 field_mask,
> + struct btf_field_info *info, int info_cnt);
> +
> +/* Find special fields in the struct type of a field.
> + *
> + * This function is used to find fields of special types that is not a
> + * global variable or a direct field of a struct type. It also handles the
> + * repetition if it is the element type of an array.
> + */
> +static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
> + u32 off, u32 nelems,
> + u32 field_mask, struct btf_field_info *info,
> + int info_cnt)
> +{
> + int ret, err, i;
> +
> + ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt);
btf_find_nested_struct() and btf_find_struct_field() are mutually recursive,
as far as I can see this is usually avoided in kernel source.
Would it be possible to make stack explicit or limit traversal depth here?
The 'info_cnt' field won't work as it could be unchanged in
btf_find_struct_field() if 'idx == 0'.
> +
> + if (ret <= 0)
> + return ret;
> +
> + /* Shift the offsets of the nested struct fields to the offsets
> + * related to the container.
> + */
> + for (i = 0; i < ret; i++)
> + info[i].off += off;
> +
> + if (nelems > 1) {
> + err = btf_repeat_fields(info, 0, ret, nelems - 1, t->size);
> + if (err == 0)
> + ret *= nelems;
> + else
> + ret = err;
> + }
> +
> + return ret;
> +}
> +
> static int btf_find_struct_field(const struct btf *btf,
> const struct btf_type *t, u32 field_mask,
> struct btf_field_info *info, int info_cnt)
[...]
> @@ -3644,6 +3707,21 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>
> field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
> field_mask, &seen_mask, &align, &sz);
Actions taken for members in btf_find_datasec_var() and
btf_find_struct_field() are almost identical; would it be possible to
add a refactoring commit to this patch set so that the common logic is
moved to a separate function? It looks like such a function would only
have to be parameterized by member size and offset.
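Roughly something like this (a hypothetical name and signature, only to
illustrate the idea):

  static int btf_find_field_one(const struct btf *btf,
                                const struct btf_type *var_type,
                                u32 off, u32 expected_size, u32 nelems,
                                u32 field_mask, u32 *seen_mask,
                                struct btf_field_info *info, int info_cnt);

so that btf_find_struct_field() and btf_find_datasec_var() only compute
the member's offset and size before calling the helper.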
> + /* Look into variables of struct types */
> + if ((field_type == BPF_KPTR_REF || !field_type) &&
> + __btf_type_is_struct(var_type)) {
> + sz = var_type->size;
> + if (vsi->size != sz * nelems)
> + continue;
> + off = vsi->offset;
> + ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
> + &info[idx], info_cnt - idx);
> + if (ret < 0)
> + return ret;
> + idx += ret;
> + continue;
> + }
> +
> if (field_type == 0)
> continue;
> if (field_type < 0)
[...]
* Re: [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays.
2024-05-02 17:20 ` Eduard Zingerman
@ 2024-05-03 18:02 ` Kui-Feng Lee
2024-05-03 18:10 ` Eduard Zingerman
0 siblings, 1 reply; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-03 18:02 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 5/2/24 10:20, Eduard Zingerman wrote:
> On Wed, 2024-05-01 at 13:47 -0700, Kui-Feng Lee wrote:
>
> I think this looks good for repeating fields of nested arrays
> (w/o visiting nested structures), two nits below.
>
>> @@ -3575,6 +3628,19 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>> for_each_vsi(i, t, vsi) {
>> const struct btf_type *var = btf_type_by_id(btf, vsi->type);
>> const struct btf_type *var_type = btf_type_by_id(btf, var->type);
>> + const struct btf_array *array;
>> + u32 j, nelems = 1;
>> +
>> + /* Walk into array types to find the element type and the
>> + * number of elements in the (flattened) array.
>> + */
>> + for (j = 0; j < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); j++) {
>> + array = btf_array(var_type);
>> + nelems *= array->nelems;
>> + var_type = btf_type_by_id(btf, array->type);
>> + }
>> + if (nelems == 0)
>> + continue;
>
> Nit: Should this return an error if j == MAX_RESOLVE_DEPTH after the loop?
agree
>
>>
>> field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
>> field_mask, &seen_mask, &align, &sz);
>> @@ -3584,7 +3650,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>> return field_type;
>>
>> off = vsi->offset;
>> - if (vsi->size != sz)
>> + if (vsi->size != sz * nelems)
>> continue;
>> if (off % align)
>> continue;
>> @@ -3624,9 +3690,14 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>>
>> if (ret == BTF_FIELD_IGNORE)
>> continue;
>> - if (idx >= info_cnt)
>> + if (idx + nelems > info_cnt)
>> return -E2BIG;
>
> Nit: This is bounded by BTF_FIELDS_MAX which has value of 11,
> would that be enough?
So far, no one has complained about it yet!
But someone may reach the limit in the future.
If people want a flexible length, I will solve it in a follow-up.
WDYT?
>
>> - ++idx;
>> + if (nelems > 1) {
>> + ret = btf_repeat_field(info, idx, nelems - 1, sz);
>> + if (ret < 0)
>> + return ret;
>> + }
>> + idx += nelems;
>> }
>> return idx;
>> }
>
>
* Re: [PATCH bpf-next v3 4/7] bpf: look into the types of the fields of a struct type recursively.
2024-05-02 19:34 ` Eduard Zingerman
@ 2024-05-03 18:07 ` Kui-Feng Lee
0 siblings, 0 replies; 13+ messages in thread
From: Kui-Feng Lee @ 2024-05-03 18:07 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 5/2/24 12:34, Eduard Zingerman wrote:
> On Wed, 2024-05-01 at 13:47 -0700, Kui-Feng Lee wrote:
>> The verifier has field information for specific special types, such as
>> kptr, rbtree root, and list head. These types are handled
>> differently. However, we did not previously examine the types of fields of
>> a struct type variable. Field information records were not generated for
>> the kptrs, rbtree roots, and linked_list heads that are not located at the
>> outermost struct type of a variable.
>>
>> For example,
>>
>> struct A {
>> struct task_struct __kptr * task;
>> };
>>
>> struct B {
>> struct A mem_a;
>> }
>>
>> struct B var_b;
>>
>> It did not examine "struct A" so as not to generate field information for
>> the kptr in "struct A" for "var_b".
>>
>> This patch enables BPF programs to define fields of these special types in
>> a struct type other than the direct type of a variable or in a struct type
>> that is the type of a field in the value type of a map.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>> ---
>
> I think that the main logic of this commit is fine.
> A few nitpicks about code organization below.
>
>> kernel/bpf/btf.c | 118 +++++++++++++++++++++++++++++++++++++++--------
>> 1 file changed, 98 insertions(+), 20 deletions(-)
>>
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index 4a78cd28fab0..2ceff77b7e71 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -3493,41 +3493,83 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
>
> [...]
>
>> +static int btf_find_struct_field(const struct btf *btf,
>> + const struct btf_type *t, u32 field_mask,
>> + struct btf_field_info *info, int info_cnt);
>> +
>> +/* Find special fields in the struct type of a field.
>> + *
>> + * This function is used to find fields of special types that is not a
>> + * global variable or a direct field of a struct type. It also handles the
>> + * repetition if it is the element type of an array.
>> + */
>> +static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
>> + u32 off, u32 nelems,
>> + u32 field_mask, struct btf_field_info *info,
>> + int info_cnt)
>> +{
>> + int ret, err, i;
>> +
>> + ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt);
>
> btf_find_nested_struct() and btf_find_struct_field() are mutually recursive,
> as far as I can see this is usually avoided in kernel source.
> Would it be possible to make stack explicit or limit traversal depth here?
Sure!
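One way could be to bound the depth explicitly (a rough sketch, not the
final patch; the extra "level" parameter is hypothetical):

  static int btf_find_struct_field(const struct btf *btf,
                                   const struct btf_type *t, u32 field_mask,
                                   struct btf_field_info *info, int info_cnt,
                                   u32 level);

  static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
                                    u32 off, u32 nelems, u32 field_mask,
                                    struct btf_field_info *info, int info_cnt,
                                    u32 level)
  {
          int ret;

          if (level >= MAX_RESOLVE_DEPTH)
                  return -E2BIG;
          ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt, level + 1);
          if (ret <= 0)
                  return ret;
          /* offset shifting and array repetition stay as in this patch */
          return ret;
  }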
> The 'info_cnt' field won't work as it could be unchanged in
> btf_find_struct_field() if 'idx == 0'
>
>> +
>> + if (ret <= 0)
>> + return ret;
>> +
>> + /* Shift the offsets of the nested struct fields to the offsets
>> + * related to the container.
>> + */
>> + for (i = 0; i < ret; i++)
>> + info[i].off += off;
>> +
>> + if (nelems > 1) {
>> + err = btf_repeat_fields(info, 0, ret, nelems - 1, t->size);
>> + if (err == 0)
>> + ret *= nelems;
>> + else
>> + ret = err;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> static int btf_find_struct_field(const struct btf *btf,
>> const struct btf_type *t, u32 field_mask,
>> struct btf_field_info *info, int info_cnt)
>
> [...]
>
>> @@ -3644,6 +3707,21 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
>>
>> field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
>> field_mask, &seen_mask, &align, &sz);
>
> Actions taken for members in btf_find_datasec_var() and
> btf_find_struct_field() are almost identical, would it be possible to
> add a refactoring commit this patch-set so that common logic is moved
> to a separate function? It looks like this function would have to be
> parameterized only by member size and offset.
Of course, it could be.
>
>> + /* Look into variables of struct types */
>> + if ((field_type == BPF_KPTR_REF || !field_type) &&
>> + __btf_type_is_struct(var_type)) {
>> + sz = var_type->size;
>> + if (vsi->size != sz * nelems)
>> + continue;
>> + off = vsi->offset;
>> + ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
>> + &info[idx], info_cnt - idx);
>> + if (ret < 0)
>> + return ret;
>> + idx += ret;
>> + continue;
>> + }
>> +
>> if (field_type == 0)
>> continue;
>> if (field_type < 0)
>
> [...]
* Re: [PATCH bpf-next v3 3/7] bpf: create repeated fields for arrays.
2024-05-03 18:02 ` Kui-Feng Lee
@ 2024-05-03 18:10 ` Eduard Zingerman
0 siblings, 0 replies; 13+ messages in thread
From: Eduard Zingerman @ 2024-05-03 18:10 UTC (permalink / raw)
To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On Fri, 2024-05-03 at 11:02 -0700, Kui-Feng Lee wrote:
[...]
> > > @@ -3624,9 +3690,14 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
> > >
> > > if (ret == BTF_FIELD_IGNORE)
> > > continue;
> > > - if (idx >= info_cnt)
> > > + if (idx + nelems > info_cnt)
> > > return -E2BIG;
> >
> > Nit: This is bounded by BTF_FIELDS_MAX which has value of 11,
> > would that be enough?
>
> So far, no one has complained it yet!
> But, some one will reach the limit in future.
> If people want a flexible length, I will solve it in a follow-up.
> WDYT?
Sure, follow-up works.
Just that 11 is not much for an array.
I think sched_ext is the only user for this feature at the moment,
so you are in the best position to judge which size is appropriate.