* [PATCH bpf-next 01/11] bpf: Remove unnecessary checks on the offset of btf_field.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:12 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 02/11] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
` (9 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
reg_find_field_offset() always returns a btf_field with a matching offset
value. Checking the offset of the returned btf_field is unnecessary.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2aad6d90550f..0d44940c12d2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11529,7 +11529,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
node_off = reg->off + reg->var_off.value;
field = reg_find_field_offset(reg, node_off, node_field_type);
- if (!field || field->offset != node_off) {
+ if (!field) {
verbose(env, "%s not found at offset=%u\n", node_type_name, node_off);
return -EINVAL;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 01/11] bpf: Remove unnecessary checks on the offset of btf_field.
2024-04-10 0:41 ` [PATCH bpf-next 01/11] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
@ 2024-04-11 22:12 ` Eduard Zingerman
0 siblings, 0 replies; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:12 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> reg_find_field_offset() always return a btf_field with a matching offset
> value. Checking the offset of the returned btf_field is unnecessary.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 02/11] bpf: Remove unnecessary call to btf_field_type_size().
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 01/11] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:12 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 03/11] bpf: Add nelems to struct btf_field_info and btf_field Kui-Feng Lee
` (8 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
field->size has been initialized by btf_parse_fields() with the value
returned by btf_field_type_size(). Use it instead of calling
btf_field_type_size() again.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 2 +-
kernel/bpf/verifier.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 90c4a32d89ff..e71ea78a4db9 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6672,7 +6672,7 @@ int btf_struct_access(struct bpf_verifier_log *log,
for (i = 0; i < rec->cnt; i++) {
struct btf_field *field = &rec->fields[i];
u32 offset = field->offset;
- if (off < offset + btf_field_type_size(field->type) && offset < off + size) {
+ if (off < offset + field->size && offset < off + size) {
bpf_log(log,
"direct access to %s is disallowed\n",
btf_field_type_name(field->type));
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0d44940c12d2..86adacc5f76c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5432,7 +5432,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
* this program. To check that [x1, x2) overlaps with [y1, y2),
* it is sufficient to check x1 < y2 && y1 < x2.
*/
- if (reg->smin_value + off < p + btf_field_type_size(field->type) &&
+ if (reg->smin_value + off < p + field->size &&
p < reg->umax_value + off + size) {
switch (field->type) {
case BPF_KPTR_UNREF:
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 02/11] bpf: Remove unnecessary call to btf_field_type_size().
2024-04-10 0:41 ` [PATCH bpf-next 02/11] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
@ 2024-04-11 22:12 ` Eduard Zingerman
0 siblings, 0 replies; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:12 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> field->size has been initialized by bpf_parse_fields() with the value
> returned by btf_field_type_size(). Use it instead of calling
> btf_field_type_size() again.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 03/11] bpf: Add nelems to struct btf_field_info and btf_field.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 01/11] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 02/11] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state Kui-Feng Lee
` (7 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
To support global field arrays for bpf_rb_root, bpf_list_head, and kptr,
add nelems (the number of elements) to btf_field_info and btf_field. The
value of nelems is the length of the field array. Nested arrays are
flattened to get this length.
If a field is not an array, the value of nelems is 1. In other words, an
array with only one element cannot be distinguished from a field of the
same type as the array's element. However, this is not a problem thanks
to the offset and size kept in btf_field.
field->size is the size of the entire array if the field is an array.
For now, the nelems of btf_field is always set to 1 to keep arrays
deactivated, while the nelems of btf_field_info is set to the actual
number of elements for data sections. A later patch will activate arrays.
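As a rough user-space illustration (not kernel code), the flattening of nested array dimensions into a single nelems value can be sketched as follows; flatten_nelems() and its dims parameter are hypothetical stand-ins for the walk over BTF array types in btf_find_datasec_var():

```c
#include <assert.h>

/* Hypothetical sketch: multiply the element counts of every nesting
 * level together, the way nested BTF arrays are flattened into one
 * nelems value. A zero-length dimension yields nelems == 0, which the
 * datasec scan skips; a non-array (depth 0) keeps nelems == 1. */
static unsigned int flatten_nelems(const unsigned int *dims, int depth)
{
	unsigned int nelems = 1;
	int i;

	for (i = 0; i < depth; i++)
		nelems *= dims[i];
	return nelems;
}
```

For example, a declaration like `struct bpf_list_head heads[2][3]` would flatten to nelems == 6.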
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 1 +
kernel/bpf/btf.c | 25 ++++++++++++++++++++++---
2 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 62762390c93d..f397ccdc6d4b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -226,6 +226,7 @@ struct btf_field_graph_root {
struct btf_field {
u32 offset;
u32 size;
+ u32 nelems;
enum btf_field_type type;
union {
struct btf_field_kptr kptr;
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index e71ea78a4db9..831073285ef2 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3289,6 +3289,7 @@ enum {
struct btf_field_info {
enum btf_field_type type;
u32 off;
+ u32 nelems;
union {
struct {
u32 type_id;
@@ -3548,6 +3549,7 @@ static int btf_find_struct_field(const struct btf *btf,
continue;
if (idx >= info_cnt)
return -E2BIG;
+ info[idx].nelems = 1;
++idx;
}
return idx;
@@ -3565,6 +3567,19 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
for_each_vsi(i, t, vsi) {
const struct btf_type *var = btf_type_by_id(btf, vsi->type);
const struct btf_type *var_type = btf_type_by_id(btf, var->type);
+ const struct btf_array *array;
+ u32 j, nelems = 1;
+
+ /* Walk into array types to find the element type and the
+ * number of elements in the (flattened) array.
+ */
+ for (j = 0; j < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); j++) {
+ array = btf_array(var_type);
+ nelems *= array->nelems;
+ var_type = btf_type_by_id(btf, array->type);
+ }
+ if (nelems == 0)
+ continue;
field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
field_mask, &seen_mask, &align, &sz);
@@ -3574,7 +3589,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
return field_type;
off = vsi->offset;
- if (vsi->size != sz)
+ if (vsi->size != sz * nelems)
continue;
if (off % align)
continue;
@@ -3582,9 +3597,12 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
switch (field_type) {
case BPF_SPIN_LOCK:
case BPF_TIMER:
+ case BPF_REFCOUNT:
+ if (nelems != 1)
+ continue;
+ fallthrough;
case BPF_LIST_NODE:
case BPF_RB_NODE:
- case BPF_REFCOUNT:
ret = btf_find_struct(btf, var_type, off, sz, field_type,
idx < info_cnt ? &info[idx] : &tmp);
if (ret < 0)
@@ -3615,7 +3633,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
continue;
if (idx >= info_cnt)
return -E2BIG;
- ++idx;
+ info[idx++].nelems = nelems;
}
return idx;
}
@@ -3834,6 +3852,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
rec->fields[i].offset = info_arr[i].off;
rec->fields[i].type = info_arr[i].type;
rec->fields[i].size = field_type_size;
+ rec->fields[i].nelems = 1;
switch (info_arr[i].type) {
case BPF_SPIN_LOCK:
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (2 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 03/11] bpf: Add nelems to struct btf_field_info and btf_field Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:13 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s) Kui-Feng Lee
` (6 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Previously, check_map_kptr_access() assumed that the accessed offset was
identical to the offset in the btf_field. However, once field arrays are
supported, the accessed offset no longer necessarily matches the offset
in the btf_field: it may refer to an element in an array, while the
offset in the btf_field refers to the beginning of the array.
To handle arrays, compute the offset from the reg state instead.
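For illustration only, once an access offset has been validated to land on an element boundary, the element index follows from the arithmetic in the updated verifier comment (kptr_field->offset + i * (kptr_field->size / kptr_field->nelems) == off + reg->var_off.value). kptr_elem_index() below is a hypothetical helper, not a kernel API:

```c
#include <assert.h>

/* Hypothetical sketch: recover the array element index i from a constant
 * access offset, assuming the access was already checked to land exactly
 * on an element start. */
static unsigned int kptr_elem_index(unsigned int field_off,
				    unsigned int field_size,
				    unsigned int nelems,
				    unsigned int access_off)
{
	unsigned int elem_size = field_size / nelems;

	return (access_off - field_off) / elem_size;
}
```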
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/verifier.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 86adacc5f76c..34e43220c6f0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5349,18 +5349,19 @@ static u32 btf_ld_kptr_type(struct bpf_verifier_env *env, struct btf_field *kptr
}
static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
- int value_regno, int insn_idx,
+ u32 offset, int value_regno, int insn_idx,
struct btf_field *kptr_field)
{
struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
int class = BPF_CLASS(insn->code);
- struct bpf_reg_state *val_reg;
+ struct bpf_reg_state *val_reg, *reg;
/* Things we already checked for in check_map_access and caller:
* - Reject cases where variable offset may touch kptr
* - size of access (must be BPF_DW)
* - tnum_is_const(reg->var_off)
- * - kptr_field->offset == off + reg->var_off.value
+ * - kptr_field->offset + kptr_field->size * i / kptr_field->nelems
+ * == off + reg->var_off.value where n is an index into the array
*/
/* Only BPF_[LDX,STX,ST] | BPF_MEM | BPF_DW is supported */
if (BPF_MODE(insn->code) != BPF_MEM) {
@@ -5393,8 +5394,9 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
return -EACCES;
} else if (class == BPF_ST) {
if (insn->imm) {
- verbose(env, "BPF_ST imm must be 0 when storing to kptr at off=%u\n",
- kptr_field->offset);
+ reg = reg_state(env, regno);
+ verbose(env, "BPF_ST imm must be 0 when storing to kptr at off=%llu\n",
+ reg->var_off.value + offset);
return -EACCES;
}
} else {
@@ -6781,7 +6783,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
kptr_field = btf_record_find(reg->map_ptr->record,
off + reg->var_off.value, BPF_KPTR);
if (kptr_field) {
- err = check_map_kptr_access(env, regno, value_regno, insn_idx, kptr_field);
+ err = check_map_kptr_access(env, regno, off, value_regno,
+ insn_idx, kptr_field);
} else if (t == BPF_READ && value_regno >= 0) {
struct bpf_map *map = reg->map_ptr;
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread* Re: [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state.
2024-04-10 0:41 ` [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state Kui-Feng Lee
@ 2024-04-11 22:13 ` Eduard Zingerman
2024-04-12 4:00 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:13 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> Previously, check_map_kptr_access() assumed that the accessed offset was
> identical to the offset in the btf_field. However, once field array is
> supported, the accessed offset no longer matches the offset in the
> bpf_field. It may refer to an element in an array while the offset in the
> bpf_field refers to the beginning of the array.
>
> To handle arrays, it computes the offset from the reg state instead.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> kernel/bpf/verifier.c | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 86adacc5f76c..34e43220c6f0 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5349,18 +5349,19 @@ static u32 btf_ld_kptr_type(struct bpf_verifier_env *env, struct btf_field *kptr
> }
>
> static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
> - int value_regno, int insn_idx,
> + u32 offset, int value_regno, int insn_idx,
> struct btf_field *kptr_field)
> {
> struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
> int class = BPF_CLASS(insn->code);
> - struct bpf_reg_state *val_reg;
> + struct bpf_reg_state *val_reg, *reg;
>
> /* Things we already checked for in check_map_access and caller:
Nit: at the moment when this patch is applied check_map_access is not
yet modified.
> * - Reject cases where variable offset may touch kptr
> * - size of access (must be BPF_DW)
> * - tnum_is_const(reg->var_off)
> - * - kptr_field->offset == off + reg->var_off.value
> + * - kptr_field->offset + kptr_field->size * i / kptr_field->nelems
> + * == off + reg->var_off.value where n is an index into the array
^^^ nit: this should be 'i'
> */
> /* Only BPF_[LDX,STX,ST] | BPF_MEM | BPF_DW is supported */
> if (BPF_MODE(insn->code) != BPF_MEM) {
[...]
^ permalink raw reply [flat|nested] 27+ messages in thread* Re: [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state.
2024-04-11 22:13 ` Eduard Zingerman
@ 2024-04-12 4:00 ` Kui-Feng Lee
0 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 4:00 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/11/24 15:13, Eduard Zingerman wrote:
> On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
>> Previously, check_map_kptr_access() assumed that the accessed offset was
>> identical to the offset in the btf_field. However, once field array is
>> supported, the accessed offset no longer matches the offset in the
>> bpf_field. It may refer to an element in an array while the offset in the
>> bpf_field refers to the beginning of the array.
>>
>> To handle arrays, it computes the offset from the reg state instead.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>> ---
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
>> kernel/bpf/verifier.c | 15 +++++++++------
>> 1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 86adacc5f76c..34e43220c6f0 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -5349,18 +5349,19 @@ static u32 btf_ld_kptr_type(struct bpf_verifier_env *env, struct btf_field *kptr
>> }
>>
>> static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
>> - int value_regno, int insn_idx,
>> + u32 offset, int value_regno, int insn_idx,
>> struct btf_field *kptr_field)
>> {
>> struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
>> int class = BPF_CLASS(insn->code);
>> - struct bpf_reg_state *val_reg;
>> + struct bpf_reg_state *val_reg, *reg;
>>
>> /* Things we already checked for in check_map_access and caller:
>
> Nit: at the moment when this patch is applied check_map_access is not
> yet modified.
Yes, I will change the order of the patches.
>
>> * - Reject cases where variable offset may touch kptr
>> * - size of access (must be BPF_DW)
>> * - tnum_is_const(reg->var_off)
>> - * - kptr_field->offset == off + reg->var_off.value
>> + * - kptr_field->offset + kptr_field->size * i / kptr_field->nelems
>> + * == off + reg->var_off.value where n is an index into the array
> ^^^ nit: this should be 'i'
Yes!
>
>> */
>> /* Only BPF_[LDX,STX,ST] | BPF_MEM | BPF_DW is supported */
>> if (BPF_MODE(insn->code) != BPF_MEM) {
>
> [...]
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s).
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (3 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 04/11] bpf: check_map_kptr_access() compute the offset from the reg state Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:13 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays Kui-Feng Lee
` (5 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Initialize and free each element in a btf_field array based on the values
of nelems and size in btf_field. For nested arrays, nelems is the length
of the flattened array.
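A minimal user-space model (not kernel code) of the per-element initialization loop this patch adds for BPF_LIST_HEAD arrays; list_head_model stands in for struct list_head, and the self-pointing assignments mirror what INIT_LIST_HEAD does:

```c
#include <assert.h>

/* Stand-in for struct list_head. */
struct list_head_model {
	struct list_head_model *next, *prev;
};

/* Sketch of the loop pattern: walk nelems elements of elem_size bytes
 * each, initializing every element to an empty list (a list head that
 * points back at itself). */
static void init_list_head_array(void *addr, unsigned int size,
				 unsigned int nelems)
{
	unsigned int elem_size = size / nelems;
	unsigned int i;

	for (i = 0; i < nelems; i++, addr = (char *)addr + elem_size) {
		struct list_head_model *h = addr;

		h->next = h;
		h->prev = h;
	}
}
```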
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 7 +++++++
kernel/bpf/syscall.c | 39 ++++++++++++++++++++++++---------------
2 files changed, 31 insertions(+), 15 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f397ccdc6d4b..ee53dcd14bd4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -390,6 +390,9 @@ static inline u32 btf_field_type_align(enum btf_field_type type)
static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
{
+ u32 elem_size;
+ int i;
+
memset(addr, 0, field->size);
switch (field->type) {
@@ -400,6 +403,10 @@ static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
RB_CLEAR_NODE((struct rb_node *)addr);
break;
case BPF_LIST_HEAD:
+ elem_size = field->size / field->nelems;
+ for (i = 0; i < field->nelems; i++, addr += elem_size)
+ INIT_LIST_HEAD((struct list_head *)addr);
+ break;
case BPF_LIST_NODE:
INIT_LIST_HEAD((struct list_head *)addr);
break;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e44c276e8617..543ff0d944e8 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -672,6 +672,8 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
const struct btf_field *field = &fields[i];
void *field_ptr = obj + field->offset;
void *xchgd_field;
+ u32 elem_size = field->size / field->nelems;
+ int j;
switch (fields[i].type) {
case BPF_SPIN_LOCK:
@@ -680,35 +682,42 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
bpf_timer_cancel_and_free(field_ptr);
break;
case BPF_KPTR_UNREF:
- WRITE_ONCE(*(u64 *)field_ptr, 0);
+ for (j = 0; j < field->nelems; j++, field_ptr += elem_size)
+ WRITE_ONCE(*(u64 *)field_ptr, 0);
break;
case BPF_KPTR_REF:
case BPF_KPTR_PERCPU:
- xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
- if (!xchgd_field)
- break;
-
- if (!btf_is_kernel(field->kptr.btf)) {
+ if (!btf_is_kernel(field->kptr.btf))
pointee_struct_meta = btf_find_struct_meta(field->kptr.btf,
field->kptr.btf_id);
- migrate_disable();
- __bpf_obj_drop_impl(xchgd_field, pointee_struct_meta ?
- pointee_struct_meta->record : NULL,
- fields[i].type == BPF_KPTR_PERCPU);
- migrate_enable();
- } else {
- field->kptr.dtor(xchgd_field);
+
+ for (j = 0; j < field->nelems; j++, field_ptr += elem_size) {
+ xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
+ if (!xchgd_field)
+ continue;
+
+ if (!btf_is_kernel(field->kptr.btf)) {
+ migrate_disable();
+ __bpf_obj_drop_impl(xchgd_field, pointee_struct_meta ?
+ pointee_struct_meta->record : NULL,
+ fields[i].type == BPF_KPTR_PERCPU);
+ migrate_enable();
+ } else {
+ field->kptr.dtor(xchgd_field);
+ }
}
break;
case BPF_LIST_HEAD:
if (WARN_ON_ONCE(rec->spin_lock_off < 0))
continue;
- bpf_list_head_free(field, field_ptr, obj + rec->spin_lock_off);
+ for (j = 0; j < field->nelems; j++, field_ptr += elem_size)
+ bpf_list_head_free(field, field_ptr, obj + rec->spin_lock_off);
break;
case BPF_RB_ROOT:
if (WARN_ON_ONCE(rec->spin_lock_off < 0))
continue;
- bpf_rb_root_free(field, field_ptr, obj + rec->spin_lock_off);
+ for (j = 0; j < field->nelems; j++, field_ptr += elem_size)
+ bpf_rb_root_free(field, field_ptr, obj + rec->spin_lock_off);
break;
case BPF_LIST_NODE:
case BPF_RB_NODE:
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s).
2024-04-10 0:41 ` [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s) Kui-Feng Lee
@ 2024-04-11 22:13 ` Eduard Zingerman
2024-04-12 3:56 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:13 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
[...]
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index f397ccdc6d4b..ee53dcd14bd4 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -390,6 +390,9 @@ static inline u32 btf_field_type_align(enum btf_field_type type)
>
> static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
> {
> + u32 elem_size;
> + int i;
> +
> memset(addr, 0, field->size);
>
> switch (field->type) {
> @@ -400,6 +403,10 @@ static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
> RB_CLEAR_NODE((struct rb_node *)addr);
> break;
> case BPF_LIST_HEAD:
> + elem_size = field->size / field->nelems;
> + for (i = 0; i < field->nelems; i++, addr += elem_size)
> + INIT_LIST_HEAD((struct list_head *)addr);
> + break;
In btf_find_datasec_var() nelem > 1 is allowed for the following types:
- BPF_LIST_{NODE,HEAD}
- BPF_KPTR_{REF,UNREF,PERCPU}:
- BPF_RB_{NODE,ROOT}
Of these types bpf_obj_init_field() handles in a special way
BPF_RB_NODE, BPF_LIST_HEAD and BPF_LIST_NODE.
However, only BPF_LIST_HEAD handling is adjusted in this patch.
Is there a reason to omit BPF_RB_NODE and BPF_LIST_NODE?
> case BPF_LIST_NODE:
> INIT_LIST_HEAD((struct list_head *)addr);
> break;
[...]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s).
2024-04-11 22:13 ` Eduard Zingerman
@ 2024-04-12 3:56 ` Kui-Feng Lee
2024-04-12 15:32 ` Eduard Zingerman
0 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 3:56 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/11/24 15:13, Eduard Zingerman wrote:
> On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> [...]
>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index f397ccdc6d4b..ee53dcd14bd4 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -390,6 +390,9 @@ static inline u32 btf_field_type_align(enum btf_field_type type)
>>
>> static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
>> {
>> + u32 elem_size;
>> + int i;
>> +
>> memset(addr, 0, field->size);
>>
>> switch (field->type) {
>> @@ -400,6 +403,10 @@ static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
>> RB_CLEAR_NODE((struct rb_node *)addr);
>> break;
>> case BPF_LIST_HEAD:
>> + elem_size = field->size / field->nelems;
>> + for (i = 0; i < field->nelems; i++, addr += elem_size)
>> + INIT_LIST_HEAD((struct list_head *)addr);
>> + break;
>
> In btf_find_datasec_var() nelem > 1 is allowed for the following types:
> - BPF_LIST_{NODE,HEAD}
> - BPF_KPTR_{REF,UNREF,PERCPU}:
> - BPF_RB_{NODE,ROOT}
>
> Of these types bpf_obj_init_field() handles in a special way
> BPF_RB_NODE, BPF_LIST_HEAD and BPF_LIST_NODE.
> However, only BPF_LIST_HEAD handling is adjusted in this patch.
> Is there a reason to omit BPF_RB_NODE and BPF_LIST_NODE?
Declaring arrays of list nodes or rbtree nodes does not seem useful,
since the nodes themselves don't contain any useful information. Let me
explain below.
We usually declare node fields in other struct types, so declaring arrays
of struct types containing node fields is what we actually want. For
example,
struct foo {
    struct bpf_list_node node;
    ...
};
struct foo all_nodes[64];
However, the verifier doesn't look into the fields of an outer struct
type, except for the array fields handled by this patchset. In the case
of a data section, it doesn't look into struct foo even if we declare a
global variable of that type. For example,
struct foo gFoo;
gFoo would not work since the verifier doesn't follow gFoo into struct
foo.
So, I decided not to support rbtree nodes and list nodes.
However, there is a discussion about looking into the fields of struct
types nested in an outer struct type, so we will have another patchset to
implement it. Once we have it, we can support the gFoo and all_nodes
cases described earlier.
>
>> case BPF_LIST_NODE:
>> INIT_LIST_HEAD((struct list_head *)addr);
>> break;
>
> [...]
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s).
2024-04-12 3:56 ` Kui-Feng Lee
@ 2024-04-12 15:32 ` Eduard Zingerman
2024-04-12 17:00 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-12 15:32 UTC (permalink / raw)
To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On Thu, 2024-04-11 at 20:56 -0700, Kui-Feng Lee wrote:
[...]
> So, I decided not to support rbtree nodes and list nodes.
>
> However, there are a discussion about looking into fields of struct
> types in an outer struct type. So, we will have another patchset to
> implement it. Once we have it, we can support the case of gFoo and
> all_nodes described earlier.
All this makes sense, but in such a case would it make sense to adjust
btf_find_datasec_var() to avoid adding BPF_RB_NODE and BPF_LIST_NODE
variables with nelem > 1? (Like it does for BPF_TIMER etc).
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s).
2024-04-12 15:32 ` Eduard Zingerman
@ 2024-04-12 17:00 ` Kui-Feng Lee
0 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 17:00 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/12/24 08:32, Eduard Zingerman wrote:
> On Thu, 2024-04-11 at 20:56 -0700, Kui-Feng Lee wrote:
> [...]
>
>> So, I decided not to support rbtree nodes and list nodes.
>>
>> However, there are a discussion about looking into fields of struct
>> types in an outer struct type. So, we will have another patchset to
>> implement it. Once we have it, we can support the case of gFoo and
>> all_nodes described earlier.
>
> All this makes sense, but in such a case would it make sense to adjust
> btf_find_datasec_var() to avoid adding BPF_RB_NODE and BPF_LIST_NODE
> variables with nelem > 1? (Like it does for BPF_TIMER etc).
That make sense. I will change it.
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (4 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 05/11] bpf: initialize/free array of btf_field(s) Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:14 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 07/11] bpf: check_map_access() " Kui-Feng Lee
` (4 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make btf_record_find() find the btf_field for an offset by comparing the
offset with the offset of each element, rather than with the offset of
the entire array, when a btf_field represents an array. This is needed to
support btf_field arrays in the future.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/syscall.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 543ff0d944e8..1a37731e632a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -516,11 +516,18 @@ int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
static int btf_field_cmp(const void *a, const void *b)
{
const struct btf_field *f1 = a, *f2 = b;
+ int gt = 1, lt = -1;
+ if (f2->nelems == 0) {
+ swap(f1, f2);
+ swap(gt, lt);
+ }
if (f1->offset < f2->offset)
- return -1;
- else if (f1->offset > f2->offset)
- return 1;
+ return lt;
+ else if (f1->offset >= f2->offset + f2->size)
+ return gt;
+ if ((f1->offset - f2->offset) % (f2->size / f2->nelems))
+ return gt;
return 0;
}
@@ -528,10 +535,14 @@ struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset,
u32 field_mask)
{
struct btf_field *field;
+ struct btf_field key = {
+ .offset = offset,
+ .size = 0, /* as a label for this key */
+ };
if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & field_mask))
return NULL;
- field = bsearch(&offset, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
+ field = bsearch(&key, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
if (!field || !(field->type & field_mask))
return NULL;
return field;
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays.
2024-04-10 0:41 ` [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays Kui-Feng Lee
@ 2024-04-11 22:14 ` Eduard Zingerman
2024-04-12 2:00 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:14 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> Make btf_record_find() find the btf_field for an offset by comparing the
> offset with the offset of each element, rather than the offset of the
> entire array, if a btf_field represents an array. It is important to have
> support for btf_field arrays in the future.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
> kernel/bpf/syscall.c | 19 +++++++++++++++----
> 1 file changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 543ff0d944e8..1a37731e632a 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -516,11 +516,18 @@ int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
> static int btf_field_cmp(const void *a, const void *b)
> {
> const struct btf_field *f1 = a, *f2 = b;
> + int gt = 1, lt = -1;
>
> + if (f2->nelems == 0) {
> + swap(f1, f2);
> + swap(gt, lt);
> + }
> if (f1->offset < f2->offset)
> - return -1;
> - else if (f1->offset > f2->offset)
> - return 1;
> + return lt;
> + else if (f1->offset >= f2->offset + f2->size)
> + return gt;
> + if ((f1->offset - f2->offset) % (f2->size / f2->nelems))
> + return gt;
Binary search requires elements to be sorted in non-decreasing order,
however usage of '%' breaks this requirement. E.g. consider an array
with element size equal to 3:
| elem #0 | elem #1 |
| 0 | 1 | 2 | 3 | 4 | 5 |
^ ^ ^
' ' '
f2 f1 f1'
Here f1 > f2, but f1' == f2, while f1' > f1.
Depending on whether or not fields can overlap this might not be a problem,
but I suggest to rework the comparison function to avoid this confusion.
(E.g., find the leftmost field that overlaps with offset being searched for).
> return 0;
> }
>
> @@ -528,10 +535,14 @@ struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset,
> u32 field_mask)
> {
> struct btf_field *field;
> + struct btf_field key = {
> + .offset = offset,
> + .size = 0, /* as a label for this key */
> + };
>
> if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & field_mask))
> return NULL;
> - field = bsearch(&offset, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
> + field = bsearch(&key, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
> if (!field || !(field->type & field_mask))
> return NULL;
> return field;
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays.
2024-04-11 22:14 ` Eduard Zingerman
@ 2024-04-12 2:00 ` Kui-Feng Lee
0 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 2:00 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/11/24 15:14, Eduard Zingerman wrote:
> On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
>> Make btf_record_find() find the btf_field for an offset by comparing the
>> offset with the offset of each element, rather than the offset of the
>> entire array, if a btf_field represents an array. It is important to have
>> support for btf_field arrays in the future.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>> ---
>> kernel/bpf/syscall.c | 19 +++++++++++++++----
>> 1 file changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
>> index 543ff0d944e8..1a37731e632a 100644
>> --- a/kernel/bpf/syscall.c
>> +++ b/kernel/bpf/syscall.c
>> @@ -516,11 +516,18 @@ int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
>> static int btf_field_cmp(const void *a, const void *b)
>> {
>> const struct btf_field *f1 = a, *f2 = b;
>> + int gt = 1, lt = -1;
>>
>> + if (f2->nelems == 0) {
>> + swap(f1, f2);
>> + swap(gt, lt);
>> + }
>> if (f1->offset < f2->offset)
>> - return -1;
>> - else if (f1->offset > f2->offset)
>> - return 1;
>> + return lt;
>> + else if (f1->offset >= f2->offset + f2->size)
>> + return gt;
>> + if ((f1->offset - f2->offset) % (f2->size / f2->nelems))
>> + return gt;
>
> Binary search requires elements to be sorted in non-decreasing order,
> however usage of '%' breaks this requirement. E.g. consider an array
> with element size equal to 3:
>
> | elem #0 | elem #1 |
> | 0 | 1 | 2 | 3 | 4 | 5 |
> ^ ^ ^
> ' ' '
> f2 f1 f1'
>
> Here f1 > f2, but f1' == f2, while f1' > f1.
> Depending on whether or not fields can overlap this might not be a problem,
> but I suggest to rework the comparison function to avoid this confusion.
> (E.g., find the leftmost field that overlaps with offset being searched for).
Ok! It will match the leftmost field overlapping with the offset, and
then check in btf_record_find() whether the offset aligns with the start
of one of the elements.
>
>> return 0;
>> }
>>
>> @@ -528,10 +535,14 @@ struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset,
>> u32 field_mask)
>> {
>> struct btf_field *field;
>> + struct btf_field key = {
>> + .offset = offset,
>> + .size = 0, /* as a label for this key */
>> + };
>>
>> if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & field_mask))
>> return NULL;
>> - field = bsearch(&offset, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
>> + field = bsearch(&key, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp);
>> if (!field || !(field->type & field_mask))
>> return NULL;
>> return field;
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (5 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 06/11] bpf: Find btf_field with the knowledge of arrays Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-11 22:14 ` Eduard Zingerman
2024-04-10 0:41 ` [PATCH bpf-next 08/11] bpf: Enable and verify btf_field arrays Kui-Feng Lee
` (3 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Ensure that a map access aligns with an element if the corresponding
btf_field, if there is one, represents an array.
It would be necessary for an access to be aligned with the beginning of an
array if we didn't make this change. Any access to elements other than the
first one would be rejected.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/verifier.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 34e43220c6f0..67b89d4ea1ba 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5428,7 +5428,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
rec = map->record;
for (i = 0; i < rec->cnt; i++) {
struct btf_field *field = &rec->fields[i];
- u32 p = field->offset;
+ u32 p = field->offset, var_p, elem_size;
/* If any part of a field can be touched by load/store, reject
* this program. To check that [x1, x2) overlaps with [y1, y2),
@@ -5448,7 +5448,10 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
verbose(env, "kptr access cannot have variable offset\n");
return -EACCES;
}
- if (p != off + reg->var_off.value) {
+ var_p = off + reg->var_off.value;
+ elem_size = field->size / field->nelems;
+ if (var_p < p || var_p >= p + field->size ||
+ (var_p - p) % elem_size) {
verbose(env, "kptr access misaligned expected=%u off=%llu\n",
p, off + reg->var_off.value);
return -EACCES;
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-10 0:41 ` [PATCH bpf-next 07/11] bpf: check_map_access() " Kui-Feng Lee
@ 2024-04-11 22:14 ` Eduard Zingerman
2024-04-12 16:32 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-11 22:14 UTC (permalink / raw)
To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng
On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
[...]
> Any access to elements other than the first one would be rejected.
I'm not sure this is true, could you please point me to a specific
check in the code that enforces access to go to the first element?
The check added in this patch only enforces correct alignment with
array element start.
Other than this, the patch looks good to me.
[...]
> @@ -5448,7 +5448,10 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
> verbose(env, "kptr access cannot have variable offset\n");
> return -EACCES;
> }
> - if (p != off + reg->var_off.value) {
> + var_p = off + reg->var_off.value;
> + elem_size = field->size / field->nelems;
> + if (var_p < p || var_p >= p + field->size ||
> + (var_p - p) % elem_size) {
> verbose(env, "kptr access misaligned expected=%u off=%llu\n",
> p, off + reg->var_off.value);
> return -EACCES;
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-11 22:14 ` Eduard Zingerman
@ 2024-04-12 16:32 ` Kui-Feng Lee
2024-04-12 19:08 ` Eduard Zingerman
0 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 16:32 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/11/24 15:14, Eduard Zingerman wrote:
> On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> [...]
>
>> Any access to elements other than the first one would be rejected.
>
> I'm not sure this is true, could you please point me to a specific
> check in the code that enforces access to go to the first element?
> The check added in this patch only enforces correct alignment with
> array element start.
I mean that accessing elements other than the first one would be rejected
if we didn't have this patch.
Before the change, the check enforces alignment with the start of the
whole array. Once the array feature is enabled, the "size" of struct
btf_field will be the size of the entire array. In other words, an access
to a later element, other than the first one, doesn't align with the
beginning of the entire array and would be rejected.
>
> Other than this, the patch looks good to me.
>
> [...]
>
>> @@ -5448,7 +5448,10 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
>> verbose(env, "kptr access cannot have variable offset\n");
>> return -EACCES;
>> }
>> - if (p != off + reg->var_off.value) {
Here "p" is the start of the entire array. An access to any element other
than the first one would return -EACCES.
>> + var_p = off + reg->var_off.value;
>> + elem_size = field->size / field->nelems;
>> + if (var_p < p || var_p >= p + field->size ||
>> + (var_p - p) % elem_size) {
>> verbose(env, "kptr access misaligned expected=%u off=%llu\n",
>> p, off + reg->var_off.value);
>> return -EACCES;
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread* Re: [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-12 16:32 ` Kui-Feng Lee
@ 2024-04-12 19:08 ` Eduard Zingerman
2024-04-12 19:29 ` Kui-Feng Lee
0 siblings, 1 reply; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-12 19:08 UTC (permalink / raw)
To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On Fri, 2024-04-12 at 09:32 -0700, Kui-Feng Lee wrote:
>
> On 4/11/24 15:14, Eduard Zingerman wrote:
> > On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
> > [...]
> >
> > > Any access to elements other than the first one would be rejected.
> >
> > I'm not sure this is true, could you please point me to a specific
> > check in the code that enforces access to go to the first element?
> > The check added in this patch only enforces correct alignment with
> > array element start.
>
> I mean accessing to elements other than the first one would be rejected
> if we don't have this patch.
Oh, I misunderstood the above statement then.
The way I read it was: "after this patch access to elements other than
the first one would be rejected". While this patch explicitly allows
access to the subsequent array elements, hence confusion.
Sorry for the noise.
>
> Before the change, it enforces correct alignment with the start of the
> whole array. Once the array feature is enabled, the "size" of struct
> btf_field will be the size of entire array. In another word, accessing
> to later elements, other than the first one, doesn't align with the
> beginning of entire array, and will be rejected.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-12 19:08 ` Eduard Zingerman
@ 2024-04-12 19:29 ` Kui-Feng Lee
2024-04-12 19:50 ` Eduard Zingerman
0 siblings, 1 reply; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-12 19:29 UTC (permalink / raw)
To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On 4/12/24 12:08, Eduard Zingerman wrote:
> On Fri, 2024-04-12 at 09:32 -0700, Kui-Feng Lee wrote:
>>
>> On 4/11/24 15:14, Eduard Zingerman wrote:
>>> On Tue, 2024-04-09 at 17:41 -0700, Kui-Feng Lee wrote:
>>> [...]
>>>
>>>> Any access to elements other than the first one would be rejected.
>>>
>>> I'm not sure this is true, could you please point me to a specific
>>> check in the code that enforces access to go to the first element?
>>> The check added in this patch only enforces correct alignment with
>>> array element start.
>>
>> I mean accessing to elements other than the first one would be rejected
>> if we don't have this patch.
>
> Oh, I misunderstood the above statement then.
> The way I read it was: "after this patch access to elements other than
> the first one would be rejected". While this patch explicitly allows
> access to the subsequent array elements, hence confusion.
> Sorry for the noise.
I will rephrase it to make it more clear.
>
>>
>> Before the change, it enforces correct alignment with the start of the
>> whole array. Once the array feature is enabled, the "size" of struct
>> btf_field will be the size of entire array. In another word, accessing
>> to later elements, other than the first one, doesn't align with the
>> beginning of entire array, and will be rejected.
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH bpf-next 07/11] bpf: check_map_access() with the knowledge of arrays.
2024-04-12 19:29 ` Kui-Feng Lee
@ 2024-04-12 19:50 ` Eduard Zingerman
0 siblings, 0 replies; 27+ messages in thread
From: Eduard Zingerman @ 2024-04-12 19:50 UTC (permalink / raw)
To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
kernel-team, andrii
Cc: kuifeng
On Fri, 2024-04-12 at 12:29 -0700, Kui-Feng Lee wrote:
[...]
>
> > Oh, I misunderstood the above statement then.
> > The way I read it was: "after this patch access to elements other than
> > the first one would be rejected". While this patch explicitly allows
> > access to the subsequent array elements, hence confusion.
> > Sorry for the noise.
>
> I will rephrase it to make it more clear.
Great, thank you.
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH bpf-next 08/11] bpf: Enable and verify btf_field arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (6 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 07/11] bpf: check_map_access() " Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 09/11] selftests/bpf: Test global kptr arrays Kui-Feng Lee
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Set "nelems" of a btf_field to the length of the flattened array if the
btf_field represents an array. The "size" is then the size of the entire
array rather than the size of an individual element.
Once "nelems" and "size" reflect the length and size of arrays
respectively, BPF programs can easily declare multiple kptr,
bpf_rb_root, or bpf_list_head fields in a global array.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 831073285ef2..d228360ee1c6 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3836,7 +3836,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
rec->timer_off = -EINVAL;
rec->refcount_off = -EINVAL;
for (i = 0; i < cnt; i++) {
- field_type_size = btf_field_type_size(info_arr[i].type);
+ field_type_size = btf_field_type_size(info_arr[i].type) * info_arr[i].nelems;
if (info_arr[i].off + field_type_size > value_size) {
WARN_ONCE(1, "verifier bug off %d size %d", info_arr[i].off, value_size);
ret = -EFAULT;
@@ -3852,7 +3852,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
rec->fields[i].offset = info_arr[i].off;
rec->fields[i].type = info_arr[i].type;
rec->fields[i].size = field_type_size;
- rec->fields[i].nelems = 1;
+ rec->fields[i].nelems = info_arr[i].nelems;
switch (info_arr[i].type) {
case BPF_SPIN_LOCK:
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH bpf-next 09/11] selftests/bpf: Test global kptr arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (7 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 08/11] bpf: Enable and verify btf_field arrays Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 10/11] selftests/bpf: Test global bpf_rb_root arrays Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 11/11] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
10 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure that BPF programs can declare global kptr arrays.
An array with only one element is a special case: its element is treated
like a non-array kptr field. Nested arrays are also tested to ensure they
are handled properly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../selftests/bpf/prog_tests/cpumask.c | 3 +
.../selftests/bpf/progs/cpumask_success.c | 147 ++++++++++++++++++
2 files changed, 150 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c
index ecf89df78109..bba601e235f6 100644
--- a/tools/testing/selftests/bpf/prog_tests/cpumask.c
+++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c
@@ -18,6 +18,9 @@ static const char * const cpumask_success_testcases[] = {
"test_insert_leave",
"test_insert_remove_release",
"test_global_mask_rcu",
+ "test_global_mask_array_one_rcu",
+ "test_global_mask_array_rcu",
+ "test_global_mask_array_l2_rcu",
"test_cpumask_weight",
};
diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c
index 7a1e64c6c065..9d76d85680d7 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_success.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_success.c
@@ -11,6 +11,9 @@
char _license[] SEC("license") = "GPL";
int pid, nr_cpus;
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1];
static bool is_test_task(void)
{
@@ -460,6 +463,150 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
return 0;
}
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_one_rcu, struct task_struct *task, u64 clone_flags)
+{
+ struct bpf_cpumask *local, *prev;
+
+ if (!is_test_task())
+ return 0;
+
+ /* Kptr arrays with one element are special cased, being treated
+ * just like a single pointer.
+ */
+
+ local = create_cpumask();
+ if (!local)
+ return 0;
+
+ prev = bpf_kptr_xchg(&global_mask_array_one[0], local);
+ if (prev) {
+ bpf_cpumask_release(prev);
+ err = 3;
+ return 0;
+ }
+
+ bpf_rcu_read_lock();
+ local = global_mask_array_one[0];
+ if (!local) {
+ err = 4;
+ bpf_rcu_read_unlock();
+ return 0;
+ }
+
+ bpf_rcu_read_unlock();
+
+ return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_rcu, struct task_struct *task, u64 clone_flags)
+{
+ struct bpf_cpumask *local;
+
+ if (!is_test_task())
+ return 0;
+
+ /* Check that the two kptrs in the array work independently */
+
+ local = create_cpumask();
+ if (!local)
+ return 0;
+
+ bpf_rcu_read_lock();
+
+ local = bpf_kptr_xchg(&global_mask_array[0], local);
+ if (local) {
+ err = 1;
+ goto err_exit;
+ }
+
+ /* global_mask_array => [<mask>, NULL] */
+ if (!global_mask_array[0] || global_mask_array[1]) {
+ err = 2;
+ goto err_exit;
+ }
+
+ local = create_cpumask();
+ if (!local) {
+ err = 9;
+ goto err_exit;
+ }
+
+ local = bpf_kptr_xchg(&global_mask_array[1], local);
+ if (local) {
+ err = 10;
+ goto err_exit;
+ }
+
+ /* global_mask_array => [<mask 1>, <mask 2>] */
+ if (!global_mask_array[0] || !global_mask_array[1] ||
+ global_mask_array[0] == global_mask_array[1]) {
+ err = 11;
+ goto err_exit;
+ }
+
+err_exit:
+ if (local)
+ bpf_cpumask_release(local);
+ bpf_rcu_read_unlock();
+ return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_l2_rcu, struct task_struct *task, u64 clone_flags)
+{
+ struct bpf_cpumask *local;
+
+ if (!is_test_task())
+ return 0;
+
+ /* Check that the two kptrs in the array work independently */
+
+ local = create_cpumask();
+ if (!local)
+ return 0;
+
+ bpf_rcu_read_lock();
+
+ local = bpf_kptr_xchg(&global_mask_array_l2[0][0], local);
+ if (local) {
+ err = 1;
+ goto err_exit;
+ }
+
+ /* global_mask_array => [[<mask>], [NULL]] */
+ if (!global_mask_array_l2[0][0] || global_mask_array_l2[1][0]) {
+ err = 2;
+ goto err_exit;
+ }
+
+ local = create_cpumask();
+ if (!local) {
+ err = 9;
+ goto err_exit;
+ }
+
+ local = bpf_kptr_xchg(&global_mask_array_l2[1][0], local);
+ if (local) {
+ err = 10;
+ goto err_exit;
+ }
+
+ /* global_mask_array => [[<mask 1>], [<mask 2>]] */
+ if (!global_mask_array_l2[0][0] || !global_mask_array_l2[1][0] ||
+ global_mask_array_l2[0][0] == global_mask_array_l2[1][0]) {
+ err = 11;
+ goto err_exit;
+ }
+
+err_exit:
+ if (local)
+ bpf_cpumask_release(local);
+ bpf_rcu_read_unlock();
+ return 0;
+}
+
SEC("tp_btf/task_newtask")
int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
{
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH bpf-next 10/11] selftests/bpf: Test global bpf_rb_root arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (8 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 09/11] selftests/bpf: Test global kptr arrays Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
2024-04-10 0:41 ` [PATCH bpf-next 11/11] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
10 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure global arrays of bpf_rb_root work correctly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../testing/selftests/bpf/prog_tests/rbtree.c | 23 +++++++
tools/testing/selftests/bpf/progs/rbtree.c | 63 +++++++++++++++++++
2 files changed, 86 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/rbtree.c b/tools/testing/selftests/bpf/prog_tests/rbtree.c
index e9300c96607d..399d61b8a9a4 100644
--- a/tools/testing/selftests/bpf/prog_tests/rbtree.c
+++ b/tools/testing/selftests/bpf/prog_tests/rbtree.c
@@ -53,6 +53,27 @@ static void test_rbtree_add_and_remove(void)
rbtree__destroy(skel);
}
+static void test_rbtree_add_and_remove_array(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ struct rbtree *skel;
+ int ret;
+
+ skel = rbtree__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "rbtree__open_and_load"))
+ return;
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.rbtree_add_and_remove_array), &opts);
+ ASSERT_OK(ret, "rbtree_add_and_remove_array");
+ ASSERT_OK(opts.retval, "rbtree_add_and_remove_array retval");
+
+ rbtree__destroy(skel);
+}
+
static void test_rbtree_first_and_remove(void)
{
LIBBPF_OPTS(bpf_test_run_opts, opts,
@@ -106,6 +127,8 @@ void test_rbtree_success(void)
test_rbtree_add_nodes();
if (test__start_subtest("rbtree_add_and_remove"))
test_rbtree_add_and_remove();
+ if (test__start_subtest("rbtree_add_and_remove_array"))
+ test_rbtree_add_and_remove_array();
if (test__start_subtest("rbtree_first_and_remove"))
test_rbtree_first_and_remove();
if (test__start_subtest("rbtree_api_release_aliasing"))
diff --git a/tools/testing/selftests/bpf/progs/rbtree.c b/tools/testing/selftests/bpf/progs/rbtree.c
index b09f4fffe57c..6bf7285944d6 100644
--- a/tools/testing/selftests/bpf/progs/rbtree.c
+++ b/tools/testing/selftests/bpf/progs/rbtree.c
@@ -20,6 +20,8 @@ long first_data[2] = {-1, -1};
#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
private(A) struct bpf_spin_lock glock;
private(A) struct bpf_rb_root groot __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array[2] __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array_one[1] __contains(node_data, node);
static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
@@ -109,6 +111,67 @@ long rbtree_add_and_remove(void *ctx)
return 1;
}
+SEC("tc")
+long rbtree_add_and_remove_array(void *ctx)
+{
+ struct bpf_rb_node *res1 = NULL, *res2 = NULL, *res3 = NULL;
+ struct node_data *nodes[3][2] = {{NULL, NULL}, {NULL, NULL}, {NULL, NULL}};
+ struct node_data *n;
+ long k1 = -1, k2 = -1, k3 = -1;
+ int i, j;
+
+ for (i = 0; i < 3; i++) {
+ for (j = 0; j < 2; j++) {
+ nodes[i][j] = bpf_obj_new(typeof(*nodes[i][j]));
+ if (!nodes[i][j])
+ goto err_out;
+ nodes[i][j]->key = i * 2 + j;
+ }
+ }
+
+ bpf_spin_lock(&glock);
+ for (i = 0; i < 2; i++)
+ for (j = 0; j < 2; j++)
+ bpf_rbtree_add(&groot_array[i], &nodes[i][j]->node, less);
+ for (j = 0; j < 2; j++)
+ bpf_rbtree_add(&groot_array_one[0], &nodes[2][j]->node, less);
+ res1 = bpf_rbtree_remove(&groot_array[0], &nodes[0][0]->node);
+ res2 = bpf_rbtree_remove(&groot_array[1], &nodes[1][0]->node);
+ res3 = bpf_rbtree_remove(&groot_array_one[0], &nodes[2][0]->node);
+ bpf_spin_unlock(&glock);
+
+ if (res1) {
+ n = container_of(res1, struct node_data, node);
+ k1 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (res2) {
+ n = container_of(res2, struct node_data, node);
+ k2 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (res3) {
+ n = container_of(res3, struct node_data, node);
+ k3 = n->key;
+ bpf_obj_drop(n);
+ }
+ if (k1 != 0 || k2 != 2)
+ return 2;
+ if (k3 != 4)
+ return 3;
+
+ return 0;
+
+err_out:
+ for (i = 0; i < 3; i++) {
+ for (j = 0; j < 2; j++) {
+ if (nodes[i][j])
+ bpf_obj_drop(nodes[i][j]);
+ }
+ }
+ return 1;
+}
+
SEC("tc")
long rbtree_first_and_remove(void *ctx)
{
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH bpf-next 11/11] selftests/bpf: Test global bpf_list_head arrays.
2024-04-10 0:41 [PATCH bpf-next 00/11] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
` (9 preceding siblings ...)
2024-04-10 0:41 ` [PATCH bpf-next 10/11] selftests/bpf: Test global bpf_rb_root arrays Kui-Feng Lee
@ 2024-04-10 0:41 ` Kui-Feng Lee
10 siblings, 0 replies; 27+ messages in thread
From: Kui-Feng Lee @ 2024-04-10 0:41 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii
Cc: sinquersw, kuifeng, Kui-Feng Lee
Make sure global arrays of bpf_list_head(s) work correctly.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
.../selftests/bpf/prog_tests/linked_list.c | 6 +++++
.../testing/selftests/bpf/progs/linked_list.c | 24 +++++++++++++++++++
2 files changed, 30 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 2fb89de63bd2..0f0ccee688c5 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -183,6 +183,12 @@ static void test_linked_list_success(int mode, bool leave_in_map)
if (!leave_in_map)
clear_fields(skel->maps.bss_A);
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_array_push_pop), &opts);
+ ASSERT_OK(ret, "global_list_array_push_pop");
+ ASSERT_OK(opts.retval, "global_list_array_push_pop retval");
+ if (!leave_in_map)
+ clear_fields(skel->maps.bss_A);
+
if (mode == PUSH_POP)
goto end;
diff --git a/tools/testing/selftests/bpf/progs/linked_list.c b/tools/testing/selftests/bpf/progs/linked_list.c
index 26205ca80679..7c203b5367a7 100644
--- a/tools/testing/selftests/bpf/progs/linked_list.c
+++ b/tools/testing/selftests/bpf/progs/linked_list.c
@@ -11,6 +11,10 @@
#include "linked_list.h"
+private(C) struct bpf_spin_lock glock_c;
+private(C) struct bpf_list_head ghead_array[2] __contains(foo, node2);
+private(C) struct bpf_list_head ghead_array_one[1] __contains(foo, node2);
+
static __always_inline
int list_push_pop(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map)
{
@@ -309,6 +313,26 @@ int global_list_push_pop(void *ctx)
return test_list_push_pop(&glock, &ghead);
}
+SEC("tc")
+int global_list_array_push_pop(void *ctx)
+{
+ int r;
+
+ r = test_list_push_pop(&glock_c, &ghead_array[0]);
+ if (r)
+ return r;
+
+ r = test_list_push_pop(&glock_c, &ghead_array[1]);
+ if (r)
+ return r;
+
+ /* Arrays with only one element are a special case, being treated
+ * just like a bpf_list_head variable by the verifier, not an
+ * array.
+ */
+ return test_list_push_pop(&glock_c, &ghead_array_one[0]);
+}
+
SEC("tc")
int map_list_push_pop_multiple(void *ctx)
{
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread