* [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
@ 2025-02-07 1:48 Andrii Nakryiko
2025-02-07 1:48 ` [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field Andrii Nakryiko
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Andrii Nakryiko @ 2025-02-07 1:48 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Emil Tsalapatis
Libbpf has a somewhat obscure feature of automatically adjusting the
"size" of LDX/STX/ST instructions (memory load and store instructions),
based on the originally recorded access size (u8, u16, u32, or u64) and
the actual size of the field on the target kernel. This is meant to
facilitate using BPF CO-RE on 32-bit architectures (pointers are always
64-bit in BPF, but the host kernel's BTF will have them as 32-bit
types), as well as to generally support safe type changes (unsigned
integer type changes can be transparently "relocated").

One issue that surfaced only now, 5 years after this logic was
implemented, is how this all works when dealing with fields that are
arrays. This isn't all that easy and straightforward to hit (see the
selftests that reproduce this condition), but one of the sched_ext BPF
programs did hit it with an innocent-looking loop.

Long story short, libbpf used to calculate the entire array size,
instead of only the array's element size. But it's the element that is
loaded or stored by LDX/STX/ST instructions (1, 2, 4, or 8 bytes), so
that's what libbpf should check. This patch adjusts the logic for
arrays and fixes the issue.

Reported-by: Emil Tsalapatis <emil@etsalapatis.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/relo_core.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index 7632e9d41827..2b83c98a1137 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -683,7 +683,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
{
const struct bpf_core_accessor *acc;
const struct btf_type *t;
- __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
+ __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id, elem_id;
const struct btf_member *m;
const struct btf_type *mt;
bool bitfield;
@@ -706,8 +706,14 @@ static int bpf_core_calc_field_relo(const char *prog_name,
if (!acc->name) {
if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
*val = spec->bit_offset / 8;
- /* remember field size for load/store mem size */
- sz = btf__resolve_size(spec->btf, acc->type_id);
+ /* remember field size for load/store mem size;
+ * note, for arrays we care about individual element
+ * sizes, not the overall array size
+ */
+ t = skip_mods_and_typedefs(spec->btf, acc->type_id, &elem_id);
+ while (btf_is_array(t))
+ t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+ sz = btf__resolve_size(spec->btf, elem_id);
if (sz < 0)
return -EINVAL;
*field_sz = sz;
@@ -767,7 +773,17 @@ static int bpf_core_calc_field_relo(const char *prog_name,
case BPF_CORE_FIELD_BYTE_OFFSET:
*val = byte_off;
if (!bitfield) {
- *field_sz = byte_sz;
+ /* remember field size for load/store mem size;
+ * note, for arrays we care about individual element
+ * sizes, not the overall array size
+ */
+ t = skip_mods_and_typedefs(spec->btf, field_type_id, &elem_id);
+ while (btf_is_array(t))
+ t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+ sz = btf__resolve_size(spec->btf, elem_id);
+ if (sz < 0)
+ return -EINVAL;
+ *field_sz = sz;
*type_id = field_type_id;
}
break;
--
2.43.5
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field
2025-02-07 1:48 [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Andrii Nakryiko
@ 2025-02-07 1:48 ` Andrii Nakryiko
2025-02-10 20:12 ` Cupertino Miranda
2025-02-07 21:45 ` [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Eduard Zingerman
2025-02-15 4:10 ` patchwork-bot+netdevbpf
2 siblings, 1 reply; 8+ messages in thread
From: Andrii Nakryiko @ 2025-02-07 1:48 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Emil Tsalapatis
Add a simple repro for the issue of miscalculating the LDX/STX/ST CO-RE
relocation size adjustment when the CO-RE relocation target type is an
ARRAY.

We need to make sure that the compiler generates an LDX/STX/ST
instruction with a CO-RE relocation against the entire ARRAY type, not
the ARRAY's element. With the code pattern in the selftest, we get this:

  59: 61 71 00 00 00 00 00 00	w1 = *(u32 *)(r7 + 0x0)
      00000000000001d8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)

Here the offset of `int a[5]` is embedded (through the CO-RE
relocation) into the memory load instruction itself.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/testing/selftests/bpf/prog_tests/core_reloc.c | 6 ++++--
...f__core_reloc_arrays___err_bad_signed_arr_elem_sz.c | 3 +++
tools/testing/selftests/bpf/progs/core_reloc_types.h | 10 ++++++++++
.../selftests/bpf/progs/test_core_reloc_arrays.c | 5 +++++
4 files changed, 22 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index e10ea92c3fe2..08963c82f30b 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -85,11 +85,11 @@ static int duration = 0;
#define NESTING_ERR_CASE(name) { \
NESTING_CASE_COMMON(name), \
.fails = true, \
- .run_btfgen_fails = true, \
+ .run_btfgen_fails = true, \
}
#define ARRAYS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) { \
- .a = { [2] = 1 }, \
+ .a = { [2] = 1, [3] = 11 }, \
.b = { [1] = { [2] = { [3] = 2 } } }, \
.c = { [1] = { .c = 3 } }, \
.d = { [0] = { [0] = { .d = 4 } } }, \
@@ -108,6 +108,7 @@ static int duration = 0;
.input_len = sizeof(struct core_reloc_##name), \
.output = STRUCT_TO_CHAR_PTR(core_reloc_arrays_output) { \
.a2 = 1, \
+ .a3 = 12, \
.b123 = 2, \
.c1c = 3, \
.d00d = 4, \
@@ -602,6 +603,7 @@ static const struct core_reloc_test_case test_cases[] = {
ARRAYS_ERR_CASE(arrays___err_non_array),
ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
+ ARRAYS_ERR_CASE(arrays___err_bad_signed_arr_elem_sz),
/* enum/ptr/int handling scenarios */
PRIMITIVES_CASE(primitives),
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
new file mode 100644
index 000000000000..21a560427b10
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_bad_signed_arr_elem_sz x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index fd8e1b4c6762..5760ae015e09 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -347,6 +347,7 @@ struct core_reloc_nesting___err_too_deep {
*/
struct core_reloc_arrays_output {
int a2;
+ int a3;
char b123;
int c1c;
int d00d;
@@ -455,6 +456,15 @@ struct core_reloc_arrays___err_bad_zero_sz_arr {
struct core_reloc_arrays_substruct d[1][2];
};
+struct core_reloc_arrays___err_bad_signed_arr_elem_sz {
+ /* int -> short (signed!): not supported case */
+ short a[5];
+ char b[2][3][4];
+ struct core_reloc_arrays_substruct c[3];
+ struct core_reloc_arrays_substruct d[1][2];
+ struct core_reloc_arrays_substruct f[][2];
+};
+
/*
* PRIMITIVES
*/
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
index 51b3f79df523..448403634eea 100644
--- a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
@@ -15,6 +15,7 @@ struct {
struct core_reloc_arrays_output {
int a2;
+ int a3;
char b123;
int c1c;
int d00d;
@@ -41,6 +42,7 @@ int test_core_arrays(void *ctx)
{
struct core_reloc_arrays *in = (void *)&data.in;
struct core_reloc_arrays_output *out = (void *)&data.out;
+ int *a;
if (CORE_READ(&out->a2, &in->a[2]))
return 1;
@@ -53,6 +55,9 @@ int test_core_arrays(void *ctx)
if (CORE_READ(&out->f01c, &in->f[0][1].c))
return 1;
+ a = __builtin_preserve_access_index(({ in->a; }));
+ out->a3 = a[0] + a[1] + a[2] + a[3];
+
return 0;
}
--
2.43.5
* Re: [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
2025-02-07 1:48 [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Andrii Nakryiko
2025-02-07 1:48 ` [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field Andrii Nakryiko
@ 2025-02-07 21:45 ` Eduard Zingerman
2025-02-10 20:05 ` Andrii Nakryiko
2025-02-15 4:10 ` patchwork-bot+netdevbpf
2 siblings, 1 reply; 8+ messages in thread
From: Eduard Zingerman @ 2025-02-07 21:45 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau
Cc: kernel-team, Emil Tsalapatis
On Thu, 2025-02-06 at 17:48 -0800, Andrii Nakryiko wrote:
> Libbpf has a somewhat obscure feature of automatically adjusting the
> "size" of LDX/STX/ST instructions (memory load and store instructions),
> based on the originally recorded access size (u8, u16, u32, or u64) and
> the actual size of the field on the target kernel. This is meant to
> facilitate using BPF CO-RE on 32-bit architectures (pointers are always
> 64-bit in BPF, but the host kernel's BTF will have them as 32-bit
> types), as well as to generally support safe type changes (unsigned
> integer type changes can be transparently "relocated").
>
> One issue that surfaced only now, 5 years after this logic was
> implemented, is how this all works when dealing with fields that are
> arrays. This isn't all that easy and straightforward to hit (see the
> selftests that reproduce this condition), but one of the sched_ext BPF
> programs did hit it with an innocent-looking loop.
>
> Long story short, libbpf used to calculate the entire array size,
> instead of only the array's element size. But it's the element that is
> loaded or stored by LDX/STX/ST instructions (1, 2, 4, or 8 bytes), so
> that's what libbpf should check. This patch adjusts the logic for
> arrays and fixes the issue.
>
> Reported-by: Emil Tsalapatis <emil@etsalapatis.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
Do I understand correctly that for nested arrays the relocation size
would be resolved to the innermost element size? To allow e.g.:

  struct { int a[2][3]; }
  ...
  int *a = __builtin_preserve_access_index(({ in->a; }));
  a[0] = 42;

With the justification that nothing useful could be done with an
'int **a' type when the dimensions are not known?
I guess this makes sense.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>?
> tools/lib/bpf/relo_core.c | 24 ++++++++++++++++++++----
> 1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
> index 7632e9d41827..2b83c98a1137 100644
> --- a/tools/lib/bpf/relo_core.c
> +++ b/tools/lib/bpf/relo_core.c
> @@ -683,7 +683,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
> {
> const struct bpf_core_accessor *acc;
> const struct btf_type *t;
> - __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
> + __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id, elem_id;
> const struct btf_member *m;
> const struct btf_type *mt;
> bool bitfield;
> @@ -706,8 +706,14 @@ static int bpf_core_calc_field_relo(const char *prog_name,
> if (!acc->name) {
> if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
> *val = spec->bit_offset / 8;
> - /* remember field size for load/store mem size */
> - sz = btf__resolve_size(spec->btf, acc->type_id);
> + /* remember field size for load/store mem size;
> + * note, for arrays we care about individual element
> + * sizes, not the overall array size
> + */
> + t = skip_mods_and_typedefs(spec->btf, acc->type_id, &elem_id);
> + while (btf_is_array(t))
> + t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
> + sz = btf__resolve_size(spec->btf, elem_id);
Nit: while trying to figure out what this change is about, I commented
out the above hunk, and this did not trigger any test failures.
[...]
* Re: [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
2025-02-07 21:45 ` [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Eduard Zingerman
@ 2025-02-10 20:05 ` Andrii Nakryiko
0 siblings, 0 replies; 8+ messages in thread
From: Andrii Nakryiko @ 2025-02-10 20:05 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team,
Emil Tsalapatis
On Fri, Feb 7, 2025 at 1:45 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Thu, 2025-02-06 at 17:48 -0800, Andrii Nakryiko wrote:
> > Libbpf has a somewhat obscure feature of automatically adjusting the
> > "size" of LDX/STX/ST instructions (memory load and store instructions),
> > based on the originally recorded access size (u8, u16, u32, or u64) and
> > the actual size of the field on the target kernel. This is meant to
> > facilitate using BPF CO-RE on 32-bit architectures (pointers are always
> > 64-bit in BPF, but the host kernel's BTF will have them as 32-bit
> > types), as well as to generally support safe type changes (unsigned
> > integer type changes can be transparently "relocated").
> >
> > One issue that surfaced only now, 5 years after this logic was
> > implemented, is how this all works when dealing with fields that are
> > arrays. This isn't all that easy and straightforward to hit (see the
> > selftests that reproduce this condition), but one of the sched_ext BPF
> > programs did hit it with an innocent-looking loop.
> >
> > Long story short, libbpf used to calculate the entire array size,
> > instead of only the array's element size. But it's the element that is
> > loaded or stored by LDX/STX/ST instructions (1, 2, 4, or 8 bytes), so
> > that's what libbpf should check. This patch adjusts the logic for
> > arrays and fixes the issue.
> >
> > Reported-by: Emil Tsalapatis <emil@etsalapatis.com>
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> Do I understand correctly that for nested arrays the relocation size
> would be resolved to the innermost element size? To allow e.g.:
>
>   struct { int a[2][3]; }
>   ...
>   int *a = __builtin_preserve_access_index(({ in->a; }));
>   a[0] = 42;
>
> With the justification that nothing useful could be done with an
> 'int **a' type when the dimensions are not known?
> I guess this makes sense.
Known or not, a multi-dimensional array at the lowest level is still
an array of elements, and it is the elements that will be read (up to
u64 at a time), so that's why I'm flattening the array and getting to
the actual element type.
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>?
>
> > tools/lib/bpf/relo_core.c | 24 ++++++++++++++++++++----
> > 1 file changed, 20 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
> > index 7632e9d41827..2b83c98a1137 100644
> > --- a/tools/lib/bpf/relo_core.c
> > +++ b/tools/lib/bpf/relo_core.c
> > @@ -683,7 +683,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
> > {
> > const struct bpf_core_accessor *acc;
> > const struct btf_type *t;
> > - __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
> > + __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id, elem_id;
> > const struct btf_member *m;
> > const struct btf_type *mt;
> > bool bitfield;
> > @@ -706,8 +706,14 @@ static int bpf_core_calc_field_relo(const char *prog_name,
> > if (!acc->name) {
> > if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
> > *val = spec->bit_offset / 8;
> > - /* remember field size for load/store mem size */
> > - sz = btf__resolve_size(spec->btf, acc->type_id);
> > + /* remember field size for load/store mem size;
> > + * note, for arrays we care about individual element
> > + * sizes, not the overall array size
> > + */
> > + t = skip_mods_and_typedefs(spec->btf, acc->type_id, &elem_id);
> > + while (btf_is_array(t))
> > + t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
> > + sz = btf__resolve_size(spec->btf, elem_id);
>
> Nit: while trying to figure out what this change is about
> I commented out the above hunk and this did not trigger any test failures.
I don't remember exactly under which conditions we hit this branch;
something about array element access. But this whole logic has to stay
in sync with the non-array-element CO-RE relocation.
>
> [...]
>
* Re: [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field
2025-02-07 1:48 ` [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field Andrii Nakryiko
@ 2025-02-10 20:12 ` Cupertino Miranda
2025-02-11 0:33 ` Andrii Nakryiko
0 siblings, 1 reply; 8+ messages in thread
From: Cupertino Miranda @ 2025-02-10 20:12 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, ast, daniel, martin.lau
Cc: kernel-team, Emil Tsalapatis
Hi Andrii,
On 07-02-2025 01:48, Andrii Nakryiko wrote:
> Add a simple repro for the issue of miscalculating the LDX/STX/ST CO-RE
> relocation size adjustment when the CO-RE relocation target type is an
> ARRAY.
>
> We need to make sure that the compiler generates an LDX/STX/ST
> instruction with a CO-RE relocation against the entire ARRAY type, not
> the ARRAY's element. With the code pattern in the selftest, we get this:
>
> 59: 61 71 00 00 00 00 00 00 w1 = *(u32 *)(r7 + 0x0)
> 00000000000001d8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)
>
> Here the offset of `int a[5]` is embedded (through the CO-RE
> relocation) into the memory load instruction itself.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> tools/testing/selftests/bpf/prog_tests/core_reloc.c | 6 ++++--
> ...f__core_reloc_arrays___err_bad_signed_arr_elem_sz.c | 3 +++
> tools/testing/selftests/bpf/progs/core_reloc_types.h | 10 ++++++++++
> .../selftests/bpf/progs/test_core_reloc_arrays.c | 5 +++++
> 4 files changed, 22 insertions(+), 2 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> index e10ea92c3fe2..08963c82f30b 100644
> --- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> +++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> @@ -85,11 +85,11 @@ static int duration = 0;
> #define NESTING_ERR_CASE(name) { \
> NESTING_CASE_COMMON(name), \
> .fails = true, \
> - .run_btfgen_fails = true, \
> + .run_btfgen_fails = true, \
> }
>
> #define ARRAYS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) { \
> - .a = { [2] = 1 }, \
> + .a = { [2] = 1, [3] = 11 }, \
> .b = { [1] = { [2] = { [3] = 2 } } }, \
> .c = { [1] = { .c = 3 } }, \
> .d = { [0] = { [0] = { .d = 4 } } }, \
> @@ -108,6 +108,7 @@ static int duration = 0;
> .input_len = sizeof(struct core_reloc_##name), \
> .output = STRUCT_TO_CHAR_PTR(core_reloc_arrays_output) { \
> .a2 = 1, \
> + .a3 = 12, \
> .b123 = 2, \
> .c1c = 3, \
> .d00d = 4, \
> @@ -602,6 +603,7 @@ static const struct core_reloc_test_case test_cases[] = {
> ARRAYS_ERR_CASE(arrays___err_non_array),
> ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
> ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
> + ARRAYS_ERR_CASE(arrays___err_bad_signed_arr_elem_sz),
>
> /* enum/ptr/int handling scenarios */
> PRIMITIVES_CASE(primitives),
> diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
> new file mode 100644
> index 000000000000..21a560427b10
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
> @@ -0,0 +1,3 @@
> +#include "core_reloc_types.h"
> +
> +void f(struct core_reloc_arrays___err_bad_signed_arr_elem_sz x) {}
> diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
> index fd8e1b4c6762..5760ae015e09 100644
> --- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
> +++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
> @@ -347,6 +347,7 @@ struct core_reloc_nesting___err_too_deep {
> */
> struct core_reloc_arrays_output {
> int a2;
> + int a3;
> char b123;
> int c1c;
> int d00d;
> @@ -455,6 +456,15 @@ struct core_reloc_arrays___err_bad_zero_sz_arr {
> struct core_reloc_arrays_substruct d[1][2];
> };
>
> +struct core_reloc_arrays___err_bad_signed_arr_elem_sz {
> + /* int -> short (signed!): not supported case */
> + short a[5];
> + char b[2][3][4];
> + struct core_reloc_arrays_substruct c[3];
> + struct core_reloc_arrays_substruct d[1][2];
> + struct core_reloc_arrays_substruct f[][2];
> +};
> +
> /*
> * PRIMITIVES
> */
> diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> index 51b3f79df523..448403634eea 100644
> --- a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> @@ -15,6 +15,7 @@ struct {
>
> struct core_reloc_arrays_output {
> int a2;
> + int a3;
> char b123;
> int c1c;
> int d00d;
> @@ -41,6 +42,7 @@ int test_core_arrays(void *ctx)
> {
> struct core_reloc_arrays *in = (void *)&data.in;
> struct core_reloc_arrays_output *out = (void *)&data.out;
> + int *a;
>
> if (CORE_READ(&out->a2, &in->a[2]))
> return 1;
> @@ -53,6 +55,9 @@ int test_core_arrays(void *ctx)
> if (CORE_READ(&out->f01c, &in->f[0][1].c))
> return 1;
>
> + a = __builtin_preserve_access_index(({ in->a; }));
> + out->a3 = a[0] + a[1] + a[2] + a[3];
Just trying to understand the expectation from the compiler and CO-RE
in this case: do you expect that all those a[n] accesses would generate
CO-RE relocations assuming the element size of in->a?
> +
> return 0;
> }
>
* Re: [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field
2025-02-10 20:12 ` Cupertino Miranda
@ 2025-02-11 0:33 ` Andrii Nakryiko
2025-02-11 10:27 ` Cupertino Miranda
0 siblings, 1 reply; 8+ messages in thread
From: Andrii Nakryiko @ 2025-02-11 0:33 UTC (permalink / raw)
To: Cupertino Miranda
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team,
Emil Tsalapatis
On Mon, Feb 10, 2025 at 12:13 PM Cupertino Miranda
<cupertino.miranda@oracle.com> wrote:
>
> Hi Andrii,
>
> On 07-02-2025 01:48, Andrii Nakryiko wrote:
> > Add a simple repro for the issue of miscalculating the LDX/STX/ST CO-RE
> > relocation size adjustment when the CO-RE relocation target type is an
> > ARRAY.
> >
> > We need to make sure that the compiler generates an LDX/STX/ST
> > instruction with a CO-RE relocation against the entire ARRAY type, not
> > the ARRAY's element. With the code pattern in the selftest, we get this:
> >
> > 59: 61 71 00 00 00 00 00 00 w1 = *(u32 *)(r7 + 0x0)
> > 00000000000001d8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)
> >
> > Here the offset of `int a[5]` is embedded (through the CO-RE
> > relocation) into the memory load instruction itself.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > tools/testing/selftests/bpf/prog_tests/core_reloc.c | 6 ++++--
> > ...f__core_reloc_arrays___err_bad_signed_arr_elem_sz.c | 3 +++
> > tools/testing/selftests/bpf/progs/core_reloc_types.h | 10 ++++++++++
> > .../selftests/bpf/progs/test_core_reloc_arrays.c | 5 +++++
> > 4 files changed, 22 insertions(+), 2 deletions(-)
> > create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> > index e10ea92c3fe2..08963c82f30b 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
> > @@ -85,11 +85,11 @@ static int duration = 0;
> > #define NESTING_ERR_CASE(name) { \
> > NESTING_CASE_COMMON(name), \
> > .fails = true, \
> > - .run_btfgen_fails = true, \
> > + .run_btfgen_fails = true, \
> > }
> >
> > #define ARRAYS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) { \
> > - .a = { [2] = 1 }, \
> > + .a = { [2] = 1, [3] = 11 }, \
> > .b = { [1] = { [2] = { [3] = 2 } } }, \
> > .c = { [1] = { .c = 3 } }, \
> > .d = { [0] = { [0] = { .d = 4 } } }, \
> > @@ -108,6 +108,7 @@ static int duration = 0;
> > .input_len = sizeof(struct core_reloc_##name), \
> > .output = STRUCT_TO_CHAR_PTR(core_reloc_arrays_output) { \
> > .a2 = 1, \
> > + .a3 = 12, \
> > .b123 = 2, \
> > .c1c = 3, \
> > .d00d = 4, \
> > @@ -602,6 +603,7 @@ static const struct core_reloc_test_case test_cases[] = {
> > ARRAYS_ERR_CASE(arrays___err_non_array),
> > ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
> > ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
> > + ARRAYS_ERR_CASE(arrays___err_bad_signed_arr_elem_sz),
> >
> > /* enum/ptr/int handling scenarios */
> > PRIMITIVES_CASE(primitives),
> > diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
> > new file mode 100644
> > index 000000000000..21a560427b10
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
> > @@ -0,0 +1,3 @@
> > +#include "core_reloc_types.h"
> > +
> > +void f(struct core_reloc_arrays___err_bad_signed_arr_elem_sz x) {}
> > diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
> > index fd8e1b4c6762..5760ae015e09 100644
> > --- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
> > +++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
> > @@ -347,6 +347,7 @@ struct core_reloc_nesting___err_too_deep {
> > */
> > struct core_reloc_arrays_output {
> > int a2;
> > + int a3;
> > char b123;
> > int c1c;
> > int d00d;
> > @@ -455,6 +456,15 @@ struct core_reloc_arrays___err_bad_zero_sz_arr {
> > struct core_reloc_arrays_substruct d[1][2];
> > };
> >
> > +struct core_reloc_arrays___err_bad_signed_arr_elem_sz {
> > + /* int -> short (signed!): not supported case */
> > + short a[5];
> > + char b[2][3][4];
> > + struct core_reloc_arrays_substruct c[3];
> > + struct core_reloc_arrays_substruct d[1][2];
> > + struct core_reloc_arrays_substruct f[][2];
> > +};
> > +
> > /*
> > * PRIMITIVES
> > */
> > diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> > index 51b3f79df523..448403634eea 100644
> > --- a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> > +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> > @@ -15,6 +15,7 @@ struct {
> >
> > struct core_reloc_arrays_output {
> > int a2;
> > + int a3;
> > char b123;
> > int c1c;
> > int d00d;
> > @@ -41,6 +42,7 @@ int test_core_arrays(void *ctx)
> > {
> > struct core_reloc_arrays *in = (void *)&data.in;
> > struct core_reloc_arrays_output *out = (void *)&data.out;
> > + int *a;
> >
> > if (CORE_READ(&out->a2, &in->a[2]))
> > return 1;
> > @@ -53,6 +55,9 @@ int test_core_arrays(void *ctx)
> > if (CORE_READ(&out->f01c, &in->f[0][1].c))
> > return 1;
> >
> > + a = __builtin_preserve_access_index(({ in->a; }));
> > + out->a3 = a[0] + a[1] + a[2] + a[3];
> Just trying to understand the expectation from the compiler and CO-RE
> in this case: do you expect that all those a[n] accesses would generate
> CO-RE relocations assuming the element size of in->a?
>
Well, I only care to get an LDX instruction with an associated in->a
CO-RE relocation. This is what Clang currently generates for this piece
of code. You can see that it combines an LDX+CO-RE relo for a[0] with
non-CO-RE-relocated LDXs for a[1], a[2], a[3], whose base was relocated
with CO-RE a bit earlier.
44: 18 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r7 = 0x0 ll
0000000000000160: R_BPF_64_64 data
...
55: b7 01 00 00 00 00 00 00 r1 = 0x0
00000000000001b8: CO-RE <byte_off> [5] struct
core_reloc_arrays::a (0:0)
56: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0x0 ll
00000000000001c0: R_BPF_64_64 data
58: 0f 12 00 00 00 00 00 00 r2 += r1
59: 61 71 00 00 00 00 00 00 w1 = *(u32 *)(r7 + 0x0)
00000000000001d8: CO-RE <byte_off> [5] struct
core_reloc_arrays::a (0:0)
60: 61 23 04 00 00 00 00 00 w3 = *(u32 *)(r2 + 0x4)
61: 0c 13 00 00 00 00 00 00 w3 += w1
62: 61 21 08 00 00 00 00 00 w1 = *(u32 *)(r2 + 0x8)
63: 0c 13 00 00 00 00 00 00 w3 += w1
64: 61 21 0c 00 00 00 00 00 w1 = *(u32 *)(r2 + 0xc)
65: 0c 13 00 00 00 00 00 00 w3 += w1
66: 63 37 04 01 00 00 00 00 *(u32 *)(r7 + 0x104) = w3
Clang might change its code generation pattern in the future, of course,
but at least as of right now I know I did test this logic :) Ideally
I'd be able to generate embedded asm with a CO-RE relocation, but I'm
not sure that's supported today.
> > +
> > return 0;
> > }
> >
>
* Re: [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field
2025-02-11 0:33 ` Andrii Nakryiko
@ 2025-02-11 10:27 ` Cupertino Miranda
0 siblings, 0 replies; 8+ messages in thread
From: Cupertino Miranda @ 2025-02-11 10:27 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team,
Emil Tsalapatis
On 11-02-2025 00:33, Andrii Nakryiko wrote:
> On Mon, Feb 10, 2025 at 12:13 PM Cupertino Miranda
> <cupertino.miranda@oracle.com> wrote:
>>
>> Hi Andrii,
>>
>> On 07-02-2025 01:48, Andrii Nakryiko wrote:
>>> Add a simple repro for the issue of miscalculating the LDX/STX/ST CO-RE
>>> relocation size adjustment when the CO-RE relocation target type is an
>>> ARRAY.
>>>
>>> We need to make sure that the compiler generates an LDX/STX/ST
>>> instruction with a CO-RE relocation against the entire ARRAY type, not
>>> the ARRAY's element. With the code pattern in the selftest, we get this:
>>>
>>> 59: 61 71 00 00 00 00 00 00 w1 = *(u32 *)(r7 + 0x0)
>>> 00000000000001d8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)
>>>
>>> Here the offset of `int a[5]` is embedded (through the CO-RE
>>> relocation) into the memory load instruction itself.
>>>
>>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>>> ---
>>> tools/testing/selftests/bpf/prog_tests/core_reloc.c | 6 ++++--
>>> ...f__core_reloc_arrays___err_bad_signed_arr_elem_sz.c | 3 +++
>>> tools/testing/selftests/bpf/progs/core_reloc_types.h | 10 ++++++++++
>>> .../selftests/bpf/progs/test_core_reloc_arrays.c | 5 +++++
>>> 4 files changed, 22 insertions(+), 2 deletions(-)
>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
>>>
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
>>> index e10ea92c3fe2..08963c82f30b 100644
>>> --- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
>>> +++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
>>> @@ -85,11 +85,11 @@ static int duration = 0;
>>> #define NESTING_ERR_CASE(name) { \
>>> NESTING_CASE_COMMON(name), \
>>> .fails = true, \
>>> - .run_btfgen_fails = true, \
>>> + .run_btfgen_fails = true, \
>>> }
>>>
>>> #define ARRAYS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) { \
>>> - .a = { [2] = 1 }, \
>>> + .a = { [2] = 1, [3] = 11 }, \
>>> .b = { [1] = { [2] = { [3] = 2 } } }, \
>>> .c = { [1] = { .c = 3 } }, \
>>> .d = { [0] = { [0] = { .d = 4 } } }, \
>>> @@ -108,6 +108,7 @@ static int duration = 0;
>>> .input_len = sizeof(struct core_reloc_##name), \
>>> .output = STRUCT_TO_CHAR_PTR(core_reloc_arrays_output) { \
>>> .a2 = 1, \
>>> + .a3 = 12, \
>>> .b123 = 2, \
>>> .c1c = 3, \
>>> .d00d = 4, \
>>> @@ -602,6 +603,7 @@ static const struct core_reloc_test_case test_cases[] = {
>>> ARRAYS_ERR_CASE(arrays___err_non_array),
>>> ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
>>> ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
>>> + ARRAYS_ERR_CASE(arrays___err_bad_signed_arr_elem_sz),
>>>
>>> /* enum/ptr/int handling scenarios */
>>> PRIMITIVES_CASE(primitives),
>>> diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
>>> new file mode 100644
>>> index 000000000000..21a560427b10
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c
>>> @@ -0,0 +1,3 @@
>>> +#include "core_reloc_types.h"
>>> +
>>> +void f(struct core_reloc_arrays___err_bad_signed_arr_elem_sz x) {}
>>> diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
>>> index fd8e1b4c6762..5760ae015e09 100644
>>> --- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
>>> +++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
>>> @@ -347,6 +347,7 @@ struct core_reloc_nesting___err_too_deep {
>>> */
>>> struct core_reloc_arrays_output {
>>> int a2;
>>> + int a3;
>>> char b123;
>>> int c1c;
>>> int d00d;
>>> @@ -455,6 +456,15 @@ struct core_reloc_arrays___err_bad_zero_sz_arr {
>>> struct core_reloc_arrays_substruct d[1][2];
>>> };
>>>
>>> +struct core_reloc_arrays___err_bad_signed_arr_elem_sz {
>>> + /* int -> short (signed!): not supported case */
>>> + short a[5];
>>> + char b[2][3][4];
>>> + struct core_reloc_arrays_substruct c[3];
>>> + struct core_reloc_arrays_substruct d[1][2];
>>> + struct core_reloc_arrays_substruct f[][2];
>>> +};
>>> +
>>> /*
>>> * PRIMITIVES
>>> */
>>> diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
>>> index 51b3f79df523..448403634eea 100644
>>> --- a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
>>> +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
>>> @@ -15,6 +15,7 @@ struct {
>>>
>>> struct core_reloc_arrays_output {
>>> int a2;
>>> + int a3;
>>> char b123;
>>> int c1c;
>>> int d00d;
>>> @@ -41,6 +42,7 @@ int test_core_arrays(void *ctx)
>>> {
>>> struct core_reloc_arrays *in = (void *)&data.in;
>>> struct core_reloc_arrays_output *out = (void *)&data.out;
>>> + int *a;
>>>
>>> if (CORE_READ(&out->a2, &in->a[2]))
>>> return 1;
>>> @@ -53,6 +55,9 @@ int test_core_arrays(void *ctx)
>>> if (CORE_READ(&out->f01c, &in->f[0][1].c))
>>> return 1;
>>>
>>> + a = __builtin_preserve_access_index(({ in->a; }));
>>> + out->a3 = a[0] + a[1] + a[2] + a[3];
>> Just trying to understand what the expectation from the compiler and
>> CO-RE is in this case. Do you expect that all those a[n] accesses
>> would generate CO-RE relocations assuming the element size of in->a?
>>
>
> Well, I only care to get an LDX instruction with an associated in->a
> CO-RE relocation. This is what Clang currently generates for this piece
> of code. You can see that it combines LDX+CO-RE relo for a[0], and
> then non-CO-RE-relocated LDX for a[1], a[2], a[3], where the base was
> relocated with CO-RE a bit earlier.
>
> 44: 18 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r7 = 0x0 ll
> 0000000000000160: R_BPF_64_64 data
>
> ...
>
> 55: b7 01 00 00 00 00 00 00 r1 = 0x0
> 00000000000001b8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)
> 56: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0x0 ll
> 00000000000001c0: R_BPF_64_64 data
> 58: 0f 12 00 00 00 00 00 00 r2 += r1
> 59: 61 71 00 00 00 00 00 00 w1 = *(u32 *)(r7 + 0x0)
> 00000000000001d8: CO-RE <byte_off> [5] struct core_reloc_arrays::a (0:0)
> 60: 61 23 04 00 00 00 00 00 w3 = *(u32 *)(r2 + 0x4)
> 61: 0c 13 00 00 00 00 00 00 w3 += w1
> 62: 61 21 08 00 00 00 00 00 w1 = *(u32 *)(r2 + 0x8)
> 63: 0c 13 00 00 00 00 00 00 w3 += w1
> 64: 61 21 0c 00 00 00 00 00 w1 = *(u32 *)(r2 + 0xc)
> 65: 0c 13 00 00 00 00 00 00 w3 += w1
> 66: 63 37 04 01 00 00 00 00 *(u32 *)(r7 + 0x104) = w3
>
> Clang might change code generation pattern in the future, of course,
> but at least as of right now I know I did test this logic :) Ideally
> I'd be able to generate embedded asm with CO-RE relocation, but I'm
> not sure that's supported today.
Ok, good! I just misread it. :)
Thanks!
>
>>> +
>>> return 0;
>>> }
>>>
>>
* Re: [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
2025-02-07 1:48 [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Andrii Nakryiko
2025-02-07 1:48 ` [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field Andrii Nakryiko
2025-02-07 21:45 ` [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Eduard Zingerman
@ 2025-02-15 4:10 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 8+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-02-15 4:10 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team, emil
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Thu, 6 Feb 2025 17:48:08 -0800 you wrote:
> Libbpf has a somewhat obscure feature of automatically adjusting the
> "size" of LDX/STX/ST instruction (memory store and load instructions),
> based on originally recorded access size (u8, u16, u32, or u64) and the
> actual size of the field on target kernel. This is meant to facilitate
> using BPF CO-RE on 32-bit architectures (pointers are always 64-bit in
> BPF, but host kernel's BTF will have it as 32-bit type), as well as
> generally supporting safe type changes (unsigned integer type changes
> can be transparently "relocated").
>
> [...]
Here is the summary with links:
- [bpf-next,1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
https://git.kernel.org/bpf/bpf-next/c/06096d19ee38
- [bpf-next,2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field
https://git.kernel.org/bpf/bpf-next/c/4eb93fea5919
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2025-02-15 4:10 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-02-07 1:48 [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Andrii Nakryiko
2025-02-07 1:48 ` [PATCH bpf-next 2/2] selftests/bpf: add test for LDX/STX/ST relocations over array field Andrii Nakryiko
2025-02-10 20:12 ` Cupertino Miranda
2025-02-11 0:33 ` Andrii Nakryiko
2025-02-11 10:27 ` Cupertino Miranda
2025-02-07 21:45 ` [PATCH bpf-next 1/2] libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic Eduard Zingerman
2025-02-10 20:05 ` Andrii Nakryiko
2025-02-15 4:10 ` patchwork-bot+netdevbpf