* [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups
@ 2024-06-20 9:17 Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 1/6] libbpf: BTF relocation followup fixing naming, loop logic Alan Maguire
` (6 more replies)
0 siblings, 7 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
Follow-up to the resilient split BTF series [1]:
- cleaning up libbpf relocation code (patch 1);
- adding 'struct module' support for base BTF data (patch 2);
- splitting out field iteration code into separate file (patch 3);
- sharing libbpf relocation code with the kernel (patch 4);
- adding a kbuild --btf_features flag to generate distilled base
BTF in the module-specific case where KBUILD_EXTMOD is true
(patch 5); and
- adding test coverage for module-based kfunc dtor (patch 6)
Generation of distilled base BTF for modules requires the pahole patch
at [2], but without it we just won't get distilled base BTF (and thus BTF
relocation on module load) for bpf_testmod.ko.
Changes since v1 [3]:
- fixed line lengths and made comparison an explicit == 0 (Andrii, patch 1)
- moved btf_iter.c changes to separate patch (Andrii, patch 3)
- grouped common targets in kernel/bpf/Makefile (Andrii, patch 4)
- updated bpf_testmod ctx alloc to use GFP_ATOMIC, and updated dtor
selftest to use map-based dtor cleanup (Eduard, patch 6)
[1] https://lore.kernel.org/bpf/20240613095014.357981-1-alan.maguire@oracle.com/
[2] https://lore.kernel.org/bpf/20240517102714.4072080-1-alan.maguire@oracle.com/
[3] https://lore.kernel.org/bpf/20240618162449.809994-1-alan.maguire@oracle.com/
Alan Maguire (6):
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: store BTF base pointer in struct module
libbpf: split field iter code into its own file kernel
libbpf,bpf: share BTF relocate-related code with kernel
kbuild,bpf: add module-specific pahole flags for distilled base BTF
selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod
include/linux/btf.h | 64 +++++++
include/linux/module.h | 2 +
kernel/bpf/Makefile | 8 +-
kernel/bpf/btf.c | 176 ++++++++++++-----
kernel/module/main.c | 5 +-
scripts/Makefile.btf | 5 +
scripts/Makefile.modfinal | 2 +-
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/btf.c | 162 ----------------
tools/lib/bpf/btf_iter.c | 177 ++++++++++++++++++
tools/lib/bpf/btf_relocate.c | 95 ++++++----
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 46 +++++
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/progs/kfunc_call_test.c | 37 ++++
15 files changed, 532 insertions(+), 259 deletions(-)
create mode 100644 tools/lib/bpf/btf_iter.c
--
2.31.1
* [PATCH v2 bpf-next 1/6] libbpf: BTF relocation followup fixing naming, loop logic
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 2/6] module, bpf: store BTF base pointer in struct module Alan Maguire
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire,
Andrii Nakryiko
Use less verbose names in the BTF relocation code and fix an off-by-one
error and a typo in btf_relocate.c. Simplify the loop over matching
distilled types by moving the match check conditions into the loop
guard rather than assigning a _next value in the loop body.
Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/lib/bpf/btf_relocate.c | 72 ++++++++++++++++--------------------
1 file changed, 31 insertions(+), 41 deletions(-)
diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
index eabb8755f662..23a41fb03e0d 100644
--- a/tools/lib/bpf/btf_relocate.c
+++ b/tools/lib/bpf/btf_relocate.c
@@ -160,7 +160,7 @@ static int btf_mark_embedded_composite_type_ids(struct btf_relocate *r, __u32 i)
*/
static int btf_relocate_map_distilled_base(struct btf_relocate *r)
{
- struct btf_name_info *dist_base_info_sorted, *dist_base_info_sorted_end;
+ struct btf_name_info *info, *info_end;
struct btf_type *base_t, *dist_t;
__u8 *base_name_cnt = NULL;
int err = 0;
@@ -169,26 +169,24 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
/* generate a sort index array of name/type ids sorted by name for
* distilled base BTF to speed name-based lookups.
*/
- dist_base_info_sorted = calloc(r->nr_dist_base_types, sizeof(*dist_base_info_sorted));
- if (!dist_base_info_sorted) {
+ info = calloc(r->nr_dist_base_types, sizeof(*info));
+ if (!info) {
err = -ENOMEM;
goto done;
}
- dist_base_info_sorted_end = dist_base_info_sorted + r->nr_dist_base_types;
+ info_end = info + r->nr_dist_base_types;
for (id = 0; id < r->nr_dist_base_types; id++) {
dist_t = btf_type_by_id(r->dist_base_btf, id);
- dist_base_info_sorted[id].name = btf__name_by_offset(r->dist_base_btf,
- dist_t->name_off);
- dist_base_info_sorted[id].id = id;
- dist_base_info_sorted[id].size = dist_t->size;
- dist_base_info_sorted[id].needs_size = true;
+ info[id].name = btf__name_by_offset(r->dist_base_btf, dist_t->name_off);
+ info[id].id = id;
+ info[id].size = dist_t->size;
+ info[id].needs_size = true;
}
- qsort(dist_base_info_sorted, r->nr_dist_base_types, sizeof(*dist_base_info_sorted),
- cmp_btf_name_size);
+ qsort(info, r->nr_dist_base_types, sizeof(*info), cmp_btf_name_size);
/* Mark distilled base struct/union members of split BTF structs/unions
* in id_map with BTF_IS_EMBEDDED; this signals that these types
- * need to match both name and size, otherwise embeddding the base
+ * need to match both name and size, otherwise embedding the base
* struct/union in the split type is invalid.
*/
for (id = r->nr_dist_base_types; id < r->nr_split_types; id++) {
@@ -216,8 +214,7 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
/* Now search base BTF for matching distilled base BTF types. */
for (id = 1; id < r->nr_base_types; id++) {
- struct btf_name_info *dist_name_info, *dist_name_info_next = NULL;
- struct btf_name_info base_name_info = {};
+ struct btf_name_info *dist_info, base_info = {};
int dist_kind, base_kind;
base_t = btf_type_by_id(r->base_btf, id);
@@ -225,16 +222,16 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
if (!base_t->name_off)
continue;
base_kind = btf_kind(base_t);
- base_name_info.id = id;
- base_name_info.name = btf__name_by_offset(r->base_btf, base_t->name_off);
+ base_info.id = id;
+ base_info.name = btf__name_by_offset(r->base_btf, base_t->name_off);
switch (base_kind) {
case BTF_KIND_INT:
case BTF_KIND_FLOAT:
case BTF_KIND_ENUM:
case BTF_KIND_ENUM64:
/* These types should match both name and size */
- base_name_info.needs_size = true;
- base_name_info.size = base_t->size;
+ base_info.needs_size = true;
+ base_info.size = base_t->size;
break;
case BTF_KIND_FWD:
/* No size considerations for fwds. */
@@ -248,31 +245,24 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
* unless corresponding _base_ types to match them are
* missing.
*/
- base_name_info.needs_size = base_name_cnt[base_t->name_off] > 1;
- base_name_info.size = base_t->size;
+ base_info.needs_size = base_name_cnt[base_t->name_off] > 1;
+ base_info.size = base_t->size;
break;
default:
continue;
}
/* iterate over all matching distilled base types */
- for (dist_name_info = search_btf_name_size(&base_name_info, dist_base_info_sorted,
- r->nr_dist_base_types);
- dist_name_info != NULL; dist_name_info = dist_name_info_next) {
- /* Are there more distilled matches to process after
- * this one?
- */
- dist_name_info_next = dist_name_info + 1;
- if (dist_name_info_next >= dist_base_info_sorted_end ||
- cmp_btf_name_size(&base_name_info, dist_name_info_next))
- dist_name_info_next = NULL;
-
- if (!dist_name_info->id || dist_name_info->id > r->nr_dist_base_types) {
+ for (dist_info = search_btf_name_size(&base_info, info, r->nr_dist_base_types);
+ dist_info != NULL && dist_info < info_end &&
+ cmp_btf_name_size(&base_info, dist_info) == 0;
+ dist_info++) {
+ if (!dist_info->id || dist_info->id >= r->nr_dist_base_types) {
pr_warn("base BTF id [%d] maps to invalid distilled base BTF id [%d]\n",
- id, dist_name_info->id);
+ id, dist_info->id);
err = -EINVAL;
goto done;
}
- dist_t = btf_type_by_id(r->dist_base_btf, dist_name_info->id);
+ dist_t = btf_type_by_id(r->dist_base_btf, dist_info->id);
dist_kind = btf_kind(dist_t);
/* Validate that the found distilled type is compatible.
@@ -319,15 +309,15 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
/* size verification is required for embedded
* struct/unions.
*/
- if (r->id_map[dist_name_info->id] == BTF_IS_EMBEDDED &&
+ if (r->id_map[dist_info->id] == BTF_IS_EMBEDDED &&
base_t->size != dist_t->size)
continue;
break;
default:
continue;
}
- if (r->id_map[dist_name_info->id] &&
- r->id_map[dist_name_info->id] != BTF_IS_EMBEDDED) {
+ if (r->id_map[dist_info->id] &&
+ r->id_map[dist_info->id] != BTF_IS_EMBEDDED) {
/* we already have a match; this tells us that
* multiple base types of the same name
* have the same size, since for cases where
@@ -337,13 +327,13 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
* to in base BTF, so error out.
*/
pr_warn("distilled base BTF type '%s' [%u], size %u has multiple candidates of the same size (ids [%u, %u]) in base BTF\n",
- base_name_info.name, dist_name_info->id,
- base_t->size, id, r->id_map[dist_name_info->id]);
+ base_info.name, dist_info->id,
+ base_t->size, id, r->id_map[dist_info->id]);
err = -EINVAL;
goto done;
}
/* map id and name */
- r->id_map[dist_name_info->id] = id;
+ r->id_map[dist_info->id] = id;
r->str_map[dist_t->name_off] = base_t->name_off;
}
}
@@ -362,7 +352,7 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
}
done:
free(base_name_cnt);
- free(dist_base_info_sorted);
+ free(info);
return err;
}
--
2.31.1
* [PATCH v2 bpf-next 2/6] module, bpf: store BTF base pointer in struct module
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 1/6] libbpf: BTF relocation followup fixing naming, loop logic Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 3/6] libbpf: split field iter code into its own file kernel Alan Maguire
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
...as this will allow split BTF modules with a base BTF
representation (rather than the full vmlinux BTF at the time of BTF
encoding) to resolve their references to kernel types in a way that is
more resilient to small changes in kernel types. This lets modules
that are not rebuilt with every kernel build provide more resilient
BTF, rather than having that BTF invalidated every time the BTF ids of
core kernel types change.
Fields are ordered to avoid holes in struct module.
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
include/linux/module.h | 2 ++
kernel/module/main.c | 5 ++++-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index ffa1c603163c..b79d926cae8a 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -509,7 +509,9 @@ struct module {
#endif
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
unsigned int btf_data_size;
+ unsigned int btf_base_data_size;
void *btf_data;
+ void *btf_base_data;
#endif
#ifdef CONFIG_JUMP_LABEL
struct jump_entry *jump_entries;
diff --git a/kernel/module/main.c b/kernel/module/main.c
index d18a94b973e1..d9592195c5bb 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2166,6 +2166,8 @@ static int find_module_sections(struct module *mod, struct load_info *info)
#endif
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
mod->btf_data = any_section_objs(info, ".BTF", 1, &mod->btf_data_size);
+ mod->btf_base_data = any_section_objs(info, ".BTF.base", 1,
+ &mod->btf_base_data_size);
#endif
#ifdef CONFIG_JUMP_LABEL
mod->jump_entries = section_objs(info, "__jump_table",
@@ -2590,8 +2592,9 @@ static noinline int do_init_module(struct module *mod)
}
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
- /* .BTF is not SHF_ALLOC and will get removed, so sanitize pointer */
+ /* .BTF is not SHF_ALLOC and will get removed, so sanitize pointers */
mod->btf_data = NULL;
+ mod->btf_base_data = NULL;
#endif
/*
* We want to free module_init, but be aware that kallsyms may be
--
2.31.1
* [PATCH v2 bpf-next 3/6] libbpf: split field iter code into its own file kernel
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 1/6] libbpf: BTF relocation followup fixing naming, loop logic Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 2/6] module, bpf: store BTF base pointer in struct module Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 4/6] libbpf,bpf: share BTF relocate-related code with kernel Alan Maguire
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
This will allow it to be shared with the kernel. No functional change.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
---
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/btf.c | 162 -------------------------------------
tools/lib/bpf/btf_iter.c | 169 +++++++++++++++++++++++++++++++++++++++
3 files changed, 170 insertions(+), 163 deletions(-)
create mode 100644 tools/lib/bpf/btf_iter.c
diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
index 336da6844d42..e2cd558ca0b4 100644
--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build
@@ -1,4 +1,4 @@
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
- usdt.o zip.o elf.o features.o btf_relocate.o
+ usdt.o zip.o elf.o features.o btf_iter.o btf_relocate.o
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index ef1b2f573c1b..0c0f60cad769 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -5093,168 +5093,6 @@ struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_bt
return btf__parse_split(path, vmlinux_btf);
}
-int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t, enum btf_field_iter_kind iter_kind)
-{
- it->p = NULL;
- it->m_idx = -1;
- it->off_idx = 0;
- it->vlen = 0;
-
- switch (iter_kind) {
- case BTF_FIELD_ITER_IDS:
- switch (btf_kind(t)) {
- case BTF_KIND_UNKN:
- case BTF_KIND_INT:
- case BTF_KIND_FLOAT:
- case BTF_KIND_ENUM:
- case BTF_KIND_ENUM64:
- it->desc = (struct btf_field_desc) {};
- break;
- case BTF_KIND_FWD:
- case BTF_KIND_CONST:
- case BTF_KIND_VOLATILE:
- case BTF_KIND_RESTRICT:
- case BTF_KIND_PTR:
- case BTF_KIND_TYPEDEF:
- case BTF_KIND_FUNC:
- case BTF_KIND_VAR:
- case BTF_KIND_DECL_TAG:
- case BTF_KIND_TYPE_TAG:
- it->desc = (struct btf_field_desc) { 1, {offsetof(struct btf_type, type)} };
- break;
- case BTF_KIND_ARRAY:
- it->desc = (struct btf_field_desc) {
- 2, {sizeof(struct btf_type) + offsetof(struct btf_array, type),
- sizeof(struct btf_type) + offsetof(struct btf_array, index_type)}
- };
- break;
- case BTF_KIND_STRUCT:
- case BTF_KIND_UNION:
- it->desc = (struct btf_field_desc) {
- 0, {},
- sizeof(struct btf_member),
- 1, {offsetof(struct btf_member, type)}
- };
- break;
- case BTF_KIND_FUNC_PROTO:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, type)},
- sizeof(struct btf_param),
- 1, {offsetof(struct btf_param, type)}
- };
- break;
- case BTF_KIND_DATASEC:
- it->desc = (struct btf_field_desc) {
- 0, {},
- sizeof(struct btf_var_secinfo),
- 1, {offsetof(struct btf_var_secinfo, type)}
- };
- break;
- default:
- return -EINVAL;
- }
- break;
- case BTF_FIELD_ITER_STRS:
- switch (btf_kind(t)) {
- case BTF_KIND_UNKN:
- it->desc = (struct btf_field_desc) {};
- break;
- case BTF_KIND_INT:
- case BTF_KIND_FLOAT:
- case BTF_KIND_FWD:
- case BTF_KIND_ARRAY:
- case BTF_KIND_CONST:
- case BTF_KIND_VOLATILE:
- case BTF_KIND_RESTRICT:
- case BTF_KIND_PTR:
- case BTF_KIND_TYPEDEF:
- case BTF_KIND_FUNC:
- case BTF_KIND_VAR:
- case BTF_KIND_DECL_TAG:
- case BTF_KIND_TYPE_TAG:
- case BTF_KIND_DATASEC:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, name_off)}
- };
- break;
- case BTF_KIND_ENUM:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, name_off)},
- sizeof(struct btf_enum),
- 1, {offsetof(struct btf_enum, name_off)}
- };
- break;
- case BTF_KIND_ENUM64:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, name_off)},
- sizeof(struct btf_enum64),
- 1, {offsetof(struct btf_enum64, name_off)}
- };
- break;
- case BTF_KIND_STRUCT:
- case BTF_KIND_UNION:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, name_off)},
- sizeof(struct btf_member),
- 1, {offsetof(struct btf_member, name_off)}
- };
- break;
- case BTF_KIND_FUNC_PROTO:
- it->desc = (struct btf_field_desc) {
- 1, {offsetof(struct btf_type, name_off)},
- sizeof(struct btf_param),
- 1, {offsetof(struct btf_param, name_off)}
- };
- break;
- default:
- return -EINVAL;
- }
- break;
- default:
- return -EINVAL;
- }
-
- if (it->desc.m_sz)
- it->vlen = btf_vlen(t);
-
- it->p = t;
- return 0;
-}
-
-__u32 *btf_field_iter_next(struct btf_field_iter *it)
-{
- if (!it->p)
- return NULL;
-
- if (it->m_idx < 0) {
- if (it->off_idx < it->desc.t_off_cnt)
- return it->p + it->desc.t_offs[it->off_idx++];
- /* move to per-member iteration */
- it->m_idx = 0;
- it->p += sizeof(struct btf_type);
- it->off_idx = 0;
- }
-
- /* if type doesn't have members, stop */
- if (it->desc.m_sz == 0) {
- it->p = NULL;
- return NULL;
- }
-
- if (it->off_idx >= it->desc.m_off_cnt) {
- /* exhausted this member's fields, go to the next member */
- it->m_idx++;
- it->p += it->desc.m_sz;
- it->off_idx = 0;
- }
-
- if (it->m_idx < it->vlen)
- return it->p + it->desc.m_offs[it->off_idx++];
-
- it->p = NULL;
- return NULL;
-}
-
int btf_ext_visit_type_ids(struct btf_ext *btf_ext, type_id_visit_fn visit, void *ctx)
{
const struct btf_ext_info *seg;
diff --git a/tools/lib/bpf/btf_iter.c b/tools/lib/bpf/btf_iter.c
new file mode 100644
index 000000000000..c308aa60285d
--- /dev/null
+++ b/tools/lib/bpf/btf_iter.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/* Copyright (c) 2021 Facebook */
+/* Copyright (c) 2024, Oracle and/or its affiliates. */
+
+#include "btf.h"
+#include "libbpf_internal.h"
+
+int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
+ enum btf_field_iter_kind iter_kind)
+{
+ it->p = NULL;
+ it->m_idx = -1;
+ it->off_idx = 0;
+ it->vlen = 0;
+
+ switch (iter_kind) {
+ case BTF_FIELD_ITER_IDS:
+ switch (btf_kind(t)) {
+ case BTF_KIND_UNKN:
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ it->desc = (struct btf_field_desc) {};
+ break;
+ case BTF_KIND_FWD:
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_FUNC:
+ case BTF_KIND_VAR:
+ case BTF_KIND_DECL_TAG:
+ case BTF_KIND_TYPE_TAG:
+ it->desc = (struct btf_field_desc) { 1, {offsetof(struct btf_type, type)} };
+ break;
+ case BTF_KIND_ARRAY:
+ it->desc = (struct btf_field_desc) {
+ 2, {sizeof(struct btf_type) + offsetof(struct btf_array, type),
+ sizeof(struct btf_type) + offsetof(struct btf_array, index_type)}
+ };
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ it->desc = (struct btf_field_desc) {
+ 0, {},
+ sizeof(struct btf_member),
+ 1, {offsetof(struct btf_member, type)}
+ };
+ break;
+ case BTF_KIND_FUNC_PROTO:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, type)},
+ sizeof(struct btf_param),
+ 1, {offsetof(struct btf_param, type)}
+ };
+ break;
+ case BTF_KIND_DATASEC:
+ it->desc = (struct btf_field_desc) {
+ 0, {},
+ sizeof(struct btf_var_secinfo),
+ 1, {offsetof(struct btf_var_secinfo, type)}
+ };
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BTF_FIELD_ITER_STRS:
+ switch (btf_kind(t)) {
+ case BTF_KIND_UNKN:
+ it->desc = (struct btf_field_desc) {};
+ break;
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_FWD:
+ case BTF_KIND_ARRAY:
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_FUNC:
+ case BTF_KIND_VAR:
+ case BTF_KIND_DECL_TAG:
+ case BTF_KIND_TYPE_TAG:
+ case BTF_KIND_DATASEC:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)}
+ };
+ break;
+ case BTF_KIND_ENUM:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_enum),
+ 1, {offsetof(struct btf_enum, name_off)}
+ };
+ break;
+ case BTF_KIND_ENUM64:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_enum64),
+ 1, {offsetof(struct btf_enum64, name_off)}
+ };
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_member),
+ 1, {offsetof(struct btf_member, name_off)}
+ };
+ break;
+ case BTF_KIND_FUNC_PROTO:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_param),
+ 1, {offsetof(struct btf_param, name_off)}
+ };
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (it->desc.m_sz)
+ it->vlen = btf_vlen(t);
+
+ it->p = t;
+ return 0;
+}
+
+__u32 *btf_field_iter_next(struct btf_field_iter *it)
+{
+ if (!it->p)
+ return NULL;
+
+ if (it->m_idx < 0) {
+ if (it->off_idx < it->desc.t_off_cnt)
+ return it->p + it->desc.t_offs[it->off_idx++];
+ /* move to per-member iteration */
+ it->m_idx = 0;
+ it->p += sizeof(struct btf_type);
+ it->off_idx = 0;
+ }
+
+ /* if type doesn't have members, stop */
+ if (it->desc.m_sz == 0) {
+ it->p = NULL;
+ return NULL;
+ }
+
+ if (it->off_idx >= it->desc.m_off_cnt) {
+ /* exhausted this member's fields, go to the next member */
+ it->m_idx++;
+ it->p += it->desc.m_sz;
+ it->off_idx = 0;
+ }
+
+ if (it->m_idx < it->vlen)
+ return it->p + it->desc.m_offs[it->off_idx++];
+
+ it->p = NULL;
+ return NULL;
+}
--
2.31.1
* [PATCH v2 bpf-next 4/6] libbpf,bpf: share BTF relocate-related code with kernel
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
` (2 preceding siblings ...)
2024-06-20 9:17 ` [PATCH v2 bpf-next 3/6] libbpf: split field iter code into its own file kernel Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 5/6] kbuild,bpf: add module-specific pahole flags for distilled base BTF Alan Maguire
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
Share the relocation implementation with the kernel. As part of this,
we also need the type/string iteration functions, so share the
btf_iter.c file as well. Relocation code in the kernel and userspace
is identical save for the implementation of the reparenting of split
BTF to the relocated base BTF and the retrieval of the BTF header from
"struct btf"; these small functions need separate userspace and kernel
implementations for the separate "struct btf"s they operate upon.
One other wrinkle on the kernel side is that we have to remap the BTF
ids recorded in module .BTF_ids sections, since they were generated
with the type ids in use at BTF encoding time. btf_relocate()
optionally returns an array mapping from old BTF ids to relocated ids,
so we use that to fix up these references where needed for kfuncs.
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/btf.h | 64 +++++++++++++
kernel/bpf/Makefile | 8 +-
kernel/bpf/btf.c | 176 ++++++++++++++++++++++++-----------
tools/lib/bpf/btf_iter.c | 8 ++
tools/lib/bpf/btf_relocate.c | 23 +++++
5 files changed, 226 insertions(+), 53 deletions(-)
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 56d91daacdba..d199fa17abb4 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -140,6 +140,7 @@ extern const struct file_operations btf_fops;
const char *btf_get_name(const struct btf *btf);
void btf_get(struct btf *btf);
void btf_put(struct btf *btf);
+const struct btf_header *btf_header(const struct btf *btf);
int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz);
struct btf *btf_get_by_fd(int fd);
int btf_get_info_by_fd(const struct btf *btf,
@@ -212,8 +213,10 @@ int btf_get_fd_by_id(u32 id);
u32 btf_obj_id(const struct btf *btf);
bool btf_is_kernel(const struct btf *btf);
bool btf_is_module(const struct btf *btf);
+bool btf_is_vmlinux(const struct btf *btf);
struct module *btf_try_get_module(const struct btf *btf);
u32 btf_nr_types(const struct btf *btf);
+struct btf *btf_base_btf(const struct btf *btf);
bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
const struct btf_member *m,
u32 expected_offset, u32 expected_size);
@@ -339,6 +342,11 @@ static inline u8 btf_int_offset(const struct btf_type *t)
return BTF_INT_OFFSET(*(u32 *)(t + 1));
}
+static inline __u8 btf_int_bits(const struct btf_type *t)
+{
+ return BTF_INT_BITS(*(__u32 *)(t + 1));
+}
+
static inline bool btf_type_is_scalar(const struct btf_type *t)
{
return btf_type_is_int(t) || btf_type_is_enum(t);
@@ -478,6 +486,11 @@ static inline struct btf_param *btf_params(const struct btf_type *t)
return (struct btf_param *)(t + 1);
}
+static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
+{
+ return (struct btf_decl_tag *)(t + 1);
+}
+
static inline int btf_id_cmp_func(const void *a, const void *b)
{
const int *pa = a, *pb = b;
@@ -515,9 +528,38 @@ static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *
}
#endif
+enum btf_field_iter_kind {
+ BTF_FIELD_ITER_IDS,
+ BTF_FIELD_ITER_STRS,
+};
+
+struct btf_field_desc {
+ /* once-per-type offsets */
+ int t_off_cnt, t_offs[2];
+ /* member struct size, or zero, if no members */
+ int m_sz;
+ /* repeated per-member offsets */
+ int m_off_cnt, m_offs[1];
+};
+
+struct btf_field_iter {
+ struct btf_field_desc desc;
+ void *p;
+ int m_idx;
+ int off_idx;
+ int vlen;
+};
+
#ifdef CONFIG_BPF_SYSCALL
const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
+void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
+int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **map_ids);
+int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
+ enum btf_field_iter_kind iter_kind);
+__u32 *btf_field_iter_next(struct btf_field_iter *it);
+
const char *btf_name_by_offset(const struct btf *btf, u32 offset);
+const char *btf_str_by_offset(const struct btf *btf, u32 offset);
struct btf *btf_parse_vmlinux(void);
struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
@@ -544,6 +586,28 @@ static inline const struct btf_type *btf_type_by_id(const struct btf *btf,
{
return NULL;
}
+
+static inline void btf_set_base_btf(struct btf *btf, const struct btf *base_btf)
+{
+}
+
+static inline int btf_relocate(void *log, struct btf *btf, const struct btf *base_btf,
+ __u32 **map_ids)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
+ enum btf_field_iter_kind iter_kind)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline __u32 *btf_field_iter_next(struct btf_field_iter *it)
+{
+ return NULL;
+}
+
static inline const char *btf_name_by_offset(const struct btf *btf,
u32 offset)
{
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 7eb9ad3a3ae6..0291eef9ce92 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -50,5 +50,11 @@ endif
obj-$(CONFIG_BPF_PRELOAD) += preload/
obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
-$(obj)/relo_core.o: $(srctree)/tools/lib/bpf/relo_core.c FORCE
+obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
+obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
+
+# Some source files are common to libbpf.
+vpath %.c $(srctree)/kernel/bpf:$(srctree)/tools/lib/bpf
+
+$(obj)/%.o: %.c FORCE
$(call if_changed_rule,cc_o_c)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index ce4707968217..8e12cb80ba73 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -274,6 +274,7 @@ struct btf {
u32 start_str_off; /* first string offset (0 for base BTF) */
char name[MODULE_NAME_LEN];
bool kernel_btf;
+ __u32 *base_id_map; /* map from distilled base BTF -> vmlinux BTF ids */
};
enum verifier_phase {
@@ -530,6 +531,11 @@ static bool btf_type_is_decl_tag_target(const struct btf_type *t)
btf_type_is_var(t) || btf_type_is_typedef(t);
}
+bool btf_is_vmlinux(const struct btf *btf)
+{
+ return btf->kernel_btf && !btf->base_btf;
+}
+
u32 btf_nr_types(const struct btf *btf)
{
u32 total = 0;
@@ -772,7 +778,7 @@ static bool __btf_name_char_ok(char c, bool first)
return true;
}
-static const char *btf_str_by_offset(const struct btf *btf, u32 offset)
+const char *btf_str_by_offset(const struct btf *btf, u32 offset)
{
while (offset < btf->start_str_off)
btf = btf->base_btf;
@@ -1670,14 +1676,8 @@ static void btf_free_kfunc_set_tab(struct btf *btf)
if (!tab)
return;
- /* For module BTF, we directly assign the sets being registered, so
- * there is nothing to free except kfunc_set_tab.
- */
- if (btf_is_module(btf))
- goto free_tab;
for (hook = 0; hook < ARRAY_SIZE(tab->sets); hook++)
kfree(tab->sets[hook]);
-free_tab:
kfree(tab);
btf->kfunc_set_tab = NULL;
}
@@ -1735,7 +1735,12 @@ static void btf_free(struct btf *btf)
kvfree(btf->types);
kvfree(btf->resolved_sizes);
kvfree(btf->resolved_ids);
- kvfree(btf->data);
+ /* vmlinux does not allocate btf->data, it simply points it at
+ * __start_BTF.
+ */
+ if (!btf_is_vmlinux(btf))
+ kvfree(btf->data);
+ kvfree(btf->base_id_map);
kfree(btf);
}
@@ -1764,6 +1769,23 @@ void btf_put(struct btf *btf)
}
}
+struct btf *btf_base_btf(const struct btf *btf)
+{
+ return btf->base_btf;
+}
+
+const struct btf_header *btf_header(const struct btf *btf)
+{
+ return &btf->hdr;
+}
+
+void btf_set_base_btf(struct btf *btf, const struct btf *base_btf)
+{
+ btf->base_btf = (struct btf *)base_btf;
+ btf->start_id = btf_nr_types(base_btf);
+ btf->start_str_off = base_btf->hdr.str_len;
+}
+
static int env_resolve_init(struct btf_verifier_env *env)
{
struct btf *btf = env->btf;
@@ -6083,23 +6105,15 @@ int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_ty
BTF_ID_LIST(bpf_ctx_convert_btf_id)
BTF_ID(struct, bpf_ctx_convert)
-struct btf *btf_parse_vmlinux(void)
+static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name,
+ void *data, unsigned int data_size)
{
- struct btf_verifier_env *env = NULL;
- struct bpf_verifier_log *log;
struct btf *btf = NULL;
int err;
if (!IS_ENABLED(CONFIG_DEBUG_INFO_BTF))
return ERR_PTR(-ENOENT);
- env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN);
- if (!env)
- return ERR_PTR(-ENOMEM);
-
- log = &env->log;
- log->level = BPF_LOG_KERNEL;
-
btf = kzalloc(sizeof(*btf), GFP_KERNEL | __GFP_NOWARN);
if (!btf) {
err = -ENOMEM;
@@ -6107,10 +6121,10 @@ struct btf *btf_parse_vmlinux(void)
}
env->btf = btf;
- btf->data = __start_BTF;
- btf->data_size = __stop_BTF - __start_BTF;
+ btf->data = data;
+ btf->data_size = data_size;
btf->kernel_btf = true;
- snprintf(btf->name, sizeof(btf->name), "vmlinux");
+ snprintf(btf->name, sizeof(btf->name), "%s", name);
err = btf_parse_hdr(env);
if (err)
@@ -6130,20 +6144,11 @@ struct btf *btf_parse_vmlinux(void)
if (err)
goto errout;
- /* btf_parse_vmlinux() runs under bpf_verifier_lock */
- bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
-
refcount_set(&btf->refcnt, 1);
- err = btf_alloc_id(btf);
- if (err)
- goto errout;
-
- btf_verifier_env_free(env);
return btf;
errout:
- btf_verifier_env_free(env);
if (btf) {
kvfree(btf->types);
kfree(btf);
@@ -6151,19 +6156,61 @@ struct btf *btf_parse_vmlinux(void)
return ERR_PTR(err);
}
+struct btf *btf_parse_vmlinux(void)
+{
+ struct btf_verifier_env *env = NULL;
+ struct bpf_verifier_log *log;
+ struct btf *btf;
+ int err;
+
+ env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN);
+ if (!env)
+ return ERR_PTR(-ENOMEM);
+
+ log = &env->log;
+ log->level = BPF_LOG_KERNEL;
+ btf = btf_parse_base(env, "vmlinux", __start_BTF, __stop_BTF - __start_BTF);
+ if (IS_ERR(btf))
+ goto err_out;
+
+ /* btf_parse_vmlinux() runs under bpf_verifier_lock */
+ bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
+ err = btf_alloc_id(btf);
+ if (err) {
+ btf_free(btf);
+ btf = ERR_PTR(err);
+ }
+err_out:
+ btf_verifier_env_free(env);
+ return btf;
+}
+
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
-static struct btf *btf_parse_module(const char *module_name, const void *data, unsigned int data_size)
+/* If .BTF_ids section was created with distilled base BTF, both base and
+ * split BTF ids will need to be mapped to actual base/split ids for
+ * BTF now that it has been relocated.
+ */
+static __u32 btf_relocate_id(const struct btf *btf, __u32 id)
+{
+ if (!btf->base_btf || !btf->base_id_map)
+ return id;
+ return btf->base_id_map[id];
+}
+
+static struct btf *btf_parse_module(const char *module_name, const void *data,
+ unsigned int data_size, void *base_data,
+ unsigned int base_data_size)
{
+ struct btf *btf = NULL, *vmlinux_btf, *base_btf = NULL;
struct btf_verifier_env *env = NULL;
struct bpf_verifier_log *log;
- struct btf *btf = NULL, *base_btf;
- int err;
+ int err = 0;
- base_btf = bpf_get_btf_vmlinux();
- if (IS_ERR(base_btf))
- return base_btf;
- if (!base_btf)
+ vmlinux_btf = bpf_get_btf_vmlinux();
+ if (IS_ERR(vmlinux_btf))
+ return vmlinux_btf;
+ if (!vmlinux_btf)
return ERR_PTR(-EINVAL);
env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN);
@@ -6173,6 +6220,16 @@ static struct btf *btf_parse_module(const char *module_name, const void *data, u
log = &env->log;
log->level = BPF_LOG_KERNEL;
+ if (base_data) {
+ base_btf = btf_parse_base(env, ".BTF.base", base_data, base_data_size);
+ if (IS_ERR(base_btf)) {
+ err = PTR_ERR(base_btf);
+ goto errout;
+ }
+ } else {
+ base_btf = vmlinux_btf;
+ }
+
btf = kzalloc(sizeof(*btf), GFP_KERNEL | __GFP_NOWARN);
if (!btf) {
err = -ENOMEM;
@@ -6212,12 +6269,22 @@ static struct btf *btf_parse_module(const char *module_name, const void *data, u
if (err)
goto errout;
+ if (base_btf != vmlinux_btf) {
+ err = btf_relocate(btf, vmlinux_btf, &btf->base_id_map);
+ if (err)
+ goto errout;
+ btf_free(base_btf);
+ base_btf = vmlinux_btf;
+ }
+
btf_verifier_env_free(env);
refcount_set(&btf->refcnt, 1);
return btf;
errout:
btf_verifier_env_free(env);
+ if (base_btf != vmlinux_btf)
+ btf_free(base_btf);
if (btf) {
kvfree(btf->data);
kvfree(btf->types);
@@ -7770,7 +7837,8 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
err = -ENOMEM;
goto out;
}
- btf = btf_parse_module(mod->name, mod->btf_data, mod->btf_data_size);
+ btf = btf_parse_module(mod->name, mod->btf_data, mod->btf_data_size,
+ mod->btf_base_data, mod->btf_base_data_size);
if (IS_ERR(btf)) {
kfree(btf_mod);
if (!IS_ENABLED(CONFIG_MODULE_ALLOW_BTF_MISMATCH)) {
@@ -8094,7 +8162,7 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
bool add_filter = !!kset->filter;
struct btf_kfunc_set_tab *tab;
struct btf_id_set8 *set;
- u32 set_cnt;
+ u32 set_cnt, i;
int ret;
if (hook >= BTF_KFUNC_HOOK_MAX) {
@@ -8140,21 +8208,15 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
goto end;
}
- /* We don't need to allocate, concatenate, and sort module sets, because
- * only one is allowed per hook. Hence, we can directly assign the
- * pointer and return.
- */
- if (!vmlinux_set) {
- tab->sets[hook] = add_set;
- goto do_add_filter;
- }
-
/* In case of vmlinux sets, there may be more than one set being
* registered per hook. To create a unified set, we allocate a new set
* and concatenate all individual sets being registered. While each set
* is individually sorted, they may become unsorted when concatenated,
* hence re-sorting the final set again is required to make binary
* searching the set using btf_id_set8_contains function work.
+ *
+ * For module sets, we need to allocate as we may need to relocate
+ * BTF ids.
*/
set_cnt = set ? set->cnt : 0;
@@ -8184,11 +8246,14 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
/* Concatenate the two sets */
memcpy(set->pairs + set->cnt, add_set->pairs, add_set->cnt * sizeof(set->pairs[0]));
+ /* Now that the set is copied, update with relocated BTF ids */
+ for (i = set->cnt; i < set->cnt + add_set->cnt; i++)
+ set->pairs[i].id = btf_relocate_id(btf, set->pairs[i].id);
+
set->cnt += add_set->cnt;
sort(set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func, NULL);
-do_add_filter:
if (add_filter) {
hook_filter = &tab->hook_filters[hook];
hook_filter->filters[hook_filter->nr_filters++] = kset->filter;
@@ -8308,7 +8373,7 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
return PTR_ERR(btf);
for (i = 0; i < kset->set->cnt; i++) {
- ret = btf_check_kfunc_protos(btf, kset->set->pairs[i].id,
+ ret = btf_check_kfunc_protos(btf, btf_relocate_id(btf, kset->set->pairs[i].id),
kset->set->pairs[i].flags);
if (ret)
goto err_out;
@@ -8372,7 +8437,7 @@ static int btf_check_dtor_kfuncs(struct btf *btf, const struct btf_id_dtor_kfunc
u32 nr_args, i;
for (i = 0; i < cnt; i++) {
- dtor_btf_id = dtors[i].kfunc_btf_id;
+ dtor_btf_id = btf_relocate_id(btf, dtors[i].kfunc_btf_id);
dtor_func = btf_type_by_id(btf, dtor_btf_id);
if (!dtor_func || !btf_type_is_func(dtor_func))
@@ -8407,7 +8472,7 @@ int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_c
{
struct btf_id_dtor_kfunc_tab *tab;
struct btf *btf;
- u32 tab_cnt;
+ u32 tab_cnt, i;
int ret;
btf = btf_get_module_btf(owner);
@@ -8458,6 +8523,13 @@ int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_c
btf->dtor_kfunc_tab = tab;
memcpy(tab->dtors + tab->cnt, dtors, add_cnt * sizeof(tab->dtors[0]));
+
+ /* remap BTF ids based on BTF relocation (if any) */
+ for (i = tab_cnt; i < tab_cnt + add_cnt; i++) {
+ tab->dtors[i].btf_id = btf_relocate_id(btf, tab->dtors[i].btf_id);
+ tab->dtors[i].kfunc_btf_id = btf_relocate_id(btf, tab->dtors[i].kfunc_btf_id);
+ }
+
tab->cnt += add_cnt;
sort(tab->dtors, tab->cnt, sizeof(tab->dtors[0]), btf_id_cmp_func, NULL);
diff --git a/tools/lib/bpf/btf_iter.c b/tools/lib/bpf/btf_iter.c
index c308aa60285d..9a6c822c2294 100644
--- a/tools/lib/bpf/btf_iter.c
+++ b/tools/lib/bpf/btf_iter.c
@@ -2,8 +2,16 @@
/* Copyright (c) 2021 Facebook */
/* Copyright (c) 2024, Oracle and/or its affiliates. */
+#ifdef __KERNEL__
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+#define btf_var_secinfos(t) (struct btf_var_secinfo *)btf_type_var_secinfo(t)
+
+#else
#include "btf.h"
#include "libbpf_internal.h"
+#endif
int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
enum btf_field_iter_kind iter_kind)
diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
index 23a41fb03e0d..2281dbbafa11 100644
--- a/tools/lib/bpf/btf_relocate.c
+++ b/tools/lib/bpf/btf_relocate.c
@@ -5,11 +5,34 @@
#define _GNU_SOURCE
#endif
+#ifdef __KERNEL__
+#include <linux/bpf.h>
+#include <linux/bsearch.h>
+#include <linux/btf.h>
+#include <linux/sort.h>
+#include <linux/string.h>
+#include <linux/bpf_verifier.h>
+
+#define btf_type_by_id (struct btf_type *)btf_type_by_id
+#define btf__type_cnt btf_nr_types
+#define btf__base_btf btf_base_btf
+#define btf__name_by_offset btf_name_by_offset
+#define btf__str_by_offset btf_str_by_offset
+#define btf_kflag btf_type_kflag
+
+#define calloc(nmemb, sz) kvcalloc(nmemb, sz, GFP_KERNEL | __GFP_NOWARN)
+#define free(ptr) kvfree(ptr)
+#define qsort(base, num, sz, cmp) sort(base, num, sz, cmp, NULL)
+
+#else
+
#include "btf.h"
#include "bpf.h"
#include "libbpf.h"
#include "libbpf_internal.h"
+#endif /* __KERNEL__ */
+
struct btf;
struct btf_relocate {
--
2.31.1
* [PATCH v2 bpf-next 5/6] kbuild,bpf: add module-specific pahole flags for distilled base BTF
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
` (3 preceding siblings ...)
2024-06-20 9:17 ` [PATCH v2 bpf-next 4/6] libbpf,bpf: share BTF relocate-related code with kernel Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 9:17 ` [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod Alan Maguire
2024-06-21 22:10 ` [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups patchwork-bot+netdevbpf
6 siblings, 0 replies; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
Support creation of module BTF along with distilled base BTF;
the latter is stored in a .BTF.base ELF section and supplements
split BTF references to base BTF with information about base types,
allowing for later relocation of split BTF against a (possibly
changed) base. resolve_btfids detects the presence of a .BTF.base
section and uses it, rather than the base BTF it is passed, for
BTF id resolution.
Modules will be built with a distilled .BTF.base section for an external
module build, i.e.
make -C . M=path2/module
...while an in-tree module build, as part of a normal kernel build, will
not generate distilled base BTF; this is because in-tree modules
change with the kernel and do not require BTF relocation for the
running vmlinux.
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
---
scripts/Makefile.btf | 5 +++++
scripts/Makefile.modfinal | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/scripts/Makefile.btf b/scripts/Makefile.btf
index 2597e3d4d6e0..b75f09f3f424 100644
--- a/scripts/Makefile.btf
+++ b/scripts/Makefile.btf
@@ -21,8 +21,13 @@ else
# Switch to using --btf_features for v1.26 and later.
pahole-flags-$(call test-ge, $(pahole-ver), 126) = -j --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func,decl_tag_kfuncs
+ifneq ($(KBUILD_EXTMOD),)
+module-pahole-flags-$(call test-ge, $(pahole-ver), 126) += --btf_features=distilled_base
+endif
+
endif
pahole-flags-$(CONFIG_PAHOLE_HAS_LANG_EXCLUDE) += --lang_exclude=rust
export PAHOLE_FLAGS := $(pahole-flags-y)
+export MODULE_PAHOLE_FLAGS := $(module-pahole-flags-y)
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index 3bec9043e4f3..1fa98b5e952b 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -41,7 +41,7 @@ quiet_cmd_btf_ko = BTF [M] $@
if [ ! -f vmlinux ]; then \
printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \
else \
- LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J $(PAHOLE_FLAGS) --btf_base vmlinux $@; \
+ LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J $(PAHOLE_FLAGS) $(MODULE_PAHOLE_FLAGS) --btf_base vmlinux $@; \
$(RESOLVE_BTFIDS) -b vmlinux $@; \
fi;
--
2.31.1
* [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
` (4 preceding siblings ...)
2024-06-20 9:17 ` [PATCH v2 bpf-next 5/6] kbuild,bpf: add module-specific pahole flags for distilled base BTF Alan Maguire
@ 2024-06-20 9:17 ` Alan Maguire
2024-06-20 11:41 ` Eduard Zingerman
2024-06-21 22:10 ` [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups patchwork-bot+netdevbpf
6 siblings, 1 reply; 10+ messages in thread
From: Alan Maguire @ 2024-06-20 9:17 UTC (permalink / raw)
To: andrii, eddyz87
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf, Alan Maguire
Add simple kfuncs to bpf_testmod that create and destroy a context
type, register them, and add a kfunc_call test that uses them. This
provides test coverage for registration of dtor kfuncs from modules.
By transferring the context pointer to a map value as a __kptr
we also trigger the map-based dtor cleanup logic, improving test
coverage.
Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
---
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 46 +++++++++++++++++++
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 ++++
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/progs/kfunc_call_test.c | 37 +++++++++++++++
4 files changed, 93 insertions(+)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 49f9a311e49b..fa7f803ea9b5 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -159,6 +159,37 @@ __bpf_kfunc void bpf_kfunc_dynptr_test(struct bpf_dynptr *ptr,
{
}
+__bpf_kfunc struct bpf_testmod_ctx *
+bpf_testmod_ctx_create(int *err)
+{
+ struct bpf_testmod_ctx *ctx;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL | GFP_ATOMIC);
+ if (!ctx) {
+ *err = -ENOMEM;
+ return NULL;
+ }
+ refcount_set(&ctx->usage, 1);
+
+ return ctx;
+}
+
+static void testmod_free_cb(struct rcu_head *head)
+{
+ struct bpf_testmod_ctx *ctx;
+
+ ctx = container_of(head, struct bpf_testmod_ctx, rcu);
+ kfree(ctx);
+}
+
+__bpf_kfunc void bpf_testmod_ctx_release(struct bpf_testmod_ctx *ctx)
+{
+ if (!ctx)
+ return;
+ if (refcount_dec_and_test(&ctx->usage))
+ call_rcu(&ctx->rcu, testmod_free_cb);
+}
+
struct bpf_testmod_btf_type_tag_1 {
int a;
};
@@ -369,8 +400,14 @@ BTF_ID_FLAGS(func, bpf_iter_testmod_seq_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_testmod_seq_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_kfunc_common_test)
BTF_ID_FLAGS(func, bpf_kfunc_dynptr_test)
+BTF_ID_FLAGS(func, bpf_testmod_ctx_create, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_testmod_ctx_release, KF_RELEASE)
BTF_KFUNCS_END(bpf_testmod_common_kfunc_ids)
+BTF_ID_LIST(bpf_testmod_dtor_ids)
+BTF_ID(struct, bpf_testmod_ctx)
+BTF_ID(func, bpf_testmod_ctx_release)
+
static const struct btf_kfunc_id_set bpf_testmod_common_kfunc_set = {
.owner = THIS_MODULE,
.set = &bpf_testmod_common_kfunc_ids,
@@ -904,6 +941,12 @@ extern int bpf_fentry_test1(int a);
static int bpf_testmod_init(void)
{
+ const struct btf_id_dtor_kfunc bpf_testmod_dtors[] = {
+ {
+ .btf_id = bpf_testmod_dtor_ids[0],
+ .kfunc_btf_id = bpf_testmod_dtor_ids[1]
+ },
+ };
int ret;
ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &bpf_testmod_common_kfunc_set);
@@ -912,6 +955,9 @@ static int bpf_testmod_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_testmod_kfunc_set);
ret = ret ?: register_bpf_struct_ops(&bpf_bpf_testmod_ops, bpf_testmod_ops);
ret = ret ?: register_bpf_struct_ops(&bpf_testmod_ops2, bpf_testmod_ops2);
+ ret = ret ?: register_btf_id_dtor_kfuncs(bpf_testmod_dtors,
+ ARRAY_SIZE(bpf_testmod_dtors),
+ THIS_MODULE);
if (ret < 0)
return ret;
if (bpf_fentry_test1(0) < 0)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
index f9809517e7fa..e587a79f2239 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
@@ -80,6 +80,11 @@ struct sendmsg_args {
int msglen;
};
+struct bpf_testmod_ctx {
+ struct callback_head rcu;
+ refcount_t usage;
+};
+
struct prog_test_ref_kfunc *
bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) __ksym;
void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
@@ -135,4 +140,8 @@ int bpf_kfunc_call_kernel_getsockname(struct addr_args *args) __ksym;
int bpf_kfunc_call_kernel_getpeername(struct addr_args *args) __ksym;
void bpf_kfunc_dynptr_test(struct bpf_dynptr *ptr, struct bpf_dynptr *ptr__nullable) __ksym;
+
+struct bpf_testmod_ctx *bpf_testmod_ctx_create(int *err) __ksym;
+void bpf_testmod_ctx_release(struct bpf_testmod_ctx *ctx) __ksym;
+
#endif /* _BPF_TESTMOD_KFUNC_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 2eb71559713c..5b743212292f 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -78,6 +78,7 @@ static struct kfunc_test_params kfunc_tests[] = {
SYSCALL_TEST(kfunc_syscall_test, 0),
SYSCALL_NULL_CTX_TEST(kfunc_syscall_test_null, 0),
TC_TEST(kfunc_call_test_static_unused_arg, 0),
+ TC_TEST(kfunc_call_ctx, 0),
};
struct syscall_test_args {
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
index cf68d1e48a0f..f502f755f567 100644
--- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c
+++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
@@ -177,4 +177,41 @@ int kfunc_call_test_static_unused_arg(struct __sk_buff *skb)
return actual != expected ? -1 : 0;
}
+struct ctx_val {
+ struct bpf_testmod_ctx __kptr *ctx;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct ctx_val);
+} ctx_map SEC(".maps");
+
+SEC("tc")
+int kfunc_call_ctx(struct __sk_buff *skb)
+{
+ struct bpf_testmod_ctx *ctx;
+ int err = 0;
+
+ ctx = bpf_testmod_ctx_create(&err);
+ if (!ctx && !err)
+ err = -1;
+ if (ctx) {
+ int key = 0;
+ struct ctx_val *ctx_val = bpf_map_lookup_elem(&ctx_map, &key);
+
+ /* Transfer ctx to map to be freed via implicit dtor call
+ * on cleanup.
+ */
+ if (ctx_val)
+ ctx = bpf_kptr_xchg(&ctx_val->ctx, ctx);
+ if (ctx) {
+ bpf_testmod_ctx_release(ctx);
+ err = -1;
+ }
+ }
+ return err;
+}
+
char _license[] SEC("license") = "GPL";
--
2.31.1
* Re: [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod
2024-06-20 9:17 ` [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod Alan Maguire
@ 2024-06-20 11:41 ` Eduard Zingerman
2024-06-21 22:00 ` Andrii Nakryiko
0 siblings, 1 reply; 10+ messages in thread
From: Eduard Zingerman @ 2024-06-20 11:41 UTC (permalink / raw)
To: Alan Maguire, andrii
Cc: acme, ast, daniel, jolsa, martin.lau, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, mcgrof, masahiroy, nathan,
mykolal, thinker.li, bentiss, tanggeliang, bpf
On Thu, 2024-06-20 at 10:17 +0100, Alan Maguire wrote:
[...]
Hi Alan,
I still get the error message in the dmesg:
[ 10.489223] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:337
[ 10.489454] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 184, name: test_progs
[ 10.489589] preempt_count: 200, expected: 0
[ 10.489659] RCU nest depth: 1, expected: 0
[ 10.489733] 1 lock held by test_progs/184:
[ 10.489811] #0: ffffffff83198a60 (rcu_read_lock){....}-{1:2}, at: bpf_test_timer_enter+0x1d/0xb0
[ 10.490040] Preemption disabled at:
[ 10.490060] [<ffffffff81a0ee6a>] bpf_test_run+0x16a/0x300
[ 10.490197] CPU: 1 PID: 184 Comm: test_progs Tainted: G OE 6.10.0-rc2-00766-gb812ab0e1306-dirty #39
[ 10.490356] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
[ 10.490475] Call Trace:
[ 10.490515] <TASK>
[ 10.490557] dump_stack_lvl+0x83/0xa0
[ 10.490618] __might_resched+0x199/0x2b0
[ 10.490695] kmalloc_trace_noprof+0x273/0x320
[ 10.490756] ? srso_alias_return_thunk+0x5/0xfbef5
[ 10.490836] ? bpf_test_run+0xc0/0x300
[ 10.490836] ? bpf_testmod_ctx_create+0x23/0x50 [bpf_testmod]
[ 10.490836] bpf_testmod_ctx_create+0x23/0x50 [bpf_testmod]
[ 10.490836] bpf_prog_d1347efc07047347_kfunc_call_ctx+0x2c/0xae
[ 10.490836] bpf_test_run+0x198/0x300
[ 10.490836] ? srso_alias_return_thunk+0x5/0xfbef5
[ 10.490836] ? lockdep_init_map_type+0x4b/0x250
[ 10.490836] bpf_prog_test_run_skb+0x381/0x7f0
[ 10.490836] __sys_bpf+0xc4f/0x2e00
[ 10.490836] ? srso_alias_return_thunk+0x5/0xfbef5
[ 10.490836] ? reacquire_held_locks+0xcf/0x1f0
[ 10.490836] __x64_sys_bpf+0x1e/0x30
[ 10.490836] do_syscall_64+0x68/0x140
[ 10.490836] entry_SYSCALL_64_after_hwframe+0x76/0x7e
The following fix helps:
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -164,7 +164,7 @@ bpf_testmod_ctx_create(int *err)
{
struct bpf_testmod_ctx *ctx;
- ctx = kzalloc(sizeof(*ctx), GFP_KERNEL | GFP_ATOMIC);
+ ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
if (!ctx) {
*err = -ENOMEM;
return NULL;
Thanks,
Eduard
[...]
* Re: [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod
2024-06-20 11:41 ` Eduard Zingerman
@ 2024-06-21 22:00 ` Andrii Nakryiko
0 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2024-06-21 22:00 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Alan Maguire, andrii, acme, ast, daniel, jolsa, martin.lau, song,
yonghong.song, john.fastabend, kpsingh, sdf, haoluo, mcgrof,
masahiroy, nathan, mykolal, thinker.li, bentiss, tanggeliang, bpf
On Thu, Jun 20, 2024 at 4:41 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Thu, 2024-06-20 at 10:17 +0100, Alan Maguire wrote:
>
> [...]
>
> Hi Alan,
>
> I still get the error message in the dmesg:
>
> [ 10.489223] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:337
> [ 10.489454] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 184, name: test_progs
> [ 10.489589] preempt_count: 200, expected: 0
> [ 10.489659] RCU nest depth: 1, expected: 0
> [ 10.489733] 1 lock held by test_progs/184:
> [ 10.489811] #0: ffffffff83198a60 (rcu_read_lock){....}-{1:2}, at: bpf_test_timer_enter+0x1d/0xb0
> [ 10.490040] Preemption disabled at:
> [ 10.490060] [<ffffffff81a0ee6a>] bpf_test_run+0x16a/0x300
> [ 10.490197] CPU: 1 PID: 184 Comm: test_progs Tainted: G OE 6.10.0-rc2-00766-gb812ab0e1306-dirty #39
> [ 10.490356] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
> [ 10.490475] Call Trace:
> [ 10.490515] <TASK>
> [ 10.490557] dump_stack_lvl+0x83/0xa0
> [ 10.490618] __might_resched+0x199/0x2b0
> [ 10.490695] kmalloc_trace_noprof+0x273/0x320
> [ 10.490756] ? srso_alias_return_thunk+0x5/0xfbef5
> [ 10.490836] ? bpf_test_run+0xc0/0x300
> [ 10.490836] ? bpf_testmod_ctx_create+0x23/0x50 [bpf_testmod]
> [ 10.490836] bpf_testmod_ctx_create+0x23/0x50 [bpf_testmod]
> [ 10.490836] bpf_prog_d1347efc07047347_kfunc_call_ctx+0x2c/0xae
> [ 10.490836] bpf_test_run+0x198/0x300
> [ 10.490836] ? srso_alias_return_thunk+0x5/0xfbef5
> [ 10.490836] ? lockdep_init_map_type+0x4b/0x250
> [ 10.490836] bpf_prog_test_run_skb+0x381/0x7f0
> [ 10.490836] __sys_bpf+0xc4f/0x2e00
> [ 10.490836] ? srso_alias_return_thunk+0x5/0xfbef5
> [ 10.490836] ? reacquire_held_locks+0xcf/0x1f0
> [ 10.490836] __x64_sys_bpf+0x1e/0x30
> [ 10.490836] do_syscall_64+0x68/0x140
> [ 10.490836] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> The following fix helps:
>
> --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
> +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
> @@ -164,7 +164,7 @@ bpf_testmod_ctx_create(int *err)
> {
> struct bpf_testmod_ctx *ctx;
>
> - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL | GFP_ATOMIC);
> + ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
fixed while applying, thanks
> if (!ctx) {
> *err = -ENOMEM;
> return NULL;
>
> Thanks,
> Eduard
>
> [...]
* Re: [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups
2024-06-20 9:17 [PATCH v2 bpf-next 0/6] bpf: resilient split BTF followups Alan Maguire
` (5 preceding siblings ...)
2024-06-20 9:17 ` [PATCH v2 bpf-next 6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod Alan Maguire
@ 2024-06-21 22:10 ` patchwork-bot+netdevbpf
6 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-06-21 22:10 UTC (permalink / raw)
To: Alan Maguire
Cc: andrii, eddyz87, acme, ast, daniel, jolsa, martin.lau, song,
yonghong.song, john.fastabend, kpsingh, sdf, haoluo, mcgrof,
masahiroy, nathan, mykolal, thinker.li, bentiss, tanggeliang, bpf
Hello:
This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Thu, 20 Jun 2024 10:17:27 +0100 you wrote:
> Follow-up to resilient split BTF series [1],
>
> - cleaning up libbpf relocation code (patch 1);
> - adding 'struct module' support for base BTF data (patch 2);
> - splitting out field iteration code into separate file (patch 3);
> - sharing libbpf relocation code with the kernel (patch 4);
> - adding a kbuild --btf_features flag to generate distilled base
> BTF in the module-specific case where KBUILD_EXTMOD is true
> (patch 5); and
> - adding test coverage for module-based kfunc dtor (patch 6)
>
> [...]
Here is the summary with links:
- [v2,bpf-next,1/6] libbpf: BTF relocation followup fixing naming, loop logic
https://git.kernel.org/bpf/bpf-next/c/d1cf840854bb
- [v2,bpf-next,2/6] module, bpf: store BTF base pointer in struct module
https://git.kernel.org/bpf/bpf-next/c/d4e48e3dd450
- [v2,bpf-next,3/6] libbpf: split field iter code into its own file kernel
https://git.kernel.org/bpf/bpf-next/c/e7ac331b3055
- [v2,bpf-next,4/6] libbpf,bpf: share BTF relocate-related code with kernel
https://git.kernel.org/bpf/bpf-next/c/8646db238997
- [v2,bpf-next,5/6] kbuild,bpf: add module-specific pahole flags for distilled base BTF
https://git.kernel.org/bpf/bpf-next/c/46fb0b62ea29
- [v2,bpf-next,6/6] selftests/bpf: add kfunc_call test for simple dtor in bpf_testmod
https://git.kernel.org/bpf/bpf-next/c/47a8cf0c5b3f
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html