* [PATCH bpf-next 0/3] Fix libbpf's bpf_object and BPF subskel interoperability
From: Andrii Nakryiko @ 2024-10-23 4:39 UTC
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Fix libbpf's global data map mmap()'ing logic to make BPF objects loaded
through generic bpf_object__load() API interoperable with BPF subskeleton
instantiated from such BPF object. The issue is in re-mmap()'ing of global
data maps after BPF object is loaded into kernel, which is currently done in
BPF skeleton-specific code, and should instead be done in generic and common
bpf_object__load() logic.
See patch #2 for the fix and patch #3 for the selftests. Patch #1 is a preliminary
fix for the existing spin_lock selftest, which currently works by accident.
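
To illustrate the broken combination, here is a minimal sketch of the usage
pattern this series fixes. The object file name, subskeleton name, and global
variable below are hypothetical placeholders, with mysub.subskel.h assumed to
be generated via `bpftool gen subskeleton`:

  #include <bpf/libbpf.h>
  #include "mysub.subskel.h" /* hypothetical generated subskeleton */

  int main(void)
  {
  	struct bpf_object *obj;
  	struct mysub *sub;

  	obj = bpf_object__open_file("my_prog.bpf.o", NULL);
  	if (!obj)
  		return 1;
  	/* generic load path: before this series, global data maps were
  	 * re-mmap()'ed only in bpf_object__load_skeleton(), not here */
  	if (bpf_object__load(obj))
  		return 1;
  	/* subskeleton instantiated from an already-loaded bpf_object */
  	sub = mysub__open(obj);
  	if (!sub)
  		return 1;
  	/* before the fix, this write landed in a stale anonymous mapping
  	 * instead of the kernel-backed .bss map */
  	sub->bss->some_var = 42;
  	mysub__destroy(sub);
  	bpf_object__close(obj);
  	return 0;
  }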
Andrii Nakryiko (3):
selftests/bpf: fix test_spin_lock_fail.c's global vars usage
libbpf: move global data mmap()'ing into bpf_object__load()
selftests/bpf: validate generic bpf_object and subskel APIs work
together
tools/lib/bpf/libbpf.c | 83 +++++++++----------
.../selftests/bpf/prog_tests/subskeleton.c | 76 ++++++++++++++++-
.../selftests/bpf/progs/test_spin_lock_fail.c | 4 +-
3 files changed, 117 insertions(+), 46 deletions(-)
--
2.43.5
* [PATCH bpf-next 1/3] selftests/bpf: fix test_spin_lock_fail.c's global vars usage
From: Andrii Nakryiko @ 2024-10-23 4:39 UTC
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team

Global variables of special types (like `struct bpf_spin_lock`) make
underlying ARRAY maps non-mmapable. To make this work with libbpf's
mmap()'ing logic, applications are expected to declare such special
variables as static, so libbpf doesn't even attempt to mmap() such
ARRAYs.

test_spin_lock_fail.c didn't follow this rule, but because the test
relies on triggering failures, this went unnoticed: we never got to the
step of mmap()'ing these ARRAY maps. This is fragile and relies on a
specific sequence of libbpf steps, which is an internal implementation
detail.

Fix the test by marking lockA and lockB as static.

Fixes: c48748aea4f8 ("selftests/bpf: Add failure test cases for spin lock pairing")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/testing/selftests/bpf/progs/test_spin_lock_fail.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
index 43f40c4fe241..1c8b678e2e9a 100644
--- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
@@ -28,8 +28,8 @@ struct {
 	},
 };
 
-SEC(".data.A") struct bpf_spin_lock lockA;
-SEC(".data.B") struct bpf_spin_lock lockB;
+static struct bpf_spin_lock lockA SEC(".data.A");
+static struct bpf_spin_lock lockB SEC(".data.B");
 
 SEC("?tc")
 int lock_id_kptr_preserve(void *ctx)
-- 
2.43.5
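
As a reminder of the convention the commit message describes, here is a
minimal BPF-side sketch (names are made up, not from the patch): a plain
global lands in a mmap()'able global data map, while a special-type variable
is declared static so libbpf doesn't attempt to mmap() its backing ARRAY.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  int plain_counter;	/* regular global: backing map stays mmap()'able */

  /* special-type variable: static, so libbpf skips mmap()'ing the
   * (non-mmapable) ARRAY map backing this custom data section */
  static struct bpf_spin_lock demo_lock SEC(".data.demo");

  char _license[] SEC("license") = "GPL";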
* [PATCH bpf-next 2/3] libbpf: move global data mmap()'ing into bpf_object__load()
From: Andrii Nakryiko @ 2024-10-23 4:39 UTC
To: bpf, ast, daniel, martin.lau
Cc: andrii, kernel-team, Alastair Robertson, Jonathan Wiepert

Since BPF skeleton inception libbpf has been doing mmap()'ing of global
data ARRAY maps in bpf_object__load_skeleton() API, which is used by
code-generated .skel.h files (i.e., by BPF skeletons only).

This is wrong because if BPF object is loaded through generic
bpf_object__load() API, global data maps won't be re-mmap()'ed after
load step, and memory pointers returned from bpf_map__initial_value()
would be wrong and won't reflect the actual memory shared between BPF
program and user space.

bpf_map__initial_value() return result is rarely used after load, so
this went unnoticed for a really long time, until the bpftrace project
attempted to load BPF object through generic bpf_object__load() API and
then used BPF subskeleton instantiated from such bpf_object. It turned
out that .data/.rodata/.bss data updates through such subskeleton were
"blackholed", all because libbpf wouldn't re-mmap() those maps during
bpf_object__load() phase.

Long story short, this step should be done by libbpf regardless of BPF
skeleton usage, right after BPF map is created in the kernel. This patch
moves this functionality into bpf_object__populate_internal_map() to
achieve this. And bpf_object__load_skeleton() is now simple and almost
trivial, only propagating these mmap()'ed pointers into user-supplied
skeleton structs.

We also do trivial adjustments to error reporting inside
bpf_object__populate_internal_map() for consistency with the rest of
libbpf's map-handling code.

Reported-by: Alastair Robertson <ajor@meta.com>
Reported-by: Jonathan Wiepert <jwiepert@meta.com>
Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 83 ++++++++++++++++++++----------------
 1 file changed, 40 insertions(+), 43 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 7c40286c3948..711173acbcef 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5122,6 +5122,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
 	enum libbpf_map_type map_type = map->libbpf_type;
 	char *cp, errmsg[STRERR_BUFSIZE];
 	int err, zero = 0;
+	size_t mmap_sz;
 
 	if (obj->gen_loader) {
 		bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps,
@@ -5135,8 +5136,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
 	if (err) {
 		err = -errno;
 		cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
-		pr_warn("Error setting initial map(%s) contents: %s\n",
-			map->name, cp);
+		pr_warn("map '%s': failed to set initial contents: %s\n",
+			bpf_map__name(map), cp);
 		return err;
 	}
 
@@ -5146,11 +5147,43 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
 		if (err) {
 			err = -errno;
 			cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
-			pr_warn("Error freezing map(%s) as read-only: %s\n",
-				map->name, cp);
+			pr_warn("map '%s': failed to freeze as read-only: %s\n",
+				bpf_map__name(map), cp);
 			return err;
 		}
 	}
+
+	/* Remap anonymous mmap()-ed "map initialization image" as
+	 * a BPF map-backed mmap()-ed memory, but preserving the same
+	 * memory address. This will cause kernel to change process'
+	 * page table to point to a different piece of kernel memory,
+	 * but from userspace point of view memory address (and its
+	 * contents, being identical at this point) will stay the
+	 * same. This mapping will be released by bpf_object__close()
+	 * as per normal clean up procedure.
+	 */
+	mmap_sz = bpf_map_mmap_sz(map);
+	if (map->def.map_flags & BPF_F_MMAPABLE) {
+		void *mmaped;
+		int prot;
+
+		if (map->def.map_flags & BPF_F_RDONLY_PROG)
+			prot = PROT_READ;
+		else
+			prot = PROT_READ | PROT_WRITE;
+		mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
+		if (mmaped == MAP_FAILED) {
+			err = -errno;
+			pr_warn("map '%s': failed to re-mmap() contents: %d\n",
+				bpf_map__name(map), err);
+			return err;
+		}
+		map->mmaped = mmaped;
+	} else if (map->mmaped) {
+		munmap(map->mmaped, mmap_sz);
+		map->mmaped = NULL;
+	}
+
 	return 0;
 }
 
@@ -5467,8 +5500,7 @@ bpf_object__create_maps(struct bpf_object *obj)
 			err = bpf_object__populate_internal_map(obj, map);
 			if (err < 0)
 				goto err_out;
-		}
-		if (map->def.type == BPF_MAP_TYPE_ARENA) {
+		} else if (map->def.type == BPF_MAP_TYPE_ARENA) {
 			map->mmaped = mmap((void *)(long)map->map_extra,
 					   bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE,
 					   map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED,
@@ -13916,46 +13948,11 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
 	for (i = 0; i < s->map_cnt; i++) {
 		struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
 		struct bpf_map *map = *map_skel->map;
-		size_t mmap_sz = bpf_map_mmap_sz(map);
-		int prot, map_fd = map->fd;
-		void **mmaped = map_skel->mmaped;
-
-		if (!mmaped)
-			continue;
-
-		if (!(map->def.map_flags & BPF_F_MMAPABLE)) {
-			*mmaped = NULL;
-			continue;
-		}
 
-		if (map->def.type == BPF_MAP_TYPE_ARENA) {
-			*mmaped = map->mmaped;
+		if (!map_skel->mmaped)
 			continue;
-		}
-
-		if (map->def.map_flags & BPF_F_RDONLY_PROG)
-			prot = PROT_READ;
-		else
-			prot = PROT_READ | PROT_WRITE;
 
-		/* Remap anonymous mmap()-ed "map initialization image" as
-		 * a BPF map-backed mmap()-ed memory, but preserving the same
-		 * memory address. This will cause kernel to change process'
-		 * page table to point to a different piece of kernel memory,
-		 * but from userspace point of view memory address (and its
-		 * contents, being identical at this point) will stay the
-		 * same. This mapping will be released by bpf_object__close()
-		 * as per normal clean up procedure, so we don't need to worry
-		 * about it from skeleton's clean up perspective.
-		 */
-		*mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);
-		if (*mmaped == MAP_FAILED) {
-			err = -errno;
-			*mmaped = NULL;
-			pr_warn("failed to re-mmap() map '%s': %d\n",
-				bpf_map__name(map), err);
-			return libbpf_err(err);
-		}
+		*map_skel->mmaped = map->mmaped;
 	}
 
 	return 0;
-- 
2.43.5
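
The comment block in the patch above describes the core trick. As a
standalone illustration (generic POSIX code, not libbpf internals), replacing
an existing anonymous mapping with an fd-backed one at the same virtual
address looks roughly like this:

  #include <sys/mman.h>

  /* 'addr' is an existing anonymous mapping of size 'sz'; MAP_FIXED
   * atomically swaps its backing to map_fd, while the virtual address
   * (and, in libbpf's case, the contents) seen by userspace stays
   * the same */
  static int remap_in_place(void *addr, size_t sz, int map_fd, int prot)
  {
  	void *p = mmap(addr, sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);

  	return p == MAP_FAILED ? -1 : 0;
  }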
* Re: [PATCH bpf-next 2/3] libbpf: move global data mmap()'ing into bpf_object__load()
From: Jiri Olsa @ 2024-10-23 12:54 UTC
To: Andrii Nakryiko
Cc: bpf, ast, daniel, martin.lau, kernel-team, Alastair Robertson, Jonathan Wiepert

On Tue, Oct 22, 2024 at 09:39:07PM -0700, Andrii Nakryiko wrote:

SNIP

> +	mmap_sz = bpf_map_mmap_sz(map);
> +	if (map->def.map_flags & BPF_F_MMAPABLE) {
> +		void *mmaped;
> +		int prot;
> +
> +		if (map->def.map_flags & BPF_F_RDONLY_PROG)
> +			prot = PROT_READ;
> +		else
> +			prot = PROT_READ | PROT_WRITE;
> +		mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
> +		if (mmaped == MAP_FAILED) {
> +			err = -errno;
> +			pr_warn("map '%s': failed to re-mmap() contents: %d\n",
> +				bpf_map__name(map), err);
> +			return err;
> +		}
> +		map->mmaped = mmaped;
> +	} else if (map->mmaped) {
> +		munmap(map->mmaped, mmap_sz);
> +		map->mmaped = NULL;
> +	}

this caught my eye because we did not do that in bpf_object__load_skeleton,
makes sense, but why do we mmap *!*BPF_F_MMAPABLE maps in the first place?

jirka

SNIP
* Re: [PATCH bpf-next 2/3] libbpf: move global data mmap()'ing into bpf_object__load()
From: Andrii Nakryiko @ 2024-10-23 15:59 UTC
To: Jiri Olsa
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Alastair Robertson, Jonathan Wiepert

On Wed, Oct 23, 2024 at 5:54 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Oct 22, 2024 at 09:39:07PM -0700, Andrii Nakryiko wrote:
>
> SNIP
>
> > +	} else if (map->mmaped) {
> > +		munmap(map->mmaped, mmap_sz);
> > +		map->mmaped = NULL;
> > +	}
>
> this caught my eye because we did not do that in bpf_object__load_skeleton,
> makes sense, but why do we mmap *!*BPF_F_MMAPABLE maps in the first place?

The initial mmap(ANONYMOUS) is basically malloc(), but it works
uniformly for both BPF_F_MMAPABLE global data arrays, and non-mmapable
ones. Just a streamlining and thus simplification.

SNIP
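
For context on the answer above: an anonymous mmap() is a page-granular
allocation, interchangeable with malloc() for this purpose, with the added
property that it can later be replaced in place via mmap(MAP_FIXED) as the
patch does. A rough sketch (the flags here are illustrative, not necessarily
libbpf's exact ones):

  #include <sys/mman.h>

  /* zero-initialized, page-aligned buffer usable as a "map
   * initialization image" for both mmapable and non-mmapable maps */
  static void *alloc_init_image(size_t sz)
  {
  	void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
  		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

  	return p == MAP_FAILED ? NULL : p;
  }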
* [PATCH bpf-next 3/3] selftests/bpf: validate generic bpf_object and subskel APIs work together
From: Andrii Nakryiko @ 2024-10-23 4:39 UTC
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team

Add a new subtest validating that bpf_object loaded and initialized
through generic APIs is still interoperable with BPF subskeleton,
including initialization and reading of global variables.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/subskeleton.c    | 76 ++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/subskeleton.c b/tools/testing/selftests/bpf/prog_tests/subskeleton.c
index 9c31b7004f9c..fdf13ed0152a 100644
--- a/tools/testing/selftests/bpf/prog_tests/subskeleton.c
+++ b/tools/testing/selftests/bpf/prog_tests/subskeleton.c
@@ -46,7 +46,8 @@ static int subskeleton_lib_subresult(struct bpf_object *obj)
 	return result;
 }
 
-void test_subskeleton(void)
+/* initialize and load through skeleton, then instantiate subskeleton out of it */
+static void subtest_skel_subskeleton(void)
 {
 	int err, result;
 	struct test_subskeleton *skel;
@@ -76,3 +77,76 @@ void test_subskeleton(void)
 cleanup:
 	test_subskeleton__destroy(skel);
 }
+
+/* initialize and load through generic bpf_object API, then instantiate subskeleton out of it */
+static void subtest_obj_subskeleton(void)
+{
+	int err, result;
+	const void *elf_bytes;
+	size_t elf_bytes_sz = 0, rodata_sz = 0, bss_sz = 0;
+	struct bpf_object *obj;
+	const struct bpf_map *map;
+	const struct bpf_program *prog;
+	struct bpf_link *link = NULL;
+	struct test_subskeleton__rodata *rodata;
+	struct test_subskeleton__bss *bss;
+
+	elf_bytes = test_subskeleton__elf_bytes(&elf_bytes_sz);
+	if (!ASSERT_OK_PTR(elf_bytes, "elf_bytes"))
+		return;
+
+	obj = bpf_object__open_mem(elf_bytes, elf_bytes_sz, NULL);
+	if (!ASSERT_OK_PTR(obj, "obj_open_mem"))
+		return;
+
+	map = bpf_object__find_map_by_name(obj, ".rodata");
+	if (!ASSERT_OK_PTR(map, "rodata_map_by_name"))
+		goto cleanup;
+
+	rodata = bpf_map__initial_value(map, &rodata_sz);
+	if (!ASSERT_OK_PTR(rodata, "rodata_get"))
+		goto cleanup;
+
+	rodata->rovar1 = 10;
+	rodata->var1 = 1;
+	subskeleton_lib_setup(obj);
+
+	err = bpf_object__load(obj);
+	if (!ASSERT_OK(err, "obj_load"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(obj, "handler1");
+	if (!ASSERT_OK_PTR(prog, "prog_by_name"))
+		goto cleanup;
+
+	link = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(link, "prog_attach"))
+		goto cleanup;
+
+	/* trigger tracepoint */
+	usleep(1);
+
+	map = bpf_object__find_map_by_name(obj, ".bss");
+	if (!ASSERT_OK_PTR(map, "bss_map_by_name"))
+		goto cleanup;
+
+	bss = bpf_map__initial_value(map, &bss_sz);
+	if (!ASSERT_OK_PTR(rodata, "rodata_get"))
+		goto cleanup;
+
+	result = subskeleton_lib_subresult(obj) * 10;
+	ASSERT_EQ(bss->out1, result, "out1");
+
+cleanup:
+	bpf_link__destroy(link);
+	bpf_object__close(obj);
+}
+
+
+void test_subskeleton(void)
+{
+	if (test__start_subtest("skel_subskel"))
+		subtest_skel_subskeleton();
+	if (test__start_subtest("obj_subskel"))
+		subtest_obj_subskeleton();
+}
-- 
2.43.5
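
Assuming the standard BPF selftests workflow, the new subtests can be
exercised with something like:

  $ cd tools/testing/selftests/bpf
  $ make
  $ sudo ./test_progs -t subskeleton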
* Re: [PATCH bpf-next 0/3] Fix libbpf's bpf_object and BPF subskel interoperability
From: patchwork-bot+netdevbpf @ 2024-10-24 5:20 UTC
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Tue, 22 Oct 2024 21:39:05 -0700 you wrote:
> Fix libbpf's global data map mmap()'ing logic to make BPF objects loaded
> through generic bpf_object__load() API interoperable with BPF subskeleton
> instantiated from such BPF object. The issue is in re-mmap()'ing of global
> data maps after BPF object is loaded into kernel, which is currently done in
> BPF skeleton-specific code, and should instead be done in generic and common
> bpf_object__load() logic.
>
> [...]

Here is the summary with links:
  - [bpf-next,1/3] selftests/bpf: fix test_spin_lock_fail.c's global vars usage
    https://git.kernel.org/bpf/bpf-next/c/1b2bfc29695d
  - [bpf-next,2/3] libbpf: move global data mmap()'ing into bpf_object__load()
    https://git.kernel.org/bpf/bpf-next/c/137978f42251
  - [bpf-next,3/3] selftests/bpf: validate generic bpf_object and subskel APIs work together
    https://git.kernel.org/bpf/bpf-next/c/80a54566b7f0

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html