* [PATCH bpf-next v2 1/4] libbpf: use map_is_created helper in map setters
From: Mykyta Yatsenko @ 2025-03-03 13:57 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Refactoring: use the map_is_created helper in map setters that need to
check the state of the map. This helps to reduce the number of places
that depend explicitly on the loaded flag, simplifying the refactoring
in the next patch of this set.
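A minimal sketch of the setter behavior this helper guards, using the
public libbpf API (the object file "prog.bpf.o" and map name "my_map"
are hypothetical; error handling trimmed):

  struct bpf_object *obj = bpf_object__open("prog.bpf.o");
  struct bpf_map *map = bpf_object__find_map_by_name(obj, "my_map");

  /* before the map is created, resizing is allowed */
  bpf_map__set_max_entries(map, 1024);            /* returns 0 */

  bpf_object__load(obj);                          /* creates the map */

  /* map_is_created() is now true, so setters refuse changes */
  int err = bpf_map__set_max_entries(map, 2048);  /* returns -EBUSY */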
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/lib/bpf/libbpf.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 899e98225f3b..4895c7ae6422 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4845,6 +4845,11 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
return 0;
}
+static bool map_is_created(const struct bpf_map *map)
+{
+ return map->obj->loaded || map->reused;
+}
+
bool bpf_map__autocreate(const struct bpf_map *map)
{
return map->autocreate;
@@ -4852,7 +4857,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
{
- if (map->obj->loaded)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
map->autocreate = autocreate;
@@ -4946,7 +4951,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
{
- if (map->obj->loaded)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
map->def.max_entries = max_entries;
@@ -5191,11 +5196,6 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
static void bpf_map__destroy(struct bpf_map *map);
-static bool map_is_created(const struct bpf_map *map)
-{
- return map->obj->loaded || map->reused;
-}
-
static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
{
LIBBPF_OPTS(bpf_map_create_opts, create_attr);
@@ -10299,7 +10299,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
{
- if (map->obj->loaded || map->reused)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
if (map->mmaped) {
@@ -10345,7 +10345,7 @@ int bpf_map__set_initial_value(struct bpf_map *map,
{
size_t actual_sz;
- if (map->obj->loaded || map->reused)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
--
2.48.1
* [PATCH bpf-next v2 2/4] libbpf: introduce more granular state for bpf_object
From: Mykyta Yatsenko @ 2025-03-03 13:57 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
We are going to split bpf_object loading into two stages: preparation
and loading. This will increase flexibility when working with bpf_object
and unlock some optimizations and use cases.
This patch replaces the boolean flag (loaded) with a more finely-grained
state for bpf_object.
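A sketch of the resulting object lifecycle (note that bpf_object__prepare()
itself is only introduced in the next patch of this set; "prog.bpf.o" is a
hypothetical object file):

  obj = bpf_object__open("prog.bpf.o");  /* obj->state == OBJ_OPEN */
  bpf_object__prepare(obj);              /* obj->state == OBJ_PREPARED */
  bpf_object__load(obj);                 /* obj->state == OBJ_LOADED */

Because the enumerators are ordered, "is the object at least prepared
(or loaded)?" becomes a plain comparison, e.g. obj->state >= OBJ_PREPARED,
which is the idiom used throughout the diff below.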
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/lib/bpf/libbpf.c | 39 ++++++++++++++++++++++-----------------
1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 4895c7ae6422..7210278ecdcf 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -670,11 +670,18 @@ struct elf_state {
struct usdt_manager;
+enum bpf_object_state {
+ OBJ_OPEN,
+ OBJ_PREPARED,
+ OBJ_LOADED,
+};
+
struct bpf_object {
char name[BPF_OBJ_NAME_LEN];
char license[64];
__u32 kern_version;
+ enum bpf_object_state state;
struct bpf_program *programs;
size_t nr_programs;
struct bpf_map *maps;
@@ -686,7 +693,6 @@ struct bpf_object {
int nr_extern;
int kconfig_map_idx;
- bool loaded;
bool has_subcalls;
bool has_rodata;
@@ -1511,7 +1517,7 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->kconfig_map_idx = -1;
obj->kern_version = get_kernel_version();
- obj->loaded = false;
+ obj->state = OBJ_OPEN;
return obj;
}
@@ -4847,7 +4853,7 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
static bool map_is_created(const struct bpf_map *map)
{
- return map->obj->loaded || map->reused;
+ return map->obj->state >= OBJ_PREPARED || map->reused;
}
bool bpf_map__autocreate(const struct bpf_map *map)
@@ -8550,7 +8556,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
if (!obj)
return libbpf_err(-EINVAL);
- if (obj->loaded) {
+ if (obj->state >= OBJ_LOADED) {
pr_warn("object '%s': load can't be attempted twice\n", obj->name);
return libbpf_err(-EINVAL);
}
@@ -8602,8 +8608,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
btf__free(obj->btf_vmlinux);
obj->btf_vmlinux = NULL;
- obj->loaded = true; /* doesn't matter if successfully or not */
-
+ obj->state = OBJ_LOADED; /* doesn't matter if successfully or not */
if (err)
goto out;
@@ -8866,7 +8871,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
if (!obj)
return libbpf_err(-ENOENT);
- if (!obj->loaded) {
+ if (obj->state < OBJ_PREPARED) {
pr_warn("object not yet loaded; load it first\n");
return libbpf_err(-ENOENT);
}
@@ -8945,7 +8950,7 @@ int bpf_object__pin_programs(struct bpf_object *obj, const char *path)
if (!obj)
return libbpf_err(-ENOENT);
- if (!obj->loaded) {
+ if (obj->state < OBJ_LOADED) {
pr_warn("object not yet loaded; load it first\n");
return libbpf_err(-ENOENT);
}
@@ -9132,7 +9137,7 @@ int bpf_object__btf_fd(const struct bpf_object *obj)
int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version)
{
- if (obj->loaded)
+ if (obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
obj->kern_version = kern_version;
@@ -9229,7 +9234,7 @@ bool bpf_program__autoload(const struct bpf_program *prog)
int bpf_program__set_autoload(struct bpf_program *prog, bool autoload)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
prog->autoload = autoload;
@@ -9261,7 +9266,7 @@ int bpf_program__set_insns(struct bpf_program *prog,
{
struct bpf_insn *insns;
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns));
@@ -9304,7 +9309,7 @@ static int last_custom_sec_def_handler_id;
int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
/* if type is not changed, do nothing */
@@ -9335,7 +9340,7 @@ enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program
int bpf_program__set_expected_attach_type(struct bpf_program *prog,
enum bpf_attach_type type)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->expected_attach_type = type;
@@ -9349,7 +9354,7 @@ __u32 bpf_program__flags(const struct bpf_program *prog)
int bpf_program__set_flags(struct bpf_program *prog, __u32 flags)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->prog_flags = flags;
@@ -9363,7 +9368,7 @@ __u32 bpf_program__log_level(const struct bpf_program *prog)
int bpf_program__set_log_level(struct bpf_program *prog, __u32 log_level)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->log_level = log_level;
@@ -9382,7 +9387,7 @@ int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log
return libbpf_err(-EINVAL);
if (prog->log_size > UINT_MAX)
return libbpf_err(-EINVAL);
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->log_buf = log_buf;
@@ -13666,7 +13671,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
if (!prog || attach_prog_fd < 0)
return libbpf_err(-EINVAL);
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
if (attach_prog_fd && !attach_func_name) {
--
2.48.1
* [PATCH bpf-next v2 3/4] libbpf: split bpf object load into prepare/load
From: Mykyta Yatsenko @ 2025-03-03 13:57 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Introduce the bpf_object__prepare API: an additional intermediate
preparation step that performs ELF processing and relocations, prepares
the final state of BPF program instructions (accessible with
bpf_program__insns()), creates and (potentially) pins maps, and stops
short of loading BPF programs.
We anticipate a few use cases for this API (a usage sketch follows the
list), such as:
* Use prepare to initialize a bpf_token without loading freplace
programs, unlocking the possibility to look up BTF of other programs.
* Execute prepare to obtain finalized BPF program instructions without
loading programs, enabling tools like veristat to process one program
at a time without repeatedly incurring the cost of ELF parsing and
processing.
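A minimal usage sketch of the split (error handling trimmed;
"prog.bpf.o" is a hypothetical object file):

  struct bpf_object *obj = bpf_object__open("prog.bpf.o");
  struct bpf_program *prog = bpf_object__next_program(obj, NULL);

  /* ELF processing, relocations, map creation; programs not loaded */
  int err = bpf_object__prepare(obj);

  /* finalized instructions are available without loading programs */
  const struct bpf_insn *insns = bpf_program__insns(prog);
  size_t insn_cnt = bpf_program__insn_cnt(prog);

  /* optionally finish the usual load step later */
  err = err ?: bpf_object__load(obj);

  bpf_object__close(obj);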
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/lib/bpf/libbpf.c | 144 +++++++++++++++++++++++++++------------
tools/lib/bpf/libbpf.h | 13 ++++
tools/lib/bpf/libbpf.map | 1 +
3 files changed, 115 insertions(+), 43 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 7210278ecdcf..80ed6d380584 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7901,13 +7901,6 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
size_t i;
int err;
- for (i = 0; i < obj->nr_programs; i++) {
- prog = &obj->programs[i];
- err = bpf_object__sanitize_prog(obj, prog);
- if (err)
- return err;
- }
-
for (i = 0; i < obj->nr_programs; i++) {
prog = &obj->programs[i];
if (prog_is_subprog(obj, prog))
@@ -7933,6 +7926,21 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
return 0;
}
+static int bpf_object_prepare_progs(struct bpf_object *obj)
+{
+ struct bpf_program *prog;
+ size_t i;
+ int err;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ prog = &obj->programs[i];
+ err = bpf_object__sanitize_prog(obj, prog);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
static const struct bpf_sec_def *find_sec_def(const char *sec_name);
static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
@@ -8549,9 +8557,70 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
return 0;
}
+static void bpf_object_unpin(struct bpf_object *obj)
+{
+ int i;
+
+ /* unpin any maps that were auto-pinned during load */
+ for (i = 0; i < obj->nr_maps; i++)
+ if (obj->maps[i].pinned && !obj->maps[i].reused)
+ bpf_map__unpin(&obj->maps[i], NULL);
+}
+
+static void bpf_object_post_load_cleanup(struct bpf_object *obj)
+{
+ int i;
+
+ /* clean up fd_array */
+ zfree(&obj->fd_array);
+
+ /* clean up module BTFs */
+ for (i = 0; i < obj->btf_module_cnt; i++) {
+ close(obj->btf_modules[i].fd);
+ btf__free(obj->btf_modules[i].btf);
+ free(obj->btf_modules[i].name);
+ }
+ obj->btf_module_cnt = 0;
+ zfree(&obj->btf_modules);
+
+ /* clean up vmlinux BTF */
+ btf__free(obj->btf_vmlinux);
+ obj->btf_vmlinux = NULL;
+}
+
+static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_path)
+{
+ int err;
+
+ if (obj->state >= OBJ_PREPARED) {
+ pr_warn("object '%s': prepare loading can't be attempted twice\n", obj->name);
+ return -EINVAL;
+ }
+
+ err = bpf_object_prepare_token(obj);
+ err = err ? : bpf_object__probe_loading(obj);
+ err = err ? : bpf_object__load_vmlinux_btf(obj, false);
+ err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
+ err = err ? : bpf_object__sanitize_maps(obj);
+ err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
+ err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
+ err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
+ err = err ? : bpf_object__sanitize_and_load_btf(obj);
+ err = err ? : bpf_object__create_maps(obj);
+ err = err ? : bpf_object_prepare_progs(obj);
+ obj->state = OBJ_PREPARED;
+
+ if (err) {
+ bpf_object_unpin(obj);
+ bpf_object_unload(obj);
+ return err;
+ }
+ return 0;
+}
+
static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
{
- int err, i;
+ int err;
if (!obj)
return libbpf_err(-EINVAL);
@@ -8571,17 +8640,12 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
return libbpf_err(-LIBBPF_ERRNO__ENDIAN);
}
- err = bpf_object_prepare_token(obj);
- err = err ? : bpf_object__probe_loading(obj);
- err = err ? : bpf_object__load_vmlinux_btf(obj, false);
- err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
- err = err ? : bpf_object__sanitize_maps(obj);
- err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
- err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
- err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
- err = err ? : bpf_object__sanitize_and_load_btf(obj);
- err = err ? : bpf_object__create_maps(obj);
- err = err ? : bpf_object__load_progs(obj, extra_log_level);
+ if (obj->state < OBJ_PREPARED) {
+ err = bpf_object_prepare(obj, target_btf_path);
+ if (err)
+ return libbpf_err(err);
+ }
+ err = bpf_object__load_progs(obj, extra_log_level);
err = err ? : bpf_object_init_prog_arrays(obj);
err = err ? : bpf_object_prepare_struct_ops(obj);
@@ -8593,35 +8657,22 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
err = bpf_gen__finish(obj->gen_loader, obj->nr_programs, obj->nr_maps);
}
- /* clean up fd_array */
- zfree(&obj->fd_array);
+ bpf_object_post_load_cleanup(obj);
+ obj->state = OBJ_LOADED; /* doesn't matter if successfully or not */
- /* clean up module BTFs */
- for (i = 0; i < obj->btf_module_cnt; i++) {
- close(obj->btf_modules[i].fd);
- btf__free(obj->btf_modules[i].btf);
- free(obj->btf_modules[i].name);
+ if (err) {
+ bpf_object_unpin(obj);
+ bpf_object_unload(obj);
+ pr_warn("failed to load object '%s'\n", obj->path);
+ return libbpf_err(err);
}
- free(obj->btf_modules);
-
- /* clean up vmlinux BTF */
- btf__free(obj->btf_vmlinux);
- obj->btf_vmlinux = NULL;
-
- obj->state = OBJ_LOADED; /* doesn't matter if successfully or not */
- if (err)
- goto out;
return 0;
-out:
- /* unpin any maps that were auto-pinned during load */
- for (i = 0; i < obj->nr_maps; i++)
- if (obj->maps[i].pinned && !obj->maps[i].reused)
- bpf_map__unpin(&obj->maps[i], NULL);
+}
- bpf_object_unload(obj);
- pr_warn("failed to load object '%s'\n", obj->path);
- return libbpf_err(err);
+int bpf_object__prepare(struct bpf_object *obj)
+{
+ return libbpf_err(bpf_object_prepare(obj, NULL));
}
int bpf_object__load(struct bpf_object *obj)
@@ -9069,6 +9120,13 @@ void bpf_object__close(struct bpf_object *obj)
if (IS_ERR_OR_NULL(obj))
return;
+ /*
+ * if user called bpf_object__prepare() without ever getting to
+ * bpf_object__load(), we need to clean up stuff that is normally
+ * cleaned up at the end of loading step
+ */
+ bpf_object_post_load_cleanup(obj);
+
usdt_manager_free(obj->usdt_man);
obj->usdt_man = NULL;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 3020ee45303a..e0605403f977 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -241,6 +241,19 @@ LIBBPF_API struct bpf_object *
bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
const struct bpf_object_open_opts *opts);
+/**
+ * @brief **bpf_object__prepare()** prepares BPF object for loading:
+ * performs ELF processing, relocations, prepares final state of BPF program
+ * instructions (accessible with bpf_program__insns()), creates and
+ * (potentially) pins maps. Leaves BPF object in the state ready for program
+ * loading.
+ * @param obj Pointer to a valid BPF object instance returned by
+ * **bpf_object__open*()** API
+ * @return 0, on success; negative error code, otherwise, error code is
+ * stored in errno
+ */
+int bpf_object__prepare(struct bpf_object *obj);
+
/**
* @brief **bpf_object__load()** loads BPF object into kernel.
* @param obj Pointer to a valid BPF object instance returned by
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b5a838de6f47..d8b71f22f197 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -436,6 +436,7 @@ LIBBPF_1.6.0 {
bpf_linker__add_buf;
bpf_linker__add_fd;
bpf_linker__new_fd;
+ bpf_object__prepare;
btf__add_decl_attr;
btf__add_type_attr;
} LIBBPF_1.5.0;
--
2.48.1
* Re: [PATCH bpf-next v2 3/4] libbpf: split bpf object load into prepare/load
From: Andrii Nakryiko @ 2025-03-03 23:23 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
On Mon, Mar 3, 2025 at 5:58 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Introduce bpf_object__prepare API: additional intermediate preparation
> step that performs ELF processing, relocations, prepares final state of
> BPF program instructions (accessible with bpf_program__insns()), creates
> and (potentially) pins maps, and stops short of loading BPF programs.
>
> We anticipate few use cases for this API, such as:
> * Use prepare to initialize bpf_token, without loading freplace
> programs, unlocking possibility to lookup BTF of other programs.
> * Execute prepare to obtain finalized BPF program instructions without
> loading programs, enabling tools like veristat to process one program at
> a time, without incurring cost of ELF parsing and processing.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 144 +++++++++++++++++++++++++++------------
> tools/lib/bpf/libbpf.h | 13 ++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 115 insertions(+), 43 deletions(-)
>
[...]
> + err = bpf_object_prepare_token(obj);
> + err = err ? : bpf_object__probe_loading(obj);
> + err = err ? : bpf_object__load_vmlinux_btf(obj, false);
> + err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
> + err = err ? : bpf_object__sanitize_maps(obj);
> + err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
> + err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> + err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> + err = err ? : bpf_object__sanitize_and_load_btf(obj);
> + err = err ? : bpf_object__create_maps(obj);
> + err = err ? : bpf_object_prepare_progs(obj);
> + obj->state = OBJ_PREPARED;
> +
> + if (err) {
> + bpf_object_unpin(obj);
> + bpf_object_unload(obj);
I think it's best to set obj->state = OBJ_LOADED here to prevent
subsequent bpf_object__load() from trying to do anything (and probably
crashing). I'll add this while applying.
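I.e. something like this on top of the hunk above (a sketch of the
suggested adjustment, not necessarily the exact applied diff):

	if (err) {
		bpf_object_unpin(obj);
		bpf_object_unload(obj);
		/* mark as loaded so a subsequent bpf_object__load()
		 * bails out instead of touching a half-torn-down object
		 */
		obj->state = OBJ_LOADED;
		return err;
	}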
> + return err;
> + }
> + return 0;
> +}
> +
> static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
> {
> - int err, i;
> + int err;
>
> if (!obj)
> return libbpf_err(-EINVAL);
[...]
* [PATCH bpf-next v2 4/4] selftests/bpf: add tests for bpf_object__prepare
From: Mykyta Yatsenko @ 2025-03-03 13:57 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add selftests checking that running bpf_object__prepare successfully
creates maps before the load step.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
.../selftests/bpf/prog_tests/prepare.c | 99 +++++++++++++++++++
tools/testing/selftests/bpf/progs/prepare.c | 28 ++++++
2 files changed, 127 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/prepare.c
create mode 100644 tools/testing/selftests/bpf/progs/prepare.c
diff --git a/tools/testing/selftests/bpf/prog_tests/prepare.c b/tools/testing/selftests/bpf/prog_tests/prepare.c
new file mode 100644
index 000000000000..fb5cdad97116
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/prepare.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta */
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "prepare.skel.h"
+
+static bool check_prepared(struct bpf_object *obj)
+{
+ bool is_prepared = true;
+ const struct bpf_map *map;
+
+ bpf_object__for_each_map(map, obj) {
+ if (bpf_map__fd(map) < 0)
+ is_prepared = false;
+ }
+
+ return is_prepared;
+}
+
+static void test_prepare_no_load(void)
+{
+ struct prepare *skel;
+ int err;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ );
+
+ skel = prepare__open();
+ if (!ASSERT_OK_PTR(skel, "prepare__open"))
+ return;
+
+ if (!ASSERT_FALSE(check_prepared(skel->obj), "not check_prepared"))
+ goto cleanup;
+
+ err = bpf_object__prepare(skel->obj);
+
+ if (!ASSERT_TRUE(check_prepared(skel->obj), "check_prepared"))
+ goto cleanup;
+
+ if (!ASSERT_OK(err, "bpf_object__prepare"))
+ goto cleanup;
+
+cleanup:
+ prepare__destroy(skel);
+}
+
+static void test_prepare_load(void)
+{
+ struct prepare *skel;
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ );
+
+ skel = prepare__open();
+ if (!ASSERT_OK_PTR(skel, "prepare__open"))
+ return;
+
+ if (!ASSERT_FALSE(check_prepared(skel->obj), "not check_prepared"))
+ goto cleanup;
+
+ err = bpf_object__prepare(skel->obj);
+ if (!ASSERT_OK(err, "bpf_object__prepare"))
+ goto cleanup;
+
+ err = prepare__load(skel);
+ if (!ASSERT_OK(err, "prepare__load"))
+ goto cleanup;
+
+ if (!ASSERT_TRUE(check_prepared(skel->obj), "check_prepared"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.program);
+ if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+ goto cleanup;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ if (!ASSERT_OK(err, "test_run_opts err"))
+ goto cleanup;
+
+ if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+ goto cleanup;
+
+ ASSERT_EQ(skel->bss->err, 0, "err");
+
+cleanup:
+ prepare__destroy(skel);
+}
+
+void test_prepare(void)
+{
+ if (test__start_subtest("prepare_load"))
+ test_prepare_load();
+ if (test__start_subtest("prepare_no_load"))
+ test_prepare_no_load();
+}
diff --git a/tools/testing/selftests/bpf/progs/prepare.c b/tools/testing/selftests/bpf/progs/prepare.c
new file mode 100644
index 000000000000..1f1dd547e4ee
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/prepare.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+//#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+int err;
+
+struct {
+ __uint(type, BPF_MAP_TYPE_RINGBUF);
+ __uint(max_entries, 4096);
+} ringbuf SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, __u32);
+ __type(value, __u32);
+} array_map SEC(".maps");
+
+SEC("cgroup_skb/egress")
+int program(struct __sk_buff *skb)
+{
+ err = 0;
+ return 0;
+}
--
2.48.1
* Re: [PATCH bpf-next v2 0/4] Introduce bpf_object__prepare
From: patchwork-bot+netdevbpf @ 2025-03-03 23:36 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, yatsenko
Hello:
This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Mon, 3 Mar 2025 13:57:48 +0000 you wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> We are introducing a new function in the libbpf API, bpf_object__prepare,
> which provides more granular control over the process of loading a
> bpf_object. bpf_object__prepare performs ELF processing, relocations,
> prepares final state of BPF program instructions (accessible with
> bpf_program__insns()), creates and potentially pins maps, and stops short
> of loading BPF programs.
>
> [...]
Here is the summary with links:
- [bpf-next,v2,1/4] libbpf: use map_is_created helper in map setters
https://git.kernel.org/bpf/bpf-next/c/7218ff1f322d
- [bpf-next,v2,2/4] libbpf: introduce more granular state for bpf_object
https://git.kernel.org/bpf/bpf-next/c/8ca8f6d1a2b4
- [bpf-next,v2,3/4] libbpf: split bpf object load into prepare/load
https://git.kernel.org/bpf/bpf-next/c/da755540c6f8
- [bpf-next,v2,4/4] selftests/bpf: add tests for bpf_object__prepare
https://git.kernel.org/bpf/bpf-next/c/68b61a823aab
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html