BPF List
* [PATCH bpf-next 0/3] Introduce bpf_object__prepare
@ 2025-02-28 17:52 Mykyta Yatsenko
  2025-02-28 17:52 ` [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object Mykyta Yatsenko
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Mykyta Yatsenko @ 2025-02-28 17:52 UTC (permalink / raw)
  To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Introduce a new libbpf API function, bpf_object__prepare, enabling more
granular control over the bpf_object loading process.
bpf_object__prepare runs the same steps that bpf_object__load runs,
stopping short of the actual loading of BPF programs.
This API is useful when access to initialized fields of the bpf_object
is needed before program loading. For example, currently we can't pass a
bpf_token into bpf_program__set_attach_target, because token
initialization is done during loading.
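
A rough usage sketch of the intended flow (the object file, program and
attach-target names are made up for illustration, and target_prog_fd is
assumed to have been obtained elsewhere):

	struct bpf_object *obj;
	struct bpf_program *prog;
	int err;

	obj = bpf_object__open_file("example.bpf.o", NULL);
	if (!obj)
		return -errno;

	/* runs everything bpf_object__load() does except loading BPF
	 * programs; token, maps and relocated instructions are ready
	 */
	err = bpf_object__prepare(obj);
	if (err)
		goto out;

	prog = bpf_object__find_program_by_name(obj, "example_prog");
	/* attach target resolution can now make use of the object's token */
	err = prog ? bpf_program__set_attach_target(prog, target_prog_fd,
						    "target_func") : -ENOENT;
	if (err)
		goto out;

	err = bpf_object__load(obj);
out:
	bpf_object__close(obj);
	return err;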

Mykyta Yatsenko (3):
  libbpf: introduce more granular state for bpf_object
  libbpf: split bpf object load into prepare/load
  selftests/bpf: add tests for bpf_object__prepare

 tools/lib/bpf/libbpf.c                        | 194 ++++++++++++------
 tools/lib/bpf/libbpf.h                        |   9 +
 tools/lib/bpf/libbpf.map                      |   1 +
 .../selftests/bpf/prog_tests/prepare.c        |  99 +++++++++
 tools/testing/selftests/bpf/progs/prepare.c   |  28 +++
 5 files changed, 267 insertions(+), 64 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/prepare.c
 create mode 100644 tools/testing/selftests/bpf/progs/prepare.c

-- 
2.48.1



* [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object
  2025-02-28 17:52 [PATCH bpf-next 0/3] Introduce bpf_object__prepare Mykyta Yatsenko
@ 2025-02-28 17:52 ` Mykyta Yatsenko
  2025-02-28 22:20   ` Andrii Nakryiko
  2025-02-28 22:35   ` Andrii Nakryiko
  2025-02-28 17:52 ` [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load Mykyta Yatsenko
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 13+ messages in thread
From: Mykyta Yatsenko @ 2025-02-28 17:52 UTC (permalink / raw)
  To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Add enum bpf_object_state and substitute the bpf_object `loaded` member
with a `state` field. The state can be OBJ_OPEN, indicating that the
bpf_object was just created; OBJ_PREPARED, for the prepare step
introduced in the next patch; or OBJ_LOADED, indicating that the
bpf_object is loaded, similar to loaded=true currently.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
 tools/lib/bpf/libbpf.c | 47 +++++++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 899e98225f3b..9ced1ce2334c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -670,11 +670,18 @@ struct elf_state {
 
 struct usdt_manager;
 
+enum bpf_object_state {
+	OBJ_OPEN,
+	OBJ_PREPARED,
+	OBJ_LOADED,
+};
+
 struct bpf_object {
 	char name[BPF_OBJ_NAME_LEN];
 	char license[64];
 	__u32 kern_version;
 
+	enum bpf_object_state state;
 	struct bpf_program *programs;
 	size_t nr_programs;
 	struct bpf_map *maps;
@@ -686,7 +693,6 @@ struct bpf_object {
 	int nr_extern;
 	int kconfig_map_idx;
 
-	bool loaded;
 	bool has_subcalls;
 	bool has_rodata;
 
@@ -1511,7 +1517,7 @@ static struct bpf_object *bpf_object__new(const char *path,
 	obj->kconfig_map_idx = -1;
 
 	obj->kern_version = get_kernel_version();
-	obj->loaded = false;
+	obj->state  = OBJ_OPEN;
 
 	return obj;
 }
@@ -4852,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
 
 int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
 {
-	if (map->obj->loaded)
+	if (map->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	map->autocreate = autocreate;
@@ -4946,7 +4952,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
 
 int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
 {
-	if (map->obj->loaded)
+	if (map->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	map->def.max_entries = max_entries;
@@ -5193,7 +5199,7 @@ static void bpf_map__destroy(struct bpf_map *map);
 
 static bool map_is_created(const struct bpf_map *map)
 {
-	return map->obj->loaded || map->reused;
+	return map->obj->state >= OBJ_LOADED || map->reused;
 }
 
 static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
@@ -8550,7 +8556,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	if (!obj)
 		return libbpf_err(-EINVAL);
 
-	if (obj->loaded) {
+	if (obj->state >= OBJ_LOADED) {
 		pr_warn("object '%s': load can't be attempted twice\n", obj->name);
 		return libbpf_err(-EINVAL);
 	}
@@ -8602,8 +8608,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	btf__free(obj->btf_vmlinux);
 	obj->btf_vmlinux = NULL;
 
-	obj->loaded = true; /* doesn't matter if successfully or not */
-
+	obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
 	if (err)
 		goto out;
 
@@ -8866,7 +8871,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
 	if (!obj)
 		return libbpf_err(-ENOENT);
 
-	if (!obj->loaded) {
+	if (obj->state < OBJ_LOADED) {
 		pr_warn("object not yet loaded; load it first\n");
 		return libbpf_err(-ENOENT);
 	}
@@ -8945,7 +8950,7 @@ int bpf_object__pin_programs(struct bpf_object *obj, const char *path)
 	if (!obj)
 		return libbpf_err(-ENOENT);
 
-	if (!obj->loaded) {
+	if (obj->state < OBJ_LOADED) {
 		pr_warn("object not yet loaded; load it first\n");
 		return libbpf_err(-ENOENT);
 	}
@@ -9132,7 +9137,7 @@ int bpf_object__btf_fd(const struct bpf_object *obj)
 
 int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version)
 {
-	if (obj->loaded)
+	if (obj->state >= OBJ_LOADED)
 		return libbpf_err(-EINVAL);
 
 	obj->kern_version = kern_version;
@@ -9229,7 +9234,7 @@ bool bpf_program__autoload(const struct bpf_program *prog)
 
 int bpf_program__set_autoload(struct bpf_program *prog, bool autoload)
 {
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EINVAL);
 
 	prog->autoload = autoload;
@@ -9261,7 +9266,7 @@ int bpf_program__set_insns(struct bpf_program *prog,
 {
 	struct bpf_insn *insns;
 
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns));
@@ -9304,7 +9309,7 @@ static int last_custom_sec_def_handler_id;
 
 int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
 {
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	/* if type is not changed, do nothing */
@@ -9335,7 +9340,7 @@ enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program
 int bpf_program__set_expected_attach_type(struct bpf_program *prog,
 					   enum bpf_attach_type type)
 {
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	prog->expected_attach_type = type;
@@ -9349,7 +9354,7 @@ __u32 bpf_program__flags(const struct bpf_program *prog)
 
 int bpf_program__set_flags(struct bpf_program *prog, __u32 flags)
 {
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	prog->prog_flags = flags;
@@ -9363,7 +9368,7 @@ __u32 bpf_program__log_level(const struct bpf_program *prog)
 
 int bpf_program__set_log_level(struct bpf_program *prog, __u32 log_level)
 {
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	prog->log_level = log_level;
@@ -9382,7 +9387,7 @@ int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log
 		return libbpf_err(-EINVAL);
 	if (prog->log_size > UINT_MAX)
 		return libbpf_err(-EINVAL);
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EBUSY);
 
 	prog->log_buf = log_buf;
@@ -10299,7 +10304,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
 
 int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
 {
-	if (map->obj->loaded || map->reused)
+	if (map->obj->state >= OBJ_LOADED || map->reused)
 		return libbpf_err(-EBUSY);
 
 	if (map->mmaped) {
@@ -10345,7 +10350,7 @@ int bpf_map__set_initial_value(struct bpf_map *map,
 {
 	size_t actual_sz;
 
-	if (map->obj->loaded || map->reused)
+	if (map->obj->state >= OBJ_LOADED || map->reused)
 		return libbpf_err(-EBUSY);
 
 	if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
@@ -13666,7 +13671,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
 	if (!prog || attach_prog_fd < 0)
 		return libbpf_err(-EINVAL);
 
-	if (prog->obj->loaded)
+	if (prog->obj->state >= OBJ_LOADED)
 		return libbpf_err(-EINVAL);
 
 	if (attach_prog_fd && !attach_func_name) {
-- 
2.48.1



* [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-02-28 17:52 [PATCH bpf-next 0/3] Introduce bpf_object__prepare Mykyta Yatsenko
  2025-02-28 17:52 ` [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object Mykyta Yatsenko
@ 2025-02-28 17:52 ` Mykyta Yatsenko
  2025-02-28 22:31   ` Andrii Nakryiko
  2025-03-01  8:12   ` Eduard Zingerman
  2025-02-28 17:52 ` [PATCH bpf-next 3/3] selftests/bpf: add tests for bpf_object__prepare Mykyta Yatsenko
  2025-02-28 22:39 ` [PATCH bpf-next 0/3] Introduce bpf_object__prepare Andrii Nakryiko
  3 siblings, 2 replies; 13+ messages in thread
From: Mykyta Yatsenko @ 2025-02-28 17:52 UTC (permalink / raw)
  To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Introduce the bpf_object__prepare API: an additional intermediate step
that executes all the steps bpf_object__load runs before the actual
loading of BPF programs.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
 tools/lib/bpf/libbpf.c   | 161 +++++++++++++++++++++++++++------------
 tools/lib/bpf/libbpf.h   |   9 +++
 tools/lib/bpf/libbpf.map |   1 +
 3 files changed, 121 insertions(+), 50 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 9ced1ce2334c..dd2f64903c3b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
 
 int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
 {
-	if (map->obj->state >= OBJ_LOADED)
+	if (map->obj->state >= OBJ_PREPARED)
 		return libbpf_err(-EBUSY);
 
 	map->autocreate = autocreate;
@@ -4952,7 +4952,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
 
 int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
 {
-	if (map->obj->state >= OBJ_LOADED)
+	if (map->obj->state >= OBJ_PREPARED)
 		return libbpf_err(-EBUSY);
 
 	map->def.max_entries = max_entries;
@@ -5199,7 +5199,7 @@ static void bpf_map__destroy(struct bpf_map *map);
 
 static bool map_is_created(const struct bpf_map *map)
 {
-	return map->obj->state >= OBJ_LOADED || map->reused;
+	return map->obj->state >= OBJ_PREPARED || map->reused;
 }
 
 static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
@@ -7901,13 +7901,6 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 	size_t i;
 	int err;
 
-	for (i = 0; i < obj->nr_programs; i++) {
-		prog = &obj->programs[i];
-		err = bpf_object__sanitize_prog(obj, prog);
-		if (err)
-			return err;
-	}
-
 	for (i = 0; i < obj->nr_programs; i++) {
 		prog = &obj->programs[i];
 		if (prog_is_subprog(obj, prog))
@@ -7933,6 +7926,21 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 	return 0;
 }
 
+static int bpf_object_prepare_progs(struct bpf_object *obj)
+{
+	struct bpf_program *prog;
+	size_t i;
+	int err;
+
+	for (i = 0; i < obj->nr_programs; i++) {
+		prog = &obj->programs[i];
+		err = bpf_object__sanitize_prog(obj, prog);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
 static const struct bpf_sec_def *find_sec_def(const char *sec_name);
 
 static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
@@ -8549,9 +8557,72 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
 	return 0;
 }
 
+static void bpf_object_unpin(struct bpf_object *obj)
+{
+	int i;
+
+	/* unpin any maps that were auto-pinned during load */
+	for (i = 0; i < obj->nr_maps; i++)
+		if (obj->maps[i].pinned && !obj->maps[i].reused)
+			bpf_map__unpin(&obj->maps[i], NULL);
+}
+
+static void bpf_object_post_load_cleanup(struct bpf_object *obj)
+{
+	int i;
+
+	/* clean up fd_array */
+	zfree(&obj->fd_array);
+
+	/* clean up module BTFs */
+	for (i = 0; i < obj->btf_module_cnt; i++) {
+		close(obj->btf_modules[i].fd);
+		btf__free(obj->btf_modules[i].btf);
+		free(obj->btf_modules[i].name);
+	}
+	free(obj->btf_modules);
+
+	/* clean up vmlinux BTF */
+	btf__free(obj->btf_vmlinux);
+	obj->btf_vmlinux = NULL;
+}
+
+static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_path)
+{
+	int err;
+
+	if (!obj)
+		return -EINVAL;
+
+	if (obj->state >= OBJ_PREPARED) {
+		pr_warn("object '%s': prepare loading can't be attempted twice\n", obj->name);
+		return -EINVAL;
+	}
+
+	err = bpf_object_prepare_token(obj);
+	err = err ? : bpf_object__probe_loading(obj);
+	err = err ? : bpf_object__load_vmlinux_btf(obj, false);
+	err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
+	err = err ? : bpf_object__sanitize_maps(obj);
+	err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
+	err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
+	err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
+	err = err ? : bpf_object__sanitize_and_load_btf(obj);
+	err = err ? : bpf_object__create_maps(obj);
+	err = err ? : bpf_object_prepare_progs(obj);
+	obj->state = OBJ_PREPARED;
+
+	if (err) {
+		bpf_object_unpin(obj);
+		bpf_object_unload(obj);
+		return err;
+	}
+	return 0;
+}
+
 static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
 {
-	int err, i;
+	int err;
 
 	if (!obj)
 		return libbpf_err(-EINVAL);
@@ -8571,17 +8642,12 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 		return libbpf_err(-LIBBPF_ERRNO__ENDIAN);
 	}
 
-	err = bpf_object_prepare_token(obj);
-	err = err ? : bpf_object__probe_loading(obj);
-	err = err ? : bpf_object__load_vmlinux_btf(obj, false);
-	err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
-	err = err ? : bpf_object__sanitize_maps(obj);
-	err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
-	err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
-	err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
-	err = err ? : bpf_object__sanitize_and_load_btf(obj);
-	err = err ? : bpf_object__create_maps(obj);
-	err = err ? : bpf_object__load_progs(obj, extra_log_level);
+	if (obj->state < OBJ_PREPARED) {
+		err = bpf_object_prepare(obj, target_btf_path);
+		if (err)
+			return libbpf_err(err);
+	}
+	err = bpf_object__load_progs(obj, extra_log_level);
 	err = err ? : bpf_object_init_prog_arrays(obj);
 	err = err ? : bpf_object_prepare_struct_ops(obj);
 
@@ -8593,35 +8659,22 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 			err = bpf_gen__finish(obj->gen_loader, obj->nr_programs, obj->nr_maps);
 	}
 
-	/* clean up fd_array */
-	zfree(&obj->fd_array);
+	bpf_object_post_load_cleanup(obj);
+	obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
 
-	/* clean up module BTFs */
-	for (i = 0; i < obj->btf_module_cnt; i++) {
-		close(obj->btf_modules[i].fd);
-		btf__free(obj->btf_modules[i].btf);
-		free(obj->btf_modules[i].name);
+	if (err) {
+		bpf_object_unpin(obj);
+		bpf_object_unload(obj);
+		pr_warn("failed to load object '%s'\n", obj->path);
+		return libbpf_err(err);
 	}
-	free(obj->btf_modules);
-
-	/* clean up vmlinux BTF */
-	btf__free(obj->btf_vmlinux);
-	obj->btf_vmlinux = NULL;
-
-	obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
-	if (err)
-		goto out;
 
 	return 0;
-out:
-	/* unpin any maps that were auto-pinned during load */
-	for (i = 0; i < obj->nr_maps; i++)
-		if (obj->maps[i].pinned && !obj->maps[i].reused)
-			bpf_map__unpin(&obj->maps[i], NULL);
+}
 
-	bpf_object_unload(obj);
-	pr_warn("failed to load object '%s'\n", obj->path);
-	return libbpf_err(err);
+int bpf_object__prepare(struct bpf_object *obj)
+{
+	return libbpf_err(bpf_object_prepare(obj, NULL));
 }
 
 int bpf_object__load(struct bpf_object *obj)
@@ -8871,8 +8924,8 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
 	if (!obj)
 		return libbpf_err(-ENOENT);
 
-	if (obj->state < OBJ_LOADED) {
-		pr_warn("object not yet loaded; load it first\n");
+	if (obj->state < OBJ_PREPARED) {
+		pr_warn("object not yet loaded/prepared; load/prepare it first\n");
 		return libbpf_err(-ENOENT);
 	}
 
@@ -9069,6 +9122,14 @@ void bpf_object__close(struct bpf_object *obj)
 	if (IS_ERR_OR_NULL(obj))
 		return;
 
+	/*
+	 * if user called bpf_object__prepare() without ever getting to
+	 * bpf_object__load(), we need to clean up stuff that is normally
+	 * cleaned up at the end of loading step
+	 */
+	if (obj->state == OBJ_PREPARED)
+		bpf_object_post_load_cleanup(obj);
+
 	usdt_manager_free(obj->usdt_man);
 	obj->usdt_man = NULL;
 
@@ -10304,7 +10365,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
 
 int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
 {
-	if (map->obj->state >= OBJ_LOADED || map->reused)
+	if (map->obj->state >= OBJ_PREPARED || map->reused)
 		return libbpf_err(-EBUSY);
 
 	if (map->mmaped) {
@@ -10350,7 +10411,7 @@ int bpf_map__set_initial_value(struct bpf_map *map,
 {
 	size_t actual_sz;
 
-	if (map->obj->state >= OBJ_LOADED || map->reused)
+	if (map->obj->state >= OBJ_PREPARED || map->reused)
 		return libbpf_err(-EBUSY);
 
 	if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 3020ee45303a..09e87998c64e 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -241,6 +241,15 @@ LIBBPF_API struct bpf_object *
 bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
 		     const struct bpf_object_open_opts *opts);
 
+/**
+ * @brief **bpf_object__prepare()** prepares BPF object for loading.
+ * @param obj Pointer to a valid BPF object instance returned by
+ * **bpf_object__open*()** API
+ * @return 0, on success; negative error code, otherwise, error code is
+ * stored in errno
+ */
+int bpf_object__prepare(struct bpf_object *obj);
+
 /**
  * @brief **bpf_object__load()** loads BPF object into kernel.
  * @param obj Pointer to a valid BPF object instance returned by
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b5a838de6f47..22edde0bf85e 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -438,4 +438,5 @@ LIBBPF_1.6.0 {
 		bpf_linker__new_fd;
 		btf__add_decl_attr;
 		btf__add_type_attr;
+		bpf_object__prepare;
 } LIBBPF_1.5.0;
-- 
2.48.1



* [PATCH bpf-next 3/3] selftests/bpf: add tests for bpf_object__prepare
  2025-02-28 17:52 [PATCH bpf-next 0/3] Introduce bpf_object__prepare Mykyta Yatsenko
  2025-02-28 17:52 ` [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object Mykyta Yatsenko
  2025-02-28 17:52 ` [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load Mykyta Yatsenko
@ 2025-02-28 17:52 ` Mykyta Yatsenko
  2025-02-28 22:39 ` [PATCH bpf-next 0/3] Introduce bpf_object__prepare Andrii Nakryiko
  3 siblings, 0 replies; 13+ messages in thread
From: Mykyta Yatsenko @ 2025-02-28 17:52 UTC (permalink / raw)
  To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Add selftests checking that bpf_object__prepare successfully creates
maps before the load step.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
 .../selftests/bpf/prog_tests/prepare.c        | 99 +++++++++++++++++++
 tools/testing/selftests/bpf/progs/prepare.c   | 28 ++++++
 2 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/prepare.c
 create mode 100644 tools/testing/selftests/bpf/progs/prepare.c

diff --git a/tools/testing/selftests/bpf/prog_tests/prepare.c b/tools/testing/selftests/bpf/prog_tests/prepare.c
new file mode 100644
index 000000000000..fb5cdad97116
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/prepare.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta */
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "prepare.skel.h"
+
+static bool check_prepared(struct bpf_object *obj)
+{
+	bool is_prepared = true;
+	const struct bpf_map *map;
+
+	bpf_object__for_each_map(map, obj) {
+		if (bpf_map__fd(map) < 0)
+			is_prepared = false;
+	}
+
+	return is_prepared;
+}
+
+static void test_prepare_no_load(void)
+{
+	struct prepare *skel;
+	int err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+	);
+
+	skel = prepare__open();
+	if (!ASSERT_OK_PTR(skel, "prepare__open"))
+		return;
+
+	if (!ASSERT_FALSE(check_prepared(skel->obj), "not check_prepared"))
+		goto cleanup;
+
+	err = bpf_object__prepare(skel->obj);
+
+	if (!ASSERT_TRUE(check_prepared(skel->obj), "check_prepared"))
+		goto cleanup;
+
+	if (!ASSERT_OK(err, "bpf_object__prepare"))
+		goto cleanup;
+
+cleanup:
+	prepare__destroy(skel);
+}
+
+static void test_prepare_load(void)
+{
+	struct prepare *skel;
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+	);
+
+	skel = prepare__open();
+	if (!ASSERT_OK_PTR(skel, "prepare__open"))
+		return;
+
+	if (!ASSERT_FALSE(check_prepared(skel->obj), "not check_prepared"))
+		goto cleanup;
+
+	err = bpf_object__prepare(skel->obj);
+	if (!ASSERT_OK(err, "bpf_object__prepare"))
+		goto cleanup;
+
+	err = prepare__load(skel);
+	if (!ASSERT_OK(err, "prepare__load"))
+		goto cleanup;
+
+	if (!ASSERT_TRUE(check_prepared(skel->obj), "check_prepared"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.program);
+	if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+		goto cleanup;
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		goto cleanup;
+
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		goto cleanup;
+
+	ASSERT_EQ(skel->bss->err, 0, "err");
+
+cleanup:
+	prepare__destroy(skel);
+}
+
+void test_prepare(void)
+{
+	if (test__start_subtest("prepare_load"))
+		test_prepare_load();
+	if (test__start_subtest("prepare_no_load"))
+		test_prepare_no_load();
+}
diff --git a/tools/testing/selftests/bpf/progs/prepare.c b/tools/testing/selftests/bpf/progs/prepare.c
new file mode 100644
index 000000000000..1f1dd547e4ee
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/prepare.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+//#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+int err;
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 4096);
+} ringbuf SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u32);
+} array_map SEC(".maps");
+
+SEC("cgroup_skb/egress")
+int program(struct __sk_buff *skb)
+{
+	err = 0;
+	return 0;
+}
-- 
2.48.1



* Re: [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object
  2025-02-28 17:52 ` [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object Mykyta Yatsenko
@ 2025-02-28 22:20   ` Andrii Nakryiko
  2025-02-28 22:35   ` Andrii Nakryiko
  1 sibling, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-02-28 22:20 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
	Mykyta Yatsenko

On Fri, Feb 28, 2025 at 9:53 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add struct bpf_object_state, substitute bpf_object member loaded by
> state. State could be OBJ_OPEN - indicates that bpf_object was just
> created, OBJ_PREPARED - prepare step will be introduced in the next
> patch, OBJ_LOADED - indicates that bpf_object is loaded, similar to
> loaded=true currently.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
>  tools/lib/bpf/libbpf.c | 47 +++++++++++++++++++++++-------------------
>  1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 899e98225f3b..9ced1ce2334c 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -670,11 +670,18 @@ struct elf_state {
>
>  struct usdt_manager;
>
> +enum bpf_object_state {
> +       OBJ_OPEN,
> +       OBJ_PREPARED,
> +       OBJ_LOADED,
> +};
> +
>  struct bpf_object {
>         char name[BPF_OBJ_NAME_LEN];
>         char license[64];
>         __u32 kern_version;
>
> +       enum bpf_object_state state;
>         struct bpf_program *programs;
>         size_t nr_programs;
>         struct bpf_map *maps;
> @@ -686,7 +693,6 @@ struct bpf_object {
>         int nr_extern;
>         int kconfig_map_idx;
>
> -       bool loaded;
>         bool has_subcalls;
>         bool has_rodata;
>
> @@ -1511,7 +1517,7 @@ static struct bpf_object *bpf_object__new(const char *path,
>         obj->kconfig_map_idx = -1;
>
>         obj->kern_version = get_kernel_version();
> -       obj->loaded = false;
> +       obj->state  = OBJ_OPEN;
>
>         return obj;
>  }
> @@ -4852,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
>
>  int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
>  {
> -       if (map->obj->loaded)
> +       if (map->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         map->autocreate = autocreate;
> @@ -4946,7 +4952,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
>
>  int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
>  {
> -       if (map->obj->loaded)
> +       if (map->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         map->def.max_entries = max_entries;
> @@ -5193,7 +5199,7 @@ static void bpf_map__destroy(struct bpf_map *map);
>
>  static bool map_is_created(const struct bpf_map *map)
>  {
> -       return map->obj->loaded || map->reused;
> +       return map->obj->state >= OBJ_LOADED || map->reused;
>  }
>
>  static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
> @@ -8550,7 +8556,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
>         if (!obj)
>                 return libbpf_err(-EINVAL);
>
> -       if (obj->loaded) {
> +       if (obj->state >= OBJ_LOADED) {
>                 pr_warn("object '%s': load can't be attempted twice\n", obj->name);
>                 return libbpf_err(-EINVAL);
>         }
> @@ -8602,8 +8608,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
>         btf__free(obj->btf_vmlinux);
>         obj->btf_vmlinux = NULL;
>
> -       obj->loaded = true; /* doesn't matter if successfully or not */
> -
> +       obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
>         if (err)
>                 goto out;
>
> @@ -8866,7 +8871,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
>         if (!obj)
>                 return libbpf_err(-ENOENT);
>
> -       if (!obj->loaded) {
> +       if (obj->state < OBJ_LOADED) {

this one seems ok in OBJ_PREPARED as well, all the maps will be
created by that time

>                 pr_warn("object not yet loaded; load it first\n");
>                 return libbpf_err(-ENOENT);
>         }
> @@ -8945,7 +8950,7 @@ int bpf_object__pin_programs(struct bpf_object *obj, const char *path)
>         if (!obj)
>                 return libbpf_err(-ENOENT);
>
> -       if (!obj->loaded) {
> +       if (obj->state < OBJ_LOADED) {
>                 pr_warn("object not yet loaded; load it first\n");
>                 return libbpf_err(-ENOENT);
>         }
> @@ -9132,7 +9137,7 @@ int bpf_object__btf_fd(const struct bpf_object *obj)
>
>  int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version)
>  {
> -       if (obj->loaded)
> +       if (obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EINVAL);
>
>         obj->kern_version = kern_version;
> @@ -9229,7 +9234,7 @@ bool bpf_program__autoload(const struct bpf_program *prog)
>
>  int bpf_program__set_autoload(struct bpf_program *prog, bool autoload)
>  {
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EINVAL);
>
>         prog->autoload = autoload;
> @@ -9261,7 +9266,7 @@ int bpf_program__set_insns(struct bpf_program *prog,
>  {
>         struct bpf_insn *insns;
>
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns));
> @@ -9304,7 +9309,7 @@ static int last_custom_sec_def_handler_id;
>
>  int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
>  {
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         /* if type is not changed, do nothing */
> @@ -9335,7 +9340,7 @@ enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program
>  int bpf_program__set_expected_attach_type(struct bpf_program *prog,
>                                            enum bpf_attach_type type)
>  {
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         prog->expected_attach_type = type;
> @@ -9349,7 +9354,7 @@ __u32 bpf_program__flags(const struct bpf_program *prog)
>
>  int bpf_program__set_flags(struct bpf_program *prog, __u32 flags)
>  {
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         prog->prog_flags = flags;
> @@ -9363,7 +9368,7 @@ __u32 bpf_program__log_level(const struct bpf_program *prog)
>
>  int bpf_program__set_log_level(struct bpf_program *prog, __u32 log_level)
>  {
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         prog->log_level = log_level;
> @@ -9382,7 +9387,7 @@ int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log
>                 return libbpf_err(-EINVAL);
>         if (prog->log_size > UINT_MAX)
>                 return libbpf_err(-EINVAL);
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EBUSY);
>
>         prog->log_buf = log_buf;
> @@ -10299,7 +10304,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
>
>  int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
>  {
> -       if (map->obj->loaded || map->reused)
> +       if (map->obj->state >= OBJ_LOADED || map->reused)

OBJ_PREPARED, maps can't be changed after that step

>                 return libbpf_err(-EBUSY);
>
>         if (map->mmaped) {
> @@ -10345,7 +10350,7 @@ int bpf_map__set_initial_value(struct bpf_map *map,
>  {
>         size_t actual_sz;
>
> -       if (map->obj->loaded || map->reused)
> +       if (map->obj->state >= OBJ_LOADED || map->reused)

ditto, I think we have to ban it after OBJ_PREPARED


>                 return libbpf_err(-EBUSY);
>
>         if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
> @@ -13666,7 +13671,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
>         if (!prog || attach_prog_fd < 0)
>                 return libbpf_err(-EINVAL);
>
> -       if (prog->obj->loaded)
> +       if (prog->obj->state >= OBJ_LOADED)
>                 return libbpf_err(-EINVAL);
>
>         if (attach_prog_fd && !attach_func_name) {
> --
> 2.48.1
>


* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-02-28 17:52 ` [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load Mykyta Yatsenko
@ 2025-02-28 22:31   ` Andrii Nakryiko
  2025-03-01  8:12   ` Eduard Zingerman
  1 sibling, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-02-28 22:31 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
	Mykyta Yatsenko

On Fri, Feb 28, 2025 at 9:53 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Introduce bpf_object__prepare API: additional intermediate step,
> executing all steps that bpf_object__load is running before the actual
> loading of BPF programs.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
>  tools/lib/bpf/libbpf.c   | 161 +++++++++++++++++++++++++++------------
>  tools/lib/bpf/libbpf.h   |   9 +++
>  tools/lib/bpf/libbpf.map |   1 +
>  3 files changed, 121 insertions(+), 50 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 9ced1ce2334c..dd2f64903c3b 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
>
>  int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
>  {
> -       if (map->obj->state >= OBJ_LOADED)
> +       if (map->obj->state >= OBJ_PREPARED)

ah, ok, so you've decided to change that in this patch... that works
as well, I guess

>                 return libbpf_err(-EBUSY);
>
>         map->autocreate = autocreate;
> @@ -4952,7 +4952,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
>
>  int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
>  {
> -       if (map->obj->state >= OBJ_LOADED)
> +       if (map->obj->state >= OBJ_PREPARED)
>                 return libbpf_err(-EBUSY);
>
>         map->def.max_entries = max_entries;
> @@ -5199,7 +5199,7 @@ static void bpf_map__destroy(struct bpf_map *map);
>
>  static bool map_is_created(const struct bpf_map *map)

we should use this helper in bpf_map__set_max_entries() and other
setters like that. We'd better error out when someone is trying to set a
different value size for a map that was explicitly set from an FD with
bpf_map__reuse_fd()... Can you please add a patch that fixes this
up (before all this OBJ_PREPARED refactoring)?
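
Something along these lines, as a rough sketch (the rest of the existing
setter body would stay as-is):

	int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
	{
		/* reject changes once the map exists in the kernel, which also
		 * covers maps reused via bpf_map__reuse_fd()
		 */
		if (map_is_created(map))
			return libbpf_err(-EBUSY);

		map->def.max_entries = max_entries;
		/* ... rest of the existing setter stays unchanged ... */
		return 0;
	}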

>  {
> -       return map->obj->state >= OBJ_LOADED || map->reused;
> +       return map->obj->state >= OBJ_PREPARED || map->reused;
>  }
>
>  static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
> @@ -7901,13 +7901,6 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
>         size_t i;
>         int err;
>
> -       for (i = 0; i < obj->nr_programs; i++) {
> -               prog = &obj->programs[i];
> -               err = bpf_object__sanitize_prog(obj, prog);
> -               if (err)
> -                       return err;
> -       }
> -
>         for (i = 0; i < obj->nr_programs; i++) {
>                 prog = &obj->programs[i];
>                 if (prog_is_subprog(obj, prog))
> @@ -7933,6 +7926,21 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
>         return 0;
>  }
>
> +static int bpf_object_prepare_progs(struct bpf_object *obj)
> +{
> +       struct bpf_program *prog;
> +       size_t i;
> +       int err;
> +
> +       for (i = 0; i < obj->nr_programs; i++) {
> +               prog = &obj->programs[i];
> +               err = bpf_object__sanitize_prog(obj, prog);
> +               if (err)
> +                       return err;
> +       }
> +       return 0;
> +}
> +
>  static const struct bpf_sec_def *find_sec_def(const char *sec_name);
>
>  static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
> @@ -8549,9 +8557,72 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
>         return 0;
>  }
>
> +static void bpf_object_unpin(struct bpf_object *obj)
> +{
> +       int i;
> +
> +       /* unpin any maps that were auto-pinned during load */
> +       for (i = 0; i < obj->nr_maps; i++)
> +               if (obj->maps[i].pinned && !obj->maps[i].reused)
> +                       bpf_map__unpin(&obj->maps[i], NULL);
> +}
> +
> +static void bpf_object_post_load_cleanup(struct bpf_object *obj)
> +{
> +       int i;
> +
> +       /* clean up fd_array */
> +       zfree(&obj->fd_array);
> +
> +       /* clean up module BTFs */
> +       for (i = 0; i < obj->btf_module_cnt; i++) {
> +               close(obj->btf_modules[i].fd);
> +               btf__free(obj->btf_modules[i].btf);
> +               free(obj->btf_modules[i].name);
> +       }
> +       free(obj->btf_modules);
> +
> +       /* clean up vmlinux BTF */
> +       btf__free(obj->btf_vmlinux);
> +       obj->btf_vmlinux = NULL;
> +}
> +
> +static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_path)
> +{
> +       int err;
> +
> +       if (!obj)
> +               return -EINVAL;

meh, drop this please, can't be NULL

> +
> +       if (obj->state >= OBJ_PREPARED) {
> +               pr_warn("object '%s': prepare loading can't be attempted twice\n", obj->name);
> +               return -EINVAL;
> +       }
> +
> +       err = bpf_object_prepare_token(obj);
> +       err = err ? : bpf_object__probe_loading(obj);
> +       err = err ? : bpf_object__load_vmlinux_btf(obj, false);
> +       err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
> +       err = err ? : bpf_object__sanitize_maps(obj);
> +       err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
> +       err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> +       err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> +       err = err ? : bpf_object__sanitize_and_load_btf(obj);
> +       err = err ? : bpf_object__create_maps(obj);
> +       err = err ? : bpf_object_prepare_progs(obj);
> +       obj->state = OBJ_PREPARED;
> +
> +       if (err) {
> +               bpf_object_unpin(obj);
> +               bpf_object_unload(obj);
> +               return err;
> +       }
> +       return 0;
> +}
> +
>  static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
>  {
> -       int err, i;
> +       int err;
>
>         if (!obj)
>                 return libbpf_err(-EINVAL);
> @@ -8571,17 +8642,12 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
>                 return libbpf_err(-LIBBPF_ERRNO__ENDIAN);
>         }
>
> -       err = bpf_object_prepare_token(obj);
> -       err = err ? : bpf_object__probe_loading(obj);
> -       err = err ? : bpf_object__load_vmlinux_btf(obj, false);
> -       err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
> -       err = err ? : bpf_object__sanitize_maps(obj);
> -       err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
> -       err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> -       err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> -       err = err ? : bpf_object__sanitize_and_load_btf(obj);
> -       err = err ? : bpf_object__create_maps(obj);
> -       err = err ? : bpf_object__load_progs(obj, extra_log_level);
> +       if (obj->state < OBJ_PREPARED) {
> +               err = bpf_object_prepare(obj, target_btf_path);
> +               if (err)
> +                       return libbpf_err(err);
> +       }
> +       err = bpf_object__load_progs(obj, extra_log_level);
>         err = err ? : bpf_object_init_prog_arrays(obj);
>         err = err ? : bpf_object_prepare_struct_ops(obj);
>
> @@ -8593,35 +8659,22 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
>                         err = bpf_gen__finish(obj->gen_loader, obj->nr_programs, obj->nr_maps);
>         }
>
> -       /* clean up fd_array */
> -       zfree(&obj->fd_array);
> +       bpf_object_post_load_cleanup(obj);
> +       obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
>
> -       /* clean up module BTFs */
> -       for (i = 0; i < obj->btf_module_cnt; i++) {
> -               close(obj->btf_modules[i].fd);
> -               btf__free(obj->btf_modules[i].btf);
> -               free(obj->btf_modules[i].name);
> +       if (err) {
> +               bpf_object_unpin(obj);
> +               bpf_object_unload(obj);
> +               pr_warn("failed to load object '%s'\n", obj->path);
> +               return libbpf_err(err);
>         }
> -       free(obj->btf_modules);
> -
> -       /* clean up vmlinux BTF */
> -       btf__free(obj->btf_vmlinux);
> -       obj->btf_vmlinux = NULL;
> -
> -       obj->state = OBJ_LOADED;/* doesn't matter if successfully or not */
> -       if (err)
> -               goto out;
>
>         return 0;
> -out:
> -       /* unpin any maps that were auto-pinned during load */
> -       for (i = 0; i < obj->nr_maps; i++)
> -               if (obj->maps[i].pinned && !obj->maps[i].reused)
> -                       bpf_map__unpin(&obj->maps[i], NULL);
> +}
>
> -       bpf_object_unload(obj);
> -       pr_warn("failed to load object '%s'\n", obj->path);
> -       return libbpf_err(err);
> +int bpf_object__prepare(struct bpf_object *obj)
> +{
> +       return libbpf_err(bpf_object_prepare(obj, NULL));
>  }
>
>  int bpf_object__load(struct bpf_object *obj)
> @@ -8871,8 +8924,8 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
>         if (!obj)
>                 return libbpf_err(-ENOENT);
>
> -       if (obj->state < OBJ_LOADED) {
> -               pr_warn("object not yet loaded; load it first\n");
> +       if (obj->state < OBJ_PREPARED) {
> +               pr_warn("object not yet loaded/prepared; load/prepare it first\n");

let's keep the original simpler message, this is way too much of
either/or. Most users will never care about bpf_object__prepare()
anyways.

>                 return libbpf_err(-ENOENT);
>         }
>
> @@ -9069,6 +9122,14 @@ void bpf_object__close(struct bpf_object *obj)
>         if (IS_ERR_OR_NULL(obj))
>                 return;
>
> +       /*
> +        * if user called bpf_object__prepare() without ever getting to
> +        * bpf_object__load(), we need to clean up stuff that is normally
> +        * cleaned up at the end of loading step
> +        */
> +       if (obj->state == OBJ_PREPARED)
> +               bpf_object_post_load_cleanup(obj);
> +

let's make bpf_object_post_load_cleanup() idempotent, so calling it
multiple times won't hurt, it should be pretty simple (just set
obj->btf_module_cnt to zero)
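
Roughly, as a sketch of the idempotent variant (reset the counter and
NULL the pointers so a repeated call becomes a no-op):

	static void bpf_object_post_load_cleanup(struct bpf_object *obj)
	{
		int i;

		/* clean up fd_array */
		zfree(&obj->fd_array);

		/* clean up module BTFs; resetting the count and pointer makes
		 * a second call a no-op
		 */
		for (i = 0; i < obj->btf_module_cnt; i++) {
			close(obj->btf_modules[i].fd);
			btf__free(obj->btf_modules[i].btf);
			free(obj->btf_modules[i].name);
		}
		obj->btf_module_cnt = 0;
		zfree(&obj->btf_modules);

		/* clean up vmlinux BTF; btf__free(NULL) and the NULL reset
		 * keep this safe to call again
		 */
		btf__free(obj->btf_vmlinux);
		obj->btf_vmlinux = NULL;
	}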

>         usdt_manager_free(obj->usdt_man);
>         obj->usdt_man = NULL;
>
> @@ -10304,7 +10365,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
>
>  int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
>  {
> -       if (map->obj->state >= OBJ_LOADED || map->reused)
> +       if (map->obj->state >= OBJ_PREPARED || map->reused)
>                 return libbpf_err(-EBUSY);
>
>         if (map->mmaped) {
> @@ -10350,7 +10411,7 @@ int bpf_map__set_initial_value(struct bpf_map *map,
>  {
>         size_t actual_sz;
>
> -       if (map->obj->state >= OBJ_LOADED || map->reused)
> +       if (map->obj->state >= OBJ_PREPARED || map->reused)
>                 return libbpf_err(-EBUSY);
>
>         if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 3020ee45303a..09e87998c64e 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -241,6 +241,15 @@ LIBBPF_API struct bpf_object *
>  bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
>                      const struct bpf_object_open_opts *opts);
>
> +/**
> + * @brief **bpf_object__prepare()** prepares BPF object for loading.

a bit too brief. Please add that the preparation step performs ELF
processing and relocations, prepares the final state of BPF program
instructions (accessible with bpf_program__insns()), creates and
(potentially) pins maps, and stops short of loading BPF programs.

This is an important aspect that veristat and others will be relying
on, so it should be documented properly.
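
For instance, something along these lines (the exact wording is just a
sketch):

	/**
	 * @brief **bpf_object__prepare()** prepares BPF object for loading:
	 * performs ELF processing, relocations, prepares final state of BPF
	 * program instructions (accessible with **bpf_program__insns()**),
	 * creates and (potentially) pins maps, and stops short of loading
	 * the BPF programs themselves.
	 * ...
	 */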

> + * @param obj Pointer to a valid BPF object instance returned by
> + * **bpf_object__open*()** API
> + * @return 0, on success; negative error code, otherwise, error code is
> + * stored in errno
> + */
> +int bpf_object__prepare(struct bpf_object *obj);
> +
>  /**
>   * @brief **bpf_object__load()** loads BPF object into kernel.
>   * @param obj Pointer to a valid BPF object instance returned by
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index b5a838de6f47..22edde0bf85e 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -438,4 +438,5 @@ LIBBPF_1.6.0 {
>                 bpf_linker__new_fd;
>                 btf__add_decl_attr;
>                 btf__add_type_attr;
> +               bpf_object__prepare;

this is a sorted list

pw-bot: cr

>  } LIBBPF_1.5.0;
> --
> 2.48.1
>


* Re: [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object
  2025-02-28 17:52 ` [PATCH bpf-next 1/3] libbpf: introduce more granular state for bpf_object Mykyta Yatsenko
  2025-02-28 22:20   ` Andrii Nakryiko
@ 2025-02-28 22:35   ` Andrii Nakryiko
  1 sibling, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-02-28 22:35 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
	Mykyta Yatsenko

On Fri, Feb 28, 2025 at 9:53 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add struct bpf_object_state, substitute bpf_object member loaded by
> state. State could be OBJ_OPEN - indicates that bpf_object was just
> created, OBJ_PREPARED - prepare step will be introduced in the next
> patch, OBJ_LOADED - indicates that bpf_object is loaded, similar to
> loaded=true currently.
>

This is a bit too low-level and explains the "what" at a very mechanistic
level, but not really the higher-level "what" and "why". Try to make it a
bit more useful for future readers. E.g., something along the lines of
"we are going to split the load step into preparation and loading, to
allow more flexibility when working with bpf_object, and this patch
adds a finer-grained object state enum instead of the previous boolean
loaded flag".

> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
>  tools/lib/bpf/libbpf.c | 47 +++++++++++++++++++++++-------------------
>  1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 899e98225f3b..9ced1ce2334c 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -670,11 +670,18 @@ struct elf_state {
>
>  struct usdt_manager;
>

[...]


* Re: [PATCH bpf-next 0/3] Introduce bpf_object__prepare
  2025-02-28 17:52 [PATCH bpf-next 0/3] Introduce bpf_object__prepare Mykyta Yatsenko
                   ` (2 preceding siblings ...)
  2025-02-28 17:52 ` [PATCH bpf-next 3/3] selftests/bpf: add tests for bpf_object__prepare Mykyta Yatsenko
@ 2025-02-28 22:39 ` Andrii Nakryiko
  3 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-02-28 22:39 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
	Mykyta Yatsenko

On Fri, Feb 28, 2025 at 9:53 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Introduce a new libbpf API function bpf_object__prepare enabling more
> granular control over the process of bpf_object loading.
> bpf_object__prepare runs the same steps that bpf_object__load is running,
> before the actual loading of BPF programs.
> This API could be useful when we need access to initialized fields of
> bpf_object before program loading, for example: currently we can't pass
> bpf_token into bpf_program__set_attach_target, because token initialization
> is done during loading.
>

I think this cover letter and commit messages in patch 1/2 could use a
bit more work. Here, I'd elaborate on anticipated use cases (and there
are a few already):

  1) taking advantage of the BPF token for freplace programs that might
need to look up BTF of other programs (I'd also mention that we don't
want to move BPF token creation to the open step, as the open step is a
"no privilege assumption" step, so that tools like bpftool can generate
a skeleton, discover the structure of a BPF object, etc.).

  2) stopping at prepare gives users finalized BPF program
instructions (with subprogs appended, everything relocated and
finalized, etc.). That property can be taken advantage of by
veristat (and similar tools) that might want to process one program at
a time but would like to avoid relatively slow ELF parsing and
processing; even the BPF selftests themselves (the RUN_TESTS part, at
least) would benefit from this by eliminating the waste of re-processing
the ELF many times over.
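
As a rough illustration of this second use case (hypothetical file name,
error handling trimmed):

	struct bpf_object *obj = bpf_object__open_file("progs.bpf.o", NULL);
	struct bpf_program *prog;

	if (!obj || bpf_object__prepare(obj))
		return;

	bpf_object__for_each_program(prog, obj) {
		/* finalized, relocated instructions, no ELF re-parsing and
		 * no kernel load required to get here
		 */
		const struct bpf_insn *insns = bpf_program__insns(prog);
		size_t insn_cnt = bpf_program__insn_cnt(prog);

		/* ... process/verify one program at a time ... */
	}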


I.e., think about writing this for someone that doesn't have enough
context. They should get at least some of it from your descriptions.

> Mykyta Yatsenko (3):
>   libbpf: introduce more granular state for bpf_object
>   libbpf: split bpf object load into prepare/load
>   selftests/bpf: add tests for bpf_object__prepare
>
>  tools/lib/bpf/libbpf.c                        | 194 ++++++++++++------
>  tools/lib/bpf/libbpf.h                        |   9 +
>  tools/lib/bpf/libbpf.map                      |   1 +
>  .../selftests/bpf/prog_tests/prepare.c        |  99 +++++++++
>  tools/testing/selftests/bpf/progs/prepare.c   |  28 +++
>  5 files changed, 267 insertions(+), 64 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/prepare.c
>  create mode 100644 tools/testing/selftests/bpf/progs/prepare.c
>
> --
> 2.48.1
>


* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-02-28 17:52 ` [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load Mykyta Yatsenko
  2025-02-28 22:31   ` Andrii Nakryiko
@ 2025-03-01  8:12   ` Eduard Zingerman
  2025-03-01 21:45     ` Mykyta Yatsenko
  1 sibling, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2025-03-01  8:12 UTC (permalink / raw)
  To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
  Cc: Mykyta Yatsenko

On Fri, 2025-02-28 at 17:52 +0000, Mykyta Yatsenko wrote:

[...]

> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 9ced1ce2334c..dd2f64903c3b 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
>  
>  int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
>  {
> -	if (map->obj->state >= OBJ_LOADED)
> +	if (map->obj->state >= OBJ_PREPARED)
>  		return libbpf_err(-EBUSY);

I looked through the logic in patches #1 and #2 and the changes look correct.
Running tests under valgrind does not show issues with this feature.
The only ask from my side is to consider doing ==/!= comparisons in
cases like the above. E.g. it seems that `map->obj->state != OBJ_OPENED`
is a bit simpler to understand when reading the condition above.
Or maybe that's just me.

>  	map->autocreate = autocreate;

[...]



* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-03-01  8:12   ` Eduard Zingerman
@ 2025-03-01 21:45     ` Mykyta Yatsenko
  2025-03-03 21:38       ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Mykyta Yatsenko @ 2025-03-01 21:45 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team
  Cc: Mykyta Yatsenko

On 01/03/2025 08:12, Eduard Zingerman wrote:
> On Fri, 2025-02-28 at 17:52 +0000, Mykyta Yatsenko wrote:
>
> [...]
>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 9ced1ce2334c..dd2f64903c3b 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
>>   
>>   int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
>>   {
>> -	if (map->obj->state >= OBJ_LOADED)
>> +	if (map->obj->state >= OBJ_PREPARED)
>>   		return libbpf_err(-EBUSY);
> I looked through logic in patches #1 and #2 and changes look correct.
> Running tests under valgrind does not show issues with this feature.
> The only ask from my side is to consider doing ==/!= comparisons in
> cases like above. E.g. it seems that `map->obj->state != OBJ_OPENED`
> is a bit simpler to understand when reading condition above.
> Or maybe that's just me.
I'm not sure about this one. >= or < checks the state relative to the
operand more flexibly; for example, `map->obj->state >= OBJ_PREPARED` is
read as "is the object in at least the PREPARED state". Perhaps, if we
add more states, these >/< checks will not require any changes, while
==/!= may. I guess this also depends on what we actually want to check
here: is the state at least PREPARED, or is the state not the initial
OPENED.
Not a strong opinion, though; happy to flip the code to ==/!=.
>   
>>   	map->autocreate = autocreate;
> [...]
>



* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-03-01 21:45     ` Mykyta Yatsenko
@ 2025-03-03 21:38       ` Andrii Nakryiko
  2025-03-03 22:04         ` Eduard Zingerman
  0 siblings, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2025-03-03 21:38 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team,
	Mykyta Yatsenko

On Sat, Mar 1, 2025 at 1:45 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> On 01/03/2025 08:12, Eduard Zingerman wrote:
> > On Fri, 2025-02-28 at 17:52 +0000, Mykyta Yatsenko wrote:
> >
> > [...]
> >
> >> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> >> index 9ced1ce2334c..dd2f64903c3b 100644
> >> --- a/tools/lib/bpf/libbpf.c
> >> +++ b/tools/lib/bpf/libbpf.c
> >> @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
> >>
> >>   int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
> >>   {
> >> -    if (map->obj->state >= OBJ_LOADED)
> >> +    if (map->obj->state >= OBJ_PREPARED)
> >>              return libbpf_err(-EBUSY);
> > I looked through logic in patches #1 and #2 and changes look correct.
> > Running tests under valgrind does not show issues with this feature.
> > The only ask from my side is to consider doing ==/!= comparisons in
> > cases like above. E.g. it seems that `map->obj->state != OBJ_OPENED`
> > is a bit simpler to understand when reading condition above.
> > Or maybe that's just me.
> I'm not sure about this one.  >= or < checks for state relative to
> operand more
> flexibly,for example `map->obj->state >= OBJ_PREPARED` is read as
> "is the object in at least PREPARED state". Perhaps, if we add more states,
> these >,< checks will not require any changes, while ==, != may.
> I guess this also depends on what we actually want to check here, is it that
> state at least PREPARED or the state is not initial OPENED.
> Not a strong opinion, though, happy to flip code to ==, !=.

Those steps are logically ordered, so >= and <= make more sense. If
we ever add an extra step somewhere in between the existing steps, most
checks will stay correct, while with equality a lot of them might need
to be adjusted to multiple equalities.
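
E.g., with a hypothetical extra state added in between, the range checks
keep working as-is, while equality checks would need another look:

	enum bpf_object_state {
		OBJ_OPEN,
		OBJ_RELOCATED,	/* hypothetical new intermediate state */
		OBJ_PREPARED,
		OBJ_LOADED,
	};

	if (map->obj->state >= OBJ_PREPARED)	/* still correct */
		return libbpf_err(-EBUSY);

	if (map->obj->state != OBJ_OPEN)	/* now also rejects OBJ_RELOCATED */
		return libbpf_err(-EBUSY);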

> >
> >>      map->autocreate = autocreate;
> > [...]
> >
>


* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-03-03 21:38       ` Andrii Nakryiko
@ 2025-03-03 22:04         ` Eduard Zingerman
  2025-03-03 23:27           ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2025-03-03 22:04 UTC (permalink / raw)
  To: Andrii Nakryiko, Mykyta Yatsenko
  Cc: bpf, ast, andrii, daniel, kafai, kernel-team, Mykyta Yatsenko

On Mon, 2025-03-03 at 13:38 -0800, Andrii Nakryiko wrote:
> On Sat, Mar 1, 2025 at 1:45 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> > 
> > On 01/03/2025 08:12, Eduard Zingerman wrote:
> > > On Fri, 2025-02-28 at 17:52 +0000, Mykyta Yatsenko wrote:
> > > 
> > > [...]
> > > 
> > > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > > index 9ced1ce2334c..dd2f64903c3b 100644
> > > > --- a/tools/lib/bpf/libbpf.c
> > > > +++ b/tools/lib/bpf/libbpf.c
> > > > @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
> > > > 
> > > >   int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
> > > >   {
> > > > -    if (map->obj->state >= OBJ_LOADED)
> > > > +    if (map->obj->state >= OBJ_PREPARED)
> > > >              return libbpf_err(-EBUSY);
> > > I looked through logic in patches #1 and #2 and changes look correct.
> > > Running tests under valgrind does not show issues with this feature.
> > > The only ask from my side is to consider doing ==/!= comparisons in
> > > cases like above. E.g. it seems that `map->obj->state != OBJ_OPENED`
> > > is a bit simpler to understand when reading condition above.
> > > Or maybe that's just me.
> > I'm not sure about this one.  >= or < checks for state relative to
> > operand more
> > flexibly,for example `map->obj->state >= OBJ_PREPARED` is read as
> > "is the object in at least PREPARED state". Perhaps, if we add more states,
> > these >,< checks will not require any changes, while ==, != may.
> > I guess this also depends on what we actually want to check here, is it that
> > state at least PREPARED or the state is not initial OPENED.
> > Not a strong opinion, though, happy to flip code to ==, !=.
> 
> Those steps are logically ordered, so >= and <= makes more sense. If
> we ever add one extra step somewhere in between existing steps, most
> checks will stay correct, while with equality a lot of them might need
> to be adjusted to multiple equalities.

As I said, for me personally it is easier to read "can set autocreate
only in the OPENED state", compared to "can't set autocreate if the state
is PREPARED or higher".
But whatever, I'm not a true C programmer anyway :)

[...]



* Re: [PATCH bpf-next 2/3] libbpf: split bpf object load into prepare/load
  2025-03-03 22:04         ` Eduard Zingerman
@ 2025-03-03 23:27           ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-03-03 23:27 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
	Mykyta Yatsenko

On Mon, Mar 3, 2025 at 2:04 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-03-03 at 13:38 -0800, Andrii Nakryiko wrote:
> > On Sat, Mar 1, 2025 at 1:45 PM Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> > >
> > > On 01/03/2025 08:12, Eduard Zingerman wrote:
> > > > On Fri, 2025-02-28 at 17:52 +0000, Mykyta Yatsenko wrote:
> > > >
> > > > [...]
> > > >
> > > > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > > > index 9ced1ce2334c..dd2f64903c3b 100644
> > > > > --- a/tools/lib/bpf/libbpf.c
> > > > > +++ b/tools/lib/bpf/libbpf.c
> > > > > @@ -4858,7 +4858,7 @@ bool bpf_map__autocreate(const struct bpf_map *map)
> > > > >
> > > > >   int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
> > > > >   {
> > > > > -    if (map->obj->state >= OBJ_LOADED)
> > > > > +    if (map->obj->state >= OBJ_PREPARED)
> > > > >              return libbpf_err(-EBUSY);
> > > > I looked through logic in patches #1 and #2 and changes look correct.
> > > > Running tests under valgrind does not show issues with this feature.
> > > > The only ask from my side is to consider doing ==/!= comparisons in
> > > > cases like above. E.g. it seems that `map->obj->state != OBJ_OPENED`
> > > > is a bit simpler to understand when reading condition above.
> > > > Or maybe that's just me.
> > > I'm not sure about this one.  >= or < checks for state relative to
> > > operand more
> > > flexibly,for example `map->obj->state >= OBJ_PREPARED` is read as
> > > "is the object in at least PREPARED state". Perhaps, if we add more states,
> > > these >,< checks will not require any changes, while ==, != may.
> > > I guess this also depends on what we actually want to check here, is it that
> > > state at least PREPARED or the state is not initial OPENED.
> > > Not a strong opinion, though, happy to flip code to ==, !=.
> >
> > Those steps are logically ordered, so >= and <= makes more sense. If
> > we ever add one extra step somewhere in between existing steps, most
> > checks will stay correct, while with equality a lot of them might need
> > to be adjusted to multiple equalities.
>
> As I said, for me personally it is easier to read "can set autocreate
> only in OPENED state", compared to "can't set autocreate if state is
> PREPARED of higher".

The latter, IMO. The PREPARED state is when maps are finalized, so if we
got there or beyond, we can't modify maps. For progs the analogous step is
LOADED.

> But whatever, I'm not a true C programmer anyway :)
>
> [...]
>

