* [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading
@ 2026-02-20 19:18 Mykyta Yatsenko
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-20 19:18 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
This series adds bpf_program__clone() to libbpf and converts veristat to
use it, replacing the costly per-program object re-opening pattern.
veristat needs to load each BPF program in isolation to collect
per-program verification statistics. Previously it achieved this by
opening a fresh bpf_object for every program, disabling autoload on all
but the target, and loading the whole object. For object files with many
programs this meant repeating ELF parsing and BTF processing N times.
Patch 1 introduces bpf_program__clone(), which loads a single program
from a prepared object into the kernel and returns an fd owned by the
caller. It resolves BTF attach targets, fills in func/line info,
fd_array, and other load parameters from the prepared object
automatically, while letting callers override any field via
bpf_prog_load_opts.
Patch 2 converts veristat to prepare the object once and clone each
program individually, eliminating redundant work.
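The prepare-once, clone-each pattern this series enables can be sketched as follows (the object path and error handling are illustrative, not taken from the series; this sketch cannot run without libbpf and a BPF object file):

```
/* Hypothetical usage sketch of bpf_program__clone(). */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <bpf/libbpf.h>

int load_each_program(const char *path)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int fd, err;

	obj = bpf_object__open_file(path, NULL);
	if (!obj)
		return -errno;

	/* ELF parsing and BTF processing happen once here */
	err = bpf_object__prepare(obj);
	if (err)
		goto out;

	bpf_object__for_each_program(prog, obj) {
		/* NULL opts: all load parameters derived from the object */
		fd = bpf_program__clone(prog, NULL);
		if (fd < 0) {
			fprintf(stderr, "failed to load %s: %d\n",
				bpf_program__name(prog), fd);
			continue; /* keep going for remaining programs */
		}
		/* ... inspect the program via fd ... */
		close(fd); /* fd is owned by the caller */
	}
out:
	bpf_object__close(obj);
	return err;
}
```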
Performance
Tested on selftests: 918 objects, ~4270 programs:
- Wall time: 36.88s -> 23.18s (37% faster)
- User time: 20.80s -> 16.07s (23% faster)
- Kernel time: 12.07s -> 6.06s (50% faster)
Per-program loading also improves coverage: 83 programs that previously
failed now succeed.
Known regression:
- Program-containing maps (PROG_ARRAY, DEVMAP, CPUMAP) track owner
program type. Programs with incompatible attributes loaded against
a shared map will be rejected. This is expected kernel behavior.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Changes in v2:
- Removed map cloning entirely (libbpf.c)
- Renamed bpf_prog_clone() -> bpf_program__clone()
- Removed unnecessary obj NULL check (libbpf.c)
- Fixed opts handling — no longer mutates caller's opts (libbpf.c)
- Link to v1: https://lore.kernel.org/all/20260212-veristat_prepare-v1-0-c351023fb0db@meta.com/
---
Mykyta Yatsenko (2):
libbpf: Introduce bpf_program__clone()
selftests/bpf: Use bpf_program__clone() in veristat
tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++
tools/lib/bpf/libbpf.h | 17 ++++++
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/veristat.c | 104 ++++++++++++++-------------------
4 files changed, 126 insertions(+), 60 deletions(-)
---
base-commit: 1ace9bac1ad2bc6a0a70baaa16d22b7e783e88c5
change-id: 20260206-veristat_prepare-a4a041873c53
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Mykyta Yatsenko
@ 2026-02-20 19:18 ` Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
` (3 more replies)
2026-02-20 19:18 ` [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat Mykyta Yatsenko
2026-02-20 22:48 ` [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Alexei Starovoitov
2 siblings, 4 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-20 19:18 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add bpf_program__clone() API that loads a single BPF program from a
prepared BPF object into the kernel, returning a file descriptor owned
by the caller.
After bpf_object__prepare(), callers can use bpf_program__clone() to
load individual programs with custom bpf_prog_load_opts, instead of
loading all programs at once via bpf_object__load(). Non-zero fields in
opts override the defaults derived from the program and object
internals; passing NULL opts populates everything automatically.
Internally, bpf_program__clone() resolves BTF-based attach targets
(attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
func/line info, fd_array, license, and kern_version from the
prepared object before calling bpf_prog_load().
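For example, a caller can override just the verifier log settings while letting every other field fall back to the prepared object's defaults (buffer size and log level here are illustrative):

```
char log_buf[64 * 1024];
LIBBPF_OPTS(bpf_prog_load_opts, opts,
	.log_buf = log_buf,		/* caller-provided buffer */
	.log_size = sizeof(log_buf),
	.log_level = 2,			/* verbose verifier log */
);

int fd = bpf_program__clone(prog, &opts);
if (fd < 0)
	fprintf(stderr, "clone failed: %d\n", fd);
```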
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 17 +++++++++++++
tools/lib/bpf/libbpf.map | 1 +
3 files changed, 82 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0c8bf0b5cce4..4b084bda3f47 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
return prog->line_info_cnt;
}
+int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
+{
+ LIBBPF_OPTS(bpf_prog_load_opts, attr);
+ struct bpf_prog_load_opts *pattr = &attr;
+ struct bpf_object *obj;
+ int err, fd;
+
+ if (!prog)
+ return libbpf_err(-EINVAL);
+
+ if (!OPTS_VALID(opts, bpf_prog_load_opts))
+ return libbpf_err(-EINVAL);
+
+ obj = prog->obj;
+ if (obj->state < OBJ_PREPARED)
+ return libbpf_err(-EINVAL);
+
+ /* Copy caller opts, fall back to prog/object defaults */
+ OPTS_SET(pattr, expected_attach_type,
+ OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
+ OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
+ OPTS_SET(pattr, attach_btf_obj_fd,
+ OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
+ OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
+ OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
+ OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
+ OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
+ OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
+ OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
+ if (attr.token_fd)
+ attr.prog_flags |= BPF_F_TOKEN_FD;
+
+ /* BTF func/line info */
+ if (obj->btf && btf__fd(obj->btf) >= 0) {
+ OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
+ OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
+ OPTS_SET(pattr, func_info_cnt,
+ OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
+ OPTS_SET(pattr, func_info_rec_size,
+ OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
+ OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
+ OPTS_SET(pattr, line_info_cnt,
+ OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
+ OPTS_SET(pattr, line_info_rec_size,
+ OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
+ }
+
+ OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
+ OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
+ OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
+
+ /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
+ if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
+ err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
+ if (err)
+ return libbpf_err(err);
+ }
+
+ fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
+ pattr);
+
+ return libbpf_err(fd);
+}
+
#define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
.sec = (char *)sec_pfx, \
.prog_type = BPF_PROG_TYPE_##ptype, \
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index dfc37a615578..0be34852350f 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
*/
LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
+/**
+ * @brief **bpf_program__clone()** loads a single BPF program from a prepared
+ * BPF object into the kernel, returning its file descriptor.
+ *
+ * The BPF object must have been previously prepared with
+ * **bpf_object__prepare()**. If @opts is provided, any non-zero field
+ * overrides the defaults derived from the program/object internals.
+ * If @opts is NULL, all fields are populated automatically.
+ *
+ * The returned FD is owned by the caller and must be closed with close().
+ *
+ * @param prog BPF program from a prepared object
+ * @param opts Optional load options; non-zero fields override defaults
+ * @return program FD (>= 0) on success; negative error code on failure
+ */
+LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
+
#ifdef __cplusplus
} /* extern "C" */
#endif
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index d18fbcea7578..e727a54e373a 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
bpf_map__set_exclusive_program;
bpf_map__exclusive_program;
bpf_prog_assoc_struct_ops;
+ bpf_program__clone;
bpf_program__assoc_struct_ops;
btf__permute;
} LIBBPF_1.6.0;
--
2.47.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-20 19:18 [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Mykyta Yatsenko
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
@ 2026-02-20 19:18 ` Mykyta Yatsenko
2026-02-23 17:49 ` Emil Tsalapatis
2026-02-24 2:03 ` Eduard Zingerman
2026-02-20 22:48 ` [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Alexei Starovoitov
2 siblings, 2 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-20 19:18 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87; +Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Replace veristat's per-program object re-opening with
bpf_program__clone().
Previously, veristat opened a separate bpf_object for every program in a
multi-program object file, iterated all programs to enable only the
target one, and then loaded the entire object.
Use bpf_object__prepare() once, then call bpf_program__clone() for each
program individually. This lets veristat load programs one at a time
from a single prepared object.
The caller now owns the returned fd and closes it after collecting stats.
Remove the special single-program fast path and the per-file early exit
in handle_verif_mode() so all files are always processed.
Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
must run before bpf_object__prepare(), and fixup_obj() for per-program
fixups (struct_ops masking, freplace type guessing) that run before each
bpf_program__clone() call.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/testing/selftests/bpf/veristat.c | 104 ++++++++++++++-------------------
1 file changed, 44 insertions(+), 60 deletions(-)
diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
index 1be1e353d40a..7d7eb56e04d0 100644
--- a/tools/testing/selftests/bpf/veristat.c
+++ b/tools/testing/selftests/bpf/veristat.c
@@ -1236,7 +1236,7 @@ static void mask_unrelated_struct_ops_progs(struct bpf_object *obj,
}
}
-static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
+static void fixup_obj_maps(struct bpf_object *obj)
{
struct bpf_map *map;
@@ -1251,15 +1251,23 @@ static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const ch
case BPF_MAP_TYPE_INODE_STORAGE:
case BPF_MAP_TYPE_CGROUP_STORAGE:
case BPF_MAP_TYPE_CGRP_STORAGE:
- break;
case BPF_MAP_TYPE_STRUCT_OPS:
- mask_unrelated_struct_ops_progs(obj, map, prog);
break;
default:
if (bpf_map__max_entries(map) == 0)
bpf_map__set_max_entries(map, 1);
}
}
+}
+
+static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
+{
+ struct bpf_map *map;
+
+ bpf_object__for_each_map(map, obj) {
+ if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS)
+ mask_unrelated_struct_ops_progs(obj, map, prog);
+ }
/* SEC(freplace) programs can't be loaded with veristat as is,
* but we can try guessing their target program's expected type by
@@ -1608,6 +1616,7 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
const char *base_filename = basename(strdupa(filename));
const char *prog_name = bpf_program__name(prog);
long mem_peak_a, mem_peak_b, mem_peak = -1;
+ LIBBPF_OPTS(bpf_prog_load_opts, opts);
char *buf;
int buf_sz, log_level;
struct verif_stats *stats;
@@ -1647,9 +1656,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
}
verif_log_buf[0] = '\0';
- bpf_program__set_log_buf(prog, buf, buf_sz);
- bpf_program__set_log_level(prog, log_level);
-
/* increase chances of successful BPF object loading */
fixup_obj(obj, prog, base_filename);
@@ -1658,15 +1664,21 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
if (env.force_reg_invariants)
bpf_program__set_flags(prog, bpf_program__flags(prog) | BPF_F_TEST_REG_INVARIANTS);
- err = bpf_object__prepare(obj);
- if (!err) {
- cgroup_err = reset_stat_cgroup();
- mem_peak_a = cgroup_memory_peak();
- err = bpf_object__load(obj);
- mem_peak_b = cgroup_memory_peak();
- if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
- mem_peak = mem_peak_b - mem_peak_a;
+ opts.log_buf = buf;
+ opts.log_size = buf_sz;
+ opts.log_level = log_level;
+
+ cgroup_err = reset_stat_cgroup();
+ mem_peak_a = cgroup_memory_peak();
+ fd = bpf_program__clone(prog, &opts);
+ if (fd < 0) {
+ err = fd;
+ fprintf(stderr, "Failed to load program %s %d\n", prog_name, err);
}
+ mem_peak_b = cgroup_memory_peak();
+ if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
+ mem_peak = mem_peak_b - mem_peak_a;
+
env.progs_processed++;
stats->file_name = strdup(base_filename);
@@ -1678,7 +1690,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
stats->stats[MEMORY_PEAK] = mem_peak < 0 ? -1 : mem_peak / (1024 * 1024);
memset(&info, 0, info_len);
- fd = bpf_program__fd(prog);
if (fd > 0 && bpf_prog_get_info_by_fd(fd, &info, &info_len) == 0) {
stats->stats[JITED_SIZE] = info.jited_prog_len;
if (env.dump_mode & DUMP_JITED)
@@ -1699,7 +1710,8 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
if (verif_log_buf != buf)
free(buf);
-
+ if (fd > 0)
+ close(fd);
return 0;
}
@@ -2182,8 +2194,8 @@ static int set_global_vars(struct bpf_object *obj, struct var_preset *presets, i
static int process_obj(const char *filename)
{
const char *base_filename = basename(strdupa(filename));
- struct bpf_object *obj = NULL, *tobj;
- struct bpf_program *prog, *tprog, *lprog;
+ struct bpf_object *obj = NULL;
+ struct bpf_program *prog;
libbpf_print_fn_t old_libbpf_print_fn;
LIBBPF_OPTS(bpf_object_open_opts, opts);
int err = 0, prog_cnt = 0;
@@ -2222,51 +2234,26 @@ static int process_obj(const char *filename)
env.files_processed++;
bpf_object__for_each_program(prog, obj) {
+ bpf_program__set_autoload(prog, true);
prog_cnt++;
}
- if (prog_cnt == 1) {
- prog = bpf_object__next_program(obj, NULL);
- bpf_program__set_autoload(prog, true);
- err = set_global_vars(obj, env.presets, env.npresets);
- if (err) {
- fprintf(stderr, "Failed to set global variables %d\n", err);
- goto cleanup;
- }
- process_prog(filename, obj, prog);
+ fixup_obj_maps(obj);
+
+ err = set_global_vars(obj, env.presets, env.npresets);
+ if (err) {
+ fprintf(stderr, "Failed to set global variables %d\n", err);
goto cleanup;
}
- bpf_object__for_each_program(prog, obj) {
- const char *prog_name = bpf_program__name(prog);
-
- tobj = bpf_object__open_file(filename, &opts);
- if (!tobj) {
- err = -errno;
- fprintf(stderr, "Failed to open '%s': %d\n", filename, err);
- goto cleanup;
- }
-
- err = set_global_vars(tobj, env.presets, env.npresets);
- if (err) {
- fprintf(stderr, "Failed to set global variables %d\n", err);
- goto cleanup;
- }
-
- lprog = NULL;
- bpf_object__for_each_program(tprog, tobj) {
- const char *tprog_name = bpf_program__name(tprog);
-
- if (strcmp(prog_name, tprog_name) == 0) {
- bpf_program__set_autoload(tprog, true);
- lprog = tprog;
- } else {
- bpf_program__set_autoload(tprog, false);
- }
- }
+ err = bpf_object__prepare(obj);
+ if (err) {
+ fprintf(stderr, "Failed to prepare BPF object for loading %d\n", err);
+ goto cleanup;
+ }
- process_prog(filename, tobj, lprog);
- bpf_object__close(tobj);
+ bpf_object__for_each_program(prog, obj) {
+ process_prog(filename, obj, prog);
}
cleanup:
@@ -3264,17 +3251,14 @@ static int handle_verif_mode(void)
create_stat_cgroup();
for (i = 0; i < env.filename_cnt; i++) {
err = process_obj(env.filenames[i]);
- if (err) {
+ if (err)
fprintf(stderr, "Failed to process '%s': %d\n", env.filenames[i], err);
- goto out;
- }
}
qsort(env.prog_stats, env.prog_stat_cnt, sizeof(*env.prog_stats), cmp_prog_stats);
output_prog_stats();
-out:
destroy_stat_cgroup();
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading
2026-02-20 19:18 [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Mykyta Yatsenko
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-20 19:18 ` [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat Mykyta Yatsenko
@ 2026-02-20 22:48 ` Alexei Starovoitov
2026-02-23 13:57 ` Mykyta Yatsenko
2 siblings, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2026-02-20 22:48 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin Lau, Kernel Team, Eduard, Mykyta Yatsenko
On Fri, Feb 20, 2026 at 11:18 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> This series adds bpf_program__clone() to libbpf and converts veristat to
> use it, replacing the costly per-program object re-opening pattern.
>
> veristat needs to load each BPF program in isolation to collect
> per-program verification statistics. Previously it achieved this by
> opening a fresh bpf_object for every program, disabling autoload on all
> but the target, and loading the whole object. For object files with many
> programs this meant repeating ELF parsing and BTF processing N times.
>
> Patch 1 introduces bpf_program__clone(), which loads a single program
> from a prepared object into the kernel and returns an fd owned by the
> caller. It resolves BTF attach targets, fills in func/line info,
> fd_array, and other load parameters from the prepared object
> automatically, while letting callers override any field via
> bpf_prog_load_opts.
>
> Patch 2 converts veristat to prepare the object once and clone each
> program individually, eliminating redundant work.
>
> Performance
> Tested on selftests: 918 objects, ~4270 programs:
> - Wall time: 36.88s -> 23.18s (37% faster)
> - User time: 20.80s -> 16.07s (23% faster)
> - Kernel time: 12.07s -> 6.06s (50% faster)
>
> Per-program loading also improves coverage: 83 programs that previously
> failed now succeed.
Wait what? How is that possible?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading
2026-02-20 22:48 ` [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Alexei Starovoitov
@ 2026-02-23 13:57 ` Mykyta Yatsenko
0 siblings, 0 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-23 13:57 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin Lau, Kernel Team, Eduard, Mykyta Yatsenko
On 2/20/26 22:48, Alexei Starovoitov wrote:
> On Fri, Feb 20, 2026 at 11:18 AM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>> This series adds bpf_program__clone() to libbpf and converts veristat to
>> use it, replacing the costly per-program object re-opening pattern.
>>
>> veristat needs to load each BPF program in isolation to collect
>> per-program verification statistics. Previously it achieved this by
>> opening a fresh bpf_object for every program, disabling autoload on all
>> but the target, and loading the whole object. For object files with many
>> programs this meant repeating ELF parsing and BTF processing N times.
>>
>> Patch 1 introduces bpf_program__clone(), which loads a single program
>> from a prepared object into the kernel and returns an fd owned by the
>> caller. It resolves BTF attach targets, fills in func/line info,
>> fd_array, and other load parameters from the prepared object
>> automatically, while letting callers override any field via
>> bpf_prog_load_opts.
>>
>> Patch 2 converts veristat to prepare the object once and clone each
>> program individually, eliminating redundant work.
>>
>> Performance
>> Tested on selftests: 918 objects, ~4270 programs:
>> - Wall time: 36.88s -> 23.18s (37% faster)
>> - User time: 20.80s -> 16.07s (23% faster)
>> - Kernel time: 12.07s -> 6.06s (50% faster)
>>
>> Per-program loading also improves coverage: 83 programs that previously
>> failed now succeed.
> Wait what? How is that possible?
Thanks for asking. Take, for example, the verifier_unpriv.bpf.o object:
all of its programs fail to load with the current veristat. The errors are:
```
libbpf: map 'map_prog1_socket': failed to initialize slot [1]
to prog 'dummy_prog_loop1_socket' fd=-2: -EBADF
...
libbpf: map 'map_prog1_socket': failed to initialize slot
[0] to prog 'dummy_prog_42_socket' fd=-2: -EBADF
...
etc
```
I understand this is because libbpf tries to initialize all PROG_ARRAY
maps during object load, regardless of whether the referenced programs
are being loaded. Since currently only one program is enabled at a time,
an object containing such PROG_ARRAY maps has no chance to succeed.
This root cause accounts for all 83 newly passing programs.
The improvement was not intentional, but it seems like a nice
side effect, so I mentioned it.
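For reference, the failing pattern is a PROG_ARRAY declared with static
slot initializers, roughly like the following (names are illustrative,
not copied from verifier_unpriv.bpf.o):
```
struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 2);
	__uint(key_size, sizeof(int));
	__array(values, int (void *));
} map_prog_socket SEC(".maps") = {
	.values = {
		/* references to other programs in the same object */
		[0] = (void *)&dummy_prog_42,
		[1] = (void *)&dummy_prog_loop1,
	},
};
```
With autoload disabled for the referenced programs, libbpf has no
program fd to put into those slots, hence the fd=-2 (-EBADF) failures
during map initialization.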
Is this something we would like to address in libbpf?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
@ 2026-02-23 17:25 ` Emil Tsalapatis
2026-02-23 17:59 ` Mykyta Yatsenko
2026-02-24 19:28 ` Eduard Zingerman
` (2 subsequent siblings)
3 siblings, 1 reply; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 17:25 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
The code looks in order, one issue below.
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> +
Can we make it so that cloning a prepared but not yet loaded program does
not load it? The name of the method implies the new instance is identical
to the old one, which is not the case: we currently load the cloned
program even if the original is not loaded. I don't see why, for
OBJ_PREPARED programs, this shouldn't be done explicitly by the caller
with bpf_prog_load() instead.
If we do make the cloned program's state identical to the original's,
let's also add a test that checks that.
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index dfc37a615578..0be34852350f 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> */
> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>
> +/**
> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> + * BPF object into the kernel, returning its file descriptor.
> + *
> + * The BPF object must have been previously prepared with
> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> + * overrides the defaults derived from the program/object internals.
> + * If @opts is NULL, all fields are populated automatically.
> + *
> + * The returned FD is owned by the caller and must be closed with close().
> + *
> + * @param prog BPF program from a prepared object
> + * @param opts Optional load options; non-zero fields override defaults
> + * @return program FD (>= 0) on success; negative error code on failure
> + */
> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> +
> #ifdef __cplusplus
> } /* extern "C" */
> #endif
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d18fbcea7578..e727a54e373a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> bpf_map__set_exclusive_program;
> bpf_map__exclusive_program;
> bpf_prog_assoc_struct_ops;
> + bpf_program__clone;
> bpf_program__assoc_struct_ops;
> btf__permute;
> } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-20 19:18 ` [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat Mykyta Yatsenko
@ 2026-02-23 17:49 ` Emil Tsalapatis
2026-02-23 18:39 ` Mykyta Yatsenko
2026-02-24 2:03 ` Eduard Zingerman
1 sibling, 1 reply; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 17:49 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace veristat's per-program object re-opening with
> bpf_program__clone().
>
> Previously, veristat opened a separate bpf_object for every program in a
> multi-program object file, iterated all programs to enable only the
> target one, and then loaded the entire object.
>
> Use bpf_object__prepare() once, then call bpf_program__clone() for each
> program individually. This lets veristat load programs one at a time
> from a single prepared object.
>
> The caller now owns the returned fd and closes it after collecting stats.
> Remove the special single-program fast path and the per-file early exit
> in handle_verif_mode() so all files are always processed.
>
> Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
> must run before bpf_object__prepare(), and fixup_obj() for per-program
> fixups (struct_ops masking, freplace type guessing) that run before each
> bpf_program__clone() call.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/testing/selftests/bpf/veristat.c | 104 ++++++++++++++-------------------
> 1 file changed, 44 insertions(+), 60 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
> index 1be1e353d40a..7d7eb56e04d0 100644
> --- a/tools/testing/selftests/bpf/veristat.c
> +++ b/tools/testing/selftests/bpf/veristat.c
> @@ -1236,7 +1236,7 @@ static void mask_unrelated_struct_ops_progs(struct bpf_object *obj,
> }
> }
>
> -static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
> +static void fixup_obj_maps(struct bpf_object *obj)
> {
> struct bpf_map *map;
>
> @@ -1251,15 +1251,23 @@ static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const ch
> case BPF_MAP_TYPE_INODE_STORAGE:
> case BPF_MAP_TYPE_CGROUP_STORAGE:
> case BPF_MAP_TYPE_CGRP_STORAGE:
> - break;
> case BPF_MAP_TYPE_STRUCT_OPS:
> - mask_unrelated_struct_ops_progs(obj, map, prog);
> break;
> default:
> if (bpf_map__max_entries(map) == 0)
> bpf_map__set_max_entries(map, 1);
> }
> }
> +}
> +
> +static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
> +{
> + struct bpf_map *map;
> +
> + bpf_object__for_each_map(map, obj) {
> + if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS)
> + mask_unrelated_struct_ops_progs(obj, map, prog);
> + }
>
> /* SEC(freplace) programs can't be loaded with veristat as is,
> * but we can try guessing their target program's expected type by
> @@ -1608,6 +1616,7 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
> const char *base_filename = basename(strdupa(filename));
> const char *prog_name = bpf_program__name(prog);
> long mem_peak_a, mem_peak_b, mem_peak = -1;
> + LIBBPF_OPTS(bpf_prog_load_opts, opts);
> char *buf;
> int buf_sz, log_level;
> struct verif_stats *stats;
> @@ -1647,9 +1656,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
> }
> verif_log_buf[0] = '\0';
>
> - bpf_program__set_log_buf(prog, buf, buf_sz);
> - bpf_program__set_log_level(prog, log_level);
> -
> /* increase chances of successful BPF object loading */
> fixup_obj(obj, prog, base_filename);
>
> @@ -1658,15 +1664,21 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
> if (env.force_reg_invariants)
> bpf_program__set_flags(prog, bpf_program__flags(prog) | BPF_F_TEST_REG_INVARIANTS);
>
> - err = bpf_object__prepare(obj);
> - if (!err) {
> - cgroup_err = reset_stat_cgroup();
> - mem_peak_a = cgroup_memory_peak();
> - err = bpf_object__load(obj);
> - mem_peak_b = cgroup_memory_peak();
> - if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
> - mem_peak = mem_peak_b - mem_peak_a;
> + opts.log_buf = buf;
> + opts.log_size = buf_sz;
> + opts.log_level = log_level;
> +
> + cgroup_err = reset_stat_cgroup();
> + mem_peak_a = cgroup_memory_peak();
> + fd = bpf_program__clone(prog, &opts);
> + if (fd < 0) {
> + err = fd;
> + fprintf(stderr, "Failed to load program %s %d\n", prog_name, err);
> }
> + mem_peak_b = cgroup_memory_peak();
> + if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
> + mem_peak = mem_peak_b - mem_peak_a;
> +
AFAICT the meaning of mem_peak changes with this patch, right? Before it
was the diff between prepare() and load(); with the patch as it is now
it is max(prepare, load), since mem_peak_a is reset right above. If we
do not automatically load OBJ_PREPARED programs (see the review for
patch 1/2), we can keep the old behavior by storing mem_peak_a after
the first prepare and reading mem_peak_b after clone+load.
> env.progs_processed++;
>
> stats->file_name = strdup(base_filename);
> @@ -1678,7 +1690,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
> stats->stats[MEMORY_PEAK] = mem_peak < 0 ? -1 : mem_peak / (1024 * 1024);
>
> memset(&info, 0, info_len);
> - fd = bpf_program__fd(prog);
> if (fd > 0 && bpf_prog_get_info_by_fd(fd, &info, &info_len) == 0) {
> stats->stats[JITED_SIZE] = info.jited_prog_len;
> if (env.dump_mode & DUMP_JITED)
> @@ -1699,7 +1710,8 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>
> if (verif_log_buf != buf)
> free(buf);
> -
> + if (fd > 0)
> + close(fd);
> return 0;
> }
>
> @@ -2182,8 +2194,8 @@ static int set_global_vars(struct bpf_object *obj, struct var_preset *presets, i
> static int process_obj(const char *filename)
> {
> const char *base_filename = basename(strdupa(filename));
> - struct bpf_object *obj = NULL, *tobj;
> - struct bpf_program *prog, *tprog, *lprog;
> + struct bpf_object *obj = NULL;
> + struct bpf_program *prog;
> libbpf_print_fn_t old_libbpf_print_fn;
> LIBBPF_OPTS(bpf_object_open_opts, opts);
> int err = 0, prog_cnt = 0;
> @@ -2222,51 +2234,26 @@ static int process_obj(const char *filename)
> env.files_processed++;
>
> bpf_object__for_each_program(prog, obj) {
> + bpf_program__set_autoload(prog, true);
> prog_cnt++;
> }
>
> - if (prog_cnt == 1) {
> - prog = bpf_object__next_program(obj, NULL);
> - bpf_program__set_autoload(prog, true);
> - err = set_global_vars(obj, env.presets, env.npresets);
> - if (err) {
> - fprintf(stderr, "Failed to set global variables %d\n", err);
> - goto cleanup;
> - }
> - process_prog(filename, obj, prog);
> + fixup_obj_maps(obj);
> +
> + err = set_global_vars(obj, env.presets, env.npresets);
> + if (err) {
> + fprintf(stderr, "Failed to set global variables %d\n", err);
> goto cleanup;
> }
>
> - bpf_object__for_each_program(prog, obj) {
> - const char *prog_name = bpf_program__name(prog);
> -
> - tobj = bpf_object__open_file(filename, &opts);
> - if (!tobj) {
> - err = -errno;
> - fprintf(stderr, "Failed to open '%s': %d\n", filename, err);
> - goto cleanup;
> - }
> -
> - err = set_global_vars(tobj, env.presets, env.npresets);
> - if (err) {
> - fprintf(stderr, "Failed to set global variables %d\n", err);
> - goto cleanup;
> - }
> -
> - lprog = NULL;
> - bpf_object__for_each_program(tprog, tobj) {
> - const char *tprog_name = bpf_program__name(tprog);
> -
> - if (strcmp(prog_name, tprog_name) == 0) {
> - bpf_program__set_autoload(tprog, true);
> - lprog = tprog;
> - } else {
> - bpf_program__set_autoload(tprog, false);
> - }
> - }
> + err = bpf_object__prepare(obj);
> + if (err) {
> + fprintf(stderr, "Failed to prepare BPF object for loading %d\n", err);
> + goto cleanup;
> + }
>
> - process_prog(filename, tobj, lprog);
> - bpf_object__close(tobj);
> + bpf_object__for_each_program(prog, obj) {
> + process_prog(filename, obj, prog);
> }
>
> cleanup:
> @@ -3264,17 +3251,14 @@ static int handle_verif_mode(void)
> create_stat_cgroup();
> for (i = 0; i < env.filename_cnt; i++) {
> err = process_obj(env.filenames[i]);
> - if (err) {
> + if (err)
> fprintf(stderr, "Failed to process '%s': %d\n", env.filenames[i], err);
> - goto out;
> - }
This means we now do not stop on the first failure - this I assume is
deliberate since it is an improvement over existing behavior imo, but
still pointing it out in case it isn't.
> }
>
> qsort(env.prog_stats, env.prog_stat_cnt, sizeof(*env.prog_stats), cmp_prog_stats);
>
> output_prog_stats();
>
> -out:
> destroy_stat_cgroup();
> return err;
> }
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-23 17:25 ` Emil Tsalapatis
@ 2026-02-23 17:59 ` Mykyta Yatsenko
2026-02-23 18:04 ` Emil Tsalapatis
0 siblings, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-23 17:59 UTC (permalink / raw)
To: Emil Tsalapatis, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
"Emil Tsalapatis" <emil@etsalapatis.com> writes:
> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>
> The code looks in order, one issue below.
>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>> +
>
> Can we make it so that cloning prepared but not loaded programs does
> not load them? The name of the method itself implies the new instance is
> identical to the old one, which is not the case - we're currently
> loading the cloned program even if the original is not loaded. I don't
> see why for OBJ_PREPARED programs this shouldn't be explicitly done by the
> caller with bpf_prog_load() instead.
Makes sense, but there are a few problems: we won't be cloning a
program, but rather its attributes (struct bpf_prog_load_opts); I don't
think we can do true cloning by returning a new struct bpf_program.
So the best we can do is to change to something like
bpf_program__clone_attrs() (or bpf_program__load_attrs()), then in
veristat do:
attrs = bpf_program__clone_attrs(prog)
bpf_prog_load(bpf_program__insns(prog), attrs)
Let's see which option maintainers prefer.
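For illustration, the veristat side could then look roughly like this
(a sketch only: bpf_program__clone_attrs() is the proposed,
not-yet-existing API; the getters bpf_program__type()/__name()/
__insns()/__insn_cnt() do exist today, but there is currently no public
getter for the object's license string, so that argument is hand-waved):

```c
LIBBPF_OPTS(bpf_prog_load_opts, attrs);
int fd, err;

/* hypothetical: fill attrs with defaults from the prepared program/object */
err = bpf_program__clone_attrs(prog, &attrs);
if (err)
	return err;

fd = bpf_prog_load(bpf_program__type(prog), bpf_program__name(prog),
		   license /* no public getter yet */,
		   bpf_program__insns(prog), bpf_program__insn_cnt(prog),
		   &attrs);
```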
>
> If we do make it so that the cloned program's obj->state is identical to the
> original's let's also add a test that checks that.
>
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> index dfc37a615578..0be34852350f 100644
>> --- a/tools/lib/bpf/libbpf.h
>> +++ b/tools/lib/bpf/libbpf.h
>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>> */
>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>
>> +/**
>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>> + * BPF object into the kernel, returning its file descriptor.
>> + *
>> + * The BPF object must have been previously prepared with
>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>> + * overrides the defaults derived from the program/object internals.
>> + * If @opts is NULL, all fields are populated automatically.
>> + *
>> + * The returned FD is owned by the caller and must be closed with close().
>> + *
>> + * @param prog BPF program from a prepared object
>> + * @param opts Optional load options; non-zero fields override defaults
>> + * @return program FD (>= 0) on success; negative error code on failure
>> + */
>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>> +
>> #ifdef __cplusplus
>> } /* extern "C" */
>> #endif
>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>> index d18fbcea7578..e727a54e373a 100644
>> --- a/tools/lib/bpf/libbpf.map
>> +++ b/tools/lib/bpf/libbpf.map
>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>> bpf_map__set_exclusive_program;
>> bpf_map__exclusive_program;
>> bpf_prog_assoc_struct_ops;
>> + bpf_program__clone;
>> bpf_program__assoc_struct_ops;
>> btf__permute;
>> } LIBBPF_1.6.0;
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-23 17:59 ` Mykyta Yatsenko
@ 2026-02-23 18:04 ` Emil Tsalapatis
0 siblings, 0 replies; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 18:04 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Mon Feb 23, 2026 at 12:59 PM EST, Mykyta Yatsenko wrote:
> "Emil Tsalapatis" <emil@etsalapatis.com> writes:
>
>> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>>
>>> Add bpf_program__clone() API that loads a single BPF program from a
>>> prepared BPF object into the kernel, returning a file descriptor owned
>>> by the caller.
>>>
>>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>>> load individual programs with custom bpf_prog_load_opts, instead of
>>> loading all programs at once via bpf_object__load(). Non-zero fields in
>>> opts override the defaults derived from the program and object
>>> internals; passing NULL opts populates everything automatically.
>>>
>>> Internally, bpf_program__clone() resolves BTF-based attach targets
>>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>>> func/line info, fd_array, license, and kern_version from the
>>> prepared object before calling bpf_prog_load().
>>>
>>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>>> ---
>>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>>> tools/lib/bpf/libbpf.map | 1 +
>>> 3 files changed, 82 insertions(+)
>>>
>>
>> The code looks in order, one issue below.
>>
>>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>>> index 0c8bf0b5cce4..4b084bda3f47 100644
>>> --- a/tools/lib/bpf/libbpf.c
>>> +++ b/tools/lib/bpf/libbpf.c
>>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>>> return prog->line_info_cnt;
>>> }
>>>
>>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>>> +{
>>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>>> + struct bpf_prog_load_opts *pattr = &attr;
>>> + struct bpf_object *obj;
>>> + int err, fd;
>>> +
>>> + if (!prog)
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + obj = prog->obj;
>>> + if (obj->state < OBJ_PREPARED)
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + /* Copy caller opts, fall back to prog/object defaults */
>>> + OPTS_SET(pattr, expected_attach_type,
>>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>>> + OPTS_SET(pattr, attach_btf_obj_fd,
>>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>>> + if (attr.token_fd)
>>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>>> +
>>> + /* BTF func/line info */
>>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>>> + OPTS_SET(pattr, func_info_cnt,
>>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>>> + OPTS_SET(pattr, func_info_rec_size,
>>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>>> + OPTS_SET(pattr, line_info_cnt,
>>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>>> + OPTS_SET(pattr, line_info_rec_size,
>>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>>> + }
>>> +
>>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>>> +
>>
>> Can we make it so that cloning prepared but not loaded programs does
>> not load them? The name of the method itself implies the new instance is
>> identical to the old one, which is not the case - we're currently
>> loading the cloned program even if the original is not loaded. I don't
>> see why for OBJ_PREPARED programs this shouldn't be explicitly done by the
>> caller with bpf_prog_load() instead.
> Makes sense, but there are a few problems: we won't be cloning a
> program, but rather its attributes (struct bpf_prog_load_opts); I
> don't think we can do true cloning by returning a new struct
> bpf_program.
>
> So the best we can do is to change to something like
> bpf_program__clone_attrs() (or bpf_program__load_attrs()) then in
> veristat do:
>
> attrs = bpf_program__clone_attrs(prog)
> bpf_prog_load(bpf_program__insns(prog), attrs)
>
> Let's see which option maintainers prefer.
I like clone_attrs(), though if we rename it let's also skip the
bpf_prog_load() regardless of whether the original program is already loaded.
>>
>> If we do make it so that the cloned program's obj->state is identical to the
>> original's let's also add a test that checks that.
>>
>>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>>> + if (err)
>>> + return libbpf_err(err);
>>> + }
>>> +
>>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>>> + pattr);
>>> +
>>> + return libbpf_err(fd);
>>> +}
>>> +
>>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>>> .sec = (char *)sec_pfx, \
>>> .prog_type = BPF_PROG_TYPE_##ptype, \
>>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>>> index dfc37a615578..0be34852350f 100644
>>> --- a/tools/lib/bpf/libbpf.h
>>> +++ b/tools/lib/bpf/libbpf.h
>>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>>> */
>>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>>
>>> +/**
>>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>>> + * BPF object into the kernel, returning its file descriptor.
>>> + *
>>> + * The BPF object must have been previously prepared with
>>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>>> + * overrides the defaults derived from the program/object internals.
>>> + * If @opts is NULL, all fields are populated automatically.
>>> + *
>>> + * The returned FD is owned by the caller and must be closed with close().
>>> + *
>>> + * @param prog BPF program from a prepared object
>>> + * @param opts Optional load options; non-zero fields override defaults
>>> + * @return program FD (>= 0) on success; negative error code on failure
>>> + */
>>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>>> +
>>> #ifdef __cplusplus
>>> } /* extern "C" */
>>> #endif
>>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>>> index d18fbcea7578..e727a54e373a 100644
>>> --- a/tools/lib/bpf/libbpf.map
>>> +++ b/tools/lib/bpf/libbpf.map
>>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>>> bpf_map__set_exclusive_program;
>>> bpf_map__exclusive_program;
>>> bpf_prog_assoc_struct_ops;
>>> + bpf_program__clone;
>>> bpf_program__assoc_struct_ops;
>>> btf__permute;
>>> } LIBBPF_1.6.0;
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-23 17:49 ` Emil Tsalapatis
@ 2026-02-23 18:39 ` Mykyta Yatsenko
2026-02-23 18:54 ` Emil Tsalapatis
0 siblings, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-23 18:39 UTC (permalink / raw)
To: Emil Tsalapatis, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
"Emil Tsalapatis" <emil@etsalapatis.com> writes:
> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Replace veristat's per-program object re-opening with
>> bpf_program__clone().
>>
>> Previously, veristat opened a separate bpf_object for every program in a
>> multi-program object file, iterated all programs to enable only the
>> target one, and then loaded the entire object.
>>
>> Use bpf_object__prepare() once, then call bpf_program__clone() for each
>> program individually. This lets veristat load programs one at a time
>> from a single prepared object.
>>
>> The caller now owns the returned fd and closes it after collecting stats.
>> Remove the special single-program fast path and the per-file early exit
>> in handle_verif_mode() so all files are always processed.
>>
>> Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
>> must run before bpf_object__prepare(), and fixup_obj() for per-program
>> fixups (struct_ops masking, freplace type guessing) that run before each
>> bpf_program__clone() call.
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/testing/selftests/bpf/veristat.c | 104 ++++++++++++++-------------------
>> 1 file changed, 44 insertions(+), 60 deletions(-)
>>
>> diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
>> index 1be1e353d40a..7d7eb56e04d0 100644
>> --- a/tools/testing/selftests/bpf/veristat.c
>> +++ b/tools/testing/selftests/bpf/veristat.c
>> @@ -1236,7 +1236,7 @@ static void mask_unrelated_struct_ops_progs(struct bpf_object *obj,
>> }
>> }
>>
>> -static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
>> +static void fixup_obj_maps(struct bpf_object *obj)
>> {
>> struct bpf_map *map;
>>
>> @@ -1251,15 +1251,23 @@ static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const ch
>> case BPF_MAP_TYPE_INODE_STORAGE:
>> case BPF_MAP_TYPE_CGROUP_STORAGE:
>> case BPF_MAP_TYPE_CGRP_STORAGE:
>> - break;
>> case BPF_MAP_TYPE_STRUCT_OPS:
>> - mask_unrelated_struct_ops_progs(obj, map, prog);
>> break;
>> default:
>> if (bpf_map__max_entries(map) == 0)
>> bpf_map__set_max_entries(map, 1);
>> }
>> }
>> +}
>> +
>> +static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
>> +{
>> + struct bpf_map *map;
>> +
>> + bpf_object__for_each_map(map, obj) {
>> + if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS)
>> + mask_unrelated_struct_ops_progs(obj, map, prog);
>> + }
>>
>> /* SEC(freplace) programs can't be loaded with veristat as is,
>> * but we can try guessing their target program's expected type by
>> @@ -1608,6 +1616,7 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>> const char *base_filename = basename(strdupa(filename));
>> const char *prog_name = bpf_program__name(prog);
>> long mem_peak_a, mem_peak_b, mem_peak = -1;
>> + LIBBPF_OPTS(bpf_prog_load_opts, opts);
>> char *buf;
>> int buf_sz, log_level;
>> struct verif_stats *stats;
>> @@ -1647,9 +1656,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>> }
>> verif_log_buf[0] = '\0';
>>
>> - bpf_program__set_log_buf(prog, buf, buf_sz);
>> - bpf_program__set_log_level(prog, log_level);
>> -
>> /* increase chances of successful BPF object loading */
>> fixup_obj(obj, prog, base_filename);
>>
>> @@ -1658,15 +1664,21 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>> if (env.force_reg_invariants)
>> bpf_program__set_flags(prog, bpf_program__flags(prog) | BPF_F_TEST_REG_INVARIANTS);
>>
>> - err = bpf_object__prepare(obj);
>> - if (!err) {
>> - cgroup_err = reset_stat_cgroup();
>> - mem_peak_a = cgroup_memory_peak();
>> - err = bpf_object__load(obj);
>> - mem_peak_b = cgroup_memory_peak();
>> - if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
>> - mem_peak = mem_peak_b - mem_peak_a;
>> + opts.log_buf = buf;
>> + opts.log_size = buf_sz;
>> + opts.log_level = log_level;
>> +
>> + cgroup_err = reset_stat_cgroup();
>> + mem_peak_a = cgroup_memory_peak();
>> + fd = bpf_program__clone(prog, &opts);
>> + if (fd < 0) {
>> + err = fd;
>> + fprintf(stderr, "Failed to load program %s %d\n", prog_name, err);
>> }
>> + mem_peak_b = cgroup_memory_peak();
>> + if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
>> + mem_peak = mem_peak_b - mem_peak_a;
>> +
>
> AFAICT the meaning of mem_peak changes with this patch, right? Before it was the diff between
> prepare() and load(), with the patch as it is now it is max(prepare,
> load) since mem_peak_a is reset right above. If we do not automatically
> load OBJ_PREPARED programs (see the review for patch 1/2) we can keep
> the old behavior by storing mem_peak_a after the first prepare and
> reading mem_peak_b after clone+load.
>
I'm not sure if I understand why the meaning of mem_peak changed. Before
it was:
bpf_object__prepare()
peak_reset()
a = mem_peak()
bpf_object__load();
mem_peak = mem_peak() - a;
and now:
bpf_object__prepare()
...
peak_reset()
a = mem_peak()
bpf_program__clone();
mem_peak = mem_peak() - a;
I understand this code is supposed to measure the memory overhead of
loading a single program.
Could you please elaborate on what the issue is? Maybe an example?
>> env.progs_processed++;
>>
>> stats->file_name = strdup(base_filename);
>> @@ -1678,7 +1690,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>> stats->stats[MEMORY_PEAK] = mem_peak < 0 ? -1 : mem_peak / (1024 * 1024);
>>
>> memset(&info, 0, info_len);
>> - fd = bpf_program__fd(prog);
>> if (fd > 0 && bpf_prog_get_info_by_fd(fd, &info, &info_len) == 0) {
>> stats->stats[JITED_SIZE] = info.jited_prog_len;
>> if (env.dump_mode & DUMP_JITED)
>> @@ -1699,7 +1710,8 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>
>> if (verif_log_buf != buf)
>> free(buf);
>> -
>> + if (fd > 0)
>> + close(fd);
>> return 0;
>> }
>>
>> @@ -2182,8 +2194,8 @@ static int set_global_vars(struct bpf_object *obj, struct var_preset *presets, i
>> static int process_obj(const char *filename)
>> {
>> const char *base_filename = basename(strdupa(filename));
>> - struct bpf_object *obj = NULL, *tobj;
>> - struct bpf_program *prog, *tprog, *lprog;
>> + struct bpf_object *obj = NULL;
>> + struct bpf_program *prog;
>> libbpf_print_fn_t old_libbpf_print_fn;
>> LIBBPF_OPTS(bpf_object_open_opts, opts);
>> int err = 0, prog_cnt = 0;
>> @@ -2222,51 +2234,26 @@ static int process_obj(const char *filename)
>> env.files_processed++;
>>
>> bpf_object__for_each_program(prog, obj) {
>> + bpf_program__set_autoload(prog, true);
>> prog_cnt++;
>> }
>>
>> - if (prog_cnt == 1) {
>> - prog = bpf_object__next_program(obj, NULL);
>> - bpf_program__set_autoload(prog, true);
>> - err = set_global_vars(obj, env.presets, env.npresets);
>> - if (err) {
>> - fprintf(stderr, "Failed to set global variables %d\n", err);
>> - goto cleanup;
>> - }
>> - process_prog(filename, obj, prog);
>> + fixup_obj_maps(obj);
>> +
>> + err = set_global_vars(obj, env.presets, env.npresets);
>> + if (err) {
>> + fprintf(stderr, "Failed to set global variables %d\n", err);
>> goto cleanup;
>> }
>>
>> - bpf_object__for_each_program(prog, obj) {
>> - const char *prog_name = bpf_program__name(prog);
>> -
>> - tobj = bpf_object__open_file(filename, &opts);
>> - if (!tobj) {
>> - err = -errno;
>> - fprintf(stderr, "Failed to open '%s': %d\n", filename, err);
>> - goto cleanup;
>> - }
>> -
>> - err = set_global_vars(tobj, env.presets, env.npresets);
>> - if (err) {
>> - fprintf(stderr, "Failed to set global variables %d\n", err);
>> - goto cleanup;
>> - }
>> -
>> - lprog = NULL;
>> - bpf_object__for_each_program(tprog, tobj) {
>> - const char *tprog_name = bpf_program__name(tprog);
>> -
>> - if (strcmp(prog_name, tprog_name) == 0) {
>> - bpf_program__set_autoload(tprog, true);
>> - lprog = tprog;
>> - } else {
>> - bpf_program__set_autoload(tprog, false);
>> - }
>> - }
>> + err = bpf_object__prepare(obj);
>> + if (err) {
>> + fprintf(stderr, "Failed to prepare BPF object for loading %d\n", err);
>> + goto cleanup;
>> + }
>>
>> - process_prog(filename, tobj, lprog);
>> - bpf_object__close(tobj);
>> + bpf_object__for_each_program(prog, obj) {
>> + process_prog(filename, obj, prog);
>> }
>>
>> cleanup:
>> @@ -3264,17 +3251,14 @@ static int handle_verif_mode(void)
>> create_stat_cgroup();
>> for (i = 0; i < env.filename_cnt; i++) {
>> err = process_obj(env.filenames[i]);
>> - if (err) {
>> + if (err)
>> fprintf(stderr, "Failed to process '%s': %d\n", env.filenames[i], err);
>> - goto out;
>> - }
>
> This means we now do not stop on the first failure - this I assume is
> deliberate since it is an improvement over existing behavior imo, but
> still pointing it out in case it isn't.
>
>> }
>>
>> qsort(env.prog_stats, env.prog_stat_cnt, sizeof(*env.prog_stats), cmp_prog_stats);
>>
>> output_prog_stats();
>>
>> -out:
>> destroy_stat_cgroup();
>> return err;
>> }
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-23 18:39 ` Mykyta Yatsenko
@ 2026-02-23 18:54 ` Emil Tsalapatis
0 siblings, 0 replies; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 18:54 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Mon Feb 23, 2026 at 1:39 PM EST, Mykyta Yatsenko wrote:
> "Emil Tsalapatis" <emil@etsalapatis.com> writes:
>
>> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>>
>>> Replace veristat's per-program object re-opening with
>>> bpf_program__clone().
>>>
>>> Previously, veristat opened a separate bpf_object for every program in a
>>> multi-program object file, iterated all programs to enable only the
>>> target one, and then loaded the entire object.
>>>
>>> Use bpf_object__prepare() once, then call bpf_program__clone() for each
>>> program individually. This lets veristat load programs one at a time
>>> from a single prepared object.
>>>
>>> The caller now owns the returned fd and closes it after collecting stats.
>>> Remove the special single-program fast path and the per-file early exit
>>> in handle_verif_mode() so all files are always processed.
>>>
>>> Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
>>> must run before bpf_object__prepare(), and fixup_obj() for per-program
>>> fixups (struct_ops masking, freplace type guessing) that run before each
>>> bpf_program__clone() call.
>>>
>>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>>> ---
>>> tools/testing/selftests/bpf/veristat.c | 104 ++++++++++++++-------------------
>>> 1 file changed, 44 insertions(+), 60 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
>>> index 1be1e353d40a..7d7eb56e04d0 100644
>>> --- a/tools/testing/selftests/bpf/veristat.c
>>> +++ b/tools/testing/selftests/bpf/veristat.c
>>> @@ -1236,7 +1236,7 @@ static void mask_unrelated_struct_ops_progs(struct bpf_object *obj,
>>> }
>>> }
>>>
>>> -static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
>>> +static void fixup_obj_maps(struct bpf_object *obj)
>>> {
>>> struct bpf_map *map;
>>>
>>> @@ -1251,15 +1251,23 @@ static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const ch
>>> case BPF_MAP_TYPE_INODE_STORAGE:
>>> case BPF_MAP_TYPE_CGROUP_STORAGE:
>>> case BPF_MAP_TYPE_CGRP_STORAGE:
>>> - break;
>>> case BPF_MAP_TYPE_STRUCT_OPS:
>>> - mask_unrelated_struct_ops_progs(obj, map, prog);
>>> break;
>>> default:
>>> if (bpf_map__max_entries(map) == 0)
>>> bpf_map__set_max_entries(map, 1);
>>> }
>>> }
>>> +}
>>> +
>>> +static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename)
>>> +{
>>> + struct bpf_map *map;
>>> +
>>> + bpf_object__for_each_map(map, obj) {
>>> + if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS)
>>> + mask_unrelated_struct_ops_progs(obj, map, prog);
>>> + }
>>>
>>> /* SEC(freplace) programs can't be loaded with veristat as is,
>>> * but we can try guessing their target program's expected type by
>>> @@ -1608,6 +1616,7 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>> const char *base_filename = basename(strdupa(filename));
>>> const char *prog_name = bpf_program__name(prog);
>>> long mem_peak_a, mem_peak_b, mem_peak = -1;
>>> + LIBBPF_OPTS(bpf_prog_load_opts, opts);
>>> char *buf;
>>> int buf_sz, log_level;
>>> struct verif_stats *stats;
>>> @@ -1647,9 +1656,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>> }
>>> verif_log_buf[0] = '\0';
>>>
>>> - bpf_program__set_log_buf(prog, buf, buf_sz);
>>> - bpf_program__set_log_level(prog, log_level);
>>> -
>>> /* increase chances of successful BPF object loading */
>>> fixup_obj(obj, prog, base_filename);
>>>
>>> @@ -1658,15 +1664,21 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>> if (env.force_reg_invariants)
>>> bpf_program__set_flags(prog, bpf_program__flags(prog) | BPF_F_TEST_REG_INVARIANTS);
>>>
>>> - err = bpf_object__prepare(obj);
>>> - if (!err) {
>>> - cgroup_err = reset_stat_cgroup();
>>> - mem_peak_a = cgroup_memory_peak();
>>> - err = bpf_object__load(obj);
>>> - mem_peak_b = cgroup_memory_peak();
>>> - if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
>>> - mem_peak = mem_peak_b - mem_peak_a;
>>> + opts.log_buf = buf;
>>> + opts.log_size = buf_sz;
>>> + opts.log_level = log_level;
>>> +
>>> + cgroup_err = reset_stat_cgroup();
>>> + mem_peak_a = cgroup_memory_peak();
>>> + fd = bpf_program__clone(prog, &opts);
>>> + if (fd < 0) {
>>> + err = fd;
>>> + fprintf(stderr, "Failed to load program %s %d\n", prog_name, err);
>>> }
>>> + mem_peak_b = cgroup_memory_peak();
>>> + if (!cgroup_err && mem_peak_a >= 0 && mem_peak_b >= 0)
>>> + mem_peak = mem_peak_b - mem_peak_a;
>>> +
>>
>> AFAICT the meaning of mem_peak changes with this patch, right? Before it was the diff between
>> prepare() and load(), with the patch as it is now it is max(prepare,
>> load) since mem_peak_a is reset right above. If we do not automatically
>> load OBJ_PREPARED programs (see the review for patch 1/2) we can keep
>> the old behavior by storing mem_peak_a after the first prepare and
>> reading mem_peak_b after clone+load.
>>
> I'm not sure if I understand why the meaning of mem_peak changed. Before
> it was:
> bpf_object__prepare()
> peak_reset()
> a = mem_peak()
> bpf_object__load();
> mem_peak = mem_peak() - a;
>
> and now:
> bpf_object__prepare()
> ...
> peak_reset()
> a = mem_peak()
> bpf_program__clone();
> mem_peak = mem_peak() - a;
>
> I understand this code is supposed to measure the memory overhead of
> loading a single program.
>
> Could you please elaborate what the issue is? Maybe some example?
You're right, I misread this - behavior is identical. With that:
Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com>
>>> env.progs_processed++;
>>>
>>> stats->file_name = strdup(base_filename);
>>> @@ -1678,7 +1690,6 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>> stats->stats[MEMORY_PEAK] = mem_peak < 0 ? -1 : mem_peak / (1024 * 1024);
>>>
>>> memset(&info, 0, info_len);
>>> - fd = bpf_program__fd(prog);
>>> if (fd > 0 && bpf_prog_get_info_by_fd(fd, &info, &info_len) == 0) {
>>> stats->stats[JITED_SIZE] = info.jited_prog_len;
>>> if (env.dump_mode & DUMP_JITED)
>>> @@ -1699,7 +1710,8 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
>>>
>>> if (verif_log_buf != buf)
>>> free(buf);
>>> -
>>> + if (fd > 0)
>>> + close(fd);
>>> return 0;
>>> }
>>>
>>> @@ -2182,8 +2194,8 @@ static int set_global_vars(struct bpf_object *obj, struct var_preset *presets, i
>>> static int process_obj(const char *filename)
>>> {
>>> const char *base_filename = basename(strdupa(filename));
>>> - struct bpf_object *obj = NULL, *tobj;
>>> - struct bpf_program *prog, *tprog, *lprog;
>>> + struct bpf_object *obj = NULL;
>>> + struct bpf_program *prog;
>>> libbpf_print_fn_t old_libbpf_print_fn;
>>> LIBBPF_OPTS(bpf_object_open_opts, opts);
>>> int err = 0, prog_cnt = 0;
>>> @@ -2222,51 +2234,26 @@ static int process_obj(const char *filename)
>>> env.files_processed++;
>>>
>>> bpf_object__for_each_program(prog, obj) {
>>> + bpf_program__set_autoload(prog, true);
>>> prog_cnt++;
>>> }
>>>
>>> - if (prog_cnt == 1) {
>>> - prog = bpf_object__next_program(obj, NULL);
>>> - bpf_program__set_autoload(prog, true);
>>> - err = set_global_vars(obj, env.presets, env.npresets);
>>> - if (err) {
>>> - fprintf(stderr, "Failed to set global variables %d\n", err);
>>> - goto cleanup;
>>> - }
>>> - process_prog(filename, obj, prog);
>>> + fixup_obj_maps(obj);
>>> +
>>> + err = set_global_vars(obj, env.presets, env.npresets);
>>> + if (err) {
>>> + fprintf(stderr, "Failed to set global variables %d\n", err);
>>> goto cleanup;
>>> }
>>>
>>> - bpf_object__for_each_program(prog, obj) {
>>> - const char *prog_name = bpf_program__name(prog);
>>> -
>>> - tobj = bpf_object__open_file(filename, &opts);
>>> - if (!tobj) {
>>> - err = -errno;
>>> - fprintf(stderr, "Failed to open '%s': %d\n", filename, err);
>>> - goto cleanup;
>>> - }
>>> -
>>> - err = set_global_vars(tobj, env.presets, env.npresets);
>>> - if (err) {
>>> - fprintf(stderr, "Failed to set global variables %d\n", err);
>>> - goto cleanup;
>>> - }
>>> -
>>> - lprog = NULL;
>>> - bpf_object__for_each_program(tprog, tobj) {
>>> - const char *tprog_name = bpf_program__name(tprog);
>>> -
>>> - if (strcmp(prog_name, tprog_name) == 0) {
>>> - bpf_program__set_autoload(tprog, true);
>>> - lprog = tprog;
>>> - } else {
>>> - bpf_program__set_autoload(tprog, false);
>>> - }
>>> - }
>>> + err = bpf_object__prepare(obj);
>>> + if (err) {
>>> + fprintf(stderr, "Failed to prepare BPF object for loading %d\n", err);
>>> + goto cleanup;
>>> + }
>>>
>>> - process_prog(filename, tobj, lprog);
>>> - bpf_object__close(tobj);
>>> + bpf_object__for_each_program(prog, obj) {
>>> + process_prog(filename, obj, prog);
>>> }
>>>
>>> cleanup:
>>> @@ -3264,17 +3251,14 @@ static int handle_verif_mode(void)
>>> create_stat_cgroup();
>>> for (i = 0; i < env.filename_cnt; i++) {
>>> err = process_obj(env.filenames[i]);
>>> - if (err) {
>>> + if (err)
>>> fprintf(stderr, "Failed to process '%s': %d\n", env.filenames[i], err);
>>> - goto out;
>>> - }
>>
>> This means we now do not stop on the first failure - this I assume is
>> deliberate since it is an improvement over existing behavior imo, but
>> still pointing it out in case it isn't.
>>
>>> }
>>>
>>> qsort(env.prog_stats, env.prog_stat_cnt, sizeof(*env.prog_stats), cmp_prog_stats);
>>>
>>> output_prog_stats();
>>>
>>> -out:
>>> destroy_stat_cgroup();
>>> return err;
>>> }
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-20 19:18 ` [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat Mykyta Yatsenko
2026-02-23 17:49 ` Emil Tsalapatis
@ 2026-02-24 2:03 ` Eduard Zingerman
2026-02-24 12:20 ` Mykyta Yatsenko
1 sibling, 1 reply; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 2:03 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace veristat's per-program object re-opening with
> bpf_program__clone().
>
> Previously, veristat opened a separate bpf_object for every program in a
> multi-program object file, iterated all programs to enable only the
> target one, and then loaded the entire object.
>
> Use bpf_object__prepare() once, then call bpf_program__clone() for each
> program individually. This lets veristat load programs one at a time
> from a single prepared object.
>
> The caller now owns the returned fd and closes it after collecting stats.
> Remove the special single-program fast path and the per-file early exit
> in handle_verif_mode() so all files are always processed.
>
> Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
> must run before bpf_object__prepare(), and fixup_obj() for per-program
> fixups (struct_ops masking, freplace type guessing) that run before each
> bpf_program__clone() call.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
I ran the selftests binaries through old and new veristat versions and see
some discrepancies. csv files attached.
It looks like there are some failures that are now not logged.
There is also at least one success -> failure transition and
a bunch of failure -> success transitions.
I see no such differences for sched_ext programs.
Is this an expected behavior?
[...]
[-- Attachment #2: selftests.master.veristat.csv.gz --]
[-- Type: application/gzip, Size: 69925 bytes --]
[-- Attachment #3: selftests.with-patch.veristat.csv.gz --]
[-- Type: application/gzip, Size: 67095 bytes --]
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-24 2:03 ` Eduard Zingerman
@ 2026-02-24 12:20 ` Mykyta Yatsenko
2026-02-24 19:08 ` Eduard Zingerman
0 siblings, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-24 12:20 UTC (permalink / raw)
To: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On 2/24/26 02:03, Eduard Zingerman wrote:
> On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Replace veristat's per-program object re-opening with
>> bpf_program__clone().
>>
>> Previously, veristat opened a separate bpf_object for every program in a
>> multi-program object file, iterated all programs to enable only the
>> target one, and then loaded the entire object.
>>
>> Use bpf_object__prepare() once, then call bpf_program__clone() for each
>> program individually. This lets veristat load programs one at a time
>> from a single prepared object.
>>
>> The caller now owns the returned fd and closes it after collecting stats.
>> Remove the special single-program fast path and the per-file early exit
>> in handle_verif_mode() so all files are always processed.
>>
>> Split fixup_obj() into fixup_obj_maps() for object-wide map fixups that
>> must run before bpf_object__prepare(), and fixup_obj() for per-program
>> fixups (struct_ops masking, freplace type guessing) that run before each
>> bpf_program__clone() call.
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
> I run selftests binaries through old and new veristat versions and see
> some discrepancies. csv files attached.
> It looks like there are some failures that are now not logged.
> There is also at least one success -> failure transition and
> a bunch of failure -> success transitions.
> I see no such differences for sched_ext programs.
> Is this an expected behavior?
yes, the regressions are explained in the cover letter:
"""
Known regression:
- Program-containing maps (PROG_ARRAY, DEVMAP, CPUMAP) track
owner program type. Programs with incompatible attributes
loaded against a shared map will be rejected. This is
expected kernel behavior.
"""
In the previous version of this series there were no regressions,
but to achieve that we had to be a little bit creative with map
loading; have a look:
https://lore.kernel.org/all/20260212-veristat_prepare-v1-1-c351023fb0db@meta.com/
clone_prog_maps()
The improvements are explained in the sibling thread with Alexei
(again because of the PROG_ARRAY map type).
>
> [...]
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-24 12:20 ` Mykyta Yatsenko
@ 2026-02-24 19:08 ` Eduard Zingerman
2026-02-24 19:12 ` Mykyta Yatsenko
0 siblings, 1 reply; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:08 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Tue, 2026-02-24 at 12:20 +0000, Mykyta Yatsenko wrote:
[...]
> > I run selftests binaries through old and new veristat versions and see
> > some discrepancies. csv files attached.
> > It looks like there are some failures that are now not logged.
> > There is also at least one success -> failure transition and
> > a bunch of failure -> success transitions.
> > I see no such differences for sched_ext programs.
> > Is this an expected behavior?
> yes, the regressions are explained in the cover letter:
> """
> Known regression:
> - Program-containing maps (PROG_ARRAY, DEVMAP, CPUMAP) track
> owner program type. Programs with incompatible attributes
> loaded against a shared map will be rejected. This is
> expected kernel behavior.
> """
> in the previous version of this series, there were no regressions,
> but to achieve that we had to be a little bit creative with maps
> loading, have a look:
> https://lore.kernel.org/all/20260212-veristat_prepare-v1-1-c351023fb0db@meta.com/
> clone_prog_maps()
>
> The improvements are explained in the sibling thread with Alexei
> (again because of PROG_ARRAY type of maps)
Are you sure that's what happens?
Looking at 'failure -> N/A' transitions, it appears that this is
caused by the early exit from process_obj() if bpf_object__prepare() fails.
Previously each program in the object failing __prepare() was
reported as 'failed'; now these are skipped entirely.
I think it would be nice to have an additional logic in process_obj()
marking programs as failed if __prepare() fails.
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-24 19:08 ` Eduard Zingerman
@ 2026-02-24 19:12 ` Mykyta Yatsenko
2026-02-24 19:16 ` Eduard Zingerman
0 siblings, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-24 19:12 UTC (permalink / raw)
To: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On 2/24/26 19:08, Eduard Zingerman wrote:
> On Tue, 2026-02-24 at 12:20 +0000, Mykyta Yatsenko wrote:
>
> [...]
>
>>> I run selftests binaries through old and new veristat versions and see
>>> some discrepancies. csv files attached.
>>> It looks like there are some failures that are now not logged.
>>> There is also at least one success -> failure transition and
>>> a bunch of failure -> success transitions.
>>> I see no such differences for sched_ext programs.
>>> Is this an expected behavior?
>> yes, the regressions are explained in the cover letter:
>> """
>> Known regression:
>> - Program-containing maps (PROG_ARRAY, DEVMAP, CPUMAP) track
>> owner program type. Programs with incompatible attributes
>> loaded against a shared map will be rejected. This is
>> expected kernel behavior.
>> """
>> in the previous version of this series, there were no regressions,
>> but to achieve that we had to be a little bit creative with maps
>> loading, have a look:
>> https://lore.kernel.org/all/20260212-veristat_prepare-v1-1-c351023fb0db@meta.com/
>> clone_prog_maps()
>>
>> The improvements are explained in the sibling thread with Alexei
>> (again because of PROG_ARRAY type of maps)
> Are you sure that's what happens?
> Looking at 'failure -> N/A' transitions, it appears that this is
> caused by the early exit from process_obj() if bpf_object__prepare() fails.
> Previously each program in the object failing the __prepare() was
> reported as 'failed', now these are skipped entirely.
> I think it would be nice to have an additional logic in process_obj()
> marking programs as failed if __prepare() fails.
Ah, you mean those ones; I don't consider these regressions,
because they failed in the base version anyway. I discussed this case
with Andrii:
https://lore.kernel.org/all/7990bafe-d72c-47de-a711-0c8a888d4ed9@gmail.com/
What I was talking about is the case where a program goes from
success to failure with these changes.
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat
2026-02-24 19:12 ` Mykyta Yatsenko
@ 2026-02-24 19:16 ` Eduard Zingerman
0 siblings, 0 replies; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:16 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Tue, 2026-02-24 at 19:12 +0000, Mykyta Yatsenko wrote:
> On 2/24/26 19:08, Eduard Zingerman wrote:
> > On Tue, 2026-02-24 at 12:20 +0000, Mykyta Yatsenko wrote:
> >
> > [...]
> >
> > > > I run selftests binaries through old and new veristat versions and see
> > > > some discrepancies. csv files attached.
> > > > It looks like there are some failures that are now not logged.
> > > > There is also at least one success -> failure transition and
> > > > a bunch of failure -> success transitions.
> > > > I see no such differences for sched_ext programs.
> > > > Is this an expected behavior?
> > > yes, the regressions are explained in the cover letter:
> > > """
> > > Known regression:
> > > - Program-containing maps (PROG_ARRAY, DEVMAP, CPUMAP) track
> > > owner program type. Programs with incompatible attributes
> > > loaded against a shared map will be rejected. This is
> > > expected kernel behavior.
> > > """
> > > in the previous version of this series, there were no regressions,
> > > but to achieve that we had to be a little bit creative with maps
> > > loading, have a look:
> > > https://lore.kernel.org/all/20260212-veristat_prepare-v1-1-c351023fb0db@meta.com/
> > > clone_prog_maps()
> > >
> > > The improvements are explained in the sibling thread with Alexei
> > > (again because of PROG_ARRAY type of maps)
> > Are you sure that's what happens?
> > Looking at 'failure -> N/A' transitions, it appears that this is
> > caused by the early exit from process_obj() if bpf_object__prepare() fails.
> > Previously each program in the object failing the __prepare() was
> > reported as 'failed', now these are skipped entirely.
> > I think it would be nice to have an additional logic in process_obj()
> > marking programs as failed if __prepare() fails.
> Ah, you mean those ones, I don't consider these regressions,
> because they failed in the base version anyways. I discussed this case
> with Andrii:
> https://lore.kernel.org/all/7990bafe-d72c-47de-a711-0c8a888d4ed9@gmail.com/
>
> What I was talking about is the case where a program goes from
> success to failure with these changes.
I think this part is worth doing:
> If we really need to be on par, I can iterate over progs if
> bpf_object__prepare()
> fails, it just looks a bit awkward.
Because otherwise these programs would be completely invisible in the
csv output.
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
@ 2026-02-24 19:28 ` Eduard Zingerman
2026-02-24 19:32 ` Eduard Zingerman
2026-02-24 20:47 ` Mykyta Yatsenko
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 2 replies; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:28 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
It seems 'fd_array_cnt' is not copied, should it be?
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
Just curious: why did you decide not to inherit logging properties from
the original program?
Unless overridden, the original program would point to the buffer
specified for the object in bpf_object_open_opts->kernel_log_buf, right?
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-24 19:28 ` Eduard Zingerman
@ 2026-02-24 19:32 ` Eduard Zingerman
2026-02-24 20:47 ` Mykyta Yatsenko
1 sibling, 0 replies; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:32 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Tue, 2026-02-24 at 11:28 -0800, Eduard Zingerman wrote:
[...]
> > + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> > + if (attr.token_fd)
> > + attr.prog_flags |= BPF_F_TOKEN_FD;
>
> Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
Nope, bpf_object_load_prog() does it the same way you do here.
Sorry for the noise.
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-24 19:28 ` Eduard Zingerman
2026-02-24 19:32 ` Eduard Zingerman
@ 2026-02-24 20:47 ` Mykyta Yatsenko
1 sibling, 0 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-24 20:47 UTC (permalink / raw)
To: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On 2/24/26 19:28, Eduard Zingerman wrote:
> On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> It seems 'fd_array_cnt' is not copied, should it be?
>
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
> Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
>
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> Just curious why did you decide not to inherit logging properties from
> the original program?
> Unless overridden, the original program would point to the buffer
> specified for the object in bpf_object_open_opts->kernel_log_buf, right?
Inheriting the object's log_buf here would mean writing
into a shared mutable buffer, which does not sound like a good idea;
I don't see a scenario where this would be useful.
>
>> +
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
2026-02-24 19:28 ` Eduard Zingerman
@ 2026-03-06 17:22 ` Andrey Grodzovsky
2026-03-10 0:08 ` Mykyta Yatsenko
2026-03-11 22:52 ` Andrii Nakryiko
2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 2 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-06 17:22 UTC (permalink / raw)
To: Mykyta Yatsenko, Andrii Nakryiko
Cc: bpf, ast, daniel, kernel-team, DL Linux Open Source Team
Hi Mykyta and Andrii!
We're evaluating the bpf_object__prepare() +
bpf_program__clone() API for use in a production BPF
application that manages hundreds of BPF programs with
selective (dynamic) loading — some programs are loaded at
startup, others loaded/unloaded at runtime based on feature
configuration.
We have a few questions about the intended usage and
potential extensions of this API:
1. Compatibility with bpf_object__load() and object state
After bpf_object__prepare(), the object is in OBJ_PREPARED
state. Several libbpf APIs (e.g., bpf_program__set_type())
gate on OBJ_LOADED state.
Is there a recommended way to transition the object to
OBJ_LOADED after cloning all desired programs? For example,
would a bpf_object__finalize() or similar API that runs
post_load_cleanup() and sets OBJ_LOADED be in scope? This
would allow users to benefit from prepare() + clone() for
selective loading while keeping the object in a state that
the rest of libbpf expects. Or, is the new API not intended
to work with bpf_object in the first place ?
2. Storing the clone FD back on struct bpf_program
bpf_program__clone() returns a caller-owned FD, but APIs
like bpf_program__attach() read prog->fd internally.
Without a way to set the FD back on the program struct, the
caller must reimplement attach logic (section-type dispatch
for kprobe, fentry, raw_tp, etc.).
Would a bpf_program__set_fd() setter (similar to the
existing btf__set_fd()) be acceptable to store the clone FD
back, making bpf_program__attach() and related APIs usable
with cloned programs?
3. Use case: selective program loading from a single BPF
object
Our use case involves a single large BPF object (skeleton)
with hundreds of programs where a subset is loaded at
startup and others are loaded/unloaded dynamically based on
runtime configuration. The current approach requires either:
- Loading all programs upfront (wasteful), or
- Maintaining out-of-tree patches to libbpf for selective
loading
Last year we made an attempt to upstream our solution to
this use case to libbpf[1], but Andrii pointed out that our
approach was problematic for upstream. He then proposed
splitting bpf_object__load() into two steps:
bpf_object__prepare() (creates maps, loads BTF, does
relocations, produces final program instructions) and then
bpf_object__load(). We are trying to follow up on his
input and become more upstream-compliant.
The prepare() + clone() API seems similar to this,
but the questions above about object state and FD ownership
are the main gaps for production adoption. Are there plans
to address these in future revisions, or is this
intentionally scoped to testing/tooling use cases only?
Thanks,
Andrey
[1] https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index dfc37a615578..0be34852350f 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> */
> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>
> +/**
> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> + * BPF object into the kernel, returning its file descriptor.
> + *
> + * The BPF object must have been previously prepared with
> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> + * overrides the defaults derived from the program/object internals.
> + * If @opts is NULL, all fields are populated automatically.
> + *
> + * The returned FD is owned by the caller and must be closed with close().
> + *
> + * @param prog BPF program from a prepared object
> + * @param opts Optional load options; non-zero fields override defaults
> + * @return program FD (>= 0) on success; negative error code on failure
> + */
> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> +
> #ifdef __cplusplus
> } /* extern "C" */
> #endif
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d18fbcea7578..e727a54e373a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> bpf_map__set_exclusive_program;
> bpf_map__exclusive_program;
> bpf_prog_assoc_struct_ops;
> + bpf_program__clone;
> bpf_program__assoc_struct_ops;
> btf__permute;
> } LIBBPF_1.6.0;
>
> --
> 2.47.3
>
>
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
@ 2026-03-10 0:08 ` Mykyta Yatsenko
2026-03-11 13:35 ` Andrey Grodzovsky
2026-03-11 22:52 ` Andrii Nakryiko
1 sibling, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-03-10 0:08 UTC (permalink / raw)
To: Andrey Grodzovsky, Andrii Nakryiko
Cc: bpf, ast, daniel, kernel-team, DL Linux Open Source Team
Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com> writes:
Hi,
Thanks for reaching out. I'm sharing my own opinion here; I have not
discussed this with Andrii in depth.
bpf_object__finalize() - you probably do not need this: the example you
mention, bpf_program__set_type(), actually rejects changes when the
object is in the LOADED state (you can't mutate a loaded program).
To make your dynamic loading/unloading work, you need to keep the
object in the PREPARED state indefinitely. Since clone() uses some of
the fields that are destroyed by post_load_cleanup(), calling
bpf_object__load() in this setup may be unsafe (leaking FDs, etc).
We have a precedent with bpf_map__reuse_fd(), so bpf_program__set_fd()
does not seem too extreme to me, but it would change some lifecycle
invariants: we'd have a loaded program in a prepared object, which I'm
not 100% sure is a problem right now, but it could break something.
Also, a small detail in case you need it: bpf_program__clone() does not
support PROG_ARRAY maps (see the cover letter for details).
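To make the intended flow concrete, here is a rough sketch of prepare() +
selective clone() (object and program names are made up; error handling
trimmed; this is illustrative, not part of the patch):

```c
/* Sketch: prepare once, clone selected programs on demand.
 * "obj.bpf.o" and "my_prog" are placeholder names.
 */
#include <bpf/libbpf.h>
#include <errno.h>
#include <unistd.h>

static int load_one(struct bpf_object *obj, const char *name)
{
	struct bpf_program *prog;

	prog = bpf_object__find_program_by_name(obj, name);
	if (!prog)
		return -ENOENT;
	/* NULL opts: derive everything from the prepared object */
	return bpf_program__clone(prog, NULL);
}

int main(void)
{
	struct bpf_object *obj = bpf_object__open_file("obj.bpf.o", NULL);
	int fd;

	if (!obj || bpf_object__prepare(obj))
		return 1;
	/* object stays in PREPARED state; clone programs as features toggle */
	fd = load_one(obj, "my_prog");
	if (fd >= 0)
		close(fd); /* FD is caller-owned */
	bpf_object__close(obj);
	return 0;
}
```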
> Mykyta and Andrii Hi!
>
> We're evaluating the bpf_object__prepare() +
> bpf_program__clone() API for use in a production BPF
> application that manages hundreds of BPF programs with
> selective (dynamic) loading — some programs are loaded at
> startup, others loaded/unloaded at runtime based on feature
> configuration.
>
> We have a few questions about the intended usage and
> potential extensions of this API:
>
> 1. Compatibility with bpf_object__load() and object state
>
> After bpf_object__prepare(), the object is in OBJ_PREPARED
> state. Several libbpf APIs (e.g., bpf_program__set_type())
> gate on OBJ_LOADED state.
>
> Is there a recommended way to transition the object to
> OBJ_LOADED after cloning all desired programs? For example,
> would a bpf_object__finalize() or similar API that runs
> post_load_cleanup() and sets OBJ_LOADED be in scope? This
> would allow users to benefit from prepare() + clone() for
> selective loading while keeping the object in a state that
> the rest of libbpf expects. Or, is the new API not intended
> to work with bpf_object in the first place ?
>
> 2. Storing the clone FD back on struct bpf_program
>
> bpf_program__clone() returns a caller-owned FD, but APIs
> like bpf_program__attach() read prog->fd internally.
> Without a way to set the FD back on the program struct, the
> caller must reimplement attach logic (section-type dispatch
> for kprobe, fentry, raw_tp, etc.).
>
> Would a bpf_program__set_fd() setter (similar to the
> existing btf__set_fd()) be acceptable to store the clone FD
> back, making bpf_program__attach() and related APIs usable
> with cloned programs?
>
> 3. Use case: selective program loading from a single BPF
> object
>
> Our use case involves a single large BPF object (skeleton)
> with hundreds of programs where a subset is loaded at
> startup and others are loaded/unloaded dynamically based on
> runtime configuration. The current approach requires either:
> - Loading all programs upfront (wasteful), or
> - Maintaining out-of-tree patches to libbpf for selective
> loading
>
> Last year we made an attempt to upstream our solution to
> this use case to libbpf[1] but Andrii pointed out how our
> approach was problematic for upstream. He then proposed
> splitting bpf_object__load() into two steps:
> bpf_object__prepare() (creates maps, loads BTF, does
> relocations, produces final program instructions) and then
> bpf_object__load(). We are trying to follow up on his
> input and become more upstream compliant.
>
> The prepare() + clone() API seems similiar to this,
> but the questions above about object state and FD ownership
> are the main gaps for production adoption. Are there plans
> to address these in future revisions, or is this
> intentionally scoped to testing/tooling use cases only?
>
> Thanks,
> Andrey
>
> [1] https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
>
> On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>> +
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> index dfc37a615578..0be34852350f 100644
>> --- a/tools/lib/bpf/libbpf.h
>> +++ b/tools/lib/bpf/libbpf.h
>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>> */
>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>
>> +/**
>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>> + * BPF object into the kernel, returning its file descriptor.
>> + *
>> + * The BPF object must have been previously prepared with
>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>> + * overrides the defaults derived from the program/object internals.
>> + * If @opts is NULL, all fields are populated automatically.
>> + *
>> + * The returned FD is owned by the caller and must be closed with close().
>> + *
>> + * @param prog BPF program from a prepared object
>> + * @param opts Optional load options; non-zero fields override defaults
>> + * @return program FD (>= 0) on success; negative error code on failure
>> + */
>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>> +
>> #ifdef __cplusplus
>> } /* extern "C" */
>> #endif
>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>> index d18fbcea7578..e727a54e373a 100644
>> --- a/tools/lib/bpf/libbpf.map
>> +++ b/tools/lib/bpf/libbpf.map
>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>> bpf_map__set_exclusive_program;
>> bpf_map__exclusive_program;
>> bpf_prog_assoc_struct_ops;
>> + bpf_program__clone;
>> bpf_program__assoc_struct_ops;
>> btf__permute;
>> } LIBBPF_1.6.0;
>>
>> --
>> 2.47.3
>>
>>
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-10 0:08 ` Mykyta Yatsenko
@ 2026-03-11 13:35 ` Andrey Grodzovsky
0 siblings, 0 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-11 13:35 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
Thanks for the reply and all the clarifications! We are looking forward
to this patchset being merged so we can try to integrate it into our
dynamic loading solutions.
We will reach out with further questions down the road as we take a deeper
look into this.
Andrey
On Mon, Mar 9, 2026 at 8:08 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com> writes:
>
> Hi,
> Thanks for reaching out, I'm providing my own opinion on this, I did not
> discuss this with Andrii in depth.
>
> bpf_object__finalize() - you probably do not need this, the mentioned
> example of bpf_program__set_type() actually rejects when object is in
> LOADED state (you can't mutate loaded program).
>
> To make your dynamic loading/unloading work, you need to keep your
> object in the PREPARED state indefinitely. As clone() uses some of the
> fields that are destroyed by the post_load_cleanup(). calling
> bpf_object__load() in this setup may be unsafe (leaking fd, etc).
>
> We have a precedent with bpf_map__reuse_fd(), bpf_program__set_fd() does
> not seem too extreme to me, but it seems like some lifecycle invariants
> are changing, as we'll have loaded program in prepared object, which I'm
> not 100% sure is a problem right now, but possibly going to break
> something.
>
> Also a small detail: bpf_program__clone() does not support PROG_ARRAY
> maps (just in case you need that) (see cover letter for details).
>
> > Mykyta and Andrii Hi!
> >
> > We're evaluating the bpf_object__prepare() +
> > bpf_program__clone() API for use in a production BPF
> > application that manages hundreds of BPF programs with
> > selective (dynamic) loading — some programs are loaded at
> > startup, others loaded/unloaded at runtime based on feature
> > configuration.
> >
> > We have a few questions about the intended usage and
> > potential extensions of this API:
> >
> > 1. Compatibility with bpf_object__load() and object state
> >
> > After bpf_object__prepare(), the object is in OBJ_PREPARED
> > state. Several libbpf APIs (e.g., bpf_program__set_type())
> > gate on OBJ_LOADED state.
> >
> > Is there a recommended way to transition the object to
> > OBJ_LOADED after cloning all desired programs? For example,
> > would a bpf_object__finalize() or similar API that runs
> > post_load_cleanup() and sets OBJ_LOADED be in scope? This
> > would allow users to benefit from prepare() + clone() for
> > selective loading while keeping the object in a state that
> > the rest of libbpf expects. Or, is the new API not intended
> > to work with bpf_object in the first place ?
> >
> > 2. Storing the clone FD back on struct bpf_program
> >
> > bpf_program__clone() returns a caller-owned FD, but APIs
> > like bpf_program__attach() read prog->fd internally.
> > Without a way to set the FD back on the program struct, the
> > caller must reimplement attach logic (section-type dispatch
> > for kprobe, fentry, raw_tp, etc.).
> >
> > Would a bpf_program__set_fd() setter (similar to the
> > existing btf__set_fd()) be acceptable to store the clone FD
> > back, making bpf_program__attach() and related APIs usable
> > with cloned programs?
> >
> > 3. Use case: selective program loading from a single BPF
> > object
> >
> > Our use case involves a single large BPF object (skeleton)
> > with hundreds of programs where a subset is loaded at
> > startup and others are loaded/unloaded dynamically based on
> > runtime configuration. The current approach requires either:
> > - Loading all programs upfront (wasteful), or
> > - Maintaining out-of-tree patches to libbpf for selective
> > loading
> >
> > Last year we made an attempt to upstream our solution to
> > this use case to libbpf[1] but Andrii pointed out how our
> > approach was problematic for upstream. He then proposed
> > splitting bpf_object__load() into two steps:
> > bpf_object__prepare() (creates maps, loads BTF, does
> > relocations, produces final program instructions) and then
> > bpf_object__load(). We are trying to follow up on his
> > input and become more upstream compliant.
> >
> > The prepare() + clone() API seems similiar to this,
> > but the questions above about object state and FD ownership
> > are the main gaps for production adoption. Are there plans
> > to address these in future revisions, or is this
> > intentionally scoped to testing/tooling use cases only?
> >
> > Thanks,
> > Andrey
> >
> > [1] https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
> >
> > On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> >>
> >> From: Mykyta Yatsenko <yatsenko@meta.com>
> >>
> >> Add bpf_program__clone() API that loads a single BPF program from a
> >> prepared BPF object into the kernel, returning a file descriptor owned
> >> by the caller.
> >>
> >> After bpf_object__prepare(), callers can use bpf_program__clone() to
> >> load individual programs with custom bpf_prog_load_opts, instead of
> >> loading all programs at once via bpf_object__load(). Non-zero fields in
> >> opts override the defaults derived from the program and object
> >> internals; passing NULL opts populates everything automatically.
> >>
> >> Internally, bpf_program__clone() resolves BTF-based attach targets
> >> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> >> func/line info, fd_array, license, and kern_version from the
> >> prepared object before calling bpf_prog_load().
> >>
> >> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> >> ---
> >> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> >> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> >> tools/lib/bpf/libbpf.map | 1 +
> >> 3 files changed, 82 insertions(+)
> >>
> >> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> >> index 0c8bf0b5cce4..4b084bda3f47 100644
> >> --- a/tools/lib/bpf/libbpf.c
> >> +++ b/tools/lib/bpf/libbpf.c
> >> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> >> return prog->line_info_cnt;
> >> }
> >>
> >> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> >> +{
> >> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> >> + struct bpf_prog_load_opts *pattr = &attr;
> >> + struct bpf_object *obj;
> >> + int err, fd;
> >> +
> >> + if (!prog)
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + obj = prog->obj;
> >> + if (obj->state < OBJ_PREPARED)
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + /* Copy caller opts, fall back to prog/object defaults */
> >> + OPTS_SET(pattr, expected_attach_type,
> >> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> >> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> >> + OPTS_SET(pattr, attach_btf_obj_fd,
> >> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> >> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> >> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> >> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> >> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> >> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> >> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> >> + if (attr.token_fd)
> >> + attr.prog_flags |= BPF_F_TOKEN_FD;
> >> +
> >> + /* BTF func/line info */
> >> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> >> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> >> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> >> + OPTS_SET(pattr, func_info_cnt,
> >> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> >> + OPTS_SET(pattr, func_info_rec_size,
> >> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> >> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> >> + OPTS_SET(pattr, line_info_cnt,
> >> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> >> + OPTS_SET(pattr, line_info_rec_size,
> >> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> >> + }
> >> +
> >> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> >> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> >> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> >> +
> >> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> >> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> >> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> >> + if (err)
> >> + return libbpf_err(err);
> >> + }
> >> +
> >> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> >> + pattr);
> >> +
> >> + return libbpf_err(fd);
> >> +}
> >> +
> >> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> >> .sec = (char *)sec_pfx, \
> >> .prog_type = BPF_PROG_TYPE_##ptype, \
> >> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> >> index dfc37a615578..0be34852350f 100644
> >> --- a/tools/lib/bpf/libbpf.h
> >> +++ b/tools/lib/bpf/libbpf.h
> >> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> >> */
> >> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
> >>
> >> +/**
> >> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> >> + * BPF object into the kernel, returning its file descriptor.
> >> + *
> >> + * The BPF object must have been previously prepared with
> >> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> >> + * overrides the defaults derived from the program/object internals.
> >> + * If @opts is NULL, all fields are populated automatically.
> >> + *
> >> + * The returned FD is owned by the caller and must be closed with close().
> >> + *
> >> + * @param prog BPF program from a prepared object
> >> + * @param opts Optional load options; non-zero fields override defaults
> >> + * @return program FD (>= 0) on success; negative error code on failure
> >> + */
> >> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> >> +
> >> #ifdef __cplusplus
> >> } /* extern "C" */
> >> #endif
> >> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> >> index d18fbcea7578..e727a54e373a 100644
> >> --- a/tools/lib/bpf/libbpf.map
> >> +++ b/tools/lib/bpf/libbpf.map
> >> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> >> bpf_map__set_exclusive_program;
> >> bpf_map__exclusive_program;
> >> bpf_prog_assoc_struct_ops;
> >> + bpf_program__clone;
> >> bpf_program__assoc_struct_ops;
> >> btf__permute;
> >> } LIBBPF_1.6.0;
> >>
> >> --
> >> 2.47.3
> >>
> >>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
2026-03-10 0:08 ` Mykyta Yatsenko
@ 2026-03-11 22:52 ` Andrii Nakryiko
2026-03-16 14:23 ` Andrey Grodzovsky
1 sibling, 1 reply; 25+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 22:52 UTC (permalink / raw)
To: Andrey Grodzovsky
Cc: Mykyta Yatsenko, Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
On Fri, Mar 6, 2026 at 9:22 AM Andrey Grodzovsky
<andrey.grodzovsky@crowdstrike.com> wrote:
>
> Mykyta and Andrii Hi!
>
> We're evaluating the bpf_object__prepare() +
> bpf_program__clone() API for use in a production BPF
> application that manages hundreds of BPF programs with
> selective (dynamic) loading — some programs are loaded at
> startup, others loaded/unloaded at runtime based on feature
> configuration.
>
> We have a few questions about the intended usage and
> potential extensions of this API:
>
> 1. Compatibility with bpf_object__load() and object state
>
> After bpf_object__prepare(), the object is in OBJ_PREPARED
> state. Several libbpf APIs (e.g., bpf_program__set_type())
> gate on OBJ_LOADED state.
>
> Is there a recommended way to transition the object to
> OBJ_LOADED after cloning all desired programs? For example,
> would a bpf_object__finalize() or similar API that runs
> post_load_cleanup() and sets OBJ_LOADED be in scope? This
> would allow users to benefit from prepare() + clone() for
> selective loading while keeping the object in a state that
> the rest of libbpf expects. Or, is the new API not intended
> to work with bpf_object in the first place ?
exactly, it's not. It's an escape hatch out of bpf_object into a
low-level FD; it was never designed to produce something that should
be put back into bpf_object. This clone stuff is for generic low-level
tooling like veristat and/or maybe bpftool's generic program loading.
Another use case is cloning the same fentry program to be attached to
multiple uniform targets (I do a similar hack in retsnoop, for
instance).
In none of those cases are cloned FDs meant to be interoperable with
bpf_object/bpf_program abstractions.
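For illustration only (not part of the patch): the veristat-style flow
described above would look roughly like this, assuming the
bpf_program__clone() signature proposed in this series:

```c
#include <errno.h>
#include <unistd.h>
#include <bpf/libbpf.h>

/* Sketch of the low-level tooling usage described above; assumes the
 * bpf_program__clone() API proposed in this series. */
static int load_each_program(const char *path)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int fd, err;

	obj = bpf_object__open(path);
	if (!obj)
		return -errno;

	/* ELF parsing, BTF processing, relocations: done exactly once */
	err = bpf_object__prepare(obj);
	if (err)
		goto out;

	bpf_object__for_each_program(prog, obj) {
		/* NULL opts: all load parameters derived from prog/obj */
		fd = bpf_program__clone(prog, NULL);
		if (fd < 0)
			continue; /* record per-program failure */
		/* ...collect per-program verification stats... */
		close(fd); /* caller owns the FD */
	}
out:
	bpf_object__close(obj);
	return err;
}
```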
>
> 2. Storing the clone FD back on struct bpf_program
>
> bpf_program__clone() returns a caller-owned FD, but APIs
> like bpf_program__attach() read prog->fd internally.
> Without a way to set the FD back on the program struct, the
> caller must reimplement attach logic (section-type dispatch
> for kprobe, fentry, raw_tp, etc.).
>
> Would a bpf_program__set_fd() setter (similar to the
> existing btf__set_fd()) be acceptable to store the clone FD
> back, making bpf_program__attach() and related APIs usable
> with cloned programs?
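(For context: without bpf_program__attach(), a cloned FD has to be
attached through the low-level API. A rough sketch for the fentry
case, assuming the attach target was resolved at load time — this is
not part of the patch:)

```c
#include <bpf/bpf.h>

/* Sketch: attach a cloned fentry program FD directly via the
 * low-level link API; the attach target (attach_btf_id) was already
 * baked in when the program was loaded. */
static int attach_cloned_fentry(int prog_fd)
{
	return bpf_link_create(prog_fd, 0 /* target_fd */,
			       BPF_TRACE_FENTRY, NULL /* opts */);
}
```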
technically this could be done, probably, but it just feels too dirty,
tbh... there is so much program-specific information that libbpf
internally preserves (and gives access to most of it through
bpf_program's getters) that would need to be invalidated and/or
re-fetched with this set_fd() approach that I don't really even want
to consider it too seriously... but see below
>
> 3. Use case: selective program loading from a single BPF
> object
>
> Our use case involves a single large BPF object (skeleton)
> with hundreds of programs where a subset is loaded at
> startup and others are loaded/unloaded dynamically based on
> runtime configuration. The current approach requires either:
> - Loading all programs upfront (wasteful), or
> - Maintaining out-of-tree patches to libbpf for selective
> loading
>
> Last year we made an attempt to upstream our solution to
> this use case to libbpf[1] but Andrii pointed out how our
> approach was problematic for upstream. He then proposed
> splitting bpf_object__load() into two steps:
> bpf_object__prepare() (creates maps, loads BTF, does
> relocations, produces final program instructions) and then
> bpf_object__load(). We are trying to follow up on his
> input and become more upstream compliant.
>
> The prepare() + clone() API seems similar to this,
> but the questions above about object state and FD ownership
> are the main gaps for production adoption. Are there plans
> to address these in future revisions, or is this
> intentionally scoped to testing/tooling use cases only?
I remember your use case. I don't think clone is really a great fit
*if* you still want to stay at the bpf_object/BPF skeleton high level
of API (i.e., if you want to use bpf_program__attach() APIs and BPF
links).
While definitely a complication, I think we can add support for
loading a BPF program after bpf_object__load() has happened. You'd
have to keep your optional programs non-autoloaded (or call
bpf_program__set_autoload(false) explicitly). I'm also thinking we
might want to make this behavior explicitly opt-in through
bpf_object_open_opts, as there are various points in the bpf_object
lifetime where we make decisions on the assumption that certain
programs will never be loaded, so we'd need to explicitly indicate
that *all* programs should be considered loadable, possibly much
later.
Another thing that won't (or rather might not) work is declarative
prog_array initialization and struct_ops. Those two steps happen in
bpf_object__load() after all programs are loaded. I don't think that
is a problem for you, but I just want to point out that program
loading is not always the last step.
But other than that, despite the added complications, it's probably
better to just allow loading programs lazily after bpf_object__load(),
after all.
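A sketch of the direction discussed above: bpf_program__set_autoload()
is existing libbpf API, but the object name, the program variable, and
the deferred-load step itself are placeholders for the proposal, not
current libbpf:

```c
/* Hypothetical lazy-loading flow; only set_autoload() below is
 * current libbpf API, the rest is the proposal under discussion. */
struct bpf_object *obj = bpf_object__open("features.bpf.o");

bpf_program__set_autoload(optional_prog, false); /* defer this one */
bpf_object__load(obj);                           /* loads the rest  */

/* ...later, when the feature is enabled, load the deferred program
 * via whatever API comes out of this discussion... */
```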
>
> Thanks,
> Andrey
>
> [1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
>
> On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> >
> > From: Mykyta Yatsenko <yatsenko@meta.com>
> >
> > Add bpf_program__clone() API that loads a single BPF program from a
> > prepared BPF object into the kernel, returning a file descriptor owned
> > by the caller.
> >
> > After bpf_object__prepare(), callers can use bpf_program__clone() to
> > load individual programs with custom bpf_prog_load_opts, instead of
> > loading all programs at once via bpf_object__load(). Non-zero fields in
> > opts override the defaults derived from the program and object
> > internals; passing NULL opts populates everything automatically.
> >
> > Internally, bpf_program__clone() resolves BTF-based attach targets
> > (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> > func/line info, fd_array, license, and kern_version from the
> > prepared object before calling bpf_prog_load().
> >
> > Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> > ---
> > tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> > tools/lib/bpf/libbpf.h | 17 +++++++++++++
> >  tools/lib/bpf/libbpf.map |  1 +
> > 3 files changed, 82 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 0c8bf0b5cce4..4b084bda3f47 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> > return prog->line_info_cnt;
> > }
> >
> > +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> > +{
> > + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> > + struct bpf_prog_load_opts *pattr = &attr;
> > + struct bpf_object *obj;
> > + int err, fd;
> > +
> > + if (!prog)
> > + return libbpf_err(-EINVAL);
> > +
> > + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> > + return libbpf_err(-EINVAL);
> > +
> > + obj = prog->obj;
> > + if (obj->state < OBJ_PREPARED)
> > + return libbpf_err(-EINVAL);
> > +
> > + /* Copy caller opts, fall back to prog/object defaults */
> > + OPTS_SET(pattr, expected_attach_type,
> > + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> > + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> > + OPTS_SET(pattr, attach_btf_obj_fd,
> > + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> > + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> > + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> > + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> > + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> > + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> > + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> > + if (attr.token_fd)
> > + attr.prog_flags |= BPF_F_TOKEN_FD;
> > +
> > + /* BTF func/line info */
> > + if (obj->btf && btf__fd(obj->btf) >= 0) {
> > + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> > + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> > + OPTS_SET(pattr, func_info_cnt,
> > + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> > + OPTS_SET(pattr, func_info_rec_size,
> > + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> > + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> > + OPTS_SET(pattr, line_info_cnt,
> > + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> > + OPTS_SET(pattr, line_info_rec_size,
> > + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> > + }
> > +
> > + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> > + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> > + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> > +
> > + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> > + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> > + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> > + if (err)
> > + return libbpf_err(err);
> > + }
> > +
> > + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> > + pattr);
> > +
> > + return libbpf_err(fd);
> > +}
> > +
> > #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> > .sec = (char *)sec_pfx, \
> > .prog_type = BPF_PROG_TYPE_##ptype, \
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index dfc37a615578..0be34852350f 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> > */
> > LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
> >
> > +/**
> > + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> > + * BPF object into the kernel, returning its file descriptor.
> > + *
> > + * The BPF object must have been previously prepared with
> > + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> > + * overrides the defaults derived from the program/object internals.
> > + * If @opts is NULL, all fields are populated automatically.
> > + *
> > + * The returned FD is owned by the caller and must be closed with close().
> > + *
> > + * @param prog BPF program from a prepared object
> > + * @param opts Optional load options; non-zero fields override defaults
> > + * @return program FD (>= 0) on success; negative error code on failure
> > + */
> > +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> > +
> > #ifdef __cplusplus
> > } /* extern "C" */
> > #endif
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index d18fbcea7578..e727a54e373a 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> > bpf_map__set_exclusive_program;
> > bpf_map__exclusive_program;
> > bpf_prog_assoc_struct_ops;
> > + bpf_program__clone;
> > bpf_program__assoc_struct_ops;
> > btf__permute;
> > } LIBBPF_1.6.0;
> >
> > --
> > 2.47.3
> >
> >
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
` (2 preceding siblings ...)
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
@ 2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 0 replies; 25+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 23:03 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
On Fri, Feb 20, 2026 at 11:18 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
OPTS_GET(opts, expected_attach_type, prog->expected_attach_type)
and the same almost everywhere else.
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
as discussed offline, we shouldn't use OPTS_SET() here: we control
pattr's size and layout, so OPTS_SET() doesn't contribute anything and
should only be used for writing into user-provided opts structs.
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
[...]
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-11 22:52 ` Andrii Nakryiko
@ 2026-03-16 14:23 ` Andrey Grodzovsky
0 siblings, 0 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-16 14:23 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Mykyta Yatsenko, Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
On Wed, Mar 11, 2026 at 6:52 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 9:22 AM Andrey Grodzovsky
> <andrey.grodzovsky@crowdstrike.com> wrote:
> >
> > Mykyta and Andrii Hi!
> >
> > We're evaluating the bpf_object__prepare() +
> > bpf_program__clone() API for use in a production BPF
> > application that manages hundreds of BPF programs with
> > selective (dynamic) loading — some programs are loaded at
> > startup, others loaded/unloaded at runtime based on feature
> > configuration.
> >
> > We have a few questions about the intended usage and
> > potential extensions of this API:
> >
> > 1. Compatibility with bpf_object__load() and object state
> >
> > After bpf_object__prepare(), the object is in OBJ_PREPARED
> > state. Several libbpf APIs (e.g., bpf_program__set_type())
> > gate on OBJ_LOADED state.
> >
> > Is there a recommended way to transition the object to
> > OBJ_LOADED after cloning all desired programs? For example,
> > would a bpf_object__finalize() or similar API that runs
> > post_load_cleanup() and sets OBJ_LOADED be in scope? This
> > would allow users to benefit from prepare() + clone() for
> > selective loading while keeping the object in a state that
> > the rest of libbpf expects. Or, is the new API not intended
> > to work with bpf_object in the first place ?
>
> exactly, it's not. It's an escape hatch out of bpf_object into
> low-level FD, it was never designed to produce something that should
> be put back into bpf_object. This clone stuff is for generic low-level
> tooling like veristat and/or maybe bpftool's generic program loading.
> And another one is cloning the same fentry program to be attached into
> multiple uniform targets (I do a similar hack in retsnoop, for
> instance).
>
> In none of those cases cloned FDs are meant to be interoperable with
> bpf_object/bpf_program abstractions.
>
> >
> > 2. Storing the clone FD back on struct bpf_program
> >
> > bpf_program__clone() returns a caller-owned FD, but APIs
> > like bpf_program__attach() read prog->fd internally.
> > Without a way to set the FD back on the program struct, the
> > caller must reimplement attach logic (section-type dispatch
> > for kprobe, fentry, raw_tp, etc.).
> >
> > Would a bpf_program__set_fd() setter (similar to the
> > existing btf__set_fd()) be acceptable to store the clone FD
> > back, making bpf_program__attach() and related APIs usable
> > with cloned programs?
>
> technically this could be done, probably, but it just feels too dirty,
> tbh... there is so much program-specific information that libbpf
> internally preserves (and gives access to most of it through
> bpf_program's getter) that would need to be invalidated and/or
> re-fetched with this set_fd() approach, that I don't really even want
> to consider this too seriously... but see below
>
> >
> > 3. Use case: selective program loading from a single BPF
> > object
> >
> > Our use case involves a single large BPF object (skeleton)
> > with hundreds of programs where a subset is loaded at
> > startup and others are loaded/unloaded dynamically based on
> > runtime configuration. The current approach requires either:
> > - Loading all programs upfront (wasteful), or
> > - Maintaining out-of-tree patches to libbpf for selective
> > loading
> >
> > Last year we made an attempt to upstream our solution to
> > this use case to libbpf[1] but Andrii pointed out how our
> > approach was problematic for upstream. He then proposed
> > splitting bpf_object__load() into two steps:
> > bpf_object__prepare() (creates maps, loads BTF, does
> > relocations, produces final program instructions) and then
> > bpf_object__load(). We are trying to follow up on his
> > input and become more upstream compliant.
> >
> > The prepare() + clone() API seems similar to this,
> > but the questions above about object state and FD ownership
> > are the main gaps for production adoption. Are there plans
> > to address these in future revisions, or is this
> > intentionally scoped to testing/tooling use cases only?
>
> I remember your use case. I don't think clone is really a great fit
> *if* you still want to stay at bpf_object/bpf skeleton high-level of
> API (i.e., if you want to use bpf_program__attach() APIs and BPF
> links).
>
> While definitely a complication, I think we can add support for
> loading BPF program after bpf_object__load() happened. You'd have to
> keep your optional programs as non-autoloaded (or
> bpf_program__set_autoload(false) explicitly), and I'm thinking we
> might want to make this behavior opt-in explicitly through
> bpf_object_open_opts(), as there are various points in bpf_object
> lifetime where we make some decisions with the assumption that
> programs will never be loaded, so we'll need to explicitly indicate
> that *all* programs would need to be considered loadable, but maybe
> much later.
>
> Another thing that won't (or rather might not) work is declarative
> prog_array initialization and struct_ops. Those two steps happen in
> bpf_object__load() after all programs are loaded. I don't think that
> is the problem for you, but I just want to point out that program
> loading is not always the last step.
>
> But other than that, despite added complications, it's probably better
> to just allow to load programs lazily after bpf_object__load(), after
> all.
Thanks for the detailed info! We can start working on this ourselves
once we have some available time; we hope for your guidance during the
process.
Andrey
>
> >
> > Thanks,
> > Andrey
> >
> > [1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
> >
> > On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> > >
> > > From: Mykyta Yatsenko <yatsenko@meta.com>
> > >
> > > Add bpf_program__clone() API that loads a single BPF program from a
> > > prepared BPF object into the kernel, returning a file descriptor owned
> > > by the caller.
> > >
> > > After bpf_object__prepare(), callers can use bpf_program__clone() to
> > > load individual programs with custom bpf_prog_load_opts, instead of
> > > loading all programs at once via bpf_object__load(). Non-zero fields in
> > > opts override the defaults derived from the program and object
> > > internals; passing NULL opts populates everything automatically.
> > >
> > > Internally, bpf_program__clone() resolves BTF-based attach targets
> > > (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> > > func/line info, fd_array, license, and kern_version from the
> > > prepared object before calling bpf_prog_load().
> > >
> > > Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> > > ---
> > > tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> > > tools/lib/bpf/libbpf.h | 17 +++++++++++++
> > >  tools/lib/bpf/libbpf.map |  1 +
> > > 3 files changed, 82 insertions(+)
> > >
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index 0c8bf0b5cce4..4b084bda3f47 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> > > return prog->line_info_cnt;
> > > }
> > >
> > > +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> > > +{
> > > + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> > > + struct bpf_prog_load_opts *pattr = &attr;
> > > + struct bpf_object *obj;
> > > + int err, fd;
> > > +
> > > + if (!prog)
> > > + return libbpf_err(-EINVAL);
> > > +
> > > + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> > > + return libbpf_err(-EINVAL);
> > > +
> > > + obj = prog->obj;
> > > + if (obj->state < OBJ_PREPARED)
> > > + return libbpf_err(-EINVAL);
> > > +
> > > + /* Copy caller opts, fall back to prog/object defaults */
> > > + OPTS_SET(pattr, expected_attach_type,
> > > + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> > > + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> > > + OPTS_SET(pattr, attach_btf_obj_fd,
> > > + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> > > + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> > > + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> > > + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> > > + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> > > + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> > > + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> > > + if (attr.token_fd)
> > > + attr.prog_flags |= BPF_F_TOKEN_FD;
> > > +
> > > + /* BTF func/line info */
> > > + if (obj->btf && btf__fd(obj->btf) >= 0) {
> > > + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> > > + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> > > + OPTS_SET(pattr, func_info_cnt,
> > > + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> > > + OPTS_SET(pattr, func_info_rec_size,
> > > + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> > > + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> > > + OPTS_SET(pattr, line_info_cnt,
> > > + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> > > + OPTS_SET(pattr, line_info_rec_size,
> > > + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> > > + }
> > > +
> > > + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> > > + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> > > + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> > > +
> > > + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> > > + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> > > + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> > > + if (err)
> > > + return libbpf_err(err);
> > > + }
> > > +
> > > + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> > > + pattr);
> > > +
> > > + return libbpf_err(fd);
> > > +}
> > > +
> > > #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> > > .sec = (char *)sec_pfx, \
> > > .prog_type = BPF_PROG_TYPE_##ptype, \
> > > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > > index dfc37a615578..0be34852350f 100644
> > > --- a/tools/lib/bpf/libbpf.h
> > > +++ b/tools/lib/bpf/libbpf.h
> > > @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> > > */
> > > LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
> > >
> > > +/**
> > > + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> > > + * BPF object into the kernel, returning its file descriptor.
> > > + *
> > > + * The BPF object must have been previously prepared with
> > > + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> > > + * overrides the defaults derived from the program/object internals.
> > > + * If @opts is NULL, all fields are populated automatically.
> > > + *
> > > + * The returned FD is owned by the caller and must be closed with close().
> > > + *
> > > + * @param prog BPF program from a prepared object
> > > + * @param opts Optional load options; non-zero fields override defaults
> > > + * @return program FD (>= 0) on success; negative error code on failure
> > > + */
> > > +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> > > +
> > > #ifdef __cplusplus
> > > } /* extern "C" */
> > > #endif
> > > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > > index d18fbcea7578..e727a54e373a 100644
> > > --- a/tools/lib/bpf/libbpf.map
> > > +++ b/tools/lib/bpf/libbpf.map
> > > @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> > > bpf_map__set_exclusive_program;
> > > bpf_map__exclusive_program;
> > > bpf_prog_assoc_struct_ops;
> > > + bpf_program__clone;
> > > bpf_program__assoc_struct_ops;
> > > btf__permute;
> > > } LIBBPF_1.6.0;
> > >
> > > --
> > > 2.47.3
> > >
> > >
end of thread, other threads:[~2026-03-16 14:24 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-20 19:18 [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Mykyta Yatsenko
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
2026-02-23 17:59 ` Mykyta Yatsenko
2026-02-23 18:04 ` Emil Tsalapatis
2026-02-24 19:28 ` Eduard Zingerman
2026-02-24 19:32 ` Eduard Zingerman
2026-02-24 20:47 ` Mykyta Yatsenko
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
2026-03-10 0:08 ` Mykyta Yatsenko
2026-03-11 13:35 ` Andrey Grodzovsky
2026-03-11 22:52 ` Andrii Nakryiko
2026-03-16 14:23 ` Andrey Grodzovsky
2026-03-11 23:03 ` Andrii Nakryiko
2026-02-20 19:18 ` [PATCH bpf-next v2 2/2] selftests/bpf: Use bpf_program__clone() in veristat Mykyta Yatsenko
2026-02-23 17:49 ` Emil Tsalapatis
2026-02-23 18:39 ` Mykyta Yatsenko
2026-02-23 18:54 ` Emil Tsalapatis
2026-02-24 2:03 ` Eduard Zingerman
2026-02-24 12:20 ` Mykyta Yatsenko
2026-02-24 19:08 ` Eduard Zingerman
2026-02-24 19:12 ` Mykyta Yatsenko
2026-02-24 19:16 ` Eduard Zingerman
2026-02-20 22:48 ` [PATCH bpf-next v2 0/2] libbpf: Add bpf_program__clone() for individual program loading Alexei Starovoitov
2026-02-23 13:57 ` Mykyta Yatsenko