* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
@ 2026-02-23 17:25 ` Emil Tsalapatis
2026-02-23 17:59 ` Mykyta Yatsenko
2026-02-24 19:28 ` Eduard Zingerman
` (2 subsequent siblings)
3 siblings, 1 reply; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 17:25 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
The code looks in order, one issue below.
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> +
Can we make it so that cloning prepared but not loaded programs does
not load them? The name of the method itself implies the new instance is
identical to the old one, which is not the case - we currently load
the cloned program even if the original is not loaded. I don't see why,
for OBJ_PREPARED programs, this shouldn't be done explicitly by the
caller with bpf_prog_load() instead.

If we do make the cloned program's obj->state identical to the
original's, let's also add a test that checks that.
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index dfc37a615578..0be34852350f 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> */
> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>
> +/**
> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> + * BPF object into the kernel, returning its file descriptor.
> + *
> + * The BPF object must have been previously prepared with
> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> + * overrides the defaults derived from the program/object internals.
> + * If @opts is NULL, all fields are populated automatically.
> + *
> + * The returned FD is owned by the caller and must be closed with close().
> + *
> + * @param prog BPF program from a prepared object
> + * @param opts Optional load options; non-zero fields override defaults
> + * @return program FD (>= 0) on success; negative error code on failure
> + */
> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> +
> #ifdef __cplusplus
> } /* extern "C" */
> #endif
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d18fbcea7578..e727a54e373a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> bpf_map__set_exclusive_program;
> bpf_map__exclusive_program;
> bpf_prog_assoc_struct_ops;
> + bpf_program__clone;
> bpf_program__assoc_struct_ops;
> btf__permute;
> } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-23 17:25 ` Emil Tsalapatis
@ 2026-02-23 17:59 ` Mykyta Yatsenko
2026-02-23 18:04 ` Emil Tsalapatis
0 siblings, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-23 17:59 UTC (permalink / raw)
To: Emil Tsalapatis, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
"Emil Tsalapatis" <emil@etsalapatis.com> writes:
> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>
> The code looks in order, one issue below.
>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>> +
>
> Can we make it so that cloning prepared but not loaded programs does
> not load them? The name of the method itself implies the new instance is
> identical to the old one, which is not the case - we're currently
> loading the cloned program even if the original is not loaded. I don't
> see why, for OBJ_PREPARED programs, this shouldn't be done explicitly by the
> caller with bpf_prog_load() instead.
Makes sense, but there are a few problems: we wouldn't be cloning a
program, but rather its attributes (struct bpf_prog_load_opts); I don't
think we can do true cloning that returns a new struct bpf_program.

So the best we can do is change to something like
bpf_program__clone_attrs() (or bpf_program__load_attrs()), then in
veristat do:

attrs = bpf_program__clone_attrs(prog)
bpf_prog_load(bpf_program__insns(prog), attrs)

Let's see which option the maintainers prefer.
>
> If we do make it so that the cloned program's obj->state is identical to the
> original's let's also add a test that checks that.
>
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> index dfc37a615578..0be34852350f 100644
>> --- a/tools/lib/bpf/libbpf.h
>> +++ b/tools/lib/bpf/libbpf.h
>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>> */
>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>
>> +/**
>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>> + * BPF object into the kernel, returning its file descriptor.
>> + *
>> + * The BPF object must have been previously prepared with
>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>> + * overrides the defaults derived from the program/object internals.
>> + * If @opts is NULL, all fields are populated automatically.
>> + *
>> + * The returned FD is owned by the caller and must be closed with close().
>> + *
>> + * @param prog BPF program from a prepared object
>> + * @param opts Optional load options; non-zero fields override defaults
>> + * @return program FD (>= 0) on success; negative error code on failure
>> + */
>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>> +
>> #ifdef __cplusplus
>> } /* extern "C" */
>> #endif
>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>> index d18fbcea7578..e727a54e373a 100644
>> --- a/tools/lib/bpf/libbpf.map
>> +++ b/tools/lib/bpf/libbpf.map
>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>> bpf_map__set_exclusive_program;
>> bpf_map__exclusive_program;
>> bpf_prog_assoc_struct_ops;
>> + bpf_program__clone;
>> bpf_program__assoc_struct_ops;
>> btf__permute;
>> } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-23 17:59 ` Mykyta Yatsenko
@ 2026-02-23 18:04 ` Emil Tsalapatis
0 siblings, 0 replies; 25+ messages in thread
From: Emil Tsalapatis @ 2026-02-23 18:04 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team,
eddyz87
Cc: Mykyta Yatsenko
On Mon Feb 23, 2026 at 12:59 PM EST, Mykyta Yatsenko wrote:
> "Emil Tsalapatis" <emil@etsalapatis.com> writes:
>
>> On Fri Feb 20, 2026 at 2:18 PM EST, Mykyta Yatsenko wrote:
>>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>>
>>> Add bpf_program__clone() API that loads a single BPF program from a
>>> prepared BPF object into the kernel, returning a file descriptor owned
>>> by the caller.
>>>
>>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>>> load individual programs with custom bpf_prog_load_opts, instead of
>>> loading all programs at once via bpf_object__load(). Non-zero fields in
>>> opts override the defaults derived from the program and object
>>> internals; passing NULL opts populates everything automatically.
>>>
>>> Internally, bpf_program__clone() resolves BTF-based attach targets
>>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>>> func/line info, fd_array, license, and kern_version from the
>>> prepared object before calling bpf_prog_load().
>>>
>>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>>> ---
>>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>>> tools/lib/bpf/libbpf.map | 1 +
>>> 3 files changed, 82 insertions(+)
>>>
>>
>> The code looks in order, one issue below.
>>
>>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>>> index 0c8bf0b5cce4..4b084bda3f47 100644
>>> --- a/tools/lib/bpf/libbpf.c
>>> +++ b/tools/lib/bpf/libbpf.c
>>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>>> return prog->line_info_cnt;
>>> }
>>>
>>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>>> +{
>>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>>> + struct bpf_prog_load_opts *pattr = &attr;
>>> + struct bpf_object *obj;
>>> + int err, fd;
>>> +
>>> + if (!prog)
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + obj = prog->obj;
>>> + if (obj->state < OBJ_PREPARED)
>>> + return libbpf_err(-EINVAL);
>>> +
>>> + /* Copy caller opts, fall back to prog/object defaults */
>>> + OPTS_SET(pattr, expected_attach_type,
>>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>>> + OPTS_SET(pattr, attach_btf_obj_fd,
>>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>>> + if (attr.token_fd)
>>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>>> +
>>> + /* BTF func/line info */
>>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>>> + OPTS_SET(pattr, func_info_cnt,
>>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>>> + OPTS_SET(pattr, func_info_rec_size,
>>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>>> + OPTS_SET(pattr, line_info_cnt,
>>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>>> + OPTS_SET(pattr, line_info_rec_size,
>>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>>> + }
>>> +
>>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>>> +
>>
>> Can we make it so that cloning prepared but not loaded programs does
>> not load them? The name of the method itself implies the new instance is
>> identical to the old one, which is not the case - we're currently
>> loading the cloned program even if the original is not loaded. I don't
>> see why, for OBJ_PREPARED programs, this shouldn't be done explicitly by the
>> caller with bpf_prog_load() instead.
> Makes sense, but there are a few problems: we wouldn't be cloning a
> program, but rather its attributes (struct bpf_prog_load_opts); I don't
> think we can do true cloning that returns a new struct bpf_program.
>
> So the best we can do is change to something like
> bpf_program__clone_attrs() (or bpf_program__load_attrs()), then in
> veristat do:
>
> attrs = bpf_program__clone_attrs(prog)
> bpf_prog_load(bpf_program__insns(prog), attrs)
>
> Let's see which option the maintainers prefer.
I like clone_attrs(), though if we rename it, let's skip the bpf_prog_load()
regardless of whether the original program is already loaded.
>>
>> If we do make it so that the cloned program's obj->state is identical to the
>> original's let's also add a test that checks that.
>>
>>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>>> + if (err)
>>> + return libbpf_err(err);
>>> + }
>>> +
>>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>>> + pattr);
>>> +
>>> + return libbpf_err(fd);
>>> +}
>>> +
>>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>>> .sec = (char *)sec_pfx, \
>>> .prog_type = BPF_PROG_TYPE_##ptype, \
>>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>>> index dfc37a615578..0be34852350f 100644
>>> --- a/tools/lib/bpf/libbpf.h
>>> +++ b/tools/lib/bpf/libbpf.h
>>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>>> */
>>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>>
>>> +/**
>>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>>> + * BPF object into the kernel, returning its file descriptor.
>>> + *
>>> + * The BPF object must have been previously prepared with
>>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>>> + * overrides the defaults derived from the program/object internals.
>>> + * If @opts is NULL, all fields are populated automatically.
>>> + *
>>> + * The returned FD is owned by the caller and must be closed with close().
>>> + *
>>> + * @param prog BPF program from a prepared object
>>> + * @param opts Optional load options; non-zero fields override defaults
>>> + * @return program FD (>= 0) on success; negative error code on failure
>>> + */
>>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>>> +
>>> #ifdef __cplusplus
>>> } /* extern "C" */
>>> #endif
>>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>>> index d18fbcea7578..e727a54e373a 100644
>>> --- a/tools/lib/bpf/libbpf.map
>>> +++ b/tools/lib/bpf/libbpf.map
>>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>>> bpf_map__set_exclusive_program;
>>> bpf_map__exclusive_program;
>>> bpf_prog_assoc_struct_ops;
>>> + bpf_program__clone;
>>> bpf_program__assoc_struct_ops;
>>> btf__permute;
>>> } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
@ 2026-02-24 19:28 ` Eduard Zingerman
2026-02-24 19:32 ` Eduard Zingerman
2026-02-24 20:47 ` Mykyta Yatsenko
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 2 replies; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:28 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
It seems 'fd_array_cnt' is not copied, should it be?
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
Just curious: why did you decide not to inherit logging properties from
the original program?
Unless overridden, the original program would point to the buffer
specified for the object in bpf_object_open_opts->kernel_log_buf, right?
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-24 19:28 ` Eduard Zingerman
@ 2026-02-24 19:32 ` Eduard Zingerman
2026-02-24 20:47 ` Mykyta Yatsenko
1 sibling, 0 replies; 25+ messages in thread
From: Eduard Zingerman @ 2026-02-24 19:32 UTC (permalink / raw)
To: Mykyta Yatsenko, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On Tue, 2026-02-24 at 11:28 -0800, Eduard Zingerman wrote:
[...]
> > + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> > + if (attr.token_fd)
> > + attr.prog_flags |= BPF_F_TOKEN_FD;
>
> Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
Nope, bpf_object_load_prog() does it the same way you do here.
Sorry for the noise.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-24 19:28 ` Eduard Zingerman
2026-02-24 19:32 ` Eduard Zingerman
@ 2026-02-24 20:47 ` Mykyta Yatsenko
1 sibling, 0 replies; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-02-24 20:47 UTC (permalink / raw)
To: Eduard Zingerman, bpf, ast, andrii, daniel, kafai, kernel-team
Cc: Mykyta Yatsenko
On 2/24/26 19:28, Eduard Zingerman wrote:
> On Fri, 2026-02-20 at 11:18 -0800, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> It seems 'fd_array_cnt' is not copied, should it be?
>
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
> Nit: should this be 'if (OPTS_GET(opts, token_fd, 0) && attr.token_fd)' ?
>
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> Just curious: why did you decide not to inherit logging properties from
> the original program?
> Unless overridden, the original program would point to the buffer
> specified for the object in bpf_object_open_opts->kernel_log_buf, right?
Inheriting the object's log_buf here would mean
writing into a shared mutable buffer, which does not seem like
a good idea; I don't see a scenario where that would be useful.
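For callers that do want verifier output for a specific clone, a private
per-call buffer can go through opts. A minimal sketch against the API
proposed in this patch (it only compiles with the patch applied, and
'prog' is assumed to come from an already-prepared object):

```c
/* Hedged sketch: bpf_program__clone() is the API proposed in this
 * patch and does not exist in released libbpf; 'prog' is assumed to
 * belong to an object processed by bpf_object__prepare(). */
char log_buf[64 * 1024];

LIBBPF_OPTS(bpf_prog_load_opts, opts,
	.log_buf = log_buf,
	.log_size = sizeof(log_buf),
	.log_level = 1,		/* basic verifier log */
);

int fd = bpf_program__clone(prog, &opts);
if (fd < 0)
	fprintf(stderr, "clone failed (%d), verifier log:\n%s\n",
		fd, log_buf);
```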
>
>> +
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
2026-02-23 17:25 ` Emil Tsalapatis
2026-02-24 19:28 ` Eduard Zingerman
@ 2026-03-06 17:22 ` Andrey Grodzovsky
2026-03-10 0:08 ` Mykyta Yatsenko
2026-03-11 22:52 ` Andrii Nakryiko
2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 2 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-06 17:22 UTC (permalink / raw)
To: Mykyta Yatsenko, Andrii Nakryiko
Cc: bpf, ast, daniel, kernel-team, DL Linux Open Source Team
Hi Mykyta and Andrii!
We're evaluating the bpf_object__prepare() +
bpf_program__clone() API for use in a production BPF
application that manages hundreds of BPF programs with
selective (dynamic) loading — some programs are loaded at
startup, others loaded/unloaded at runtime based on feature
configuration.
We have a few questions about the intended usage and
potential extensions of this API:
1. Compatibility with bpf_object__load() and object state
After bpf_object__prepare(), the object is in OBJ_PREPARED
state. Several libbpf APIs (e.g., bpf_program__set_type())
gate on OBJ_LOADED state.
Is there a recommended way to transition the object to
OBJ_LOADED after cloning all desired programs? For example,
would a bpf_object__finalize() or similar API that runs
post_load_cleanup() and sets OBJ_LOADED be in scope? This
would allow users to benefit from prepare() + clone() for
selective loading while keeping the object in a state that
the rest of libbpf expects. Or, is the new API not intended
to work with bpf_object in the first place?
2. Storing the clone FD back on struct bpf_program
bpf_program__clone() returns a caller-owned FD, but APIs
like bpf_program__attach() read prog->fd internally.
Without a way to set the FD back on the program struct, the
caller must reimplement attach logic (section-type dispatch
for kprobe, fentry, raw_tp, etc.).
Would a bpf_program__set_fd() setter (similar to the
existing btf__set_fd()) be acceptable to store the clone FD
back, making bpf_program__attach() and related APIs usable
with cloned programs?
3. Use case: selective program loading from a single BPF
object
Our use case involves a single large BPF object (skeleton)
with hundreds of programs where a subset is loaded at
startup and others are loaded/unloaded dynamically based on
runtime configuration. The current approach requires either:
- Loading all programs upfront (wasteful), or
- Maintaining out-of-tree patches to libbpf for selective
loading
Last year we made an attempt to upstream our solution to
this use case to libbpf[1] but Andrii pointed out how our
approach was problematic for upstream. He then proposed
splitting bpf_object__load() into two steps:
bpf_object__prepare() (creates maps, loads BTF, does
relocations, produces final program instructions) and then
bpf_object__load(). We are trying to follow up on his
input and become more upstream compliant.
The prepare() + clone() API seems similar to this,
but the questions above about object state and FD ownership
are the main gaps for production adoption. Are there plans
to address these in future revisions, or is this
intentionally scoped to testing/tooling use cases only?
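For concreteness, the usage we have in mind looks roughly like this
(object and program names are made up; assumes the bpf_program__clone()
API from this patch, error paths abbreviated):

```c
/* Rough sketch of our intended selective-loading flow; names are
 * hypothetical and bpf_program__clone() is the API from this patch. */
struct bpf_object *obj = bpf_object__open("sensors.bpf.o");
if (!obj)
	return -errno;
if (bpf_object__prepare(obj))
	return -errno;	/* maps created, relocs done, progs not loaded */

/* Load only the programs enabled by the current feature config. */
struct bpf_program *prog =
	bpf_object__find_program_by_name(obj, "on_exec");
int fd = bpf_program__clone(prog, NULL);	/* NULL opts: defaults */
if (fd < 0)
	return fd;

/* ... attach using fd; later, close(fd) lets the kernel unload the
 * program once its last reference (links, pins) is gone ... */
```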
Thanks,
Andrey
[1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> + pattr);
> +
> + return libbpf_err(fd);
> +}
> +
> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> .sec = (char *)sec_pfx, \
> .prog_type = BPF_PROG_TYPE_##ptype, \
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index dfc37a615578..0be34852350f 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> */
> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>
> +/**
> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> + * BPF object into the kernel, returning its file descriptor.
> + *
> + * The BPF object must have been previously prepared with
> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> + * overrides the defaults derived from the program/object internals.
> + * If @opts is NULL, all fields are populated automatically.
> + *
> + * The returned FD is owned by the caller and must be closed with close().
> + *
> + * @param prog BPF program from a prepared object
> + * @param opts Optional load options; non-zero fields override defaults
> + * @return program FD (>= 0) on success; negative error code on failure
> + */
> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> +
> #ifdef __cplusplus
> } /* extern "C" */
> #endif
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d18fbcea7578..e727a54e373a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> bpf_map__set_exclusive_program;
> bpf_map__exclusive_program;
> bpf_prog_assoc_struct_ops;
> + bpf_program__clone;
> bpf_program__assoc_struct_ops;
> btf__permute;
> } LIBBPF_1.6.0;
>
> --
> 2.47.3
>
>
^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
@ 2026-03-10 0:08 ` Mykyta Yatsenko
2026-03-11 13:35 ` Andrey Grodzovsky
2026-03-11 22:52 ` Andrii Nakryiko
1 sibling, 1 reply; 25+ messages in thread
From: Mykyta Yatsenko @ 2026-03-10 0:08 UTC (permalink / raw)
To: Andrey Grodzovsky, Andrii Nakryiko
Cc: bpf, ast, daniel, kernel-team, DL Linux Open Source Team
Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com> writes:
Hi,
Thanks for reaching out. I'm giving my own opinion here; I have not
discussed this with Andrii in depth.
bpf_object__finalize() - you probably do not need this; the example you
mention, bpf_program__set_type(), actually rejects calls when the object
is in LOADED state (you can't mutate a loaded program).
To make your dynamic loading/unloading work, you need to keep the
object in the PREPARED state indefinitely. Since clone() uses some of
the fields that are destroyed by post_load_cleanup(), calling
bpf_object__load() in this setup may be unsafe (leaked FDs, etc).
We have a precedent with bpf_map__reuse_fd(), so bpf_program__set_fd()
does not seem too extreme to me. But it would change some lifecycle
invariants, as we'd have a loaded program inside a prepared object; I'm
not 100% sure that is a problem right now, but it could break
something.
Also a small detail, in case you need it: bpf_program__clone() does not
support PROG_ARRAY maps (see the cover letter for details).
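To spell out the lifecycle I mean (illustrative sketch assuming this
patch; error handling and attach details omitted):

```c
/* The object stays PREPARED forever; bpf_object__load() is never
 * called on it. Each clone FD is the program's lifetime: close()
 * "unloads" it once the kernel drops the last reference (links,
 * pins, etc.). */
int fd = bpf_program__clone(prog, NULL);	/* feature turned on */
/* ... create attach links from fd ... */

close(fd);					/* feature turned off */

fd = bpf_program__clone(prog, NULL);		/* feature on again */
```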
> Mykyta and Andrii Hi!
>
> We're evaluating the bpf_object__prepare() +
> bpf_program__clone() API for use in a production BPF
> application that manages hundreds of BPF programs with
> selective (dynamic) loading — some programs are loaded at
> startup, others loaded/unloaded at runtime based on feature
> configuration.
>
> We have a few questions about the intended usage and
> potential extensions of this API:
>
> 1. Compatibility with bpf_object__load() and object state
>
> After bpf_object__prepare(), the object is in OBJ_PREPARED
> state. Several libbpf APIs (e.g., bpf_program__set_type())
> gate on OBJ_LOADED state.
>
> Is there a recommended way to transition the object to
> OBJ_LOADED after cloning all desired programs? For example,
> would a bpf_object__finalize() or similar API that runs
> post_load_cleanup() and sets OBJ_LOADED be in scope? This
> would allow users to benefit from prepare() + clone() for
> selective loading while keeping the object in a state that
> the rest of libbpf expects. Or, is the new API not intended
> to work with bpf_object in the first place ?
>
> 2. Storing the clone FD back on struct bpf_program
>
> bpf_program__clone() returns a caller-owned FD, but APIs
> like bpf_program__attach() read prog->fd internally.
> Without a way to set the FD back on the program struct, the
> caller must reimplement attach logic (section-type dispatch
> for kprobe, fentry, raw_tp, etc.).
>
> Would a bpf_program__set_fd() setter (similar to the
> existing btf__set_fd()) be acceptable to store the clone FD
> back, making bpf_program__attach() and related APIs usable
> with cloned programs?
>
> 3. Use case: selective program loading from a single BPF
> object
>
> Our use case involves a single large BPF object (skeleton)
> with hundreds of programs where a subset is loaded at
> startup and others are loaded/unloaded dynamically based on
> runtime configuration. The current approach requires either:
> - Loading all programs upfront (wasteful), or
> - Maintaining out-of-tree patches to libbpf for selective
> loading
>
> Last year we made an attempt to upstream our solution to
> this use case to libbpf[1] but Andrii pointed out how our
> approach was problematic for upstream. He then proposed
> splitting bpf_object__load() into two steps:
> bpf_object__prepare() (creates maps, loads BTF, does
> relocations, produces final program instructions) and then
> bpf_object__load(). We are trying to follow up on his
> input and become more upstream compliant.
>
> The prepare() + clone() API seems similar to this,
> but the questions above about object state and FD ownership
> are the main gaps for production adoption. Are there plans
> to address these in future revisions, or is this
> intentionally scoped to testing/tooling use cases only?
>
> Thanks,
> Andrey
>
> [1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
>
> On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Add bpf_program__clone() API that loads a single BPF program from a
>> prepared BPF object into the kernel, returning a file descriptor owned
>> by the caller.
>>
>> After bpf_object__prepare(), callers can use bpf_program__clone() to
>> load individual programs with custom bpf_prog_load_opts, instead of
>> loading all programs at once via bpf_object__load(). Non-zero fields in
>> opts override the defaults derived from the program and object
>> internals; passing NULL opts populates everything automatically.
>>
>> Internally, bpf_program__clone() resolves BTF-based attach targets
>> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
>> func/line info, fd_array, license, and kern_version from the
>> prepared object before calling bpf_prog_load().
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
>> tools/lib/bpf/libbpf.h | 17 +++++++++++++
>> tools/lib/bpf/libbpf.map | 1 +
>> 3 files changed, 82 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 0c8bf0b5cce4..4b084bda3f47 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
>> return prog->line_info_cnt;
>> }
>>
>> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
>> +{
>> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
>> + struct bpf_prog_load_opts *pattr = &attr;
>> + struct bpf_object *obj;
>> + int err, fd;
>> +
>> + if (!prog)
>> + return libbpf_err(-EINVAL);
>> +
>> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
>> + return libbpf_err(-EINVAL);
>> +
>> + obj = prog->obj;
>> + if (obj->state < OBJ_PREPARED)
>> + return libbpf_err(-EINVAL);
>> +
>> + /* Copy caller opts, fall back to prog/object defaults */
>> + OPTS_SET(pattr, expected_attach_type,
>> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
>> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
>> + OPTS_SET(pattr, attach_btf_obj_fd,
>> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
>> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
>> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
>> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
>> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
>> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
>> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
>> + if (attr.token_fd)
>> + attr.prog_flags |= BPF_F_TOKEN_FD;
>> +
>> + /* BTF func/line info */
>> + if (obj->btf && btf__fd(obj->btf) >= 0) {
>> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
>> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
>> + OPTS_SET(pattr, func_info_cnt,
>> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
>> + OPTS_SET(pattr, func_info_rec_size,
>> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
>> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
>> + OPTS_SET(pattr, line_info_cnt,
>> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
>> + OPTS_SET(pattr, line_info_rec_size,
>> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
>> + }
>> +
>> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
>> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
>> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
>> +
>> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
>> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
>> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
>> + if (err)
>> + return libbpf_err(err);
>> + }
>> +
>> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
>> + pattr);
>> +
>> + return libbpf_err(fd);
>> +}
>> +
>> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
>> .sec = (char *)sec_pfx, \
>> .prog_type = BPF_PROG_TYPE_##ptype, \
>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> index dfc37a615578..0be34852350f 100644
>> --- a/tools/lib/bpf/libbpf.h
>> +++ b/tools/lib/bpf/libbpf.h
>> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
>> */
>> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
>>
>> +/**
>> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
>> + * BPF object into the kernel, returning its file descriptor.
>> + *
>> + * The BPF object must have been previously prepared with
>> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
>> + * overrides the defaults derived from the program/object internals.
>> + * If @opts is NULL, all fields are populated automatically.
>> + *
>> + * The returned FD is owned by the caller and must be closed with close().
>> + *
>> + * @param prog BPF program from a prepared object
>> + * @param opts Optional load options; non-zero fields override defaults
>> + * @return program FD (>= 0) on success; negative error code on failure
>> + */
>> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
>> +
>> #ifdef __cplusplus
>> } /* extern "C" */
>> #endif
>> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
>> index d18fbcea7578..e727a54e373a 100644
>> --- a/tools/lib/bpf/libbpf.map
>> +++ b/tools/lib/bpf/libbpf.map
>> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
>> bpf_map__set_exclusive_program;
>> bpf_map__exclusive_program;
>> bpf_prog_assoc_struct_ops;
>> + bpf_program__clone;
>> bpf_program__assoc_struct_ops;
>> btf__permute;
>> } LIBBPF_1.6.0;
>>
>> --
>> 2.47.3
>>
>>
^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-10 0:08 ` Mykyta Yatsenko
@ 2026-03-11 13:35 ` Andrey Grodzovsky
0 siblings, 0 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-11 13:35 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
Thanks for the reply and all the clarifications! We are looking forward
to this patchset being merged so we can try to integrate it into our
dynamic loading solutions.
We will reach out with further questions down the road as we take a deeper
look into this.
Andrey
On Mon, Mar 9, 2026 at 8:08 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com> writes:
>
> Hi,
> Thanks for reaching out, I'm providing my own opinion on this, I did not
> discuss this with Andrii in depth.
>
> bpf_object__finalize() - you probably do not need this, the mentioned
> example of bpf_program__set_type() actually rejects when object is in
> LOADED state (you can't mutate loaded program).
>
> To make your dynamic loading/unloading work, you need to keep your
> object in the PREPARED state indefinitely. Since clone() uses some of
> the fields that are destroyed by post_load_cleanup(), calling
> bpf_object__load() in this setup may be unsafe (leaking fds, etc).
>
> We have a precedent with bpf_map__reuse_fd(), bpf_program__set_fd() does
> not seem too extreme to me, but it seems like some lifecycle invariants
> are changing, as we'll have loaded program in prepared object, which I'm
> not 100% sure is a problem right now, but possibly going to break
> something.
>
> Also a small detail: bpf_program__clone() does not support PROG_ARRAY
> maps (just in case you need that) (see cover letter for details).
>
> > Mykyta and Andrii Hi!
> >
> > We're evaluating the bpf_object__prepare() +
> > bpf_program__clone() API for use in a production BPF
> > application that manages hundreds of BPF programs with
> > selective (dynamic) loading — some programs are loaded at
> > startup, others loaded/unloaded at runtime based on feature
> > configuration.
> >
> > We have a few questions about the intended usage and
> > potential extensions of this API:
> >
> > 1. Compatibility with bpf_object__load() and object state
> >
> > After bpf_object__prepare(), the object is in OBJ_PREPARED
> > state. Several libbpf APIs (e.g., bpf_program__set_type())
> > gate on OBJ_LOADED state.
> >
> > Is there a recommended way to transition the object to
> > OBJ_LOADED after cloning all desired programs? For example,
> > would a bpf_object__finalize() or similar API that runs
> > post_load_cleanup() and sets OBJ_LOADED be in scope? This
> > would allow users to benefit from prepare() + clone() for
> > selective loading while keeping the object in a state that
> > the rest of libbpf expects. Or, is the new API not intended
> > to work with bpf_object in the first place ?
> >
> > 2. Storing the clone FD back on struct bpf_program
> >
> > bpf_program__clone() returns a caller-owned FD, but APIs
> > like bpf_program__attach() read prog->fd internally.
> > Without a way to set the FD back on the program struct, the
> > caller must reimplement attach logic (section-type dispatch
> > for kprobe, fentry, raw_tp, etc.).
> >
> > Would a bpf_program__set_fd() setter (similar to the
> > existing btf__set_fd()) be acceptable to store the clone FD
> > back, making bpf_program__attach() and related APIs usable
> > with cloned programs?
> >
> > 3. Use case: selective program loading from a single BPF
> > object
> >
> > Our use case involves a single large BPF object (skeleton)
> > with hundreds of programs where a subset is loaded at
> > startup and others are loaded/unloaded dynamically based on
> > runtime configuration. The current approach requires either:
> > - Loading all programs upfront (wasteful), or
> > - Maintaining out-of-tree patches to libbpf for selective
> > loading
> >
> > Last year we made an attempt to upstream our solution to
> > this use case to libbpf[1] but Andrii pointed out how our
> > approach was problematic for upstream. He then proposed
> > splitting bpf_object__load() into two steps:
> > bpf_object__prepare() (creates maps, loads BTF, does
> > relocations, produces final program instructions) and then
> > bpf_object__load(). We are trying to follow up on his
> > input and become more upstream compliant.
> >
> > The prepare() + clone() API seems similar to this,
> > but the questions above about object state and FD ownership
> > are the main gaps for production adoption. Are there plans
> > to address these in future revisions, or is this
> > intentionally scoped to testing/tooling use cases only?
> >
> > Thanks,
> > Andrey
> >
> > [1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
> >
> > On Fri, Feb 20, 2026 at 2:18 PM Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> >>
> >> From: Mykyta Yatsenko <yatsenko@meta.com>
> >>
> >> Add bpf_program__clone() API that loads a single BPF program from a
> >> prepared BPF object into the kernel, returning a file descriptor owned
> >> by the caller.
> >>
> >> After bpf_object__prepare(), callers can use bpf_program__clone() to
> >> load individual programs with custom bpf_prog_load_opts, instead of
> >> loading all programs at once via bpf_object__load(). Non-zero fields in
> >> opts override the defaults derived from the program and object
> >> internals; passing NULL opts populates everything automatically.
> >>
> >> Internally, bpf_program__clone() resolves BTF-based attach targets
> >> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> >> func/line info, fd_array, license, and kern_version from the
> >> prepared object before calling bpf_prog_load().
> >>
> >> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> >> ---
> >> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> >> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> >> tools/lib/bpf/libbpf.map | 1 +
> >> 3 files changed, 82 insertions(+)
> >>
> >> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> >> index 0c8bf0b5cce4..4b084bda3f47 100644
> >> --- a/tools/lib/bpf/libbpf.c
> >> +++ b/tools/lib/bpf/libbpf.c
> >> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> >> return prog->line_info_cnt;
> >> }
> >>
> >> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> >> +{
> >> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> >> + struct bpf_prog_load_opts *pattr = &attr;
> >> + struct bpf_object *obj;
> >> + int err, fd;
> >> +
> >> + if (!prog)
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + obj = prog->obj;
> >> + if (obj->state < OBJ_PREPARED)
> >> + return libbpf_err(-EINVAL);
> >> +
> >> + /* Copy caller opts, fall back to prog/object defaults */
> >> + OPTS_SET(pattr, expected_attach_type,
> >> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
> >> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> >> + OPTS_SET(pattr, attach_btf_obj_fd,
> >> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> >> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> >> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> >> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> >> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> >> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> >> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> >> + if (attr.token_fd)
> >> + attr.prog_flags |= BPF_F_TOKEN_FD;
> >> +
> >> + /* BTF func/line info */
> >> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> >> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> >> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> >> + OPTS_SET(pattr, func_info_cnt,
> >> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> >> + OPTS_SET(pattr, func_info_rec_size,
> >> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> >> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> >> + OPTS_SET(pattr, line_info_cnt,
> >> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> >> + OPTS_SET(pattr, line_info_rec_size,
> >> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> >> + }
> >> +
> >> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> >> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> >> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
> >> +
> >> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> >> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> >> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> >> + if (err)
> >> + return libbpf_err(err);
> >> + }
> >> +
> >> + fd = bpf_prog_load(prog->type, prog->name, obj->license, prog->insns, prog->insns_cnt,
> >> + pattr);
> >> +
> >> + return libbpf_err(fd);
> >> +}
> >> +
> >> #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
> >> .sec = (char *)sec_pfx, \
> >> .prog_type = BPF_PROG_TYPE_##ptype, \
> >> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> >> index dfc37a615578..0be34852350f 100644
> >> --- a/tools/lib/bpf/libbpf.h
> >> +++ b/tools/lib/bpf/libbpf.h
> >> @@ -2021,6 +2021,23 @@ LIBBPF_API int libbpf_register_prog_handler(const char *sec,
> >> */
> >> LIBBPF_API int libbpf_unregister_prog_handler(int handler_id);
> >>
> >> +/**
> >> + * @brief **bpf_program__clone()** loads a single BPF program from a prepared
> >> + * BPF object into the kernel, returning its file descriptor.
> >> + *
> >> + * The BPF object must have been previously prepared with
> >> + * **bpf_object__prepare()**. If @opts is provided, any non-zero field
> >> + * overrides the defaults derived from the program/object internals.
> >> + * If @opts is NULL, all fields are populated automatically.
> >> + *
> >> + * The returned FD is owned by the caller and must be closed with close().
> >> + *
> >> + * @param prog BPF program from a prepared object
> >> + * @param opts Optional load options; non-zero fields override defaults
> >> + * @return program FD (>= 0) on success; negative error code on failure
> >> + */
> >> +LIBBPF_API int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts);
> >> +
> >> #ifdef __cplusplus
> >> } /* extern "C" */
> >> #endif
> >> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> >> index d18fbcea7578..e727a54e373a 100644
> >> --- a/tools/lib/bpf/libbpf.map
> >> +++ b/tools/lib/bpf/libbpf.map
> >> @@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
> >> bpf_map__set_exclusive_program;
> >> bpf_map__exclusive_program;
> >> bpf_prog_assoc_struct_ops;
> >> + bpf_program__clone;
> >> bpf_program__assoc_struct_ops;
> >> btf__permute;
> >> } LIBBPF_1.6.0;
> >>
> >> --
> >> 2.47.3
> >>
> >>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
2026-03-10 0:08 ` Mykyta Yatsenko
@ 2026-03-11 22:52 ` Andrii Nakryiko
2026-03-16 14:23 ` Andrey Grodzovsky
1 sibling, 1 reply; 25+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 22:52 UTC (permalink / raw)
To: Andrey Grodzovsky
Cc: Mykyta Yatsenko, Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
On Fri, Mar 6, 2026 at 9:22 AM Andrey Grodzovsky
<andrey.grodzovsky@crowdstrike.com> wrote:
>
> Hi Mykyta and Andrii!
>
> We're evaluating the bpf_object__prepare() +
> bpf_program__clone() API for use in a production BPF
> application that manages hundreds of BPF programs with
> selective (dynamic) loading — some programs are loaded at
> startup, others loaded/unloaded at runtime based on feature
> configuration.
>
> We have a few questions about the intended usage and
> potential extensions of this API:
>
> 1. Compatibility with bpf_object__load() and object state
>
> After bpf_object__prepare(), the object is in OBJ_PREPARED
> state. Several libbpf APIs (e.g., bpf_program__set_type())
> gate on OBJ_LOADED state.
>
> Is there a recommended way to transition the object to
> OBJ_LOADED after cloning all desired programs? For example,
> would a bpf_object__finalize() or similar API that runs
> post_load_cleanup() and sets OBJ_LOADED be in scope? This
> would allow users to benefit from prepare() + clone() for
> selective loading while keeping the object in a state that
> the rest of libbpf expects. Or, is the new API not intended
> to work with bpf_object in the first place ?
exactly, it's not. It's an escape hatch out of bpf_object into
low-level FD, it was never designed to produce something that should
be put back into bpf_object. This clone stuff is for generic low-level
tooling like veristat and/or maybe bpftool's generic program loading.
And another one is cloning the same fentry program to be attached into
multiple uniform targets (I do a similar hack in retsnoop, for
instance).
In none of those cases are cloned FDs meant to be interoperable with
bpf_object/bpf_program abstractions.
>
> 2. Storing the clone FD back on struct bpf_program
>
> bpf_program__clone() returns a caller-owned FD, but APIs
> like bpf_program__attach() read prog->fd internally.
> Without a way to set the FD back on the program struct, the
> caller must reimplement attach logic (section-type dispatch
> for kprobe, fentry, raw_tp, etc.).
>
> Would a bpf_program__set_fd() setter (similar to the
> existing btf__set_fd()) be acceptable to store the clone FD
> back, making bpf_program__attach() and related APIs usable
> with cloned programs?
technically this could be done, probably, but it just feels too dirty,
tbh... there is so much program-specific information that libbpf
internally preserves (and gives access to most of it through
bpf_program's getter) that would need to be invalidated and/or
re-fetched with this set_fd() approach, that I don't really even want
to consider this too seriously... but see below
>
> 3. Use case: selective program loading from a single BPF
> object
>
> Our use case involves a single large BPF object (skeleton)
> with hundreds of programs where a subset is loaded at
> startup and others are loaded/unloaded dynamically based on
> runtime configuration. The current approach requires either:
> - Loading all programs upfront (wasteful), or
> - Maintaining out-of-tree patches to libbpf for selective
> loading
>
> Last year we made an attempt to upstream our solution to
> this use case to libbpf[1] but Andrii pointed out how our
> approach was problematic for upstream. He then proposed
> splitting bpf_object__load() into two steps:
> bpf_object__prepare() (creates maps, loads BTF, does
> relocations, produces final program instructions) and then
> bpf_object__load(). We are trying to follow up on his
> input and become more upstream compliant.
>
> The prepare() + clone() API seems similar to this,
> but the questions above about object state and FD ownership
> are the main gaps for production adoption. Are there plans
> to address these in future revisions, or is this
> intentionally scoped to testing/tooling use cases only?
I remember your use case. I don't think clone is really a great fit
*if* you still want to stay at bpf_object/bpf skeleton high-level of
API (i.e., if you want to use bpf_program__attach() APIs and BPF
links).
While definitely a complication, I think we can add support for
loading BPF program after bpf_object__load() happened. You'd have to
keep your optional programs as non-autoloaded (or
bpf_program__set_autoload(false) explicitly), and I'm thinking we
might want to make this behavior opt-in explicitly through
bpf_object_open_opts(), as there are various points in bpf_object
lifetime where we make some decisions with the assumption that
programs will never be loaded, so we'll need to explicitly indicate
that *all* programs would need to be considered loadable, but maybe
much later.
Another thing that won't (or rather might not) work is declarative
prog_array initialization and struct_ops. Those two steps happen in
bpf_object__load() after all programs are loaded. I don't think that
is the problem for you, but I just want to point out that program
loading is not always the last step.
But other than that, despite added complications, it's probably better
to just allow loading programs lazily after bpf_object__load(), after
all.
>
> Thanks,
> Andrey
>
> > [1] - https://lore.kernel.org/all/20250122215206.59859-1-slava.imameev@crowdstrike.com/t/#m93ec917b3dfe3115be2a4b6439e2c649c791686d
>
^ permalink raw reply [flat|nested] 25+ messages in thread

* Re: [External] [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-03-11 22:52 ` Andrii Nakryiko
@ 2026-03-16 14:23 ` Andrey Grodzovsky
0 siblings, 0 replies; 25+ messages in thread
From: Andrey Grodzovsky @ 2026-03-16 14:23 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Mykyta Yatsenko, Andrii Nakryiko, bpf, ast, daniel, kernel-team,
DL Linux Open Source Team
On Wed, Mar 11, 2026 at 6:52 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 9:22 AM Andrey Grodzovsky
> <andrey.grodzovsky@crowdstrike.com> wrote:
> >
> > Hi Mykyta and Andrii!
> >
> > We're evaluating the bpf_object__prepare() +
> > bpf_program__clone() API for use in a production BPF
> > application that manages hundreds of BPF programs with
> > selective (dynamic) loading — some programs are loaded at
> > startup, others loaded/unloaded at runtime based on feature
> > configuration.
> >
> > We have a few questions about the intended usage and
> > potential extensions of this API:
> >
> > 1. Compatibility with bpf_object__load() and object state
> >
> > After bpf_object__prepare(), the object is in OBJ_PREPARED
> > state. Several libbpf APIs (e.g., bpf_program__set_type())
> > gate on OBJ_LOADED state.
> >
> > Is there a recommended way to transition the object to
> > OBJ_LOADED after cloning all desired programs? For example,
> > would a bpf_object__finalize() or similar API that runs
> > post_load_cleanup() and sets OBJ_LOADED be in scope? This
> > would allow users to benefit from prepare() + clone() for
> > selective loading while keeping the object in a state that
> > the rest of libbpf expects. Or, is the new API not intended
> > to work with bpf_object in the first place ?
>
> exactly, it's not. It's an escape hatch out of bpf_object into
> low-level FD, it was never designed to produce something that should
> be put back into bpf_object. This clone stuff is for generic low-level
> tooling like veristat and/or maybe bpftool's generic program loading.
> And another one is cloning the same fentry program to be attached into
> multiple uniform targets (I do a similar hack in retsnoop, for
> instance).
>
> In none of those cases are cloned FDs meant to be interoperable with
> bpf_object/bpf_program abstractions.
>
> >
> > 2. Storing the clone FD back on struct bpf_program
> >
> > bpf_program__clone() returns a caller-owned FD, but APIs
> > like bpf_program__attach() read prog->fd internally.
> > Without a way to set the FD back on the program struct, the
> > caller must reimplement attach logic (section-type dispatch
> > for kprobe, fentry, raw_tp, etc.).
> >
> > Would a bpf_program__set_fd() setter (similar to the
> > existing btf__set_fd()) be acceptable to store the clone FD
> > back, making bpf_program__attach() and related APIs usable
> > with cloned programs?
>
> technically this could be done, probably, but it just feels too dirty,
> tbh... there is so much program-specific information that libbpf
> internally preserves (and gives access to most of it through
> bpf_program's getter) that would need to be invalidated and/or
> re-fetched with this set_fd() approach, that I don't really even want
> to consider this too seriously... but see below
>
> >
> > 3. Use case: selective program loading from a single BPF
> > object
> >
> > Our use case involves a single large BPF object (skeleton)
> > with hundreds of programs where a subset is loaded at
> > startup and others are loaded/unloaded dynamically based on
> > runtime configuration. The current approach requires either:
> > - Loading all programs upfront (wasteful), or
> > - Maintaining out-of-tree patches to libbpf for selective
> > loading
> >
> > Last year we made an attempt to upstream our solution to
> > this use case to libbpf[1] but Andrii pointed out how our
> > approach was problematic for upstream. He then proposed
> > splitting bpf_object__load() into two steps:
> > bpf_object__prepare() (creates maps, loads BTF, does
> > relocations, produces final program instructions) and then
> > bpf_object__load(). We are trying to follow up on his
> > input and become more upstream compliant.
> >
> > The prepare() + clone() API seems similar to this,
> > but the questions above about object state and FD ownership
> > are the main gaps for production adoption. Are there plans
> > to address these in future revisions, or is this
> > intentionally scoped to testing/tooling use cases only?
>
> I remember your use case. I don't think clone is really a great fit
> *if* you still want to stay at bpf_object/bpf skeleton high-level of
> API (i.e., if you want to use bpf_program__attach() APIs and BPF
> links).
>
> While definitely a complication, I think we can add support for
> loading BPF program after bpf_object__load() happened. You'd have to
> keep your optional programs as non-autoloaded (or
> bpf_program__set_autoload(false) explicitly), and I'm thinking we
> might want to make this behavior opt-in explicitly through
> bpf_object_open_opts(), as there are various points in bpf_object
> lifetime where we make some decisions with the assumption that
> programs will never be loaded, so we'll need to explicitly indicate
> that *all* programs would need to be considered loadable, but maybe
> much later.
>
> Another thing that won't (or rather might not) work is declarative
> prog_array initialization and struct_ops. Those two steps happen in
> bpf_object__load() after all programs are loaded. I don't think that
> is the problem for you, but I just want to point out that program
> loading is not always the last step.
>
> But other than that, despite added complications, it's probably better
> to just allow loading programs lazily after bpf_object__load(), after
> all.
Thanks for the detailed info! We can start working on this ourselves
once we have some available time; we hope for your guidance during the
process.
Andrey
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone()
2026-02-20 19:18 ` [PATCH bpf-next v2 1/2] libbpf: Introduce bpf_program__clone() Mykyta Yatsenko
` (2 preceding siblings ...)
2026-03-06 17:22 ` [External] " Andrey Grodzovsky
@ 2026-03-11 23:03 ` Andrii Nakryiko
3 siblings, 0 replies; 25+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 23:03 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
On Fri, Feb 20, 2026 at 11:18 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add bpf_program__clone() API that loads a single BPF program from a
> prepared BPF object into the kernel, returning a file descriptor owned
> by the caller.
>
> After bpf_object__prepare(), callers can use bpf_program__clone() to
> load individual programs with custom bpf_prog_load_opts, instead of
> loading all programs at once via bpf_object__load(). Non-zero fields in
> opts override the defaults derived from the program and object
> internals; passing NULL opts populates everything automatically.
>
> Internally, bpf_program__clone() resolves BTF-based attach targets
> (attach_btf_id, attach_btf_obj_fd) and the sleepable flag, fills
> func/line info, fd_array, license, and kern_version from the
> prepared object before calling bpf_prog_load().
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> tools/lib/bpf/libbpf.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 17 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 3 files changed, 82 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5cce4..4b084bda3f47 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -9793,6 +9793,70 @@ __u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
> return prog->line_info_cnt;
> }
>
> +int bpf_program__clone(struct bpf_program *prog, const struct bpf_prog_load_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_prog_load_opts, attr);
> + struct bpf_prog_load_opts *pattr = &attr;
> + struct bpf_object *obj;
> + int err, fd;
> +
> + if (!prog)
> + return libbpf_err(-EINVAL);
> +
> + if (!OPTS_VALID(opts, bpf_prog_load_opts))
> + return libbpf_err(-EINVAL);
> +
> + obj = prog->obj;
> + if (obj->state < OBJ_PREPARED)
> + return libbpf_err(-EINVAL);
> +
> + /* Copy caller opts, fall back to prog/object defaults */
> + OPTS_SET(pattr, expected_attach_type,
> + OPTS_GET(opts, expected_attach_type, 0) ?: prog->expected_attach_type);
OPTS_GET(opts, expected_attach_type, prog->expected_attach_type)
and same almost everywhere else
> + OPTS_SET(pattr, attach_btf_id, OPTS_GET(opts, attach_btf_id, 0) ?: prog->attach_btf_id);
> + OPTS_SET(pattr, attach_btf_obj_fd,
> + OPTS_GET(opts, attach_btf_obj_fd, 0) ?: prog->attach_btf_obj_fd);
> + OPTS_SET(pattr, attach_prog_fd, OPTS_GET(opts, attach_prog_fd, 0) ?: prog->attach_prog_fd);
> + OPTS_SET(pattr, prog_flags, OPTS_GET(opts, prog_flags, 0) ?: prog->prog_flags);
> + OPTS_SET(pattr, prog_ifindex, OPTS_GET(opts, prog_ifindex, 0) ?: prog->prog_ifindex);
> + OPTS_SET(pattr, kern_version, OPTS_GET(opts, kern_version, 0) ?: obj->kern_version);
> + OPTS_SET(pattr, fd_array, OPTS_GET(opts, fd_array, NULL) ?: obj->fd_array);
> + OPTS_SET(pattr, token_fd, OPTS_GET(opts, token_fd, 0) ?: obj->token_fd);
> + if (attr.token_fd)
> + attr.prog_flags |= BPF_F_TOKEN_FD;
> +
> + /* BTF func/line info */
> + if (obj->btf && btf__fd(obj->btf) >= 0) {
> + OPTS_SET(pattr, prog_btf_fd, OPTS_GET(opts, prog_btf_fd, 0) ?: btf__fd(obj->btf));
> + OPTS_SET(pattr, func_info, OPTS_GET(opts, func_info, NULL) ?: prog->func_info);
> + OPTS_SET(pattr, func_info_cnt,
> + OPTS_GET(opts, func_info_cnt, 0) ?: prog->func_info_cnt);
> + OPTS_SET(pattr, func_info_rec_size,
> + OPTS_GET(opts, func_info_rec_size, 0) ?: prog->func_info_rec_size);
> + OPTS_SET(pattr, line_info, OPTS_GET(opts, line_info, NULL) ?: prog->line_info);
> + OPTS_SET(pattr, line_info_cnt,
> + OPTS_GET(opts, line_info_cnt, 0) ?: prog->line_info_cnt);
> + OPTS_SET(pattr, line_info_rec_size,
> + OPTS_GET(opts, line_info_rec_size, 0) ?: prog->line_info_rec_size);
> + }
> +
> + OPTS_SET(pattr, log_buf, OPTS_GET(opts, log_buf, NULL));
> + OPTS_SET(pattr, log_size, OPTS_GET(opts, log_size, 0));
> + OPTS_SET(pattr, log_level, OPTS_GET(opts, log_level, 0));
as discussed offline, we shouldn't use OPTS_SET() here: we control pattr's
size and layout, so OPTS_SET() doesn't contribute anything and should only
be used for writing into user-provided opts structs.
> +
> + /* Resolve BTF attach targets, set sleepable/XDP flags, etc. */
> + if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
> + err = prog->sec_def->prog_prepare_load_fn(prog, pattr, prog->sec_def->cookie);
> + if (err)
> + return libbpf_err(err);
> + }
> +
[...]