* [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-16 22:00 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 02/10] selftests/bpf: Add test cases for btf__permute functionality Donglin Peng
` (8 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
Introduce btf__permute() API to allow in-place rearrangement of BTF types.
This function reorganizes BTF type order according to a provided array of
type IDs, updating all type references to maintain consistency.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
tools/lib/bpf/btf.c | 119 +++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/btf.h | 36 ++++++++++++
tools/lib/bpf/libbpf.map | 1 +
3 files changed, 156 insertions(+)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 84a4b0abc8be..26ebc0234b9b 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -5868,3 +5868,122 @@ int btf__relocate(struct btf *btf, const struct btf *base_btf)
btf->owns_base = false;
return libbpf_err(err);
}
+
+struct btf_permute {
+ struct btf *btf;
+ __u32 *id_map;
+};
+
+/* Callback function to remap individual type ID references */
+static int btf_permute_remap_type_id(__u32 *type_id, void *ctx)
+{
+ struct btf_permute *p = ctx;
+ __u32 new_type_id = *type_id;
+
+ /* ID refers to the base BTF or the VOID type */
+ if (new_type_id < p->btf->start_id)
+ return 0;
+
+ if (new_type_id >= btf__type_cnt(p->btf))
+ return -EINVAL;
+
+ *type_id = p->id_map[new_type_id - p->btf->start_id];
+ return 0;
+}
+
+int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
+ const struct btf_permute_opts *opts)
+{
+ struct btf_permute p;
+ struct btf_ext *btf_ext;
+ void *nt, *new_types = NULL;
+ __u32 *order_map = NULL;
+ int err = 0, i;
+ __u32 id;
+
+ if (!OPTS_VALID(opts, btf_permute_opts) || id_map_cnt != btf->nr_types)
+ return libbpf_err(-EINVAL);
+
+ /* order_map records, for each new position, the original type ID */
+ order_map = calloc(id_map_cnt, sizeof(*id_map));
+ if (!order_map) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ new_types = calloc(btf->hdr->type_len, 1);
+ if (!new_types) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ if (btf_ensure_modifiable(btf)) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ for (i = 0; i < id_map_cnt; i++) {
+ id = id_map[i];
+ if (id < btf->start_id || id >= btf__type_cnt(btf)) {
+ err = -EINVAL;
+ goto done;
+ }
+ id -= btf->start_id;
+ /* two types cannot be mapped to the same ID */
+ if (order_map[id]) {
+ err = -EINVAL;
+ goto done;
+ }
+ order_map[id] = i + btf->start_id;
+ }
+
+ p.btf = btf;
+ p.id_map = id_map;
+ nt = new_types;
+ for (i = 0; i < id_map_cnt; i++) {
+ struct btf_field_iter it;
+ const struct btf_type *t;
+ __u32 *type_id;
+ int type_size;
+
+ id = order_map[i];
+ t = btf__type_by_id(btf, id);
+ type_size = btf_type_size(t);
+ memcpy(nt, t, type_size);
+
+ /* fix up referenced IDs for BTF */
+ err = btf_field_iter_init(&it, nt, BTF_FIELD_ITER_IDS);
+ if (err)
+ goto done;
+ while ((type_id = btf_field_iter_next(&it))) {
+ err = btf_permute_remap_type_id(type_id, &p);
+ if (err)
+ goto done;
+ }
+
+ nt += type_size;
+ }
+
+ /* fix up referenced IDs for btf_ext */
+ btf_ext = OPTS_GET(opts, btf_ext, NULL);
+ if (btf_ext) {
+ err = btf_ext_visit_type_ids(btf_ext, btf_permute_remap_type_id, &p);
+ if (err)
+ goto done;
+ }
+
+ for (nt = new_types, i = 0; i < id_map_cnt; i++) {
+ btf->type_offs[i] = nt - new_types;
+ nt += btf_type_size(nt);
+ }
+
+ free(order_map);
+ free(btf->types_data);
+ btf->types_data = new_types;
+ return 0;
+
+done:
+ free(order_map);
+ free(new_types);
+ return libbpf_err(err);
+}
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index cc01494d6210..ba67e5457e3a 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -281,6 +281,42 @@ LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
*/
LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
+struct btf_permute_opts {
+ size_t sz;
+ /* optional .BTF.ext info along the main BTF info */
+ struct btf_ext *btf_ext;
+ size_t :0;
+};
+#define btf_permute_opts__last_field btf_ext
+
+/**
+ * @brief **btf__permute()** performs in-place BTF type rearrangement
+ * @param btf BTF object to permute
+ * @param id_map Array mapping original type IDs to new IDs
+ * @param id_map_cnt Number of elements in @id_map
+ * @param opts Optional parameters for BTF extension updates
+ * @return 0 on success, negative error code on failure
+ *
+ * **btf__permute()** rearranges BTF types according to the specified ID mapping.
+ * The @id_map array defines the new type ID for each original type ID.
+ *
+ * For **base BTF**:
+ * - @id_map must include all types from ID 1 to `btf__type_cnt(btf)-1`
+ * - @id_map_cnt should be `btf__type_cnt(btf) - 1`
+ * - Mapping uses `id_map[original_id - 1] = new_id`
+ *
+ * For **split BTF**:
+ * - @id_map should cover only split types
+ * - @id_map_cnt should be `btf__type_cnt(btf) - btf__type_cnt(btf__base_btf(btf))`
+ * - Mapping uses `id_map[original_id - btf__type_cnt(btf__base_btf(btf))] = new_id`
+ *
+ * On error, returns negative error code and sets errno:
+ * - `-EINVAL`: Invalid parameters or ID mapping (duplicates, out-of-range)
+ * - `-ENOMEM`: Memory allocation failure
+ */
+LIBBPF_API int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
+ const struct btf_permute_opts *opts);
+
struct btf_dump;
struct btf_dump_opts {
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 8ed8749907d4..b778e5a5d0a8 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -451,4 +451,5 @@ LIBBPF_1.7.0 {
global:
bpf_map__set_exclusive_program;
bpf_map__exclusive_program;
+ btf__permute;
} LIBBPF_1.6.0;
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering
2025-12-08 6:23 ` [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering Donglin Peng
@ 2025-12-16 22:00 ` Eduard Zingerman
2025-12-16 22:47 ` Andrii Nakryiko
2025-12-17 3:30 ` Donglin Peng
0 siblings, 2 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-16 22:00 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> Introduce btf__permute() API to allow in-place rearrangement of BTF types.
> This function reorganizes BTF type order according to a provided array of
> type IDs, updating all type references to maintain consistency.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index cc01494d6210..ba67e5457e3a 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h
> @@ -281,6 +281,42 @@ LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
> */
> LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
>
> +struct btf_permute_opts {
> + size_t sz;
> + /* optional .BTF.ext info along the main BTF info */
> + struct btf_ext *btf_ext;
> + size_t :0;
> +};
> +#define btf_permute_opts__last_field btf_ext
> +
> +/**
> + * @brief **btf__permute()** performs in-place BTF type rearrangement
> + * @param btf BTF object to permute
> + * @param id_map Array mapping original type IDs to new IDs
> + * @param id_map_cnt Number of elements in @id_map
> + * @param opts Optional parameters for BTF extension updates
> + * @return 0 on success, negative error code on failure
> + *
> + * **btf__permute()** rearranges BTF types according to the specified ID mapping.
> + * The @id_map array defines the new type ID for each original type ID.
> + *
> + * For **base BTF**:
> + * - @id_map must include all types from ID 1 to `btf__type_cnt(btf)-1`
> + * - @id_map_cnt should be `btf__type_cnt(btf) - 1`
> + * - Mapping uses `id_map[original_id - 1] = new_id`
> + *
> + * For **split BTF**:
> + * - @id_map should cover only split types
> + * - @id_map_cnt should be `btf__type_cnt(btf) - btf__type_cnt(btf__base_btf(btf))`
> + * - Mapping uses `id_map[original_id - btf__type_cnt(btf__base_btf(btf))] = new_id`
Nit: internally the rule does not have special cases:
id_map[original_id - start_id] = new_id
So, maybe there is no need to split these cases in the docstring?
Otherwise it is not immediately clear that both cases are handled
uniformly.
> + *
> + * On error, returns negative error code and sets errno:
> + * - `-EINVAL`: Invalid parameters or ID mapping (duplicates, out-of-range)
> + * - `-ENOMEM`: Memory allocation failure
> + */
> +LIBBPF_API int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
> + const struct btf_permute_opts *opts);
> +
> struct btf_dump;
>
> struct btf_dump_opts {
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 8ed8749907d4..b778e5a5d0a8 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -451,4 +451,5 @@ LIBBPF_1.7.0 {
> global:
> bpf_map__set_exclusive_program;
> bpf_map__exclusive_program;
> + btf__permute;
> } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering
2025-12-16 22:00 ` Eduard Zingerman
@ 2025-12-16 22:47 ` Andrii Nakryiko
2025-12-17 3:30 ` Donglin Peng
1 sibling, 0 replies; 36+ messages in thread
From: Andrii Nakryiko @ 2025-12-16 22:47 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Donglin Peng, ast, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
On Tue, Dec 16, 2025 at 2:00 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > From: pengdonglin <pengdonglin@xiaomi.com>
> >
> > Introduce btf__permute() API to allow in-place rearrangement of BTF types.
> > This function reorganizes BTF type order according to a provided array of
> > type IDs, updating all type references to maintain consistency.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> > ---
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> [...]
>
> > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > index cc01494d6210..ba67e5457e3a 100644
> > --- a/tools/lib/bpf/btf.h
> > +++ b/tools/lib/bpf/btf.h
> > @@ -281,6 +281,42 @@ LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
> > */
> > LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
> >
> > +struct btf_permute_opts {
> > + size_t sz;
> > + /* optional .BTF.ext info along the main BTF info */
> > + struct btf_ext *btf_ext;
> > + size_t :0;
> > +};
> > +#define btf_permute_opts__last_field btf_ext
> > +
> > +/**
> > + * @brief **btf__permute()** performs in-place BTF type rearrangement
> > + * @param btf BTF object to permute
> > + * @param id_map Array mapping original type IDs to new IDs
> > + * @param id_map_cnt Number of elements in @id_map
> > + * @param opts Optional parameters for BTF extension updates
> > + * @return 0 on success, negative error code on failure
> > + *
> > + * **btf__permute()** rearranges BTF types according to the specified ID mapping.
> > + * The @id_map array defines the new type ID for each original type ID.
> > + *
> > + * For **base BTF**:
> > + * - @id_map must include all types from ID 1 to `btf__type_cnt(btf)-1`
> > + * - @id_map_cnt should be `btf__type_cnt(btf) - 1`
> > + * - Mapping uses `id_map[original_id - 1] = new_id`
I don't really like this - 1 here. Conceptually BTF starts at type #0,
which is hard-coded to VOID type. User cannot redefine or remap it,
but it's still there. So let's include this in the id_map contract:
id_map[0] has to be zero, and then id_map[original_id] = new_id.
For split BTF, types are shifted by amount of types that are in base
BTF, so id_map[original_id - btf__type_cnt(base)] = new_id. And that's
purely to not waste memory, because otherwise id_map[original_id] =
new_id would be the simplest and best API (but we don't want to force
users to allocate many kilobytes of zeroes, no?).
So funny enough, internal implementation will be inconsistent
(start_id doesn't work for base BTF case), but external contract will
be consistent.
> > + *
> > + * For **split BTF**:
> > + * - @id_map should cover only split types
> > + * - @id_map_cnt should be `btf__type_cnt(btf) - btf__type_cnt(btf__base_btf(btf))`
> > + * - Mapping uses `id_map[original_id - btf__type_cnt(btf__base_btf(btf))] = new_id`
>
> Nit: internally the rule does not have special cases:
>
> id_map[original_id - start_id] = new_id
>
> So, maybe there is no need to split these cases in the docstring?
> Otherwise it is not immediately clear that both cases are handled
> uniformly.
>
> > + *
> > + * On error, returns negative error code and sets errno:
> > + * - `-EINVAL`: Invalid parameters or ID mapping (duplicates, out-of-range)
> > + * - `-ENOMEM`: Memory allocation failure
> > + */
> > +LIBBPF_API int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
> > + const struct btf_permute_opts *opts);
> > +
> > struct btf_dump;
> >
> > struct btf_dump_opts {
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 8ed8749907d4..b778e5a5d0a8 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -451,4 +451,5 @@ LIBBPF_1.7.0 {
> > global:
> > bpf_map__set_exclusive_program;
> > bpf_map__exclusive_program;
> > + btf__permute;
> > } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering
2025-12-16 22:00 ` Eduard Zingerman
2025-12-16 22:47 ` Andrii Nakryiko
@ 2025-12-17 3:30 ` Donglin Peng
1 sibling, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 3:30 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 6:00 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > From: pengdonglin <pengdonglin@xiaomi.com>
> >
> > Introduce btf__permute() API to allow in-place rearrangement of BTF types.
> > This function reorganizes BTF type order according to a provided array of
> > type IDs, updating all type references to maintain consistency.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> > ---
>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> [...]
>
> > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > index cc01494d6210..ba67e5457e3a 100644
> > --- a/tools/lib/bpf/btf.h
> > +++ b/tools/lib/bpf/btf.h
> > @@ -281,6 +281,42 @@ LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
> > */
> > LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
> >
> > +struct btf_permute_opts {
> > + size_t sz;
> > + /* optional .BTF.ext info along the main BTF info */
> > + struct btf_ext *btf_ext;
> > + size_t :0;
> > +};
> > +#define btf_permute_opts__last_field btf_ext
> > +
> > +/**
> > + * @brief **btf__permute()** performs in-place BTF type rearrangement
> > + * @param btf BTF object to permute
> > + * @param id_map Array mapping original type IDs to new IDs
> > + * @param id_map_cnt Number of elements in @id_map
> > + * @param opts Optional parameters for BTF extension updates
> > + * @return 0 on success, negative error code on failure
> > + *
> > + * **btf__permute()** rearranges BTF types according to the specified ID mapping.
> > + * The @id_map array defines the new type ID for each original type ID.
> > + *
> > + * For **base BTF**:
> > + * - @id_map must include all types from ID 1 to `btf__type_cnt(btf)-1`
> > + * - @id_map_cnt should be `btf__type_cnt(btf) - 1`
> > + * - Mapping uses `id_map[original_id - 1] = new_id`
> > + *
> > + * For **split BTF**:
> > + * - @id_map should cover only split types
> > + * - @id_map_cnt should be `btf__type_cnt(btf) - btf__type_cnt(btf__base_btf(btf))`
> > + * - Mapping uses `id_map[original_id - btf__type_cnt(btf__base_btf(btf))] = new_id`
>
> Nit: internally the rule does not have special cases:
>
> id_map[original_id - start_id] = new_id
>
> So, maybe there is no need to split these cases in the docstring?
> Otherwise it is not immediately clear that both cases are handled
> uniformly.
Thanks, I agree and will refine the docstring.
>
> > + *
> > + * On error, returns negative error code and sets errno:
> > + * - `-EINVAL`: Invalid parameters or ID mapping (duplicates, out-of-range)
> > + * - `-ENOMEM`: Memory allocation failure
> > + */
> > +LIBBPF_API int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
> > + const struct btf_permute_opts *opts);
> > +
> > struct btf_dump;
> >
> > struct btf_dump_opts {
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 8ed8749907d4..b778e5a5d0a8 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -451,4 +451,5 @@ LIBBPF_1.7.0 {
> > global:
> > bpf_map__set_exclusive_program;
> > bpf_map__exclusive_program;
> > + btf__permute;
> > } LIBBPF_1.6.0;
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH bpf-next v9 02/10] selftests/bpf: Add test cases for btf__permute functionality
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
2025-12-08 6:23 ` [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-16 22:00 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 03/10] tools/resolve_btfids: Support BTF sorting feature Donglin Peng
` (7 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
This patch introduces test cases for the btf__permute function to ensure
it works correctly with both base BTF and split BTF scenarios.
The test suite includes:
- test_permute_base: Validates permutation on base BTF
- test_permute_split: Tests permutation on split BTF
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
.../selftests/bpf/prog_tests/btf_permute.c | 228 ++++++++++++++++++
1 file changed, 228 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_permute.c
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_permute.c b/tools/testing/selftests/bpf/prog_tests/btf_permute.c
new file mode 100644
index 000000000000..9aa71cdf984a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/btf_permute.c
@@ -0,0 +1,228 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Xiaomi */
+
+#include <test_progs.h>
+#include <bpf/btf.h>
+#include "btf_helpers.h"
+
+static void permute_base_check(struct btf *btf)
+{
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=4 bits_offset=0",
+ "[2] FUNC 'f' type_id=6 linkage=static",
+ "[3] PTR '(anon)' type_id=4",
+ "[4] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[5] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=4 bits_offset=0",
+ "[6] FUNC_PROTO '(anon)' ret_type_id=4 vlen=1\n"
+ "\t'p' type_id=3");
+}
+
+/* Ensure btf__permute works as expected with base BTF */
+static void test_permute_base(void)
+{
+ struct btf *btf;
+ __u32 permute_ids[6];
+ int start_id = 1;
+ int err;
+
+ btf = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf, "empty_main_btf"))
+ return;
+
+ btf__add_int(btf, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_ptr(btf, 1); /* [2] ptr to int */
+ btf__add_struct(btf, "s1", 4); /* [3] struct s1 { */
+ btf__add_field(btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_struct(btf, "s2", 4); /* [4] struct s2 { */
+ btf__add_field(btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_func_proto(btf, 1); /* [5] int (*)(int *p); */
+ btf__add_func_param(btf, "p", 2);
+ btf__add_func(btf, "f", BTF_FUNC_STATIC, 5); /* [6] int f(int *p); */
+
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] FUNC 'f' type_id=5 linkage=static");
+
+ permute_ids[1 - start_id] = 4; /* [1] -> [4] */
+ permute_ids[2 - start_id] = 3; /* [2] -> [3] */
+ permute_ids[3 - start_id] = 5; /* [3] -> [5] */
+ permute_ids[4 - start_id] = 1; /* [4] -> [1] */
+ permute_ids[5 - start_id] = 6; /* [5] -> [6] */
+ permute_ids[6 - start_id] = 2; /* [6] -> [2] */
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_OK(err, "btf__permute_base"))
+ goto done;
+ permute_base_check(btf);
+
+ /* id_map_cnt is invalid */
+ permute_ids[1 - start_id] = 4; /* [1] -> [4] */
+ permute_ids[2 - start_id] = 3; /* [2] -> [3] */
+ permute_ids[3 - start_id] = 5; /* [3] -> [5] */
+ permute_ids[4 - start_id] = 1; /* [4] -> [1] */
+ permute_ids[5 - start_id] = 6; /* [5] -> [6] */
+ permute_ids[6 - start_id] = 2; /* [6] -> [2] */
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids) - 1, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+ /* Multiple types cannot be mapped to the same ID */
+ permute_ids[1 - start_id] = 4;
+ permute_ids[2 - start_id] = 4;
+ permute_ids[3 - start_id] = 5;
+ permute_ids[4 - start_id] = 1;
+ permute_ids[5 - start_id] = 6;
+ permute_ids[6 - start_id] = 2;
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+ /* Type ID must be valid */
+ permute_ids[1 - start_id] = 4;
+ permute_ids[2 - start_id] = 3;
+ permute_ids[3 - start_id] = 5;
+ permute_ids[4 - start_id] = 1;
+ permute_ids[5 - start_id] = 7;
+ permute_ids[6 - start_id] = 2;
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+done:
+ btf__free(btf);
+}
+
+static void permute_split_check(struct btf *btf)
+{
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] FUNC 'f' type_id=5 linkage=static",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0");
+}
+
+/* Ensure btf__permute works as expected with split BTF */
+static void test_permute_split(void)
+{
+ struct btf *split_btf = NULL, *base_btf = NULL;
+ __u32 permute_ids[4];
+ int err;
+ int start_id;
+
+ base_btf = btf__new_empty();
+ if (!ASSERT_OK_PTR(base_btf, "empty_main_btf"))
+ return;
+
+ btf__add_int(base_btf, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_ptr(base_btf, 1); /* [2] ptr to int */
+ VALIDATE_RAW_BTF(
+ base_btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1");
+ split_btf = btf__new_empty_split(base_btf);
+ if (!ASSERT_OK_PTR(split_btf, "empty_split_btf"))
+ goto cleanup;
+ btf__add_struct(split_btf, "s1", 4); /* [3] struct s1 { */
+ btf__add_field(split_btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_struct(split_btf, "s2", 4); /* [4] struct s2 { */
+ btf__add_field(split_btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_func_proto(split_btf, 1); /* [5] int (*)(int *p); */
+ btf__add_func_param(split_btf, "p", 2);
+ btf__add_func(split_btf, "f", BTF_FUNC_STATIC, 5); /* [6] int f(int *p); */
+
+ VALIDATE_RAW_BTF(
+ split_btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] FUNC 'f' type_id=5 linkage=static");
+
+ start_id = btf__type_cnt(base_btf);
+ permute_ids[3 - start_id] = 6; /* [3] -> [6] */
+ permute_ids[4 - start_id] = 3; /* [4] -> [3] */
+ permute_ids[5 - start_id] = 5; /* [5] -> [5] */
+ permute_ids[6 - start_id] = 4; /* [6] -> [4] */
+ err = btf__permute(split_btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_OK(err, "btf__permute_split"))
+ goto cleanup;
+ permute_split_check(split_btf);
+
+ /*
+ * For split BTF, id_map_cnt must equal the number of types
+ * added on top of the base BTF
+ */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 3;
+ permute_ids[5 - start_id] = 5;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, 3, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+ /* Multiple types cannot be mapped to the same ID */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 3;
+ permute_ids[5 - start_id] = 3;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, 4, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+ /* Cannot map to a base ID */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 2;
+ permute_ids[5 - start_id] = 5;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, 4, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+cleanup:
+ btf__free(split_btf);
+ btf__free(base_btf);
+}
+
+void test_btf_permute(void)
+{
+ if (test__start_subtest("permute_base"))
+ test_permute_base();
+ if (test__start_subtest("permute_split"))
+ test_permute_split();
+}
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 02/10] selftests/bpf: Add test cases for btf__permute functionality
2025-12-08 6:23 ` [PATCH bpf-next v9 02/10] selftests/bpf: Add test cases for btf__permute functionality Donglin Peng
@ 2025-12-16 22:00 ` Eduard Zingerman
0 siblings, 0 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-16 22:00 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> This patch introduces test cases for the btf__permute function to ensure
> it works correctly with both base BTF and split BTF scenarios.
>
> The test suite includes:
> - test_permute_base: Validates permutation on base BTF
> - test_permute_split: Tests permutation on split BTF
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH bpf-next v9 03/10] tools/resolve_btfids: Support BTF sorting feature
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
2025-12-08 6:23 ` [PATCH bpf-next v9 01/10] libbpf: Add BTF permutation support for type reordering Donglin Peng
2025-12-08 6:23 ` [PATCH bpf-next v9 02/10] selftests/bpf: Add test cases for btf__permute functionality Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-16 22:13 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF Donglin Peng
` (6 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
This introduces a new BTF sorting phase that sorts BTF types
by name in ascending order, so that binary search can be used
to look up types.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
tools/bpf/resolve_btfids/main.c | 68 +++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index e0e792017e77..b4ec3b556518 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -846,6 +846,71 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
return 0;
}
+/*
+ * Sort types by name in ascending order, resulting in all
+ * anonymous types being placed before named types.
+ */
+static int cmp_type_names(const void *a, const void *b, void *priv)
+{
+ struct btf *btf = (struct btf *)priv;
+ const struct btf_type *ta = btf__type_by_id(btf, *(__u32 *)a);
+ const struct btf_type *tb = btf__type_by_id(btf, *(__u32 *)b);
+ const char *na, *nb;
+
+ na = btf__str_by_offset(btf, ta->name_off);
+ nb = btf__str_by_offset(btf, tb->name_off);
+ return strcmp(na, nb);
+}
+
+static int sort_btf_by_name(struct btf *btf)
+{
+ __u32 *permute_ids = NULL, *id_map = NULL;
+ int nr_types, i, err = 0;
+ __u32 start_id = 1, id;
+
+ if (btf__base_btf(btf))
+ start_id = btf__type_cnt(btf__base_btf(btf));
+ nr_types = btf__type_cnt(btf) - start_id;
+ if (nr_types < 2)
+ goto out;
+
+ permute_ids = calloc(nr_types, sizeof(*permute_ids));
+ if (!permute_ids) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ id_map = calloc(nr_types, sizeof(*id_map));
+ if (!id_map) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0, id = start_id; i < nr_types; i++, id++)
+ permute_ids[i] = id;
+
+ qsort_r(permute_ids, nr_types, sizeof(*permute_ids), cmp_type_names, btf);
+
+ for (i = 0; i < nr_types; i++) {
+ id = permute_ids[i] - start_id;
+ id_map[id] = i + start_id;
+ }
+
+ err = btf__permute(btf, id_map, nr_types, NULL);
+ if (err)
+ pr_err("FAILED: btf permute: %s\n", strerror(-err));
+
+out:
+ free(permute_ids);
+ free(id_map);
+ return err;
+}
+
+static int btf2btf(struct object *obj)
+{
+ return sort_btf_by_name(obj->btf);
+}
+
static inline int make_out_path(char *buf, const char *in_path, const char *suffix)
{
int len = snprintf(buf, PATH_MAX, "%s%s", in_path, suffix);
@@ -904,6 +969,9 @@ int main(int argc, const char **argv)
if (load_btf(&obj))
goto out;
+ if (btf2btf(&obj))
+ goto out;
+
if (elf_collect(&obj))
goto out;
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread

* Re: [PATCH bpf-next v9 03/10] tools/resolve_btfids: Support BTF sorting feature
2025-12-08 6:23 ` [PATCH bpf-next v9 03/10] tools/resolve_btfids: Support BTF sorting feature Donglin Peng
@ 2025-12-16 22:13 ` Eduard Zingerman
0 siblings, 0 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-16 22:13 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> This introduces a new BTF sorting phase that specifically sorts
> BTF types by name in ascending order, so that the binary search
> can be used to look up types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
* [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (2 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 03/10] tools/resolve_btfids: Support BTF sorting feature Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-16 23:38 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting Donglin Peng
` (5 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
This patch introduces a binary search optimization for BTF type lookups
when the BTF instance contains sorted types. The optimization
significantly improves performance when searching for types in large
BTF instances. For unsorted BTF, the implementation falls back to the
original linear search.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
tools/lib/bpf/btf.c | 103 ++++++++++++++++++++++++++++++++++----------
1 file changed, 80 insertions(+), 23 deletions(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 26ebc0234b9b..7f150c869bf6 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -92,6 +92,8 @@ struct btf {
* - for split BTF counts number of types added on top of base BTF.
*/
__u32 nr_types;
+ /* the start IDs of named types in sorted BTF */
+ int sorted_start_id;
/* if not NULL, points to the base BTF on top of which the current
* split BTF is based
*/
@@ -897,46 +899,98 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
return type_id;
}
-__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
+static __s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
+ __s32 start_id, __s32 end_id)
{
- __u32 i, nr_types = btf__type_cnt(btf);
-
- if (!strcmp(type_name, "void"))
- return 0;
-
- for (i = 1; i < nr_types; i++) {
- const struct btf_type *t = btf__type_by_id(btf, i);
- const char *name = btf__name_by_offset(btf, t->name_off);
-
- if (name && !strcmp(type_name, name))
- return i;
+ const struct btf_type *t;
+ const char *tname;
+ __s32 l, r, m, lmost = -ENOENT;
+ int ret;
+
+ l = start_id;
+ r = end_id;
+ while (l <= r) {
+ m = l + (r - l) / 2;
+ t = btf_type_by_id(btf, m);
+ tname = btf__str_by_offset(btf, t->name_off);
+ ret = strcmp(tname, name);
+ if (ret < 0) {
+ l = m + 1;
+ } else {
+ if (ret == 0)
+ lmost = m;
+ r = m - 1;
+ }
}
- return libbpf_err(-ENOENT);
+ return lmost;
}
static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
const char *type_name, __u32 kind)
{
- __u32 i, nr_types = btf__type_cnt(btf);
+ const struct btf_type *t;
+ const char *tname;
+ __s32 idx;
+
+ if (start_id < btf->start_id) {
+ idx = btf_find_by_name_kind(btf->base_btf, start_id,
+ type_name, kind);
+ if (idx >= 0)
+ return idx;
+ start_id = btf->start_id;
+ }
- if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
+ if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
return 0;
- for (i = start_id; i < nr_types; i++) {
- const struct btf_type *t = btf__type_by_id(btf, i);
- const char *name;
+ if (btf->sorted_start_id > 0) {
+ __s32 end_id = btf__type_cnt(btf) - 1;
+
+ /* skip anonymous types */
+ start_id = max(start_id, btf->sorted_start_id);
+ idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
+ if (unlikely(idx < 0))
+ return libbpf_err(-ENOENT);
+
+ if (unlikely(kind == -1))
+ return idx;
+
+ t = btf_type_by_id(btf, idx);
+ if (likely(BTF_INFO_KIND(t->info) == kind))
+ return idx;
+
+ for (idx++; idx <= end_id; idx++) {
+ t = btf__type_by_id(btf, idx);
+ tname = btf__str_by_offset(btf, t->name_off);
+ if (strcmp(tname, type_name) != 0)
+ return libbpf_err(-ENOENT);
+ if (btf_kind(t) == kind)
+ return idx;
+ }
+ } else {
+ __u32 i, total;
- if (btf_kind(t) != kind)
- continue;
- name = btf__name_by_offset(btf, t->name_off);
- if (name && !strcmp(type_name, name))
- return i;
+ total = btf__type_cnt(btf);
+ for (i = start_id; i < total; i++) {
+ t = btf_type_by_id(btf, i);
+ if (kind != -1 && btf_kind(t) != kind)
+ continue;
+ tname = btf__str_by_offset(btf, t->name_off);
+ if (tname && strcmp(tname, type_name) == 0)
+ return i;
+ }
}
return libbpf_err(-ENOENT);
}
+/* the kind value of -1 indicates that kind matching should be skipped */
+__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
+{
+ return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
+}
+
__s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
__u32 kind)
{
@@ -1006,6 +1060,7 @@ static struct btf *btf_new_empty(struct btf *base_btf)
btf->fd = -1;
btf->ptr_sz = sizeof(void *);
btf->swapped_endian = false;
+ btf->sorted_start_id = 0;
if (base_btf) {
btf->base_btf = base_btf;
@@ -1057,6 +1112,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
btf->start_id = 1;
btf->start_str_off = 0;
btf->fd = -1;
+ btf->sorted_start_id = 0;
if (base_btf) {
btf->base_btf = base_btf;
@@ -1715,6 +1771,7 @@ static void btf_invalidate_raw_data(struct btf *btf)
free(btf->raw_data_swapped);
btf->raw_data_swapped = NULL;
}
+ btf->sorted_start_id = 0;
}
/* Ensure BTF is ready to be modified (by splitting into a three memory
--
2.34.1
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-08 6:23 ` [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF Donglin Peng
@ 2025-12-16 23:38 ` Eduard Zingerman
2025-12-16 23:43 ` Eduard Zingerman
2025-12-17 2:32 ` Donglin Peng
0 siblings, 2 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-16 23:38 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
[...]
Lgtm, one question below.
> static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> const char *type_name, __u32 kind)
> {
> - __u32 i, nr_types = btf__type_cnt(btf);
> + const struct btf_type *t;
> + const char *tname;
> + __s32 idx;
> +
> + if (start_id < btf->start_id) {
> + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> + type_name, kind);
> + if (idx >= 0)
> + return idx;
> + start_id = btf->start_id;
> + }
>
> - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> return 0;
>
> - for (i = start_id; i < nr_types; i++) {
> - const struct btf_type *t = btf__type_by_id(btf, i);
> - const char *name;
> + if (btf->sorted_start_id > 0) {
> + __s32 end_id = btf__type_cnt(btf) - 1;
> +
> + /* skip anonymous types */
> + start_id = max(start_id, btf->sorted_start_id);
> + idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
> + if (unlikely(idx < 0))
> + return libbpf_err(-ENOENT);
> +
> + if (unlikely(kind == -1))
> + return idx;
> +
> + t = btf_type_by_id(btf, idx);
> + if (likely(BTF_INFO_KIND(t->info) == kind))
> + return idx;
> +
> + for (idx++; idx <= end_id; idx++) {
> + t = btf__type_by_id(btf, idx);
> + tname = btf__str_by_offset(btf, t->name_off);
> + if (strcmp(tname, type_name) != 0)
> + return libbpf_err(-ENOENT);
> + if (btf_kind(t) == kind)
^^^^^^^^^^^^^^^^^^^
Is kind != -1 check missing here?
> + return idx;
> + }
> + } else {
> + __u32 i, total;
>
> - if (btf_kind(t) != kind)
> - continue;
> - name = btf__name_by_offset(btf, t->name_off);
> - if (name && !strcmp(type_name, name))
> - return i;
> + total = btf__type_cnt(btf);
> + for (i = start_id; i < total; i++) {
> + t = btf_type_by_id(btf, i);
> + if (kind != -1 && btf_kind(t) != kind)
> + continue;
> + tname = btf__str_by_offset(btf, t->name_off);
> + if (tname && strcmp(tname, type_name) == 0)
Nit: no need for `tname &&` part, as we found out.
> + return i;
> + }
> }
>
> return libbpf_err(-ENOENT);
> }
>
> +/* the kind value of -1 indicates that kind matching should be skipped */
> +__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> +{
> + return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
> +}
> +
> __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
> __u32 kind)
> {
[...]
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-16 23:38 ` Eduard Zingerman
@ 2025-12-16 23:43 ` Eduard Zingerman
2025-12-17 2:46 ` Donglin Peng
2025-12-17 2:32 ` Donglin Peng
1 sibling, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-16 23:43 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Tue, 2025-12-16 at 15:38 -0800, Eduard Zingerman wrote:
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
>
> [...]
>
> Lgtm, one question below.
>
> > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > const char *type_name, __u32 kind)
> > {
> > - __u32 i, nr_types = btf__type_cnt(btf);
> > + const struct btf_type *t;
> > + const char *tname;
> > + __s32 idx;
> > +
> > + if (start_id < btf->start_id) {
> > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > + type_name, kind);
> > + if (idx >= 0)
> > + return idx;
> > + start_id = btf->start_id;
> > + }
> >
> > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > return 0;
> >
> > - for (i = start_id; i < nr_types; i++) {
> > - const struct btf_type *t = btf__type_by_id(btf, i);
> > - const char *name;
> > + if (btf->sorted_start_id > 0) {
^^^^^^^^^^^^^^^^^^^^^^^^
Also, previous implementation worked for anonymous types, but this one
will not work because of the 'max(start_id, btf->sorted_start_id)', right?
Maybe check that type is not anonymous in the condition above?
> > + __s32 end_id = btf__type_cnt(btf) - 1;
> > +
> > + /* skip anonymous types */
> > + start_id = max(start_id, btf->sorted_start_id);
> > + idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
> > + if (unlikely(idx < 0))
> > + return libbpf_err(-ENOENT);
> > +
> > + if (unlikely(kind == -1))
> > + return idx;
> > +
> > + t = btf_type_by_id(btf, idx);
> > + if (likely(BTF_INFO_KIND(t->info) == kind))
> > + return idx;
> > +
> > + for (idx++; idx <= end_id; idx++) {
> > + t = btf__type_by_id(btf, idx);
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (strcmp(tname, type_name) != 0)
> > + return libbpf_err(-ENOENT);
> > + if (btf_kind(t) == kind)
> ^^^^^^^^^^^^^^^^^^^
> Is kind != -1 check missing here?
>
> > + return idx;
> > + }
> > + } else {
> > + __u32 i, total;
> >
> > - if (btf_kind(t) != kind)
> > - continue;
> > - name = btf__name_by_offset(btf, t->name_off);
> > - if (name && !strcmp(type_name, name))
> > - return i;
> > + total = btf__type_cnt(btf);
> > + for (i = start_id; i < total; i++) {
> > + t = btf_type_by_id(btf, i);
> > + if (kind != -1 && btf_kind(t) != kind)
> > + continue;
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (tname && strcmp(tname, type_name) == 0)
>
> Nit: no need for `tname &&` part, as we found out.
>
> > + return i;
> > + }
> > }
> >
> > return libbpf_err(-ENOENT);
> > }
> >
> > +/* the kind value of -1 indicates that kind matching should be skipped */
> > +__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> > +{
> > + return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
> > +}
> > +
> > __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
> > __u32 kind)
> > {
>
> [...]
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-16 23:43 ` Eduard Zingerman
@ 2025-12-17 2:46 ` Donglin Peng
0 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 2:46 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 7:43 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2025-12-16 at 15:38 -0800, Eduard Zingerman wrote:
> > On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> >
> > [...]
> >
> > Lgtm, one question below.
> >
> > > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > > const char *type_name, __u32 kind)
> > > {
> > > - __u32 i, nr_types = btf__type_cnt(btf);
> > > + const struct btf_type *t;
> > > + const char *tname;
> > > + __s32 idx;
> > > +
> > > + if (start_id < btf->start_id) {
> > > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > > + type_name, kind);
> > > + if (idx >= 0)
> > > + return idx;
> > > + start_id = btf->start_id;
> > > + }
> > >
> > > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > > return 0;
> > >
> > > - for (i = start_id; i < nr_types; i++) {
> > > - const struct btf_type *t = btf__type_by_id(btf, i);
> > > - const char *name;
> > > + if (btf->sorted_start_id > 0) {
> ^^^^^^^^^^^^^^^^^^^^^^^^
> Also, previous implementation worked for anonymous types, but this one
> will not work because of the 'max(start_id, btf->sorted_start_id)', right?
Yes.
> Maybe check that type is not anonymous in the condition above?
Thanks, I will add the check in the next version.
>
> > > + __s32 end_id = btf__type_cnt(btf) - 1;
> > > +
> > > + /* skip anonymous types */
> > > + start_id = max(start_id, btf->sorted_start_id);
> > > + idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
> > > + if (unlikely(idx < 0))
> > > + return libbpf_err(-ENOENT);
> > > +
> > > + if (unlikely(kind == -1))
> > > + return idx;
> > > +
> > > + t = btf_type_by_id(btf, idx);
> > > + if (likely(BTF_INFO_KIND(t->info) == kind))
> > > + return idx;
> > > +
> > > + for (idx++; idx <= end_id; idx++) {
> > > + t = btf__type_by_id(btf, idx);
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (strcmp(tname, type_name) != 0)
> > > + return libbpf_err(-ENOENT);
> > > + if (btf_kind(t) == kind)
> > ^^^^^^^^^^^^^^^^^^^
> > Is kind != -1 check missing here?
> >
> > > + return idx;
> > > + }
> > > + } else {
> > > + __u32 i, total;
> > >
> > > - if (btf_kind(t) != kind)
> > > - continue;
> > > - name = btf__name_by_offset(btf, t->name_off);
> > > - if (name && !strcmp(type_name, name))
> > > - return i;
> > > + total = btf__type_cnt(btf);
> > > + for (i = start_id; i < total; i++) {
> > > + t = btf_type_by_id(btf, i);
> > > + if (kind != -1 && btf_kind(t) != kind)
> > > + continue;
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (tname && strcmp(tname, type_name) == 0)
> >
> > Nit: no need for `tname &&` part, as we found out.
> >
> > > + return i;
> > > + }
> > > }
> > >
> > > return libbpf_err(-ENOENT);
> > > }
> > >
> > > +/* the kind value of -1 indicates that kind matching should be skipped */
> > > +__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> > > +{
> > > + return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
> > > +}
> > > +
> > > __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
> > > __u32 kind)
> > > {
> >
> > [...]
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-16 23:38 ` Eduard Zingerman
2025-12-16 23:43 ` Eduard Zingerman
@ 2025-12-17 2:32 ` Donglin Peng
2025-12-17 2:34 ` Donglin Peng
2025-12-17 2:57 ` Eduard Zingerman
1 sibling, 2 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 2:32 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 7:38 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
>
> [...]
>
> Lgtm, one question below.
>
> > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > const char *type_name, __u32 kind)
> > {
> > - __u32 i, nr_types = btf__type_cnt(btf);
> > + const struct btf_type *t;
> > + const char *tname;
> > + __s32 idx;
> > +
> > + if (start_id < btf->start_id) {
> > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > + type_name, kind);
> > + if (idx >= 0)
> > + return idx;
> > + start_id = btf->start_id;
> > + }
> >
> > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > return 0;
> >
> > - for (i = start_id; i < nr_types; i++) {
> > - const struct btf_type *t = btf__type_by_id(btf, i);
> > - const char *name;
> > + if (btf->sorted_start_id > 0) {
> > + __s32 end_id = btf__type_cnt(btf) - 1;
> > +
> > + /* skip anonymous types */
> > + start_id = max(start_id, btf->sorted_start_id);
> > + idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
> > + if (unlikely(idx < 0))
> > + return libbpf_err(-ENOENT);
> > +
> > + if (unlikely(kind == -1))
> > + return idx;
> > +
> > + t = btf_type_by_id(btf, idx);
> > + if (likely(BTF_INFO_KIND(t->info) == kind))
> > + return idx;
> > +
> > + for (idx++; idx <= end_id; idx++) {
> > + t = btf__type_by_id(btf, idx);
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (strcmp(tname, type_name) != 0)
> > + return libbpf_err(-ENOENT);
> > + if (btf_kind(t) == kind)
> ^^^^^^^^^^^^^^^^^^^
> Is kind != -1 check missing here?
The check for kind != -1 is unnecessary here because it has already been
performed earlier in the logic, after btf_find_by_name_bsearch successfully
returned a valid idx. In v8, the implementation of btf_find_by_name_bsearch
was refined for better performance, and when idx > 0, it guarantees that the
name has been matched.
Thank you for the review.
Donglin
>
> > + return idx;
> > + }
> > + } else {
> > + __u32 i, total;
> >
> > - if (btf_kind(t) != kind)
> > - continue;
> > - name = btf__name_by_offset(btf, t->name_off);
> > - if (name && !strcmp(type_name, name))
> > - return i;
> > + total = btf__type_cnt(btf);
> > + for (i = start_id; i < total; i++) {
> > + t = btf_type_by_id(btf, i);
> > + if (kind != -1 && btf_kind(t) != kind)
> > + continue;
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (tname && strcmp(tname, type_name) == 0)
>
> Nit: no need for `tname &&` part, as we found out.
>
> > + return i;
> > + }
> > }
> >
> > return libbpf_err(-ENOENT);
> > }
> >
> > +/* the kind value of -1 indicates that kind matching should be skipped */
> > +__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> > +{
> > + return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
> > +}
> > +
> > __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
> > __u32 kind)
> > {
>
> [...]
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-17 2:32 ` Donglin Peng
@ 2025-12-17 2:34 ` Donglin Peng
2025-12-17 2:57 ` Eduard Zingerman
1 sibling, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 2:34 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 10:32 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> On Wed, Dec 17, 2025 at 7:38 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> >
> > [...]
> >
> > Lgtm, one question below.
> >
> > > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > > const char *type_name, __u32 kind)
> > > {
> > > - __u32 i, nr_types = btf__type_cnt(btf);
> > > + const struct btf_type *t;
> > > + const char *tname;
> > > + __s32 idx;
> > > +
> > > + if (start_id < btf->start_id) {
> > > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > > + type_name, kind);
> > > + if (idx >= 0)
> > > + return idx;
> > > + start_id = btf->start_id;
> > > + }
> > >
> > > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > > return 0;
> > >
> > > - for (i = start_id; i < nr_types; i++) {
> > > - const struct btf_type *t = btf__type_by_id(btf, i);
> > > - const char *name;
> > > + if (btf->sorted_start_id > 0) {
> > > + __s32 end_id = btf__type_cnt(btf) - 1;
> > > +
> > > + /* skip anonymous types */
> > > + start_id = max(start_id, btf->sorted_start_id);
> > > + idx = btf_find_by_name_bsearch(btf, type_name, start_id, end_id);
> > > + if (unlikely(idx < 0))
> > > + return libbpf_err(-ENOENT);
> > > +
> > > + if (unlikely(kind == -1))
> > > + return idx;
> > > +
> > > + t = btf_type_by_id(btf, idx);
> > > + if (likely(BTF_INFO_KIND(t->info) == kind))
> > > + return idx;
> > > +
> > > + for (idx++; idx <= end_id; idx++) {
> > > + t = btf__type_by_id(btf, idx);
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (strcmp(tname, type_name) != 0)
> > > + return libbpf_err(-ENOENT);
> > > + if (btf_kind(t) == kind)
> > ^^^^^^^^^^^^^^^^^^^
> > Is kind != -1 check missing here?
>
> The check for kind != -1 is unnecessary here because it has already been
> performed earlier in the logic, after btf_find_by_name_bsearch successfully
> returned a valid idx. In v8, the implementation of btf_find_by_name_bsearch
> was refined for better performance, and when idx > 0, it guarantees that the
> name has been matched.
>
> Thank you for the review.
> Donglin
>
> >
> > > + return idx;
> > > + }
> > > + } else {
> > > + __u32 i, total;
> > >
> > > - if (btf_kind(t) != kind)
> > > - continue;
> > > - name = btf__name_by_offset(btf, t->name_off);
> > > - if (name && !strcmp(type_name, name))
> > > - return i;
> > > + total = btf__type_cnt(btf);
> > > + for (i = start_id; i < total; i++) {
> > > + t = btf_type_by_id(btf, i);
> > > + if (kind != -1 && btf_kind(t) != kind)
> > > + continue;
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (tname && strcmp(tname, type_name) == 0)
> >
> > Nit: no need for `tname &&` part, as we found out.
Thanks, I will remove the check in the next version.
> >
> > > + return i;
> > > + }
> > > }
> > >
> > > return libbpf_err(-ENOENT);
> > > }
> > >
> > > +/* the kind value of -1 indicates that kind matching should be skipped */
> > > +__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> > > +{
> > > + return btf_find_by_name_kind(btf, btf->start_id, type_name, -1);
> > > +}
> > > +
> > > __s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
> > > __u32 kind)
> > > {
> >
> > [...]
* Re: [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF
2025-12-17 2:32 ` Donglin Peng
2025-12-17 2:34 ` Donglin Peng
@ 2025-12-17 2:57 ` Eduard Zingerman
1 sibling, 0 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 2:57 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, 2025-12-17 at 10:32 +0800, Donglin Peng wrote:
[...]
> > > + if (unlikely(kind == -1))
> > > + return idx;
> > > +
> > > + t = btf_type_by_id(btf, idx);
> > > + if (likely(BTF_INFO_KIND(t->info) == kind))
> > > + return idx;
> > > +
> > > + for (idx++; idx <= end_id; idx++) {
> > > + t = btf__type_by_id(btf, idx);
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (strcmp(tname, type_name) != 0)
> > > + return libbpf_err(-ENOENT);
> > > + if (btf_kind(t) == kind)
> > ^^^^^^^^^^^^^^^^^^^
> > Is kind != -1 check missing here?
>
> The check for kind != -1 is unnecessary here because it has already been
> performed earlier in the logic, after btf_find_by_name_bsearch successfully
> returned a valid idx. In v8, the implementation of btf_find_by_name_bsearch
> was refined for better performance, and when idx > 0, it guarantees that the
> name has been matched.
>
> Thank you for the review.
> Donglin
Ack, missed that, thank you for explaining.
* [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (3 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 04/10] libbpf: Optimize type lookup with binary search for sorted BTF Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-17 0:32 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 06/10] btf: Optimize type lookup with binary search Donglin Peng
` (4 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
This patch checks whether the BTF is sorted by name in ascending
order. If sorted, binary search will be used when looking up types.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
tools/lib/bpf/btf.c | 46 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 45 insertions(+), 1 deletion(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 7f150c869bf6..a53d24704857 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -899,6 +899,49 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
return type_id;
}
+/*
+ * Assuming that types are sorted by name in ascending order.
+ */
+static int btf_compare_type_names(const void *a, const void *b, void *priv)
+{
+ struct btf *btf = (struct btf *)priv;
+ struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
+ struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
+ const char *na, *nb;
+
+ na = btf__str_by_offset(btf, ta->name_off);
+ nb = btf__str_by_offset(btf, tb->name_off);
+ return strcmp(na, nb);
+}
+
+static void btf_check_sorted(struct btf *btf)
+{
+ const struct btf_type *t;
+ int i, k = 0, n;
+ __u32 sorted_start_id = 0;
+
+ if (btf->nr_types < 2)
+ return;
+
+ n = btf__type_cnt(btf) - 1;
+ for (i = btf->start_id; i < n; i++) {
+ k = i + 1;
+ if (btf_compare_type_names(&i, &k, btf) > 0)
+ return;
+ t = btf_type_by_id(btf, i);
+ if (sorted_start_id == 0 &&
+ !str_is_empty(btf__str_by_offset(btf, t->name_off)))
+ sorted_start_id = i;
+ }
+
+ t = btf_type_by_id(btf, k);
+ if (sorted_start_id == 0 &&
+ !str_is_empty(btf__str_by_offset(btf, t->name_off)))
+ sorted_start_id = k;
+ if (sorted_start_id)
+ btf->sorted_start_id = sorted_start_id;
+}
+
static __s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
__s32 start_id, __s32 end_id)
{
@@ -935,7 +978,7 @@ static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
if (start_id < btf->start_id) {
idx = btf_find_by_name_kind(btf->base_btf, start_id,
- type_name, kind);
+ type_name, kind);
if (idx >= 0)
return idx;
start_id = btf->start_id;
@@ -1147,6 +1190,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
err = err ?: btf_sanity_check(btf);
if (err)
goto done;
+ btf_check_sorted(btf);
done:
if (err) {
--
2.34.1
* Re: [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting
2025-12-08 6:23 ` [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting Donglin Peng
@ 2025-12-17 0:32 ` Eduard Zingerman
2025-12-17 3:19 ` Donglin Peng
0 siblings, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 0:32 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> This patch checks whether the BTF is sorted by name in ascending
> order. If sorted, binary search will be used when looking up types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
> tools/lib/bpf/btf.c | 46 ++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 45 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 7f150c869bf6..a53d24704857 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -899,6 +899,49 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
> return type_id;
> }
>
> +/*
> + * Assuming that types are sorted by name in ascending order.
> + */
> +static int btf_compare_type_names(const void *a, const void *b, void *priv)
This can be declared as ...(u32 a, u32 b, struct btf *btf).
> +{
> + struct btf *btf = (struct btf *)priv;
> + struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
> + struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
> + const char *na, *nb;
> +
> + na = btf__str_by_offset(btf, ta->name_off);
> + nb = btf__str_by_offset(btf, tb->name_off);
> + return strcmp(na, nb);
> +}
> +
> +static void btf_check_sorted(struct btf *btf)
> +{
> + const struct btf_type *t;
> + int i, k = 0, n;
> + __u32 sorted_start_id = 0;
> +
> + if (btf->nr_types < 2)
> + return;
> +
> + n = btf__type_cnt(btf) - 1;
> + for (i = btf->start_id; i < n; i++) {
> + k = i + 1;
> + if (btf_compare_type_names(&i, &k, btf) > 0)
> + return;
> + t = btf_type_by_id(btf, i);
> + if (sorted_start_id == 0 &&
> + !str_is_empty(btf__str_by_offset(btf, t->name_off)))
^^^^^^^^
Nit: broken indentation.
> + sorted_start_id = i;
> + }
> +
> + t = btf_type_by_id(btf, k);
Nit: please use 'n' instead of 'k'.
Maybe just change condition in the loop and avoid the second part?
E.g.:
n = btf__type_cnt(btf);
for (...) {
...
if (k < n && btf_compare_type_names(a: &i, b: &k, priv: btf) > 0)
return;
...
}
A bit shorter/simpler this way.
> + if (sorted_start_id == 0 &&
> + !str_is_empty(btf__str_by_offset(btf, t->name_off)))
> + sorted_start_id = k;
> + if (sorted_start_id)
> + btf->sorted_start_id = sorted_start_id;
> +}
> +
> static __s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
> __s32 start_id, __s32 end_id)
> {
> @@ -935,7 +978,7 @@ static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
>
> if (start_id < btf->start_id) {
> idx = btf_find_by_name_kind(btf->base_btf, start_id,
> - type_name, kind);
> + type_name, kind);
Nit: shouldn't be in this patch.
> if (idx >= 0)
> return idx;
> start_id = btf->start_id;
> @@ -1147,6 +1190,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
> err = err ?: btf_sanity_check(btf);
> if (err)
> goto done;
> + btf_check_sorted(btf);
>
> done:
> if (err) {
* Re: [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting
2025-12-17 0:32 ` Eduard Zingerman
@ 2025-12-17 3:19 ` Donglin Peng
0 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 3:19 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 8:32 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > From: pengdonglin <pengdonglin@xiaomi.com>
> >
> > This patch checks whether the BTF is sorted by name in ascending
> > order. If sorted, binary search will be used when looking up types.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> > ---
> > tools/lib/bpf/btf.c | 46 ++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 45 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 7f150c869bf6..a53d24704857 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -899,6 +899,49 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
> > return type_id;
> > }
> >
> > +/*
> > + * Assuming that types are sorted by name in ascending order.
> > + */
> > +static int btf_compare_type_names(const void *a, const void *b, void *priv)
>
> This can be declared as ...(u32 a, u32 b, struct btf *btf).
Thanks, I will fix it in the next version.
>
> > +{
> > + struct btf *btf = (struct btf *)priv;
> > + struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
> > + struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
> > + const char *na, *nb;
> > +
> > + na = btf__str_by_offset(btf, ta->name_off);
> > + nb = btf__str_by_offset(btf, tb->name_off);
> > + return strcmp(na, nb);
> > +}
> > +
> > +static void btf_check_sorted(struct btf *btf)
> > +{
> > + const struct btf_type *t;
> > + int i, k = 0, n;
> > + __u32 sorted_start_id = 0;
> > +
> > + if (btf->nr_types < 2)
> > + return;
> > +
> > + n = btf__type_cnt(btf) - 1;
> > + for (i = btf->start_id; i < n; i++) {
> > + k = i + 1;
> > + if (btf_compare_type_names(&i, &k, btf) > 0)
> > + return;
> > + t = btf_type_by_id(btf, i);
> > + if (sorted_start_id == 0 &&
> > + !str_is_empty(btf__str_by_offset(btf, t->name_off)))
> ^^^^^^^^
> Nit: broken indentation.
Thanks, I will fix it.
>
> > + sorted_start_id = i;
> > + }
> > +
> > + t = btf_type_by_id(btf, k);
>
> Nit: please use 'n' instead of 'k'.
Thanks, I agree, although k equals n at this point.
> Maybe just change condition in the loop and avoid the second part?
> E.g.:
>
> n = btf__type_cnt(btf);
> for (...) {
> ...
> if (k < n && btf_compare_type_names(a: &i, b: &k, priv: btf) > 0)
> return;
> ...
> }
>
> A bit shorter/simpler this way.
Great, I agree and will fix it in the next version.
>
> > + if (sorted_start_id == 0 &&
> > + !str_is_empty(btf__str_by_offset(btf, t->name_off)))
> > + sorted_start_id = k;
> > + if (sorted_start_id)
> > + btf->sorted_start_id = sorted_start_id;
> > +}
> > +
> > static __s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
> > __s32 start_id, __s32 end_id)
> > {
> > @@ -935,7 +978,7 @@ static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> >
> > if (start_id < btf->start_id) {
> > idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > - type_name, kind);
> > + type_name, kind);
>
> Nit: shouldn't be in this patch.
Thanks, I will fix it in the next version.
>
> > if (idx >= 0)
> > return idx;
> > start_id = btf->start_id;
> > @@ -1147,6 +1190,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
> > err = err ?: btf_sanity_check(btf);
> > if (err)
> > goto done;
> > + btf_check_sorted(btf);
> >
> > done:
> > if (err) {
>
* [PATCH bpf-next v9 06/10] btf: Optimize type lookup with binary search
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (4 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 05/10] libbpf: Verify BTF Sorting Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-08 6:23 ` [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting Donglin Peng
` (3 subsequent siblings)
9 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
Improve btf_find_by_name_kind() performance by adding binary search
support for sorted types. Falls back to linear search for compatibility.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
kernel/bpf/btf.c | 85 +++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 76 insertions(+), 9 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0de8fc8a0e0b..842f9c0200e4 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -259,6 +259,7 @@ struct btf {
void *nohdr_data;
struct btf_header hdr;
u32 nr_types; /* includes VOID for base BTF */
+ u32 sorted_start_id;
u32 types_size;
u32 data_size;
refcount_t refcnt;
@@ -494,6 +495,11 @@ static bool btf_type_is_modifier(const struct btf_type *t)
return false;
}
+static int btf_start_id(const struct btf *btf)
+{
+ return btf->start_id + (btf->base_btf ? 0 : 1);
+}
+
bool btf_type_is_void(const struct btf_type *t)
{
return t == &btf_void;
@@ -544,21 +550,79 @@ u32 btf_nr_types(const struct btf *btf)
return total;
}
+static s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
+ s32 start_id, s32 end_id)
+{
+ const struct btf_type *t;
+ const char *tname;
+ s32 l, r, m, lmost = -ENOENT;
+ int ret;
+
+ l = start_id;
+ r = end_id;
+ while (l <= r) {
+ m = l + (r - l) / 2;
+ t = btf_type_by_id(btf, m);
+ tname = btf_name_by_offset(btf, t->name_off);
+ ret = strcmp(tname, name);
+ if (ret < 0) {
+ l = m + 1;
+ } else {
+ if (ret == 0)
+ lmost = m;
+ r = m - 1;
+ }
+ }
+
+ return lmost;
+}
+
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind)
{
+ const struct btf *base_btf = btf_base_btf(btf);
const struct btf_type *t;
const char *tname;
- u32 i, total;
+ s32 idx;
- total = btf_nr_types(btf);
- for (i = 1; i < total; i++) {
- t = btf_type_by_id(btf, i);
- if (BTF_INFO_KIND(t->info) != kind)
- continue;
+ if (base_btf) {
+ idx = btf_find_by_name_kind(base_btf, name, kind);
+ if (idx > 0)
+ return idx;
+ }
- tname = btf_name_by_offset(btf, t->name_off);
- if (!strcmp(tname, name))
- return i;
+ if (btf->sorted_start_id > 0) {
+ /* skip anonymous types */
+ s32 start_id = btf->sorted_start_id;
+ s32 end_id = btf_nr_types(btf) - 1;
+
+ idx = btf_find_by_name_bsearch(btf, name, start_id, end_id);
+ if (idx < 0)
+ return -ENOENT;
+
+ t = btf_type_by_id(btf, idx);
+ if (BTF_INFO_KIND(t->info) == kind)
+ return idx;
+
+ for (idx++; idx <= end_id; idx++) {
+ t = btf_type_by_id(btf, idx);
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) != 0)
+ return -ENOENT;
+ if (BTF_INFO_KIND(t->info) == kind)
+ return idx;
+ }
+ } else {
+ u32 i, total;
+
+ total = btf_nr_types(btf);
+ for (i = btf_start_id(btf); i < total; i++) {
+ t = btf_type_by_id(btf, i);
+ if (BTF_INFO_KIND(t->info) != kind)
+ continue;
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) == 0)
+ return i;
+ }
}
return -ENOENT;
@@ -5791,6 +5855,7 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
goto errout;
}
env->btf = btf;
+ btf->sorted_start_id = 0;
data = kvmalloc(attr->btf_size, GFP_KERNEL | __GFP_NOWARN);
if (!data) {
@@ -6210,6 +6275,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
btf->data = data;
btf->data_size = data_size;
btf->kernel_btf = true;
+ btf->sorted_start_id = 0;
snprintf(btf->name, sizeof(btf->name), "%s", name);
err = btf_parse_hdr(env);
@@ -6327,6 +6393,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
btf->start_id = base_btf->nr_types;
btf->start_str_off = base_btf->hdr.str_len;
btf->kernel_btf = true;
+ btf->sorted_start_id = 0;
snprintf(btf->name, sizeof(btf->name), "%s", module_name);
btf->data = kvmemdup(data, data_size, GFP_KERNEL | __GFP_NOWARN);
--
2.34.1
* [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (5 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 06/10] btf: Optimize type lookup with binary search Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-09 3:21 ` Donglin Peng
2025-12-17 0:48 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance Donglin Peng
` (2 subsequent siblings)
9 siblings, 2 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
This patch checks whether the BTF is sorted by name in ascending order.
If sorted, binary search will be used when looking up types.
Specifically, vmlinux and kernel module BTFs are always sorted during
the build phase with anonymous types placed before named types, so we
only need to identify the starting ID of named types.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
kernel/bpf/btf.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 842f9c0200e4..925cb524f3a8 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -550,6 +550,60 @@ u32 btf_nr_types(const struct btf *btf)
return total;
}
+/*
+ * Assuming that types are sorted by name in ascending order.
+ */
+static int btf_compare_type_names(const void *a, const void *b, void *priv)
+{
+ struct btf *btf = (struct btf *)priv;
+ const struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
+ const struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
+ const char *na, *nb;
+
+ na = btf_name_by_offset(btf, ta->name_off);
+ nb = btf_name_by_offset(btf, tb->name_off);
+ return strcmp(na, nb);
+}
+
+/* Note that vmlinux and kernel module BTFs are always sorted
+ * during the building phase.
+ */
+static void btf_check_sorted(struct btf *btf)
+{
+ const struct btf_type *t;
+ bool skip_cmp = btf_is_kernel(btf);
+ u32 sorted_start_id = 0;
+ int i, n, k = 0;
+
+ if (btf->nr_types < 2)
+ return;
+
+ n = btf_nr_types(btf) - 1;
+ for (i = btf_start_id(btf); i < n; i++) {
+ k = i + 1;
+ if (!skip_cmp &&
+ btf_compare_type_names(&i, &k, btf) > 0)
+ return;
+
+ if (sorted_start_id == 0) {
+ t = btf_type_by_id(btf, i);
+ if (t->name_off) {
+ sorted_start_id = i;
+ if (skip_cmp)
+ break;
+ }
+ }
+ }
+
+ if (sorted_start_id == 0) {
+ t = btf_type_by_id(btf, k);
+ if (t->name_off)
+ sorted_start_id = k;
+ }
+ if (sorted_start_id)
+ btf->sorted_start_id = sorted_start_id;
+}
+
static s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
s32 start_id, s32 end_id)
{
@@ -5889,6 +5943,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
if (err)
goto errout;
+ btf_check_sorted(btf);
+
struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
if (IS_ERR(struct_meta_tab)) {
err = PTR_ERR(struct_meta_tab);
@@ -6296,6 +6352,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
if (err)
goto errout;
+ btf_check_sorted(btf);
refcount_set(&btf->refcnt, 1);
return btf;
@@ -6430,6 +6487,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
}
btf_verifier_env_free(env);
+ btf_check_sorted(btf);
refcount_set(&btf->refcnt, 1);
return btf;
--
2.34.1
* Re: [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting
2025-12-08 6:23 ` [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting Donglin Peng
@ 2025-12-09 3:21 ` Donglin Peng
2025-12-17 0:41 ` Eduard Zingerman
2025-12-17 0:48 ` Eduard Zingerman
1 sibling, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-09 3:21 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
On Mon, Dec 8, 2025 at 2:24 PM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> This patch checks whether the BTF is sorted by name in ascending order.
> If sorted, binary search will be used when looking up types.
>
> Specifically, vmlinux and kernel module BTFs are always sorted during
> the build phase with anonymous types placed before named types, so we
> only need to identify the starting ID of named types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
> kernel/bpf/btf.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 58 insertions(+)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 842f9c0200e4..925cb524f3a8 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -550,6 +550,60 @@ u32 btf_nr_types(const struct btf *btf)
> return total;
> }
>
> +/*
> + * Assuming that types are sorted by name in ascending order.
> + */
> +static int btf_compare_type_names(const void *a, const void *b, void *priv)
> +{
> + struct btf *btf = (struct btf *)priv;
> + const struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
> + const struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
> + const char *na, *nb;
> +
> + na = btf_name_by_offset(btf, ta->name_off);
> + nb = btf_name_by_offset(btf, tb->name_off);
> + return strcmp(na, nb);
> +}
> +
> +/* Note that vmlinux and kernel module BTFs are always sorted
> + * during the building phase.
> + */
> +static void btf_check_sorted(struct btf *btf)
> +{
> + const struct btf_type *t;
> + bool skip_cmp = btf_is_kernel(btf);
> + u32 sorted_start_id = 0;
> + int i, n, k = 0;
> +
> + if (btf->nr_types < 2)
> + return;
> +
> + n = btf_nr_types(btf) - 1;
> + for (i = btf_start_id(btf); i < n; i++) {
> + k = i + 1;
> + if (!skip_cmp &&
> + btf_compare_type_names(&i, &k, btf) > 0)
> + return;
> +
> + if (sorted_start_id == 0) {
> + t = btf_type_by_id(btf, i);
> + if (t->name_off) {
> + sorted_start_id = i;
> + if (skip_cmp)
> + break;
> + }
> + }
> + }
> +
> + if (sorted_start_id == 0) {
> + t = btf_type_by_id(btf, k);
> + if (t->name_off)
> + sorted_start_id = k;
> + }
> + if (sorted_start_id)
> + btf->sorted_start_id = sorted_start_id;
> +}
> +
> static s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
> s32 start_id, s32 end_id)
> {
> @@ -5889,6 +5943,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
> if (err)
> goto errout;
>
> + btf_check_sorted(btf);
I think there is no need to check sorting here, because the BTF in this
code path is generated by the compiler. We only need to cover the cases
of vmlinux and kernel module BTF.
> +
> struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
> if (IS_ERR(struct_meta_tab)) {
> err = PTR_ERR(struct_meta_tab);
> @@ -6296,6 +6352,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
> if (err)
> goto errout;
>
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
>
> return btf;
> @@ -6430,6 +6487,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
> }
>
> btf_verifier_env_free(env);
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
> return btf;
>
> --
> 2.34.1
>
* Re: [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting
2025-12-09 3:21 ` Donglin Peng
@ 2025-12-17 0:41 ` Eduard Zingerman
0 siblings, 0 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 0:41 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Tue, 2025-12-09 at 11:21 +0800, Donglin Peng wrote:
[...]
> > @@ -5889,6 +5943,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
> > if (err)
> > goto errout;
> >
> > + btf_check_sorted(btf);
>
> I think there is no need to check sorting here, because the BTF in this
> code path is generated by the compiler. We only need to cover the cases
> of vmlinux and kernel module BTF.
Idk, we can teach compiler to sort BTF types one day.
Having this call shouldn't incur too much overhead, right?
> > +
> > struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
> > if (IS_ERR(struct_meta_tab)) {
> > err = PTR_ERR(struct_meta_tab);
[...]
* Re: [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting
2025-12-08 6:23 ` [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting Donglin Peng
2025-12-09 3:21 ` Donglin Peng
@ 2025-12-17 0:48 ` Eduard Zingerman
2025-12-17 3:26 ` Donglin Peng
1 sibling, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 0:48 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> This patch checks whether the BTF is sorted by name in ascending order.
> If sorted, binary search will be used when looking up types.
>
> Specifically, vmlinux and kernel module BTFs are always sorted during
> the build phase with anonymous types placed before named types, so we
> only need to identify the starting ID of named types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
> kernel/bpf/btf.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 58 insertions(+)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 842f9c0200e4..925cb524f3a8 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -550,6 +550,60 @@ u32 btf_nr_types(const struct btf *btf)
> return total;
> }
>
> +/*
> + * Assuming that types are sorted by name in ascending order.
> + */
> +static int btf_compare_type_names(const void *a, const void *b, void *priv)
> +{
> + struct btf *btf = (struct btf *)priv;
> + const struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
> + const struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
> + const char *na, *nb;
> +
> + na = btf_name_by_offset(btf, ta->name_off);
> + nb = btf_name_by_offset(btf, tb->name_off);
> + return strcmp(na, nb);
> +}
> +
> +/* Note that vmlinux and kernel module BTFs are always sorted
> + * during the building phase.
> + */
> +static void btf_check_sorted(struct btf *btf)
> +{
> + const struct btf_type *t;
> + bool skip_cmp = btf_is_kernel(btf);
Nit: maybe just do a separate loop when btf_is_kernel(btf) == true?
for (i = 0, n = btf_nr_types(btf); i < n; i++) {
t = btf_type_by_id(btf, i);
if (t->name_off) {
btf->sorted_start_id = i;
return;
}
}
return;
> + u32 sorted_start_id = 0;
> + int i, n, k = 0;
> +
> + if (btf->nr_types < 2)
> + return;
> +
> + n = btf_nr_types(btf) - 1;
> + for (i = btf_start_id(btf); i < n; i++) {
> + k = i + 1;
> + if (!skip_cmp &&
> + btf_compare_type_names(&i, &k, btf) > 0)
> + return;
> +
> + if (sorted_start_id == 0) {
> + t = btf_type_by_id(btf, i);
> + if (t->name_off) {
> + sorted_start_id = i;
> + if (skip_cmp)
> + break;
> + }
> + }
> + }
> +
> + if (sorted_start_id == 0) {
> + t = btf_type_by_id(btf, k);
> + if (t->name_off)
> + sorted_start_id = k;
> + }
> + if (sorted_start_id)
> + btf->sorted_start_id = sorted_start_id;
> +}
> +
> static s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
> s32 start_id, s32 end_id)
> {
> @@ -5889,6 +5943,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
> if (err)
> goto errout;
>
> + btf_check_sorted(btf);
> +
> struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
> if (IS_ERR(struct_meta_tab)) {
> err = PTR_ERR(struct_meta_tab);
> @@ -6296,6 +6352,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
> if (err)
> goto errout;
>
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
>
> return btf;
> @@ -6430,6 +6487,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
> }
>
> btf_verifier_env_free(env);
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
> return btf;
>
* Re: [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting
2025-12-17 0:48 ` Eduard Zingerman
@ 2025-12-17 3:26 ` Donglin Peng
0 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 3:26 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 8:48 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > From: pengdonglin <pengdonglin@xiaomi.com>
> >
> > This patch checks whether the BTF is sorted by name in ascending order.
> > If sorted, binary search will be used when looking up types.
> >
> > Specifically, vmlinux and kernel module BTFs are always sorted during
> > the build phase with anonymous types placed before named types, so we
> > only need to identify the starting ID of named types.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> > ---
> > kernel/bpf/btf.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 58 insertions(+)
> >
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index 842f9c0200e4..925cb524f3a8 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -550,6 +550,60 @@ u32 btf_nr_types(const struct btf *btf)
> > return total;
> > }
> >
> > +/*
> > + * Assuming that types are sorted by name in ascending order.
> > + */
> > +static int btf_compare_type_names(const void *a, const void *b, void *priv)
> > +{
> > + struct btf *btf = (struct btf *)priv;
> > + const struct btf_type *ta = btf_type_by_id(btf, *(__u32 *)a);
> > + const struct btf_type *tb = btf_type_by_id(btf, *(__u32 *)b);
> > + const char *na, *nb;
> > +
> > + na = btf_name_by_offset(btf, ta->name_off);
> > + nb = btf_name_by_offset(btf, tb->name_off);
> > + return strcmp(na, nb);
> > +}
> > +
> > +/* Note that vmlinux and kernel module BTFs are always sorted
> > + * during the building phase.
> > + */
> > +static void btf_check_sorted(struct btf *btf)
> > +{
> > + const struct btf_type *t;
> > + bool skip_cmp = btf_is_kernel(btf);
>
> Nit: maybe just do a separate loop when btf_is_kernel(btf) == true?
Thanks, I agree and it will make the code much cleaner.
>
> for (i = 0, n = btf_nr_types(btf); i < n; i++) {
> t = btf_type_by_id(btf, i);
> if (t->name_off) {
> btf->sorted_start_id = i;
> return;
> }
> }
> return;
>
> > + u32 sorted_start_id = 0;
> > + int i, n, k = 0;
> > +
> > + if (btf->nr_types < 2)
> > + return;
> > +
> > + n = btf_nr_types(btf) - 1;
> > + for (i = btf_start_id(btf); i < n; i++) {
> > + k = i + 1;
> > + if (!skip_cmp &&
> > + btf_compare_type_names(&i, &k, btf) > 0)
> > + return;
> > +
> > + if (sorted_start_id == 0) {
> > + t = btf_type_by_id(btf, i);
> > + if (t->name_off) {
> > + sorted_start_id = i;
> > + if (skip_cmp)
> > + break;
> > + }
> > + }
> > + }
> > +
> > + if (sorted_start_id == 0) {
> > + t = btf_type_by_id(btf, k);
> > + if (t->name_off)
> > + sorted_start_id = k;
> > + }
> > + if (sorted_start_id)
> > + btf->sorted_start_id = sorted_start_id;
> > +}
> > +
> > static s32 btf_find_by_name_bsearch(const struct btf *btf, const char *name,
> > s32 start_id, s32 end_id)
> > {
> > @@ -5889,6 +5943,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
> > if (err)
> > goto errout;
> >
> > + btf_check_sorted(btf);
> > +
> > struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
> > if (IS_ERR(struct_meta_tab)) {
> > err = PTR_ERR(struct_meta_tab);
> > @@ -6296,6 +6352,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
> > if (err)
> > goto errout;
> >
> > + btf_check_sorted(btf);
> > refcount_set(&btf->refcnt, 1);
> >
> > return btf;
> > @@ -6430,6 +6487,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
> > }
> >
> > btf_verifier_env_free(env);
> > + btf_check_sorted(btf);
> > refcount_set(&btf->refcnt, 1);
> > return btf;
> >
* [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (6 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 07/10] btf: Verify BTF Sorting Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-08 6:36 ` Donglin Peng
2025-12-17 6:55 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 09/10] bpf: Optimize the performance of find_bpffs_btf_enums Donglin Peng
2025-12-08 6:23 ` [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
9 siblings, 2 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
Currently, vmlinux and kernel module BTFs are unconditionally
sorted during the build phase, with named types placed at the
end. Thus, anonymous types should be skipped when starting the
search. In my vmlinux BTF, the number of anonymous types is
61,747, which means the loop count can be reduced by 61,747.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
include/linux/btf.h | 1 +
kernel/bpf/btf.c | 15 ++++++++++-----
kernel/bpf/verifier.c | 7 +------
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/include/linux/btf.h b/include/linux/btf.h
index f06976ffb63f..2d28f2b22ae5 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -220,6 +220,7 @@ bool btf_is_module(const struct btf *btf);
bool btf_is_vmlinux(const struct btf *btf);
struct module *btf_try_get_module(const struct btf *btf);
u32 btf_nr_types(const struct btf *btf);
+u32 btf_sorted_start_id(const struct btf *btf);
struct btf *btf_base_btf(const struct btf *btf);
bool btf_type_is_i32(const struct btf_type *t);
bool btf_type_is_i64(const struct btf_type *t);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 925cb524f3a8..5f4f51b0acf4 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
return total;
}
+u32 btf_sorted_start_id(const struct btf *btf)
+{
+ return btf->sorted_start_id ?: (btf->start_id ?: 1);
+}
+
/*
* Assuming that types are sorted by name in ascending order.
*/
@@ -3540,9 +3545,9 @@ const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type
{
const char *value = NULL;
const struct btf_type *t;
- int len, id;
+ int len, id = btf->sorted_start_id > 0 ? btf->sorted_start_id - 1 : 0;
- id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key, 0);
+ id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key, id);
if (id < 0)
return ERR_PTR(id);
@@ -7859,7 +7864,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
*/
for (i = 0; i < nargs; i++) {
u32 tags = 0;
- int id = 0;
+ int id = btf->sorted_start_id > 0 ? btf->sorted_start_id - 1 : 0;
/* 'arg:<tag>' decl_tag takes precedence over derivation of
* register type from BTF type itself
@@ -9340,7 +9345,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
}
/* Attempt to find target candidates in vmlinux BTF first */
- cands = bpf_core_add_cands(cands, main_btf, 1);
+ cands = bpf_core_add_cands(cands, main_btf, main_btf->sorted_start_id);
if (IS_ERR(cands))
return ERR_CAST(cands);
@@ -9372,7 +9377,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
*/
btf_get(mod_btf);
spin_unlock_bh(&btf_idr_lock);
- cands = bpf_core_add_cands(cands, mod_btf, btf_nr_types(main_btf));
+ cands = bpf_core_add_cands(cands, mod_btf, mod_btf->sorted_start_id);
btf_put(mod_btf);
if (IS_ERR(cands))
return ERR_CAST(cands);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f0ca69f888fa..2ae87075db6a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20655,12 +20655,7 @@ static int find_btf_percpu_datasec(struct btf *btf)
* types to look at only module's own BTF types.
*/
n = btf_nr_types(btf);
- if (btf_is_module(btf))
- i = btf_nr_types(btf_vmlinux);
- else
- i = 1;
-
- for(; i < n; i++) {
+ for (i = btf_sorted_start_id(btf); i < n; i++) {
t = btf_type_by_id(btf, i);
if (BTF_INFO_KIND(t->info) != BTF_KIND_DATASEC)
continue;
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-08 6:23 ` [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance Donglin Peng
@ 2025-12-08 6:36 ` Donglin Peng
2025-12-17 6:55 ` Eduard Zingerman
1 sibling, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:36 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
On Mon, Dec 8, 2025 at 2:24 PM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> Currently, vmlinux and kernel module BTFs are unconditionally
> sorted during the build phase, with named types placed at the
> end. Thus, anonymous types should be skipped when starting the
> search. My vmlinux BTF contains 61,747 anonymous types, so each
> linear lookup can skip 61,747 iterations.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
> include/linux/btf.h | 1 +
> kernel/bpf/btf.c | 15 ++++++++++-----
> kernel/bpf/verifier.c | 7 +------
> 3 files changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index f06976ffb63f..2d28f2b22ae5 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -220,6 +220,7 @@ bool btf_is_module(const struct btf *btf);
> bool btf_is_vmlinux(const struct btf *btf);
> struct module *btf_try_get_module(const struct btf *btf);
> u32 btf_nr_types(const struct btf *btf);
> +u32 btf_sorted_start_id(const struct btf *btf);
> struct btf *btf_base_btf(const struct btf *btf);
> bool btf_type_is_i32(const struct btf_type *t);
> bool btf_type_is_i64(const struct btf_type *t);
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 925cb524f3a8..5f4f51b0acf4 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
> return total;
> }
>
> +u32 btf_sorted_start_id(const struct btf *btf)
> +{
> + return btf->sorted_start_id ?: (btf->start_id ?: 1);
> +}
> +
> /*
> * Assuming that types are sorted by name in ascending order.
> */
> @@ -3540,9 +3545,9 @@ const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type
> {
> const char *value = NULL;
> const struct btf_type *t;
> - int len, id;
> + int len, id = btf->sorted_start_id > 0 ? btf->sorted_start_id - 1 : 0;
>
> - id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key, 0);
> + id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key, id);
Sorry, we should pass the sorted_start_id of the base BTF.
> if (id < 0)
> return ERR_PTR(id);
>
> @@ -7859,7 +7864,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
> */
> for (i = 0; i < nargs; i++) {
> u32 tags = 0;
> - int id = 0;
> + int id = btf->sorted_start_id > 0 ? btf->sorted_start_id - 1 : 0;
Ditto.
>
> /* 'arg:<tag>' decl_tag takes precedence over derivation of
> * register type from BTF type itself
> @@ -9340,7 +9345,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
> }
>
> /* Attempt to find target candidates in vmlinux BTF first */
> - cands = bpf_core_add_cands(cands, main_btf, 1);
> + cands = bpf_core_add_cands(cands, main_btf, main_btf->sorted_start_id);
Invoke btf_sorted_start_id.
> if (IS_ERR(cands))
> return ERR_CAST(cands);
>
> @@ -9372,7 +9377,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
> */
> btf_get(mod_btf);
> spin_unlock_bh(&btf_idr_lock);
> - cands = bpf_core_add_cands(cands, mod_btf, btf_nr_types(main_btf));
> + cands = bpf_core_add_cands(cands, mod_btf, mod_btf->sorted_start_id);
> btf_put(mod_btf);
> if (IS_ERR(cands))
> return ERR_CAST(cands);
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index f0ca69f888fa..2ae87075db6a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -20655,12 +20655,7 @@ static int find_btf_percpu_datasec(struct btf *btf)
> * types to look at only module's own BTF types.
> */
> n = btf_nr_types(btf);
> - if (btf_is_module(btf))
> - i = btf_nr_types(btf_vmlinux);
> - else
> - i = 1;
> -
> - for(; i < n; i++) {
> + for (i = btf_sorted_start_id(btf); i < n; i++) {
> t = btf_type_by_id(btf, i);
> if (BTF_INFO_KIND(t->info) != BTF_KIND_DATASEC)
> continue;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-08 6:23 ` [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance Donglin Peng
2025-12-08 6:36 ` Donglin Peng
@ 2025-12-17 6:55 ` Eduard Zingerman
2025-12-17 9:21 ` Donglin Peng
1 sibling, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 6:55 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
[...]
> @@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
> return total;
> }
>
> +u32 btf_sorted_start_id(const struct btf *btf)
> +{
> + return btf->sorted_start_id ?: (btf->start_id ?: 1);
> +}
> +
I think that changes in this patch are correct. However, it seems
error prone to remember that sorted_start_id is always set for
vmlinux/module BTF and might not be set for program BTF.
Wdyt about using the above function everywhere instead of directly
reading the field?
> /*
> * Assuming that types are sorted by name in ascending order.
> */
[...]
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-17 6:55 ` Eduard Zingerman
@ 2025-12-17 9:21 ` Donglin Peng
2025-12-17 17:29 ` Eduard Zingerman
0 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 9:21 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 2:55 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
>
> [...]
>
> > @@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
> > return total;
> > }
> >
> > +u32 btf_sorted_start_id(const struct btf *btf)
> > +{
> > + return btf->sorted_start_id ?: (btf->start_id ?: 1);
> > +}
> > +
>
> I think that changes in this patch are correct. However, it seems
Thanks, I think the changes to btf_find_decl_tag_value and
btf_prepare_func_args will cause issues if the input btf is a
split BTF. We should search from its base BTF. Like this:
const struct btf *base_btf = btf;
while (btf_base_btf(base_btf))
base_btf = btf_base_btf(base_btf);
id = base_btf->sorted_start_id > 0 ? base_btf->sorted_start_id - 1 : 0;
> error prone to remember that sorted_start_id is always set for
> vmlinux/module BTF and might not be set for program BTF.
> Wdyt about using the above function everywhere instead of directly
> reading the field?
Agreed. If so, I think we need to add another helper function to check
whether the input BTF is sorted to improve code clarity.
bool btf_is_sorted(const struct btf *btf)
{
return btf->sorted_start_id > 0;
}
Besides, do you think we should reject loading a kernel module that is
not sorted?
Thanks,
Donglin
>
> > /*
> > * Assuming that types are sorted by name in ascending order.
> > */
>
> [...]
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-17 9:21 ` Donglin Peng
@ 2025-12-17 17:29 ` Eduard Zingerman
2025-12-18 9:16 ` Donglin Peng
0 siblings, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 17:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, 2025-12-17 at 17:21 +0800, Donglin Peng wrote:
> On Wed, Dec 17, 2025 at 2:55 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> >
> > [...]
> >
> > > @@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
> > > return total;
> > > }
> > >
> > > +u32 btf_sorted_start_id(const struct btf *btf)
> > > +{
> > > + return btf->sorted_start_id ?: (btf->start_id ?: 1);
> > > +}
> > > +
> >
> > I think that changes in this patch are correct. However, it seems
>
> Thanks, I think the changes to btf_find_decl_tag_value and
> btf_prepare_func_args will cause issues if the input btf is a
> split BTF. We should search from its base BTF. Like this:
>
> const struct btf *base_btf = btf;
> while (btf_base_btf(base_btf))
> base_btf = btf_base_btf(base_btf);
> id = base_btf->sorted_start_id > 0 ? base_btf->sorted_start_id - 1 : 0;
Missed that, makes sense.
> > error prone to remember that sorted_start_id is always set for
> > vmlinux/module BTF and might not be set for program BTF.
> > Wdyt about using the above function everywhere instead of directly
> > reading the field?
>
> Agreed. If so, I think we need to add another helper function to check
> whether the input BTF is sorted to improve code clarity.
>
> bool btf_is_sorted(const struct btf *btf)
> {
> return btf->sorted_start_id > 0;
> }
Sure, as you see fit.
> Besides, do you think we should reject loading a kernel module that is
> not sorted?
Not my strong side. As far as I understand, when external modules are
built for production use-cases, DKMS is used. DKMS will use the same
resolve_btfids as the kernel the module is built for. Hence rejecting
modules with unsorted BTFs should be fine. Are there other use-cases
where a mismatch between resolve_btfids versions is allowed?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance
2025-12-17 17:29 ` Eduard Zingerman
@ 2025-12-18 9:16 ` Donglin Peng
0 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-18 9:16 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Thu, Dec 18, 2025 at 1:29 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2025-12-17 at 17:21 +0800, Donglin Peng wrote:
> > On Wed, Dec 17, 2025 at 2:55 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > >
> > > On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > >
> > > [...]
> > >
> > > > @@ -550,6 +550,11 @@ u32 btf_nr_types(const struct btf *btf)
> > > > return total;
> > > > }
> > > >
> > > > +u32 btf_sorted_start_id(const struct btf *btf)
> > > > +{
> > > > + return btf->sorted_start_id ?: (btf->start_id ?: 1);
> > > > +}
> > > > +
> > >
> > > I think that changes in this patch are correct. However, it seems
> >
> > Thanks, I think the changes to btf_find_decl_tag_value and
> > btf_prepare_func_args will cause issues if the input btf is a
> > split BTF. We should search from its base BTF. Like this:
> >
> > const struct btf *base_btf = btf;
> > while (btf_base_btf(base_btf))
> > base_btf = btf_base_btf(base_btf);
> > id = base_btf->sorted_start_id > 0 ? base_btf->sorted_start_id - 1 : 0;
>
> Missed that, makes sense.
>
> > > error prone to remember that sorted_start_id is always set for
> > > vmlinux/module BTF and might not be set for program BTF.
> > > Wdyt about using the above function everywhere instead of directly
> > > reading the field?
> >
> > Agreed. If so, I think we need to add another helper function to check
> > whether the input BTF is sorted to improve code clarity.
> >
> > bool btf_is_sorted(const struct btf *btf)
> > {
> > return btf->sorted_start_id > 0;
> > }
>
> Sure, as you see fit.
>
> > Besides, do you think we should reject loading a kernel module that is
> > not sorted?
>
> Not my strong side. As far as I understand, when external modules are
> built for production use-cases, DKMS is used. DKMS will use the same
> resolve_btfids as the kernel the module is built for. Hence rejecting
> modules with unsorted BTFs should be fine. Are there other use-cases
> where a mismatch between resolve_btfids versions is allowed?
Thanks. I found that RHEL supports loading a kernel module built for a
different kernel version, relying on its stable kABI guarantee. So I
think we should not reject loading modules with unsorted BTFs.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH bpf-next v9 09/10] bpf: Optimize the performance of find_bpffs_btf_enums
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (7 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 08/10] bpf: Skip anonymous types in type lookup for performance Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-17 6:58 ` Eduard Zingerman
2025-12-08 6:23 ` [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
Currently, vmlinux BTF is unconditionally sorted during
the build phase. The function btf_find_by_name_kind
executes the binary search branch, so find_bpffs_btf_enums
can be optimized by using btf_find_by_name_kind.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
kernel/bpf/inode.c | 42 +++++++++++++++++++-----------------------
1 file changed, 19 insertions(+), 23 deletions(-)
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index 81780bcf8d25..781c2c3181a4 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -605,10 +605,18 @@ struct bpffs_btf_enums {
static int find_bpffs_btf_enums(struct bpffs_btf_enums *info)
{
+ struct {
+ const struct btf_type **type;
+ const char *name;
+ } btf_enums[] = {
+ {&info->cmd_t, "bpf_cmd"},
+ {&info->map_t, "bpf_map_type"},
+ {&info->prog_t, "bpf_prog_type"},
+ {&info->attach_t, "bpf_attach_type"},
+ };
const struct btf *btf;
const struct btf_type *t;
- const char *name;
- int i, n;
+ int i, id;
memset(info, 0, sizeof(*info));
@@ -620,30 +628,18 @@ static int find_bpffs_btf_enums(struct bpffs_btf_enums *info)
info->btf = btf;
- for (i = 1, n = btf_nr_types(btf); i < n; i++) {
- t = btf_type_by_id(btf, i);
- if (!btf_type_is_enum(t))
- continue;
+ for (i = 0; i < ARRAY_SIZE(btf_enums); i++) {
+ id = btf_find_by_name_kind(btf, btf_enums[i].name,
+ BTF_KIND_ENUM);
+ if (id < 0)
+ goto out;
- name = btf_name_by_offset(btf, t->name_off);
- if (!name)
- continue;
-
- if (strcmp(name, "bpf_cmd") == 0)
- info->cmd_t = t;
- else if (strcmp(name, "bpf_map_type") == 0)
- info->map_t = t;
- else if (strcmp(name, "bpf_prog_type") == 0)
- info->prog_t = t;
- else if (strcmp(name, "bpf_attach_type") == 0)
- info->attach_t = t;
- else
- continue;
-
- if (info->cmd_t && info->map_t && info->prog_t && info->attach_t)
- return 0;
+ t = btf_type_by_id(btf, id);
+ *btf_enums[i].type = t;
}
+ return 0;
+out:
return -ESRCH;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 09/10] bpf: Optimize the performance of find_bpffs_btf_enums
2025-12-08 6:23 ` [PATCH bpf-next v9 09/10] bpf: Optimize the performance of find_bpffs_btf_enums Donglin Peng
@ 2025-12-17 6:58 ` Eduard Zingerman
0 siblings, 0 replies; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 6:58 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> Currently, vmlinux BTF is unconditionally sorted during
> the build phase. The function btf_find_by_name_kind
> executes the binary search branch, so find_bpffs_btf_enums
> can be optimized by using btf_find_by_name_kind.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
> ---
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size
2025-12-08 6:23 [PATCH bpf-next v9 00/10] Improve the performance of BTF type lookups with binary search Donglin Peng
` (8 preceding siblings ...)
2025-12-08 6:23 ` [PATCH bpf-next v9 09/10] bpf: Optimize the performance of find_bpffs_btf_enums Donglin Peng
@ 2025-12-08 6:23 ` Donglin Peng
2025-12-17 7:06 ` Eduard Zingerman
9 siblings, 1 reply; 36+ messages in thread
From: Donglin Peng @ 2025-12-08 6:23 UTC (permalink / raw)
To: ast, andrii.nakryiko
Cc: eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
pengdonglin, Alan Maguire
From: pengdonglin <pengdonglin@xiaomi.com>
Leverage the performance improvement of btf__find_by_name_kind() when
BTF is sorted. For sorted BTF, the function uses binary search with
O(log n) complexity instead of linear search, providing significant
performance benefits, especially for large BTF like vmlinux.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
---
tools/lib/bpf/btf.c | 42 ++++++++++++++++++++++++++++--------------
1 file changed, 28 insertions(+), 14 deletions(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index a53d24704857..6de09a5c4334 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -660,27 +660,41 @@ static int determine_ptr_size(const struct btf *btf)
};
const struct btf_type *t;
const char *name;
- int i, j, n;
+ int i, j, n, id;
if (btf->base_btf && btf->base_btf->ptr_sz > 0)
return btf->base_btf->ptr_sz;
- n = btf__type_cnt(btf);
- for (i = 1; i < n; i++) {
- t = btf__type_by_id(btf, i);
- if (!btf_is_int(t))
- continue;
+ if (btf->sorted_start_id > 0) {
+ for (i = 0; i < ARRAY_SIZE(long_aliases); i++) {
+ id = btf__find_by_name_kind(btf, long_aliases[i], BTF_KIND_INT);
+ if (id < 0)
+ continue;
- if (t->size != 4 && t->size != 8)
- continue;
+ t = btf__type_by_id(btf, id);
+ if (t->size != 4 && t->size != 8)
+ continue;
- name = btf__name_by_offset(btf, t->name_off);
- if (!name)
- continue;
+ return t->size;
+ }
+ } else {
+ n = btf__type_cnt(btf);
+ for (i = 1; i < n; i++) {
+ t = btf__type_by_id(btf, i);
+ if (!btf_is_int(t))
+ continue;
+
+ if (t->size != 4 && t->size != 8)
+ continue;
- for (j = 0; j < ARRAY_SIZE(long_aliases); j++) {
- if (strcmp(name, long_aliases[j]) == 0)
- return t->size;
+ name = btf__name_by_offset(btf, t->name_off);
+ if (!name)
+ continue;
+
+ for (j = 0; j < ARRAY_SIZE(long_aliases); j++) {
+ if (strcmp(name, long_aliases[j]) == 0)
+ return t->size;
+ }
}
}
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size
2025-12-08 6:23 ` [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
@ 2025-12-17 7:06 ` Eduard Zingerman
2025-12-17 8:38 ` Donglin Peng
0 siblings, 1 reply; 36+ messages in thread
From: Eduard Zingerman @ 2025-12-17 7:06 UTC (permalink / raw)
To: Donglin Peng, ast, andrii.nakryiko
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, pengdonglin,
Alan Maguire
On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> From: pengdonglin <pengdonglin@xiaomi.com>
>
> Leverage the performance improvement of btf__find_by_name_kind() when
> BTF is sorted. For sorted BTF, the function uses binary search with
> O(log n) complexity instead of linear search, providing significant
> performance benefits, especially for large BTF like vmlinux.
Is this a big win?
I don't like having two code paths for something which is done once
per BTF load. If it is a big win, maybe just stick with the first loop
(the one that uses btf__find_by_name_kind())? Wdyt?
[...]
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH bpf-next v9 10/10] libbpf: Optimize the performance of determine_ptr_size
2025-12-17 7:06 ` Eduard Zingerman
@ 2025-12-17 8:38 ` Donglin Peng
0 siblings, 0 replies; 36+ messages in thread
From: Donglin Peng @ 2025-12-17 8:38 UTC (permalink / raw)
To: Eduard Zingerman
Cc: ast, andrii.nakryiko, zhangxiaoqin, ihor.solodrai, linux-kernel,
bpf, pengdonglin, Alan Maguire
On Wed, Dec 17, 2025 at 3:06 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2025-12-08 at 14:23 +0800, Donglin Peng wrote:
> > From: pengdonglin <pengdonglin@xiaomi.com>
> >
> > Leverage the performance improvement of btf__find_by_name_kind() when
> > BTF is sorted. For sorted BTF, the function uses binary search with
> > O(log n) complexity instead of linear search, providing significant
> > performance benefits, especially for large BTF like vmlinux.
>
> Is this a big win?
Here is a comparison:
w/: 1us
w/o: 351us
> I don't like having two code paths for something which is done once
> per BTF load. If it is a big win, maybe just stick with the first loop
> (the one that uses btf__find_by_name_kind())? Wdyt?
Yes, I agree and will only keep the first loop in the next version.
>
> [...]
^ permalink raw reply [flat|nested] 36+ messages in thread