* [PATCH bpf-next v12 01/11] libbpf: Add BTF permutation support for type reordering
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
Introduce the btf__permute() API to allow in-place rearrangement of BTF
types. The function reorders types according to a caller-provided array of
type IDs and updates all type references to keep them consistent.
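The remapping contract can be sketched with a toy model, using plain int
arrays standing in for BTF types (toy_permute and its data are illustrative,
not libbpf API):

```c
#include <assert.h>

#define N 4	/* toy table size; ID 0 plays the role of VOID and never moves */

/* Toy model of the remapping contract: refs_old[i] is the type ID referenced
 * by type i, id_map[old] is the new ID assigned to old. Entry j of the new
 * table is the old entry that id_map sends to position j, with its reference
 * rewritten through id_map. */
static void toy_permute(const int *refs_old, const int *id_map, int *refs_new)
{
	int inv[N], old, new_id;

	for (old = 0; old < N; old++)
		inv[id_map[old]] = old;	/* invert: new ID -> old ID */
	for (new_id = 0; new_id < N; new_id++) {
		old = inv[new_id];
		refs_new[new_id] = id_map[refs_old[old]];
	}
}

/* Example permutation: ID 1 -> 3, 2 -> 1, 3 -> 2; ID 0 stays put. */
static int toy_permute_demo(void)
{
	int refs_old[N] = { 0, 0, 1, 2 };	/* type 2 refs 1, type 3 refs 2 */
	int id_map[N]   = { 0, 3, 1, 2 };
	int refs_new[N];

	toy_permute(refs_old, id_map, refs_new);
	/* old type 2 (which referenced old ID 1) is now ID 1 and must
	 * reference id_map[1] == 3 */
	return refs_new[0] == 0 && refs_new[1] == 3 &&
	       refs_new[2] == 1 && refs_new[3] == 0;
}
```

The real implementation does the same inversion implicitly via order_map and
rewrites references with the BTF field iterator.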
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/lib/bpf/btf.c | 133 +++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/btf.h | 42 +++++++++++++
tools/lib/bpf/libbpf.map | 1 +
3 files changed, 176 insertions(+)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index b136572e889a..bf75f770d29a 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -5887,3 +5887,136 @@ int btf__relocate(struct btf *btf, const struct btf *base_btf)
btf->owns_base = false;
return libbpf_err(err);
}
+
+struct btf_permute {
+ struct btf *btf;
+ __u32 *id_map;
+ __u32 start_offs;
+};
+
+/* Callback function to remap individual type ID references */
+static int btf_permute_remap_type_id(__u32 *type_id, void *ctx)
+{
+ struct btf_permute *p = ctx;
+ __u32 new_id = *type_id;
+
+	/* IDs referring to the base BTF or to VOID are left as-is */
+ if (new_id < p->btf->start_id)
+ return 0;
+
+ if (new_id >= btf__type_cnt(p->btf))
+ return -EINVAL;
+
+ *type_id = p->id_map[new_id - p->btf->start_id + p->start_offs];
+ return 0;
+}
+
+int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
+ const struct btf_permute_opts *opts)
+{
+ struct btf_permute p;
+ struct btf_ext *btf_ext;
+ void *nt, *new_types = NULL;
+ __u32 *order_map = NULL;
+ int err = 0, i;
+ __u32 n, id, start_offs = 0;
+
+ if (!OPTS_VALID(opts, btf_permute_opts))
+ return libbpf_err(-EINVAL);
+
+ if (btf__base_btf(btf)) {
+ n = btf->nr_types;
+ } else {
+ if (id_map[0] != 0)
+ return libbpf_err(-EINVAL);
+ n = btf__type_cnt(btf);
+ start_offs = 1;
+ }
+
+ if (id_map_cnt != n)
+ return libbpf_err(-EINVAL);
+
+	/* order_map[new_id] records the original ID that moves there */
+ order_map = calloc(id_map_cnt, sizeof(*id_map));
+ if (!order_map) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ new_types = calloc(btf->hdr->type_len, 1);
+ if (!new_types) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ if (btf_ensure_modifiable(btf)) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ for (i = start_offs; i < id_map_cnt; i++) {
+ id = id_map[i];
+ if (id < btf->start_id || id >= btf__type_cnt(btf)) {
+ err = -EINVAL;
+ goto done;
+ }
+ id -= btf->start_id - start_offs;
+		/* two types cannot be mapped to the same new ID */
+ if (order_map[id]) {
+ err = -EINVAL;
+ goto done;
+ }
+ order_map[id] = i + btf->start_id - start_offs;
+ }
+
+ p.btf = btf;
+ p.id_map = id_map;
+ p.start_offs = start_offs;
+ nt = new_types;
+ for (i = start_offs; i < id_map_cnt; i++) {
+ struct btf_field_iter it;
+ const struct btf_type *t;
+ __u32 *type_id;
+ int type_size;
+
+ id = order_map[i];
+ t = btf__type_by_id(btf, id);
+ type_size = btf_type_size(t);
+ memcpy(nt, t, type_size);
+
+ /* fix up referenced IDs for BTF */
+ err = btf_field_iter_init(&it, nt, BTF_FIELD_ITER_IDS);
+ if (err)
+ goto done;
+ while ((type_id = btf_field_iter_next(&it))) {
+ err = btf_permute_remap_type_id(type_id, &p);
+ if (err)
+ goto done;
+ }
+
+ nt += type_size;
+ }
+
+ /* fix up referenced IDs for btf_ext */
+ btf_ext = OPTS_GET(opts, btf_ext, NULL);
+ if (btf_ext) {
+ err = btf_ext_visit_type_ids(btf_ext, btf_permute_remap_type_id, &p);
+ if (err)
+ goto done;
+ }
+
+ for (nt = new_types, i = 0; i < id_map_cnt - start_offs; i++) {
+ btf->type_offs[i] = nt - new_types;
+ nt += btf_type_size(nt);
+ }
+
+ free(order_map);
+ free(btf->types_data);
+ btf->types_data = new_types;
+ return 0;
+
+done:
+ free(order_map);
+ free(new_types);
+ return libbpf_err(err);
+}
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index cc01494d6210..b30008c267c0 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -281,6 +281,48 @@ LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
*/
LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
+struct btf_permute_opts {
+ size_t sz;
+ /* optional .BTF.ext info along the main BTF info */
+ struct btf_ext *btf_ext;
+ size_t :0;
+};
+#define btf_permute_opts__last_field btf_ext
+
+/**
+ * @brief **btf__permute()** rearranges BTF types in-place according to a specified ID mapping
+ * @param btf BTF object to permute
+ * @param id_map Array mapping original type IDs to new IDs
+ * @param id_map_cnt Number of elements in @id_map
+ * @param opts Optional parameters, including BTF extension data for reference updates
+ * @return 0 on success, negative error code on failure
+ *
+ * **btf__permute()** reorders BTF types based on the provided @id_map array,
+ * updating all internal type references to maintain consistency. The function
+ * operates in-place, modifying the BTF object directly.
+ *
+ * For **base BTF**:
+ * - @id_map must include all types from ID 0 to `btf__type_cnt(btf) - 1`
+ * - @id_map_cnt must be `btf__type_cnt(btf)`
+ * - Mapping is defined as `id_map[original_id] = new_id`
+ * - `id_map[0]` must be 0 (void type cannot be moved)
+ *
+ * For **split BTF**:
+ * - @id_map must include only split types (types added on top of the base BTF)
+ * - @id_map_cnt must be `btf__type_cnt(btf) - btf__type_cnt(btf__base_btf(btf))`
+ * - Mapping is defined as `id_map[original_id - start_id] = new_id`
+ * - `start_id` equals `btf__type_cnt(btf__base_btf(btf))`
+ *
+ * After permutation, all type references within the BTF data and optional
+ * BTF extension (if provided via @opts) are updated automatically.
+ *
+ * On error, returns a negative error code and sets errno:
+ * - `-EINVAL`: Invalid parameters or invalid ID mapping
+ * - `-ENOMEM`: Memory allocation failure
+ */
+LIBBPF_API int btf__permute(struct btf *btf, __u32 *id_map, __u32 id_map_cnt,
+ const struct btf_permute_opts *opts);
+
struct btf_dump;
struct btf_dump_opts {
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 84fb90a016c9..d18fbcea7578 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -453,4 +453,5 @@ LIBBPF_1.7.0 {
bpf_map__exclusive_program;
bpf_prog_assoc_struct_ops;
bpf_program__assoc_struct_ops;
+ btf__permute;
} LIBBPF_1.6.0;
--
2.34.1
* [PATCH bpf-next v12 02/11] selftests/bpf: Add test cases for btf__permute functionality
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
This patch introduces test cases for the btf__permute function to ensure
it works correctly in both base BTF and split BTF scenarios.
The test suite includes:
- test_permute_base: Validates permutation on base BTF
- test_permute_split: Tests permutation on split BTF
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
.../selftests/bpf/prog_tests/btf_permute.c | 244 ++++++++++++++++++
1 file changed, 244 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_permute.c
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_permute.c b/tools/testing/selftests/bpf/prog_tests/btf_permute.c
new file mode 100644
index 000000000000..04ade5ad77ac
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/btf_permute.c
@@ -0,0 +1,244 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Xiaomi */
+
+#include <test_progs.h>
+#include <bpf/btf.h>
+#include "btf_helpers.h"
+
+static void permute_base_check(struct btf *btf)
+{
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=4 bits_offset=0",
+ "[2] FUNC 'f' type_id=6 linkage=static",
+ "[3] PTR '(anon)' type_id=4",
+ "[4] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[5] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=4 bits_offset=0",
+ "[6] FUNC_PROTO '(anon)' ret_type_id=4 vlen=1\n"
+ "\t'p' type_id=3");
+}
+
+/* Ensure btf__permute works as expected in the base-BTF scenario */
+static void test_permute_base(void)
+{
+ struct btf *btf;
+ __u32 permute_ids[7];
+ int err;
+
+ btf = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf, "empty_main_btf"))
+ return;
+
+ btf__add_int(btf, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_ptr(btf, 1); /* [2] ptr to int */
+ btf__add_struct(btf, "s1", 4); /* [3] struct s1 { */
+ btf__add_field(btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_struct(btf, "s2", 4); /* [4] struct s2 { */
+ btf__add_field(btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_func_proto(btf, 1); /* [5] int (*)(int *p); */
+ btf__add_func_param(btf, "p", 2);
+ btf__add_func(btf, "f", BTF_FUNC_STATIC, 5); /* [6] int f(int *p); */
+
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] FUNC 'f' type_id=5 linkage=static");
+
+ permute_ids[0] = 0; /* [0] -> [0] */
+ permute_ids[1] = 4; /* [1] -> [4] */
+ permute_ids[2] = 3; /* [2] -> [3] */
+ permute_ids[3] = 5; /* [3] -> [5] */
+ permute_ids[4] = 1; /* [4] -> [1] */
+ permute_ids[5] = 6; /* [5] -> [6] */
+ permute_ids[6] = 2; /* [6] -> [2] */
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_OK(err, "btf__permute_base"))
+ goto done;
+ permute_base_check(btf);
+
+ /* ids[0] must be 0 for base BTF */
+ permute_ids[0] = 4; /* [0] -> [0] */
+ permute_ids[1] = 0; /* [1] -> [4] */
+ permute_ids[2] = 3; /* [2] -> [3] */
+ permute_ids[3] = 5; /* [3] -> [5] */
+ permute_ids[4] = 1; /* [4] -> [1] */
+ permute_ids[5] = 6; /* [5] -> [6] */
+ permute_ids[6] = 2; /* [6] -> [2] */
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+ /* id_map_cnt is invalid */
+ permute_ids[0] = 0; /* [0] -> [0] */
+ permute_ids[1] = 4; /* [1] -> [4] */
+ permute_ids[2] = 3; /* [2] -> [3] */
+ permute_ids[3] = 5; /* [3] -> [5] */
+ permute_ids[4] = 1; /* [4] -> [1] */
+ permute_ids[5] = 6; /* [5] -> [6] */
+ permute_ids[6] = 2; /* [6] -> [2] */
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids) - 1, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+	/* Multiple types cannot be mapped to the same ID */
+ permute_ids[0] = 0;
+ permute_ids[1] = 4;
+ permute_ids[2] = 4;
+ permute_ids[3] = 5;
+ permute_ids[4] = 1;
+ permute_ids[5] = 6;
+ permute_ids[6] = 2;
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+ /* Type ID must be valid */
+ permute_ids[0] = 0;
+ permute_ids[1] = 4;
+ permute_ids[2] = 3;
+ permute_ids[3] = 5;
+ permute_ids[4] = 1;
+ permute_ids[5] = 7;
+ permute_ids[6] = 2;
+ err = btf__permute(btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_base"))
+ goto done;
+ /* BTF is not modified */
+ permute_base_check(btf);
+
+done:
+ btf__free(btf);
+}
+
+static void permute_split_check(struct btf *btf)
+{
+ VALIDATE_RAW_BTF(
+ btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] FUNC 'f' type_id=5 linkage=static",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0");
+}
+
+/* Ensure btf__permute works as expected in the split-BTF scenario */
+static void test_permute_split(void)
+{
+ struct btf *split_btf = NULL, *base_btf = NULL;
+ __u32 permute_ids[4];
+ int err, start_id;
+
+ base_btf = btf__new_empty();
+ if (!ASSERT_OK_PTR(base_btf, "empty_main_btf"))
+ return;
+
+ btf__add_int(base_btf, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_ptr(base_btf, 1); /* [2] ptr to int */
+ VALIDATE_RAW_BTF(
+ base_btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1");
+ split_btf = btf__new_empty_split(base_btf);
+ if (!ASSERT_OK_PTR(split_btf, "empty_split_btf"))
+ goto cleanup;
+ btf__add_struct(split_btf, "s1", 4); /* [3] struct s1 { */
+ btf__add_field(split_btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+ btf__add_struct(split_btf, "s2", 4); /* [4] struct s2 { */
+ btf__add_field(split_btf, "m", 1, 0, 0); /* int m; */
+ /* } */
+	btf__add_func_proto(split_btf, 1);	/* [5] int (*)(int *p); */
+ btf__add_func_param(split_btf, "p", 2);
+ btf__add_func(split_btf, "f", BTF_FUNC_STATIC, 5); /* [6] int f(int *p); */
+
+ VALIDATE_RAW_BTF(
+ split_btf,
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
+ "[2] PTR '(anon)' type_id=1",
+ "[3] STRUCT 's1' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[4] STRUCT 's2' size=4 vlen=1\n"
+ "\t'm' type_id=1 bits_offset=0",
+ "[5] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
+ "\t'p' type_id=2",
+ "[6] FUNC 'f' type_id=5 linkage=static");
+
+ start_id = btf__type_cnt(base_btf);
+ permute_ids[3 - start_id] = 6; /* [3] -> [6] */
+ permute_ids[4 - start_id] = 3; /* [4] -> [3] */
+ permute_ids[5 - start_id] = 5; /* [5] -> [5] */
+ permute_ids[6 - start_id] = 4; /* [6] -> [4] */
+ err = btf__permute(split_btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_OK(err, "btf__permute_split"))
+ goto cleanup;
+ permute_split_check(split_btf);
+
+ /*
+	 * For split BTF, id_map_cnt must equal the number of types
+ * added on top of base BTF
+ */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 3;
+ permute_ids[5 - start_id] = 5;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, ARRAY_SIZE(permute_ids) - 1, NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+	/* Multiple types cannot be mapped to the same ID */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 3;
+ permute_ids[5 - start_id] = 3;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+	/* Cannot map to a base BTF ID */
+ permute_ids[3 - start_id] = 4;
+ permute_ids[4 - start_id] = 2;
+ permute_ids[5 - start_id] = 5;
+ permute_ids[6 - start_id] = 6;
+ err = btf__permute(split_btf, permute_ids, ARRAY_SIZE(permute_ids), NULL);
+ if (!ASSERT_ERR(err, "btf__permute_split"))
+ goto cleanup;
+ /* BTF is not modified */
+ permute_split_check(split_btf);
+
+cleanup:
+ btf__free(split_btf);
+ btf__free(base_btf);
+}
+
+void test_btf_permute(void)
+{
+ if (test__start_subtest("permute_base"))
+ test_permute_base();
+ if (test__start_subtest("permute_split"))
+ test_permute_split();
+}
--
2.34.1
* [PATCH bpf-next v12 03/11] tools/resolve_btfids: Support BTF sorting feature
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
This introduces a new BTF sorting phase that sorts BTF types by name in
ascending order, so that binary search can be used to look up types.
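The patch builds the mapping in two steps: sort type IDs by name into an
order array (new position -> old ID), then invert it into the id_map form
(old ID -> new ID) that btf__permute() expects. A minimal stand-alone sketch
of that inversion, using a global toy name table with plain qsort() instead
of the patch's qsort_r()-over-BTF (names and sizes here are illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Toy name table standing in for BTF type names; index = type ID,
 * ID 0 is VOID with an empty name. */
static const char *type_names[] = { "", "int", "foo", "bar" };

static int cmp_by_name(const void *a, const void *b)
{
	return strcmp(type_names[*(const int *)a],
		      type_names[*(const int *)b]);
}

/* Sort IDs 1..n-1 by name into order[] (new position -> old ID), then
 * invert into id_map[] (old ID -> new ID). VOID stays pinned at ID 0,
 * mirroring the start_offs handling in the patch. */
static void build_id_map(int n, int *id_map)
{
	int order[8], i;

	for (i = 0; i < n; i++)
		order[i] = i;
	qsort(order + 1, n - 1, sizeof(order[0]), cmp_by_name);
	for (i = 0; i < n; i++)
		id_map[order[i]] = i;
}
```

With the names above, "bar" < "foo" < "int", so old IDs 3, 2, 1 land at new
positions 1, 2, 3 respectively.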
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/bpf/resolve_btfids/main.c | 64 +++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index df39982f51df..343d08050116 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -850,6 +850,67 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
return 0;
}
+/*
+ * Sort types by name in ascending order; as a result, all
+ * anonymous types (empty names) are placed before named types.
+ */
+static int cmp_type_names(const void *a, const void *b, void *priv)
+{
+ struct btf *btf = (struct btf *)priv;
+ const struct btf_type *ta = btf__type_by_id(btf, *(__u32 *)a);
+ const struct btf_type *tb = btf__type_by_id(btf, *(__u32 *)b);
+ const char *na, *nb;
+
+ na = btf__str_by_offset(btf, ta->name_off);
+ nb = btf__str_by_offset(btf, tb->name_off);
+ return strcmp(na, nb);
+}
+
+static int sort_btf_by_name(struct btf *btf)
+{
+ __u32 *permute_ids = NULL, *id_map = NULL;
+ int nr_types, i, err = 0;
+ __u32 start_id = 0, start_offs = 1, id;
+
+ if (btf__base_btf(btf)) {
+ start_id = btf__type_cnt(btf__base_btf(btf));
+ start_offs = 0;
+ }
+ nr_types = btf__type_cnt(btf) - start_id;
+
+ permute_ids = calloc(nr_types, sizeof(*permute_ids));
+ if (!permute_ids) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ id_map = calloc(nr_types, sizeof(*id_map));
+ if (!id_map) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0, id = start_id; i < nr_types; i++, id++)
+ permute_ids[i] = id;
+
+ qsort_r(permute_ids + start_offs, nr_types - start_offs,
+ sizeof(*permute_ids), cmp_type_names, btf);
+
+ for (i = 0; i < nr_types; i++) {
+ id = permute_ids[i] - start_id;
+ id_map[id] = i + start_id;
+ }
+
+ err = btf__permute(btf, id_map, nr_types, NULL);
+ if (err)
+ pr_err("FAILED: btf permute: %s\n", strerror(-err));
+
+out:
+ free(permute_ids);
+ free(id_map);
+ return err;
+}
+
static inline int make_out_path(char *buf, u32 buf_sz, const char *in_path, const char *suffix)
{
int len = snprintf(buf, buf_sz, "%s%s", in_path, suffix);
@@ -1025,6 +1086,9 @@ int main(int argc, const char **argv)
if (load_btf(&obj))
goto out;
+ if (sort_btf_by_name(obj.btf))
+ goto out;
+
if (elf_collect(&obj))
goto out;
--
2.34.1
* Re: [PATCH bpf-next v12 03/11] tools/resolve_btfids: Support BTF sorting feature
From: Andrii Nakryiko @ 2026-01-14 0:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> This introduces a new BTF sorting phase that specifically sorts
> BTF types by name in ascending order, so that the binary search
> can be used to look up types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
> tools/bpf/resolve_btfids/main.c | 64 +++++++++++++++++++++++++++++++++
> 1 file changed, 64 insertions(+)
>
> diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
> index df39982f51df..343d08050116 100644
> --- a/tools/bpf/resolve_btfids/main.c
> +++ b/tools/bpf/resolve_btfids/main.c
> @@ -850,6 +850,67 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
> return 0;
> }
>
> +/*
> + * Sort types by name in ascending order resulting in all
> + * anonymous types being placed before named types.
> + */
> +static int cmp_type_names(const void *a, const void *b, void *priv)
> +{
> + struct btf *btf = (struct btf *)priv;
> + const struct btf_type *ta = btf__type_by_id(btf, *(__u32 *)a);
> + const struct btf_type *tb = btf__type_by_id(btf, *(__u32 *)b);
> + const char *na, *nb;
> +
> + na = btf__str_by_offset(btf, ta->name_off);
> + nb = btf__str_by_offset(btf, tb->name_off);
> + return strcmp(na, nb);
let's disambiguate the case when strcmp() == 0:
int r = strcmp(na, nb);
if (r != 0)
return r;
/* preserve original relative order of anonymous or same-named types */
return *(__u32 *)a < *(__u32 *)b ? -1 : 1;
(please send as a follow up)
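The effect of this tie-break can be exercised in isolation; the toy name
table below is illustrative, not the resolve_btfids code itself:

```c
#include <stdlib.h>
#include <string.h>

/* Toy table with anonymous ("") and duplicate names; index = type ID. */
static const char *names[] = { "", "", "dup", "dup", "a" };

/* Comparator with the suggested tie-break: fall back to the IDs themselves,
 * so qsort() behaves stably for equal (or empty) names. */
static int cmp_stable(const void *a, const void *b)
{
	int ia = *(const int *)a, ib = *(const int *)b;
	int r = strcmp(names[ia], names[ib]);

	if (r != 0)
		return r;
	/* preserve original relative order of anonymous/same-named types */
	return ia < ib ? -1 : 1;
}
```

Sorting IDs {0, 1, 2, 3, 4} with this comparator keeps the two anonymous
types first in their original order, then "a", then the two "dup" entries
in their original order.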
> +}
> +
> +static int sort_btf_by_name(struct btf *btf)
> +{
> + __u32 *permute_ids = NULL, *id_map = NULL;
> + int nr_types, i, err = 0;
> + __u32 start_id = 0, start_offs = 1, id;
> +
> + if (btf__base_btf(btf)) {
> + start_id = btf__type_cnt(btf__base_btf(btf));
> + start_offs = 0;
with the above cmp_type_names disambiguation sorting becomes stable,
so you won't need this start_offs thing here, you can safely compare
VOID with any other type and it will stay the very first one
(please include in the follow up as well)
> + }
> + nr_types = btf__type_cnt(btf) - start_id;
> +
> + permute_ids = calloc(nr_types, sizeof(*permute_ids));
> + if (!permute_ids) {
> + err = -ENOMEM;
> + goto out;
> + }
> +
> + id_map = calloc(nr_types, sizeof(*id_map));
> + if (!id_map) {
> + err = -ENOMEM;
> + goto out;
> + }
> +
> + for (i = 0, id = start_id; i < nr_types; i++, id++)
> + permute_ids[i] = id;
> +
> + qsort_r(permute_ids + start_offs, nr_types - start_offs,
> + sizeof(*permute_ids), cmp_type_names, btf);
> +
> + for (i = 0; i < nr_types; i++) {
> + id = permute_ids[i] - start_id;
> + id_map[id] = i + start_id;
> + }
> +
> + err = btf__permute(btf, id_map, nr_types, NULL);
> + if (err)
> + pr_err("FAILED: btf permute: %s\n", strerror(-err));
> +
> +out:
> + free(permute_ids);
> + free(id_map);
> + return err;
> +}
> +
> static inline int make_out_path(char *buf, u32 buf_sz, const char *in_path, const char *suffix)
> {
> int len = snprintf(buf, buf_sz, "%s%s", in_path, suffix);
> @@ -1025,6 +1086,9 @@ int main(int argc, const char **argv)
> if (load_btf(&obj))
> goto out;
>
> + if (sort_btf_by_name(obj.btf))
> + goto out;
> +
> if (elf_collect(&obj))
> goto out;
>
> --
> 2.34.1
>
* Re: [PATCH bpf-next v12 03/11] tools/resolve_btfids: Support BTF sorting feature
From: Donglin Peng @ 2026-01-14 1:44 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Wed, Jan 14, 2026 at 8:29 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> >
> > From: Donglin Peng <pengdonglin@xiaomi.com>
> >
> > This introduces a new BTF sorting phase that specifically sorts
> > BTF types by name in ascending order, so that the binary search
> > can be used to look up types.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> > tools/bpf/resolve_btfids/main.c | 64 +++++++++++++++++++++++++++++++++
> > 1 file changed, 64 insertions(+)
> >
> > diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
> > index df39982f51df..343d08050116 100644
> > --- a/tools/bpf/resolve_btfids/main.c
> > +++ b/tools/bpf/resolve_btfids/main.c
> > @@ -850,6 +850,67 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
> > return 0;
> > }
> >
> > +/*
> > + * Sort types by name in ascending order resulting in all
> > + * anonymous types being placed before named types.
> > + */
> > +static int cmp_type_names(const void *a, const void *b, void *priv)
> > +{
> > + struct btf *btf = (struct btf *)priv;
> > + const struct btf_type *ta = btf__type_by_id(btf, *(__u32 *)a);
> > + const struct btf_type *tb = btf__type_by_id(btf, *(__u32 *)b);
> > + const char *na, *nb;
> > +
> > + na = btf__str_by_offset(btf, ta->name_off);
> > + nb = btf__str_by_offset(btf, tb->name_off);
> > + return strcmp(na, nb);
>
> let's disambiguate the case when strcmp() == 0:
>
> int r = strcmp(na, nb);
> if (r != 0)
> return r;
>
> /* preserve original relative order of anonymous or same-named types */
> return *(__u32 *)a < *(__u32 *)b ? -1 : 1;
Thanks, I see.
>
> (please send as a follow up)
Ok.
>
>
> > +}
> > +
> > +static int sort_btf_by_name(struct btf *btf)
> > +{
> > + __u32 *permute_ids = NULL, *id_map = NULL;
> > + int nr_types, i, err = 0;
> > + __u32 start_id = 0, start_offs = 1, id;
> > +
> > + if (btf__base_btf(btf)) {
> > + start_id = btf__type_cnt(btf__base_btf(btf));
> > + start_offs = 0;
>
> with the above cmp_type_names disambiguation sorting becomes stable,
> so you won't need this start_offs thing here, you can safely compare
> VOID with any other type and it will stay the very first one
Great, I will fix it in v13 of this patch.
>
> (please include in the follow up as well)
Okay, thanks.
>
>
>
> > + }
> > + nr_types = btf__type_cnt(btf) - start_id;
> > +
> > + permute_ids = calloc(nr_types, sizeof(*permute_ids));
> > + if (!permute_ids) {
> > + err = -ENOMEM;
> > + goto out;
> > + }
> > +
> > + id_map = calloc(nr_types, sizeof(*id_map));
> > + if (!id_map) {
> > + err = -ENOMEM;
> > + goto out;
> > + }
> > +
> > + for (i = 0, id = start_id; i < nr_types; i++, id++)
> > + permute_ids[i] = id;
> > +
> > + qsort_r(permute_ids + start_offs, nr_types - start_offs,
> > + sizeof(*permute_ids), cmp_type_names, btf);
> > +
> > + for (i = 0; i < nr_types; i++) {
> > + id = permute_ids[i] - start_id;
> > + id_map[id] = i + start_id;
> > + }
> > +
> > + err = btf__permute(btf, id_map, nr_types, NULL);
> > + if (err)
> > + pr_err("FAILED: btf permute: %s\n", strerror(-err));
> > +
> > +out:
> > + free(permute_ids);
> > + free(id_map);
> > + return err;
> > +}
> > +
> > static inline int make_out_path(char *buf, u32 buf_sz, const char *in_path, const char *suffix)
> > {
> > int len = snprintf(buf, buf_sz, "%s%s", in_path, suffix);
> > @@ -1025,6 +1086,9 @@ int main(int argc, const char **argv)
> > if (load_btf(&obj))
> > goto out;
> >
> > + if (sort_btf_by_name(obj.btf))
> > + goto out;
> > +
> > if (elf_collect(&obj))
> > goto out;
> >
> > --
> > 2.34.1
> >
* [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
This patch introduces a binary search optimization for BTF type lookups
when the BTF instance's types are sorted by name.
The optimization significantly improves lookup performance for large BTF
instances; for unsorted BTF, the implementation falls back to the original
linear search.
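The lookup splits into two parts: a lower-bound binary search for the first
type whose name compares >= the target, then a short linear scan over
equal-named types for a kind match. The lower bound can be sketched over a
plain sorted string array (half-open-interval form; equivalent in result to
the closed-interval loop in the patch, and the names array is an
illustrative stand-in for the sorted type table):

```c
#include <string.h>

/* Lower bound over a name-sorted slice [lo, n): returns the first index
 * whose name compares >= target, or n if every name is smaller (the
 * analogue of returning btf__type_cnt()). */
static int name_lower_bound(const char **names, int lo, int n,
			    const char *target)
{
	int l = lo, r = n, m;

	while (l < r) {
		m = l + (r - l) / 2;
		if (strcmp(names[m], target) < 0)
			l = m + 1;	/* everything up to m is too small */
		else
			r = m;		/* m may be the first >= target */
	}
	return l;
}
```

The caller then walks forward from the returned index while names still
match, checking the kind of each candidate.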
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
---
tools/lib/bpf/btf.c | 90 +++++++++++++++++++++++++++++++++------------
1 file changed, 66 insertions(+), 24 deletions(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index bf75f770d29a..02407a022afb 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -92,6 +92,8 @@ struct btf {
* - for split BTF counts number of types added on top of base BTF.
*/
__u32 nr_types;
+	/* the start ID of named types in a sorted BTF */
+ int named_start_id;
/* if not NULL, points to the base BTF on top of which the current
* split BTF is based
*/
@@ -897,46 +899,83 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
return type_id;
}
-__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
+static __s32 btf_find_type_by_name_bsearch(const struct btf *btf, const char *name,
+ __s32 start_id)
{
- __u32 i, nr_types = btf__type_cnt(btf);
-
- if (!strcmp(type_name, "void"))
- return 0;
-
- for (i = 1; i < nr_types; i++) {
- const struct btf_type *t = btf__type_by_id(btf, i);
- const char *name = btf__name_by_offset(btf, t->name_off);
-
- if (name && !strcmp(type_name, name))
- return i;
+ const struct btf_type *t;
+ const char *tname;
+ __s32 l, r, m;
+
+ l = start_id;
+ r = btf__type_cnt(btf) - 1;
+ while (l <= r) {
+ m = l + (r - l) / 2;
+ t = btf_type_by_id(btf, m);
+ tname = btf__str_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) >= 0) {
+ if (l == r)
+ return r;
+ r = m;
+ } else {
+ l = m + 1;
+ }
}
- return libbpf_err(-ENOENT);
+ return btf__type_cnt(btf);
}
static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
- const char *type_name, __u32 kind)
+ const char *type_name, __s32 kind)
{
- __u32 i, nr_types = btf__type_cnt(btf);
+ const struct btf_type *t;
+ const char *tname;
+ __s32 idx;
+
+ if (start_id < btf->start_id) {
+ idx = btf_find_by_name_kind(btf->base_btf, start_id,
+ type_name, kind);
+ if (idx >= 0)
+ return idx;
+ start_id = btf->start_id;
+ }
- if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
+ if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
return 0;
- for (i = start_id; i < nr_types; i++) {
- const struct btf_type *t = btf__type_by_id(btf, i);
- const char *name;
+ if (btf->named_start_id > 0 && type_name[0]) {
+ start_id = max(start_id, btf->named_start_id);
+ idx = btf_find_type_by_name_bsearch(btf, type_name, start_id);
+ for (; idx < btf__type_cnt(btf); idx++) {
+ t = btf__type_by_id(btf, idx);
+ tname = btf__str_by_offset(btf, t->name_off);
+ if (strcmp(tname, type_name) != 0)
+ return libbpf_err(-ENOENT);
+ if (kind < 0 || btf_kind(t) == kind)
+ return idx;
+ }
+ } else {
+ __u32 i, total;
- if (btf_kind(t) != kind)
- continue;
- name = btf__name_by_offset(btf, t->name_off);
- if (name && !strcmp(type_name, name))
- return i;
+ total = btf__type_cnt(btf);
+ for (i = start_id; i < total; i++) {
+ t = btf_type_by_id(btf, i);
+ if (kind > 0 && btf_kind(t) != kind)
+ continue;
+ tname = btf__str_by_offset(btf, t->name_off);
+ if (strcmp(tname, type_name) == 0)
+ return i;
+ }
}
return libbpf_err(-ENOENT);
}
+/* the kind value of -1 indicates that kind matching should be skipped */
+__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
+{
+ return btf_find_by_name_kind(btf, 1, type_name, -1);
+}
+
__s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
__u32 kind)
{
@@ -1006,6 +1045,7 @@ static struct btf *btf_new_empty(struct btf *base_btf)
btf->fd = -1;
btf->ptr_sz = sizeof(void *);
btf->swapped_endian = false;
+ btf->named_start_id = 0;
if (base_btf) {
btf->base_btf = base_btf;
@@ -1057,6 +1097,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
btf->start_id = 1;
btf->start_str_off = 0;
btf->fd = -1;
+ btf->named_start_id = 0;
if (base_btf) {
btf->base_btf = base_btf;
@@ -1715,6 +1756,7 @@ static void btf_invalidate_raw_data(struct btf *btf)
free(btf->raw_data_swapped);
btf->raw_data_swapped = NULL;
}
+ btf->named_start_id = 0;
}
/* Ensure BTF is ready to be modified (by splitting into a three memory
--
2.34.1
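The btf_find_type_by_name_bsearch() helper in the patch above is a lower-bound search: it returns the ID of the first type whose name compares greater than or equal to the target, or the total type count if every name compares lower. The same loop shape can be sketched in plain user-space C (the name array below is a hypothetical stand-in for BTF type records, not libbpf API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for a sorted run of BTF type names. */
static const char *demo_names[] = { "bpf_map", "bpf_prog", "task_struct" };

/*
 * Lower-bound search mirroring the loop shape of
 * btf_find_type_by_name_bsearch(): return the first index in
 * [start, n) whose name compares >= target, or n if every name
 * compares lower.
 */
static int name_lower_bound(const char **names, int start, int n,
			    const char *target)
{
	int l = start, r = n - 1, m;

	while (l <= r) {
		m = l + (r - l) / 2;
		if (strcmp(names[m], target) >= 0) {
			if (l == r)
				return r; /* converged on first candidate */
			r = m;
		} else {
			l = m + 1;
		}
	}
	return n; /* all names compare lower than target */
}
```

The "return n on miss" convention matters: the caller then linearly walks forward from the returned index and bails out as soon as the name no longer matches, which is how duplicate names of different kinds are handled.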
^ permalink raw reply related [flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF
2026-01-09 12:59 ` [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF Donglin Peng
@ 2026-01-14 0:29 ` Andrii Nakryiko
2026-01-14 1:49 ` Donglin Peng
0 siblings, 1 reply; 24+ messages in thread
From: Andrii Nakryiko @ 2026-01-14 0:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> This patch introduces binary search optimization for BTF type lookups
> when the BTF instance contains sorted types.
>
> The optimization significantly improves performance when searching for
> types in large BTF instances with sorted types. For unsorted BTF, the
> implementation falls back to the original linear search.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> ---
> tools/lib/bpf/btf.c | 90 +++++++++++++++++++++++++++++++++------------
> 1 file changed, 66 insertions(+), 24 deletions(-)
>
[...]
> static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> - const char *type_name, __u32 kind)
> + const char *type_name, __s32 kind)
> {
> - __u32 i, nr_types = btf__type_cnt(btf);
> + const struct btf_type *t;
> + const char *tname;
> + __s32 idx;
> +
> + if (start_id < btf->start_id) {
> + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> + type_name, kind);
> + if (idx >= 0)
> + return idx;
> + start_id = btf->start_id;
> + }
>
> - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> return 0;
>
> - for (i = start_id; i < nr_types; i++) {
> - const struct btf_type *t = btf__type_by_id(btf, i);
> - const char *name;
> + if (btf->named_start_id > 0 && type_name[0]) {
> + start_id = max(start_id, btf->named_start_id);
> + idx = btf_find_type_by_name_bsearch(btf, type_name, start_id);
> + for (; idx < btf__type_cnt(btf); idx++) {
I hope the compiler will optimize out btf__type_cnt() and won't be
recalculating it all the time, but I'd absolutely make sure by keeping
nr_types local variable which you deleted for some reason. Please
include in your follow up.
> + t = btf__type_by_id(btf, idx);
> + tname = btf__str_by_offset(btf, t->name_off);
> + if (strcmp(tname, type_name) != 0)
> + return libbpf_err(-ENOENT);
> + if (kind < 0 || btf_kind(t) == kind)
> + return idx;
> + }
> + } else {
> + __u32 i, total;
>
> - if (btf_kind(t) != kind)
> - continue;
> - name = btf__name_by_offset(btf, t->name_off);
> - if (name && !strcmp(type_name, name))
> - return i;
> + total = btf__type_cnt(btf);
and here you have a local total pre-calculated. Just move it outside
of this if/else and use in both branches
(I adjusted this trivially while applying, also unified idx,i -> id)
> + for (i = start_id; i < total; i++) {
> + t = btf_type_by_id(btf, i);
> + if (kind > 0 && btf_kind(t) != kind)
> + continue;
> + tname = btf__str_by_offset(btf, t->name_off);
> + if (strcmp(tname, type_name) == 0)
> + return i;
> + }
> }
>
[...]
* Re: [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF
2026-01-14 0:29 ` Andrii Nakryiko
@ 2026-01-14 1:49 ` Donglin Peng
2026-01-14 2:21 ` Donglin Peng
0 siblings, 1 reply; 24+ messages in thread
From: Donglin Peng @ 2026-01-14 1:49 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Wed, Jan 14, 2026 at 8:29 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> >
> > From: Donglin Peng <pengdonglin@xiaomi.com>
> >
> > This patch introduces binary search optimization for BTF type lookups
> > when the BTF instance contains sorted types.
> >
> > The optimization significantly improves performance when searching for
> > types in large BTF instances with sorted types. For unsorted BTF, the
> > implementation falls back to the original linear search.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > ---
> > tools/lib/bpf/btf.c | 90 +++++++++++++++++++++++++++++++++------------
> > 1 file changed, 66 insertions(+), 24 deletions(-)
> >
>
> [...]
>
> > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > - const char *type_name, __u32 kind)
> > + const char *type_name, __s32 kind)
> > {
> > - __u32 i, nr_types = btf__type_cnt(btf);
> > + const struct btf_type *t;
> > + const char *tname;
> > + __s32 idx;
> > +
> > + if (start_id < btf->start_id) {
> > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > + type_name, kind);
> > + if (idx >= 0)
> > + return idx;
> > + start_id = btf->start_id;
> > + }
> >
> > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > return 0;
> >
> > - for (i = start_id; i < nr_types; i++) {
> > - const struct btf_type *t = btf__type_by_id(btf, i);
> > - const char *name;
> > + if (btf->named_start_id > 0 && type_name[0]) {
> > + start_id = max(start_id, btf->named_start_id);
> > + idx = btf_find_type_by_name_bsearch(btf, type_name, start_id);
> > + for (; idx < btf__type_cnt(btf); idx++) {
>
> I hope the compiler will optimize out btf__type_cnt() and won't be
> recalculating it all the time, but I'd absolutely make sure by keeping
> nr_types local variable which you deleted for some reason. Please
> include in your follow up.
Thanks, I will optimize it.
>
> > + t = btf__type_by_id(btf, idx);
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (strcmp(tname, type_name) != 0)
> > + return libbpf_err(-ENOENT);
> > + if (kind < 0 || btf_kind(t) == kind)
> > + return idx;
> > + }
> > + } else {
> > + __u32 i, total;
> >
> > - if (btf_kind(t) != kind)
> > - continue;
> > - name = btf__name_by_offset(btf, t->name_off);
> > - if (name && !strcmp(type_name, name))
> > - return i;
> > + total = btf__type_cnt(btf);
>
> and here you have a local total pre-calculated. Just move it outside
> of this if/else and use in both branches
Thanks, I will fix it.
>
> (I adjusted this trivially while applying, also unified idx,i -> id)
Thanks, I will fix it in v13.
>
>
> > + for (i = start_id; i < total; i++) {
> > + t = btf_type_by_id(btf, i);
> > + if (kind > 0 && btf_kind(t) != kind)
> > + continue;
> > + tname = btf__str_by_offset(btf, t->name_off);
> > + if (strcmp(tname, type_name) == 0)
> > + return i;
> > + }
> > }
> >
>
> [...]
* Re: [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF
2026-01-14 1:49 ` Donglin Peng
@ 2026-01-14 2:21 ` Donglin Peng
0 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-14 2:21 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Wed, Jan 14, 2026 at 9:49 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> On Wed, Jan 14, 2026 at 8:29 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> > >
> > > From: Donglin Peng <pengdonglin@xiaomi.com>
> > >
> > > This patch introduces binary search optimization for BTF type lookups
> > > when the BTF instance contains sorted types.
> > >
> > > The optimization significantly improves performance when searching for
> > > types in large BTF instances with sorted types. For unsorted BTF, the
> > > implementation falls back to the original linear search.
> > >
> > > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > > Cc: Alexei Starovoitov <ast@kernel.org>
> > > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > > Cc: Alan Maguire <alan.maguire@oracle.com>
> > > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > > ---
> > > tools/lib/bpf/btf.c | 90 +++++++++++++++++++++++++++++++++------------
> > > 1 file changed, 66 insertions(+), 24 deletions(-)
> > >
> >
> > [...]
> >
> > > static __s32 btf_find_by_name_kind(const struct btf *btf, int start_id,
> > > - const char *type_name, __u32 kind)
> > > + const char *type_name, __s32 kind)
> > > {
> > > - __u32 i, nr_types = btf__type_cnt(btf);
> > > + const struct btf_type *t;
> > > + const char *tname;
> > > + __s32 idx;
> > > +
> > > + if (start_id < btf->start_id) {
> > > + idx = btf_find_by_name_kind(btf->base_btf, start_id,
> > > + type_name, kind);
> > > + if (idx >= 0)
> > > + return idx;
> > > + start_id = btf->start_id;
> > > + }
> > >
> > > - if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
> > > + if (kind == BTF_KIND_UNKN || strcmp(type_name, "void") == 0)
> > > return 0;
> > >
> > > - for (i = start_id; i < nr_types; i++) {
> > > - const struct btf_type *t = btf__type_by_id(btf, i);
> > > - const char *name;
> > > + if (btf->named_start_id > 0 && type_name[0]) {
> > > + start_id = max(start_id, btf->named_start_id);
> > > + idx = btf_find_type_by_name_bsearch(btf, type_name, start_id);
> > > + for (; idx < btf__type_cnt(btf); idx++) {
> >
> > I hope the compiler will optimize out btf__type_cnt() and won't be
> > recalculating it all the time, but I'd absolutely make sure by keeping
> > nr_types local variable which you deleted for some reason. Please
> > include in your follow up.
>
> Thanks, I will optimize it.
>
> >
> > > + t = btf__type_by_id(btf, idx);
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (strcmp(tname, type_name) != 0)
> > > + return libbpf_err(-ENOENT);
> > > + if (kind < 0 || btf_kind(t) == kind)
> > > + return idx;
> > > + }
> > > + } else {
> > > + __u32 i, total;
> > >
> > > - if (btf_kind(t) != kind)
> > > - continue;
> > > - name = btf__name_by_offset(btf, t->name_off);
> > > - if (name && !strcmp(type_name, name))
> > > - return i;
> > > + total = btf__type_cnt(btf);
> >
> > and here you have a local total pre-calculated. Just move it outside
> > of this if/else and use in both branches
>
> Thanks, I will fix it.
>
> >
> > (I adjusted this trivially while applying, also unified idx,i -> id)
Thank you for fixing it.
>
> Thanks, I will fix it in v13.
>
> >
> >
> > > + for (i = start_id; i < total; i++) {
> > > + t = btf_type_by_id(btf, i);
> > > + if (kind > 0 && btf_kind(t) != kind)
> > > + continue;
> > > + tname = btf__str_by_offset(btf, t->name_off);
> > > + if (strcmp(tname, type_name) == 0)
> > > + return i;
> > > + }
> > > }
> > >
> >
> > [...]
* [PATCH bpf-next v12 05/11] libbpf: Verify BTF sorting
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (3 preceding siblings ...)
2026-01-09 12:59 ` [PATCH bpf-next v12 04/11] libbpf: Optimize type lookup with binary search for sorted BTF Donglin Peng
@ 2026-01-09 12:59 ` Donglin Peng
2026-01-09 12:59 ` [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search Donglin Peng
` (6 subsequent siblings)
11 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
This patch checks whether the BTF is sorted by name in ascending
order. If sorted, binary search will be used when looking up types.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/lib/bpf/btf.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 02407a022afb..9a864de59597 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -899,6 +899,30 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
return type_id;
}
+static void btf_check_sorted(struct btf *btf)
+{
+ __u32 i, n, named_start_id = 0;
+
+ n = btf__type_cnt(btf);
+ for (i = btf->start_id + 1; i < n; i++) {
+ struct btf_type *ta = btf_type_by_id(btf, i - 1);
+ struct btf_type *tb = btf_type_by_id(btf, i);
+ const char *na = btf__str_by_offset(btf, ta->name_off);
+ const char *nb = btf__str_by_offset(btf, tb->name_off);
+
+ if (strcmp(na, nb) > 0)
+ return;
+
+ if (named_start_id == 0 && na[0] != '\0')
+ named_start_id = i - 1;
+ if (named_start_id == 0 && nb[0] != '\0')
+ named_start_id = i;
+ }
+
+ if (named_start_id)
+ btf->named_start_id = named_start_id;
+}
+
static __s32 btf_find_type_by_name_bsearch(const struct btf *btf, const char *name,
__s32 start_id)
{
@@ -1132,6 +1156,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
err = err ?: btf_sanity_check(btf);
if (err)
goto done;
+ btf_check_sorted(btf);
done:
if (err) {
--
2.34.1
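The btf_check_sorted() scan above compares only adjacent name pairs and records where the named types begin. A minimal sketch of the same logic over a plain name array (hypothetical data, not libbpf API; anonymous types carry empty names and are expected to sort first):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical type-name arrays: anonymous (empty) names first, then
 * named types in ascending order, as btf_check_sorted() expects.
 */
static const char *sorted_names[] = { "", "", "task_struct", "tcp_sock" };
static const char *unsorted_names[] = { "tcp_sock", "task_struct" };

/*
 * Sketch of the adjacent-pair scan in btf_check_sorted(): return 0 if
 * any pair is out of ascending name order (no binary-search fast
 * path), otherwise the index of the first non-empty name, playing the
 * role of named_start_id.
 */
static int check_sorted(const char **names, int start, int n)
{
	int i, named_start = 0;

	for (i = start + 1; i < n; i++) {
		if (strcmp(names[i - 1], names[i]) > 0)
			return 0; /* unsorted: fall back to linear search */
		if (named_start == 0 && names[i - 1][0] != '\0')
			named_start = i - 1;
		if (named_start == 0 && names[i][0] != '\0')
			named_start = i;
	}
	return named_start;
}
```

Because the check is a single O(n) pass at parse time, unsorted BTF pays only this one scan and then keeps the original linear-search behavior.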
* [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (4 preceding siblings ...)
2026-01-09 12:59 ` [PATCH bpf-next v12 05/11] libbpf: Verify BTF sorting Donglin Peng
@ 2026-01-09 12:59 ` Donglin Peng
2026-01-14 0:29 ` Andrii Nakryiko
2026-01-09 12:59 ` [PATCH bpf-next v12 07/11] btf: Verify BTF sorting Donglin Peng
` (5 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
Improve btf_find_by_name_kind() performance by adding binary search
support for sorted types. Falls back to linear search for compatibility.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
---
include/linux/btf.h | 1 +
kernel/bpf/btf.c | 91 ++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 83 insertions(+), 9 deletions(-)
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 691f09784933..78dc79810c7d 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -219,6 +219,7 @@ bool btf_is_module(const struct btf *btf);
bool btf_is_vmlinux(const struct btf *btf);
struct module *btf_try_get_module(const struct btf *btf);
u32 btf_nr_types(const struct btf *btf);
+u32 btf_named_start_id(const struct btf *btf, bool own);
struct btf *btf_base_btf(const struct btf *btf);
bool btf_type_is_i32(const struct btf_type *t);
bool btf_type_is_i64(const struct btf_type *t);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 539c9fdea41d..d1f4b984100d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -259,6 +259,7 @@ struct btf {
void *nohdr_data;
struct btf_header hdr;
u32 nr_types; /* includes VOID for base BTF */
+ u32 named_start_id;
u32 types_size;
u32 data_size;
refcount_t refcnt;
@@ -494,6 +495,11 @@ static bool btf_type_is_modifier(const struct btf_type *t)
return false;
}
+static int btf_start_id(const struct btf *btf)
+{
+ return btf->start_id + (btf->base_btf ? 0 : 1);
+}
+
bool btf_type_is_void(const struct btf_type *t)
{
return t == &btf_void;
@@ -544,21 +550,85 @@ u32 btf_nr_types(const struct btf *btf)
return total;
}
+/* btf_named_start_id - Get the named starting ID for the BTF
+ * @btf: Pointer to the target BTF object
+ * @own: Flag indicating whether to query only the current BTF (true = current BTF only,
+ * false = recursively traverse the base BTF chain)
+ *
+ * Return value rules:
+ * 1. For a sorted btf, return its named_start_id
+ * 2. Else for a split BTF, return its start_id
+ * 3. Else for a base BTF, return 1
+ */
+u32 btf_named_start_id(const struct btf *btf, bool own)
+{
+ const struct btf *base_btf = btf;
+
+ while (!own && base_btf->base_btf)
+ base_btf = base_btf->base_btf;
+
+ return base_btf->named_start_id ?: (base_btf->start_id ?: 1);
+}
+
+static s32 btf_find_by_name_kind_bsearch(const struct btf *btf, const char *name)
+{
+ const struct btf_type *t;
+ const char *tname;
+ s32 l, r, m;
+
+ l = btf_named_start_id(btf, true);
+ r = btf_nr_types(btf) - 1;
+ while (l <= r) {
+ m = l + (r - l) / 2;
+ t = btf_type_by_id(btf, m);
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) >= 0) {
+ if (l == r)
+ return r;
+ r = m;
+ } else {
+ l = m + 1;
+ }
+ }
+
+ return btf_nr_types(btf);
+}
+
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind)
{
+ const struct btf *base_btf = btf_base_btf(btf);
const struct btf_type *t;
const char *tname;
- u32 i, total;
+ s32 idx;
- total = btf_nr_types(btf);
- for (i = 1; i < total; i++) {
- t = btf_type_by_id(btf, i);
- if (BTF_INFO_KIND(t->info) != kind)
- continue;
+ if (base_btf) {
+ idx = btf_find_by_name_kind(base_btf, name, kind);
+ if (idx > 0)
+ return idx;
+ }
- tname = btf_name_by_offset(btf, t->name_off);
- if (!strcmp(tname, name))
- return i;
+ if (btf->named_start_id > 0 && name[0]) {
+ idx = btf_find_by_name_kind_bsearch(btf, name);
+ for (; idx < btf_nr_types(btf); idx++) {
+ t = btf_type_by_id(btf, idx);
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) != 0)
+ return -ENOENT;
+ if (BTF_INFO_KIND(t->info) == kind)
+ return idx;
+ }
+ } else {
+ u32 i, total;
+
+ total = btf_nr_types(btf);
+ for (i = btf_start_id(btf); i < total; i++) {
+ t = btf_type_by_id(btf, i);
+ if (BTF_INFO_KIND(t->info) != kind)
+ continue;
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (strcmp(tname, name) == 0)
+ return i;
+ }
}
return -ENOENT;
@@ -5791,6 +5861,7 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
goto errout;
}
env->btf = btf;
+ btf->named_start_id = 0;
data = kvmalloc(attr->btf_size, GFP_KERNEL | __GFP_NOWARN);
if (!data) {
@@ -6210,6 +6281,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
btf->data = data;
btf->data_size = data_size;
btf->kernel_btf = true;
+ btf->named_start_id = 0;
snprintf(btf->name, sizeof(btf->name), "%s", name);
err = btf_parse_hdr(env);
@@ -6327,6 +6399,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
btf->start_id = base_btf->nr_types;
btf->start_str_off = base_btf->hdr.str_len;
btf->kernel_btf = true;
+ btf->named_start_id = 0;
snprintf(btf->name, sizeof(btf->name), "%s", module_name);
btf->data = kvmemdup(data, data_size, GFP_KERNEL | __GFP_NOWARN);
--
2.34.1
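The kernel-side btf_find_by_name_kind() above combines the lower-bound search with a short forward walk over the run of identically-named types to match the requested kind. A user-space sketch of that combination (parallel name/kind tables are hypothetical stand-ins; the kind numbers are made up, not real BTF_KIND_* values):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sorted name/kind tables: two types named "task" of
 * different (made-up) kinds, e.g. a typedef and a struct.
 */
static const char *tnames[] = { "dev", "task", "task" };
static const int tkinds[] = { 1, 8, 4 };

/* Same lower-bound convention as btf_find_by_name_kind_bsearch(). */
static int lower_bound(const char **names, int start, int n,
		       const char *target)
{
	int l = start, r = n - 1, m;

	while (l <= r) {
		m = l + (r - l) / 2;
		if (strcmp(names[m], target) >= 0) {
			if (l == r)
				return r;
			r = m;
		} else {
			l = m + 1;
		}
	}
	return n;
}

/*
 * Sketch of the sorted-path lookup in btf_find_by_name_kind():
 * lower-bound to the first entry with the name, then walk the run of
 * identically-named entries until the kind matches. Returns the index
 * or -1 (standing in for -ENOENT).
 */
static int find_by_name_kind(const char **names, const int *kinds,
			     int start, int n, const char *name, int kind)
{
	int idx = lower_bound(names, start, n, name);

	for (; idx < n; idx++) {
		if (strcmp(names[idx], name) != 0)
			return -1; /* left the run of equal names */
		if (kinds[idx] == kind)
			return idx;
	}
	return -1;
}
```

Note how the base-BTF recursion in the real function is orthogonal to this core: each BTF object in the chain runs this same sorted-path (or linear) lookup over its own ID range.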
* Re: [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search
2026-01-09 12:59 ` [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search Donglin Peng
@ 2026-01-14 0:29 ` Andrii Nakryiko
2026-01-14 1:49 ` Donglin Peng
0 siblings, 1 reply; 24+ messages in thread
From: Andrii Nakryiko @ 2026-01-14 0:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> Improve btf_find_by_name_kind() performance by adding binary search
> support for sorted types. Falls back to linear search for compatibility.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> ---
> include/linux/btf.h | 1 +
> kernel/bpf/btf.c | 91 ++++++++++++++++++++++++++++++++++++++++-----
> 2 files changed, 83 insertions(+), 9 deletions(-)
>
[...]
> s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind)
> {
> + const struct btf *base_btf = btf_base_btf(btf);
> const struct btf_type *t;
> const char *tname;
> - u32 i, total;
> + s32 idx;
>
> - total = btf_nr_types(btf);
> - for (i = 1; i < total; i++) {
> - t = btf_type_by_id(btf, i);
> - if (BTF_INFO_KIND(t->info) != kind)
> - continue;
> + if (base_btf) {
> + idx = btf_find_by_name_kind(base_btf, name, kind);
> + if (idx > 0)
> + return idx;
> + }
>
> - tname = btf_name_by_offset(btf, t->name_off);
> - if (!strcmp(tname, name))
> - return i;
> + if (btf->named_start_id > 0 && name[0]) {
> + idx = btf_find_by_name_kind_bsearch(btf, name);
> + for (; idx < btf_nr_types(btf); idx++) {
same nit about inconsistent btf_nr_types() usage between two branches:
compute once early and use in both branches
(fixed up similarly to the libbpf implementation; also fixed up comment style)
> + t = btf_type_by_id(btf, idx);
> + tname = btf_name_by_offset(btf, t->name_off);
> + if (strcmp(tname, name) != 0)
> + return -ENOENT;
> + if (BTF_INFO_KIND(t->info) == kind)
> + return idx;
> + }
[...]
* Re: [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search
2026-01-14 0:29 ` Andrii Nakryiko
@ 2026-01-14 1:49 ` Donglin Peng
0 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-14 1:49 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Wed, Jan 14, 2026 at 8:29 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> >
> > From: Donglin Peng <pengdonglin@xiaomi.com>
> >
> > Improve btf_find_by_name_kind() performance by adding binary search
> > support for sorted types. Falls back to linear search for compatibility.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > ---
> > include/linux/btf.h | 1 +
> > kernel/bpf/btf.c | 91 ++++++++++++++++++++++++++++++++++++++++-----
> > 2 files changed, 83 insertions(+), 9 deletions(-)
> >
>
> [...]
>
> > s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind)
> > {
> > + const struct btf *base_btf = btf_base_btf(btf);
> > const struct btf_type *t;
> > const char *tname;
> > - u32 i, total;
> > + s32 idx;
> >
> > - total = btf_nr_types(btf);
> > - for (i = 1; i < total; i++) {
> > - t = btf_type_by_id(btf, i);
> > - if (BTF_INFO_KIND(t->info) != kind)
> > - continue;
> > + if (base_btf) {
> > + idx = btf_find_by_name_kind(base_btf, name, kind);
> > + if (idx > 0)
> > + return idx;
> > + }
> >
> > - tname = btf_name_by_offset(btf, t->name_off);
> > - if (!strcmp(tname, name))
> > - return i;
> > + if (btf->named_start_id > 0 && name[0]) {
> > + idx = btf_find_by_name_kind_bsearch(btf, name);
> > + for (; idx < btf_nr_types(btf); idx++) {
>
> same nit about inconsistent btf_nr_types() usage between two branches:
> compute once early and use in both branches
>
> (fixed up similarly to the libbpf implementation; also fixed up comment style)
Thanks, I will fix it.
>
> > + t = btf_type_by_id(btf, idx);
> > + tname = btf_name_by_offset(btf, t->name_off);
> > + if (strcmp(tname, name) != 0)
> > + return -ENOENT;
> > + if (BTF_INFO_KIND(t->info) == kind)
> > + return idx;
> > + }
>
> [...]
* [PATCH bpf-next v12 07/11] btf: Verify BTF sorting
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (5 preceding siblings ...)
2026-01-09 12:59 ` [PATCH bpf-next v12 06/11] btf: Optimize type lookup with binary search Donglin Peng
@ 2026-01-09 12:59 ` Donglin Peng
2026-01-14 0:29 ` Andrii Nakryiko
2026-01-09 13:00 ` [PATCH bpf-next v12 08/11] bpf: Skip anonymous types in type lookup for performance Donglin Peng
` (4 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 12:59 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
This patch checks whether the BTF is sorted by name in ascending order.
If sorted, binary search will be used when looking up types.
Specifically, vmlinux and kernel module BTFs are always sorted during
the build phase with anonymous types placed before named types, so we
only need to identify the starting ID of named types.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
---
kernel/bpf/btf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index d1f4b984100d..12eecf59d71f 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -550,6 +550,46 @@ u32 btf_nr_types(const struct btf *btf)
return total;
}
+/* Note that vmlinux and kernel module BTFs are always sorted
+ * during the building phase.
+ */
+static void btf_check_sorted(struct btf *btf)
+{
+ u32 i, n, named_start_id = 0;
+
+ n = btf_nr_types(btf);
+ if (btf_is_vmlinux(btf)) {
+ for (i = btf_start_id(btf); i < n; i++) {
+ const struct btf_type *t = btf_type_by_id(btf, i);
+ const char *n = btf_name_by_offset(btf, t->name_off);
+
+ if (n[0] != '\0') {
+ btf->named_start_id = i;
+ return;
+ }
+ }
+ return;
+ }
+
+ for (i = btf_start_id(btf) + 1; i < n; i++) {
+ const struct btf_type *ta = btf_type_by_id(btf, i - 1);
+ const struct btf_type *tb = btf_type_by_id(btf, i);
+ const char *na = btf_name_by_offset(btf, ta->name_off);
+ const char *nb = btf_name_by_offset(btf, tb->name_off);
+
+ if (strcmp(na, nb) > 0)
+ return;
+
+ if (named_start_id == 0 && na[0] != '\0')
+ named_start_id = i - 1;
+ if (named_start_id == 0 && nb[0] != '\0')
+ named_start_id = i;
+ }
+
+ if (named_start_id)
+ btf->named_start_id = named_start_id;
+}
+
/* btf_named_start_id - Get the named starting ID for the BTF
* @btf: Pointer to the target BTF object
* @own: Flag indicating whether to query only the current BTF (true = current BTF only,
@@ -6302,6 +6342,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
if (err)
goto errout;
+ btf_check_sorted(btf);
refcount_set(&btf->refcnt, 1);
return btf;
@@ -6436,6 +6477,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
}
btf_verifier_env_free(env);
+ btf_check_sorted(btf);
refcount_set(&btf->refcnt, 1);
return btf;
--
2.34.1
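For vmlinux (and module) BTF, the patch above skips the pairwise comparison entirely: the build step already guarantees sorted order with anonymous types first, so only the index of the first named type needs to be located. A tiny sketch of that fast path (hypothetical name array, empty string meaning an anonymous type):

```c
#include <assert.h>

/* Hypothetical vmlinux-style name table: the build step guarantees
 * sorted order with all anonymous (empty-name) types first.
 */
static const char *vmlinux_names[] = { "", "", "", "file", "inode" };

/*
 * Sketch of the vmlinux fast path in btf_check_sorted(): no pairwise
 * comparison is needed, only the index of the first named type, which
 * becomes named_start_id. Returns 0 if every name is empty.
 */
static int first_named(const char **names, int start, int n)
{
	int i;

	for (i = start; i < n; i++) {
		if (names[i][0] != '\0')
			return i;
	}
	return 0; /* no named types */
}
```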
* Re: [PATCH bpf-next v12 07/11] btf: Verify BTF sorting
2026-01-09 12:59 ` [PATCH bpf-next v12 07/11] btf: Verify BTF sorting Donglin Peng
@ 2026-01-14 0:29 ` Andrii Nakryiko
2026-01-14 2:17 ` Donglin Peng
0 siblings, 1 reply; 24+ messages in thread
From: Andrii Nakryiko @ 2026-01-14 0:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> This patch checks whether the BTF is sorted by name in ascending order.
> If sorted, binary search will be used when looking up types.
>
> Specifically, vmlinux and kernel module BTFs are always sorted during
> the build phase with anonymous types placed before named types, so we
> only need to identify the starting ID of named types.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> ---
> kernel/bpf/btf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 42 insertions(+)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index d1f4b984100d..12eecf59d71f 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -550,6 +550,46 @@ u32 btf_nr_types(const struct btf *btf)
> return total;
> }
>
> +/* Note that vmlinux and kernel module BTFs are always sorted
wrong comment style, I'll fix it up when applying, but keep this in
mind for the future
> + * during the building phase.
> + */
> +static void btf_check_sorted(struct btf *btf)
> +{
> + u32 i, n, named_start_id = 0;
> +
> + n = btf_nr_types(btf);
> + if (btf_is_vmlinux(btf)) {
> + for (i = btf_start_id(btf); i < n; i++) {
> + const struct btf_type *t = btf_type_by_id(btf, i);
> + const char *n = btf_name_by_offset(btf, t->name_off);
> +
> + if (n[0] != '\0') {
> + btf->named_start_id = i;
> + return;
> + }
> + }
> + return;
> + }
> +
> + for (i = btf_start_id(btf) + 1; i < n; i++) {
> + const struct btf_type *ta = btf_type_by_id(btf, i - 1);
> + const struct btf_type *tb = btf_type_by_id(btf, i);
> + const char *na = btf_name_by_offset(btf, ta->name_off);
> + const char *nb = btf_name_by_offset(btf, tb->name_off);
> +
> + if (strcmp(na, nb) > 0)
> + return;
> +
> + if (named_start_id == 0 && na[0] != '\0')
> + named_start_id = i - 1;
> + if (named_start_id == 0 && nb[0] != '\0')
> + named_start_id = i;
> + }
> +
> + if (named_start_id)
> + btf->named_start_id = named_start_id;
> +}
> +
> /* btf_named_start_id - Get the named starting ID for the BTF
> * @btf: Pointer to the target BTF object
> * @own: Flag indicating whether to query only the current BTF (true = current BTF only,
> @@ -6302,6 +6342,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
> if (err)
> goto errout;
>
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
>
> return btf;
> @@ -6436,6 +6477,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
> }
>
> btf_verifier_env_free(env);
> + btf_check_sorted(btf);
> refcount_set(&btf->refcnt, 1);
> return btf;
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v12 07/11] btf: Verify BTF sorting
2026-01-14 0:29 ` Andrii Nakryiko
@ 2026-01-14 2:17 ` Donglin Peng
0 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-14 2:17 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire
On Wed, Jan 14, 2026 at 8:30 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> >
> > From: Donglin Peng <pengdonglin@xiaomi.com>
> >
> > This patch checks whether the BTF is sorted by name in ascending order.
> > If sorted, binary search will be used when looking up types.
> >
> > Specifically, vmlinux and kernel module BTFs are always sorted during
> > the build phase with anonymous types placed before named types, so we
> > only need to identify the starting ID of named types.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > ---
> > kernel/bpf/btf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 42 insertions(+)
> >
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index d1f4b984100d..12eecf59d71f 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -550,6 +550,46 @@ u32 btf_nr_types(const struct btf *btf)
> > return total;
> > }
> >
> > +/* Note that vmlinux and kernel module BTFs are always sorted
>
> wrong comment style, I'll fix it up when applying, but keep this in
> mind for the future
Thanks, understood.
>
>
> > + * during the building phase.
> > + */
> > +static void btf_check_sorted(struct btf *btf)
> > +{
> > + u32 i, n, named_start_id = 0;
> > +
> > + n = btf_nr_types(btf);
> > + if (btf_is_vmlinux(btf)) {
> > + for (i = btf_start_id(btf); i < n; i++) {
> > + const struct btf_type *t = btf_type_by_id(btf, i);
> > + const char *n = btf_name_by_offset(btf, t->name_off);
> > +
> > + if (n[0] != '\0') {
> > + btf->named_start_id = i;
> > + return;
> > + }
> > + }
> > + return;
> > + }
> > +
> > + for (i = btf_start_id(btf) + 1; i < n; i++) {
> > + const struct btf_type *ta = btf_type_by_id(btf, i - 1);
> > + const struct btf_type *tb = btf_type_by_id(btf, i);
> > + const char *na = btf_name_by_offset(btf, ta->name_off);
> > + const char *nb = btf_name_by_offset(btf, tb->name_off);
> > +
> > + if (strcmp(na, nb) > 0)
> > + return;
> > +
> > + if (named_start_id == 0 && na[0] != '\0')
> > + named_start_id = i - 1;
> > + if (named_start_id == 0 && nb[0] != '\0')
> > + named_start_id = i;
> > + }
> > +
> > + if (named_start_id)
> > + btf->named_start_id = named_start_id;
> > +}
> > +
> > /* btf_named_start_id - Get the named starting ID for the BTF
> > * @btf: Pointer to the target BTF object
> > * @own: Flag indicating whether to query only the current BTF (true = current BTF only,
> > @@ -6302,6 +6342,7 @@ static struct btf *btf_parse_base(struct btf_verifier_env *env, const char *name
> > if (err)
> > goto errout;
> >
> > + btf_check_sorted(btf);
> > refcount_set(&btf->refcnt, 1);
> >
> > return btf;
> > @@ -6436,6 +6477,7 @@ static struct btf *btf_parse_module(const char *module_name, const void *data,
> > }
> >
> > btf_verifier_env_free(env);
> > + btf_check_sorted(btf);
> > refcount_set(&btf->refcnt, 1);
> > return btf;
> >
> > --
> > 2.34.1
> >
^ permalink raw reply [flat|nested] 24+ messages in thread
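[Editorial sketch] The btf_check_sorted() logic discussed above boils down to one pass over adjacent names, with anonymous (empty-name) entries expected to sort first. A minimal user-space sketch of the same idea; the plain names array and the named_start output stand in for the kernel's BTF accessors and btf->named_start_id:

```c
#include <string.h>
#include <stdbool.h>

/* Return true if names[] is ascending by strcmp(), mirroring the
 * adjacent-pair walk in btf_check_sorted(). On success, *named_start is
 * set to the index of the first non-empty name, or -1 if all are empty. */
static bool names_sorted(const char * const *names, int n, int *named_start)
{
	int i, start = -1;

	for (i = 1; i < n; i++) {
		if (strcmp(names[i - 1], names[i]) > 0)
			return false; /* out of order: caller keeps linear search */
	}
	for (i = 0; i < n; i++) {
		if (names[i][0] != '\0') {
			start = i;
			break;
		}
	}
	*named_start = start;
	return true;
}
```

Because empty strings compare lowest, a sorted BTF automatically has its anonymous types grouped at the front, which is what makes recording a single starting ID sufficient.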
* [PATCH bpf-next v12 08/11] bpf: Skip anonymous types in type lookup for performance
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (6 preceding siblings ...)
2026-01-09 12:59 ` [PATCH bpf-next v12 07/11] btf: Verify BTF sorting Donglin Peng
@ 2026-01-09 13:00 ` Donglin Peng
2026-01-09 13:00 ` [PATCH bpf-next v12 09/11] bpf: Optimize the performance of find_bpffs_btf_enums Donglin Peng
` (3 subsequent siblings)
11 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 13:00 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
Currently, vmlinux and kernel module BTFs are unconditionally
sorted during the build phase, with anonymous types placed before
named types. Name-based lookups can therefore skip the anonymous
block and start at the first named type. My vmlinux BTF contains
61,747 anonymous types, so each lookup avoids that many loop
iterations.
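[Editorial sketch] The saving is easy to picture: a lookup that previously walked every type ID from 1 can begin at the first named ID instead. A toy illustration; the names array and the named_start_id parameter are illustrative stand-ins for the kernel's btf_named_start_id() helper, not its API:

```c
#include <string.h>

/* Linear lookup that begins at named_start_id, skipping the block of
 * anonymous (empty-name) entries that build-time sorting places first. */
static int find_named(const char * const *names, int n, int named_start_id,
		      const char *want)
{
	int i;

	for (i = named_start_id; i < n; i++) {
		if (strcmp(names[i], want) == 0)
			return i;
	}
	return -1;
}
```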
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/btf.c | 10 ++++++----
kernel/bpf/verifier.c | 7 +------
2 files changed, 7 insertions(+), 10 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 12eecf59d71f..686dbe18a97a 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3534,7 +3534,8 @@ const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type
const struct btf_type *t;
int len, id;
- id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key, 0);
+ id = btf_find_next_decl_tag(btf, pt, comp_idx, tag_key,
+ btf_named_start_id(btf, false) - 1);
if (id < 0)
return ERR_PTR(id);
@@ -7844,12 +7845,13 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
tname);
return -EINVAL;
}
+
/* Convert BTF function arguments into verifier types.
* Only PTR_TO_CTX and SCALAR are supported atm.
*/
for (i = 0; i < nargs; i++) {
u32 tags = 0;
- int id = 0;
+ int id = btf_named_start_id(btf, false) - 1;
/* 'arg:<tag>' decl_tag takes precedence over derivation of
* register type from BTF type itself
@@ -9331,7 +9333,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
}
/* Attempt to find target candidates in vmlinux BTF first */
- cands = bpf_core_add_cands(cands, main_btf, 1);
+ cands = bpf_core_add_cands(cands, main_btf, btf_named_start_id(main_btf, true));
if (IS_ERR(cands))
return ERR_CAST(cands);
@@ -9363,7 +9365,7 @@ bpf_core_find_cands(struct bpf_core_ctx *ctx, u32 local_type_id)
*/
btf_get(mod_btf);
spin_unlock_bh(&btf_idr_lock);
- cands = bpf_core_add_cands(cands, mod_btf, btf_nr_types(main_btf));
+ cands = bpf_core_add_cands(cands, mod_btf, btf_named_start_id(mod_btf, true));
btf_put(mod_btf);
if (IS_ERR(cands))
return ERR_CAST(cands);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 53635ea2e41b..53aa8f503775 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20652,12 +20652,7 @@ static int find_btf_percpu_datasec(struct btf *btf)
* types to look at only module's own BTF types.
*/
n = btf_nr_types(btf);
- if (btf_is_module(btf))
- i = btf_nr_types(btf_vmlinux);
- else
- i = 1;
-
- for(; i < n; i++) {
+ for (i = btf_named_start_id(btf, true); i < n; i++) {
t = btf_type_by_id(btf, i);
if (BTF_INFO_KIND(t->info) != BTF_KIND_DATASEC)
continue;
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH bpf-next v12 09/11] bpf: Optimize the performance of find_bpffs_btf_enums
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (7 preceding siblings ...)
2026-01-09 13:00 ` [PATCH bpf-next v12 08/11] bpf: Skip anonymous types in type lookup for performance Donglin Peng
@ 2026-01-09 13:00 ` Donglin Peng
2026-01-09 13:00 ` [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
` (2 subsequent siblings)
11 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 13:00 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
Currently, vmlinux BTF is unconditionally sorted during the
build phase, so btf_find_by_name_kind() takes its binary search
branch. Optimize find_bpffs_btf_enums() by calling
btf_find_by_name_kind() for each enum instead of scanning all
types linearly.
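[Editorial sketch] The refactor replaces one linear scan with one lookup per wanted enum, driven by a small table. The shape of that table-driven loop, sketched with a hypothetical lookup callback standing in for btf_find_by_name_kind(); demo_names and demo_lookup exist only for illustration:

```c
#include <string.h>

/* Resolve several names via one lookup call each; returns 0 only if
 * every entry was found, mirroring the rewritten find_bpffs_btf_enums().
 * lookup() is a stand-in for btf_find_by_name_kind(). */
static int resolve_all(const char * const *wanted, int n_wanted,
		       int (*lookup)(const char *name), int *ids)
{
	int i;

	for (i = 0; i < n_wanted; i++) {
		ids[i] = lookup(wanted[i]);
		if (ids[i] < 0)
			return -1; /* the kernel code returns -ESRCH here */
	}
	return 0;
}

/* Demo lookup over a tiny table (illustrative only). */
static const char * const demo_names[] = {
	"bpf_attach_type", "bpf_cmd", "bpf_map_type", "bpf_prog_type",
};

static int demo_lookup(const char *name)
{
	int i;

	for (i = 0; i < 4; i++)
		if (strcmp(demo_names[i], name) == 0)
			return i;
	return -1;
}
```

Failing fast on the first miss matches the original semantics: the caller needs all four enums or none.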
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
kernel/bpf/inode.c | 42 +++++++++++++++++-------------------------
1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index 9f866a010dad..005ea3a2cda7 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -600,10 +600,17 @@ struct bpffs_btf_enums {
static int find_bpffs_btf_enums(struct bpffs_btf_enums *info)
{
+ struct {
+ const struct btf_type **type;
+ const char *name;
+ } btf_enums[] = {
+ {&info->cmd_t, "bpf_cmd"},
+ {&info->map_t, "bpf_map_type"},
+ {&info->prog_t, "bpf_prog_type"},
+ {&info->attach_t, "bpf_attach_type"},
+ };
const struct btf *btf;
- const struct btf_type *t;
- const char *name;
- int i, n;
+ int i, id;
memset(info, 0, sizeof(*info));
@@ -615,31 +622,16 @@ static int find_bpffs_btf_enums(struct bpffs_btf_enums *info)
info->btf = btf;
- for (i = 1, n = btf_nr_types(btf); i < n; i++) {
- t = btf_type_by_id(btf, i);
- if (!btf_type_is_enum(t))
- continue;
-
- name = btf_name_by_offset(btf, t->name_off);
- if (!name)
- continue;
-
- if (strcmp(name, "bpf_cmd") == 0)
- info->cmd_t = t;
- else if (strcmp(name, "bpf_map_type") == 0)
- info->map_t = t;
- else if (strcmp(name, "bpf_prog_type") == 0)
- info->prog_t = t;
- else if (strcmp(name, "bpf_attach_type") == 0)
- info->attach_t = t;
- else
- continue;
+ for (i = 0; i < ARRAY_SIZE(btf_enums); i++) {
+ id = btf_find_by_name_kind(btf, btf_enums[i].name,
+ BTF_KIND_ENUM);
+ if (id < 0)
+ return -ESRCH;
- if (info->cmd_t && info->map_t && info->prog_t && info->attach_t)
- return 0;
+ *btf_enums[i].type = btf_type_by_id(btf, id);
}
- return -ESRCH;
+ return 0;
}
static bool find_btf_enum_const(const struct btf *btf, const struct btf_type *enum_t,
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread* [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (8 preceding siblings ...)
2026-01-09 13:00 ` [PATCH bpf-next v12 09/11] bpf: Optimize the performance of find_bpffs_btf_enums Donglin Peng
@ 2026-01-09 13:00 ` Donglin Peng
2026-01-14 0:29 ` Andrii Nakryiko
2026-01-09 13:00 ` [PATCH bpf-next v12 11/11] btf: Refactor the code by calling str_is_empty Donglin Peng
2026-01-14 0:30 ` [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search patchwork-bot+netdevbpf
11 siblings, 1 reply; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 13:00 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire, Andrii Nakryiko
From: Donglin Peng <pengdonglin@xiaomi.com>
Leverage the performance improvement of btf__find_by_name_kind() when
BTF is sorted. For sorted BTF, the function uses binary search with
O(log n) complexity instead of linear search, providing significant
performance benefits, especially for large BTF like vmlinux.
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/btf.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 9a864de59597..918d9fa6ec36 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -659,29 +659,21 @@ static int determine_ptr_size(const struct btf *btf)
"int long unsigned",
};
const struct btf_type *t;
- const char *name;
- int i, j, n;
+ int i, id;
if (btf->base_btf && btf->base_btf->ptr_sz > 0)
return btf->base_btf->ptr_sz;
- n = btf__type_cnt(btf);
- for (i = 1; i < n; i++) {
- t = btf__type_by_id(btf, i);
- if (!btf_is_int(t))
+ for (i = 0; i < ARRAY_SIZE(long_aliases); i++) {
+ id = btf__find_by_name_kind(btf, long_aliases[i], BTF_KIND_INT);
+ if (id < 0)
continue;
+ t = btf__type_by_id(btf, id);
if (t->size != 4 && t->size != 8)
continue;
- name = btf__name_by_offset(btf, t->name_off);
- if (!name)
- continue;
-
- for (j = 0; j < ARRAY_SIZE(long_aliases); j++) {
- if (strcmp(name, long_aliases[j]) == 0)
- return t->size;
- }
+ return t->size;
}
return -1;
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size
2026-01-09 13:00 ` [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
@ 2026-01-14 0:29 ` Andrii Nakryiko
2026-01-14 2:00 ` Donglin Peng
0 siblings, 1 reply; 24+ messages in thread
From: Andrii Nakryiko @ 2026-01-14 0:29 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire, Andrii Nakryiko
On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
>
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> Leverage the performance improvement of btf__find_by_name_kind() when
> BTF is sorted. For sorted BTF, the function uses binary search with
> O(log n) complexity instead of linear search, providing significant
> performance benefits, especially for large BTF like vmlinux.
>
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Cc: Alan Maguire <alan.maguire@oracle.com>
> Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> tools/lib/bpf/btf.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
This change will be beneficial only if btf is sorted, otherwise the
previous approach is generally faster. So on older kernels this will
be significantly slower.
If we want to optimize determine_ptr_size() at all, I think we will
have to take into account whether BTF is sorted or not.
Or just not bother at all with this optimization.
I'll drop this patch.
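[Editorial sketch] The concern above can be expressed as a simple dispatch: binary search only when the sortedness check passed, linear scan otherwise. A hypothetical illustration of taking sortedness into account; the sorted flag and the flat names array are stand-ins for libbpf's internal state, not its API:

```c
#include <string.h>
#include <stdbool.h>

/* Pick the search strategy based on whether the name table is known to
 * be sorted -- the "take into account whether BTF is sorted" idea. */
static int find_name(const char * const *names, int n, bool sorted,
		     const char *want)
{
	int i;

	if (sorted) {
		int lo = 0, hi = n - 1;

		while (lo <= hi) {	/* O(log n) on sorted input */
			int mid = lo + (hi - lo) / 2;
			int cmp = strcmp(names[mid], want);

			if (cmp == 0)
				return mid;
			if (cmp < 0)
				lo = mid + 1;
			else
				hi = mid - 1;
		}
		return -1;
	}

	for (i = 0; i < n; i++)		/* O(n) fallback for unsorted BTF */
		if (strcmp(names[i], want) == 0)
			return i;
	return -1;
}
```

On old, unsorted BTF the fallback preserves the original single-pass cost instead of paying a full failed binary probe per alias.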
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 9a864de59597..918d9fa6ec36 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -659,29 +659,21 @@ static int determine_ptr_size(const struct btf *btf)
> "int long unsigned",
> };
> const struct btf_type *t;
> - const char *name;
> - int i, j, n;
> + int i, id;
>
> if (btf->base_btf && btf->base_btf->ptr_sz > 0)
> return btf->base_btf->ptr_sz;
>
> - n = btf__type_cnt(btf);
> - for (i = 1; i < n; i++) {
> - t = btf__type_by_id(btf, i);
> - if (!btf_is_int(t))
> + for (i = 0; i < ARRAY_SIZE(long_aliases); i++) {
> + id = btf__find_by_name_kind(btf, long_aliases[i], BTF_KIND_INT);
> + if (id < 0)
> continue;
>
> + t = btf__type_by_id(btf, id);
> if (t->size != 4 && t->size != 8)
> continue;
>
> - name = btf__name_by_offset(btf, t->name_off);
> - if (!name)
> - continue;
> -
> - for (j = 0; j < ARRAY_SIZE(long_aliases); j++) {
> - if (strcmp(name, long_aliases[j]) == 0)
> - return t->size;
> - }
> + return t->size;
> }
>
> return -1;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size
2026-01-14 0:29 ` Andrii Nakryiko
@ 2026-01-14 2:00 ` Donglin Peng
0 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-14 2:00 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, eddyz87, zhangxiaoqin, ihor.solodrai, linux-kernel, bpf,
Donglin Peng, Alan Maguire, Andrii Nakryiko
On Wed, Jan 14, 2026 at 8:30 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jan 9, 2026 at 5:00 AM Donglin Peng <dolinux.peng@gmail.com> wrote:
> >
> > From: Donglin Peng <pengdonglin@xiaomi.com>
> >
> > Leverage the performance improvement of btf__find_by_name_kind() when
> > BTF is sorted. For sorted BTF, the function uses binary search with
> > O(log n) complexity instead of linear search, providing significant
> > performance benefits, especially for large BTF like vmlinux.
> >
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Cc: Alan Maguire <alan.maguire@oracle.com>
> > Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
> > Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
> > Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
> > Acked-by: Eduard Zingerman <eddyz87@gmail.com>
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > tools/lib/bpf/btf.c | 20 ++++++--------------
> > 1 file changed, 6 insertions(+), 14 deletions(-)
> >
>
> This change will be beneficial only if btf is sorted, otherwise the
> previous approach is generally faster. So on older kernels this will
> be significantly slower.
Yes, I agree.
>
> If we want to optimize determine_ptr_size() at all, I think we will
> have to take into account whether BTF is sorted or not.
>
> Or just not bother at all with this optimization.
>
> I'll drop this patch.
Yes, that's correct. The actual lookup executes only once, so the
optimization provides limited value.
>
>
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 9a864de59597..918d9fa6ec36 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -659,29 +659,21 @@ static int determine_ptr_size(const struct btf *btf)
> > "int long unsigned",
> > };
> > const struct btf_type *t;
> > - const char *name;
> > - int i, j, n;
> > + int i, id;
> >
> > if (btf->base_btf && btf->base_btf->ptr_sz > 0)
> > return btf->base_btf->ptr_sz;
> >
> > - n = btf__type_cnt(btf);
> > - for (i = 1; i < n; i++) {
> > - t = btf__type_by_id(btf, i);
> > - if (!btf_is_int(t))
> > + for (i = 0; i < ARRAY_SIZE(long_aliases); i++) {
> > + id = btf__find_by_name_kind(btf, long_aliases[i], BTF_KIND_INT);
> > + if (id < 0)
> > continue;
> >
> > + t = btf__type_by_id(btf, id);
> > if (t->size != 4 && t->size != 8)
> > continue;
> >
> > - name = btf__name_by_offset(btf, t->name_off);
> > - if (!name)
> > - continue;
> > -
> > - for (j = 0; j < ARRAY_SIZE(long_aliases); j++) {
> > - if (strcmp(name, long_aliases[j]) == 0)
> > - return t->size;
> > - }
> > + return t->size;
> > }
> >
> > return -1;
> > --
> > 2.34.1
> >
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH bpf-next v12 11/11] btf: Refactor the code by calling str_is_empty
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (9 preceding siblings ...)
2026-01-09 13:00 ` [PATCH bpf-next v12 10/11] libbpf: Optimize the performance of determine_ptr_size Donglin Peng
@ 2026-01-09 13:00 ` Donglin Peng
2026-01-14 0:30 ` [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search patchwork-bot+netdevbpf
11 siblings, 0 replies; 24+ messages in thread
From: Donglin Peng @ 2026-01-09 13:00 UTC (permalink / raw)
To: ast, andrii.nakryiko, eddyz87
Cc: zhangxiaoqin, ihor.solodrai, linux-kernel, bpf, Donglin Peng,
Alan Maguire
From: Donglin Peng <pengdonglin@xiaomi.com>
Call the str_is_empty() helper to clarify the code. No
functional changes are introduced.
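[Editorial sketch] The helper being leaned on is tiny; a sketch of its likely shape (libbpf keeps such a helper internally, so treat this definition as illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

/* NULL-safe emptiness test: treats both NULL and "" as empty, which is
 * why it can replace the repeated `!name || !name[0]` pattern verbatim. */
static inline bool str_is_empty(const char *s)
{
	return !s || !s[0];
}
```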
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@linux.dev>
Cc: Xiaoqin Zhang <zhangxiaoqin@xiaomi.com>
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
tools/lib/bpf/btf.c | 34 +++++++++++++++++-----------------
tools/lib/bpf/libbpf.c | 4 ++--
2 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 918d9fa6ec36..66e4a57896b3 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -2128,7 +2128,7 @@ int btf__add_int(struct btf *btf, const char *name, size_t byte_sz, int encoding
int sz, name_off;
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
/* byte_sz must be power of 2 */
if (!byte_sz || (byte_sz & (byte_sz - 1)) || byte_sz > 16)
@@ -2176,7 +2176,7 @@ int btf__add_float(struct btf *btf, const char *name, size_t byte_sz)
int sz, name_off;
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
/* byte_sz must be one of the explicitly allowed values */
@@ -2231,7 +2231,7 @@ static int btf_add_ref_kind(struct btf *btf, int kind, const char *name, int ref
if (!t)
return libbpf_err(-ENOMEM);
- if (name && name[0]) {
+ if (!str_is_empty(name)) {
name_off = btf__add_str(btf, name);
if (name_off < 0)
return name_off;
@@ -2308,7 +2308,7 @@ static int btf_add_composite(struct btf *btf, int kind, const char *name, __u32
if (!t)
return libbpf_err(-ENOMEM);
- if (name && name[0]) {
+ if (!str_is_empty(name)) {
name_off = btf__add_str(btf, name);
if (name_off < 0)
return name_off;
@@ -2409,7 +2409,7 @@ int btf__add_field(struct btf *btf, const char *name, int type_id,
if (!m)
return libbpf_err(-ENOMEM);
- if (name && name[0]) {
+ if (!str_is_empty(name)) {
name_off = btf__add_str(btf, name);
if (name_off < 0)
return name_off;
@@ -2447,7 +2447,7 @@ static int btf_add_enum_common(struct btf *btf, const char *name, __u32 byte_sz,
if (!t)
return libbpf_err(-ENOMEM);
- if (name && name[0]) {
+ if (!str_is_empty(name)) {
name_off = btf__add_str(btf, name);
if (name_off < 0)
return name_off;
@@ -2505,7 +2505,7 @@ int btf__add_enum_value(struct btf *btf, const char *name, __s64 value)
return libbpf_err(-EINVAL);
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
if (value < INT_MIN || value > UINT_MAX)
return libbpf_err(-E2BIG);
@@ -2582,7 +2582,7 @@ int btf__add_enum64_value(struct btf *btf, const char *name, __u64 value)
return libbpf_err(-EINVAL);
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
/* decompose and invalidate raw data */
@@ -2622,7 +2622,7 @@ int btf__add_enum64_value(struct btf *btf, const char *name, __u64 value)
*/
int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind)
{
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
switch (fwd_kind) {
@@ -2658,7 +2658,7 @@ int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind)
*/
int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id)
{
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
return btf_add_ref_kind(btf, BTF_KIND_TYPEDEF, name, ref_type_id, 0);
@@ -2710,7 +2710,7 @@ int btf__add_restrict(struct btf *btf, int ref_type_id)
*/
int btf__add_type_tag(struct btf *btf, const char *value, int ref_type_id)
{
- if (!value || !value[0])
+ if (str_is_empty(value))
return libbpf_err(-EINVAL);
return btf_add_ref_kind(btf, BTF_KIND_TYPE_TAG, value, ref_type_id, 0);
@@ -2727,7 +2727,7 @@ int btf__add_type_tag(struct btf *btf, const char *value, int ref_type_id)
*/
int btf__add_type_attr(struct btf *btf, const char *value, int ref_type_id)
{
- if (!value || !value[0])
+ if (str_is_empty(value))
return libbpf_err(-EINVAL);
return btf_add_ref_kind(btf, BTF_KIND_TYPE_TAG, value, ref_type_id, 1);
@@ -2746,7 +2746,7 @@ int btf__add_func(struct btf *btf, const char *name,
{
int id;
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
if (linkage != BTF_FUNC_STATIC && linkage != BTF_FUNC_GLOBAL &&
linkage != BTF_FUNC_EXTERN)
@@ -2832,7 +2832,7 @@ int btf__add_func_param(struct btf *btf, const char *name, int type_id)
if (!p)
return libbpf_err(-ENOMEM);
- if (name && name[0]) {
+ if (!str_is_empty(name)) {
name_off = btf__add_str(btf, name);
if (name_off < 0)
return name_off;
@@ -2867,7 +2867,7 @@ int btf__add_var(struct btf *btf, const char *name, int linkage, int type_id)
int sz, name_off;
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
if (linkage != BTF_VAR_STATIC && linkage != BTF_VAR_GLOBAL_ALLOCATED &&
linkage != BTF_VAR_GLOBAL_EXTERN)
@@ -2916,7 +2916,7 @@ int btf__add_datasec(struct btf *btf, const char *name, __u32 byte_sz)
int sz, name_off;
/* non-empty name */
- if (!name || !name[0])
+ if (str_is_empty(name))
return libbpf_err(-EINVAL);
if (btf_ensure_modifiable(btf))
@@ -2993,7 +2993,7 @@ static int btf_add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
struct btf_type *t;
int sz, value_off;
- if (!value || !value[0] || component_idx < -1)
+ if (str_is_empty(value) || component_idx < -1)
return libbpf_err(-EINVAL);
if (validate_type_id(ref_type_id))
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 6ea81701e274..bbcfd72b07d5 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -2904,7 +2904,7 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
var_extra = btf_var(var);
map_name = btf__name_by_offset(obj->btf, var->name_off);
- if (map_name == NULL || map_name[0] == '\0') {
+ if (str_is_empty(map_name)) {
pr_warn("map #%d: empty name.\n", var_idx);
return -EINVAL;
}
@@ -4281,7 +4281,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
if (!sym_is_extern(sym))
continue;
ext_name = elf_sym_str(obj, sym->st_name);
- if (!ext_name || !ext_name[0])
+ if (str_is_empty(ext_name))
continue;
ext = obj->externs;
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search
2026-01-09 12:59 [PATCH bpf-next v12 00/11] Improve the performance of BTF type lookups with binary search Donglin Peng
` (10 preceding siblings ...)
2026-01-09 13:00 ` [PATCH bpf-next v12 11/11] btf: Refactor the code by calling str_is_empty Donglin Peng
@ 2026-01-14 0:30 ` patchwork-bot+netdevbpf
11 siblings, 0 replies; 24+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-01-14 0:30 UTC (permalink / raw)
To: Donglin Peng
Cc: ast, andrii.nakryiko, eddyz87, zhangxiaoqin, ihor.solodrai,
linux-kernel, bpf, pengdonglin
Hello:
This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Fri, 9 Jan 2026 20:59:52 +0800 you wrote:
> From: Donglin Peng <pengdonglin@xiaomi.com>
>
> The series addresses the performance limitations of linear search in large
> BTFs by:
> 1. Adding BTF permutation support
> 2. Using resolve_btfids to sort BTF during the build phase
> 3. Checking BTF sorting
> 4. Using binary search when looking up types
>
> [...]
Here is the summary with links:
- [bpf-next,v12,01/11] libbpf: Add BTF permutation support for type reordering
https://git.kernel.org/bpf/bpf-next/c/6fbf129c4990
- [bpf-next,v12,02/11] selftests/bpf: Add test cases for btf__permute functionality
https://git.kernel.org/bpf/bpf-next/c/a3acd7d43462
- [bpf-next,v12,03/11] tools/resolve_btfids: Support BTF sorting feature
https://git.kernel.org/bpf/bpf-next/c/230e7d7de5a8
- [bpf-next,v12,04/11] libbpf: Optimize type lookup with binary search for sorted BTF
(no matching commit)
- [bpf-next,v12,05/11] libbpf: Verify BTF sorting
https://git.kernel.org/bpf/bpf-next/c/33ecca574f1c
- [bpf-next,v12,06/11] btf: Optimize type lookup with binary search
(no matching commit)
- [bpf-next,v12,07/11] btf: Verify BTF sorting
(no matching commit)
- [bpf-next,v12,08/11] bpf: Skip anonymous types in type lookup for performance
https://git.kernel.org/bpf/bpf-next/c/dc893cfa390a
- [bpf-next,v12,09/11] bpf: Optimize the performance of find_bpffs_btf_enums
https://git.kernel.org/bpf/bpf-next/c/434bcbc837a6
- [bpf-next,v12,10/11] libbpf: Optimize the performance of determine_ptr_size
(no matching commit)
- [bpf-next,v12,11/11] btf: Refactor the code by calling str_is_empty
https://git.kernel.org/bpf/bpf-next/c/9282a42a1fe1
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 24+ messages in thread