* [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf
@ 2023-04-18 0:21 Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 1/6] libbpf: misc internal libbpf clean ups around log fixup Andrii Nakryiko
` (6 more replies)
0 siblings, 7 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
This patch set moves the bpf_for(), bpf_for_each(), and bpf_repeat() macros
from the selftests-internal bpf_misc.h header to the libbpf-provided
bpf_helpers.h header. To do this in a way that allows users to feature-detect
and guard such bpf_for()/bpf_for_each() uses on old kernels, we also extend
libbpf to improve the handling and reporting of unresolved kfunc calls. This
lets us mark the bpf_iter_num_{new,next,destroy}() declarations as __weak,
and thus avoid failing program loading outright if such kfuncs are missing
on the host kernel.
Patches #1 and #2 do some simple clean-ups and logging improvements. Patch #3
adds kfunc call poisoning and log fixup logic and is effectively the heart of
this patch set. Patch #4 adds a selftest for this logic. Patches #5 and #6
move bpf_for()/bpf_for_each()/bpf_repeat() into the bpf_helpers.h header and
mark the kfuncs as __weak to allow users to feature-detect and guard their uses.
Andrii Nakryiko (6):
libbpf: misc internal libbpf clean ups around log fixup
libbpf: report vmlinux vs module name when dealing with ksyms
libbpf: improve handling of unresolved kfuncs
selftests/bpf: add missing __weak kfunc log fixup test
libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into
bpf_helpers.h
libbpf: mark bpf_iter_num_{new,next,destroy} as __weak
tools/lib/bpf/bpf_helpers.h | 103 +++++++++++++++++
tools/lib/bpf/libbpf.c | 107 ++++++++++++++----
.../selftests/bpf/prog_tests/log_fixup.c | 31 +++++
tools/testing/selftests/bpf/progs/bpf_misc.h | 103 -----------------
.../selftests/bpf/progs/test_log_fixup.c | 10 ++
5 files changed, 232 insertions(+), 122 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH bpf-next 1/6] libbpf: misc internal libbpf clean ups around log fixup
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 2/6] libbpf: report vmlinux vs module name when dealing with ksyms Andrii Nakryiko
` (5 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Normalize internal constants, field names, and comments related to log
fixup. Also add an explicit `ext_idx` alias for relocations that point to
an extern description, for additional clarity.

No functional changes, just a clean-up before subsequent additions.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/libbpf.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 49cd304ae3bc..a382ed3586bd 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -333,6 +333,7 @@ struct reloc_desc {
struct {
int map_idx;
int sym_off;
+ int ext_idx;
};
};
};
@@ -4042,7 +4043,7 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
else
reloc_desc->type = RELO_EXTERN_LD64;
reloc_desc->insn_idx = insn_idx;
- reloc_desc->sym_off = i; /* sym_off stores extern index */
+ reloc_desc->ext_idx = i;
return 0;
}
@@ -5811,8 +5812,8 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
}
/* base map load ldimm64 special constant, used also for log fixup logic */
-#define MAP_LDIMM64_POISON_BASE 2001000000
-#define MAP_LDIMM64_POISON_PFX "200100"
+#define POISON_LDIMM64_MAP_BASE 2001000000
+#define POISON_LDIMM64_MAP_PFX "200100"
static void poison_map_ldimm64(struct bpf_program *prog, int relo_idx,
int insn_idx, struct bpf_insn *insn,
@@ -5834,7 +5835,7 @@ static void poison_map_ldimm64(struct bpf_program *prog, int relo_idx,
* invalid func unknown#2001000123
* where lower 123 is map index into obj->maps[] array
*/
- insn->imm = MAP_LDIMM64_POISON_BASE + map_idx;
+ insn->imm = POISON_LDIMM64_MAP_BASE + map_idx;
insn++;
}
@@ -5885,7 +5886,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
}
break;
case RELO_EXTERN_LD64:
- ext = &obj->externs[relo->sym_off];
+ ext = &obj->externs[relo->ext_idx];
if (ext->type == EXT_KCFG) {
if (obj->gen_loader) {
insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE;
@@ -5907,7 +5908,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
}
break;
case RELO_EXTERN_CALL:
- ext = &obj->externs[relo->sym_off];
+ ext = &obj->externs[relo->ext_idx];
insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
if (ext->is_set) {
insn[0].imm = ext->ksym.kernel_btf_id;
@@ -7022,13 +7023,13 @@ static void fixup_log_missing_map_load(struct bpf_program *prog,
char *buf, size_t buf_sz, size_t log_sz,
char *line1, char *line2, char *line3)
{
- /* Expected log for failed and not properly guarded CO-RE relocation:
+ /* Expected log for failed and not properly guarded map reference:
* line1 -> 123: (85) call unknown#2001000345
* line2 -> invalid func unknown#2001000345
* line3 -> <anything else or end of buffer>
*
* "123" is the index of the instruction that was poisoned.
- * "345" in "2001000345" are map index in obj->maps to fetch map name.
+ * "345" in "2001000345" is a map index in obj->maps to fetch map name.
*/
struct bpf_object *obj = prog->obj;
const struct bpf_map *map;
@@ -7038,7 +7039,7 @@ static void fixup_log_missing_map_load(struct bpf_program *prog,
if (sscanf(line1, "%d: (%*d) call unknown#%d\n", &insn_idx, &map_idx) != 2)
return;
- map_idx -= MAP_LDIMM64_POISON_BASE;
+ map_idx -= POISON_LDIMM64_MAP_BASE;
if (map_idx < 0 || map_idx >= obj->nr_maps)
return;
map = &obj->maps[map_idx];
@@ -7070,20 +7071,21 @@ static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_s
if (!cur_line)
return;
- /* failed CO-RE relocation case */
if (str_has_pfx(cur_line, "invalid func unknown#195896080\n")) {
prev_line = find_prev_line(buf, cur_line);
if (!prev_line)
continue;
+ /* failed CO-RE relocation case */
fixup_log_failed_core_relo(prog, buf, buf_sz, log_sz,
prev_line, cur_line, next_line);
return;
- } else if (str_has_pfx(cur_line, "invalid func unknown#"MAP_LDIMM64_POISON_PFX)) {
+ } else if (str_has_pfx(cur_line, "invalid func unknown#"POISON_LDIMM64_MAP_PFX)) {
prev_line = find_prev_line(buf, cur_line);
if (!prev_line)
continue;
+ /* reference to uncreated BPF map */
fixup_log_missing_map_load(prog, buf, buf_sz, log_sz,
prev_line, cur_line, next_line);
return;
@@ -7098,7 +7100,7 @@ static int bpf_program_record_relos(struct bpf_program *prog)
for (i = 0; i < prog->nr_reloc; i++) {
struct reloc_desc *relo = &prog->reloc_desc[i];
- struct extern_desc *ext = &obj->externs[relo->sym_off];
+ struct extern_desc *ext = &obj->externs[relo->ext_idx];
int kind;
switch (relo->type) {
--
2.34.1
* [PATCH bpf-next 2/6] libbpf: report vmlinux vs module name when dealing with ksyms
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 1/6] libbpf: misc internal libbpf clean ups around log fixup Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs Andrii Nakryiko
` (4 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Currently libbpf always reports "kernel" as the source of a ksym's BTF type,
which is ambiguous given that a ksym's BTF can come from either vmlinux or
a kernel module's BTF. Make this explicit and log the module name if the
BTF used comes from a kernel module.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/libbpf.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index a382ed3586bd..0a11563067b3 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7538,8 +7538,9 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
ret = bpf_core_types_are_compat(obj->btf, local_func_proto_id,
kern_btf, kfunc_proto_id);
if (ret <= 0) {
- pr_warn("extern (func ksym) '%s': func_proto [%d] incompatible with kernel [%d]\n",
- ext->name, local_func_proto_id, kfunc_proto_id);
+ pr_warn("extern (func ksym) '%s': func_proto [%d] incompatible with %s [%d]\n",
+ ext->name, local_func_proto_id,
+ mod_btf ? mod_btf->name : "vmlinux", kfunc_proto_id);
return -EINVAL;
}
@@ -7573,8 +7574,8 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
* {kernel_btf_id, kernel_btf_obj_fd} -> fixup ld_imm64.
*/
ext->ksym.kernel_btf_obj_fd = mod_btf ? mod_btf->fd : 0;
- pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
- ext->name, kfunc_id);
+ pr_debug("extern (func ksym) '%s': resolved to %s [%d]\n",
+ ext->name, mod_btf ? mod_btf->name : "vmlinux", kfunc_id);
return 0;
}
--
2.34.1
* [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 1/6] libbpf: misc internal libbpf clean ups around log fixup Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 2/6] libbpf: report vmlinux vs module name when dealing with ksyms Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 1:10 ` Alexei Starovoitov
2023-04-18 0:21 ` [PATCH bpf-next 4/6] selftests/bpf: add missing __weak kfunc log fixup test Andrii Nakryiko
` (3 subsequent siblings)
6 siblings, 1 reply; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Currently, libbpf leaves a `call #0` instruction for __weak unresolved
kfuncs, which might lead to confusing verifier log situations, where an
invalid `call #0` will be treated as successfully validated.

We can do better. Libbpf already has an established mechanism for
poisoning instructions that failed some form of resolution (e.g., a CO-RE
relocation or a BPF map set to not be auto-created). Libbpf doesn't fail
them outright, to allow users to guard them through other means, and as
long as the BPF verifier can prove that such poisoned instructions can
never be reached, this doesn't constitute an invalid BPF program. If the
user didn't guard such code, libbpf will extract a few pieces of
information to tie the poisoned instructions back to additional
information about what entity wasn't resolved (e.g., a BPF map name, or
CO-RE relocation information).

__weak unresolved kfuncs fit this model well, so this patch extends
libbpf with poisoning and log fixup logic for kfunc calls.

Note, this poisoning is done only for kfunc *calls*, not kfunc address
resolution (ldimm64 instructions). The former can never be valid, if
reached, so it's safe to poison such calls. The latter is a valid
mechanism, supported by most recent kernels, for checking whether a
__weak kfunc ksym was resolved, and for doing the necessary guarding and
workarounds based on this result. As such, libbpf keeps such ldimm64
instructions loading zero and never poisons them.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/libbpf.c | 72 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 69 insertions(+), 3 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0a11563067b3..2600d8384252 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5841,6 +5841,30 @@ static void poison_map_ldimm64(struct bpf_program *prog, int relo_idx,
}
}
+/* unresolved kfunc call special constant, used also for log fixup logic */
+#define POISON_CALL_KFUNC_BASE 2002000000
+#define POISON_CALL_KFUNC_PFX "2002"
+
+static void poison_kfunc_call(struct bpf_program *prog, int relo_idx,
+ int insn_idx, struct bpf_insn *insn,
+ int ext_idx, const struct extern_desc *ext)
+{
+ pr_debug("prog '%s': relo #%d: poisoning insn #%d that calls kfunc '%s'\n",
+ prog->name, relo_idx, insn_idx, ext->name);
+
+ /* we turn kfunc call into invalid helper call with identifiable constant */
+ insn->code = BPF_JMP | BPF_CALL;
+ insn->dst_reg = 0;
+ insn->src_reg = 0;
+ insn->off = 0;
+ /* if this instruction is reachable (not a dead code),
+ * verifier will complain with something like:
+ * invalid func unknown#2002000123
+ * where lower 123 is extern index into obj->externs[] array
+ */
+ insn->imm = POISON_CALL_KFUNC_BASE + ext_idx;
+}
+
/* Relocate data references within program code:
* - map references;
* - global variable references;
@@ -5913,9 +5937,9 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
if (ext->is_set) {
insn[0].imm = ext->ksym.kernel_btf_id;
insn[0].off = ext->ksym.btf_fd_idx;
- } else { /* unresolved weak kfunc */
- insn[0].imm = 0;
- insn[0].off = 0;
+ } else { /* unresolved weak kfunc call */
+ poison_kfunc_call(prog, i, relo->insn_idx, insn,
+ relo->ext_idx, ext);
}
break;
case RELO_SUBPROG_ADDR:
@@ -7052,6 +7076,39 @@ static void fixup_log_missing_map_load(struct bpf_program *prog,
patch_log(buf, buf_sz, log_sz, line1, line3 - line1, patch);
}
+static void fixup_log_missing_kfunc_call(struct bpf_program *prog,
+ char *buf, size_t buf_sz, size_t log_sz,
+ char *line1, char *line2, char *line3)
+{
+ /* Expected log for failed and not properly guarded kfunc call:
+ * line1 -> 123: (85) call unknown#2002000345
+ * line2 -> invalid func unknown#2002000345
+ * line3 -> <anything else or end of buffer>
+ *
+ * "123" is the index of the instruction that was poisoned.
+ * "345" in "2002000345" is an extern index in obj->externs to fetch kfunc name.
+ */
+ struct bpf_object *obj = prog->obj;
+ const struct extern_desc *ext;
+ int insn_idx, ext_idx;
+ char patch[128];
+
+ if (sscanf(line1, "%d: (%*d) call unknown#%d\n", &insn_idx, &ext_idx) != 2)
+ return;
+
+ ext_idx -= POISON_CALL_KFUNC_BASE;
+ if (ext_idx < 0 || ext_idx >= obj->nr_extern)
+ return;
+ ext = &obj->externs[ext_idx];
+
+ snprintf(patch, sizeof(patch),
+ "%d: <invalid kfunc call>\n"
+ "kfunc '%s' is referenced but wasn't resolved\n",
+ insn_idx, ext->name);
+
+ patch_log(buf, buf_sz, log_sz, line1, line3 - line1, patch);
+}
+
static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_sz)
{
/* look for familiar error patterns in last N lines of the log */
@@ -7089,6 +7146,15 @@ static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_s
fixup_log_missing_map_load(prog, buf, buf_sz, log_sz,
prev_line, cur_line, next_line);
return;
+ } else if (str_has_pfx(cur_line, "invalid func unknown#"POISON_CALL_KFUNC_PFX)) {
+ prev_line = find_prev_line(buf, cur_line);
+ if (!prev_line)
+ continue;
+
+ /* reference to unresolved kfunc */
+ fixup_log_missing_kfunc_call(prog, buf, buf_sz, log_sz,
+ prev_line, cur_line, next_line);
+ return;
}
}
}
--
2.34.1
* [PATCH bpf-next 4/6] selftests/bpf: add missing __weak kfunc log fixup test
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
` (2 preceding siblings ...)
2023-04-18 0:21 ` [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 5/6] libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into bpf_helpers.h Andrii Nakryiko
` (2 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Add a test validating that libbpf correctly poisons and reports __weak
unresolved kfuncs in the post-processed verifier log.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
.../selftests/bpf/prog_tests/log_fixup.c | 31 +++++++++++++++++++
.../selftests/bpf/progs/test_log_fixup.c | 10 ++++++
2 files changed, 41 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/log_fixup.c b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
index bc27170bdeb0..dba71d98a227 100644
--- a/tools/testing/selftests/bpf/prog_tests/log_fixup.c
+++ b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
@@ -135,6 +135,35 @@ static void missing_map(void)
test_log_fixup__destroy(skel);
}
+static void missing_kfunc(void)
+{
+ char log_buf[8 * 1024];
+ struct test_log_fixup* skel;
+ int err;
+
+ skel = test_log_fixup__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ bpf_program__set_autoload(skel->progs.use_missing_kfunc, true);
+ bpf_program__set_log_buf(skel->progs.use_missing_kfunc, log_buf, sizeof(log_buf));
+
+ err = test_log_fixup__load(skel);
+ if (!ASSERT_ERR(err, "load_fail"))
+ goto cleanup;
+
+ ASSERT_HAS_SUBSTR(log_buf,
+ "0: <invalid kfunc call>\n"
+ "kfunc 'bpf_nonexistent_kfunc' is referenced but wasn't resolved\n",
+ "log_buf");
+
+ if (env.verbosity > VERBOSE_NONE)
+ printf("LOG: \n=================\n%s=================\n", log_buf);
+
+cleanup:
+ test_log_fixup__destroy(skel);
+}
+
void test_log_fixup(void)
{
if (test__start_subtest("bad_core_relo_trunc_none"))
@@ -147,4 +176,6 @@ void test_log_fixup(void)
bad_core_relo_subprog();
if (test__start_subtest("missing_map"))
missing_map();
+ if (test__start_subtest("missing_kfunc"))
+ missing_kfunc();
}
diff --git a/tools/testing/selftests/bpf/progs/test_log_fixup.c b/tools/testing/selftests/bpf/progs/test_log_fixup.c
index 60450cb0e72e..1bd48feaaa42 100644
--- a/tools/testing/selftests/bpf/progs/test_log_fixup.c
+++ b/tools/testing/selftests/bpf/progs/test_log_fixup.c
@@ -61,4 +61,14 @@ int use_missing_map(const void *ctx)
return value != NULL;
}
+extern int bpf_nonexistent_kfunc(void) __ksym __weak;
+
+SEC("?raw_tp/sys_enter")
+int use_missing_kfunc(const void *ctx)
+{
+ bpf_nonexistent_kfunc();
+
+ return 0;
+}
+
char _license[] SEC("license") = "GPL";
--
2.34.1
* [PATCH bpf-next 5/6] libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into bpf_helpers.h
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
` (3 preceding siblings ...)
2023-04-18 0:21 ` [PATCH bpf-next 4/6] selftests/bpf: add missing __weak kfunc log fixup test Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 6/6] libbpf: mark bpf_iter_num_{new,next,destroy} as __weak Andrii Nakryiko
2023-04-18 20:00 ` [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf patchwork-bot+netdevbpf
6 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
To make it easier for bleeding-edge BPF applications, such as sched_ext,
to utilize open-coded iterators, move the bpf_for(), bpf_for_each(), and
bpf_repeat() macros from the selftests/bpf-internal bpf_misc.h header to
the libbpf-provided bpf_helpers.h header.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/bpf_helpers.h | 103 +++++++++++++++++++
tools/testing/selftests/bpf/progs/bpf_misc.h | 103 -------------------
2 files changed, 103 insertions(+), 103 deletions(-)
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index e7e1a8acc299..525dec66c129 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -291,4 +291,107 @@ enum libbpf_tristate {
/* Helper macro to print out debug messages */
#define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)
+struct bpf_iter_num;
+
+extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
+extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
+extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;
+
+#ifndef bpf_for_each
+/* bpf_for_each(iter_type, cur_elem, args...) provides generic construct for
+ * using BPF open-coded iterators without having to write mundane explicit
+ * low-level loop logic. Instead, it provides for()-like generic construct
+ * that can be used pretty naturally. E.g., for some hypothetical cgroup
+ * iterator, you'd write:
+ *
+ * struct cgroup *cg, *parent_cg = <...>;
+ *
+ * bpf_for_each(cgroup, cg, parent_cg, CG_ITER_CHILDREN) {
+ * bpf_printk("Child cgroup id = %d", cg->cgroup_id);
+ * if (cg->cgroup_id == 123)
+ * break;
+ * }
+ *
+ * I.e., it looks almost like high-level for each loop in other languages,
+ * supports continue/break, and is verifiable by BPF verifier.
+ *
+ * For iterating integers, the difference betwen bpf_for_each(num, i, N, M)
+ * and bpf_for(i, N, M) is in that bpf_for() provides additional proof to
+ * verifier that i is in [N, M) range, and in bpf_for_each() case i is `int
+ * *`, not just `int`. So for integers bpf_for() is more convenient.
+ *
+ * Note: this macro relies on C99 feature of allowing to declare variables
+ * inside for() loop, bound to for() loop lifetime. It also utilizes GCC
+ * extension: __attribute__((cleanup(<func>))), supported by both GCC and
+ * Clang.
+ */
+#define bpf_for_each(type, cur, args...) for ( \
+ /* initialize and define destructor */ \
+ struct bpf_iter_##type ___it __attribute__((aligned(8), /* enforce, just in case */, \
+ cleanup(bpf_iter_##type##_destroy))), \
+ /* ___p pointer is just to call bpf_iter_##type##_new() *once* to init ___it */ \
+ *___p __attribute__((unused)) = ( \
+ bpf_iter_##type##_new(&___it, ##args), \
+ /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
+ /* for bpf_iter_##type##_destroy() when used from cleanup() attribute */ \
+ (void)bpf_iter_##type##_destroy, (void *)0); \
+ /* iteration and termination check */ \
+ (((cur) = bpf_iter_##type##_next(&___it))); \
+)
+#endif /* bpf_for_each */
+
+#ifndef bpf_for
+/* bpf_for(i, start, end) implements a for()-like looping construct that sets
+ * provided integer variable *i* to values starting from *start* through,
+ * but not including, *end*. It also proves to BPF verifier that *i* belongs
+ * to range [start, end), so this can be used for accessing arrays without
+ * extra checks.
+ *
+ * Note: *start* and *end* are assumed to be expressions with no side effects
+ * and whose values do not change throughout bpf_for() loop execution. They do
+ * not have to be statically known or constant, though.
+ *
+ * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for()
+ * loop bound variables and cleanup attribute, supported by GCC and Clang.
+ */
+#define bpf_for(i, start, end) for ( \
+ /* initialize and define destructor */ \
+ struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \
+ cleanup(bpf_iter_num_destroy))), \
+ /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \
+ *___p __attribute__((unused)) = ( \
+ bpf_iter_num_new(&___it, (start), (end)), \
+ /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
+ /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \
+ (void)bpf_iter_num_destroy, (void *)0); \
+ ({ \
+ /* iteration step */ \
+ int *___t = bpf_iter_num_next(&___it); \
+ /* termination and bounds check */ \
+ (___t && ((i) = *___t, (i) >= (start) && (i) < (end))); \
+ }); \
+)
+#endif /* bpf_for */
+
+#ifndef bpf_repeat
+/* bpf_repeat(N) performs N iterations without exposing iteration number
+ *
+ * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for()
+ * loop bound variables and cleanup attribute, supported by GCC and Clang.
+ */
+#define bpf_repeat(N) for ( \
+ /* initialize and define destructor */ \
+ struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \
+ cleanup(bpf_iter_num_destroy))), \
+ /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \
+ *___p __attribute__((unused)) = ( \
+ bpf_iter_num_new(&___it, 0, (N)), \
+ /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
+ /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \
+ (void)bpf_iter_num_destroy, (void *)0); \
+ bpf_iter_num_next(&___it); \
+ /* nothing here */ \
+)
+#endif /* bpf_repeat */
+
#endif
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index 6e3b4903c541..3b307de8dab9 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -121,107 +121,4 @@
/* make it look to compiler like value is read and written */
#define __sink(expr) asm volatile("" : "+g"(expr))
-struct bpf_iter_num;
-
-extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
-extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
-extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;
-
-#ifndef bpf_for_each
-/* bpf_for_each(iter_type, cur_elem, args...) provides generic construct for
- * using BPF open-coded iterators without having to write mundane explicit
- * low-level loop logic. Instead, it provides for()-like generic construct
- * that can be used pretty naturally. E.g., for some hypothetical cgroup
- * iterator, you'd write:
- *
- * struct cgroup *cg, *parent_cg = <...>;
- *
- * bpf_for_each(cgroup, cg, parent_cg, CG_ITER_CHILDREN) {
- * bpf_printk("Child cgroup id = %d", cg->cgroup_id);
- * if (cg->cgroup_id == 123)
- * break;
- * }
- *
- * I.e., it looks almost like high-level for each loop in other languages,
- * supports continue/break, and is verifiable by BPF verifier.
- *
- * For iterating integers, the difference betwen bpf_for_each(num, i, N, M)
- * and bpf_for(i, N, M) is in that bpf_for() provides additional proof to
- * verifier that i is in [N, M) range, and in bpf_for_each() case i is `int
- * *`, not just `int`. So for integers bpf_for() is more convenient.
- *
- * Note: this macro relies on C99 feature of allowing to declare variables
- * inside for() loop, bound to for() loop lifetime. It also utilizes GCC
- * extension: __attribute__((cleanup(<func>))), supported by both GCC and
- * Clang.
- */
-#define bpf_for_each(type, cur, args...) for ( \
- /* initialize and define destructor */ \
- struct bpf_iter_##type ___it __attribute__((aligned(8), /* enforce, just in case */, \
- cleanup(bpf_iter_##type##_destroy))), \
- /* ___p pointer is just to call bpf_iter_##type##_new() *once* to init ___it */ \
- *___p __attribute__((unused)) = ( \
- bpf_iter_##type##_new(&___it, ##args), \
- /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
- /* for bpf_iter_##type##_destroy() when used from cleanup() attribute */ \
- (void)bpf_iter_##type##_destroy, (void *)0); \
- /* iteration and termination check */ \
- (((cur) = bpf_iter_##type##_next(&___it))); \
-)
-#endif /* bpf_for_each */
-
-#ifndef bpf_for
-/* bpf_for(i, start, end) implements a for()-like looping construct that sets
- * provided integer variable *i* to values starting from *start* through,
- * but not including, *end*. It also proves to BPF verifier that *i* belongs
- * to range [start, end), so this can be used for accessing arrays without
- * extra checks.
- *
- * Note: *start* and *end* are assumed to be expressions with no side effects
- * and whose values do not change throughout bpf_for() loop execution. They do
- * not have to be statically known or constant, though.
- *
- * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for()
- * loop bound variables and cleanup attribute, supported by GCC and Clang.
- */
-#define bpf_for(i, start, end) for ( \
- /* initialize and define destructor */ \
- struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \
- cleanup(bpf_iter_num_destroy))), \
- /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \
- *___p __attribute__((unused)) = ( \
- bpf_iter_num_new(&___it, (start), (end)), \
- /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
- /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \
- (void)bpf_iter_num_destroy, (void *)0); \
- ({ \
- /* iteration step */ \
- int *___t = bpf_iter_num_next(&___it); \
- /* termination and bounds check */ \
- (___t && ((i) = *___t, (i) >= (start) && (i) < (end))); \
- }); \
-)
-#endif /* bpf_for */
-
-#ifndef bpf_repeat
-/* bpf_repeat(N) performs N iterations without exposing iteration number
- *
- * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for()
- * loop bound variables and cleanup attribute, supported by GCC and Clang.
- */
-#define bpf_repeat(N) for ( \
- /* initialize and define destructor */ \
- struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \
- cleanup(bpf_iter_num_destroy))), \
- /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \
- *___p __attribute__((unused)) = ( \
- bpf_iter_num_new(&___it, 0, (N)), \
- /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \
- /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \
- (void)bpf_iter_num_destroy, (void *)0); \
- bpf_iter_num_next(&___it); \
- /* nothing here */ \
-)
-#endif /* bpf_repeat */
-
#endif
--
2.34.1
* [PATCH bpf-next 6/6] libbpf: mark bpf_iter_num_{new,next,destroy} as __weak
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
` (4 preceding siblings ...)
2023-04-18 0:21 ` [PATCH bpf-next 5/6] libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into bpf_helpers.h Andrii Nakryiko
@ 2023-04-18 0:21 ` Andrii Nakryiko
2023-04-18 20:00 ` [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf patchwork-bot+netdevbpf
6 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 0:21 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Mark the bpf_iter_num_{new,next,destroy}() kfuncs declared for the
bpf_for()/bpf_repeat() macros as __weak to allow users to feature-detect
their presence and guard bpf_for()/bpf_repeat() loops accordingly for
backwards compatibility with old kernels.

Now that libbpf supports kfunc call poisoning and better reporting of
unresolved (but called) kfuncs, declaring the number iterator kfuncs in
bpf_helpers.h won't degrade the user experience and won't cause
unnecessary kernel feature dependencies.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/bpf_helpers.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index 525dec66c129..929a3baca8ef 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -293,9 +293,9 @@ enum libbpf_tristate {
struct bpf_iter_num;
-extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
-extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
-extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;
+extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __weak __ksym;
+extern int *bpf_iter_num_next(struct bpf_iter_num *it) __weak __ksym;
+extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __weak __ksym;
#ifndef bpf_for_each
/* bpf_for_each(iter_type, cur_elem, args...) provides generic construct for
--
2.34.1
* Re: [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs
2023-04-18 0:21 ` [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs Andrii Nakryiko
@ 2023-04-18 1:10 ` Alexei Starovoitov
2023-04-18 18:10 ` Andrii Nakryiko
0 siblings, 1 reply; 12+ messages in thread
From: Alexei Starovoitov @ 2023-04-18 1:10 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau,
Kernel Team
On Mon, Apr 17, 2023 at 5:22 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> insn[0].imm = ext->ksym.kernel_btf_id;
> insn[0].off = ext->ksym.btf_fd_idx;
> - } else { /* unresolved weak kfunc */
> - insn[0].imm = 0;
> - insn[0].off = 0;
> + } else { /* unresolved weak kfunc call */
> + poison_kfunc_call(prog, i, relo->insn_idx, insn,
> + relo->ext_idx, ext);
With that done should we remove:
/* skip for now, but return error when we find this in fixup_kfunc_call */
if (!insn->imm)
return 0;
in check_kfunc_call()...
and if (!func_id && !offset) in add_kfunc_call() ?
That was added in commit a5d827275241 ("bpf: Be conservative while
processing invalid kfunc calls")
* Re: [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs
2023-04-18 1:10 ` Alexei Starovoitov
@ 2023-04-18 18:10 ` Andrii Nakryiko
2023-04-18 18:14 ` Alexei Starovoitov
0 siblings, 1 reply; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 18:10 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team
On Mon, Apr 17, 2023 at 6:10 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Apr 17, 2023 at 5:22 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > insn[0].imm = ext->ksym.kernel_btf_id;
> > insn[0].off = ext->ksym.btf_fd_idx;
> > - } else { /* unresolved weak kfunc */
> > - insn[0].imm = 0;
> > - insn[0].off = 0;
> > + } else { /* unresolved weak kfunc call */
> > + poison_kfunc_call(prog, i, relo->insn_idx, insn,
> > + relo->ext_idx, ext);
>
> With that done should we remove:
> /* skip for now, but return error when we find this in fixup_kfunc_call */
> if (!insn->imm)
> return 0;
> in check_kfunc_call()...
>
> and if (!func_id && !offset) in add_kfunc_call() ?
>
> That was added in commit a5d827275241 ("bpf: Be conservative while
> processing invalid kfunc calls")
I guess?.. I don't know if there was any other situation that this fix
was handling, but if it's only due to unresolved kfuncs by libbpf,
then yep.
* Re: [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs
2023-04-18 18:10 ` Andrii Nakryiko
@ 2023-04-18 18:14 ` Alexei Starovoitov
2023-04-18 18:45 ` Andrii Nakryiko
0 siblings, 1 reply; 12+ messages in thread
From: Alexei Starovoitov @ 2023-04-18 18:14 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team
On Tue, Apr 18, 2023 at 11:10 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Apr 17, 2023 at 6:10 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Mon, Apr 17, 2023 at 5:22 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > > insn[0].imm = ext->ksym.kernel_btf_id;
> > > insn[0].off = ext->ksym.btf_fd_idx;
> > > - } else { /* unresolved weak kfunc */
> > > - insn[0].imm = 0;
> > > - insn[0].off = 0;
> > > + } else { /* unresolved weak kfunc call */
> > > + poison_kfunc_call(prog, i, relo->insn_idx, insn,
> > > + relo->ext_idx, ext);
> >
> > With that done should we remove:
> > /* skip for now, but return error when we find this in fixup_kfunc_call */
> > if (!insn->imm)
> > return 0;
> > in check_kfunc_call()...
> >
> > and if (!func_id && !offset) in add_kfunc_call() ?
> >
> > That was added in commit a5d827275241 ("bpf: Be conservative while
> > processing invalid kfunc calls")
>
> I guess?.. I don't know if there was any other situation that this fix
> was handling, but if it's only due to unresolved kfuncs by libbpf,
> then yep.
It was specifically to support weak kfuncs with imm==0 off==0.
With libbpf doing poisoning and converting a kfunc call into a call
to an unknown helper, that code is no longer needed.
The question is whether we should try to support the new kernel plus
old libbpf combination.
* Re: [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs
2023-04-18 18:14 ` Alexei Starovoitov
@ 2023-04-18 18:45 ` Andrii Nakryiko
0 siblings, 0 replies; 12+ messages in thread
From: Andrii Nakryiko @ 2023-04-18 18:45 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Kernel Team
On Tue, Apr 18, 2023 at 11:14 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Apr 18, 2023 at 11:10 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Mon, Apr 17, 2023 at 6:10 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Mon, Apr 17, 2023 at 5:22 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> > > > insn[0].imm = ext->ksym.kernel_btf_id;
> > > > insn[0].off = ext->ksym.btf_fd_idx;
> > > > - } else { /* unresolved weak kfunc */
> > > > - insn[0].imm = 0;
> > > > - insn[0].off = 0;
> > > > + } else { /* unresolved weak kfunc call */
> > > > + poison_kfunc_call(prog, i, relo->insn_idx, insn,
> > > > + relo->ext_idx, ext);
> > >
> > > With that done should we remove:
> > > /* skip for now, but return error when we find this in fixup_kfunc_call */
> > > if (!insn->imm)
> > > return 0;
> > > in check_kfunc_call()...
> > >
> > > and if (!func_id && !offset) in add_kfunc_call() ?
> > >
> > > That was added in commit a5d827275241 ("bpf: Be conservative while
> > > processing invalid kfunc calls")
> >
> > I guess?.. I don't know if there was any other situation that this fix
> > was handling, but if it's only due to unresolved kfuncs by libbpf,
> > then yep.
>
> It was specifically to support weak kfunc with imm==0 off==0.
> With libbpf doing poisoning and converting call kfunc into call
> unknown helper that code is no longer needed.
> The question is whether we should try to support new kernel plus old
> libbpf combination.
Let's drop this kernel-side kludge. I don't believe anyone is actively
using __weak kfuncs yet, so it doesn't seem like it will hurt anyone.
But let's do it in a follow up if the series is good to go as is.
* Re: [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
` (5 preceding siblings ...)
2023-04-18 0:21 ` [PATCH bpf-next 6/6] libbpf: mark bpf_iter_num_{new,next,destroy} as __weak Andrii Nakryiko
@ 2023-04-18 20:00 ` patchwork-bot+netdevbpf
6 siblings, 0 replies; 12+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-18 20:00 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Mon, 17 Apr 2023 17:21:42 -0700 you wrote:
> This patch set moves bpf_for(), bpf_for_each(), and bpf_repeat() macros from
> selftests-internal bpf_misc.h header to libbpf-provided bpf_helpers.h header.
> To do this in a way to allow users to feature-detect and guard such
> bpf_for()/bpf_for_each() uses on old kernels we also extend libbpf to improve
> unresolved kfunc calls handling and reporting. This lets us mark
> bpf_iter_num_{new,next,destroy}() declarations as __weak, and thus not fail
> program loading outright if such kfuncs are missing on the host kernel.
>
> [...]
Here is the summary with links:
- [bpf-next,1/6] libbpf: misc internal libbpf clean ups around log fixup
https://git.kernel.org/bpf/bpf-next/c/3055ddd654f6
- [bpf-next,2/6] libbpf: report vmlinux vs module name when dealing with ksyms
https://git.kernel.org/bpf/bpf-next/c/f709160d1724
- [bpf-next,3/6] libbpf: improve handling of unresolved kfuncs
https://git.kernel.org/bpf/bpf-next/c/05b6f766b25c
- [bpf-next,4/6] selftests/bpf: add missing __weak kfunc log fixup test
https://git.kernel.org/bpf/bpf-next/c/30bbfe3236b0
- [bpf-next,5/6] libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into bpf_helpers.h
https://git.kernel.org/bpf/bpf-next/c/c5e647416708
- [bpf-next,6/6] libbpf: mark bpf_iter_num_{new,next,destroy} as __weak
https://git.kernel.org/bpf/bpf-next/c/94dccba79520
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2023-04-18 20:00 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-18 0:21 [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 1/6] libbpf: misc internal libbpf clean ups around log fixup Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 2/6] libbpf: report vmlinux vs module name when dealing with ksyms Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 3/6] libbpf: improve handling of unresolved kfuncs Andrii Nakryiko
2023-04-18 1:10 ` Alexei Starovoitov
2023-04-18 18:10 ` Andrii Nakryiko
2023-04-18 18:14 ` Alexei Starovoitov
2023-04-18 18:45 ` Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 4/6] selftests/bpf: add missing __weak kfunc log fixup test Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 5/6] libbpf: move bpf_for(), bpf_for_each(), and bpf_repeat() into bpf_helpers.h Andrii Nakryiko
2023-04-18 0:21 ` [PATCH bpf-next 6/6] libbpf: mark bpf_iter_num_{new,next,destroy} as __weak Andrii Nakryiko
2023-04-18 20:00 ` [PATCH bpf-next 0/6] Provide bpf_for() and bpf_for_each() by libbpf patchwork-bot+netdevbpf