* [PATCH bpf-next v6 00/10] Registering struct_ops types from modules
@ 2023-10-22 5:03 thinker.li
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Under the constraints of the current implementation, struct_ops
types cannot be registered dynamically. This is a significant
limitation for modules like the upcoming fuse-bpf, which seeks to
implement a new struct_ops type. To address this issue, a new API is
introduced that allows modules to register new struct_ops types.
Previously, struct_ops types were defined in bpf_struct_ops_types.h
and collected as a static array. The new API lets callers add new
struct_ops types dynamically. The static array has been removed and
replaced by the per-btf struct_ops_tab.
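The per-btf struct_ops_tab added later in the series is, at its core, a
growable array of descriptors hung off each btf. Below is a minimal
userspace C sketch of that pattern; the type and function names
(struct_ops_tab, tab_add, tab_find) are illustrative stand-ins, not the
kernel API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins for the kernel types. */
struct bpf_struct_ops { const char *name; };

struct struct_ops_desc { struct bpf_struct_ops *st_ops; };

/* Growable table, mirroring the cnt/capacity/ops[] layout. */
struct struct_ops_tab {
	unsigned int cnt;
	unsigned int capacity;
	struct struct_ops_desc ops[];
};

/* Add an entry, doubling capacity as needed; returns the slot or NULL. */
static struct struct_ops_desc *
tab_add(struct struct_ops_tab **ptab, struct bpf_struct_ops *st_ops)
{
	struct struct_ops_tab *tab = *ptab;

	if (!tab || tab->cnt == tab->capacity) {
		unsigned int cap = tab ? tab->capacity * 2 : 4;
		struct struct_ops_tab *ntab;

		ntab = realloc(tab, sizeof(*ntab) + cap * sizeof(ntab->ops[0]));
		if (!ntab)
			return NULL;
		if (!tab)
			ntab->cnt = 0;
		ntab->capacity = cap;
		*ptab = tab = ntab;
	}
	tab->ops[tab->cnt].st_ops = st_ops;
	return &tab->ops[tab->cnt++];
}

/* Look up a registered type by name. */
static struct struct_ops_desc *
tab_find(struct struct_ops_tab *tab, const char *name)
{
	unsigned int i;

	for (i = 0; tab && i < tab->cnt; i++)
		if (!strcmp(tab->ops[i].st_ops->name, name))
			return &tab->ops[i];
	return NULL;
}
```

Unlike the old static array, nothing here is fixed at build time, which
is what lets a module add a type after boot.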
The struct_ops subsystem relies on BTF to determine the layout of
values in a struct_ops map and identify the subsystem that the
struct_ops map registers to. However, the kernel BTF does not include
the type information of struct_ops types defined by a module. The
struct_ops subsystem requires knowledge of the corresponding module
for a given struct_ops map and the utilization of BTF information from
that module. libbpf is extended to determine the correct module for
accessing the BTF information and to pass an identifier (the FD) of the
module BTF to the kernel. The kernel then looks up type information and
registered struct_ops types directly in the given BTF.
If a module exits while one or more struct_ops maps still refer to a
struct_ops type defined by the module, it can lead to unforeseen
complications. Therefore, it is crucial to ensure that a module
remains intact as long as any struct_ops map is still linked to a
struct_ops type defined by the module. To achieve this, every
struct_ops map holds a reference to the module while being registered.
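The lifetime rule above — a map pins its defining module for as long as
it exists — can be sketched in userspace with a plain refcount standing
in for the kernel's module get/put machinery. All names below
(fake_module, map_hold_module, module_try_unload) are hypothetical, not
the kernel calls:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a module with a refcount; the kernel uses
 * btf_try_get_module()/module_put() instead. */
struct fake_module {
	int refcnt;
	int live;	/* 0 once the module has been unloaded */
};

struct fake_st_map {
	struct fake_module *owner;
};

/* Mimics try-get semantics: fails if the module is already gone. */
static int map_hold_module(struct fake_st_map *map, struct fake_module *mod)
{
	if (!mod->live)
		return -1;
	mod->refcnt++;
	map->owner = mod;
	return 0;
}

static void map_release_module(struct fake_st_map *map)
{
	if (map->owner) {
		map->owner->refcnt--;
		map->owner = NULL;
	}
}

/* Unload only succeeds once no map holds a reference. */
static int module_try_unload(struct fake_module *mod)
{
	if (mod->refcnt > 0)
		return -1;
	mod->live = 0;
	return 0;
}
```

The key property is the ordering: unload cannot succeed while any map
still holds the module, and a map cannot take a reference on a module
that is already gone.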
Changes from v5:
- As the 2nd patch, we introduce "bpf_struct_ops_desc". This change
involves moving certain members of "bpf_struct_ops" to
"bpf_struct_ops_desc", which becomes a part of
"btf_struct_ops_tab". This ensures that these members remain
accessible even when the owner module of a "bpf_struct_ops" is
unloaded.
- Correct the order of arguments when calling
in the 3rd patch.
- Remove the owner argument from bpf_struct_ops_init_one(). Instead,
callers should fill in st_ops->owner.
- Make sure to hold the owner module when calling
bpf_struct_ops_find() and bpf_struct_ops_find_value() in the 6th
patch.
- Merge the functions register_bpf_struct_ops_btf() and
register_bpf_struct_ops() into a single function and relocate it to
btf.c for better organization and clarity.
- Undo the name modifications made to find_kernel_btf_id() and
find_ksym_btf_id() in the 8th patch.
Changes from v4:
- Fix the dependency between testing_helpers.o and
rcu_tasks_trace_gp.skel.h.
Changes from v3:
- Fix according to the feedback for v3.
- Change of the order of arguments to make btf as the first
argument.
- Use btf_try_get_module() instead of try_get_module() since the
module pointed to by st_ops->owner can be gone while someone is
still holding its btf.
- Move variables defined by BPF_STRUCT_OPS_COMMON_VALUE to struct
bpf_struct_ops_common_value to make validation easier.
- Register the struct_ops type defined by bpf_testmod in its init
function.
- Rename field name to 'value_type_btf_obj_fd' to make it explicit.
- Fix leaking of btf objects on error.
- st_maps hold a reference on their modules to keep the modules alive
and prevent them from being unloaded.
- bpf_map of libbpf keeps mod_btf_fd instead of a pointer to module_btf.
- Do call_rcu_tasks_trace() in kern_sync_rcu() to ensure the
bpf_testmod is unloaded properly. It uses rcu_tasks_trace_gp to
trigger call_rcu_tasks_trace() in the kernel.
- Merge and reorder patches in a reasonable order.
Changes from v2:
- Remove struct_ops array, and add a per-btf (module) struct_ops_tab
to collect registered struct_ops types.
- Validate value_type by checking member names and types.
---
v5: https://lore.kernel.org/all/20231017162306.176586-1-thinker.li@gmail.com/
v4: https://lore.kernel.org/all/20231013224304.187218-1-thinker.li@gmail.com/
v3: https://lore.kernel.org/all/20230920155923.151136-1-thinker.li@gmail.com/
v2: https://lore.kernel.org/all/20230913061449.1918219-1-thinker.li@gmail.com/
Kui-Feng Lee (10):
bpf: refactor struct_ops type initialization into a function.
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: hold module for bpf_struct_ops_map.
bpf: validate value_type
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
include/linux/bpf.h | 51 ++-
include/linux/bpf_verifier.h | 1 +
include/linux/btf.h | 9 +
include/uapi/linux/bpf.h | 5 +
kernel/bpf/bpf_struct_ops.c | 430 ++++++++++--------
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 113 ++++-
kernel/bpf/syscall.c | 2 +-
kernel/bpf/verifier.c | 21 +-
net/bpf/bpf_dummy_struct_ops.c | 23 +-
net/ipv4/bpf_tcp_ca.c | 24 +-
tools/include/uapi/linux/bpf.h | 5 +
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 5 +-
tools/lib/bpf/libbpf.c | 50 +-
tools/testing/selftests/bpf/Makefile | 2 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 59 +++
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 5 +
.../bpf/prog_tests/test_struct_ops_module.c | 38 ++
.../selftests/bpf/progs/struct_ops_module.c | 30 ++
tools/testing/selftests/bpf/testing_helpers.c | 35 ++
21 files changed, 689 insertions(+), 235 deletions(-)
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
--
2.34.1
* [PATCH bpf-next v6 01/10] bpf: refactor struct_ops type initialization into a function.
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Move the majority of the code into bpf_struct_ops_init_one(), which can
then be used to initialize newly registered, dynamically allocated
struct_ops types in the following patches.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/bpf_struct_ops.c | 157 +++++++++++++++++++-----------------
1 file changed, 83 insertions(+), 74 deletions(-)
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index db6176fb64dc..627cf1ea840a 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -110,102 +110,111 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
static const struct btf_type *module_type;
-void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
+static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
+ struct btf *btf,
+ struct bpf_verifier_log *log)
{
- s32 type_id, value_id, module_id;
const struct btf_member *member;
- struct bpf_struct_ops *st_ops;
const struct btf_type *t;
+ s32 type_id, value_id;
char value_name[128];
const char *mname;
- u32 i, j;
+ int i;
- /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
-#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name);
-#include "bpf_struct_ops_types.h"
-#undef BPF_STRUCT_OPS_TYPE
+ if (strlen(st_ops->name) + VALUE_PREFIX_LEN >=
+ sizeof(value_name)) {
+ pr_warn("struct_ops name %s is too long\n",
+ st_ops->name);
+ return;
+ }
+ sprintf(value_name, "%s%s", VALUE_PREFIX, st_ops->name);
- module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT);
- if (module_id < 0) {
- pr_warn("Cannot find struct module in btf_vmlinux\n");
+ value_id = btf_find_by_name_kind(btf, value_name,
+ BTF_KIND_STRUCT);
+ if (value_id < 0) {
+ pr_warn("Cannot find struct %s in btf_vmlinux\n",
+ value_name);
return;
}
- module_type = btf_type_by_id(btf, module_id);
- for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- st_ops = bpf_struct_ops[i];
+ type_id = btf_find_by_name_kind(btf, st_ops->name,
+ BTF_KIND_STRUCT);
+ if (type_id < 0) {
+ pr_warn("Cannot find struct %s in btf_vmlinux\n",
+ st_ops->name);
+ return;
+ }
+ t = btf_type_by_id(btf, type_id);
+ if (btf_type_vlen(t) > BPF_STRUCT_OPS_MAX_NR_MEMBERS) {
+ pr_warn("Cannot support #%u members in struct %s\n",
+ btf_type_vlen(t), st_ops->name);
+ return;
+ }
- if (strlen(st_ops->name) + VALUE_PREFIX_LEN >=
- sizeof(value_name)) {
- pr_warn("struct_ops name %s is too long\n",
+ for_each_member(i, t, member) {
+ const struct btf_type *func_proto;
+
+ mname = btf_name_by_offset(btf, member->name_off);
+ if (!*mname) {
+ pr_warn("anon member in struct %s is not supported\n",
st_ops->name);
- continue;
+ break;
}
- sprintf(value_name, "%s%s", VALUE_PREFIX, st_ops->name);
- value_id = btf_find_by_name_kind(btf, value_name,
- BTF_KIND_STRUCT);
- if (value_id < 0) {
- pr_warn("Cannot find struct %s in btf_vmlinux\n",
- value_name);
- continue;
+ if (__btf_member_bitfield_size(t, member)) {
+ pr_warn("bit field member %s in struct %s is not supported\n",
+ mname, st_ops->name);
+ break;
}
- type_id = btf_find_by_name_kind(btf, st_ops->name,
- BTF_KIND_STRUCT);
- if (type_id < 0) {
- pr_warn("Cannot find struct %s in btf_vmlinux\n",
- st_ops->name);
- continue;
- }
- t = btf_type_by_id(btf, type_id);
- if (btf_type_vlen(t) > BPF_STRUCT_OPS_MAX_NR_MEMBERS) {
- pr_warn("Cannot support #%u members in struct %s\n",
- btf_type_vlen(t), st_ops->name);
- continue;
+ func_proto = btf_type_resolve_func_ptr(btf,
+ member->type,
+ NULL);
+ if (func_proto &&
+ btf_distill_func_proto(log, btf,
+ func_proto, mname,
+ &st_ops->func_models[i])) {
+ pr_warn("Error in parsing func ptr %s in struct %s\n",
+ mname, st_ops->name);
+ break;
}
+ }
- for_each_member(j, t, member) {
- const struct btf_type *func_proto;
+ if (i == btf_type_vlen(t)) {
+ if (st_ops->init(btf)) {
+ pr_warn("Error in init bpf_struct_ops %s\n",
+ st_ops->name);
+ } else {
+ st_ops->type_id = type_id;
+ st_ops->type = t;
+ st_ops->value_id = value_id;
+ st_ops->value_type = btf_type_by_id(btf,
+ value_id);
+ }
+ }
+}
- mname = btf_name_by_offset(btf, member->name_off);
- if (!*mname) {
- pr_warn("anon member in struct %s is not supported\n",
- st_ops->name);
- break;
- }
+void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
+{
+ struct bpf_struct_ops *st_ops;
+ s32 module_id;
+ u32 i;
- if (__btf_member_bitfield_size(t, member)) {
- pr_warn("bit field member %s in struct %s is not supported\n",
- mname, st_ops->name);
- break;
- }
+ /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
+#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name);
+#include "bpf_struct_ops_types.h"
+#undef BPF_STRUCT_OPS_TYPE
- func_proto = btf_type_resolve_func_ptr(btf,
- member->type,
- NULL);
- if (func_proto &&
- btf_distill_func_proto(log, btf,
- func_proto, mname,
- &st_ops->func_models[j])) {
- pr_warn("Error in parsing func ptr %s in struct %s\n",
- mname, st_ops->name);
- break;
- }
- }
+ module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT);
+ if (module_id < 0) {
+ pr_warn("Cannot find struct module in btf_vmlinux\n");
+ return;
+ }
+ module_type = btf_type_by_id(btf, module_id);
- if (j == btf_type_vlen(t)) {
- if (st_ops->init(btf)) {
- pr_warn("Error in init bpf_struct_ops %s\n",
- st_ops->name);
- } else {
- st_ops->type_id = type_id;
- st_ops->type = t;
- st_ops->value_id = value_id;
- st_ops->value_type = btf_type_by_id(btf,
- value_id);
- }
- }
+ for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
+ st_ops = bpf_struct_ops[i];
+ bpf_struct_ops_init_one(st_ops, btf, log);
}
}
--
2.34.1
* [PATCH bpf-next v6 02/10] bpf, net: introduce bpf_struct_ops_desc.
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee, netdev
From: Kui-Feng Lee <thinker.li@gmail.com>
Move some members of bpf_struct_ops to bpf_struct_ops_desc. Once we
introduce the APIs to register new bpf_struct_ops types from modules,
a bpf_struct_ops may be destroyed when its module is unloaded. Moving
these members to bpf_struct_ops_desc keeps this data available even
after the module is unloaded.
Cc: netdev@vger.kernel.org
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 13 ++++--
kernel/bpf/bpf_struct_ops.c | 76 +++++++++++++++++-----------------
kernel/bpf/verifier.c | 8 ++--
net/bpf/bpf_dummy_struct_ops.c | 9 +++-
net/ipv4/bpf_tcp_ca.c | 8 +++-
5 files changed, 68 insertions(+), 46 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 30063a760b5a..705e1accfb98 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1626,17 +1626,22 @@ struct bpf_struct_ops {
void (*unreg)(void *kdata);
int (*update)(void *kdata, void *old_kdata);
int (*validate)(void *kdata);
- const struct btf_type *type;
- const struct btf_type *value_type;
const char *name;
struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
+};
+
+struct bpf_struct_ops_desc {
+ struct bpf_struct_ops *st_ops;
+
+ const struct btf_type *type;
+ const struct btf_type *value_type;
u32 type_id;
u32 value_id;
};
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
#define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
-const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id);
+const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id);
void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log);
bool bpf_struct_ops_get(const void *kdata);
void bpf_struct_ops_put(const void *kdata);
@@ -1679,7 +1684,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
#endif
#else
-static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
+static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id)
{
return NULL;
}
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 627cf1ea840a..e35d6321a2f8 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -32,7 +32,7 @@ struct bpf_struct_ops_value {
struct bpf_struct_ops_map {
struct bpf_map map;
struct rcu_head rcu;
- const struct bpf_struct_ops *st_ops;
+ const struct bpf_struct_ops_desc *st_ops_desc;
/* protect map_update */
struct mutex lock;
/* link has all the bpf_links that is populated
@@ -92,9 +92,9 @@ enum {
__NR_BPF_STRUCT_OPS_TYPE,
};
-static struct bpf_struct_ops * const bpf_struct_ops[] = {
+static struct bpf_struct_ops_desc bpf_struct_ops[] = {
#define BPF_STRUCT_OPS_TYPE(_name) \
- [BPF_STRUCT_OPS_TYPE_##_name] = &bpf_##_name,
+ [BPF_STRUCT_OPS_TYPE_##_name] = { .st_ops = &bpf_##_name },
#include "bpf_struct_ops_types.h"
#undef BPF_STRUCT_OPS_TYPE
};
@@ -110,10 +110,11 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
static const struct btf_type *module_type;
-static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
+static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
struct btf *btf,
struct bpf_verifier_log *log)
{
+ struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
const struct btf_member *member;
const struct btf_type *t;
s32 type_id, value_id;
@@ -185,18 +186,18 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
pr_warn("Error in init bpf_struct_ops %s\n",
st_ops->name);
} else {
- st_ops->type_id = type_id;
- st_ops->type = t;
- st_ops->value_id = value_id;
- st_ops->value_type = btf_type_by_id(btf,
- value_id);
+ st_ops_desc->type_id = type_id;
+ st_ops_desc->type = t;
+ st_ops_desc->value_id = value_id;
+ st_ops_desc->value_type = btf_type_by_id(btf,
+ value_id);
}
}
}
void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
{
- struct bpf_struct_ops *st_ops;
+ struct bpf_struct_ops_desc *st_ops_desc;
s32 module_id;
u32 i;
@@ -213,14 +214,14 @@ void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
module_type = btf_type_by_id(btf, module_id);
for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- st_ops = bpf_struct_ops[i];
- bpf_struct_ops_init_one(st_ops, btf, log);
+ st_ops_desc = &bpf_struct_ops[i];
+ bpf_struct_ops_init_one(st_ops_desc, btf, log);
}
}
extern struct btf *btf_vmlinux;
-static const struct bpf_struct_ops *
+static const struct bpf_struct_ops_desc *
bpf_struct_ops_find_value(u32 value_id)
{
unsigned int i;
@@ -229,14 +230,14 @@ bpf_struct_ops_find_value(u32 value_id)
return NULL;
for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- if (bpf_struct_ops[i]->value_id == value_id)
- return bpf_struct_ops[i];
+ if (bpf_struct_ops[i].value_id == value_id)
+ return &bpf_struct_ops[i];
}
return NULL;
}
-const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
+const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id)
{
unsigned int i;
@@ -244,8 +245,8 @@ const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
return NULL;
for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- if (bpf_struct_ops[i]->type_id == type_id)
- return bpf_struct_ops[i];
+ if (bpf_struct_ops[i].type_id == type_id)
+ return &bpf_struct_ops[i];
}
return NULL;
@@ -305,7 +306,7 @@ static void *bpf_struct_ops_map_lookup_elem(struct bpf_map *map, void *key)
static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
{
- const struct btf_type *t = st_map->st_ops->type;
+ const struct btf_type *t = st_map->st_ops_desc->type;
u32 i;
for (i = 0; i < btf_type_vlen(t); i++) {
@@ -379,10 +380,11 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
void *value, u64 flags)
{
struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
- const struct bpf_struct_ops *st_ops = st_map->st_ops;
+ const struct bpf_struct_ops_desc *st_ops_desc = st_map->st_ops_desc;
+ const struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
struct bpf_struct_ops_value *uvalue, *kvalue;
const struct btf_member *member;
- const struct btf_type *t = st_ops->type;
+ const struct btf_type *t = st_ops_desc->type;
struct bpf_tramp_links *tlinks;
void *udata, *kdata;
int prog_fd, err;
@@ -395,7 +397,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (*(u32 *)key != 0)
return -E2BIG;
- err = check_zero_holes(st_ops->value_type, value);
+ err = check_zero_holes(st_ops_desc->value_type, value);
if (err)
return err;
@@ -487,7 +489,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
}
if (prog->type != BPF_PROG_TYPE_STRUCT_OPS ||
- prog->aux->attach_btf_id != st_ops->type_id ||
+ prog->aux->attach_btf_id != st_ops_desc->type_id ||
prog->expected_attach_type != i) {
bpf_prog_put(prog);
err = -EINVAL;
@@ -583,7 +585,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
BPF_STRUCT_OPS_STATE_TOBEFREE);
switch (prev_state) {
case BPF_STRUCT_OPS_STATE_INUSE:
- st_map->st_ops->unreg(&st_map->kvalue.data);
+ st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
bpf_map_put(map);
return 0;
case BPF_STRUCT_OPS_STATE_TOBEFREE:
@@ -664,22 +666,22 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr)
static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
{
- const struct bpf_struct_ops *st_ops;
+ const struct bpf_struct_ops_desc *st_ops_desc;
size_t st_map_size;
struct bpf_struct_ops_map *st_map;
const struct btf_type *t, *vt;
struct bpf_map *map;
int ret;
- st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
- if (!st_ops)
+ st_ops_desc = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
+ if (!st_ops_desc)
return ERR_PTR(-ENOTSUPP);
- vt = st_ops->value_type;
+ vt = st_ops_desc->value_type;
if (attr->value_size != vt->size)
return ERR_PTR(-EINVAL);
- t = st_ops->type;
+ t = st_ops_desc->type;
st_map_size = sizeof(*st_map) +
/* kvalue stores the
@@ -691,7 +693,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
if (!st_map)
return ERR_PTR(-ENOMEM);
- st_map->st_ops = st_ops;
+ st_map->st_ops_desc = st_ops_desc;
map = &st_map->map;
ret = bpf_jit_charge_modmem(PAGE_SIZE);
@@ -729,8 +731,8 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
{
struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
- const struct bpf_struct_ops *st_ops = st_map->st_ops;
- const struct btf_type *vt = st_ops->value_type;
+ const struct bpf_struct_ops_desc *st_ops_desc = st_map->st_ops_desc;
+ const struct btf_type *vt = st_ops_desc->value_type;
u64 usage;
usage = sizeof(*st_map) +
@@ -804,7 +806,7 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
/* st_link->map can be NULL if
* bpf_struct_ops_link_create() fails to register.
*/
- st_map->st_ops->unreg(&st_map->kvalue.data);
+ st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
bpf_map_put(&st_map->map);
}
kfree(st_link);
@@ -851,7 +853,7 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
if (!bpf_struct_ops_valid_to_reg(new_map))
return -EINVAL;
- if (!st_map->st_ops->update)
+ if (!st_map->st_ops_desc->st_ops->update)
return -EOPNOTSUPP;
mutex_lock(&update_mutex);
@@ -864,12 +866,12 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
old_st_map = container_of(old_map, struct bpf_struct_ops_map, map);
/* The new and old struct_ops must be the same type. */
- if (st_map->st_ops != old_st_map->st_ops) {
+ if (st_map->st_ops_desc != old_st_map->st_ops_desc) {
err = -EINVAL;
goto err_out;
}
- err = st_map->st_ops->update(st_map->kvalue.data, old_st_map->kvalue.data);
+ err = st_map->st_ops_desc->st_ops->update(st_map->kvalue.data, old_st_map->kvalue.data);
if (err)
goto err_out;
@@ -920,7 +922,7 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
if (err)
goto err_out;
- err = st_map->st_ops->reg(st_map->kvalue.data);
+ err = st_map->st_ops_desc->st_ops->reg(st_map->kvalue.data);
if (err) {
bpf_link_cleanup(&link_primer);
link = NULL;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a7178ecf676d..5c8029606dcc 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19619,6 +19619,7 @@ static void print_verification_stats(struct bpf_verifier_env *env)
static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
{
const struct btf_type *t, *func_proto;
+ const struct bpf_struct_ops_desc *st_ops_desc;
const struct bpf_struct_ops *st_ops;
const struct btf_member *member;
struct bpf_prog *prog = env->prog;
@@ -19631,14 +19632,15 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
}
btf_id = prog->aux->attach_btf_id;
- st_ops = bpf_struct_ops_find(btf_id);
- if (!st_ops) {
+ st_ops_desc = bpf_struct_ops_find(btf_id);
+ if (!st_ops_desc) {
verbose(env, "attach_btf_id %u is not a supported struct\n",
btf_id);
return -ENOTSUPP;
}
+ st_ops = st_ops_desc->st_ops;
- t = st_ops->type;
+ t = st_ops_desc->type;
member_idx = prog->expected_attach_type;
if (member_idx >= btf_type_vlen(t)) {
verbose(env, "attach to invalid member idx %u of struct %s\n",
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 5918d1b32e19..ffa224053a6c 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -17,6 +17,8 @@ struct bpf_dummy_ops_test_args {
struct bpf_dummy_ops_state state;
};
+static struct btf *bpf_dummy_ops_btf;
+
static struct bpf_dummy_ops_test_args *
dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
{
@@ -85,9 +87,13 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
void *image = NULL;
unsigned int op_idx;
int prog_ret;
+ u32 type_id;
int err;
- if (prog->aux->attach_btf_id != st_ops->type_id)
+ type_id = btf_find_by_name_kind(bpf_dummy_ops_btf,
+ bpf_bpf_dummy_ops.name,
+ BTF_KIND_STRUCT);
+ if (prog->aux->attach_btf_id != type_id)
return -EOPNOTSUPP;
func_proto = prog->aux->attach_func_proto;
@@ -143,6 +149,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
static int bpf_dummy_init(struct btf *btf)
{
+ bpf_dummy_ops_btf = btf;
return 0;
}
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 39dcccf0f174..3c8b76578a2a 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -20,6 +20,7 @@ static u32 unsupported_ops[] = {
static const struct btf_type *tcp_sock_type;
static u32 tcp_sock_id, sock_id;
+static const struct btf_type *tcp_congestion_ops_type;
static int bpf_tcp_ca_init(struct btf *btf)
{
@@ -36,6 +37,11 @@ static int bpf_tcp_ca_init(struct btf *btf)
tcp_sock_id = type_id;
tcp_sock_type = btf_type_by_id(btf, tcp_sock_id);
+ type_id = btf_find_by_name_kind(btf, "tcp_congestion_ops", BTF_KIND_STRUCT);
+ if (type_id < 0)
+ return -EINVAL;
+ tcp_congestion_ops_type = btf_type_by_id(btf, type_id);
+
return 0;
}
@@ -149,7 +155,7 @@ static u32 prog_ops_moff(const struct bpf_prog *prog)
u32 midx;
midx = prog->expected_attach_type;
- t = bpf_tcp_congestion_ops.type;
+ t = tcp_congestion_ops_type;
m = &btf_type_member(t)[midx];
return __btf_member_bit_offset(t, m) / 8;
--
2.34.1
* [PATCH bpf-next v6 03/10] bpf: add struct_ops_tab to btf.
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Maintain a registry of registered struct_ops types in the per-btf (module)
struct_ops_tab. This registry allows for easy lookup of struct_ops types
that are registered by a specific module.
Every struct_ops type should have an associated module BTF to provide type
information since we are going to allow modules to define and register new
struct_ops types. Once this change is made, the bpf_struct_ops subsystem
knows where to look up type info with just a bpf_struct_ops.
The subsystem looks up struct_ops types in a given module BTF, although
for now that is always btf_vmlinux. Once struct_ops_tab is in use, BTFs
other than btf_vmlinux can be used as well.
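The change of scope can be sketched in userspace C: instead of one
global array, each BTF carries its own descriptor table, and lookup is
keyed by (btf, type_id). The names below (fake_btf, struct_ops_find) are
illustrative only, not the kernel functions:

```c
#include <assert.h>
#include <stddef.h>

/* Each (fake) btf owns its own table of struct_ops type ids, so the
 * same type_id can mean different types in different BTFs. */
struct fake_desc { unsigned int type_id; };

struct fake_btf {
	struct fake_desc descs[4];
	unsigned int cnt;
};

/* Lookup is scoped to one BTF rather than a single global array. */
static const struct fake_desc *
struct_ops_find(const struct fake_btf *btf, unsigned int type_id)
{
	unsigned int i;

	if (!btf)
		return NULL;
	for (i = 0; i < btf->cnt; i++)
		if (btf->descs[i].type_id == type_id)
			return &btf->descs[i];
	return NULL;
}
```

This mirrors why bpf_struct_ops_find() grows a btf argument in this
patch: type ids are only meaningful relative to the BTF that defined
them.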
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 5 +--
include/linux/btf.h | 8 +++++
kernel/bpf/bpf_struct_ops.c | 28 ++++++++-------
kernel/bpf/btf.c | 71 +++++++++++++++++++++++++++++++++++++
kernel/bpf/verifier.c | 2 +-
5 files changed, 99 insertions(+), 15 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 705e1accfb98..4f3b67932ded 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1633,6 +1633,7 @@ struct bpf_struct_ops {
struct bpf_struct_ops_desc {
struct bpf_struct_ops *st_ops;
+ struct btf *btf;
const struct btf_type *type;
const struct btf_type *value_type;
u32 type_id;
@@ -1641,7 +1642,7 @@ struct bpf_struct_ops_desc {
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
#define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
-const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id);
+const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id);
void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log);
bool bpf_struct_ops_get(const void *kdata);
void bpf_struct_ops_put(const void *kdata);
@@ -1684,7 +1685,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
#endif
#else
-static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id)
+static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id)
{
return NULL;
}
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 928113a80a95..8e37f7eb02c7 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -571,4 +571,12 @@ static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type
return btf_type_is_struct(t);
}
+struct bpf_struct_ops;
+struct bpf_struct_ops_desc;
+
+struct bpf_struct_ops_desc *
+btf_add_struct_ops(struct bpf_struct_ops *st_ops);
+const struct bpf_struct_ops_desc *
+btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
+
#endif
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index e35d6321a2f8..0bc21a39257d 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -186,6 +186,7 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
pr_warn("Error in init bpf_struct_ops %s\n",
st_ops->name);
} else {
+ st_ops_desc->btf = btf;
st_ops_desc->type_id = type_id;
st_ops_desc->type = t;
st_ops_desc->value_id = value_id;
@@ -222,7 +223,7 @@ void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
extern struct btf *btf_vmlinux;
static const struct bpf_struct_ops_desc *
-bpf_struct_ops_find_value(u32 value_id)
+bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
{
unsigned int i;
@@ -237,7 +238,8 @@ bpf_struct_ops_find_value(u32 value_id)
return NULL;
}
-const struct bpf_struct_ops_desc *bpf_struct_ops_find(u32 type_id)
+const struct bpf_struct_ops_desc *
+bpf_struct_ops_find(struct btf *btf, u32 type_id)
{
unsigned int i;
@@ -317,7 +319,7 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
}
}
-static int check_zero_holes(const struct btf_type *t, void *data)
+static int check_zero_holes(const struct btf *btf, const struct btf_type *t, void *data)
{
const struct btf_member *member;
u32 i, moff, msize, prev_mend = 0;
@@ -329,8 +331,8 @@ static int check_zero_holes(const struct btf_type *t, void *data)
memchr_inv(data + prev_mend, 0, moff - prev_mend))
return -EINVAL;
- mtype = btf_type_by_id(btf_vmlinux, member->type);
- mtype = btf_resolve_size(btf_vmlinux, mtype, &msize);
+ mtype = btf_type_by_id(btf, member->type);
+ mtype = btf_resolve_size(btf, mtype, &msize);
if (IS_ERR(mtype))
return PTR_ERR(mtype);
prev_mend = moff + msize;
@@ -397,12 +399,12 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (*(u32 *)key != 0)
return -E2BIG;
- err = check_zero_holes(st_ops_desc->value_type, value);
+ err = check_zero_holes(st_ops_desc->btf, st_ops_desc->value_type, value);
if (err)
return err;
uvalue = value;
- err = check_zero_holes(t, uvalue->data);
+ err = check_zero_holes(st_ops_desc->btf, t, uvalue->data);
if (err)
return err;
@@ -437,7 +439,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
u32 moff;
moff = __btf_member_bit_offset(t, member) / 8;
- ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
+ ptype = btf_type_resolve_ptr(st_ops_desc->btf, member->type, NULL);
if (ptype == module_type) {
if (*(void **)(udata + moff))
goto reset_unlock;
@@ -462,8 +464,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (!ptype || !btf_type_is_func_proto(ptype)) {
u32 msize;
- mtype = btf_type_by_id(btf_vmlinux, member->type);
- mtype = btf_resolve_size(btf_vmlinux, mtype, &msize);
+ mtype = btf_type_by_id(st_ops_desc->btf, member->type);
+ mtype = btf_resolve_size(st_ops_desc->btf, mtype, &msize);
if (IS_ERR(mtype)) {
err = PTR_ERR(mtype);
goto reset_unlock;
@@ -602,6 +604,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
static void bpf_struct_ops_map_seq_show_elem(struct bpf_map *map, void *key,
struct seq_file *m)
{
+ struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
void *value;
int err;
@@ -611,7 +614,8 @@ static void bpf_struct_ops_map_seq_show_elem(struct bpf_map *map, void *key,
err = bpf_struct_ops_map_sys_lookup_elem(map, key, value);
if (!err) {
- btf_type_seq_show(btf_vmlinux, map->btf_vmlinux_value_type_id,
+ btf_type_seq_show(st_map->st_ops_desc->btf,
+ map->btf_vmlinux_value_type_id,
value, m);
seq_puts(m, "\n");
}
@@ -673,7 +677,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
struct bpf_map *map;
int ret;
- st_ops_desc = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
+ st_ops_desc = bpf_struct_ops_find_value(btf_vmlinux, attr->btf_vmlinux_value_type_id);
if (!st_ops_desc)
return ERR_PTR(-ENOTSUPP);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index f93e835d90af..c1e2ed6c972e 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -241,6 +241,12 @@ struct btf_id_dtor_kfunc_tab {
struct btf_id_dtor_kfunc dtors[];
};
+struct btf_struct_ops_tab {
+ u32 cnt;
+ u32 capacity;
+ struct bpf_struct_ops_desc ops[];
+};
+
struct btf {
void *data;
struct btf_type **types;
@@ -258,6 +264,7 @@ struct btf {
struct btf_kfunc_set_tab *kfunc_set_tab;
struct btf_id_dtor_kfunc_tab *dtor_kfunc_tab;
struct btf_struct_metas *struct_meta_tab;
+ struct btf_struct_ops_tab *struct_ops_tab;
/* split BTF support */
struct btf *base_btf;
@@ -1688,11 +1695,20 @@ static void btf_free_struct_meta_tab(struct btf *btf)
btf->struct_meta_tab = NULL;
}
+static void btf_free_struct_ops_tab(struct btf *btf)
+{
+ struct btf_struct_ops_tab *tab = btf->struct_ops_tab;
+
+ kfree(tab);
+ btf->struct_ops_tab = NULL;
+}
+
static void btf_free(struct btf *btf)
{
btf_free_struct_meta_tab(btf);
btf_free_dtor_kfunc_tab(btf);
btf_free_kfunc_set_tab(btf);
+ btf_free_struct_ops_tab(btf);
kvfree(btf->types);
kvfree(btf->resolved_sizes);
kvfree(btf->resolved_ids);
@@ -8601,3 +8617,58 @@ bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
return !strncmp(reg_name, arg_name, cmp_len);
}
+
+static struct bpf_struct_ops_desc *
+btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops)
+{
+ struct btf_struct_ops_tab *tab, *new_tab;
+ int i;
+
+ if (!btf)
+ return ERR_PTR(-ENOENT);
+
+ /* Assume this function is called for a module when the module is
+ * loading.
+ */
+
+ tab = btf->struct_ops_tab;
+ if (!tab) {
+ tab = kzalloc(offsetof(struct btf_struct_ops_tab, ops[4]),
+ GFP_KERNEL);
+ if (!tab)
+ return ERR_PTR(-ENOMEM);
+ tab->capacity = 4;
+ btf->struct_ops_tab = tab;
+ }
+
+ for (i = 0; i < tab->cnt; i++)
+ if (tab->ops[i].st_ops == st_ops)
+ return ERR_PTR(-EEXIST);
+
+ if (tab->cnt == tab->capacity) {
+ new_tab = krealloc(tab, sizeof(*tab) +
+ sizeof(struct bpf_struct_ops *) *
+ tab->capacity * 2, GFP_KERNEL);
+ if (!new_tab)
+ return ERR_PTR(-ENOMEM);
+ tab = new_tab;
+ tab->capacity *= 2;
+ btf->struct_ops_tab = tab;
+ }
+
+ btf->struct_ops_tab->ops[btf->struct_ops_tab->cnt].st_ops = st_ops;
+
+ return &btf->struct_ops_tab->ops[btf->struct_ops_tab->cnt++];
+}
+
+const struct bpf_struct_ops_desc *btf_get_struct_ops(struct btf *btf, u32 *ret_cnt)
+{
+ if (!btf)
+ return NULL;
+ if (!btf->struct_ops_tab)
+ return NULL;
+
+ *ret_cnt = btf->struct_ops_tab->cnt;
+ return (const struct bpf_struct_ops_desc *)btf->struct_ops_tab->ops;
+}
+
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5c8029606dcc..7d0cf7289092 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19632,7 +19632,7 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
}
btf_id = prog->aux->attach_btf_id;
- st_ops_desc = bpf_struct_ops_find(btf_id);
+ st_ops_desc = bpf_struct_ops_find(btf_vmlinux, btf_id);
if (!st_ops_desc) {
verbose(env, "attach_btf_id %u is not a supported struct\n",
btf_id);
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH bpf-next v6 04/10] bpf: hold module for bpf_struct_ops_map.
2023-10-22 5:03 [PATCH bpf-next v6 00/10] Registrating struct_ops types from modules thinker.li
` (2 preceding siblings ...)
2023-10-22 5:03 ` [PATCH bpf-next v6 03/10] bpf: add struct_ops_tab to btf thinker.li
@ 2023-10-22 5:03 ` thinker.li
2023-10-26 21:11 ` Eduard Zingerman
2023-10-22 5:03 ` [PATCH bpf-next v6 05/10] bpf: validate value_type thinker.li
` (5 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Hold the owner module so that it remains accessible whenever a struct_ops
map of a struct_ops type provided by the module is still in use.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 1 +
include/linux/btf.h | 2 +-
kernel/bpf/bpf_struct_ops.c | 70 ++++++++++++++++++++++++++++++-------
3 files changed, 60 insertions(+), 13 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4f3b67932ded..26feb8a2da4f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1626,6 +1626,7 @@ struct bpf_struct_ops {
void (*unreg)(void *kdata);
int (*update)(void *kdata, void *old_kdata);
int (*validate)(void *kdata);
+ struct module *owner;
const char *name;
struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
};
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 8e37f7eb02c7..6a64b372b7a0 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -575,7 +575,7 @@ struct bpf_struct_ops;
struct bpf_struct_ops_desc;
struct bpf_struct_ops_desc *
-btf_add_struct_ops(struct bpf_struct_ops *st_ops);
+btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
const struct bpf_struct_ops_desc *
btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 0bc21a39257d..413a3f8b26ba 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -388,6 +388,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
const struct btf_member *member;
const struct btf_type *t = st_ops_desc->type;
struct bpf_tramp_links *tlinks;
+ struct module *mod = NULL;
void *udata, *kdata;
int prog_fd, err;
void *image, *image_end;
@@ -425,6 +426,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
goto unlock;
}
+ if (st_ops_desc->btf != btf_vmlinux) {
+ mod = btf_try_get_module(st_ops_desc->btf);
+ if (!mod) {
+ err = -EBUSY;
+ goto unlock;
+ }
+ }
+
memcpy(uvalue, value, map->value_size);
udata = &uvalue->data;
@@ -552,6 +561,10 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
* can be seen once BPF_STRUCT_OPS_STATE_INUSE is set.
*/
smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE);
+ /* Hold the owner module until the struct_ops is
+ * unregistered
+ */
+ mod = NULL;
goto unlock;
}
@@ -568,6 +581,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
memset(uvalue, 0, map->value_size);
memset(kvalue, 0, map->value_size);
unlock:
+ module_put(mod);
kfree(tlinks);
mutex_unlock(&st_map->lock);
return err;
@@ -588,6 +602,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
switch (prev_state) {
case BPF_STRUCT_OPS_STATE_INUSE:
st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
+ module_put(st_map->st_ops_desc->st_ops->owner);
bpf_map_put(map);
return 0;
case BPF_STRUCT_OPS_STATE_TOBEFREE:
@@ -674,6 +689,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
size_t st_map_size;
struct bpf_struct_ops_map *st_map;
const struct btf_type *t, *vt;
+ struct module *mod = NULL;
struct bpf_map *map;
int ret;
@@ -681,9 +697,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
if (!st_ops_desc)
return ERR_PTR(-ENOTSUPP);
+ if (st_ops_desc->btf != btf_vmlinux) {
+ mod = btf_try_get_module(st_ops_desc->btf);
+ if (!mod)
+ return ERR_PTR(-EINVAL);
+ }
+
vt = st_ops_desc->value_type;
- if (attr->value_size != vt->size)
- return ERR_PTR(-EINVAL);
+ if (attr->value_size != vt->size) {
+ ret = -EINVAL;
+ goto errout;
+ }
t = st_ops_desc->type;
@@ -694,17 +718,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
(vt->size - sizeof(struct bpf_struct_ops_value));
st_map = bpf_map_area_alloc(st_map_size, NUMA_NO_NODE);
- if (!st_map)
- return ERR_PTR(-ENOMEM);
+ if (!st_map) {
+ ret = -ENOMEM;
+ goto errout;
+ }
st_map->st_ops_desc = st_ops_desc;
map = &st_map->map;
ret = bpf_jit_charge_modmem(PAGE_SIZE);
- if (ret) {
- __bpf_struct_ops_map_free(map);
- return ERR_PTR(ret);
- }
+ if (ret)
+ goto errout_free;
st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
if (!st_map->image) {
@@ -713,23 +737,32 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
* here.
*/
bpf_jit_uncharge_modmem(PAGE_SIZE);
- __bpf_struct_ops_map_free(map);
- return ERR_PTR(-ENOMEM);
+ ret = -ENOMEM;
+ goto errout_free;
}
st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
st_map->links =
bpf_map_area_alloc(btf_type_vlen(t) * sizeof(struct bpf_links *),
NUMA_NO_NODE);
if (!st_map->uvalue || !st_map->links) {
- __bpf_struct_ops_map_free(map);
- return ERR_PTR(-ENOMEM);
+ ret = -ENOMEM;
+ goto errout_free;
}
mutex_init(&st_map->lock);
set_vm_flush_reset_perms(st_map->image);
bpf_map_init_from_attr(map, attr);
+ module_put(mod);
+
return map;
+
+errout_free:
+ __bpf_struct_ops_map_free(map);
+ btf = NULL; /* has been released */
+errout:
+ module_put(mod);
+ return ERR_PTR(ret);
}
static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
@@ -811,6 +844,7 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
* bpf_struct_ops_link_create() fails to register.
*/
st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
+ module_put(st_map->st_ops_desc->st_ops->owner);
bpf_map_put(&st_map->map);
}
kfree(st_link);
@@ -857,6 +891,10 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
if (!bpf_struct_ops_valid_to_reg(new_map))
return -EINVAL;
+ /* The old map is holding the refcount for the owner module. The
+ * ownership of the owner module refcount is going to be
+ * transferred from the old map to the new map.
+ */
if (!st_map->st_ops_desc->st_ops->update)
return -EOPNOTSUPP;
@@ -902,6 +940,7 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
struct bpf_link_primer link_primer;
struct bpf_struct_ops_map *st_map;
struct bpf_map *map;
+ struct btf *btf;
int err;
map = bpf_map_get(attr->link_create.map_fd);
@@ -926,8 +965,15 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
if (err)
goto err_out;
+ /* Hold the owner module until the struct_ops is unregistered. */
+ btf = st_map->st_ops_desc->btf;
+ if (btf != btf_vmlinux && !btf_try_get_module(btf)) {
+ err = -EINVAL;
+ goto err_out;
+ }
err = st_map->st_ops_desc->st_ops->reg(st_map->kvalue.data);
if (err) {
+ module_put(st_map->st_ops_desc->st_ops->owner);
bpf_link_cleanup(&link_primer);
link = NULL;
goto err_out;
--
2.34.1
* [PATCH bpf-next v6 05/10] bpf: validate value_type
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
A value_type should consist of three components: refcnt, state, and data.
refcnt and state have been moved to struct bpf_struct_ops_common_value to
make it easier to check the value type.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/bpf_struct_ops.c | 86 ++++++++++++++++++++++++++++---------
1 file changed, 65 insertions(+), 21 deletions(-)
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 413a3f8b26ba..6367d42b2ce1 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -20,9 +20,11 @@ enum bpf_struct_ops_state {
BPF_STRUCT_OPS_STATE_READY,
};
-#define BPF_STRUCT_OPS_COMMON_VALUE \
- refcount_t refcnt; \
- enum bpf_struct_ops_state state
+struct bpf_struct_ops_common_value {
+ refcount_t refcnt;
+ enum bpf_struct_ops_state state;
+};
+#define BPF_STRUCT_OPS_COMMON_VALUE struct bpf_struct_ops_common_value common
struct bpf_struct_ops_value {
BPF_STRUCT_OPS_COMMON_VALUE;
@@ -109,6 +111,38 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
};
static const struct btf_type *module_type;
+static const struct btf_type *common_value_type;
+
+static bool is_valid_value_type(struct btf *btf, s32 value_id,
+ const struct btf_type *type,
+ const char *value_name)
+{
+ const struct btf_member *member;
+ const struct btf_type *vt, *mt;
+
+ vt = btf_type_by_id(btf, value_id);
+ if (btf_vlen(vt) != 2) {
+ pr_warn("The number of %s's members should be 2, but we get %d\n",
+ value_name, btf_vlen(vt));
+ return false;
+ }
+ member = btf_type_member(vt);
+ mt = btf_type_by_id(btf, member->type);
+ if (mt != common_value_type) {
+ pr_warn("The first member of %s should be bpf_struct_ops_common_value\n",
+ value_name);
+ return false;
+ }
+ member++;
+ mt = btf_type_by_id(btf, member->type);
+ if (mt != type) {
+ pr_warn("The second member of %s should be %s\n",
+ value_name, btf_name_by_offset(btf, type->name_off));
+ return false;
+ }
+
+ return true;
+}
static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
struct btf *btf,
@@ -130,14 +164,6 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
}
sprintf(value_name, "%s%s", VALUE_PREFIX, st_ops->name);
- value_id = btf_find_by_name_kind(btf, value_name,
- BTF_KIND_STRUCT);
- if (value_id < 0) {
- pr_warn("Cannot find struct %s in btf_vmlinux\n",
- value_name);
- return;
- }
-
type_id = btf_find_by_name_kind(btf, st_ops->name,
BTF_KIND_STRUCT);
if (type_id < 0) {
@@ -152,6 +178,16 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
return;
}
+ value_id = btf_find_by_name_kind(btf, value_name,
+ BTF_KIND_STRUCT);
+ if (value_id < 0) {
+ pr_warn("Cannot find struct %s in btf_vmlinux\n",
+ value_name);
+ return;
+ }
+ if (!is_valid_value_type(btf, value_id, t, value_name))
+ return;
+
for_each_member(i, t, member) {
const struct btf_type *func_proto;
@@ -199,7 +235,7 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
{
struct bpf_struct_ops_desc *st_ops_desc;
- s32 module_id;
+ s32 module_id, common_value_id;
u32 i;
/* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
@@ -213,6 +249,14 @@ void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
return;
}
module_type = btf_type_by_id(btf, module_id);
+ common_value_id = btf_find_by_name_kind(btf,
+ "bpf_struct_ops_common_value",
+ BTF_KIND_STRUCT);
+ if (common_value_id < 0) {
+ pr_warn("Cannot find struct common_value in btf_vmlinux\n");
+ return;
+ }
+ common_value_type = btf_type_by_id(btf, common_value_id);
for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
st_ops_desc = &bpf_struct_ops[i];
@@ -277,7 +321,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
kvalue = &st_map->kvalue;
/* Pair with smp_store_release() during map_update */
- state = smp_load_acquire(&kvalue->state);
+ state = smp_load_acquire(&kvalue->common.state);
if (state == BPF_STRUCT_OPS_STATE_INIT) {
memset(value, 0, map->value_size);
return 0;
@@ -288,7 +332,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
*/
uvalue = value;
memcpy(uvalue, st_map->uvalue, map->value_size);
- uvalue->state = state;
+ uvalue->common.state = state;
/* This value offers the user space a general estimate of how
* many sockets are still utilizing this struct_ops for TCP
@@ -296,7 +340,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
* should sufficiently meet our present goals.
*/
refcnt = atomic64_read(&map->refcnt) - atomic64_read(&map->usercnt);
- refcount_set(&uvalue->refcnt, max_t(s64, refcnt, 0));
+ refcount_set(&uvalue->common.refcnt, max_t(s64, refcnt, 0));
return 0;
}
@@ -409,7 +453,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (err)
return err;
- if (uvalue->state || refcount_read(&uvalue->refcnt))
+ if (uvalue->common.state || refcount_read(&uvalue->common.refcnt))
return -EINVAL;
tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
@@ -421,7 +465,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
mutex_lock(&st_map->lock);
- if (kvalue->state != BPF_STRUCT_OPS_STATE_INIT) {
+ if (kvalue->common.state != BPF_STRUCT_OPS_STATE_INIT) {
err = -EBUSY;
goto unlock;
}
@@ -542,7 +586,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
*
* Pair with smp_load_acquire() during lookup_elem().
*/
- smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_READY);
+ smp_store_release(&kvalue->common.state, BPF_STRUCT_OPS_STATE_READY);
goto unlock;
}
@@ -560,7 +604,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
* It ensures the above udata updates (e.g. prog->aux->id)
* can be seen once BPF_STRUCT_OPS_STATE_INUSE is set.
*/
- smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE);
+ smp_store_release(&kvalue->common.state, BPF_STRUCT_OPS_STATE_INUSE);
/* Hold the owner module until the struct_ops is
* unregistered
*/
@@ -596,7 +640,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
if (st_map->map.map_flags & BPF_F_LINK)
return -EOPNOTSUPP;
- prev_state = cmpxchg(&st_map->kvalue.state,
+ prev_state = cmpxchg(&st_map->kvalue.common.state,
BPF_STRUCT_OPS_STATE_INUSE,
BPF_STRUCT_OPS_STATE_TOBEFREE);
switch (prev_state) {
@@ -828,7 +872,7 @@ static bool bpf_struct_ops_valid_to_reg(struct bpf_map *map)
return map->map_type == BPF_MAP_TYPE_STRUCT_OPS &&
map->map_flags & BPF_F_LINK &&
/* Pair with smp_store_release() during map_update */
- smp_load_acquire(&st_map->kvalue.state) == BPF_STRUCT_OPS_STATE_READY;
+ smp_load_acquire(&st_map->kvalue.common.state) == BPF_STRUCT_OPS_STATE_READY;
}
static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
--
2.34.1
* [PATCH bpf-next v6 06/10] bpf: pass attached BTF to the bpf_struct_ops subsystem
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Given a BTF, the bpf_struct_ops subsystem knows the right place to look up
the type info associated with a type ID. This enables a user space program
to load a struct_ops object linked to a struct_ops type defined by a
module, by providing the module BTF (as an fd).
The bpf_prog includes attach_btf in aux which is passed along with the
bpf_attr when loading the program. The purpose of attach_btf is to
determine the btf type of attach_btf_id. The attach_btf_id is then used to
identify the traced function for a trace program. In the case of struct_ops
programs, it is used to identify the struct_ops type of the struct_ops
object that a program is attached to.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf_verifier.h | 1 +
include/uapi/linux/bpf.h | 5 +++++
kernel/bpf/bpf_struct_ops.c | 32 +++++++++++++++++++++++++-------
kernel/bpf/syscall.c | 2 +-
kernel/bpf/verifier.c | 15 ++++++++++++++-
tools/include/uapi/linux/bpf.h | 5 +++++
6 files changed, 51 insertions(+), 9 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 94ec766432f5..5dc4d5fd8ab5 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -579,6 +579,7 @@ struct bpf_verifier_env {
u32 prev_insn_idx;
struct bpf_prog *prog; /* eBPF program being verified */
const struct bpf_verifier_ops *ops;
+ struct module *attach_btf_mod; /* The owner module of prog->aux->attach_btf */
struct bpf_verifier_stack_elem *head; /* stack of verifier states to be processed */
int stack_size; /* number of states to be processed */
bool strict_alignment; /* perform strict pointer alignment checks */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 73b155e52204..b5ef22f65f35 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1390,6 +1390,11 @@ union bpf_attr {
* to using 5 hash functions).
*/
__u64 map_extra;
+
+ __u32 value_type_btf_obj_fd; /* fd pointing to a BTF
+ * type data for
+ * btf_vmlinux_value_type_id.
+ */
};
struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 6367d42b2ce1..99ab61cc6cad 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -694,6 +694,7 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
bpf_jit_uncharge_modmem(PAGE_SIZE);
}
bpf_map_area_free(st_map->uvalue);
+ btf_put(st_map->st_ops_desc->btf);
bpf_map_area_free(st_map);
}
@@ -735,16 +736,31 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
const struct btf_type *t, *vt;
struct module *mod = NULL;
struct bpf_map *map;
+ struct btf *btf;
int ret;
- st_ops_desc = bpf_struct_ops_find_value(btf_vmlinux, attr->btf_vmlinux_value_type_id);
- if (!st_ops_desc)
- return ERR_PTR(-ENOTSUPP);
+ if (attr->value_type_btf_obj_fd) {
+ /* The map holds btf for its whole life time. */
+ btf = btf_get_by_fd(attr->value_type_btf_obj_fd);
+ if (IS_ERR(btf))
+ return ERR_PTR(PTR_ERR(btf));
+
+ if (btf != btf_vmlinux) {
+ mod = btf_try_get_module(btf);
+ if (!mod) {
+ ret = -EINVAL;
+ goto errout;
+ }
+ }
+ } else {
+ btf = btf_vmlinux;
+ btf_get(btf);
+ }
- if (st_ops_desc->btf != btf_vmlinux) {
- mod = btf_try_get_module(st_ops_desc->btf);
- if (!mod)
- return ERR_PTR(-EINVAL);
+ st_ops_desc = bpf_struct_ops_find_value(btf, attr->btf_vmlinux_value_type_id);
+ if (!st_ops_desc) {
+ ret = -ENOTSUPP;
+ goto errout;
}
vt = st_ops_desc->value_type;
@@ -805,7 +821,9 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
__bpf_struct_ops_map_free(map);
btf = NULL; /* has been released */
errout:
+ btf_put(btf);
module_put(mod);
+
return ERR_PTR(ret);
}
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 85c1d908f70f..5daf8a2c2bba 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1097,7 +1097,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
return ret;
}
-#define BPF_MAP_CREATE_LAST_FIELD map_extra
+#define BPF_MAP_CREATE_LAST_FIELD value_type_btf_obj_fd
/* called via syscall */
static int map_create(union bpf_attr *attr)
{
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7d0cf7289092..0289574bb0d5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19624,6 +19624,7 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
const struct btf_member *member;
struct bpf_prog *prog = env->prog;
u32 btf_id, member_idx;
+ struct btf *btf;
const char *mname;
if (!prog->gpl_compatible) {
@@ -19631,8 +19632,18 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
return -EINVAL;
}
+ btf = prog->aux->attach_btf;
+ if (btf != btf_vmlinux) {
+ /* Make sure st_ops is valid through the lifetime of env */
+ env->attach_btf_mod = btf_try_get_module(btf);
+ if (!env->attach_btf_mod) {
+ verbose(env, "owner module of btf is not found\n");
+ return -ENOTSUPP;
+ }
+ }
+
btf_id = prog->aux->attach_btf_id;
- st_ops_desc = bpf_struct_ops_find(btf_vmlinux, btf_id);
+ st_ops_desc = bpf_struct_ops_find(btf, btf_id);
if (!st_ops_desc) {
verbose(env, "attach_btf_id %u is not a supported struct\n",
btf_id);
@@ -20343,6 +20354,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
env->prog->expected_attach_type = 0;
*prog = env->prog;
+
+ module_put(env->attach_btf_mod);
err_unlock:
if (!is_priv)
mutex_unlock(&bpf_verifier_lock);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 73b155e52204..b5ef22f65f35 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1390,6 +1390,11 @@ union bpf_attr {
* to using 5 hash functions).
*/
__u64 map_extra;
+
+ __u32 value_type_btf_obj_fd; /* fd pointing to a BTF
+ * type data for
+ * btf_vmlinux_value_type_id.
+ */
};
struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
--
2.34.1
* [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
@ 2023-10-22 5:03 ` thinker.li
2023-10-22 6:46 ` kernel test robot
2023-10-26 21:02 ` Eduard Zingerman
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee, netdev
From: Kui-Feng Lee <thinker.li@gmail.com>
Replace the static list of struct_ops types with per-btf struct_ops_tab to
enable dynamic registration.
Both bpf_dummy_ops and bpf_tcp_ca now utilize the registration function
instead of being listed in bpf_struct_ops_types.h.
Cc: netdev@vger.kernel.org
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
include/linux/bpf.h | 36 +++++++--
include/linux/btf.h | 5 +-
kernel/bpf/bpf_struct_ops.c | 123 ++++++++----------------------
kernel/bpf/bpf_struct_ops_types.h | 12 ---
kernel/bpf/btf.c | 41 +++++++++-
net/bpf/bpf_dummy_struct_ops.c | 14 +++-
net/ipv4/bpf_tcp_ca.c | 16 +++-
7 files changed, 130 insertions(+), 117 deletions(-)
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 26feb8a2da4f..a53b2181c845 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1644,7 +1644,6 @@ struct bpf_struct_ops_desc {
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
#define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id);
-void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log);
bool bpf_struct_ops_get(const void *kdata);
void bpf_struct_ops_put(const void *kdata);
int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
@@ -1690,10 +1689,6 @@ static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *
{
return NULL;
}
-static inline void bpf_struct_ops_init(struct btf *btf,
- struct bpf_verifier_log *log)
-{
-}
static inline bool bpf_try_module_get(const void *data, struct module *owner)
{
return try_module_get(owner);
@@ -3212,4 +3207,35 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
return prog->aux->func_idx != 0;
}
+int register_bpf_struct_ops(struct bpf_struct_ops *st_ops);
+
+enum bpf_struct_ops_state {
+ BPF_STRUCT_OPS_STATE_INIT,
+ BPF_STRUCT_OPS_STATE_INUSE,
+ BPF_STRUCT_OPS_STATE_TOBEFREE,
+ BPF_STRUCT_OPS_STATE_READY,
+};
+
+struct bpf_struct_ops_common_value {
+ refcount_t refcnt;
+ enum bpf_struct_ops_state state;
+};
+
+/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
+ * the map's value exposed to the userspace and its btf-type-id is
+ * stored at the map->btf_vmlinux_value_type_id.
+ *
+ */
+#define DEFINE_STRUCT_OPS_VALUE_TYPE(_name) \
+extern struct bpf_struct_ops bpf_##_name; \
+ \
+struct bpf_struct_ops_##_name { \
+ struct bpf_struct_ops_common_value common; \
+ struct _name data ____cacheline_aligned_in_smp; \
+}
+
+extern void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
+ struct btf *btf,
+ struct bpf_verifier_log *log);
+
#endif /* _LINUX_BPF_H */
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 6a64b372b7a0..533db3f406b3 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -12,6 +12,8 @@
#include <uapi/linux/bpf.h>
#define BTF_TYPE_EMIT(type) ((void)(type *)0)
+#define BTF_STRUCT_OPS_TYPE_EMIT(type) {((void)(struct type *)0); \
+ ((void)(struct bpf_struct_ops_##type *)0); }
#define BTF_TYPE_EMIT_ENUM(enum_val) ((void)enum_val)
/* These need to be macros, as the expressions are used in assembler input */
@@ -200,6 +202,7 @@ u32 btf_obj_id(const struct btf *btf);
bool btf_is_kernel(const struct btf *btf);
bool btf_is_module(const struct btf *btf);
struct module *btf_try_get_module(const struct btf *btf);
+struct btf *btf_get_module_btf(const struct module *module);
u32 btf_nr_types(const struct btf *btf);
bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
const struct btf_member *m,
@@ -574,8 +577,6 @@ static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type
struct bpf_struct_ops;
struct bpf_struct_ops_desc;
-struct bpf_struct_ops_desc *
-btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
const struct bpf_struct_ops_desc *
btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 99ab61cc6cad..44412f95bc32 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -13,21 +13,8 @@
#include <linux/btf_ids.h>
#include <linux/rcupdate_wait.h>
-enum bpf_struct_ops_state {
- BPF_STRUCT_OPS_STATE_INIT,
- BPF_STRUCT_OPS_STATE_INUSE,
- BPF_STRUCT_OPS_STATE_TOBEFREE,
- BPF_STRUCT_OPS_STATE_READY,
-};
-
-struct bpf_struct_ops_common_value {
- refcount_t refcnt;
- enum bpf_struct_ops_state state;
-};
-#define BPF_STRUCT_OPS_COMMON_VALUE struct bpf_struct_ops_common_value common
-
struct bpf_struct_ops_value {
- BPF_STRUCT_OPS_COMMON_VALUE;
+ struct bpf_struct_ops_common_value common;
char data[] ____cacheline_aligned_in_smp;
};
@@ -72,35 +59,6 @@ static DEFINE_MUTEX(update_mutex);
#define VALUE_PREFIX "bpf_struct_ops_"
#define VALUE_PREFIX_LEN (sizeof(VALUE_PREFIX) - 1)
-/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
- * the map's value exposed to the userspace and its btf-type-id is
- * stored at the map->btf_vmlinux_value_type_id.
- *
- */
-#define BPF_STRUCT_OPS_TYPE(_name) \
-extern struct bpf_struct_ops bpf_##_name; \
- \
-struct bpf_struct_ops_##_name { \
- BPF_STRUCT_OPS_COMMON_VALUE; \
- struct _name data ____cacheline_aligned_in_smp; \
-};
-#include "bpf_struct_ops_types.h"
-#undef BPF_STRUCT_OPS_TYPE
-
-enum {
-#define BPF_STRUCT_OPS_TYPE(_name) BPF_STRUCT_OPS_TYPE_##_name,
-#include "bpf_struct_ops_types.h"
-#undef BPF_STRUCT_OPS_TYPE
- __NR_BPF_STRUCT_OPS_TYPE,
-};
-
-static struct bpf_struct_ops_desc bpf_struct_ops[] = {
-#define BPF_STRUCT_OPS_TYPE(_name) \
- [BPF_STRUCT_OPS_TYPE_##_name] = { .st_ops = &bpf_##_name },
-#include "bpf_struct_ops_types.h"
-#undef BPF_STRUCT_OPS_TYPE
-};
-
const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
};
@@ -110,13 +68,22 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
#endif
};
-static const struct btf_type *module_type;
-static const struct btf_type *common_value_type;
+BTF_ID_LIST(st_ops_ids)
+BTF_ID(struct, module)
+BTF_ID(struct, bpf_struct_ops_common_value)
+
+enum {
+ idx_module_id,
+ idx_st_ops_common_value_id,
+};
+
+extern struct btf *btf_vmlinux;
static bool is_valid_value_type(struct btf *btf, s32 value_id,
const struct btf_type *type,
const char *value_name)
{
+ const struct btf_type *common_value_type;
const struct btf_member *member;
const struct btf_type *vt, *mt;
@@ -128,6 +95,8 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
}
member = btf_type_member(vt);
mt = btf_type_by_id(btf, member->type);
+ common_value_type = btf_type_by_id(btf_vmlinux,
+ st_ops_ids[idx_st_ops_common_value_id]);
if (mt != common_value_type) {
pr_warn("The first member of %s should be bpf_struct_ops_common_value\n",
value_name);
@@ -144,9 +113,9 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
return true;
}
-static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
- struct btf *btf,
- struct bpf_verifier_log *log)
+void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
+ struct btf *btf,
+ struct bpf_verifier_log *log)
{
struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
const struct btf_member *member;
@@ -232,51 +201,20 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
}
}
-void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
-{
- struct bpf_struct_ops_desc *st_ops_desc;
- s32 module_id, common_value_id;
- u32 i;
-
- /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
-#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name);
-#include "bpf_struct_ops_types.h"
-#undef BPF_STRUCT_OPS_TYPE
-
- module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT);
- if (module_id < 0) {
- pr_warn("Cannot find struct module in btf_vmlinux\n");
- return;
- }
- module_type = btf_type_by_id(btf, module_id);
- common_value_id = btf_find_by_name_kind(btf,
- "bpf_struct_ops_common_value",
- BTF_KIND_STRUCT);
- if (common_value_id < 0) {
- pr_warn("Cannot find struct common_value in btf_vmlinux\n");
- return;
- }
- common_value_type = btf_type_by_id(btf, common_value_id);
-
- for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- st_ops_desc = &bpf_struct_ops[i];
- bpf_struct_ops_init_one(st_ops_desc, btf, log);
- }
-}
-
-extern struct btf *btf_vmlinux;
-
static const struct bpf_struct_ops_desc *
bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
{
+ const struct bpf_struct_ops_desc *st_ops_list;
unsigned int i;
+ u32 cnt = 0;
- if (!value_id || !btf_vmlinux)
+ if (!value_id)
return NULL;
- for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- if (bpf_struct_ops[i].value_id == value_id)
- return &bpf_struct_ops[i];
+ st_ops_list = btf_get_struct_ops(btf, &cnt);
+ for (i = 0; i < cnt; i++) {
+ if (st_ops_list[i].value_id == value_id)
+ return &st_ops_list[i];
}
return NULL;
@@ -285,14 +223,17 @@ bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
const struct bpf_struct_ops_desc *
bpf_struct_ops_find(struct btf *btf, u32 type_id)
{
+ const struct bpf_struct_ops_desc *st_ops_list;
unsigned int i;
+ u32 cnt;
- if (!type_id || !btf_vmlinux)
+ if (!type_id)
return NULL;
- for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
- if (bpf_struct_ops[i].type_id == type_id)
- return &bpf_struct_ops[i];
+ st_ops_list = btf_get_struct_ops(btf, &cnt);
+ for (i = 0; i < cnt; i++) {
+ if (st_ops_list[i].type_id == type_id)
+ return &st_ops_list[i];
}
return NULL;
@@ -429,6 +370,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
const struct bpf_struct_ops_desc *st_ops_desc = st_map->st_ops_desc;
const struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
struct bpf_struct_ops_value *uvalue, *kvalue;
+ const struct btf_type *module_type;
const struct btf_member *member;
const struct btf_type *t = st_ops_desc->type;
struct bpf_tramp_links *tlinks;
@@ -485,6 +427,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
image = st_map->image;
image_end = st_map->image + PAGE_SIZE;
+ module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[idx_module_id]);
for_each_member(i, t, member) {
const struct btf_type *mtype, *ptype;
struct bpf_prog *prog;
diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
deleted file mode 100644
index 5678a9ddf817..000000000000
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* internal file - do not include directly */
-
-#ifdef CONFIG_BPF_JIT
-#ifdef CONFIG_NET
-BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
-#endif
-#ifdef CONFIG_INET
-#include <net/tcp.h>
-BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
-#endif
-#endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c1e2ed6c972e..c53888e8dddb 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5790,8 +5790,6 @@ struct btf *btf_parse_vmlinux(void)
/* btf_parse_vmlinux() runs under bpf_verifier_lock */
bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
- bpf_struct_ops_init(btf, log);
-
refcount_set(&btf->refcnt, 1);
err = btf_alloc_id(btf);
@@ -7532,7 +7530,7 @@ struct module *btf_try_get_module(const struct btf *btf)
/* Returns struct btf corresponding to the struct module.
* This function can return NULL or ERR_PTR.
*/
-static struct btf *btf_get_module_btf(const struct module *module)
+struct btf *btf_get_module_btf(const struct module *module)
{
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
struct btf_module *btf_mod, *tmp;
@@ -8672,3 +8670,40 @@ const struct bpf_struct_ops_desc *btf_get_struct_ops(struct btf *btf, u32 *ret_c
return (const struct bpf_struct_ops_desc *)btf->struct_ops_tab->ops;
}
+int register_bpf_struct_ops(struct bpf_struct_ops *st_ops)
+{
+ struct bpf_struct_ops_desc *desc;
+ struct bpf_verifier_log *log;
+ struct btf *btf;
+ int err = 0;
+
+ if (st_ops == NULL)
+ return -EINVAL;
+
+ btf = btf_get_module_btf(st_ops->owner);
+ if (!btf)
+ return -EINVAL;
+
+ log = kzalloc(sizeof(*log), GFP_KERNEL | __GFP_NOWARN);
+ if (!log) {
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ log->level = BPF_LOG_KERNEL;
+
+ desc = btf_add_struct_ops(btf, st_ops);
+ if (IS_ERR(desc)) {
+ err = PTR_ERR(desc);
+ goto errout;
+ }
+
+ bpf_struct_ops_init(desc, btf, log);
+
+errout:
+ kfree(log);
+ btf_put(btf);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(register_bpf_struct_ops);
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index ffa224053a6c..148a5851c4fa 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -7,7 +7,7 @@
#include <linux/bpf.h>
#include <linux/btf.h>
-extern struct bpf_struct_ops bpf_bpf_dummy_ops;
+static struct bpf_struct_ops bpf_bpf_dummy_ops;
/* A common type for test_N with return value in bpf_dummy_ops */
typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
@@ -223,11 +223,13 @@ static int bpf_dummy_reg(void *kdata)
return -EOPNOTSUPP;
}
+DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_dummy_ops);
+
static void bpf_dummy_unreg(void *kdata)
{
}
-struct bpf_struct_ops bpf_bpf_dummy_ops = {
+static struct bpf_struct_ops bpf_bpf_dummy_ops = {
.verifier_ops = &bpf_dummy_verifier_ops,
.init = bpf_dummy_init,
.check_member = bpf_dummy_ops_check_member,
@@ -235,4 +237,12 @@ struct bpf_struct_ops bpf_bpf_dummy_ops = {
.reg = bpf_dummy_reg,
.unreg = bpf_dummy_unreg,
.name = "bpf_dummy_ops",
+ .owner = THIS_MODULE,
};
+
+static int __init bpf_dummy_struct_ops_init(void)
+{
+ BTF_STRUCT_OPS_TYPE_EMIT(bpf_dummy_ops);
+ return register_bpf_struct_ops(&bpf_bpf_dummy_ops);
+}
+late_initcall(bpf_dummy_struct_ops_init);
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 3c8b76578a2a..b36a19274e5b 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -12,7 +12,7 @@
#include <net/bpf_sk_storage.h>
/* "extern" is to avoid sparse warning. It is only used in bpf_struct_ops.c. */
-extern struct bpf_struct_ops bpf_tcp_congestion_ops;
+static struct bpf_struct_ops bpf_tcp_congestion_ops;
static u32 unsupported_ops[] = {
offsetof(struct tcp_congestion_ops, get_info),
@@ -277,7 +277,9 @@ static int bpf_tcp_ca_validate(void *kdata)
return tcp_validate_congestion_control(kdata);
}
-struct bpf_struct_ops bpf_tcp_congestion_ops = {
+DEFINE_STRUCT_OPS_VALUE_TYPE(tcp_congestion_ops);
+
+static struct bpf_struct_ops bpf_tcp_congestion_ops = {
.verifier_ops = &bpf_tcp_ca_verifier_ops,
.reg = bpf_tcp_ca_reg,
.unreg = bpf_tcp_ca_unreg,
@@ -287,10 +289,18 @@ struct bpf_struct_ops bpf_tcp_congestion_ops = {
.init = bpf_tcp_ca_init,
.validate = bpf_tcp_ca_validate,
.name = "tcp_congestion_ops",
+ .owner = THIS_MODULE,
};
static int __init bpf_tcp_ca_kfunc_init(void)
{
- return register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
+ int ret;
+
+ BTF_STRUCT_OPS_TYPE_EMIT(tcp_congestion_ops);
+
+ ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
+ ret = ret ?: register_bpf_struct_ops(&bpf_tcp_congestion_ops);
+
+ return ret;
}
late_initcall(bpf_tcp_ca_kfunc_init);
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH bpf-next v6 08/10] libbpf: Find correct module BTFs for struct_ops maps and progs.
2023-10-22 5:03 [PATCH bpf-next v6 00/10] Registrating struct_ops types from modules thinker.li
` (6 preceding siblings ...)
2023-10-22 5:03 ` [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration thinker.li
@ 2023-10-22 5:03 ` thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 09/10] bpf: export btf_ctx_access to modules thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops() thinker.li
9 siblings, 0 replies; 24+ messages in thread
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Locate the module BTFs for struct_ops maps and progs and pass them to the
kernel. This ensures that the kernel correctly resolves type IDs from the
appropriate module BTFs.
For the map of a struct_ops object, the FD of the module BTF is stored in
bpf_map to keep a reference to the module BTF. The FD is passed to the
kernel as value_type_btf_obj_fd when the struct_ops object is loaded.
For a bpf_struct_ops prog, attach_btf_obj_fd of bpf_prog carries the FD
of the module BTF to the kernel.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
tools/lib/bpf/bpf.c | 4 +++-
tools/lib/bpf/bpf.h | 5 ++++-
tools/lib/bpf/libbpf.c | 50 ++++++++++++++++++++++++++++--------------
3 files changed, 41 insertions(+), 18 deletions(-)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index b0f1913763a3..af46488e4ea9 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -169,7 +169,8 @@ int bpf_map_create(enum bpf_map_type map_type,
__u32 max_entries,
const struct bpf_map_create_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
+ const size_t attr_sz = offsetofend(union bpf_attr,
+ value_type_btf_obj_fd);
union bpf_attr attr;
int fd;
@@ -191,6 +192,7 @@ int bpf_map_create(enum bpf_map_type map_type,
attr.btf_key_type_id = OPTS_GET(opts, btf_key_type_id, 0);
attr.btf_value_type_id = OPTS_GET(opts, btf_value_type_id, 0);
attr.btf_vmlinux_value_type_id = OPTS_GET(opts, btf_vmlinux_value_type_id, 0);
+ attr.value_type_btf_obj_fd = OPTS_GET(opts, value_type_btf_obj_fd, 0);
attr.inner_map_fd = OPTS_GET(opts, inner_map_fd, 0);
attr.map_flags = OPTS_GET(opts, map_flags, 0);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 74c2887cfd24..1733cdc21241 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -51,8 +51,11 @@ struct bpf_map_create_opts {
__u32 numa_node;
__u32 map_ifindex;
+
+ __u32 value_type_btf_obj_fd;
+ size_t:0;
};
-#define bpf_map_create_opts__last_field map_ifindex
+#define bpf_map_create_opts__last_field value_type_btf_obj_fd
LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
const char *map_name,
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 3a6108e3238b..9ff95c3d6a92 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -519,6 +519,7 @@ struct bpf_map {
struct bpf_map_def def;
__u32 numa_node;
__u32 btf_var_idx;
+ int mod_btf_fd;
__u32 btf_key_type_id;
__u32 btf_value_type_id;
__u32 btf_vmlinux_value_type_id;
@@ -922,22 +923,29 @@ find_member_by_name(const struct btf *btf, const struct btf_type *t,
return NULL;
}
+static int find_ksym_btf_id(struct bpf_object *obj, const char *ksym_name,
+ __u16 kind, struct btf **res_btf,
+ struct module_btf **res_mod_btf);
+
#define STRUCT_OPS_VALUE_PREFIX "bpf_struct_ops_"
static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix,
const char *name, __u32 kind);
static int
-find_struct_ops_kern_types(const struct btf *btf, const char *tname,
+find_struct_ops_kern_types(struct bpf_object *obj, const char *tname,
+ struct module_btf **mod_btf,
const struct btf_type **type, __u32 *type_id,
const struct btf_type **vtype, __u32 *vtype_id,
const struct btf_member **data_member)
{
const struct btf_type *kern_type, *kern_vtype;
const struct btf_member *kern_data_member;
+ struct btf *btf;
__s32 kern_vtype_id, kern_type_id;
__u32 i;
- kern_type_id = btf__find_by_name_kind(btf, tname, BTF_KIND_STRUCT);
+ kern_type_id = find_ksym_btf_id(obj, tname, BTF_KIND_STRUCT,
+ &btf, mod_btf);
if (kern_type_id < 0) {
pr_warn("struct_ops init_kern: struct %s is not found in kernel BTF\n",
tname);
@@ -991,14 +999,16 @@ static bool bpf_map__is_struct_ops(const struct bpf_map *map)
}
/* Init the map's fields that depend on kern_btf */
-static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
- const struct btf *btf,
- const struct btf *kern_btf)
+static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
{
const struct btf_member *member, *kern_member, *kern_data_member;
const struct btf_type *type, *kern_type, *kern_vtype;
__u32 i, kern_type_id, kern_vtype_id, kern_data_off;
+ struct bpf_object *obj = map->obj;
+ const struct btf *btf = obj->btf;
struct bpf_struct_ops *st_ops;
+ const struct btf *kern_btf;
+ struct module_btf *mod_btf;
void *data, *kern_data;
const char *tname;
int err;
@@ -1006,16 +1016,19 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
st_ops = map->st_ops;
type = st_ops->type;
tname = st_ops->tname;
- err = find_struct_ops_kern_types(kern_btf, tname,
+ err = find_struct_ops_kern_types(obj, tname, &mod_btf,
&kern_type, &kern_type_id,
&kern_vtype, &kern_vtype_id,
&kern_data_member);
if (err)
return err;
+ kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
+
pr_debug("struct_ops init_kern %s: type_id:%u kern_type_id:%u kern_vtype_id:%u\n",
map->name, st_ops->type_id, kern_type_id, kern_vtype_id);
+ map->mod_btf_fd = mod_btf ? mod_btf->fd : 0;
map->def.value_size = kern_vtype->size;
map->btf_vmlinux_value_type_id = kern_vtype_id;
@@ -1091,6 +1104,8 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
return -ENOTSUP;
}
+ if (mod_btf)
+ prog->attach_btf_obj_fd = mod_btf->fd;
prog->attach_btf_id = kern_type_id;
prog->expected_attach_type = kern_member_idx;
@@ -1133,8 +1148,7 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
if (!bpf_map__is_struct_ops(map))
continue;
- err = bpf_map__init_kern_struct_ops(map, obj->btf,
- obj->btf_vmlinux);
+ err = bpf_map__init_kern_struct_ops(map);
if (err)
return err;
}
@@ -5193,8 +5207,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
create_attr.numa_node = map->numa_node;
create_attr.map_extra = map->map_extra;
- if (bpf_map__is_struct_ops(map))
+ if (bpf_map__is_struct_ops(map)) {
create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
+ create_attr.value_type_btf_obj_fd = map->mod_btf_fd;
+ }
if (obj->btf && btf__fd(obj->btf) >= 0) {
create_attr.btf_fd = btf__fd(obj->btf);
@@ -9464,9 +9480,9 @@ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
return err;
}
-static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
- enum bpf_attach_type attach_type,
- int *btf_obj_fd, int *btf_type_id)
+static int find_kernel_attach_btf_id(struct bpf_object *obj, const char *attach_name,
+ enum bpf_attach_type attach_type,
+ int *btf_obj_fd, int *btf_type_id)
{
int ret, i;
@@ -9531,7 +9547,9 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attac
*btf_obj_fd = 0;
*btf_type_id = 1;
} else {
- err = find_kernel_btf_id(prog->obj, attach_name, attach_type, btf_obj_fd, btf_type_id);
+ err = find_kernel_attach_btf_id(prog->obj, attach_name,
+ attach_type, btf_obj_fd,
+ btf_type_id);
}
if (err) {
pr_warn("prog '%s': failed to find kernel BTF type ID of '%s': %d\n",
@@ -12945,9 +12963,9 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
err = bpf_object__load_vmlinux_btf(prog->obj, true);
if (err)
return libbpf_err(err);
- err = find_kernel_btf_id(prog->obj, attach_func_name,
- prog->expected_attach_type,
- &btf_obj_fd, &btf_id);
+ err = find_kernel_attach_btf_id(prog->obj, attach_func_name,
+ prog->expected_attach_type,
+ &btf_obj_fd, &btf_id);
if (err)
return libbpf_err(err);
}
--
2.34.1
* [PATCH bpf-next v6 09/10] bpf: export btf_ctx_access to modules.
2023-10-22 5:03 [PATCH bpf-next v6 00/10] Registrating struct_ops types from modules thinker.li
` (7 preceding siblings ...)
2023-10-22 5:03 ` [PATCH bpf-next v6 08/10] libbpf: Find correct module BTFs for struct_ops maps and progs thinker.li
@ 2023-10-22 5:03 ` thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops() thinker.li
9 siblings, 0 replies; 24+ messages in thread
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Modules need btf_ctx_access() exported because
bpf_tracing_btf_ctx_access(), an inline wrapper around it, is otherwise
unusable from module code. The function is useful for implementing
is_valid_access() callbacks that validate a program's access to ctx.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
kernel/bpf/btf.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c53888e8dddb..3773f6b16784 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6139,6 +6139,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
__btf_name_by_offset(btf, t->name_off));
return true;
}
+EXPORT_SYMBOL_GPL(btf_ctx_access);
enum bpf_struct_walk_result {
/* < 0 error */
--
2.34.1
* [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-22 5:03 [PATCH bpf-next v6 00/10] Registrating struct_ops types from modules thinker.li
` (8 preceding siblings ...)
2023-10-22 5:03 ` [PATCH bpf-next v6 09/10] bpf: export btf_ctx_access to modules thinker.li
@ 2023-10-22 5:03 ` thinker.li
2023-10-22 7:08 ` kernel test robot
2023-10-26 20:31 ` Eduard Zingerman
9 siblings, 2 replies; 24+ messages in thread
From: thinker.li @ 2023-10-22 5:03 UTC (permalink / raw)
To: bpf, ast, martin.lau, song, kernel-team, andrii, drosen
Cc: sinquersw, kuifeng, Kui-Feng Lee
From: Kui-Feng Lee <thinker.li@gmail.com>
Create a new struct_ops type, bpf_testmod_ops, within the bpf_testmod
module. When a struct_ops object of this type is registered, the
bpf_testmod module invokes its test_2 operator.
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
tools/testing/selftests/bpf/Makefile | 2 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 59 +++++++++++++++++++
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 5 ++
.../bpf/prog_tests/test_struct_ops_module.c | 38 ++++++++++++
.../selftests/bpf/progs/struct_ops_module.c | 30 ++++++++++
tools/testing/selftests/bpf/testing_helpers.c | 35 +++++++++++
6 files changed, 169 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index caede9b574cb..dd7ff14e1fdf 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -706,6 +706,8 @@ $(OUTPUT)/uprobe_multi: uprobe_multi.c
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
+$(OUTPUT)/testing_helpers.o: $(OUTPUT)/rcu_tasks_trace_gp.skel.h
+
EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
prog_tests/tests.h map_tests/tests.h verifier/tests.h \
feature bpftool \
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index cefc5dd72573..f1a20669d884 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2020 Facebook */
+#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/error-injection.h>
@@ -517,11 +518,66 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
BTF_SET8_END(bpf_testmod_check_kfunc_ids)
+#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+
+DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_testmod_ops);
+
+static int bpf_testmod_ops_init(struct btf *btf)
+{
+ return 0;
+}
+
+static bool bpf_testmod_ops_is_valid_access(int off, int size,
+ enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+{
+ return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
+}
+
+static int bpf_testmod_ops_init_member(const struct btf_type *t,
+ const struct btf_member *member,
+ void *kdata, const void *udata)
+{
+ return 0;
+}
+
static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
.owner = THIS_MODULE,
.set = &bpf_testmod_check_kfunc_ids,
};
+static const struct bpf_verifier_ops bpf_testmod_verifier_ops = {
+ .is_valid_access = bpf_testmod_ops_is_valid_access,
+};
+
+static int bpf_dummy_reg(void *kdata)
+{
+ struct bpf_testmod_ops *ops = kdata;
+ int r;
+
+ BTF_STRUCT_OPS_TYPE_EMIT(bpf_testmod_ops);
+ r = ops->test_2(4, 3);
+
+ return 0;
+}
+
+static void bpf_dummy_unreg(void *kdata)
+{
+}
+
+struct bpf_struct_ops bpf_bpf_testmod_ops = {
+ .verifier_ops = &bpf_testmod_verifier_ops,
+ .init = bpf_testmod_ops_init,
+ .init_member = bpf_testmod_ops_init_member,
+ .reg = bpf_dummy_reg,
+ .unreg = bpf_dummy_unreg,
+ .name = "bpf_testmod_ops",
+ .owner = THIS_MODULE,
+};
+
+#endif /* CONFIG_DEBUG_INFO_BTF_MODULES */
+
extern int bpf_fentry_test1(int a);
static int bpf_testmod_init(void)
@@ -532,6 +588,9 @@ static int bpf_testmod_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_testmod_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_testmod_kfunc_set);
+#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+ ret = ret ?: register_bpf_struct_ops(&bpf_bpf_testmod_ops);
+#endif
if (ret < 0)
return ret;
if (bpf_fentry_test1(0) < 0)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
index f32793efe095..ca5435751c79 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
@@ -28,4 +28,9 @@ struct bpf_iter_testmod_seq {
int cnt;
};
+struct bpf_testmod_ops {
+ int (*test_1)(void);
+ int (*test_2)(int a, int b);
+};
+
#endif /* _BPF_TESTMOD_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
new file mode 100644
index 000000000000..7261fc6c377a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include <time.h>
+
+#include "rcu_tasks_trace_gp.skel.h"
+#include "struct_ops_module.skel.h"
+
+static void test_regular_load(void)
+{
+ struct struct_ops_module *skel;
+ struct bpf_link *link;
+ DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+ int err;
+
+ skel = struct_ops_module__open_opts(&opts);
+ if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
+ return;
+ err = struct_ops_module__load(skel);
+ if (!ASSERT_OK(err, "struct_ops_module_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(skel->maps.testmod_1);
+ ASSERT_OK_PTR(link, "attach_test_mod_1");
+
+ ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
+
+ bpf_link__destroy(link);
+
+ struct_ops_module__destroy(skel);
+}
+
+void serial_test_struct_ops_module(void)
+{
+ if (test__start_subtest("regular_load"))
+ test_regular_load();
+}
+
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_module.c b/tools/testing/selftests/bpf/progs/struct_ops_module.c
new file mode 100644
index 000000000000..cb305d04342f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/struct_ops_module.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "../bpf_testmod/bpf_testmod.h"
+
+char _license[] SEC("license") = "GPL";
+
+int test_2_result = 0;
+
+SEC("struct_ops/test_1")
+int BPF_PROG(test_1)
+{
+ return 0xdeadbeef;
+}
+
+SEC("struct_ops/test_2")
+int BPF_PROG(test_2, int a, int b)
+{
+ test_2_result = a + b;
+ return a + b;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_ops testmod_1 = {
+ .test_1 = (void *)test_1,
+ .test_2 = (void *)test_2,
+};
+
diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
index 8d994884c7b4..05870cd62458 100644
--- a/tools/testing/selftests/bpf/testing_helpers.c
+++ b/tools/testing/selftests/bpf/testing_helpers.c
@@ -10,6 +10,7 @@
#include "test_progs.h"
#include "testing_helpers.h"
#include <linux/membarrier.h>
+#include "rcu_tasks_trace_gp.skel.h"
int parse_num_list(const char *s, bool **num_set, int *num_set_len)
{
@@ -380,10 +381,44 @@ int load_bpf_testmod(bool verbose)
return 0;
}
+/* This function will trigger call_rcu_tasks_trace() in the kernel */
+static int kern_sync_rcu_tasks_trace(void)
+{
+ struct rcu_tasks_trace_gp *rcu;
+ time_t start;
+ long gp_seq;
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ rcu = rcu_tasks_trace_gp__open_and_load();
+ if (IS_ERR(rcu))
+ return -EFAULT;
+ if (rcu_tasks_trace_gp__attach(rcu))
+ return -EFAULT;
+
+ gp_seq = READ_ONCE(rcu->bss->gp_seq);
+
+ if (bpf_prog_test_run_opts(bpf_program__fd(rcu->progs.do_call_rcu_tasks_trace),
+ &opts))
+ return -EFAULT;
+ if (opts.retval != 0)
+ return -EFAULT;
+
+ start = time(NULL);
+ while ((start + 2) > time(NULL) &&
+ gp_seq == READ_ONCE(rcu->bss->gp_seq))
+ sched_yield();
+
+ rcu_tasks_trace_gp__destroy(rcu);
+
+ return 0;
+}
+
/*
* Trigger synchronize_rcu() in kernel.
*/
int kern_sync_rcu(void)
{
+ if (kern_sync_rcu_tasks_trace())
+ return -EFAULT;
return syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0, 0);
}
--
2.34.1
* Re: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
2023-10-22 5:03 ` [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration thinker.li
@ 2023-10-22 6:46 ` kernel test robot
2023-10-26 21:02 ` Eduard Zingerman
1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2023-10-22 6:46 UTC (permalink / raw)
To: thinker.li, bpf, ast, martin.lau, song, kernel-team, andrii,
drosen
Cc: oe-kbuild-all
Hi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/thinker-li-gmail-com/bpf-refactory-struct_ops-type-initialization-to-a-function/20231022-131121
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20231022050335.2579051-8-thinker.li%40gmail.com
patch subject: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
reproduce: (https://download.01.org/0day-ci/archive/20231022/202310221421.WsGCmmEc-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310221421.WsGCmmEc-lkp@intel.com/
# many are suggestions rather than must-fix
WARNING:AVOID_EXTERNS: externs should be avoided in .c files
#192: FILE: kernel/bpf/bpf_struct_ops.c:80:
+extern struct btf *btf_vmlinux;
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-22 5:03 ` [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops() thinker.li
@ 2023-10-22 7:08 ` kernel test robot
2023-10-26 20:31 ` Eduard Zingerman
1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2023-10-22 7:08 UTC (permalink / raw)
To: thinker.li, bpf, ast, martin.lau, song, kernel-team, andrii,
drosen
Cc: oe-kbuild-all
Hi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/thinker-li-gmail-com/bpf-refactory-struct_ops-type-initialization-to-a-function/20231022-131121
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20231022050335.2579051-11-thinker.li%40gmail.com
patch subject: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
reproduce: (https://download.01.org/0day-ci/archive/20231022/202310221417.1kZi6WXz-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310221417.1kZi6WXz-lkp@intel.com/
# many are suggestions rather than must-fix
WARNING:LINE_SPACING: Missing a blank line after declarations
#239: FILE: tools/testing/selftests/bpf/testing_helpers.c:390:
+ long gp_seq;
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-22 5:03 ` [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops() thinker.li
2023-10-22 7:08 ` kernel test robot
@ 2023-10-26 20:31 ` Eduard Zingerman
2023-10-27 4:55 ` Kui-Feng Lee
2023-10-29 2:34 ` Kui-Feng Lee
1 sibling, 2 replies; 24+ messages in thread
From: Eduard Zingerman @ 2023-10-26 20:31 UTC (permalink / raw)
To: thinker.li, bpf, ast, martin.lau, song, kernel-team, andrii,
drosen
Cc: sinquersw, kuifeng
On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
> From: Kui-Feng Lee <thinker.li@gmail.com>
>
> Create a new struct_ops type called bpf_testmod_ops within the bpf_testmod
> module. When a struct_ops object is registered, the bpf_testmod module will
> invoke test_2 from the module.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Hello,
Sorry for the late response; I was moving through the patch set very slowly.
Please note that CI currently fails for this series [0], reported error is:
testing_helpers.c:13:10: fatal error: 'rcu_tasks_trace_gp.skel.h' file not found
13 | #include "rcu_tasks_trace_gp.skel.h"
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
I get the same error when I try to run the tests locally (after a full clean).
On the other hand, it looks like the `kern_sync_rcu_tasks_trace` changes
are not really necessary; when I undo those changes but keep the changes in:
- .../selftests/bpf/bpf_testmod/bpf_testmod.c
- .../selftests/bpf/bpf_testmod/bpf_testmod.h
- .../bpf/prog_tests/test_struct_ops_module.c
- .../selftests/bpf/progs/struct_ops_module.c
the struct_ops_module/regular_load test still passes.
Regarding assertion:
> + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
Could you please leave a comment explaining why the value is 7?
I don't understand what invokes 'test_2', but changing it to 8
forces the test to fail, so something does call 'test_2' :)
Also, when running test_maps I get the following error:
libbpf: bpf_map_create_opts has non-zero extra bytes
map_create_opts(317):FAIL:bpf_map_create() error:Invalid argument (name=hash_of_maps)
[0] https://patchwork.kernel.org/project/netdevbpf/patch/20231022050335.2579051-11-thinker.li@gmail.com/
(look for 'Logs for x86_64-gcc / build / build for x86_64 with gcc ')
> ---
> tools/testing/selftests/bpf/Makefile | 2 +
> .../selftests/bpf/bpf_testmod/bpf_testmod.c | 59 +++++++++++++++++++
> .../selftests/bpf/bpf_testmod/bpf_testmod.h | 5 ++
> .../bpf/prog_tests/test_struct_ops_module.c | 38 ++++++++++++
> .../selftests/bpf/progs/struct_ops_module.c | 30 ++++++++++
> tools/testing/selftests/bpf/testing_helpers.c | 35 +++++++++++
> 6 files changed, 169 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
> create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
>
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index caede9b574cb..dd7ff14e1fdf 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -706,6 +706,8 @@ $(OUTPUT)/uprobe_multi: uprobe_multi.c
> $(call msg,BINARY,,$@)
> $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
>
> +$(OUTPUT)/testing_helpers.o: $(OUTPUT)/rcu_tasks_trace_gp.skel.h
> +
> EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
> prog_tests/tests.h map_tests/tests.h verifier/tests.h \
> feature bpftool \
> diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
> index cefc5dd72573..f1a20669d884 100644
> --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
> +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0
> /* Copyright (c) 2020 Facebook */
> +#include <linux/bpf.h>
> #include <linux/btf.h>
> #include <linux/btf_ids.h>
> #include <linux/error-injection.h>
> @@ -517,11 +518,66 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
> BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
> BTF_SET8_END(bpf_testmod_check_kfunc_ids)
>
> +#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
> +
> +DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_testmod_ops);
> +
> +static int bpf_testmod_ops_init(struct btf *btf)
> +{
> + return 0;
> +}
> +
> +static bool bpf_testmod_ops_is_valid_access(int off, int size,
> + enum bpf_access_type type,
> + const struct bpf_prog *prog,
> + struct bpf_insn_access_aux *info)
> +{
> + return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
> +}
> +
> +static int bpf_testmod_ops_init_member(const struct btf_type *t,
> + const struct btf_member *member,
> + void *kdata, const void *udata)
> +{
> + return 0;
> +}
> +
> static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
> .owner = THIS_MODULE,
> .set = &bpf_testmod_check_kfunc_ids,
> };
>
> +static const struct bpf_verifier_ops bpf_testmod_verifier_ops = {
> + .is_valid_access = bpf_testmod_ops_is_valid_access,
> +};
> +
> +static int bpf_dummy_reg(void *kdata)
> +{
> + struct bpf_testmod_ops *ops = kdata;
> + int r;
> +
> + BTF_STRUCT_OPS_TYPE_EMIT(bpf_testmod_ops);
> + r = ops->test_2(4, 3);
> +
> + return 0;
> +}
> +
> +static void bpf_dummy_unreg(void *kdata)
> +{
> +}
> +
> +struct bpf_struct_ops bpf_bpf_testmod_ops = {
> + .verifier_ops = &bpf_testmod_verifier_ops,
> + .init = bpf_testmod_ops_init,
> + .init_member = bpf_testmod_ops_init_member,
> + .reg = bpf_dummy_reg,
> + .unreg = bpf_dummy_unreg,
> + .name = "bpf_testmod_ops",
> + .owner = THIS_MODULE,
> +};
> +
> +#endif /* CONFIG_DEBUG_INFO_BTF_MODULES */
> +
> extern int bpf_fentry_test1(int a);
>
> static int bpf_testmod_init(void)
> @@ -532,6 +588,9 @@ static int bpf_testmod_init(void)
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set);
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_testmod_kfunc_set);
> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_testmod_kfunc_set);
> +#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
> + ret = ret ?: register_bpf_struct_ops(&bpf_bpf_testmod_ops);
> +#endif
> if (ret < 0)
> return ret;
> if (bpf_fentry_test1(0) < 0)
> diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
> index f32793efe095..ca5435751c79 100644
> --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
> +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
> @@ -28,4 +28,9 @@ struct bpf_iter_testmod_seq {
> int cnt;
> };
>
> +struct bpf_testmod_ops {
> + int (*test_1)(void);
> + int (*test_2)(int a, int b);
> +};
> +
> #endif /* _BPF_TESTMOD_H */
> diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
> new file mode 100644
> index 000000000000..7261fc6c377a
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
> @@ -0,0 +1,38 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> +#include <test_progs.h>
> +#include <time.h>
> +
> +#include "rcu_tasks_trace_gp.skel.h"
> +#include "struct_ops_module.skel.h"
> +
> +static void test_regular_load(void)
> +{
> + struct struct_ops_module *skel;
> + struct bpf_link *link;
> + DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
> + int err;
> +
> + skel = struct_ops_module__open_opts(&opts);
> + if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
> + return;
> + err = struct_ops_module__load(skel);
> + if (!ASSERT_OK(err, "struct_ops_module_load"))
> + return;
> +
> + link = bpf_map__attach_struct_ops(skel->maps.testmod_1);
> + ASSERT_OK_PTR(link, "attach_test_mod_1");
> +
> + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
> +
> + bpf_link__destroy(link);
> +
> + struct_ops_module__destroy(skel);
> +}
> +
> +void serial_test_struct_ops_module(void)
> +{
> + if (test__start_subtest("regular_load"))
> + test_regular_load();
> +}
> +
> diff --git a/tools/testing/selftests/bpf/progs/struct_ops_module.c b/tools/testing/selftests/bpf/progs/struct_ops_module.c
> new file mode 100644
> index 000000000000..cb305d04342f
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/struct_ops_module.c
> @@ -0,0 +1,30 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> +#include <vmlinux.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +#include "../bpf_testmod/bpf_testmod.h"
> +
> +char _license[] SEC("license") = "GPL";
> +
> +int test_2_result = 0;
> +
> +SEC("struct_ops/test_1")
> +int BPF_PROG(test_1)
> +{
> + return 0xdeadbeef;
> +}
> +
> +SEC("struct_ops/test_2")
> +int BPF_PROG(test_2, int a, int b)
> +{
> + test_2_result = a + b;
> + return a + b;
> +}
> +
> +SEC(".struct_ops.link")
> +struct bpf_testmod_ops testmod_1 = {
> + .test_1 = (void *)test_1,
> + .test_2 = (void *)test_2,
> +};
> +
> diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
> index 8d994884c7b4..05870cd62458 100644
> --- a/tools/testing/selftests/bpf/testing_helpers.c
> +++ b/tools/testing/selftests/bpf/testing_helpers.c
> @@ -10,6 +10,7 @@
> #include "test_progs.h"
> #include "testing_helpers.h"
> #include <linux/membarrier.h>
> +#include "rcu_tasks_trace_gp.skel.h"
>
> int parse_num_list(const char *s, bool **num_set, int *num_set_len)
> {
> @@ -380,10 +381,44 @@ int load_bpf_testmod(bool verbose)
> return 0;
> }
>
> +/* This function will trigger call_rcu_tasks_trace() in the kernel */
> +static int kern_sync_rcu_tasks_trace(void)
> +{
> + struct rcu_tasks_trace_gp *rcu;
> + time_t start;
> + long gp_seq;
> + LIBBPF_OPTS(bpf_test_run_opts, opts);
> +
> + rcu = rcu_tasks_trace_gp__open_and_load();
> + if (IS_ERR(rcu))
> + return -EFAULT;
> + if (rcu_tasks_trace_gp__attach(rcu))
> + return -EFAULT;
> +
> + gp_seq = READ_ONCE(rcu->bss->gp_seq);
> +
> + if (bpf_prog_test_run_opts(bpf_program__fd(rcu->progs.do_call_rcu_tasks_trace),
> + &opts))
> + return -EFAULT;
> + if (opts.retval != 0)
> + return -EFAULT;
> +
> + start = time(NULL);
> + while ((start + 2) > time(NULL) &&
> + gp_seq == READ_ONCE(rcu->bss->gp_seq))
> + sched_yield();
> +
> + rcu_tasks_trace_gp__destroy(rcu);
> +
> + return 0;
> +}
> +
> /*
> * Trigger synchronize_rcu() in kernel.
> */
> int kern_sync_rcu(void)
> {
> + if (kern_sync_rcu_tasks_trace())
> + return -EFAULT;
> return syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0, 0);
> }
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
2023-10-22 5:03 ` [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration thinker.li
2023-10-22 6:46 ` kernel test robot
@ 2023-10-26 21:02 ` Eduard Zingerman
2023-10-27 4:39 ` Kui-Feng Lee
1 sibling, 1 reply; 24+ messages in thread
From: Eduard Zingerman @ 2023-10-26 21:02 UTC (permalink / raw)
To: thinker.li, bpf, ast, martin.lau, song, kernel-team, andrii,
drosen
Cc: sinquersw, kuifeng, netdev
On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
> From: Kui-Feng Lee <thinker.li@gmail.com>
>
> Replace the static list of struct_ops types with per-btf struct_ops_tab to
> enable dynamic registration.
>
> Both bpf_dummy_ops and bpf_tcp_ca now utilize the registration function
> instead of being listed in bpf_struct_ops_types.h.
>
> Cc: netdev@vger.kernel.org
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Hello,
I left two nitpicks below; feel free to ignore them if you think it is good as-is.
> ---
> include/linux/bpf.h | 36 +++++++--
> include/linux/btf.h | 5 +-
> kernel/bpf/bpf_struct_ops.c | 123 ++++++++----------------------
> kernel/bpf/bpf_struct_ops_types.h | 12 ---
> kernel/bpf/btf.c | 41 +++++++++-
> net/bpf/bpf_dummy_struct_ops.c | 14 +++-
> net/ipv4/bpf_tcp_ca.c | 16 +++-
> 7 files changed, 130 insertions(+), 117 deletions(-)
> delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 26feb8a2da4f..a53b2181c845 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1644,7 +1644,6 @@ struct bpf_struct_ops_desc {
> #if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
> #define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
> const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id);
> -void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log);
> bool bpf_struct_ops_get(const void *kdata);
> void bpf_struct_ops_put(const void *kdata);
> int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
> @@ -1690,10 +1689,6 @@ static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *
> {
> return NULL;
> }
> -static inline void bpf_struct_ops_init(struct btf *btf,
> - struct bpf_verifier_log *log)
> -{
> -}
> static inline bool bpf_try_module_get(const void *data, struct module *owner)
> {
> return try_module_get(owner);
> @@ -3212,4 +3207,35 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
> return prog->aux->func_idx != 0;
> }
>
> +int register_bpf_struct_ops(struct bpf_struct_ops *st_ops);
> +
> +enum bpf_struct_ops_state {
> + BPF_STRUCT_OPS_STATE_INIT,
> + BPF_STRUCT_OPS_STATE_INUSE,
> + BPF_STRUCT_OPS_STATE_TOBEFREE,
> + BPF_STRUCT_OPS_STATE_READY,
> +};
> +
> +struct bpf_struct_ops_common_value {
> + refcount_t refcnt;
> + enum bpf_struct_ops_state state;
> +};
> +
> +/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
> + * the map's value exposed to the userspace and its btf-type-id is
> + * stored at the map->btf_vmlinux_value_type_id.
> + *
> + */
> +#define DEFINE_STRUCT_OPS_VALUE_TYPE(_name) \
> +extern struct bpf_struct_ops bpf_##_name; \
> + \
> +struct bpf_struct_ops_##_name { \
> + struct bpf_struct_ops_common_value common; \
> + struct _name data ____cacheline_aligned_in_smp; \
> +}
> +
> +extern void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
> + struct btf *btf,
> + struct bpf_verifier_log *log);
> +
> #endif /* _LINUX_BPF_H */
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index 6a64b372b7a0..533db3f406b3 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -12,6 +12,8 @@
> #include <uapi/linux/bpf.h>
>
> #define BTF_TYPE_EMIT(type) ((void)(type *)0)
> +#define BTF_STRUCT_OPS_TYPE_EMIT(type) {((void)(struct type *)0); \
> + ((void)(struct bpf_struct_ops_##type *)0); }
> #define BTF_TYPE_EMIT_ENUM(enum_val) ((void)enum_val)
>
> /* These need to be macros, as the expressions are used in assembler input */
> @@ -200,6 +202,7 @@ u32 btf_obj_id(const struct btf *btf);
> bool btf_is_kernel(const struct btf *btf);
> bool btf_is_module(const struct btf *btf);
> struct module *btf_try_get_module(const struct btf *btf);
> +struct btf *btf_get_module_btf(const struct module *module);
> u32 btf_nr_types(const struct btf *btf);
> bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
> const struct btf_member *m,
> @@ -574,8 +577,6 @@ static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type
> struct bpf_struct_ops;
> struct bpf_struct_ops_desc;
>
> -struct bpf_struct_ops_desc *
> -btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
> const struct bpf_struct_ops_desc *
> btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
>
> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
> index 99ab61cc6cad..44412f95bc32 100644
> --- a/kernel/bpf/bpf_struct_ops.c
> +++ b/kernel/bpf/bpf_struct_ops.c
> @@ -13,21 +13,8 @@
> #include <linux/btf_ids.h>
> #include <linux/rcupdate_wait.h>
>
> -enum bpf_struct_ops_state {
> - BPF_STRUCT_OPS_STATE_INIT,
> - BPF_STRUCT_OPS_STATE_INUSE,
> - BPF_STRUCT_OPS_STATE_TOBEFREE,
> - BPF_STRUCT_OPS_STATE_READY,
> -};
> -
> -struct bpf_struct_ops_common_value {
> - refcount_t refcnt;
> - enum bpf_struct_ops_state state;
> -};
> -#define BPF_STRUCT_OPS_COMMON_VALUE struct bpf_struct_ops_common_value common
> -
> struct bpf_struct_ops_value {
> - BPF_STRUCT_OPS_COMMON_VALUE;
> + struct bpf_struct_ops_common_value common;
> char data[] ____cacheline_aligned_in_smp;
> };
>
> @@ -72,35 +59,6 @@ static DEFINE_MUTEX(update_mutex);
> #define VALUE_PREFIX "bpf_struct_ops_"
> #define VALUE_PREFIX_LEN (sizeof(VALUE_PREFIX) - 1)
>
> -/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
> - * the map's value exposed to the userspace and its btf-type-id is
> - * stored at the map->btf_vmlinux_value_type_id.
> - *
> - */
> -#define BPF_STRUCT_OPS_TYPE(_name) \
> -extern struct bpf_struct_ops bpf_##_name; \
> - \
> -struct bpf_struct_ops_##_name { \
> - BPF_STRUCT_OPS_COMMON_VALUE; \
> - struct _name data ____cacheline_aligned_in_smp; \
> -};
> -#include "bpf_struct_ops_types.h"
> -#undef BPF_STRUCT_OPS_TYPE
> -
> -enum {
> -#define BPF_STRUCT_OPS_TYPE(_name) BPF_STRUCT_OPS_TYPE_##_name,
> -#include "bpf_struct_ops_types.h"
> -#undef BPF_STRUCT_OPS_TYPE
> - __NR_BPF_STRUCT_OPS_TYPE,
> -};
> -
> -static struct bpf_struct_ops_desc bpf_struct_ops[] = {
> -#define BPF_STRUCT_OPS_TYPE(_name) \
> - [BPF_STRUCT_OPS_TYPE_##_name] = { .st_ops = &bpf_##_name },
> -#include "bpf_struct_ops_types.h"
> -#undef BPF_STRUCT_OPS_TYPE
> -};
> -
> const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
> };
>
> @@ -110,13 +68,22 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
> #endif
> };
>
> -static const struct btf_type *module_type;
> -static const struct btf_type *common_value_type;
> +BTF_ID_LIST(st_ops_ids)
> +BTF_ID(struct, module)
> +BTF_ID(struct, bpf_struct_ops_common_value)
> +
> +enum {
> + idx_module_id,
> + idx_st_ops_common_value_id,
> +};
> +
> +extern struct btf *btf_vmlinux;
>
> static bool is_valid_value_type(struct btf *btf, s32 value_id,
> const struct btf_type *type,
> const char *value_name)
> {
> + const struct btf_type *common_value_type;
> const struct btf_member *member;
> const struct btf_type *vt, *mt;
>
> @@ -128,6 +95,8 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
> }
> member = btf_type_member(vt);
> mt = btf_type_by_id(btf, member->type);
> + common_value_type = btf_type_by_id(btf_vmlinux,
> + st_ops_ids[idx_st_ops_common_value_id]);
> if (mt != common_value_type) {
> pr_warn("The first member of %s should be bpf_struct_ops_common_value\n",
> value_name);
> @@ -144,9 +113,9 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
> return true;
> }
>
> -static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
> - struct btf *btf,
> - struct bpf_verifier_log *log)
> +void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
> + struct btf *btf,
> + struct bpf_verifier_log *log)
> {
> struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
> const struct btf_member *member;
> @@ -232,51 +201,20 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
> }
> }
>
> -void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
> -{
> - struct bpf_struct_ops_desc *st_ops_desc;
> - s32 module_id, common_value_id;
> - u32 i;
> -
> - /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
> -#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name);
> -#include "bpf_struct_ops_types.h"
> -#undef BPF_STRUCT_OPS_TYPE
> -
> - module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT);
> - if (module_id < 0) {
> - pr_warn("Cannot find struct module in btf_vmlinux\n");
> - return;
> - }
> - module_type = btf_type_by_id(btf, module_id);
> - common_value_id = btf_find_by_name_kind(btf,
> - "bpf_struct_ops_common_value",
> - BTF_KIND_STRUCT);
> - if (common_value_id < 0) {
> - pr_warn("Cannot find struct common_value in btf_vmlinux\n");
> - return;
> - }
> - common_value_type = btf_type_by_id(btf, common_value_id);
> -
> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
> - st_ops_desc = &bpf_struct_ops[i];
> - bpf_struct_ops_init_one(st_ops_desc, btf, log);
> - }
> -}
> -
> -extern struct btf *btf_vmlinux;
> -
> static const struct bpf_struct_ops_desc *
> bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
> {
> + const struct bpf_struct_ops_desc *st_ops_list;
> unsigned int i;
> + u32 cnt = 0;
>
> - if (!value_id || !btf_vmlinux)
> + if (!value_id)
> return NULL;
>
> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
> - if (bpf_struct_ops[i].value_id == value_id)
> - return &bpf_struct_ops[i];
> + st_ops_list = btf_get_struct_ops(btf, &cnt);
> + for (i = 0; i < cnt; i++) {
> + if (st_ops_list[i].value_id == value_id)
> + return &st_ops_list[i];
> }
>
> return NULL;
> @@ -285,14 +223,17 @@ bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
> const struct bpf_struct_ops_desc *
> bpf_struct_ops_find(struct btf *btf, u32 type_id)
> {
> + const struct bpf_struct_ops_desc *st_ops_list;
> unsigned int i;
> + u32 cnt;
>
> - if (!type_id || !btf_vmlinux)
> + if (!type_id)
> return NULL;
>
> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
> - if (bpf_struct_ops[i].type_id == type_id)
> - return &bpf_struct_ops[i];
> + st_ops_list = btf_get_struct_ops(btf, &cnt);
> + for (i = 0; i < cnt; i++) {
> + if (st_ops_list[i].type_id == type_id)
> + return &st_ops_list[i];
> }
>
> return NULL;
> @@ -429,6 +370,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> const struct bpf_struct_ops_desc *st_ops_desc = st_map->st_ops_desc;
> const struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
> struct bpf_struct_ops_value *uvalue, *kvalue;
> + const struct btf_type *module_type;
> const struct btf_member *member;
> const struct btf_type *t = st_ops_desc->type;
> struct bpf_tramp_links *tlinks;
> @@ -485,6 +427,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> image = st_map->image;
> image_end = st_map->image + PAGE_SIZE;
>
> + module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[idx_module_id]);
> for_each_member(i, t, member) {
> const struct btf_type *mtype, *ptype;
> struct bpf_prog *prog;
> diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
> deleted file mode 100644
> index 5678a9ddf817..000000000000
> --- a/kernel/bpf/bpf_struct_ops_types.h
> +++ /dev/null
> @@ -1,12 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -/* internal file - do not include directly */
> -
> -#ifdef CONFIG_BPF_JIT
> -#ifdef CONFIG_NET
> -BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
> -#endif
> -#ifdef CONFIG_INET
> -#include <net/tcp.h>
> -BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
> -#endif
> -#endif
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index c1e2ed6c972e..c53888e8dddb 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -5790,8 +5790,6 @@ struct btf *btf_parse_vmlinux(void)
> /* btf_parse_vmlinux() runs under bpf_verifier_lock */
> bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
>
> - bpf_struct_ops_init(btf, log);
> -
> refcount_set(&btf->refcnt, 1);
>
> err = btf_alloc_id(btf);
> @@ -7532,7 +7530,7 @@ struct module *btf_try_get_module(const struct btf *btf)
> /* Returns struct btf corresponding to the struct module.
> * This function can return NULL or ERR_PTR.
> */
> -static struct btf *btf_get_module_btf(const struct module *module)
> +struct btf *btf_get_module_btf(const struct module *module)
> {
> #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
> struct btf_module *btf_mod, *tmp;
> @@ -8672,3 +8670,40 @@ const struct bpf_struct_ops_desc *btf_get_struct_ops(struct btf *btf, u32 *ret_c
> return (const struct bpf_struct_ops_desc *)btf->struct_ops_tab->ops;
> }
>
> +int register_bpf_struct_ops(struct bpf_struct_ops *st_ops)
> +{
> + struct bpf_struct_ops_desc *desc;
> + struct bpf_verifier_log *log;
> + struct btf *btf;
> + int err = 0;
> +
> + if (st_ops == NULL)
> + return -EINVAL;
> +
> + btf = btf_get_module_btf(st_ops->owner);
> + if (!btf)
> + return -EINVAL;
> +
> + log = kzalloc(sizeof(*log), GFP_KERNEL | __GFP_NOWARN);
> + if (!log) {
> + err = -ENOMEM;
> + goto errout;
> + }
> +
> + log->level = BPF_LOG_KERNEL;
Nit: maybe use bpf_vlog_init() here to avoid breaking encapsulation?
> +
> + desc = btf_add_struct_ops(btf, st_ops);
> + if (IS_ERR(desc)) {
> + err = PTR_ERR(desc);
> + goto errout;
> + }
> +
> + bpf_struct_ops_init(desc, btf, log);
Nit: I think bpf_struct_ops_init() could be changed to return 'int';
then register_bpf_struct_ops() could report to the calling module if
something went wrong in the last phase, wdyt?
> +
> +errout:
> + kfree(log);
> + btf_put(btf);
> +
> + return err;
> +}
> +EXPORT_SYMBOL_GPL(register_bpf_struct_ops);
> diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
> index ffa224053a6c..148a5851c4fa 100644
> --- a/net/bpf/bpf_dummy_struct_ops.c
> +++ b/net/bpf/bpf_dummy_struct_ops.c
> @@ -7,7 +7,7 @@
> #include <linux/bpf.h>
> #include <linux/btf.h>
>
> -extern struct bpf_struct_ops bpf_bpf_dummy_ops;
> +static struct bpf_struct_ops bpf_bpf_dummy_ops;
>
> /* A common type for test_N with return value in bpf_dummy_ops */
> typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
> @@ -223,11 +223,13 @@ static int bpf_dummy_reg(void *kdata)
> return -EOPNOTSUPP;
> }
>
> +DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_dummy_ops);
> +
> static void bpf_dummy_unreg(void *kdata)
> {
> }
>
> -struct bpf_struct_ops bpf_bpf_dummy_ops = {
> +static struct bpf_struct_ops bpf_bpf_dummy_ops = {
> .verifier_ops = &bpf_dummy_verifier_ops,
> .init = bpf_dummy_init,
> .check_member = bpf_dummy_ops_check_member,
> @@ -235,4 +237,12 @@ struct bpf_struct_ops bpf_bpf_dummy_ops = {
> .reg = bpf_dummy_reg,
> .unreg = bpf_dummy_unreg,
> .name = "bpf_dummy_ops",
> + .owner = THIS_MODULE,
> };
> +
> +static int __init bpf_dummy_struct_ops_init(void)
> +{
> + BTF_STRUCT_OPS_TYPE_EMIT(bpf_dummy_ops);
> + return register_bpf_struct_ops(&bpf_bpf_dummy_ops);
> +}
> +late_initcall(bpf_dummy_struct_ops_init);
> diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
> index 3c8b76578a2a..b36a19274e5b 100644
> --- a/net/ipv4/bpf_tcp_ca.c
> +++ b/net/ipv4/bpf_tcp_ca.c
> @@ -12,7 +12,7 @@
> #include <net/bpf_sk_storage.h>
>
> /* "extern" is to avoid sparse warning. It is only used in bpf_struct_ops.c. */
> -extern struct bpf_struct_ops bpf_tcp_congestion_ops;
> +static struct bpf_struct_ops bpf_tcp_congestion_ops;
>
> static u32 unsupported_ops[] = {
> offsetof(struct tcp_congestion_ops, get_info),
> @@ -277,7 +277,9 @@ static int bpf_tcp_ca_validate(void *kdata)
> return tcp_validate_congestion_control(kdata);
> }
>
> -struct bpf_struct_ops bpf_tcp_congestion_ops = {
> +DEFINE_STRUCT_OPS_VALUE_TYPE(tcp_congestion_ops);
> +
> +static struct bpf_struct_ops bpf_tcp_congestion_ops = {
> .verifier_ops = &bpf_tcp_ca_verifier_ops,
> .reg = bpf_tcp_ca_reg,
> .unreg = bpf_tcp_ca_unreg,
> @@ -287,10 +289,18 @@ struct bpf_struct_ops bpf_tcp_congestion_ops = {
> .init = bpf_tcp_ca_init,
> .validate = bpf_tcp_ca_validate,
> .name = "tcp_congestion_ops",
> + .owner = THIS_MODULE,
> };
>
> static int __init bpf_tcp_ca_kfunc_init(void)
> {
> - return register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
> + int ret;
> +
> + BTF_STRUCT_OPS_TYPE_EMIT(tcp_congestion_ops);
> +
> + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
> + ret = ret ?: register_bpf_struct_ops(&bpf_tcp_congestion_ops);
> +
> + return ret;
> }
> late_initcall(bpf_tcp_ca_kfunc_init);
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v6 04/10] bpf: hold module for bpf_struct_ops_map.
2023-10-22 5:03 ` [PATCH bpf-next v6 04/10] bpf: hold module for bpf_struct_ops_map thinker.li
@ 2023-10-26 21:11 ` Eduard Zingerman
2023-10-27 4:35 ` Kui-Feng Lee
0 siblings, 1 reply; 24+ messages in thread
From: Eduard Zingerman @ 2023-10-26 21:11 UTC (permalink / raw)
To: thinker.li, bpf, ast, martin.lau, song, kernel-team, andrii,
drosen
Cc: sinquersw, kuifeng
On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
> From: Kui-Feng Lee <thinker.li@gmail.com>
>
> This ensures that a module remains accessible whenever a struct_ops object
> of a struct_ops type provided by the module is still in use.
>
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
> include/linux/bpf.h | 1 +
> include/linux/btf.h | 2 +-
> kernel/bpf/bpf_struct_ops.c | 70 ++++++++++++++++++++++++++++++-------
> 3 files changed, 60 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 4f3b67932ded..26feb8a2da4f 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1626,6 +1626,7 @@ struct bpf_struct_ops {
> void (*unreg)(void *kdata);
> int (*update)(void *kdata, void *old_kdata);
> int (*validate)(void *kdata);
> + struct module *owner;
> const char *name;
> struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
> };
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index 8e37f7eb02c7..6a64b372b7a0 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -575,7 +575,7 @@ struct bpf_struct_ops;
> struct bpf_struct_ops_desc;
>
> struct bpf_struct_ops_desc *
> -btf_add_struct_ops(struct bpf_struct_ops *st_ops);
> +btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
> const struct bpf_struct_ops_desc *
> btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
>
> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
> index 0bc21a39257d..413a3f8b26ba 100644
> --- a/kernel/bpf/bpf_struct_ops.c
> +++ b/kernel/bpf/bpf_struct_ops.c
> @@ -388,6 +388,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> const struct btf_member *member;
> const struct btf_type *t = st_ops_desc->type;
> struct bpf_tramp_links *tlinks;
> + struct module *mod = NULL;
> void *udata, *kdata;
> int prog_fd, err;
> void *image, *image_end;
> @@ -425,6 +426,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> goto unlock;
> }
>
> + if (st_ops_desc->btf != btf_vmlinux) {
> + mod = btf_try_get_module(st_ops_desc->btf);
> + if (!mod) {
> + err = -EBUSY;
Nit: the error code returned for a failing btf_try_get_module()
is inconsistent across the verifier code base:
- EINVAL is used 2 times;
- ENXIO is used 3 times;
- ENOTSUPP is used once.
Are you sure EBUSY is a good choice here?
> + goto unlock;
> + }
> + }
> +
> memcpy(uvalue, value, map->value_size);
>
> udata = &uvalue->data;
> @@ -552,6 +561,10 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> * can be seen once BPF_STRUCT_OPS_STATE_INUSE is set.
> */
> smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE);
> + /* Hold the owner module until the struct_ops is
> + * unregistered
> + */
> + mod = NULL;
> goto unlock;
> }
>
> @@ -568,6 +581,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
> memset(uvalue, 0, map->value_size);
> memset(kvalue, 0, map->value_size);
> unlock:
> + module_put(mod);
> kfree(tlinks);
> mutex_unlock(&st_map->lock);
> return err;
> @@ -588,6 +602,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
> switch (prev_state) {
> case BPF_STRUCT_OPS_STATE_INUSE:
> st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
> + module_put(st_map->st_ops_desc->st_ops->owner);
> bpf_map_put(map);
> return 0;
> case BPF_STRUCT_OPS_STATE_TOBEFREE:
> @@ -674,6 +689,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
> size_t st_map_size;
> struct bpf_struct_ops_map *st_map;
> const struct btf_type *t, *vt;
> + struct module *mod = NULL;
> struct bpf_map *map;
> int ret;
>
> @@ -681,9 +697,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
> if (!st_ops_desc)
> return ERR_PTR(-ENOTSUPP);
>
> + if (st_ops_desc->btf != btf_vmlinux) {
> + mod = btf_try_get_module(st_ops_desc->btf);
> + if (!mod)
> + return ERR_PTR(-EINVAL);
> + }
> +
> vt = st_ops_desc->value_type;
> - if (attr->value_size != vt->size)
> - return ERR_PTR(-EINVAL);
> + if (attr->value_size != vt->size) {
> + ret = -EINVAL;
> + goto errout;
> + }
>
> t = st_ops_desc->type;
>
> @@ -694,17 +718,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
> (vt->size - sizeof(struct bpf_struct_ops_value));
>
> st_map = bpf_map_area_alloc(st_map_size, NUMA_NO_NODE);
> - if (!st_map)
> - return ERR_PTR(-ENOMEM);
> + if (!st_map) {
> + ret = -ENOMEM;
> + goto errout;
> + }
>
> st_map->st_ops_desc = st_ops_desc;
> map = &st_map->map;
>
> ret = bpf_jit_charge_modmem(PAGE_SIZE);
> - if (ret) {
> - __bpf_struct_ops_map_free(map);
> - return ERR_PTR(ret);
> - }
> + if (ret)
> + goto errout_free;
>
> st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
> if (!st_map->image) {
> @@ -713,23 +737,32 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
> * here.
> */
> bpf_jit_uncharge_modmem(PAGE_SIZE);
> - __bpf_struct_ops_map_free(map);
> - return ERR_PTR(-ENOMEM);
> + ret = -ENOMEM;
> + goto errout_free;
> }
> st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
> st_map->links =
> bpf_map_area_alloc(btf_type_vlen(t) * sizeof(struct bpf_links *),
> NUMA_NO_NODE);
> if (!st_map->uvalue || !st_map->links) {
> - __bpf_struct_ops_map_free(map);
> - return ERR_PTR(-ENOMEM);
> + ret = -ENOMEM;
> + goto errout_free;
> }
>
> mutex_init(&st_map->lock);
> set_vm_flush_reset_perms(st_map->image);
> bpf_map_init_from_attr(map, attr);
>
> + module_put(mod);
> +
> return map;
> +
> +errout_free:
> + __bpf_struct_ops_map_free(map);
> + btf = NULL; /* has been released */
> +errout:
> + module_put(mod);
> + return ERR_PTR(ret);
> }
>
> static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
> @@ -811,6 +844,7 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
> * bpf_struct_ops_link_create() fails to register.
> */
> st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
> + module_put(st_map->st_ops_desc->st_ops->owner);
> bpf_map_put(&st_map->map);
> }
> kfree(st_link);
> @@ -857,6 +891,10 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
> if (!bpf_struct_ops_valid_to_reg(new_map))
> return -EINVAL;
>
> + /* The old map is holding the refcount for the owner module. The
> + * ownership of the owner module refcount is going to be
> + * transferred from the old map to the new map.
> + */
> if (!st_map->st_ops_desc->st_ops->update)
> return -EOPNOTSUPP;
>
> @@ -902,6 +940,7 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
> struct bpf_link_primer link_primer;
> struct bpf_struct_ops_map *st_map;
> struct bpf_map *map;
> + struct btf *btf;
> int err;
>
> map = bpf_map_get(attr->link_create.map_fd);
> @@ -926,8 +965,15 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
> if (err)
> goto err_out;
>
> + /* Hold the owner module until the struct_ops is unregistered. */
> + btf = st_map->st_ops_desc->btf;
> + if (btf != btf_vmlinux && !btf_try_get_module(btf)) {
> + err = -EINVAL;
> + goto err_out;
> + }
> err = st_map->st_ops_desc->st_ops->reg(st_map->kvalue.data);
> if (err) {
> + module_put(st_map->st_ops_desc->st_ops->owner);
> bpf_link_cleanup(&link_primer);
> link = NULL;
> goto err_out;
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v6 04/10] bpf: hold module for bpf_struct_ops_map.
2023-10-26 21:11 ` Eduard Zingerman
@ 2023-10-27 4:35 ` Kui-Feng Lee
0 siblings, 0 replies; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-27 4:35 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng
On 10/26/23 14:11, Eduard Zingerman wrote:
> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>> From: Kui-Feng Lee <thinker.li@gmail.com>
>>
>> To ensure that a module remains accessible whenever a struct_ops object of
>> a struct_ops type provided by the module is still in use.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>> ---
>> include/linux/bpf.h | 1 +
>> include/linux/btf.h | 2 +-
>> kernel/bpf/bpf_struct_ops.c | 70 ++++++++++++++++++++++++++++++-------
>> 3 files changed, 60 insertions(+), 13 deletions(-)
>>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 4f3b67932ded..26feb8a2da4f 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1626,6 +1626,7 @@ struct bpf_struct_ops {
>> void (*unreg)(void *kdata);
>> int (*update)(void *kdata, void *old_kdata);
>> int (*validate)(void *kdata);
>> + struct module *owner;
>> const char *name;
>> struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
>> };
>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>> index 8e37f7eb02c7..6a64b372b7a0 100644
>> --- a/include/linux/btf.h
>> +++ b/include/linux/btf.h
>> @@ -575,7 +575,7 @@ struct bpf_struct_ops;
>> struct bpf_struct_ops_desc;
>>
>> struct bpf_struct_ops_desc *
>> -btf_add_struct_ops(struct bpf_struct_ops *st_ops);
>> +btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
>> const struct bpf_struct_ops_desc *
>> btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
>>
>> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
>> index 0bc21a39257d..413a3f8b26ba 100644
>> --- a/kernel/bpf/bpf_struct_ops.c
>> +++ b/kernel/bpf/bpf_struct_ops.c
>> @@ -388,6 +388,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> const struct btf_member *member;
>> const struct btf_type *t = st_ops_desc->type;
>> struct bpf_tramp_links *tlinks;
>> + struct module *mod = NULL;
>> void *udata, *kdata;
>> int prog_fd, err;
>> void *image, *image_end;
>> @@ -425,6 +426,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> goto unlock;
>> }
>>
>> + if (st_ops_desc->btf != btf_vmlinux) {
>> + mod = btf_try_get_module(st_ops_desc->btf);
>> + if (!mod) {
>> + err = -EBUSY;
>
> Nit: there is a disagreement about error code returned for
> failing btf_try_get_module() across verifier code base:
> - EINVAL is used 2 times;
> - ENXIO is used 3 times;
> - ENOTSUPP is used once.
> Are you sure EBUSY is a good choice here?
You are right. I think EINVAL is better for this case.
>
>> + goto unlock;
>> + }
>> + }
>> +
>> memcpy(uvalue, value, map->value_size);
>>
>> udata = &uvalue->data;
>> @@ -552,6 +561,10 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> * can be seen once BPF_STRUCT_OPS_STATE_INUSE is set.
>> */
>> smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE);
>> + /* Hold the owner module until the struct_ops is
>> + * unregistered
>> + */
>> + mod = NULL;
>> goto unlock;
>> }
>>
>> @@ -568,6 +581,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> memset(uvalue, 0, map->value_size);
>> memset(kvalue, 0, map->value_size);
>> unlock:
>> + module_put(mod);
>> kfree(tlinks);
>> mutex_unlock(&st_map->lock);
>> return err;
>> @@ -588,6 +602,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
>> switch (prev_state) {
>> case BPF_STRUCT_OPS_STATE_INUSE:
>> st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
>> + module_put(st_map->st_ops_desc->st_ops->owner);
>> bpf_map_put(map);
>> return 0;
>> case BPF_STRUCT_OPS_STATE_TOBEFREE:
>> @@ -674,6 +689,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
>> size_t st_map_size;
>> struct bpf_struct_ops_map *st_map;
>> const struct btf_type *t, *vt;
>> + struct module *mod = NULL;
>> struct bpf_map *map;
>> int ret;
>>
>> @@ -681,9 +697,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
>> if (!st_ops_desc)
>> return ERR_PTR(-ENOTSUPP);
>>
>> + if (st_ops_desc->btf != btf_vmlinux) {
>> + mod = btf_try_get_module(st_ops_desc->btf);
>> + if (!mod)
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> vt = st_ops_desc->value_type;
>> - if (attr->value_size != vt->size)
>> - return ERR_PTR(-EINVAL);
>> + if (attr->value_size != vt->size) {
>> + ret = -EINVAL;
>> + goto errout;
>> + }
>>
>> t = st_ops_desc->type;
>>
>> @@ -694,17 +718,17 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
>> (vt->size - sizeof(struct bpf_struct_ops_value));
>>
>> st_map = bpf_map_area_alloc(st_map_size, NUMA_NO_NODE);
>> - if (!st_map)
>> - return ERR_PTR(-ENOMEM);
>> + if (!st_map) {
>> + ret = -ENOMEM;
>> + goto errout;
>> + }
>>
>> st_map->st_ops_desc = st_ops_desc;
>> map = &st_map->map;
>>
>> ret = bpf_jit_charge_modmem(PAGE_SIZE);
>> - if (ret) {
>> - __bpf_struct_ops_map_free(map);
>> - return ERR_PTR(ret);
>> - }
>> + if (ret)
>> + goto errout_free;
>>
>> st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
>> if (!st_map->image) {
>> @@ -713,23 +737,32 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
>> * here.
>> */
>> bpf_jit_uncharge_modmem(PAGE_SIZE);
>> - __bpf_struct_ops_map_free(map);
>> - return ERR_PTR(-ENOMEM);
>> + ret = -ENOMEM;
>> + goto errout_free;
>> }
>> st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
>> st_map->links =
>> bpf_map_area_alloc(btf_type_vlen(t) * sizeof(struct bpf_links *),
>> NUMA_NO_NODE);
>> if (!st_map->uvalue || !st_map->links) {
>> - __bpf_struct_ops_map_free(map);
>> - return ERR_PTR(-ENOMEM);
>> + ret = -ENOMEM;
>> + goto errout_free;
>> }
>>
>> mutex_init(&st_map->lock);
>> set_vm_flush_reset_perms(st_map->image);
>> bpf_map_init_from_attr(map, attr);
>>
>> + module_put(mod);
>> +
>> return map;
>> +
>> +errout_free:
>> + __bpf_struct_ops_map_free(map);
>> + btf = NULL; /* has been released */
>> +errout:
>> + module_put(mod);
>> + return ERR_PTR(ret);
>> }
>>
>> static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
>> @@ -811,6 +844,7 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
>> * bpf_struct_ops_link_create() fails to register.
>> */
>> st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data);
>> + module_put(st_map->st_ops_desc->st_ops->owner);
>> bpf_map_put(&st_map->map);
>> }
>> kfree(st_link);
>> @@ -857,6 +891,10 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
>> if (!bpf_struct_ops_valid_to_reg(new_map))
>> return -EINVAL;
>>
>> + /* The old map is holding the refcount for the owner module. The
>> + * ownership of the owner module refcount is going to be
>> + * transferred from the old map to the new map.
>> + */
>> if (!st_map->st_ops_desc->st_ops->update)
>> return -EOPNOTSUPP;
>>
>> @@ -902,6 +940,7 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
>> struct bpf_link_primer link_primer;
>> struct bpf_struct_ops_map *st_map;
>> struct bpf_map *map;
>> + struct btf *btf;
>> int err;
>>
>> map = bpf_map_get(attr->link_create.map_fd);
>> @@ -926,8 +965,15 @@ int bpf_struct_ops_link_create(union bpf_attr *attr)
>> if (err)
>> goto err_out;
>>
>> + /* Hold the owner module until the struct_ops is unregistered. */
>> + btf = st_map->st_ops_desc->btf;
>> + if (btf != btf_vmlinux && !btf_try_get_module(btf)) {
>> + err = -EINVAL;
>> + goto err_out;
>> + }
>> err = st_map->st_ops_desc->st_ops->reg(st_map->kvalue.data);
>> if (err) {
>> + module_put(st_map->st_ops_desc->st_ops->owner);
>> bpf_link_cleanup(&link_primer);
>> link = NULL;
>> goto err_out;
>
* Re: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
2023-10-26 21:02 ` Eduard Zingerman
@ 2023-10-27 4:39 ` Kui-Feng Lee
2023-10-27 21:32 ` Kui-Feng Lee
0 siblings, 1 reply; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-27 4:39 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng, netdev
On 10/26/23 14:02, Eduard Zingerman wrote:
> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>> From: Kui-Feng Lee <thinker.li@gmail.com>
>>
>> Replace the static list of struct_ops types with per-btf struct_ops_tab to
>> enable dynamic registration.
>>
>> Both bpf_dummy_ops and bpf_tcp_ca now utilize the registration function
>> instead of being listed in bpf_struct_ops_types.h.
>>
>> Cc: netdev@vger.kernel.org
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>
> Hello,
>
> I left two nitpicks below, feel free to ignore if you think is good as-is.
>
>> ---
>> include/linux/bpf.h | 36 +++++++--
>> include/linux/btf.h | 5 +-
>> kernel/bpf/bpf_struct_ops.c | 123 ++++++++----------------------
>> kernel/bpf/bpf_struct_ops_types.h | 12 ---
>> kernel/bpf/btf.c | 41 +++++++++-
>> net/bpf/bpf_dummy_struct_ops.c | 14 +++-
>> net/ipv4/bpf_tcp_ca.c | 16 +++-
>> 7 files changed, 130 insertions(+), 117 deletions(-)
>> delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
>>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 26feb8a2da4f..a53b2181c845 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1644,7 +1644,6 @@ struct bpf_struct_ops_desc {
>> #if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
>> #define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
>> const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *btf, u32 type_id);
>> -void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log);
>> bool bpf_struct_ops_get(const void *kdata);
>> void bpf_struct_ops_put(const void *kdata);
>> int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
>> @@ -1690,10 +1689,6 @@ static inline const struct bpf_struct_ops_desc *bpf_struct_ops_find(struct btf *
>> {
>> return NULL;
>> }
>> -static inline void bpf_struct_ops_init(struct btf *btf,
>> - struct bpf_verifier_log *log)
>> -{
>> -}
>> static inline bool bpf_try_module_get(const void *data, struct module *owner)
>> {
>> return try_module_get(owner);
>> @@ -3212,4 +3207,35 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
>> return prog->aux->func_idx != 0;
>> }
>>
>> +int register_bpf_struct_ops(struct bpf_struct_ops *st_ops);
>> +
>> +enum bpf_struct_ops_state {
>> + BPF_STRUCT_OPS_STATE_INIT,
>> + BPF_STRUCT_OPS_STATE_INUSE,
>> + BPF_STRUCT_OPS_STATE_TOBEFREE,
>> + BPF_STRUCT_OPS_STATE_READY,
>> +};
>> +
>> +struct bpf_struct_ops_common_value {
>> + refcount_t refcnt;
>> + enum bpf_struct_ops_state state;
>> +};
>> +
>> +/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
>> + * the map's value exposed to the userspace and its btf-type-id is
>> + * stored at the map->btf_vmlinux_value_type_id.
>> + *
>> + */
>> +#define DEFINE_STRUCT_OPS_VALUE_TYPE(_name) \
>> +extern struct bpf_struct_ops bpf_##_name; \
>> + \
>> +struct bpf_struct_ops_##_name { \
>> + struct bpf_struct_ops_common_value common; \
>> + struct _name data ____cacheline_aligned_in_smp; \
>> +}
>> +
>> +extern void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
>> + struct btf *btf,
>> + struct bpf_verifier_log *log);
>> +
>> #endif /* _LINUX_BPF_H */
>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>> index 6a64b372b7a0..533db3f406b3 100644
>> --- a/include/linux/btf.h
>> +++ b/include/linux/btf.h
>> @@ -12,6 +12,8 @@
>> #include <uapi/linux/bpf.h>
>>
>> #define BTF_TYPE_EMIT(type) ((void)(type *)0)
>> +#define BTF_STRUCT_OPS_TYPE_EMIT(type) {((void)(struct type *)0); \
>> + ((void)(struct bpf_struct_ops_##type *)0); }
>> #define BTF_TYPE_EMIT_ENUM(enum_val) ((void)enum_val)
>>
>> /* These need to be macros, as the expressions are used in assembler input */
>> @@ -200,6 +202,7 @@ u32 btf_obj_id(const struct btf *btf);
>> bool btf_is_kernel(const struct btf *btf);
>> bool btf_is_module(const struct btf *btf);
>> struct module *btf_try_get_module(const struct btf *btf);
>> +struct btf *btf_get_module_btf(const struct module *module);
>> u32 btf_nr_types(const struct btf *btf);
>> bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
>> const struct btf_member *m,
>> @@ -574,8 +577,6 @@ static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type
>> struct bpf_struct_ops;
>> struct bpf_struct_ops_desc;
>>
>> -struct bpf_struct_ops_desc *
>> -btf_add_struct_ops(struct btf *btf, struct bpf_struct_ops *st_ops);
>> const struct bpf_struct_ops_desc *
>> btf_get_struct_ops(struct btf *btf, u32 *ret_cnt);
>>
>> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
>> index 99ab61cc6cad..44412f95bc32 100644
>> --- a/kernel/bpf/bpf_struct_ops.c
>> +++ b/kernel/bpf/bpf_struct_ops.c
>> @@ -13,21 +13,8 @@
>> #include <linux/btf_ids.h>
>> #include <linux/rcupdate_wait.h>
>>
>> -enum bpf_struct_ops_state {
>> - BPF_STRUCT_OPS_STATE_INIT,
>> - BPF_STRUCT_OPS_STATE_INUSE,
>> - BPF_STRUCT_OPS_STATE_TOBEFREE,
>> - BPF_STRUCT_OPS_STATE_READY,
>> -};
>> -
>> -struct bpf_struct_ops_common_value {
>> - refcount_t refcnt;
>> - enum bpf_struct_ops_state state;
>> -};
>> -#define BPF_STRUCT_OPS_COMMON_VALUE struct bpf_struct_ops_common_value common
>> -
>> struct bpf_struct_ops_value {
>> - BPF_STRUCT_OPS_COMMON_VALUE;
>> + struct bpf_struct_ops_common_value common;
>> char data[] ____cacheline_aligned_in_smp;
>> };
>>
>> @@ -72,35 +59,6 @@ static DEFINE_MUTEX(update_mutex);
>> #define VALUE_PREFIX "bpf_struct_ops_"
>> #define VALUE_PREFIX_LEN (sizeof(VALUE_PREFIX) - 1)
>>
>> -/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is
>> - * the map's value exposed to the userspace and its btf-type-id is
>> - * stored at the map->btf_vmlinux_value_type_id.
>> - *
>> - */
>> -#define BPF_STRUCT_OPS_TYPE(_name) \
>> -extern struct bpf_struct_ops bpf_##_name; \
>> - \
>> -struct bpf_struct_ops_##_name { \
>> - BPF_STRUCT_OPS_COMMON_VALUE; \
>> - struct _name data ____cacheline_aligned_in_smp; \
>> -};
>> -#include "bpf_struct_ops_types.h"
>> -#undef BPF_STRUCT_OPS_TYPE
>> -
>> -enum {
>> -#define BPF_STRUCT_OPS_TYPE(_name) BPF_STRUCT_OPS_TYPE_##_name,
>> -#include "bpf_struct_ops_types.h"
>> -#undef BPF_STRUCT_OPS_TYPE
>> - __NR_BPF_STRUCT_OPS_TYPE,
>> -};
>> -
>> -static struct bpf_struct_ops_desc bpf_struct_ops[] = {
>> -#define BPF_STRUCT_OPS_TYPE(_name) \
>> - [BPF_STRUCT_OPS_TYPE_##_name] = { .st_ops = &bpf_##_name },
>> -#include "bpf_struct_ops_types.h"
>> -#undef BPF_STRUCT_OPS_TYPE
>> -};
>> -
>> const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
>> };
>>
>> @@ -110,13 +68,22 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
>> #endif
>> };
>>
>> -static const struct btf_type *module_type;
>> -static const struct btf_type *common_value_type;
>> +BTF_ID_LIST(st_ops_ids)
>> +BTF_ID(struct, module)
>> +BTF_ID(struct, bpf_struct_ops_common_value)
>> +
>> +enum {
>> + idx_module_id,
>> + idx_st_ops_common_value_id,
>> +};
>> +
>> +extern struct btf *btf_vmlinux;
>>
>> static bool is_valid_value_type(struct btf *btf, s32 value_id,
>> const struct btf_type *type,
>> const char *value_name)
>> {
>> + const struct btf_type *common_value_type;
>> const struct btf_member *member;
>> const struct btf_type *vt, *mt;
>>
>> @@ -128,6 +95,8 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
>> }
>> member = btf_type_member(vt);
>> mt = btf_type_by_id(btf, member->type);
>> + common_value_type = btf_type_by_id(btf_vmlinux,
>> + st_ops_ids[idx_st_ops_common_value_id]);
>> if (mt != common_value_type) {
>> pr_warn("The first member of %s should be bpf_struct_ops_common_value\n",
>> value_name);
>> @@ -144,9 +113,9 @@ static bool is_valid_value_type(struct btf *btf, s32 value_id,
>> return true;
>> }
>>
>> -static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
>> - struct btf *btf,
>> - struct bpf_verifier_log *log)
>> +void bpf_struct_ops_init(struct bpf_struct_ops_desc *st_ops_desc,
>> + struct btf *btf,
>> + struct bpf_verifier_log *log)
>> {
>> struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
>> const struct btf_member *member;
>> @@ -232,51 +201,20 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops_desc *st_ops_desc,
>> }
>> }
>>
>> -void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
>> -{
>> - struct bpf_struct_ops_desc *st_ops_desc;
>> - s32 module_id, common_value_id;
>> - u32 i;
>> -
>> - /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
>> -#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name);
>> -#include "bpf_struct_ops_types.h"
>> -#undef BPF_STRUCT_OPS_TYPE
>> -
>> - module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT);
>> - if (module_id < 0) {
>> - pr_warn("Cannot find struct module in btf_vmlinux\n");
>> - return;
>> - }
>> - module_type = btf_type_by_id(btf, module_id);
>> - common_value_id = btf_find_by_name_kind(btf,
>> - "bpf_struct_ops_common_value",
>> - BTF_KIND_STRUCT);
>> - if (common_value_id < 0) {
>> - pr_warn("Cannot find struct common_value in btf_vmlinux\n");
>> - return;
>> - }
>> - common_value_type = btf_type_by_id(btf, common_value_id);
>> -
>> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
>> - st_ops_desc = &bpf_struct_ops[i];
>> - bpf_struct_ops_init_one(st_ops_desc, btf, log);
>> - }
>> -}
>> -
>> -extern struct btf *btf_vmlinux;
>> -
>> static const struct bpf_struct_ops_desc *
>> bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
>> {
>> + const struct bpf_struct_ops_desc *st_ops_list;
>> unsigned int i;
>> + u32 cnt = 0;
>>
>> - if (!value_id || !btf_vmlinux)
>> + if (!value_id)
>> return NULL;
>>
>> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
>> - if (bpf_struct_ops[i].value_id == value_id)
>> - return &bpf_struct_ops[i];
>> + st_ops_list = btf_get_struct_ops(btf, &cnt);
>> + for (i = 0; i < cnt; i++) {
>> + if (st_ops_list[i].value_id == value_id)
>> + return &st_ops_list[i];
>> }
>>
>> return NULL;
>> @@ -285,14 +223,17 @@ bpf_struct_ops_find_value(struct btf *btf, u32 value_id)
>> const struct bpf_struct_ops_desc *
>> bpf_struct_ops_find(struct btf *btf, u32 type_id)
>> {
>> + const struct bpf_struct_ops_desc *st_ops_list;
>> unsigned int i;
>> + u32 cnt;
>>
>> - if (!type_id || !btf_vmlinux)
>> + if (!type_id)
>> return NULL;
>>
>> - for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
>> - if (bpf_struct_ops[i].type_id == type_id)
>> - return &bpf_struct_ops[i];
>> + st_ops_list = btf_get_struct_ops(btf, &cnt);
>> + for (i = 0; i < cnt; i++) {
>> + if (st_ops_list[i].type_id == type_id)
>> + return &st_ops_list[i];
>> }
>>
>> return NULL;
>> @@ -429,6 +370,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> const struct bpf_struct_ops_desc *st_ops_desc = st_map->st_ops_desc;
>> const struct bpf_struct_ops *st_ops = st_ops_desc->st_ops;
>> struct bpf_struct_ops_value *uvalue, *kvalue;
>> + const struct btf_type *module_type;
>> const struct btf_member *member;
>> const struct btf_type *t = st_ops_desc->type;
>> struct bpf_tramp_links *tlinks;
>> @@ -485,6 +427,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
>> image = st_map->image;
>> image_end = st_map->image + PAGE_SIZE;
>>
>> + module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[idx_module_id]);
>> for_each_member(i, t, member) {
>> const struct btf_type *mtype, *ptype;
>> struct bpf_prog *prog;
>> diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
>> deleted file mode 100644
>> index 5678a9ddf817..000000000000
>> --- a/kernel/bpf/bpf_struct_ops_types.h
>> +++ /dev/null
>> @@ -1,12 +0,0 @@
>> -/* SPDX-License-Identifier: GPL-2.0 */
>> -/* internal file - do not include directly */
>> -
>> -#ifdef CONFIG_BPF_JIT
>> -#ifdef CONFIG_NET
>> -BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
>> -#endif
>> -#ifdef CONFIG_INET
>> -#include <net/tcp.h>
>> -BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
>> -#endif
>> -#endif
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index c1e2ed6c972e..c53888e8dddb 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -5790,8 +5790,6 @@ struct btf *btf_parse_vmlinux(void)
>> /* btf_parse_vmlinux() runs under bpf_verifier_lock */
>> bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
>>
>> - bpf_struct_ops_init(btf, log);
>> -
>> refcount_set(&btf->refcnt, 1);
>>
>> err = btf_alloc_id(btf);
>> @@ -7532,7 +7530,7 @@ struct module *btf_try_get_module(const struct btf *btf)
>> /* Returns struct btf corresponding to the struct module.
>> * This function can return NULL or ERR_PTR.
>> */
>> -static struct btf *btf_get_module_btf(const struct module *module)
>> +struct btf *btf_get_module_btf(const struct module *module)
>> {
>> #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
>> struct btf_module *btf_mod, *tmp;
>> @@ -8672,3 +8670,40 @@ const struct bpf_struct_ops_desc *btf_get_struct_ops(struct btf *btf, u32 *ret_c
>> return (const struct bpf_struct_ops_desc *)btf->struct_ops_tab->ops;
>> }
>>
>> +int register_bpf_struct_ops(struct bpf_struct_ops *st_ops)
>> +{
>> + struct bpf_struct_ops_desc *desc;
>> + struct bpf_verifier_log *log;
>> + struct btf *btf;
>> + int err = 0;
>> +
>> + if (st_ops == NULL)
>> + return -EINVAL;
>> +
>> + btf = btf_get_module_btf(st_ops->owner);
>> + if (!btf)
>> + return -EINVAL;
>> +
>> + log = kzalloc(sizeof(*log), GFP_KERNEL | __GFP_NOWARN);
>> + if (!log) {
>> + err = -ENOMEM;
>> + goto errout;
>> + }
>> +
>> + log->level = BPF_LOG_KERNEL;
>
> Nit: maybe use bpf_vlog_init() here to avoid breaking encapsulation?
Agree!
>
>> +
>> + desc = btf_add_struct_ops(btf, st_ops);
>> + if (IS_ERR(desc)) {
>> + err = PTR_ERR(desc);
>> + goto errout;
>> + }
>> +
>> + bpf_struct_ops_init(desc, btf, log);
>
> Nit: I think bpf_struct_ops_init() could be changed to return 'int',
> then register_bpf_struct_ops() could report to calling module if
> something went wrong on the last phase, wdyt?
Agree!
>
>> +
>> +errout:
>> + kfree(log);
>> + btf_put(btf);
>> +
>> + return err;
>> +}
>> +EXPORT_SYMBOL_GPL(register_bpf_struct_ops);
>> diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
>> index ffa224053a6c..148a5851c4fa 100644
>> --- a/net/bpf/bpf_dummy_struct_ops.c
>> +++ b/net/bpf/bpf_dummy_struct_ops.c
>> @@ -7,7 +7,7 @@
>> #include <linux/bpf.h>
>> #include <linux/btf.h>
>>
>> -extern struct bpf_struct_ops bpf_bpf_dummy_ops;
>> +static struct bpf_struct_ops bpf_bpf_dummy_ops;
>>
>> /* A common type for test_N with return value in bpf_dummy_ops */
>> typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
>> @@ -223,11 +223,13 @@ static int bpf_dummy_reg(void *kdata)
>> return -EOPNOTSUPP;
>> }
>>
>> +DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_dummy_ops);
>> +
>> static void bpf_dummy_unreg(void *kdata)
>> {
>> }
>>
>> -struct bpf_struct_ops bpf_bpf_dummy_ops = {
>> +static struct bpf_struct_ops bpf_bpf_dummy_ops = {
>> .verifier_ops = &bpf_dummy_verifier_ops,
>> .init = bpf_dummy_init,
>> .check_member = bpf_dummy_ops_check_member,
>> @@ -235,4 +237,12 @@ struct bpf_struct_ops bpf_bpf_dummy_ops = {
>> .reg = bpf_dummy_reg,
>> .unreg = bpf_dummy_unreg,
>> .name = "bpf_dummy_ops",
>> + .owner = THIS_MODULE,
>> };
>> +
>> +static int __init bpf_dummy_struct_ops_init(void)
>> +{
>> + BTF_STRUCT_OPS_TYPE_EMIT(bpf_dummy_ops);
>> + return register_bpf_struct_ops(&bpf_bpf_dummy_ops);
>> +}
>> +late_initcall(bpf_dummy_struct_ops_init);
>> diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
>> index 3c8b76578a2a..b36a19274e5b 100644
>> --- a/net/ipv4/bpf_tcp_ca.c
>> +++ b/net/ipv4/bpf_tcp_ca.c
>> @@ -12,7 +12,7 @@
>> #include <net/bpf_sk_storage.h>
>>
>> /* "extern" is to avoid sparse warning. It is only used in bpf_struct_ops.c. */
>> -extern struct bpf_struct_ops bpf_tcp_congestion_ops;
>> +static struct bpf_struct_ops bpf_tcp_congestion_ops;
>>
>> static u32 unsupported_ops[] = {
>> offsetof(struct tcp_congestion_ops, get_info),
>> @@ -277,7 +277,9 @@ static int bpf_tcp_ca_validate(void *kdata)
>> return tcp_validate_congestion_control(kdata);
>> }
>>
>> -struct bpf_struct_ops bpf_tcp_congestion_ops = {
>> +DEFINE_STRUCT_OPS_VALUE_TYPE(tcp_congestion_ops);
>> +
>> +static struct bpf_struct_ops bpf_tcp_congestion_ops = {
>> .verifier_ops = &bpf_tcp_ca_verifier_ops,
>> .reg = bpf_tcp_ca_reg,
>> .unreg = bpf_tcp_ca_unreg,
>> @@ -287,10 +289,18 @@ struct bpf_struct_ops bpf_tcp_congestion_ops = {
>> .init = bpf_tcp_ca_init,
>> .validate = bpf_tcp_ca_validate,
>> .name = "tcp_congestion_ops",
>> + .owner = THIS_MODULE,
>> };
>>
>> static int __init bpf_tcp_ca_kfunc_init(void)
>> {
>> - return register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
>> + int ret;
>> +
>> + BTF_STRUCT_OPS_TYPE_EMIT(tcp_congestion_ops);
>> +
>> + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_tcp_ca_kfunc_set);
>> + ret = ret ?: register_bpf_struct_ops(&bpf_tcp_congestion_ops);
>> +
>> + return ret;
>> }
>> late_initcall(bpf_tcp_ca_kfunc_init);
>
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-26 20:31 ` Eduard Zingerman
@ 2023-10-27 4:55 ` Kui-Feng Lee
2023-10-27 7:09 ` Kui-Feng Lee
2023-10-27 14:13 ` Eduard Zingerman
2023-10-29 2:34 ` Kui-Feng Lee
1 sibling, 2 replies; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-27 4:55 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng
On 10/26/23 13:31, Eduard Zingerman wrote:
> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>> From: Kui-Feng Lee <thinker.li@gmail.com>
>>
>> Create a new struct_ops type called bpf_testmod_ops within the bpf_testmod
>> module. When a struct_ops object is registered, the bpf_testmod module will
>> invoke test_2 from the module.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>
> Hello,
>
> Sorry for the late response, was moving through the patch-set very slowly.
> Please note that CI currently fails for this series [0], reported error is:
>
> testing_helpers.c:13:10: fatal error: 'rcu_tasks_trace_gp.skel.h' file not found
> 13 | #include "rcu_tasks_trace_gp.skel.h"
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~
Thanks! I will fix this dependency issue.
>
> I get the same error when try to run tests locally (after full clean).
> On the other hand it looks like `kern_sync_rcu_tasks_trace` changes
> are not really necessary, when I undo these changes but keep changes in:
>
> - .../selftests/bpf/bpf_testmod/bpf_testmod.c
> - .../selftests/bpf/bpf_testmod/bpf_testmod.h
> - .../bpf/prog_tests/test_struct_ops_module.c
> - .../selftests/bpf/progs/struct_ops_module.c
>
> struct_ops_module/regular_load test still passes.
>
The test will pass even without this change, but the test harness may
complain with an additional warning message.
> Regarding assertion:
>
>> + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
>
> Could you please leave a comment explaining why the value is 7?
> I don't understand what invokes 'test_2' but changing it to 8
> forces test to fail, so something does call 'test_2' :)
It is called by bpf_dummy_reg() in bpf_testmod.c.
I will add a comment here.
>
> Also, when running test_maps I get the following error:
>
> libbpf: bpf_map_create_opts has non-zero extra bytes
> map_create_opts(317):FAIL:bpf_map_create() error:Invalid argument (name=hash_of_maps)
It looks like a padding issue. I will check it.
>
> [0] https://patchwork.kernel.org/project/netdevbpf/patch/20231022050335.2579051-11-thinker.li@gmail.com/
> (look for 'Logs for x86_64-gcc / build / build for x86_64 with gcc ')
>
>> ---
>> tools/testing/selftests/bpf/Makefile | 2 +
>> .../selftests/bpf/bpf_testmod/bpf_testmod.c | 59 +++++++++++++++++++
>> .../selftests/bpf/bpf_testmod/bpf_testmod.h | 5 ++
>> .../bpf/prog_tests/test_struct_ops_module.c | 38 ++++++++++++
>> .../selftests/bpf/progs/struct_ops_module.c | 30 ++++++++++
>> tools/testing/selftests/bpf/testing_helpers.c | 35 +++++++++++
>> 6 files changed, 169 insertions(+)
>> create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
>> create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
>>
>> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
>> index caede9b574cb..dd7ff14e1fdf 100644
>> --- a/tools/testing/selftests/bpf/Makefile
>> +++ b/tools/testing/selftests/bpf/Makefile
>> @@ -706,6 +706,8 @@ $(OUTPUT)/uprobe_multi: uprobe_multi.c
>> $(call msg,BINARY,,$@)
>> $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
>>
>> +$(OUTPUT)/testing_helpers.o: $(OUTPUT)/rcu_tasks_trace_gp.skel.h
>> +
>> EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
>> prog_tests/tests.h map_tests/tests.h verifier/tests.h \
>> feature bpftool \
>> diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
>> index cefc5dd72573..f1a20669d884 100644
>> --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
>> +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
>> @@ -1,5 +1,6 @@
>> // SPDX-License-Identifier: GPL-2.0
>> /* Copyright (c) 2020 Facebook */
>> +#include <linux/bpf.h>
>> #include <linux/btf.h>
>> #include <linux/btf_ids.h>
>> #include <linux/error-injection.h>
>> @@ -517,11 +518,66 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
>> BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
>> BTF_SET8_END(bpf_testmod_check_kfunc_ids)
>>
>> +#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
>> +
>> +DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_testmod_ops);
>> +
>> +static int bpf_testmod_ops_init(struct btf *btf)
>> +{
>> + return 0;
>> +}
>> +
>> +static bool bpf_testmod_ops_is_valid_access(int off, int size,
>> + enum bpf_access_type type,
>> + const struct bpf_prog *prog,
>> + struct bpf_insn_access_aux *info)
>> +{
>> + return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
>> +}
>> +
>> +static int bpf_testmod_ops_init_member(const struct btf_type *t,
>> + const struct btf_member *member,
>> + void *kdata, const void *udata)
>> +{
>> + return 0;
>> +}
>> +
>> static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
>> .owner = THIS_MODULE,
>> .set = &bpf_testmod_check_kfunc_ids,
>> };
>>
>> +static const struct bpf_verifier_ops bpf_testmod_verifier_ops = {
>> + .is_valid_access = bpf_testmod_ops_is_valid_access,
>> +};
>> +
>> +static int bpf_dummy_reg(void *kdata)
>> +{
>> + struct bpf_testmod_ops *ops = kdata;
>> + int r;
>> +
>> + BTF_STRUCT_OPS_TYPE_EMIT(bpf_testmod_ops);
>> + r = ops->test_2(4, 3);
>> +
>> + return 0;
>> +}
>> +
>> +static void bpf_dummy_unreg(void *kdata)
>> +{
>> +}
>> +
>> +struct bpf_struct_ops bpf_bpf_testmod_ops = {
>> + .verifier_ops = &bpf_testmod_verifier_ops,
>> + .init = bpf_testmod_ops_init,
>> + .init_member = bpf_testmod_ops_init_member,
>> + .reg = bpf_dummy_reg,
>> + .unreg = bpf_dummy_unreg,
>> + .name = "bpf_testmod_ops",
>> + .owner = THIS_MODULE,
>> +};
>> +
>> +#endif /* CONFIG_DEBUG_INFO_BTF_MODULES */
>> +
>> extern int bpf_fentry_test1(int a);
>>
>> static int bpf_testmod_init(void)
>> @@ -532,6 +588,9 @@ static int bpf_testmod_init(void)
>> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set);
>> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_testmod_kfunc_set);
>> ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_testmod_kfunc_set);
>> +#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
>> + ret = ret ?: register_bpf_struct_ops(&bpf_bpf_testmod_ops);
>> +#endif
>> if (ret < 0)
>> return ret;
>> if (bpf_fentry_test1(0) < 0)
>> diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
>> index f32793efe095..ca5435751c79 100644
>> --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
>> +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
>> @@ -28,4 +28,9 @@ struct bpf_iter_testmod_seq {
>> int cnt;
>> };
>>
>> +struct bpf_testmod_ops {
>> + int (*test_1)(void);
>> + int (*test_2)(int a, int b);
>> +};
>> +
>> #endif /* _BPF_TESTMOD_H */
>> diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
>> new file mode 100644
>> index 000000000000..7261fc6c377a
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
>> @@ -0,0 +1,38 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
>> +#include <test_progs.h>
>> +#include <time.h>
>> +
>> +#include "rcu_tasks_trace_gp.skel.h"
>> +#include "struct_ops_module.skel.h"
>> +
>> +static void test_regular_load(void)
>> +{
>> + struct struct_ops_module *skel;
>> + struct bpf_link *link;
>> + DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
>> + int err;
>> +
>> + skel = struct_ops_module__open_opts(&opts);
>> + if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
>> + return;
>> + err = struct_ops_module__load(skel);
>> + if (!ASSERT_OK(err, "struct_ops_module_load"))
>> + return;
>> +
>> + link = bpf_map__attach_struct_ops(skel->maps.testmod_1);
>> + ASSERT_OK_PTR(link, "attach_test_mod_1");
>> +
>> + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
>> +
>> + bpf_link__destroy(link);
>> +
>> + struct_ops_module__destroy(skel);
>> +}
>> +
>> +void serial_test_struct_ops_module(void)
>> +{
>> + if (test__start_subtest("regular_load"))
>> + test_regular_load();
>> +}
>> +
>> diff --git a/tools/testing/selftests/bpf/progs/struct_ops_module.c b/tools/testing/selftests/bpf/progs/struct_ops_module.c
>> new file mode 100644
>> index 000000000000..cb305d04342f
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/struct_ops_module.c
>> @@ -0,0 +1,30 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
>> +#include <vmlinux.h>
>> +#include <bpf/bpf_helpers.h>
>> +#include <bpf/bpf_tracing.h>
>> +#include "../bpf_testmod/bpf_testmod.h"
>> +
>> +char _license[] SEC("license") = "GPL";
>> +
>> +int test_2_result = 0;
>> +
>> +SEC("struct_ops/test_1")
>> +int BPF_PROG(test_1)
>> +{
>> + return 0xdeadbeef;
>> +}
>> +
>> +SEC("struct_ops/test_2")
>> +int BPF_PROG(test_2, int a, int b)
>> +{
>> + test_2_result = a + b;
>> + return a + b;
>> +}
>> +
>> +SEC(".struct_ops.link")
>> +struct bpf_testmod_ops testmod_1 = {
>> + .test_1 = (void *)test_1,
>> + .test_2 = (void *)test_2,
>> +};
>> +
>> diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
>> index 8d994884c7b4..05870cd62458 100644
>> --- a/tools/testing/selftests/bpf/testing_helpers.c
>> +++ b/tools/testing/selftests/bpf/testing_helpers.c
>> @@ -10,6 +10,7 @@
>> #include "test_progs.h"
>> #include "testing_helpers.h"
>> #include <linux/membarrier.h>
>> +#include "rcu_tasks_trace_gp.skel.h"
>>
>> int parse_num_list(const char *s, bool **num_set, int *num_set_len)
>> {
>> @@ -380,10 +381,44 @@ int load_bpf_testmod(bool verbose)
>> return 0;
>> }
>>
>> +/* This function will trigger call_rcu_tasks_trace() in the kernel */
>> +static int kern_sync_rcu_tasks_trace(void)
>> +{
>> + struct rcu_tasks_trace_gp *rcu;
>> + time_t start;
>> + long gp_seq;
>> + LIBBPF_OPTS(bpf_test_run_opts, opts);
>> +
>> + rcu = rcu_tasks_trace_gp__open_and_load();
>> + if (IS_ERR(rcu))
>> + return -EFAULT;
>> + if (rcu_tasks_trace_gp__attach(rcu))
>> + return -EFAULT;
>> +
>> + gp_seq = READ_ONCE(rcu->bss->gp_seq);
>> +
>> + if (bpf_prog_test_run_opts(bpf_program__fd(rcu->progs.do_call_rcu_tasks_trace),
>> + &opts))
>> + return -EFAULT;
>> + if (opts.retval != 0)
>> + return -EFAULT;
>> +
>> + start = time(NULL);
>> + while ((start + 2) > time(NULL) &&
>> + gp_seq == READ_ONCE(rcu->bss->gp_seq))
>> + sched_yield();
>> +
>> + rcu_tasks_trace_gp__destroy(rcu);
>> +
>> + return 0;
>> +}
>> +
>> /*
>> * Trigger synchronize_rcu() in kernel.
>> */
>> int kern_sync_rcu(void)
>> {
>> + if (kern_sync_rcu_tasks_trace())
>> + return -EFAULT;
>> return syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0, 0);
>> }
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-27 4:55 ` Kui-Feng Lee
@ 2023-10-27 7:09 ` Kui-Feng Lee
2023-10-27 14:13 ` Eduard Zingerman
1 sibling, 0 replies; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-27 7:09 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng
On 10/26/23 21:55, Kui-Feng Lee wrote:
>
>
> On 10/26/23 13:31, Eduard Zingerman wrote:
>> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>>> From: Kui-Feng Lee <thinker.li@gmail.com>
>>>
>>> Create a new struct_ops type called bpf_testmod_ops within the
>>> bpf_testmod
>>> module. When a struct_ops object is registered, the bpf_testmod
>>> module will
>>> invoke test_2 from the module.
>>>
>>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>>
>> Hello,
>>
>> Sorry for the late response, was moving through the patch-set very
>> slowly.
>> Please note that CI currently fails for this series [0], reported
>> error is:
>>
>> testing_helpers.c:13:10: fatal error: 'rcu_tasks_trace_gp.skel.h' file
>> not found
>> 13 | #include "rcu_tasks_trace_gp.skel.h"
>> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Thanks! I will fix this dependency issue.
>
>>
>> I get the same error when try to run tests locally (after full clean).
>> On the other hand it looks like `kern_sync_rcu_tasks_trace` changes
>> are not really necessary, when I undo these changes but keep changes in:
>>
>> - .../selftests/bpf/bpf_testmod/bpf_testmod.c
>> - .../selftests/bpf/bpf_testmod/bpf_testmod.h
>> - .../bpf/prog_tests/test_struct_ops_module.c
>> - .../selftests/bpf/progs/struct_ops_module.c
>>
>> struct_ops_module/regular_load test still passes.
>>
>
> The test will pass even without this change.
> But, the test harness may complain by showing warnings.
> You may see an additional warning message without this change.
One thing I forgot to mention: without this change, the test harness may
fail to unload the bpf_testmod module.
[...]
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-27 4:55 ` Kui-Feng Lee
2023-10-27 7:09 ` Kui-Feng Lee
@ 2023-10-27 14:13 ` Eduard Zingerman
1 sibling, 0 replies; 24+ messages in thread
From: Eduard Zingerman @ 2023-10-27 14:13 UTC (permalink / raw)
To: Kui-Feng Lee, thinker.li, bpf, ast, martin.lau, song, kernel-team,
andrii, drosen
Cc: kuifeng
On Thu, 2023-10-26 at 21:55 -0700, Kui-Feng Lee wrote:
[...]
>
> The test will pass even without this change.
> But, the test harness may complain by showing warnings.
> You may see an additional warning message without this change.
Understood, thank you for the explanation.
> > Regarding assertion:
> >
> > > + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
> >
> > Could you please leave a comment explaining why the value is 7?
> > I don't understand what invokes 'test_2' but changing it to 8
> > forces test to fail, so something does call 'test_2' :)
>
> It is called by bpf_dummy_reg() in bpf_testmod.c.
> I will add a comment here.
Oh, sorry, I did not notice, thank you.
[...]
* Re: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
2023-10-27 4:39 ` Kui-Feng Lee
@ 2023-10-27 21:32 ` Kui-Feng Lee
2023-10-27 22:02 ` Eduard Zingerman
0 siblings, 1 reply; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-27 21:32 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng, netdev
On 10/26/23 21:39, Kui-Feng Lee wrote:
>
>
> On 10/26/23 14:02, Eduard Zingerman wrote:
>> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>>> From: Kui-Feng Lee <thinker.li@gmail.com>
[...]
>>> +
>>> + btf = btf_get_module_btf(st_ops->owner);
>>> + if (!btf)
>>> + return -EINVAL;
>>> +
>>> + log = kzalloc(sizeof(*log), GFP_KERNEL | __GFP_NOWARN);
>>> + if (!log) {
>>> + err = -ENOMEM;
>>> + goto errout;
>>> + }
>>> +
>>> + log->level = BPF_LOG_KERNEL;
>>
>> Nit: maybe use bpf_vlog_init() here to avoid breaking encapsulation?
>
> Agree!
>
I ended up not using bpf_vlog_init().
I found that bpf_vlog_init() is not meant for BPF_LOG_KERNEL.
According to the comment next to BPF_LOG_KERNEL, it
is an internal log level.
According to the code of bpf_vlog_init(), the level passed to
bpf_vlog_init() must be covered by BPF_LOG_MASK, while BPF_LOG_KERNEL is
defined as BPF_LOG_MASK + 1. So, it is intentionally not meant to be
used with bpf_vlog_init().
>>
>>> +
>>> + desc = btf_add_struct_ops(btf, st_ops);
>>> + if (IS_ERR(desc)) {
>>> + err = PTR_ERR(desc);
>>> + goto errout;
>>> + }
>>> +
>>> + bpf_struct_ops_init(desc, btf, log);
>>
>> Nit: I think bpf_struct_ops_init() could be changed to return 'int',
>> then register_bpf_struct_ops() could report to calling module if
>> something went wrong on the last phase, wdyt?
>
>
> Agree!
>
>>
>>> +
>>> +errout:
>>> + kfree(log);
>>> + btf_put(btf);
>>> +
>>> + return err;
>>> +}
>>> +EXPORT_SYMBOL_GPL(register_bpf_struct_ops);
>>> diff --git a/net/bpf/bpf_dummy_struct_ops.c
>>> b/net/bpf/bpf_dummy_struct_ops.c
>>> index ffa224053a6c..148a5851c4fa 100644
>>> --- a/net/bpf/bpf_dummy_struct_ops.c
>>> +++ b/net/bpf/bpf_dummy_struct_ops.c
>>> @@ -7,7 +7,7 @@
>>> #include <linux/bpf.h>
>>> #include <linux/btf.h>
>>> -extern struct bpf_struct_ops bpf_bpf_dummy_ops;
>>> +static struct bpf_struct_ops bpf_bpf_dummy_ops;
>>> /* A common type for test_N with return value in bpf_dummy_ops */
>>> typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state
>>> *state, ...);
>>> @@ -223,11 +223,13 @@ static int bpf_dummy_reg(void *kdata)
>>> return -EOPNOTSUPP;
>>> }
>>> +DEFINE_STRUCT_OPS_VALUE_TYPE(bpf_dummy_ops);
>>> +
>>> static void bpf_dummy_unreg(void *kdata)
>>> {
>>> }
>>> -struct bpf_struct_ops bpf_bpf_dummy_ops = {
>>> +static struct bpf_struct_ops bpf_bpf_dummy_ops = {
>>> .verifier_ops = &bpf_dummy_verifier_ops,
>>> .init = bpf_dummy_init,
>>> .check_member = bpf_dummy_ops_check_member,
>>> @@ -235,4 +237,12 @@ struct bpf_struct_ops bpf_bpf_dummy_ops = {
>>> .reg = bpf_dummy_reg,
>>> .unreg = bpf_dummy_unreg,
>>> .name = "bpf_dummy_ops",
>>> + .owner = THIS_MODULE,
>>> };
>>> +
>>> +static int __init bpf_dummy_struct_ops_init(void)
>>> +{
>>> + BTF_STRUCT_OPS_TYPE_EMIT(bpf_dummy_ops);
>>> + return register_bpf_struct_ops(&bpf_bpf_dummy_ops);
>>> +}
>>> +late_initcall(bpf_dummy_struct_ops_init);
>>> diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
>>> index 3c8b76578a2a..b36a19274e5b 100644
>>> --- a/net/ipv4/bpf_tcp_ca.c
>>> +++ b/net/ipv4/bpf_tcp_ca.c
>>> @@ -12,7 +12,7 @@
>>> #include <net/bpf_sk_storage.h>
>>> /* "extern" is to avoid sparse warning. It is only used in
>>> bpf_struct_ops.c. */
>>> -extern struct bpf_struct_ops bpf_tcp_congestion_ops;
>>> +static struct bpf_struct_ops bpf_tcp_congestion_ops;
>>> static u32 unsupported_ops[] = {
>>> offsetof(struct tcp_congestion_ops, get_info),
>>> @@ -277,7 +277,9 @@ static int bpf_tcp_ca_validate(void *kdata)
>>> return tcp_validate_congestion_control(kdata);
>>> }
>>> -struct bpf_struct_ops bpf_tcp_congestion_ops = {
>>> +DEFINE_STRUCT_OPS_VALUE_TYPE(tcp_congestion_ops);
>>> +
>>> +static struct bpf_struct_ops bpf_tcp_congestion_ops = {
>>> .verifier_ops = &bpf_tcp_ca_verifier_ops,
>>> .reg = bpf_tcp_ca_reg,
>>> .unreg = bpf_tcp_ca_unreg,
>>> @@ -287,10 +289,18 @@ struct bpf_struct_ops bpf_tcp_congestion_ops = {
>>> .init = bpf_tcp_ca_init,
>>> .validate = bpf_tcp_ca_validate,
>>> .name = "tcp_congestion_ops",
>>> + .owner = THIS_MODULE,
>>> };
>>> static int __init bpf_tcp_ca_kfunc_init(void)
>>> {
>>> - return register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
>>> &bpf_tcp_ca_kfunc_set);
>>> + int ret;
>>> +
>>> + BTF_STRUCT_OPS_TYPE_EMIT(tcp_congestion_ops);
>>> +
>>> + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
>>> &bpf_tcp_ca_kfunc_set);
>>> + ret = ret ?: register_bpf_struct_ops(&bpf_tcp_congestion_ops);
>>> +
>>> + return ret;
>>> }
>>> late_initcall(bpf_tcp_ca_kfunc_init);
>>
* Re: [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration
2023-10-27 21:32 ` Kui-Feng Lee
@ 2023-10-27 22:02 ` Eduard Zingerman
0 siblings, 0 replies; 24+ messages in thread
From: Eduard Zingerman @ 2023-10-27 22:02 UTC (permalink / raw)
To: Kui-Feng Lee, thinker.li, bpf, ast, martin.lau, song, kernel-team,
andrii, drosen
Cc: kuifeng, netdev
On Fri, 2023-10-27 at 14:32 -0700, Kui-Feng Lee wrote:
>
> On 10/26/23 21:39, Kui-Feng Lee wrote:
> >
> >
> > On 10/26/23 14:02, Eduard Zingerman wrote:
> > > On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
> > > > From: Kui-Feng Lee <thinker.li@gmail.com>
> [...]
> > > > +
> > > > + btf = btf_get_module_btf(st_ops->owner);
> > > > + if (!btf)
> > > > + return -EINVAL;
> > > > +
> > > > + log = kzalloc(sizeof(*log), GFP_KERNEL | __GFP_NOWARN);
> > > > + if (!log) {
> > > > + err = -ENOMEM;
> > > > + goto errout;
> > > > + }
> > > > +
> > > > + log->level = BPF_LOG_KERNEL;
> > >
> > > Nit: maybe use bpf_vlog_init() here to avoid breaking encapsulation?
> >
> > Agree!
> >
>
> I ended up not using bpf_vlog_init().
>
> I found that bpf_vlog_init() is not meant for BPF_LOG_KERNEL.
> According to the comment next to BPF_LOG_KERNEL, it
> is an internal log level.
> According to the code of bpf_vlog_init(), the level passed to
> bpf_vlog_init() must be covered by BPF_LOG_MASK, while BPF_LOG_KERNEL is
> defined as BPF_LOG_MASK + 1. So, it is intentionally not meant to be
> used with bpf_vlog_init().
I see, it looks like btf_parse_vmlinux() does the same; sorry, I should have checked there.
Thank you for looking into it.
[...]
* Re: [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops().
2023-10-26 20:31 ` Eduard Zingerman
2023-10-27 4:55 ` Kui-Feng Lee
@ 2023-10-29 2:34 ` Kui-Feng Lee
1 sibling, 0 replies; 24+ messages in thread
From: Kui-Feng Lee @ 2023-10-29 2:34 UTC (permalink / raw)
To: Eduard Zingerman, thinker.li, bpf, ast, martin.lau, song,
kernel-team, andrii, drosen
Cc: kuifeng
On 10/26/23 13:31, Eduard Zingerman wrote:
> On Sat, 2023-10-21 at 22:03 -0700, thinker.li@gmail.com wrote:
>> From: Kui-Feng Lee <thinker.li@gmail.com>
>>
>> Create a new struct_ops type called bpf_testmod_ops within the bpf_testmod
>> module. When a struct_ops object is registered, the bpf_testmod module will
>> invoke test_2 from the module.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>
> Hello,
>
> Sorry for the late response, was moving through the patch-set very slowly.
> Please note that CI currently fails for this series [0], reported error is:
>
> testing_helpers.c:13:10: fatal error: 'rcu_tasks_trace_gp.skel.h' file not found
> 13 | #include "rcu_tasks_trace_gp.skel.h"
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> I get the same error when try to run tests locally (after full clean).
> On the other hand it looks like `kern_sync_rcu_tasks_trace` changes
> are not really necessary, when I undo these changes but keep changes in:
>
> - .../selftests/bpf/bpf_testmod/bpf_testmod.c
> - .../selftests/bpf/bpf_testmod/bpf_testmod.h
> - .../bpf/prog_tests/test_struct_ops_module.c
> - .../selftests/bpf/progs/struct_ops_module.c
>
> struct_ops_module/regular_load test still passes.
>
> Regarding assertion:
>
>> + ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
>
> Could you please leave a comment explaining why the value is 7?
> I don't understand what invokes 'test_2', but changing the expected value to 8
> forces the test to fail, so something does call 'test_2' :)
>
> Also, when running test_maps I get the following error:
>
> libbpf: bpf_map_create_opts has non-zero extra bytes
> map_create_opts(317):FAIL:bpf_map_create() error:Invalid argument (name=hash_of_maps)
According to Andrii Nakryiko, this error should be fixed once [1] lands.
[1] https://lore.kernel.org/all/20231029011509.2479232-1-andrii@kernel.org/
>
> [0] https://patchwork.kernel.org/project/netdevbpf/patch/20231022050335.2579051-11-thinker.li@gmail.com/
> (look for 'Logs for x86_64-gcc / build / build for x86_64 with gcc ')
>
[...]
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2023-10-29 2:34 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-22 5:03 [PATCH bpf-next v6 00/10] Registrating struct_ops types from modules thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 01/10] bpf: refactory struct_ops type initialization to a function thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 02/10] bpf, net: introduce bpf_struct_ops_desc thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 03/10] bpf: add struct_ops_tab to btf thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 04/10] bpf: hold module for bpf_struct_ops_map thinker.li
2023-10-26 21:11 ` Eduard Zingerman
2023-10-27 4:35 ` Kui-Feng Lee
2023-10-22 5:03 ` [PATCH bpf-next v6 05/10] bpf: validate value_type thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 06/10] bpf: pass attached BTF to the bpf_struct_ops subsystem thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 07/10] bpf, net: switch to dynamic registration thinker.li
2023-10-22 6:46 ` kernel test robot
2023-10-26 21:02 ` Eduard Zingerman
2023-10-27 4:39 ` Kui-Feng Lee
2023-10-27 21:32 ` Kui-Feng Lee
2023-10-27 22:02 ` Eduard Zingerman
2023-10-22 5:03 ` [PATCH bpf-next v6 08/10] libbpf: Find correct module BTFs for struct_ops maps and progs thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 09/10] bpf: export btf_ctx_access to modules thinker.li
2023-10-22 5:03 ` [PATCH bpf-next v6 10/10] selftests/bpf: test case for register_bpf_struct_ops() thinker.li
2023-10-22 7:08 ` kernel test robot
2023-10-26 20:31 ` Eduard Zingerman
2023-10-27 4:55 ` Kui-Feng Lee
2023-10-27 7:09 ` Kui-Feng Lee
2023-10-27 14:13 ` Eduard Zingerman
2023-10-29 2:34 ` Kui-Feng Lee
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox