* [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs
@ 2023-02-14 21:28 Ilya Leoshkevich
2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
From: Ilya Leoshkevich @ 2023-02-14 21:28 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
Ilya Leoshkevich
Hi,
This is the first attempt to solve the problems outlined in [1, 2, 3].
The main problem is that kfuncs in modules do not fit into bpf_insn.imm
on s390x; the secondary problem is that there is a conflict between
"abstract" XDP metadata function BTF ids and their "concrete"
addresses.
The proposed solution is to keep kfunc BTF ids in bpf_insn.imm, and put
the addresses into bpf_kfunc_desc, which does not have size
restrictions.
Jiri and I discussed putting them into bpf_insn_aux_data, but at the
very beginning of the implementation it became clear that this struct
is freed at the end of verification and is not available during JITing.
I regtested this only on s390x, where it does not regress anything. If
this approach is fine (especially w.r.t. dev-bound kfuncs), I can check
the other arches.
[1] https://lore.kernel.org/bpf/Y9%2FyrKZkBK6yzXp+@krava/
[2] https://lore.kernel.org/bpf/20230128000650.1516334-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/bpf/20230128000650.1516334-32-iii@linux.ibm.com/
Best regards,
Ilya
Ilya Leoshkevich (1):
bpf: Support 64-bit pointers to kfuncs
include/linux/bpf.h | 2 ++
kernel/bpf/core.c | 23 ++++++++++---
kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
3 files changed, 45 insertions(+), 59 deletions(-)
--
2.39.1
* [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-14 21:28 [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-14 21:28 ` Ilya Leoshkevich
2023-02-14 23:58 ` Stanislav Fomichev
From: Ilya Leoshkevich @ 2023-02-14 21:28 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
Ilya Leoshkevich
test_ksyms_module fails to emit a kfunc call targeting a module on
s390x, because the verifier stores the difference between kfunc
address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
are roughly (1 << 42) bytes away from the kernel on s390x.
Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
as usual by calling bpf_jit_get_func_addr().
This also fixes the problem with XDP metadata functions outlined in
the description of commit 63d7b53ab59f ("s390/bpf: Implement
bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
id lookups. This eliminates the inconsistency between "abstract" XDP
metadata functions' BTF ids and their concrete addresses.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
include/linux/bpf.h | 2 ++
kernel/bpf/core.c | 23 ++++++++++---
kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
3 files changed, 45 insertions(+), 59 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index be34f7deb6c3..83ce94d11484 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
const struct btf_func_model *
bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+ u8 **func_addr);
struct bpf_core_ctx {
struct bpf_verifier_log *log;
const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3390961c4e10..a42382afe333 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
{
s16 off = insn->off;
s32 imm = insn->imm;
+ bool fixed;
u8 *addr;
+ int err;
- *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
- if (!*func_addr_fixed) {
+ switch (insn->src_reg) {
+ case BPF_PSEUDO_CALL:
/* Place-holder address till the last pass has collected
* all addresses for JITed subprograms in which case we
* can pick them up from prog->aux.
@@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
addr = (u8 *)prog->aux->func[off]->bpf_func;
else
return -EINVAL;
- } else {
+ fixed = false;
+ break;
+ case 0:
/* Address of a BPF helper call. Since part of the core
* kernel, it's always at a fixed location. __bpf_call_base
* and the helper with imm relative to it are both in core
* kernel.
*/
addr = (u8 *)__bpf_call_base + imm;
+ fixed = true;
+ break;
+ case BPF_PSEUDO_KFUNC_CALL:
+ err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+ if (err)
+ return err;
+ fixed = true;
+ break;
+ default:
+ return -EINVAL;
}
- *func_addr = (unsigned long)addr;
+ *func_addr_fixed = fixed;
+ *func_addr = addr;
return 0;
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 21e08c111702..aea59974f0d6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
struct bpf_kfunc_desc {
struct btf_func_model func_model;
u32 func_id;
- s32 imm;
u16 offset;
+ unsigned long addr;
};
struct bpf_kfunc_btf {
@@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
}
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+ u8 **func_addr)
+{
+ const struct bpf_kfunc_desc *desc;
+
+ desc = find_kfunc_desc(prog, func_id, offset);
+ if (WARN_ON_ONCE(!desc))
+ return -EINVAL;
+
+ *func_addr = (u8 *)desc->addr;
+ return 0;
+}
+
static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
s16 offset)
{
@@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
struct bpf_kfunc_desc *desc;
const char *func_name;
struct btf *desc_btf;
- unsigned long call_imm;
unsigned long addr;
+ void *xdp_kfunc;
int err;
prog_aux = env->prog->aux;
@@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
return -EINVAL;
}
- call_imm = BPF_CALL_IMM(addr);
- /* Check whether or not the relative offset overflows desc->imm */
- if ((unsigned long)(s32)call_imm != call_imm) {
- verbose(env, "address of kernel function %s is out of range\n",
- func_name);
- return -EINVAL;
- }
-
if (bpf_dev_bound_kfunc_id(func_id)) {
err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
if (err)
return err;
+
+ xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+ if (xdp_kfunc)
+ addr = (unsigned long)xdp_kfunc;
+ /* fallback to default kfunc when not supported by netdev */
}
desc = &tab->descs[tab->nr_descs++];
desc->func_id = func_id;
- desc->imm = call_imm;
desc->offset = offset;
+ desc->addr = addr;
err = btf_distill_func_proto(&env->log, desc_btf,
func_proto, func_name,
&desc->func_model);
@@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
return err;
}
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
-{
- const struct bpf_kfunc_desc *d0 = a;
- const struct bpf_kfunc_desc *d1 = b;
-
- if (d0->imm > d1->imm)
- return 1;
- else if (d0->imm < d1->imm)
- return -1;
- return 0;
-}
-
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
-{
- struct bpf_kfunc_desc_tab *tab;
-
- tab = prog->aux->kfunc_tab;
- if (!tab)
- return;
-
- sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
- kfunc_desc_cmp_by_imm, NULL);
-}
-
bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
{
return !!prog->aux->kfunc_tab;
@@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
const struct bpf_insn *insn)
{
const struct bpf_kfunc_desc desc = {
- .imm = insn->imm,
+ .func_id = insn->imm,
+ .offset = insn->off,
};
const struct bpf_kfunc_desc *res;
struct bpf_kfunc_desc_tab *tab;
tab = prog->aux->kfunc_tab;
res = bsearch(&desc, tab->descs, tab->nr_descs,
- sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+ sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
return res ? &res->func_model : NULL;
}
@@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
struct bpf_insn *insn_buf, int insn_idx, int *cnt)
{
const struct bpf_kfunc_desc *desc;
- void *xdp_kfunc;
if (!insn->imm) {
verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
@@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
}
*cnt = 0;
-
- if (bpf_dev_bound_kfunc_id(insn->imm)) {
- xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
- if (xdp_kfunc) {
- insn->imm = BPF_CALL_IMM(xdp_kfunc);
- return 0;
- }
-
- /* fallback to default kfunc when not supported by netdev */
- }
-
- /* insn->imm has the btf func_id. Replace it with
- * an address (relative to __bpf_call_base).
- */
desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
if (!desc) {
verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
@@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return -EFAULT;
}
- insn->imm = desc->imm;
if (insn->off)
return 0;
if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
@@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
}
}
- sort_kfunc_descs_by_imm(env->prog);
-
return 0;
}
--
2.39.1
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
@ 2023-02-14 23:58 ` Stanislav Fomichev
2023-02-15 10:07 ` Ilya Leoshkevich
From: Stanislav Fomichev @ 2023-02-14 23:58 UTC (permalink / raw)
To: Ilya Leoshkevich
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On 02/14, Ilya Leoshkevich wrote:
> test_ksyms_module fails to emit a kfunc call targeting a module on
> s390x, because the verifier stores the difference between kfunc
> address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
> are roughly (1 << 42) bytes away from the kernel on s390x.
> Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
> as usual by calling bpf_jit_get_func_addr().
> This also fixes the problem with XDP metadata functions outlined in
> the description of commit 63d7b53ab59f ("s390/bpf: Implement
> bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
> id lookups. This eliminates the inconsistency between "abstract" XDP
> metadata functions' BTF ids and their concrete addresses.
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
> include/linux/bpf.h | 2 ++
> kernel/bpf/core.c | 23 ++++++++++---
> kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
> 3 files changed, 45 insertions(+), 59 deletions(-)
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index be34f7deb6c3..83ce94d11484 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog
> *prog);
> const struct btf_func_model *
> bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> const struct bpf_insn *insn);
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16
> offset,
> + u8 **func_addr);
> struct bpf_core_ctx {
> struct bpf_verifier_log *log;
> const struct btf *btf;
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 3390961c4e10..a42382afe333 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog
> *prog,
> {
> s16 off = insn->off;
> s32 imm = insn->imm;
> + bool fixed;
> u8 *addr;
> + int err;
> - *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> - if (!*func_addr_fixed) {
> + switch (insn->src_reg) {
> + case BPF_PSEUDO_CALL:
> /* Place-holder address till the last pass has collected
> * all addresses for JITed subprograms in which case we
> * can pick them up from prog->aux.
> @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct bpf_prog
> *prog,
> addr = (u8 *)prog->aux->func[off]->bpf_func;
> else
> return -EINVAL;
> - } else {
> + fixed = false;
> + break;
[..]
> + case 0:
nit: Should we define BPF_HELPER_CALL here for consistency?
> /* Address of a BPF helper call. Since part of the core
> * kernel, it's always at a fixed location. __bpf_call_base
> * and the helper with imm relative to it are both in core
> * kernel.
> */
> addr = (u8 *)__bpf_call_base + imm;
> + fixed = true;
> + break;
> + case BPF_PSEUDO_KFUNC_CALL:
> + err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> + if (err)
> + return err;
> + fixed = true;
> + break;
> + default:
> + return -EINVAL;
> }
> - *func_addr = (unsigned long)addr;
> + *func_addr_fixed = fixed;
> + *func_addr = addr;
> return 0;
> }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 21e08c111702..aea59974f0d6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env
> *env, int off)
> struct bpf_kfunc_desc {
> struct btf_func_model func_model;
> u32 func_id;
> - s32 imm;
> u16 offset;
> + unsigned long addr;
> };
> struct bpf_kfunc_btf {
> @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32
> func_id, u16 offset)
> sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
> }
[..]
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16
> offset,
> + u8 **func_addr)
> +{
> + const struct bpf_kfunc_desc *desc;
> +
> + desc = find_kfunc_desc(prog, func_id, offset);
> + if (WARN_ON_ONCE(!desc))
> + return -EINVAL;
> +
> + *func_addr = (u8 *)desc->addr;
> + return 0;
> +}
This function isn't doing much and has a single caller. Should we just
export find_kfunc_desc?
> +
> static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
> s16 offset)
> {
> @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env
> *env, u32 func_id, s16 offset)
> struct bpf_kfunc_desc *desc;
> const char *func_name;
> struct btf *desc_btf;
> - unsigned long call_imm;
> unsigned long addr;
> + void *xdp_kfunc;
> int err;
> prog_aux = env->prog->aux;
> @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env
> *env, u32 func_id, s16 offset)
> return -EINVAL;
> }
> - call_imm = BPF_CALL_IMM(addr);
> - /* Check whether or not the relative offset overflows desc->imm */
> - if ((unsigned long)(s32)call_imm != call_imm) {
> - verbose(env, "address of kernel function %s is out of range\n",
> - func_name);
> - return -EINVAL;
> - }
> -
> if (bpf_dev_bound_kfunc_id(func_id)) {
> err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
> if (err)
> return err;
> +
> + xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
> + if (xdp_kfunc)
> + addr = (unsigned long)xdp_kfunc;
> + /* fallback to default kfunc when not supported by netdev */
> }
> desc = &tab->descs[tab->nr_descs++];
> desc->func_id = func_id;
> - desc->imm = call_imm;
> desc->offset = offset;
> + desc->addr = addr;
> err = btf_distill_func_proto(&env->log, desc_btf,
> func_proto, func_name,
> &desc->func_model);
> @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env
> *env, u32 func_id, s16 offset)
> return err;
> }
> -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> -{
> - const struct bpf_kfunc_desc *d0 = a;
> - const struct bpf_kfunc_desc *d1 = b;
> -
> - if (d0->imm > d1->imm)
> - return 1;
> - else if (d0->imm < d1->imm)
> - return -1;
> - return 0;
> -}
> -
> -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> -{
> - struct bpf_kfunc_desc_tab *tab;
> -
> - tab = prog->aux->kfunc_tab;
> - if (!tab)
> - return;
> -
> - sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> - kfunc_desc_cmp_by_imm, NULL);
> -}
> -
> bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
> {
> return !!prog->aux->kfunc_tab;
> @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog
> *prog,
> const struct bpf_insn *insn)
> {
> const struct bpf_kfunc_desc desc = {
> - .imm = insn->imm,
> + .func_id = insn->imm,
> + .offset = insn->off,
> };
> const struct bpf_kfunc_desc *res;
> struct bpf_kfunc_desc_tab *tab;
> tab = prog->aux->kfunc_tab;
> res = bsearch(&desc, tab->descs, tab->nr_descs,
> - sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
> + sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
> return res ? &res->func_model : NULL;
> }
> @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct
> bpf_verifier_env *env, struct bpf_insn *insn,
> struct bpf_insn *insn_buf, int insn_idx, int *cnt)
> {
> const struct bpf_kfunc_desc *desc;
> - void *xdp_kfunc;
> if (!insn->imm) {
> verbose(env, "invalid kernel function call not eliminated in verifier
> pass\n");
> @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct
> bpf_verifier_env *env, struct bpf_insn *insn,
> }
> *cnt = 0;
> -
> - if (bpf_dev_bound_kfunc_id(insn->imm)) {
> - xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
> - if (xdp_kfunc) {
> - insn->imm = BPF_CALL_IMM(xdp_kfunc);
> - return 0;
> - }
> -
> - /* fallback to default kfunc when not supported by netdev */
> - }
> -
> - /* insn->imm has the btf func_id. Replace it with
> - * an address (relative to __bpf_call_base).
> - */
> desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
> if (!desc) {
> verbose(env, "verifier internal error: kernel function descriptor not
> found for func_id %u\n",
> @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct
> bpf_verifier_env *env, struct bpf_insn *insn,
> return -EFAULT;
> }
> - insn->imm = desc->imm;
> if (insn->off)
> return 0;
> if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
> @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct bpf_verifier_env
> *env)
> }
> }
[..]
> - sort_kfunc_descs_by_imm(env->prog);
If we are not doing sorting here, how does the bsearch work?
> -
> return 0;
> }
> --
> 2.39.1
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-14 23:58 ` Stanislav Fomichev
@ 2023-02-15 10:07 ` Ilya Leoshkevich
2023-02-15 17:43 ` Stanislav Fomichev
From: Ilya Leoshkevich @ 2023-02-15 10:07 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> On 02/14, Ilya Leoshkevich wrote:
> > test_ksyms_module fails to emit a kfunc call targeting a module on
> > s390x, because the verifier stores the difference between kfunc
> > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > modules
> > are roughly (1 << 42) bytes away from the kernel on s390x.
>
> > Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> > and storing the absolute address in bpf_kfunc_desc, which JITs
> > retrieve
> > as usual by calling bpf_jit_get_func_addr().
>
> > This also fixes the problem with XDP metadata functions outlined in
> > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > bpf_jit_supports_kfunc_call()") by replacing address lookups with
> > BTF
> > id lookups. This eliminates the inconsistency between "abstract"
> > XDP
> > metadata functions' BTF ids and their concrete addresses.
>
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > ---
> > include/linux/bpf.h | 2 ++
> > kernel/bpf/core.c | 23 ++++++++++---
> > kernel/bpf/verifier.c | 79 +++++++++++++-------------------------
> > -----
> > 3 files changed, 45 insertions(+), 59 deletions(-)
>
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index be34f7deb6c3..83ce94d11484 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct
> > bpf_prog
> > *prog);
> > const struct btf_func_model *
> > bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> > const struct bpf_insn *insn);
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > u16
> > offset,
> > + u8 **func_addr);
> > struct bpf_core_ctx {
> > struct bpf_verifier_log *log;
> > const struct btf *btf;
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 3390961c4e10..a42382afe333 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct
> > bpf_prog
> > *prog,
> > {
> > s16 off = insn->off;
> > s32 imm = insn->imm;
> > + bool fixed;
> > u8 *addr;
> > + int err;
>
> > - *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> > - if (!*func_addr_fixed) {
> > + switch (insn->src_reg) {
> > + case BPF_PSEUDO_CALL:
> > /* Place-holder address till the last pass has
> > collected
> > * all addresses for JITed subprograms in which
> > case we
> > * can pick them up from prog->aux.
> > @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct
> > bpf_prog
> > *prog,
> > addr = (u8 *)prog->aux->func[off]-
> > >bpf_func;
> > else
> > return -EINVAL;
> > - } else {
> > + fixed = false;
> > + break;
>
> [..]
>
> > + case 0:
>
> nit: Should we define BPF_HELPER_CALL here for consistency?
I think this would be good, the verifier currently uses 0 for this
purpose; having a symbolic constant would surely improve readability.
> > /* Address of a BPF helper call. Since part of the
> > core
> > * kernel, it's always at a fixed location.
> > __bpf_call_base
> > * and the helper with imm relative to it are both
> > in core
> > * kernel.
> > */
> > addr = (u8 *)__bpf_call_base + imm;
> > + fixed = true;
> > + break;
> > + case BPF_PSEUDO_KFUNC_CALL:
> > + err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> > + if (err)
> > + return err;
> > + fixed = true;
> > + break;
> > + default:
> > + return -EINVAL;
> > }
>
> > - *func_addr = (unsigned long)addr;
> > + *func_addr_fixed = fixed;
> > + *func_addr = addr;
> > return 0;
> > }
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 21e08c111702..aea59974f0d6 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2115,8 +2115,8 @@ static int add_subprog(struct
> > bpf_verifier_env
> > *env, int off)
> > struct bpf_kfunc_desc {
> > struct btf_func_model func_model;
> > u32 func_id;
> > - s32 imm;
> > u16 offset;
> > + unsigned long addr;
> > };
>
> > struct bpf_kfunc_btf {
> > @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog,
> > u32
> > func_id, u16 offset)
> > sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_id_off);
> > }
>
>
> [..]
>
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > u16
> > offset,
> > + u8 **func_addr)
> > +{
> > + const struct bpf_kfunc_desc *desc;
> > +
> > + desc = find_kfunc_desc(prog, func_id, offset);
> > + if (WARN_ON_ONCE(!desc))
> > + return -EINVAL;
> > +
> > + *func_addr = (u8 *)desc->addr;
> > + return 0;
> > +}
>
> This function isn't doing much and has a single caller. Should we
> just
> export find_kfunc_desc?
We would have to export struct bpf_kfunc_desc as well; I thought it's
better to add an extra function so that we could keep hiding the
struct.
> > +
> > static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env
> > *env,
> > s16 offset)
> > {
> > @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct
> > bpf_verifier_env
> > *env, u32 func_id, s16 offset)
> > struct bpf_kfunc_desc *desc;
> > const char *func_name;
> > struct btf *desc_btf;
> > - unsigned long call_imm;
> > unsigned long addr;
> > + void *xdp_kfunc;
> > int err;
>
> > prog_aux = env->prog->aux;
> > @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct
> > bpf_verifier_env
> > *env, u32 func_id, s16 offset)
> > return -EINVAL;
> > }
>
> > - call_imm = BPF_CALL_IMM(addr);
> > - /* Check whether or not the relative offset overflows desc-
> > >imm */
> > - if ((unsigned long)(s32)call_imm != call_imm) {
> > - verbose(env, "address of kernel function %s is out
> > of range\n",
> > - func_name);
> > - return -EINVAL;
> > - }
> > -
> > if (bpf_dev_bound_kfunc_id(func_id)) {
> > err = bpf_dev_bound_kfunc_check(&env->log,
> > prog_aux);
> > if (err)
> > return err;
> > +
> > + xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > func_id);
> > + if (xdp_kfunc)
> > + addr = (unsigned long)xdp_kfunc;
> > + /* fallback to default kfunc when not supported by
> > netdev */
> > }
>
> > desc = &tab->descs[tab->nr_descs++];
> > desc->func_id = func_id;
> > - desc->imm = call_imm;
> > desc->offset = offset;
> > + desc->addr = addr;
> > err = btf_distill_func_proto(&env->log, desc_btf,
> > func_proto, func_name,
> > &desc->func_model);
> > @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct
> > bpf_verifier_env
> > *env, u32 func_id, s16 offset)
> > return err;
> > }
>
> > -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> > -{
> > - const struct bpf_kfunc_desc *d0 = a;
> > - const struct bpf_kfunc_desc *d1 = b;
> > -
> > - if (d0->imm > d1->imm)
> > - return 1;
> > - else if (d0->imm < d1->imm)
> > - return -1;
> > - return 0;
> > -}
> > -
> > -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> > -{
> > - struct bpf_kfunc_desc_tab *tab;
> > -
> > - tab = prog->aux->kfunc_tab;
> > - if (!tab)
> > - return;
> > -
> > - sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> > - kfunc_desc_cmp_by_imm, NULL);
> > -}
> > -
> > bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
> > {
> > return !!prog->aux->kfunc_tab;
> > @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct
> > bpf_prog
> > *prog,
> > const struct bpf_insn *insn)
> > {
> > const struct bpf_kfunc_desc desc = {
> > - .imm = insn->imm,
> > + .func_id = insn->imm,
> > + .offset = insn->off,
> > };
> > const struct bpf_kfunc_desc *res;
> > struct bpf_kfunc_desc_tab *tab;
>
> > tab = prog->aux->kfunc_tab;
> > res = bsearch(&desc, tab->descs, tab->nr_descs,
> > - sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_imm);
> > + sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_id_off);
>
> > return res ? &res->func_model : NULL;
> > }
> > @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct
> > bpf_verifier_env *env, struct bpf_insn *insn,
> > struct bpf_insn *insn_buf, int
> > insn_idx, int *cnt)
> > {
> > const struct bpf_kfunc_desc *desc;
> > - void *xdp_kfunc;
>
> > if (!insn->imm) {
> > verbose(env, "invalid kernel function call not
> > eliminated in verifier
> > pass\n");
> > @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct
> > bpf_verifier_env *env, struct bpf_insn *insn,
> > }
>
> > *cnt = 0;
> > -
> > - if (bpf_dev_bound_kfunc_id(insn->imm)) {
> > - xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > insn->imm);
> > - if (xdp_kfunc) {
> > - insn->imm = BPF_CALL_IMM(xdp_kfunc);
> > - return 0;
> > - }
> > -
> > - /* fallback to default kfunc when not supported by
> > netdev */
> > - }
> > -
> > - /* insn->imm has the btf func_id. Replace it with
> > - * an address (relative to __bpf_call_base).
> > - */
> > desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
> > if (!desc) {
> > verbose(env, "verifier internal error: kernel
> > function descriptor not
> > found for func_id %u\n",
> > @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct
> > bpf_verifier_env *env, struct bpf_insn *insn,
> > return -EFAULT;
> > }
>
> > - insn->imm = desc->imm;
> > if (insn->off)
> > return 0;
> > if (desc->func_id ==
> > special_kfunc_list[KF_bpf_obj_new_impl]) {
> > @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct
> > bpf_verifier_env
> > *env)
> > }
> > }
>
>
> [..]
>
> > - sort_kfunc_descs_by_imm(env->prog);
>
> If we are not doing sorting here, how does the bsearch work?
add_kfunc_call() already makes sure that kfuncs are sorted in the
kfunc_desc_cmp_by_id_off() order. fixup_kfunc_call() used to put
addresses into imms and re-sort based on that, however, after this
patch it's not needed anymore: the code (e.g.
bpf_jit_find_kfunc_model()) continues to do lookups in
the kfunc_desc_cmp_by_id_off() order.
> > -
> > return 0;
> > }
>
> > --
> > 2.39.1
>
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-15 10:07 ` Ilya Leoshkevich
@ 2023-02-15 17:43 ` Stanislav Fomichev
2023-02-15 17:49 ` Ilya Leoshkevich
From: Stanislav Fomichev @ 2023-02-15 17:43 UTC (permalink / raw)
To: Ilya Leoshkevich
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On 02/15, Ilya Leoshkevich wrote:
> On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > On 02/14, Ilya Leoshkevich wrote:
> > > test_ksyms_module fails to emit a kfunc call targeting a module on
> > > s390x, because the verifier stores the difference between kfunc
> > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > modules
> > > are roughly (1 << 42) bytes away from the kernel on s390x.
> >
> > > Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > retrieve
> > > as usual by calling bpf_jit_get_func_addr().
> >
> > > This also fixes the problem with XDP metadata functions outlined in
> > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > bpf_jit_supports_kfunc_call()") by replacing address lookups with
> > > BTF
> > > id lookups. This eliminates the inconsistency between "abstract"
> > > XDP
> > > metadata functions' BTF ids and their concrete addresses.
> >
> > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > ---
> > > include/linux/bpf.h | 2 ++
> > > kernel/bpf/core.c | 23 ++++++++++---
> > > kernel/bpf/verifier.c | 79 +++++++++++++-------------------------
> > > -----
> > > 3 files changed, 45 insertions(+), 59 deletions(-)
> >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index be34f7deb6c3..83ce94d11484 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct
> > > bpf_prog
> > > *prog);
> > > const struct btf_func_model *
> > > bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> > > const struct bpf_insn *insn);
> > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > > u16
> > > offset,
> > > + u8 **func_addr);
> > > struct bpf_core_ctx {
> > > struct bpf_verifier_log *log;
> > > const struct btf *btf;
> > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > index 3390961c4e10..a42382afe333 100644
> > > --- a/kernel/bpf/core.c
> > > +++ b/kernel/bpf/core.c
> > > @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct
> > > bpf_prog
> > > *prog,
> > > {
> > > s16 off = insn->off;
> > > s32 imm = insn->imm;
> > > + bool fixed;
> > > u8 *addr;
> > > + int err;
> >
> > > - *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> > > - if (!*func_addr_fixed) {
> > > + switch (insn->src_reg) {
> > > + case BPF_PSEUDO_CALL:
> > > /* Place-holder address till the last pass has
> > > collected
> > > * all addresses for JITed subprograms in which
> > > case we
> > > * can pick them up from prog->aux.
> > > @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct
> > > bpf_prog
> > > *prog,
> > > addr = (u8 *)prog->aux->func[off]-
> > > >bpf_func;
> > > else
> > > return -EINVAL;
> > > - } else {
> > > + fixed = false;
> > > + break;
> >
> > [..]
> >
> > > + case 0:
> >
> > nit: Should we define BPF_HELPER_CALL here for consistency?
> I think this would be good; the verifier currently uses 0 for this
> purpose, and having a symbolic constant would surely improve readability.
> > > /* Address of a BPF helper call. Since part of the
> > > core
> > > * kernel, it's always at a fixed location.
> > > __bpf_call_base
> > > * and the helper with imm relative to it are both
> > > in core
> > > * kernel.
> > > */
> > > addr = (u8 *)__bpf_call_base + imm;
> > > + fixed = true;
> > > + break;
> > > + case BPF_PSEUDO_KFUNC_CALL:
> > > + err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> > > + if (err)
> > > + return err;
> > > + fixed = true;
> > > + break;
> > > + default:
> > > + return -EINVAL;
> > > }
> >
> > > - *func_addr = (unsigned long)addr;
> > > + *func_addr_fixed = fixed;
> > > + *func_addr = addr;
> > > return 0;
> > > }
> >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 21e08c111702..aea59974f0d6 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -2115,8 +2115,8 @@ static int add_subprog(struct
> > > bpf_verifier_env
> > > *env, int off)
> > > struct bpf_kfunc_desc {
> > > struct btf_func_model func_model;
> > > u32 func_id;
> > > - s32 imm;
> > > u16 offset;
> > > + unsigned long addr;
> > > };
> >
> > > struct bpf_kfunc_btf {
> > > @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog,
> > > u32
> > > func_id, u16 offset)
> > > sizeof(tab->descs[0]),
> > > kfunc_desc_cmp_by_id_off);
> > > }
> >
> >
> > [..]
> >
> > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> > > +		       u8 **func_addr)
> > > +{
> > > + const struct bpf_kfunc_desc *desc;
> > > +
> > > + desc = find_kfunc_desc(prog, func_id, offset);
> > > + if (WARN_ON_ONCE(!desc))
> > > + return -EINVAL;
> > > +
> > > + *func_addr = (u8 *)desc->addr;
> > > + return 0;
> > > +}
> >
> > This function isn't doing much and has a single caller. Should we
> > just
> > export find_kfunc_desc?
> We would have to export struct bpf_kfunc_desc as well; I thought it's
> better to add an extra function so that we could keep hiding the
> struct.
Ah, good point. In this case seems ok to have this extra wrapper.
On that note: what's the purpose of WARN_ON_ONCE here?
> > > +
> > > static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env
> > > *env,
> > > s16 offset)
> > > {
> > > @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env
> > > *env, u32 func_id, s16 offset)
> > > struct bpf_kfunc_desc *desc;
> > > const char *func_name;
> > > struct btf *desc_btf;
> > > - unsigned long call_imm;
> > > unsigned long addr;
> > > + void *xdp_kfunc;
> > > int err;
> >
> > > prog_aux = env->prog->aux;
> > > @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env
> > > *env, u32 func_id, s16 offset)
> > > return -EINVAL;
> > > }
> >
> > > -	call_imm = BPF_CALL_IMM(addr);
> > > -	/* Check whether or not the relative offset overflows desc->imm */
> > > -	if ((unsigned long)(s32)call_imm != call_imm) {
> > > -		verbose(env, "address of kernel function %s is out of range\n",
> > > -			func_name);
> > > -		return -EINVAL;
> > > -	}
> > > -
> > > if (bpf_dev_bound_kfunc_id(func_id)) {
> > > err = bpf_dev_bound_kfunc_check(&env->log,
> > > prog_aux);
> > > if (err)
> > > return err;
> > > +
> > > +		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
> > > +		if (xdp_kfunc)
> > > +			addr = (unsigned long)xdp_kfunc;
> > > +		/* fallback to default kfunc when not supported by netdev */
> > > }
> >
> > > desc = &tab->descs[tab->nr_descs++];
> > > desc->func_id = func_id;
> > > - desc->imm = call_imm;
> > > desc->offset = offset;
> > > + desc->addr = addr;
> > > err = btf_distill_func_proto(&env->log, desc_btf,
> > > func_proto, func_name,
> > > &desc->func_model);
> > > @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env
> > > *env, u32 func_id, s16 offset)
> > > return err;
> > > }
> >
> > > -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> > > -{
> > > - const struct bpf_kfunc_desc *d0 = a;
> > > - const struct bpf_kfunc_desc *d1 = b;
> > > -
> > > - if (d0->imm > d1->imm)
> > > - return 1;
> > > - else if (d0->imm < d1->imm)
> > > - return -1;
> > > - return 0;
> > > -}
> > > -
> > > -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> > > -{
> > > - struct bpf_kfunc_desc_tab *tab;
> > > -
> > > - tab = prog->aux->kfunc_tab;
> > > - if (!tab)
> > > - return;
> > > -
> > > - sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> > > - kfunc_desc_cmp_by_imm, NULL);
> > > -}
> > > -
> > > bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
> > > {
> > > return !!prog->aux->kfunc_tab;
> > > @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct
> > > bpf_prog
> > > *prog,
> > > const struct bpf_insn *insn)
> > > {
> > > const struct bpf_kfunc_desc desc = {
> > > - .imm = insn->imm,
> > > + .func_id = insn->imm,
> > > + .offset = insn->off,
> > > };
> > > const struct bpf_kfunc_desc *res;
> > > struct bpf_kfunc_desc_tab *tab;
> >
> > > tab = prog->aux->kfunc_tab;
> > > 	res = bsearch(&desc, tab->descs, tab->nr_descs,
> > > -		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
> > > +		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
> >
> > > return res ? &res->func_model : NULL;
> > > }
> > > @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > > struct bpf_insn *insn_buf, int
> > > insn_idx, int *cnt)
> > > {
> > > const struct bpf_kfunc_desc *desc;
> > > - void *xdp_kfunc;
> >
> > > if (!insn->imm) {
> > > verbose(env, "invalid kernel function call not
> > > eliminated in verifier
> > > pass\n");
> > > @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > > }
> >
> > > *cnt = 0;
> > > -
> > > - if (bpf_dev_bound_kfunc_id(insn->imm)) {
> > > - xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > > insn->imm);
> > > - if (xdp_kfunc) {
> > > - insn->imm = BPF_CALL_IMM(xdp_kfunc);
> > > - return 0;
> > > - }
> > > -
> > > - /* fallback to default kfunc when not supported by
> > > netdev */
> > > - }
> > > -
> > > - /* insn->imm has the btf func_id. Replace it with
> > > - * an address (relative to __bpf_call_base).
> > > - */
> > > desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
> > > if (!desc) {
> > > verbose(env, "verifier internal error: kernel
> > > function descriptor not
> > > found for func_id %u\n",
> > > @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > > return -EFAULT;
> > > }
> >
> > > - insn->imm = desc->imm;
> > > if (insn->off)
> > > return 0;
> > > if (desc->func_id ==
> > > special_kfunc_list[KF_bpf_obj_new_impl]) {
> > > @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct
> > > bpf_verifier_env
> > > *env)
> > > }
> > > }
> >
> >
> > [..]
> >
> > > - sort_kfunc_descs_by_imm(env->prog);
> >
> > If we are not doing sorting here, how does the bsearch work?
> add_kfunc_call() already makes sure that kfuncs are sorted in the
> kfunc_desc_cmp_by_id_off() order. fixup_kfunc_call() used to put
> addresses into imms and re-sort based on that, however, after this
> patch it's not needed anymore: the code (e.g.
> bpf_jit_find_kfunc_model()) continues to do lookups in
> the kfunc_desc_cmp_by_id_off() order.
Makes sense, thank you 👍
> > > -
> > > return 0;
> > > }
> >
> > > --
> > > 2.39.1
> >
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-15 17:43 ` Stanislav Fomichev
@ 2023-02-15 17:49 ` Ilya Leoshkevich
2023-02-15 18:33 ` Stanislav Fomichev
0 siblings, 1 reply; 9+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 17:49 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> On 02/15, Ilya Leoshkevich wrote:
> > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > On 02/14, Ilya Leoshkevich wrote:
> > > > test_ksyms_module fails to emit a kfunc call targeting a module
> > > > on
> > > > s390x, because the verifier stores the difference between kfunc
> > > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > > modules
> > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > >
> > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > > retrieve
> > > > as usual by calling bpf_jit_get_func_addr().
> > >
> > > > This also fixes the problem with XDP metadata functions
> > > > outlined in
> > > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > > bpf_jit_supports_kfunc_call()") by replacing address lookups
> > > > with
> > > > BTF
> > > > id lookups. This eliminates the inconsistency between
> > > > "abstract"
> > > > XDP
> > > > metadata functions' BTF ids and their concrete addresses.
> > >
> > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > ---
> > > > include/linux/bpf.h | 2 ++
> > > > kernel/bpf/core.c | 23 ++++++++++---
> > > > kernel/bpf/verifier.c | 79 +++++++++++++---------------------
> > > > ----
> > > > -----
> > > > 3 files changed, 45 insertions(+), 59 deletions(-)
> >
[...]
> > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32
> > > > func_id,
> > > > u16
> > > > offset,
> > > > + u8 **func_addr)
> > > > +{
> > > > + const struct bpf_kfunc_desc *desc;
> > > > +
> > > > + desc = find_kfunc_desc(prog, func_id, offset);
> > > > + if (WARN_ON_ONCE(!desc))
> > > > + return -EINVAL;
> > > > +
> > > > + *func_addr = (u8 *)desc->addr;
> > > > + return 0;
> > > > +}
> > >
> > > This function isn't doing much and has a single caller. Should we
> > > just
> > > export find_kfunc_desc?
>
> > We would have to export struct bpf_kfunc_desc as well; I thought
> > it's
> > better to add an extra function so that we could keep hiding the
> > struct.
>
> Ah, good point. In this case seems ok to have this extra wrapper.
> On that note: what's the purpose of WARN_ON_ONCE here?
We can hit this only due to an internal verifier/JIT error, so it would
be good to get some indication of this happening. In verifier.c we have
the verbose() function for that, but this function is called during
JITing.
[...]
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-15 17:49 ` Ilya Leoshkevich
@ 2023-02-15 18:33 ` Stanislav Fomichev
2023-02-15 21:54 ` Ilya Leoshkevich
0 siblings, 1 reply; 9+ messages in thread
From: Stanislav Fomichev @ 2023-02-15 18:33 UTC (permalink / raw)
To: Ilya Leoshkevich
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On 02/15, Ilya Leoshkevich wrote:
> On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> > On 02/15, Ilya Leoshkevich wrote:
> > > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > > On 02/14, Ilya Leoshkevich wrote:
> > > > > test_ksyms_module fails to emit a kfunc call targeting a module
> > > > > on
> > > > > s390x, because the verifier stores the difference between kfunc
> > > > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > > > modules
> > > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > > >
> > > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > > > retrieve
> > > > > as usual by calling bpf_jit_get_func_addr().
> > > >
> > > > > This also fixes the problem with XDP metadata functions
> > > > > outlined in
> > > > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > > > bpf_jit_supports_kfunc_call()") by replacing address lookups
> > > > > with
> > > > > BTF
> > > > > id lookups. This eliminates the inconsistency between
> > > > > "abstract"
> > > > > XDP
> > > > > metadata functions' BTF ids and their concrete addresses.
> > > >
> > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > ---
> > > > >  include/linux/bpf.h   |  2 ++
> > > > >  kernel/bpf/core.c     | 23 ++++++++++---
> > > > >  kernel/bpf/verifier.c | 79 +++++++++++++-------------------------------
> > > > >  3 files changed, 45 insertions(+), 59 deletions(-)
> > >
> [...]
> > > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> > > > > +		       u8 **func_addr)
> > > > > +{
> > > > > +	const struct bpf_kfunc_desc *desc;
> > > > > +
> > > > > +	desc = find_kfunc_desc(prog, func_id, offset);
> > > > > +	if (WARN_ON_ONCE(!desc))
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	*func_addr = (u8 *)desc->addr;
> > > > > +	return 0;
> > > > > +}
> > > >
> > > > This function isn't doing much and has a single caller. Should we
> > > > just
> > > > export find_kfunc_desc?
> >
> > > We would have to export struct bpf_kfunc_desc as well; I thought
> > > it's
> > > better to add an extra function so that we could keep hiding the
> > > struct.
> >
> > Ah, good point. In this case seems ok to have this extra wrapper.
> > On that note: what's the purpose of WARN_ON_ONCE here?
> We can hit this only due to an internal verifier/JIT error, so it would
> be good to get some indication of this happening. In verifier.c we have
> verbose() function for that, but this function is called during JITing.
> [...]
From my point of view, reading the code, it makes it a bit confusing. If
there is a WARN_ON_ONCE, I'm assuming it's guarding against some kind of
internal inconsistency that can happen.
What kind of inconsistency is it guarding against here? We seem to have
find_kfunc_desc in fixup_kfunc_call that checks the same insn->imm
and returns early.
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-15 18:33 ` Stanislav Fomichev
@ 2023-02-15 21:54 ` Ilya Leoshkevich
2023-02-15 21:59 ` Alexei Starovoitov
0 siblings, 1 reply; 9+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 21:54 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa
On Wed, 2023-02-15 at 10:33 -0800, Stanislav Fomichev wrote:
> On 02/15, Ilya Leoshkevich wrote:
> > On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> > > On 02/15, Ilya Leoshkevich wrote:
> > > > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > > > On 02/14, Ilya Leoshkevich wrote:
> > > > > > test_ksyms_module fails to emit a kfunc call targeting a
> > > > > > module
> > > > > > on
> > > > > > s390x, because the verifier stores the difference between
> > > > > > kfunc
> > > > > > address and __bpf_call_base in bpf_insn.imm, which is s32,
> > > > > > and
> > > > > > modules
> > > > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > > > >
> > > > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > > > and storing the absolute address in bpf_kfunc_desc, which
> > > > > > JITs
> > > > > > retrieve
> > > > > > as usual by calling bpf_jit_get_func_addr().
> > > > >
> > > > > > This also fixes the problem with XDP metadata functions
> > > > > > outlined in
> > > > > > the description of commit 63d7b53ab59f ("s390/bpf:
> > > > > > Implement
> > > > > > bpf_jit_supports_kfunc_call()") by replacing address
> > > > > > lookups
> > > > > > with
> > > > > > BTF
> > > > > > id lookups. This eliminates the inconsistency between
> > > > > > "abstract"
> > > > > > XDP
> > > > > > metadata functions' BTF ids and their concrete addresses.
> > > > >
> > > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > > ---
> > > > > >  include/linux/bpf.h   |  2 ++
> > > > > >  kernel/bpf/core.c     | 23 ++++++++++---
> > > > > >  kernel/bpf/verifier.c | 79 +++++++++++++-------------------------------
> > > > > >  3 files changed, 45 insertions(+), 59 deletions(-)
> > > >
>
> > [...]
>
> > > > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> > > > > > +		       u8 **func_addr)
> > > > > > +{
> > > > > > +	const struct bpf_kfunc_desc *desc;
> > > > > > +
> > > > > > +	desc = find_kfunc_desc(prog, func_id, offset);
> > > > > > +	if (WARN_ON_ONCE(!desc))
> > > > > > +		return -EINVAL;
> > > > > > +
> > > > > > +	*func_addr = (u8 *)desc->addr;
> > > > > > +	return 0;
> > > > > > +}
> > > > >
> > > > > This function isn't doing much and has a single caller.
> > > > > Should we
> > > > > just
> > > > > export find_kfunc_desc?
> > >
> > > > We would have to export struct bpf_kfunc_desc as well; I
> > > > thought
> > > > it's
> > > > better to add an extra function so that we could keep hiding
> > > > the
> > > > struct.
> > >
> > > Ah, good point. In this case seems ok to have this extra wrapper.
> > > On that note: what's the purpose of WARN_ON_ONCE here?
>
> > We can hit this only due to an internal verifier/JIT error, so it
> > would
> > be good to get some indication of this happening. In verifier.c we
> > have
> > verbose() function for that, but this function is called during
> > JITing.
>
> > [...]
>
> From my point of view, reading the code, it makes it a bit
> confusing. If
> there
> is a WARN_ON_ONCE, I'm assuming it's guarding against some kind of
> internal
> inconsistency that can happen.
>
> What kind of inconsistency is it guarding against here? We seem to
> have
> find_kfunc_desc in fixup_kfunc_call that checks the same insn->imm
> and returns early.
The potential inconsistency is the situation when the check in
fixup_kfunc_call() passes, and the very same check here fails.
We could say it's impossible and directly dereference desc. But if
there is a bug and it still happens, the result is a panic or an oops.
Or we could still check whether it's NULL. And if we add such a check,
adding a WARN_ON_ONCE() on top is a cheap way to quickly pinpoint the
potential user-observable issues.
Of course, it's a defensive programming / gut feeling thing. I don't
think comparing all pointers to NULL before dereferencing them is a good
idea. It's just that here the first NULL comparison is quite far away,
and with time code that modifies kfunc_tab may be added in between, in
which case this might come in handy.
In any case, I don't have a really strong opinion here, just explaining
my thought process. If the consensus is to drop it, I would not mind
dropping it at all.
* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
2023-02-15 21:54 ` Ilya Leoshkevich
@ 2023-02-15 21:59 ` Alexei Starovoitov
0 siblings, 0 replies; 9+ messages in thread
From: Alexei Starovoitov @ 2023-02-15 21:59 UTC (permalink / raw)
To: Ilya Leoshkevich
Cc: Stanislav Fomichev, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
Alexander Gordeev, Jiri Olsa
On Wed, Feb 15, 2023 at 1:55 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> In any case, I don't have a really strong opinion here, just explaining
> my thought process. If the consensus is to drop it, I would not mind
> dropping it at all.
I think doing:
+ desc = find_kfunc_desc(prog, func_id, offset);
+ if (!desc)
+ return -EFAULT;
would be enough to catch that verifier issue.
We have similar code in several places.
When we can print verbose() into the verifier log, we do.
In other cases we just EFAULT which is a sign that something
wrong with the verifier.
We don't use WARN in all those cases.
end of thread, other threads:[~2023-02-15 22:00 UTC | newest]
Thread overview: 9+ messages
2023-02-14 21:28 [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
2023-02-14 23:58 ` Stanislav Fomichev
2023-02-15 10:07 ` Ilya Leoshkevich
2023-02-15 17:43 ` Stanislav Fomichev
2023-02-15 17:49 ` Ilya Leoshkevich
2023-02-15 18:33 ` Stanislav Fomichev
2023-02-15 21:54 ` Ilya Leoshkevich
2023-02-15 21:59 ` Alexei Starovoitov