BPF List
* [PATCH bpf-next 0/3] support nocsr patterns for calls to kfuncs
@ 2024-08-12 23:43 Eduard Zingerman
  2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-12 23:43 UTC (permalink / raw)
  To: bpf, ast
  Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song,
	Eduard Zingerman

As an extension of [1], allow recognition of nocsr patterns for kfuncs:
- pattern rules are the same as for helpers;
- spill/fill removal is allowed only for kfuncs marked with the KF_NOCSR
  flag.

Mark the bpf_cast_to_kern_ctx() and bpf_rdonly_cast() kfuncs as KF_NOCSR
in order to exercise this feature in selftests.

After this patch-set the verifier rewrites the program below:

  r2 = 1
  *(u64 *)(r10 - 32) = r2
  call %[bpf_cast_to_kern_ctx]
  r2 = *(u64 *)(r10 - 32)
  r0 = r2
  exit

As follows:

  r2 = 1   /* spill/fill at r10[-32] is removed */
  r0 = r1  /* replacement for bpf_cast_to_kern_ctx() */
  r0 = r2
  exit

[1] no_caller_saved_registers attribute for helper calls
    https://lore.kernel.org/bpf/20240722233844.1406874-1-eddyz87@gmail.com/

Eduard Zingerman (3):
  bpf: support nocsr patterns for calls to kfuncs
  bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  selftests/bpf: check if nocsr pattern is recognized for kfuncs

 include/linux/btf.h                           |  1 +
 kernel/bpf/helpers.c                          |  4 +-
 kernel/bpf/verifier.c                         | 37 ++++++++++++++
 .../selftests/bpf/progs/verifier_nocsr.c      | 50 +++++++++++++++++++
 4 files changed, 90 insertions(+), 2 deletions(-)

-- 
2.45.2


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-12 23:43 [PATCH bpf-next 0/3] support nocsr patterns for calls to kfuncs Eduard Zingerman
@ 2024-08-12 23:43 ` Eduard Zingerman
  2024-08-13  5:36   ` Yonghong Song
  2024-08-15 21:24   ` Andrii Nakryiko
  2024-08-12 23:43 ` [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR Eduard Zingerman
  2024-08-12 23:43 ` [PATCH bpf-next 3/3] selftests/bpf: check if nocsr pattern is recognized for kfuncs Eduard Zingerman
  2 siblings, 2 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-12 23:43 UTC (permalink / raw)
  To: bpf, ast
  Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song,
	Eduard Zingerman

Recognize nocsr patterns around kfunc calls.
For example, suppose bpf_cast_to_kern_ctx() follows the nocsr contract
(which it does: it is rewritten by the verifier as an "r0 = r1" insn);
in such a case, rewrite the BPF program below:

  r2 = 1;
  *(u64 *)(r10 - 32) = r2;
  call %[bpf_cast_to_kern_ctx];
  r2 = *(u64 *)(r10 - 32);
  r0 = r2;

Removing the spill/fill pair:

  r2 = 1;
  call %[bpf_cast_to_kern_ctx];
  r0 = r2;

Add a KF_NOCSR flag to mark kfuncs that follow the nocsr contract.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 include/linux/btf.h   |  1 +
 kernel/bpf/verifier.c | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index cffb43133c68..59ca37300423 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -75,6 +75,7 @@
 #define KF_ITER_NEXT    (1 << 9) /* kfunc implements BPF iter next method */
 #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
 #define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
+#define KF_NOCSR        (1 << 12) /* kfunc follows nocsr calling contract */
 
 /*
  * Tag marking a kernel function as a kfunc. This is meant to minimize the
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index df3be12096cf..c579f74be3f9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
 	}
 }
 
+/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
+static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
+{
+	const struct btf_param *params;
+	u32 vlen, i, mask;
+
+	params = btf_params(meta->func_proto);
+	vlen = btf_type_vlen(meta->func_proto);
+	mask = 0;
+	if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
+		mask |= BIT(BPF_REG_0);
+	for (i = 0; i < vlen; ++i)
+		mask |= BIT(BPF_REG_1 + i);
+	return mask;
+}
+
+/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
+static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
+{
+	return false;
+}
+
 /* GCC and LLVM define a no_caller_saved_registers function attribute.
  * This attribute means that function scratches only some of
  * the caller saved registers defined by ABI.
@@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
 				  bpf_jit_inlines_helper_call(call->imm));
 	}
 
+	if (bpf_pseudo_kfunc_call(call)) {
+		struct bpf_kfunc_call_arg_meta meta;
+		int err;
+
+		err = fetch_kfunc_meta(env, call, &meta, NULL);
+		if (err < 0)
+			/* error would be reported later */
+			return;
+
+		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
+		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
+				 verifier_inlines_kfunc_call(&meta);
+	}
+
 	if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
 		return;
 
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  2024-08-12 23:43 [PATCH bpf-next 0/3] support nocsr patterns for calls to kfuncs Eduard Zingerman
  2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
@ 2024-08-12 23:43 ` Eduard Zingerman
  2024-08-15 21:25   ` Andrii Nakryiko
  2024-08-12 23:43 ` [PATCH bpf-next 3/3] selftests/bpf: check if nocsr pattern is recognized for kfuncs Eduard Zingerman
  2 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-12 23:43 UTC (permalink / raw)
  To: bpf, ast
  Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song,
	Eduard Zingerman

do_misc_fixups() replaces bpf_cast_to_kern_ctx() and bpf_rdonly_cast()
by a single instruction "r0 = r1". This clearly follows the nocsr
contract. Mark these two functions as KF_NOCSR, in order to use them in
selftests checking KF_NOCSR behaviour for kfuncs.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/helpers.c  | 4 ++--
 kernel/bpf/verifier.c | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index d02ae323996b..cda3c326eeb1 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2987,8 +2987,8 @@ BTF_ID(func, bpf_cgroup_release_dtor)
 #endif
 
 BTF_KFUNCS_START(common_btf_ids)
-BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
-BTF_ID_FLAGS(func, bpf_rdonly_cast)
+BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx, KF_NOCSR)
+BTF_ID_FLAGS(func, bpf_rdonly_cast, KF_NOCSR)
 BTF_ID_FLAGS(func, bpf_rcu_read_lock)
 BTF_ID_FLAGS(func, bpf_rcu_read_unlock)
 BTF_ID_FLAGS(func, bpf_dynptr_slice, KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c579f74be3f9..88e583a37296 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16159,7 +16159,8 @@ static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
 /* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
 static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
 {
-	return false;
+	return meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
+	       meta->func_id == special_kfunc_list[KF_bpf_rdonly_cast];
 }
 
 /* GCC and LLVM define a no_caller_saved_registers function attribute.
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH bpf-next 3/3] selftests/bpf: check if nocsr pattern is recognized for kfuncs
  2024-08-12 23:43 [PATCH bpf-next 0/3] support nocsr patterns for calls to kfuncs Eduard Zingerman
  2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
  2024-08-12 23:43 ` [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR Eduard Zingerman
@ 2024-08-12 23:43 ` Eduard Zingerman
  2 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-12 23:43 UTC (permalink / raw)
  To: bpf, ast
  Cc: andrii, daniel, martin.lau, kernel-team, yonghong.song,
	Eduard Zingerman

Use kfunc_bpf_cast_to_kern_ctx() and kfunc_bpf_rdonly_cast()
to verify that the nocsr pattern is recognized for kfunc calls.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/progs/verifier_nocsr.c      | 50 +++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_nocsr.c b/tools/testing/selftests/bpf/progs/verifier_nocsr.c
index a7fe277e5167..8ce8d90ea3ad 100644
--- a/tools/testing/selftests/bpf/progs/verifier_nocsr.c
+++ b/tools/testing/selftests/bpf/progs/verifier_nocsr.c
@@ -2,8 +2,11 @@
 
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
 #include "../../../include/linux/filter.h"
 #include "bpf_misc.h"
+#include <stdbool.h>
+#include "bpf_kfuncs.h"
 
 SEC("raw_tp")
 __arch_x86_64
@@ -793,4 +796,51 @@ __naked int nocsr_max_stack_fail(void)
 	);
 }
 
+SEC("cgroup/getsockname_unix")
+__xlated("0: r2 = 1")
+/* bpf_cast_to_kern_ctx is replaced by a single assignment */
+__xlated("1: r0 = r1")
+__xlated("2: r0 = r2")
+__xlated("3: exit")
+__success
+__naked void kfunc_bpf_cast_to_kern_ctx(void)
+{
+	asm volatile (
+	"r2 = 1;"
+	"*(u64 *)(r10 - 32) = r2;"
+	"call %[bpf_cast_to_kern_ctx];"
+	"r2 = *(u64 *)(r10 - 32);"
+	"r0 = r2;"
+	"exit;"
+	:
+	: __imm(bpf_cast_to_kern_ctx)
+	: __clobber_all);
+}
+
+SEC("raw_tp")
+__xlated("3: r3 = 1")
+/* bpf_rdonly_cast is replaced by a single assignment */
+__xlated("4: r0 = r1")
+__xlated("5: r0 = r3")
+void kfunc_bpf_rdonly_cast(void)
+{
+	asm volatile (
+	"r2 = %[btf_id];"
+	"r3 = 1;"
+	"*(u64 *)(r10 - 32) = r3;"
+	"call %[bpf_rdonly_cast];"
+	"r3 = *(u64 *)(r10 - 32);"
+	"r0 = r3;"
+	:
+	: __imm(bpf_rdonly_cast),
+	 [btf_id]"r"(bpf_core_type_id_kernel(union bpf_attr))
+	: __clobber_common);
+}
+
+void kfunc_root(void)
+{
+	bpf_cast_to_kern_ctx(0);
+	bpf_rdonly_cast(0, 0);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
@ 2024-08-13  5:36   ` Yonghong Song
  2024-08-13  7:55     ` Eduard Zingerman
  2024-08-15 21:24   ` Andrii Nakryiko
  1 sibling, 1 reply; 18+ messages in thread
From: Yonghong Song @ 2024-08-13  5:36 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team


On 8/12/24 4:43 PM, Eduard Zingerman wrote:
> Recognize nocsr patterns around kfunc calls.
> For example, suppose bpf_cast_to_kern_ctx() follows nocsr contract
> (which it does, it is rewritten by verifier as "r0 = r1" insn),
> in such a case, rewrite BPF program below:
>
>    r2 = 1;
>    *(u64 *)(r10 - 32) = r2;
>    call %[bpf_cast_to_kern_ctx];
>    r2 = *(u64 *)(r10 - 32);
>    r0 = r2;
>
> Removing the spill/fill pair:
>
>    r2 = 1;
>    call %[bpf_cast_to_kern_ctx];
>    r0 = r2;

I can see this is indeed a good optimization, esp. when there is
register pressure in the program and, like above, r2 has to be spilled.
Using nocsr for bpf_cast_to_kern_ctx() can remove those spill/fill
insns.

>
> Add a KF_NOCSR flag to mark kfuncs that follow nocsr contract.
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>   include/linux/btf.h   |  1 +
>   kernel/bpf/verifier.c | 36 ++++++++++++++++++++++++++++++++++++
>   2 files changed, 37 insertions(+)
>
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index cffb43133c68..59ca37300423 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -75,6 +75,7 @@
>   #define KF_ITER_NEXT    (1 << 9) /* kfunc implements BPF iter next method */
>   #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
>   #define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
> +#define KF_NOCSR        (1 << 12) /* kfunc follows nocsr calling contract */
>   
>   /*
>    * Tag marking a kernel function as a kfunc. This is meant to minimize the
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index df3be12096cf..c579f74be3f9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>   	}
>   }
>   
> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
> +{
> +	const struct btf_param *params;
> +	u32 vlen, i, mask;

In helper_nocsr_clobber_mask() we have 'u8 mask'. To be consistent, can we have 'u8 mask'
here as well? Or are you worried that the number of arguments could be more than 7?
That does not seem to be the case right now.

> +
> +	params = btf_params(meta->func_proto);
> +	vlen = btf_type_vlen(meta->func_proto);
> +	mask = 0;
> +	if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
> +		mask |= BIT(BPF_REG_0);
> +	for (i = 0; i < vlen; ++i)
> +		mask |= BIT(BPF_REG_1 + i);
> +	return mask;
> +}
> +
> +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
> +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
> +{
> +	return false;
> +}
> +
>   /* GCC and LLVM define a no_caller_saved_registers function attribute.
>    * This attribute means that function scratches only some of
>    * the caller saved registers defined by ABI.
> @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>   				  bpf_jit_inlines_helper_call(call->imm));
>   	}
>   
> +	if (bpf_pseudo_kfunc_call(call)) {
> +		struct bpf_kfunc_call_arg_meta meta;
> +		int err;
> +
> +		err = fetch_kfunc_meta(env, call, &meta, NULL);
> +		if (err < 0)
> +			/* error would be reported later */
> +			return;
> +
> +		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
> +		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
> +				 verifier_inlines_kfunc_call(&meta);

I think we do not need both meta.kfunc_flags & KF_NOCSR and
verifier_inlines_kfunc_call(&meta). Only one of them is enough,
since they test a very similar thing. You do need to ensure that
kfuncs with KF_NOCSR are in special_kfunc_list, though.
WDYT?

> +	}
> +
>   	if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>   		return;
>   

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-13  5:36   ` Yonghong Song
@ 2024-08-13  7:55     ` Eduard Zingerman
  2024-08-13 15:18       ` Yonghong Song
  0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-13  7:55 UTC (permalink / raw)
  To: Yonghong Song, bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team

On Mon, 2024-08-12 at 22:36 -0700, Yonghong Song wrote:

[...]

> > @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
> >   	}
> >   }
> >   
> > +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
> > +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
> > +{
> > +	const struct btf_param *params;
> > +	u32 vlen, i, mask;
> 
> In helper_nocsr_clobber_mask, we have u8 mask. To be consistent, can we have 'u8 mask' here?
> Are you worried that the number of arguments could be more than 7? This seems not the case
> right now.

Before the nocsr part for helpers landed there was a change request to
make helper_nocsr_clobber_mask() return u32. I modified the function
but forgot to change the type for 'mask' local variable.

The main point in using u32 is uniformity.
I can either change kfunc_nocsr_clobber_mask() to use u8 for mask,
or update helper_nocsr_clobber_mask() to use u32 for mask.

> 
> > +
> > +	params = btf_params(meta->func_proto);
> > +	vlen = btf_type_vlen(meta->func_proto);
> > +	mask = 0;
> > +	if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
> > +		mask |= BIT(BPF_REG_0);
> > +	for (i = 0; i < vlen; ++i)
> > +		mask |= BIT(BPF_REG_1 + i);
> > +	return mask;
> > +}
> > +
> > +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
> > +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
> > +{
> > +	return false;
> > +}
> > +
> >   /* GCC and LLVM define a no_caller_saved_registers function attribute.
> >    * This attribute means that function scratches only some of
> >    * the caller saved registers defined by ABI.
> > @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
> >   				  bpf_jit_inlines_helper_call(call->imm));
> >   	}
> >   
> > +	if (bpf_pseudo_kfunc_call(call)) {
> > +		struct bpf_kfunc_call_arg_meta meta;
> > +		int err;
> > +
> > +		err = fetch_kfunc_meta(env, call, &meta, NULL);
> > +		if (err < 0)
> > +			/* error would be reported later */
> > +			return;
> > +
> > +		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
> > +		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
> > +				 verifier_inlines_kfunc_call(&meta);
> 
> I think we do not need both meta.kfunc_flags & KF_NOCSR and
> verifier_inlines_kfunc_call(&meta). Only one of them is enough
> since they test very similar thing. You do need to ensure
> kfuncs with KF_NOCSR in special_kfunc_list though.
> WDYT?

I can remove the flag in favour of verifier_inlines_kfunc_call().

> 
> > +	}
> > +
> >   	if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
> >   		return;
> >   



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-13  7:55     ` Eduard Zingerman
@ 2024-08-13 15:18       ` Yonghong Song
  2024-08-13 18:57         ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Yonghong Song @ 2024-08-13 15:18 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team


On 8/13/24 12:55 AM, Eduard Zingerman wrote:
> On Mon, 2024-08-12 at 22:36 -0700, Yonghong Song wrote:
>
> [...]
>
>>> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>>>    	}
>>>    }
>>>    
>>> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
>>> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +	const struct btf_param *params;
>>> +	u32 vlen, i, mask;
>> In helper_nocsr_clobber_mask, we have u8 mask. To be consistent, can we have 'u8 mask' here?
>> Are you worried that the number of arguments could be more than 7? This seems not the case
>> right now.
> Before the nocsr part for helpers landed there was a change request to
> make helper_nocsr_clobber_mask() return u32. I modified the function
> but forgot to change the type for 'mask' local variable.
>
> The main point in using u32 is uniformity.
> I can either change kfunc_nocsr_clobber_mask() to use u8 for mask,
> or update helper_nocsr_clobber_mask() to use u32 for mask.

Changing to u32 in helper_nocsr_clobber_mask() is okay. I
just want to have a consistent type for 'mask' in both functions.

>
>>> +
>>> +	params = btf_params(meta->func_proto);
>>> +	vlen = btf_type_vlen(meta->func_proto);
>>> +	mask = 0;
>>> +	if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
>>> +		mask |= BIT(BPF_REG_0);
>>> +	for (i = 0; i < vlen; ++i)
>>> +		mask |= BIT(BPF_REG_1 + i);
>>> +	return mask;
>>> +}
>>> +
>>> +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
>>> +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +	return false;
>>> +}
>>> +
>>>    /* GCC and LLVM define a no_caller_saved_registers function attribute.
>>>     * This attribute means that function scratches only some of
>>>     * the caller saved registers defined by ABI.
>>> @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>>>    				  bpf_jit_inlines_helper_call(call->imm));
>>>    	}
>>>    
>>> +	if (bpf_pseudo_kfunc_call(call)) {
>>> +		struct bpf_kfunc_call_arg_meta meta;
>>> +		int err;
>>> +
>>> +		err = fetch_kfunc_meta(env, call, &meta, NULL);
>>> +		if (err < 0)
>>> +			/* error would be reported later */
>>> +			return;
>>> +
>>> +		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
>>> +		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
>>> +				 verifier_inlines_kfunc_call(&meta);
>> I think we do not need both meta.kfunc_flags & KF_NOCSR and
>> verifier_inlines_kfunc_call(&meta). Only one of them is enough
>> since they test very similar thing. You do need to ensure
>> kfuncs with KF_NOCSR in special_kfunc_list though.
>> WDYT?
> I can remove the flag in favour of verifier_inlines_kfunc_call().

Sounds good to me.

>
>>> +	}
>>> +
>>>    	if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>>>    		return;
>>>    
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-13 15:18       ` Yonghong Song
@ 2024-08-13 18:57         ` Eduard Zingerman
  0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-13 18:57 UTC (permalink / raw)
  To: Yonghong Song, bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team

On Tue, 2024-08-13 at 08:18 -0700, Yonghong Song wrote:

[...]

> > > > @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
> > > >    				  bpf_jit_inlines_helper_call(call->imm));
> > > >    	}
> > > >    
> > > > +	if (bpf_pseudo_kfunc_call(call)) {
> > > > +		struct bpf_kfunc_call_arg_meta meta;
> > > > +		int err;
> > > > +
> > > > +		err = fetch_kfunc_meta(env, call, &meta, NULL);
> > > > +		if (err < 0)
> > > > +			/* error would be reported later */
> > > > +			return;
> > > > +
> > > > +		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
> > > > +		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
> > > > +				 verifier_inlines_kfunc_call(&meta);
> > > I think we do not need both meta.kfunc_flags & KF_NOCSR and
> > > verifier_inlines_kfunc_call(&meta). Only one of them is enough
> > > since they test very similar thing. You do need to ensure
> > > kfuncs with KF_NOCSR in special_kfunc_list though.
> > > WDYT?
> > I can remove the flag in favour of verifier_inlines_kfunc_call().
> 
> Sounds good to me.

Just one more point. The reason I added KF_NOCSR was to keep the code
as close to the helpers case as possible. For helpers there are two guards:
- verifier_inlines_helper_call() function shared between
  mark_nocsr_pattern_for_call() and do_misc_fixups();
- bpf_func_proto->allow_nocsr flag.

The idea is that the verifier might inline some functions w/o allowing nocsr.
Hence I decided to use KF_NOCSR in place of bpf_func_proto->allow_nocsr.
On the other hand, verifier_inlines_kfunc_call() is not currently used by
any function other than mark_nocsr_pattern_for_call(), so the KF_NOCSR
flag might indeed be redundant.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
  2024-08-13  5:36   ` Yonghong Song
@ 2024-08-15 21:24   ` Andrii Nakryiko
  2024-08-15 22:07     ` Eduard Zingerman
  2024-08-15 22:16     ` Yonghong Song
  1 sibling, 2 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2024-08-15 21:24 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Recognize nocsr patterns around kfunc calls.
> For example, suppose bpf_cast_to_kern_ctx() follows nocsr contract
> (which it does, it is rewritten by verifier as "r0 = r1" insn),
> in such a case, rewrite BPF program below:
>
>   r2 = 1;
>   *(u64 *)(r10 - 32) = r2;
>   call %[bpf_cast_to_kern_ctx];
>   r2 = *(u64 *)(r10 - 32);
>   r0 = r2;
>
> Removing the spill/fill pair:
>
>   r2 = 1;
>   call %[bpf_cast_to_kern_ctx];
>   r0 = r2;
>
> Add a KF_NOCSR flag to mark kfuncs that follow nocsr contract.
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  include/linux/btf.h   |  1 +
>  kernel/bpf/verifier.c | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
>
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index cffb43133c68..59ca37300423 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -75,6 +75,7 @@
>  #define KF_ITER_NEXT    (1 << 9) /* kfunc implements BPF iter next method */
>  #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
>  #define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
> +#define KF_NOCSR        (1 << 12) /* kfunc follows nocsr calling contract */
>
>  /*
>   * Tag marking a kernel function as a kfunc. This is meant to minimize the
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index df3be12096cf..c579f74be3f9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>         }
>  }
>
> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
> +{
> +       const struct btf_param *params;
> +       u32 vlen, i, mask;
> +
> +       params = btf_params(meta->func_proto);
> +       vlen = btf_type_vlen(meta->func_proto);
> +       mask = 0;
> +       if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
> +               mask |= BIT(BPF_REG_0);
> +       for (i = 0; i < vlen; ++i)
> +               mask |= BIT(BPF_REG_1 + i);

Somewhere deep in the btf_dump implementation in libbpf, there is
special handling of `<whatever> func(void)` (no args) functions as
having vlen == 1 and type being VOID (i.e., zero). I don't know if
that can still happen, but I believe at some point we could get
vlen==1 and type=VOID for no-args functions. So I wonder if we should
handle that here as well, or is it some compiler atavism we can forget
about?

> +       return mask;
> +}
> +
> +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
> +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
> +{
> +       return false;
> +}
> +
>  /* GCC and LLVM define a no_caller_saved_registers function attribute.
>   * This attribute means that function scratches only some of
>   * the caller saved registers defined by ABI.
> @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>                                   bpf_jit_inlines_helper_call(call->imm));
>         }
>
> +       if (bpf_pseudo_kfunc_call(call)) {
> +               struct bpf_kfunc_call_arg_meta meta;
> +               int err;
> +
> +               err = fetch_kfunc_meta(env, call, &meta, NULL);
> +               if (err < 0)
> +                       /* error would be reported later */
> +                       return;
> +
> +               clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
> +               can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
> +                                verifier_inlines_kfunc_call(&meta);
> +       }
> +
>         if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>                 return;
>
> --
> 2.45.2
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  2024-08-12 23:43 ` [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR Eduard Zingerman
@ 2024-08-15 21:25   ` Andrii Nakryiko
  2024-08-15 21:59     ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2024-08-15 21:25 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> do_misc_fixups() replaces bpf_cast_to_kern_ctx() and bpf_rdonly_cast()
> by a single instruction "r0 = r1". This clearly follows the nocsr contract.
> Mark these two functions as KF_NOCSR, in order to use them in
> selftests checking KF_NOCSR behaviour for kfuncs.
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  kernel/bpf/helpers.c  | 4 ++--
>  kernel/bpf/verifier.c | 3 ++-
>  2 files changed, 4 insertions(+), 3 deletions(-)

Isn't it now "bpf fastcall" and not "nocsr"? Shouldn't the flag and
verifier code reflect this updated terminology?

>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index d02ae323996b..cda3c326eeb1 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2987,8 +2987,8 @@ BTF_ID(func, bpf_cgroup_release_dtor)
>  #endif
>
>  BTF_KFUNCS_START(common_btf_ids)
> -BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
> -BTF_ID_FLAGS(func, bpf_rdonly_cast)
> +BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx, KF_NOCSR)
> +BTF_ID_FLAGS(func, bpf_rdonly_cast, KF_NOCSR)
>  BTF_ID_FLAGS(func, bpf_rcu_read_lock)
>  BTF_ID_FLAGS(func, bpf_rcu_read_unlock)
>  BTF_ID_FLAGS(func, bpf_dynptr_slice, KF_RET_NULL)
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index c579f74be3f9..88e583a37296 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -16159,7 +16159,8 @@ static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
>  /* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
>  static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
>  {
> -       return false;
> +       return meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
> +              meta->func_id == special_kfunc_list[KF_bpf_rdonly_cast];
>  }
>
>  /* GCC and LLVM define a no_caller_saved_registers function attribute.
> --
> 2.45.2
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  2024-08-15 21:25   ` Andrii Nakryiko
@ 2024-08-15 21:59     ` Eduard Zingerman
  2024-08-15 22:12       ` Andrii Nakryiko
  0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-15 21:59 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Thu, 2024-08-15 at 14:25 -0700, Andrii Nakryiko wrote:
> On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > do_misc_fixups() replaces bpf_cast_to_kern_ctx() and bpf_rdonly_cast()
> > by a single instruction "r0 = r1". This clearly follows the nocsr contract.
> > Mark these two functions as KF_NOCSR, in order to use them in
> > selftests checking KF_NOCSR behaviour for kfuncs.
> > 
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  kernel/bpf/helpers.c  | 4 ++--
> >  kernel/bpf/verifier.c | 3 ++-
> >  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> Isn't it now "bpf fastcall" and not "nocsr"? Shouldn't the flag and
> verifier code reflect this updated terminology?

Here is a pull request for LLVM that lands the feature under
the new bpf_fastcall name: https://github.com/llvm/llvm-project/pull/101228
I hope it will be approved today or tomorrow (more like tomorrow).

The kernel side uses NOCSR everywhere. I can add a first patch to the
series renaming all NOCSR to bpf_fastcall, now that it looks like llvm
upstream won't object to the name.

[...]


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-15 21:24   ` Andrii Nakryiko
@ 2024-08-15 22:07     ` Eduard Zingerman
  2024-08-15 22:23       ` Yonghong Song
  2024-08-15 22:16     ` Yonghong Song
  1 sibling, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-15 22:07 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Thu, 2024-08-15 at 14:24 -0700, Andrii Nakryiko wrote:

[...]

> > @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
> >         }
> >  }
> > 
> > +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
> > +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
> > +{
> > +       const struct btf_param *params;
> > +       u32 vlen, i, mask;
> > +
> > +       params = btf_params(meta->func_proto);
> > +       vlen = btf_type_vlen(meta->func_proto);
> > +       mask = 0;
> > +       if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
> > +               mask |= BIT(BPF_REG_0);
> > +       for (i = 0; i < vlen; ++i)
> > +               mask |= BIT(BPF_REG_1 + i);
> 
> Somewhere deep in btf_dump implementation of libbpf, there is a
> special handling of `<whatever> func(void)` (no args) function as
> having vlen == 1 and type being VOID (i.e., zero). I don't know if
> that still can happen, but I believe at some point we could get this
> vlen==1 and type=VOID for no-args functions. So I wonder if we should
> handle that here as well, or is it some compiler atavism we can forget
> about?
>

I just checked the BTF generated for 'int filelock_init(void)'
in a gcc-compiled kernel using the latest pahole, and the func proto looks as follows:

  FUNC_PROTO '(anon)' ret_type_id=12 vlen=0

So I assume this is an atavism.
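
[Editor's note: the register mask computed by the kfunc_nocsr_clobber_mask() hunk quoted above can be modeled with a short sketch. Register numbering follows the BPF calling convention (R0 return value, R1-R5 arguments); the function name below is illustrative, not a kernel API.]

```python
BPF_REG_0 = 0
BPF_REG_1 = 1

def kfunc_clobber_mask(ret_is_void, num_args):
    # Set a bit for R0 when the return type is non-void,
    # plus one bit per argument register R1..R5.
    mask = 0
    if not ret_is_void:
        mask |= 1 << BPF_REG_0
    for i in range(num_args):
        mask |= 1 << (BPF_REG_1 + i)
    return mask

# bpf_cast_to_kern_ctx(void *ctx) returns a pointer: R0 and R1 are clobbered.
print(bin(kfunc_clobber_mask(ret_is_void=False, num_args=1)))  # 0b11
```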

[...]


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  2024-08-15 21:59     ` Eduard Zingerman
@ 2024-08-15 22:12       ` Andrii Nakryiko
  2024-08-15 22:14         ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2024-08-15 22:12 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Thu, Aug 15, 2024 at 2:59 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Thu, 2024-08-15 at 14:25 -0700, Andrii Nakryiko wrote:
> > On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > >
> > > do_misc_fixups() replaces bpf_cast_to_kern_ctx() and bpf_rdonly_cast()
> > > with a single instruction "r0 = r1". This clearly follows the nocsr contract.
> > > Mark these two functions as KF_NOCSR, in order to use them in
> > > selftests checking KF_NOCSR behaviour for kfuncs.
> > >
> > > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > > ---
> > >  kernel/bpf/helpers.c  | 4 ++--
> > >  kernel/bpf/verifier.c | 3 ++-
> > >  2 files changed, 4 insertions(+), 3 deletions(-)
> >
> > Isn't it now "bpf fastcall" and not "nocsr"? Shouldn't the flag and
> > verifier code reflect this updated terminology?
>
> Here is a pull request for LLVM that lands the feature under
> the new bpf_fastcall name: https://github.com/llvm/llvm-project/pull/101228
> I hope it will be approved today or tomorrow (more like tomorrow).
>
> The kernel side uses NOCSR in all places.
> I can add a first patch to the series, renaming all NOCSR to bpf_fastcall,
> now that it looks like llvm upstream won't object to the name.

Yep, I'd do that. Let's keep terminology consistent throughout.

I assume you'll also eventually follow up with bpf_helpers_defs.h
(there is a script that generates it) change to add that bpf_fastcall
attribute for select helpers, right?

>
> [...]
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR
  2024-08-15 22:12       ` Andrii Nakryiko
@ 2024-08-15 22:14         ` Eduard Zingerman
  0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-15 22:14 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yonghong.song

On Thu, 2024-08-15 at 15:12 -0700, Andrii Nakryiko wrote:

[...]

> > Here is a pull request for LLVM that lands the feature under
> > the new bpf_fastcall name: https://github.com/llvm/llvm-project/pull/101228
> > I hope it will be approved today or tomorrow (more like tomorrow).
> > 
> > The kernel side uses NOCSR in all places.
> > I can add a first patch to the series, renaming all NOCSR to bpf_fastcall,
> > now that it looks like llvm upstream won't object to the name.
> 
> Yep, I'd do that. Let's keep terminology consistent throughout.

Ok, will do, thank you.

> I assume you'll also eventually follow up with bpf_helpers_defs.h
> (there is a script that generates it) change to add that bpf_fastcall
> attribute for select helpers, right?

Good point.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-15 21:24   ` Andrii Nakryiko
  2024-08-15 22:07     ` Eduard Zingerman
@ 2024-08-15 22:16     ` Yonghong Song
  2024-08-15 22:22       ` Yonghong Song
  1 sibling, 1 reply; 18+ messages in thread
From: Yonghong Song @ 2024-08-15 22:16 UTC (permalink / raw)
  To: Andrii Nakryiko, Eduard Zingerman
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team


On 8/15/24 2:24 PM, Andrii Nakryiko wrote:
> On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>> Recognize nocsr patterns around kfunc calls.
>> For example, suppose bpf_cast_to_kern_ctx() follows nocsr contract
>> (which it does, it is rewritten by verifier as "r0 = r1" insn),
>> in such a case, rewrite BPF program below:
>>
>>    r2 = 1;
>>    *(u64 *)(r10 - 32) = r2;
>>    call %[bpf_cast_to_kern_ctx];
>>    r2 = *(u64 *)(r10 - 32);
>>    r0 = r2;
>>
>> Removing the spill/fill pair:
>>
>>    r2 = 1;
>>    call %[bpf_cast_to_kern_ctx];
>>    r0 = r2;
>>
>> Add a KF_NOCSR flag to mark kfuncs that follow nocsr contract.
>>
>> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
>> ---
>>   include/linux/btf.h   |  1 +
>>   kernel/bpf/verifier.c | 36 ++++++++++++++++++++++++++++++++++++
>>   2 files changed, 37 insertions(+)
>>
>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>> index cffb43133c68..59ca37300423 100644
>> --- a/include/linux/btf.h
>> +++ b/include/linux/btf.h
>> @@ -75,6 +75,7 @@
>>   #define KF_ITER_NEXT    (1 << 9) /* kfunc implements BPF iter next method */
>>   #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
>>   #define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
>> +#define KF_NOCSR        (1 << 12) /* kfunc follows nocsr calling contract */
>>
>>   /*
>>    * Tag marking a kernel function as a kfunc. This is meant to minimize the
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index df3be12096cf..c579f74be3f9 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>>          }
>>   }
>>
>> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
>> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
>> +{
>> +       const struct btf_param *params;
>> +       u32 vlen, i, mask;
>> +
>> +       params = btf_params(meta->func_proto);
>> +       vlen = btf_type_vlen(meta->func_proto);
>> +       mask = 0;
>> +       if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
>> +               mask |= BIT(BPF_REG_0);
>> +       for (i = 0; i < vlen; ++i)
>> +               mask |= BIT(BPF_REG_1 + i);
> Somewhere deep in btf_dump implementation of libbpf, there is a
> special handling of `<whatever> func(void)` (no args) function as
> having vlen == 1 and type being VOID (i.e., zero). I don't know if
> that still can happen, but I believe at some point we could get this
> vlen==1 and type=VOID for no-args functions. So I wonder if we should
> handle that here as well, or is it some compiler atavism we can forget
> about?

The case of vlen=1 and type=VOID only happens for
bpf programs compiled with llvm19 and later.
For example,

$ cat t.c
int foo(); // a kfunc or a helper
int bar() {
   return foo(1, 2);
}

$ clang --target=bpf -O2 -g -c t.c && llvm-dwarfdump t.o
t.c:3:13: warning: passing arguments to 'foo' without a prototype is deprecated in all versions of C and is not supported in C23 [-Wdeprecated-non-prototype]
     3 |   return foo(1, 2);
       |             ^
1 warning generated.
t.o:    file format elf64-bpf
...
0x00000039:   DW_TAG_subprogram
                 DW_AT_name      ("foo")
                 DW_AT_decl_file ("/home/yhs/t.c")
                 DW_AT_decl_line (1)
                 DW_AT_type      (0x00000043 "int")
                 DW_AT_declaration       (true)
                 DW_AT_external  (true)

0x00000041:     DW_TAG_unspecified_parameters

0x00000042:     NULL
...

If we do see a BPF kfunc/helper with vlen=1 and a VOID parameter type,
that means the number of arguments is actually UNKNOWN,
based on the dwarf DW_TAG_subprogram tag. Although it is unlikely
that people would write code like the above, it might still be useful
to add a check for vlen=1 and type=VOID and reject such a case.
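
[Editor's note: the suggested rejection check could be sketched as follows. This is a loose Python model; the VOID type id follows BTF's convention of reserving id 0 for void, and the function name is illustrative, not the kernel's BTF API.]

```python
VOID_TYPE_ID = 0  # BTF reserves type id 0 for void

def proto_has_unknown_args(vlen, param_type_ids):
    # `int foo();` (unspecified parameters) may be encoded as a single
    # VOID parameter; a proper no-args prototype has vlen == 0 instead.
    return vlen == 1 and param_type_ids == [VOID_TYPE_ID]

print(proto_has_unknown_args(1, [0]))  # unknown-args prototype
print(proto_has_unknown_args(0, []))   # `int foo(void)` style prototype
```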


>
>> +       return mask;
>> +}
>> +
>> +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
>> +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
>> +{
>> +       return false;
>> +}
>> +
>>   /* GCC and LLVM define a no_caller_saved_registers function attribute.
>>    * This attribute means that function scratches only some of
>>    * the caller saved registers defined by ABI.
>> @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>>                                    bpf_jit_inlines_helper_call(call->imm));
>>          }
>>
>> +       if (bpf_pseudo_kfunc_call(call)) {
>> +               struct bpf_kfunc_call_arg_meta meta;
>> +               int err;
>> +
>> +               err = fetch_kfunc_meta(env, call, &meta, NULL);
>> +               if (err < 0)
>> +                       /* error would be reported later */
>> +                       return;
>> +
>> +               clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
>> +               can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
>> +                                verifier_inlines_kfunc_call(&meta);
>> +       }
>> +
>>          if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>>                  return;
>>
>> --
>> 2.45.2
>>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-15 22:16     ` Yonghong Song
@ 2024-08-15 22:22       ` Yonghong Song
  0 siblings, 0 replies; 18+ messages in thread
From: Yonghong Song @ 2024-08-15 22:22 UTC (permalink / raw)
  To: Andrii Nakryiko, Eduard Zingerman
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team


On 8/15/24 3:16 PM, Yonghong Song wrote:
>
> On 8/15/24 2:24 PM, Andrii Nakryiko wrote:
>> On Mon, Aug 12, 2024 at 4:44 PM Eduard Zingerman <eddyz87@gmail.com> 
>> wrote:
>>> Recognize nocsr patterns around kfunc calls.
>>> For example, suppose bpf_cast_to_kern_ctx() follows nocsr contract
>>> (which it does, it is rewritten by verifier as "r0 = r1" insn),
>>> in such a case, rewrite BPF program below:
>>>
>>>    r2 = 1;
>>>    *(u64 *)(r10 - 32) = r2;
>>>    call %[bpf_cast_to_kern_ctx];
>>>    r2 = *(u64 *)(r10 - 32);
>>>    r0 = r2;
>>>
>>> Removing the spill/fill pair:
>>>
>>>    r2 = 1;
>>>    call %[bpf_cast_to_kern_ctx];
>>>    r0 = r2;
>>>
>>> Add a KF_NOCSR flag to mark kfuncs that follow nocsr contract.
>>>
>>> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
>>> ---
>>>   include/linux/btf.h   |  1 +
>>>   kernel/bpf/verifier.c | 36 ++++++++++++++++++++++++++++++++++++
>>>   2 files changed, 37 insertions(+)
>>>
>>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>>> index cffb43133c68..59ca37300423 100644
>>> --- a/include/linux/btf.h
>>> +++ b/include/linux/btf.h
>>> @@ -75,6 +75,7 @@
>>>   #define KF_ITER_NEXT    (1 << 9) /* kfunc implements BPF iter next 
>>> method */
>>>   #define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter 
>>> destructor */
>>>   #define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by 
>>> rcu cs when they are invoked */
>>> +#define KF_NOCSR        (1 << 12) /* kfunc follows nocsr calling 
>>> contract */
>>>
>>>   /*
>>>    * Tag marking a kernel function as a kfunc. This is meant to 
>>> minimize the
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index df3be12096cf..c579f74be3f9 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -16140,6 +16140,28 @@ static bool 
>>> verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>>>          }
>>>   }
>>>
>>> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment 
>>> above */
>>> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta 
>>> *meta)
>>> +{
>>> +       const struct btf_param *params;
>>> +       u32 vlen, i, mask;
>>> +
>>> +       params = btf_params(meta->func_proto);
>>> +       vlen = btf_type_vlen(meta->func_proto);
>>> +       mask = 0;
>>> +       if (!btf_type_is_void(btf_type_by_id(meta->btf, 
>>> meta->func_proto->type)))
>>> +               mask |= BIT(BPF_REG_0);
>>> +       for (i = 0; i < vlen; ++i)
>>> +               mask |= BIT(BPF_REG_1 + i);
>> Somewhere deep in btf_dump implementation of libbpf, there is a
>> special handling of `<whatever> func(void)` (no args) function as
>> having vlen == 1 and type being VOID (i.e., zero). I don't know if
>> that still can happen, but I believe at some point we could get this
>> vlen==1 and type=VOID for no-args functions. So I wonder if we should
>> handle that here as well, or is it some compiler atavism we can forget
>> about?
>
> The case of vlen=1 and type=VOID only happens for
> bpf programs compiled with llvm19 and later.
> For example,
>
> $ cat t.c
> int foo(); // a kfunc or a helper
> int bar() {
>   return foo(1, 2);
> }
>
> $ clang --target=bpf -O2 -g -c t.c && llvm-dwarfdump t.o
> t.c:3:13: warning: passing arguments to 'foo' without a prototype is 
> deprecated in all versions of C and is not supported in C23 
> [-Wdeprecated-non-prototype]
>     3 |   return foo(1, 2);
>       |             ^
> 1 warning generated.
> t.o:    file format elf64-bpf
> ...
> 0x00000039:   DW_TAG_subprogram
>                 DW_AT_name      ("foo")
>                 DW_AT_decl_file ("/home/yhs/t.c")
>                 DW_AT_decl_line (1)
>                 DW_AT_type      (0x00000043 "int")
>                 DW_AT_declaration       (true)
>                 DW_AT_external  (true)
>
> 0x00000041:     DW_TAG_unspecified_parameters
>
> 0x00000042:     NULL
> ...
>
> If we do see a BPF kfunc/helper with vlen=1 and a VOID parameter type,
> that means the number of arguments is actually UNKNOWN,
> based on the dwarf DW_TAG_subprogram tag. Although it is unlikely
> that people would write code like the above, it might still be useful
> to add a check for vlen=1 and type=VOID and reject such a case.


For vmlinux BTF, this is not possible, since every function
eventually has a definition, which specifies the number of
arguments and their types precisely.


>
>
>>
>>> +       return mask;
>>> +}
>>> +
>>> +/* Same as verifier_inlines_helper_call() but for kfuncs, see 
>>> comment above */
>>> +static bool verifier_inlines_kfunc_call(struct 
>>> bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +       return false;
>>> +}
>>> +
>>>   /* GCC and LLVM define a no_caller_saved_registers function 
>>> attribute.
>>>    * This attribute means that function scratches only some of
>>>    * the caller saved registers defined by ABI.
>>> @@ -16238,6 +16260,20 @@ static void 
>>> mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>>> bpf_jit_inlines_helper_call(call->imm));
>>>          }
>>>
>>> +       if (bpf_pseudo_kfunc_call(call)) {
>>> +               struct bpf_kfunc_call_arg_meta meta;
>>> +               int err;
>>> +
>>> +               err = fetch_kfunc_meta(env, call, &meta, NULL);
>>> +               if (err < 0)
>>> +                       /* error would be reported later */
>>> +                       return;
>>> +
>>> +               clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
>>> +               can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
>>> + verifier_inlines_kfunc_call(&meta);
>>> +       }
>>> +
>>>          if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>>>                  return;
>>>
>>> -- 
>>> 2.45.2
>>>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-15 22:07     ` Eduard Zingerman
@ 2024-08-15 22:23       ` Yonghong Song
  2024-08-15 22:29         ` Eduard Zingerman
  0 siblings, 1 reply; 18+ messages in thread
From: Yonghong Song @ 2024-08-15 22:23 UTC (permalink / raw)
  To: Eduard Zingerman, Andrii Nakryiko
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team


On 8/15/24 3:07 PM, Eduard Zingerman wrote:
> On Thu, 2024-08-15 at 14:24 -0700, Andrii Nakryiko wrote:
>
> [...]
>
>>> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>>>          }
>>>   }
>>>
>>> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
>>> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +       const struct btf_param *params;
>>> +       u32 vlen, i, mask;
>>> +
>>> +       params = btf_params(meta->func_proto);
>>> +       vlen = btf_type_vlen(meta->func_proto);
>>> +       mask = 0;
>>> +       if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
>>> +               mask |= BIT(BPF_REG_0);
>>> +       for (i = 0; i < vlen; ++i)
>>> +               mask |= BIT(BPF_REG_1 + i);
>> Somewhere deep in btf_dump implementation of libbpf, there is a
>> special handling of `<whatever> func(void)` (no args) function as
>> having vlen == 1 and type being VOID (i.e., zero). I don't know if
>> that still can happen, but I believe at some point we could get this
>> vlen==1 and type=VOID for no-args functions. So I wonder if we should
>> handle that here as well, or is it some compiler atavism we can forget
>> about?
>>
> I just checked the BTF generated for 'int filelock_init(void)'
> in a gcc-compiled kernel using the latest pahole, and the func proto looks as follows:
>
>    FUNC_PROTO '(anon)' ret_type_id=12 vlen=0
>
> So I assume this is an atavism.

Agree, for kernel vmlinux BTF, we should be fine.

>
> [...]
>
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs
  2024-08-15 22:23       ` Yonghong Song
@ 2024-08-15 22:29         ` Eduard Zingerman
  0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2024-08-15 22:29 UTC (permalink / raw)
  To: Yonghong Song, Andrii Nakryiko
  Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team

On Thu, 2024-08-15 at 15:23 -0700, Yonghong Song wrote:
> On 8/15/24 3:07 PM, Eduard Zingerman wrote:
> > On Thu, 2024-08-15 at 14:24 -0700, Andrii Nakryiko wrote:
> > 
> > [...]
> > 
> > > > @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
> > > >          }
> > > >   }
> > > > 
> > > > +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
> > > > +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
> > > > +{
> > > > +       const struct btf_param *params;
> > > > +       u32 vlen, i, mask;
> > > > +
> > > > +       params = btf_params(meta->func_proto);
> > > > +       vlen = btf_type_vlen(meta->func_proto);
> > > > +       mask = 0;
> > > > +       if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
> > > > +               mask |= BIT(BPF_REG_0);
> > > > +       for (i = 0; i < vlen; ++i)
> > > > +               mask |= BIT(BPF_REG_1 + i);
> > > Somewhere deep in btf_dump implementation of libbpf, there is a
> > > special handling of `<whatever> func(void)` (no args) function as
> > > having vlen == 1 and type being VOID (i.e., zero). I don't know if
> > > that still can happen, but I believe at some point we could get this
> > > vlen==1 and type=VOID for no-args functions. So I wonder if we should
> > > handle that here as well, or is it some compiler atavism we can forget
> > > about?
> > > 
> > I just checked the BTF generated for 'int filelock_init(void)'
> > in a gcc-compiled kernel using the latest pahole, and the func proto looks as follows:
> > 
> >    FUNC_PROTO '(anon)' ret_type_id=12 vlen=0
> > 
> > So I assume this is an atavism.
> 
> Agree, for kernel vmlinux BTF, we should be fine.

Right, since we are dealing only with vmlinux BTF, the special case is not needed.
Please let me know if I misunderstood you or Andrii.


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2024-08-15 22:29 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-12 23:43 [PATCH bpf-next 0/3] support nocsr patterns for calls to kfuncs Eduard Zingerman
2024-08-12 23:43 ` [PATCH bpf-next 1/3] bpf: " Eduard Zingerman
2024-08-13  5:36   ` Yonghong Song
2024-08-13  7:55     ` Eduard Zingerman
2024-08-13 15:18       ` Yonghong Song
2024-08-13 18:57         ` Eduard Zingerman
2024-08-15 21:24   ` Andrii Nakryiko
2024-08-15 22:07     ` Eduard Zingerman
2024-08-15 22:23       ` Yonghong Song
2024-08-15 22:29         ` Eduard Zingerman
2024-08-15 22:16     ` Yonghong Song
2024-08-15 22:22       ` Yonghong Song
2024-08-12 23:43 ` [PATCH bpf-next 2/3] bpf: mark bpf_cast_to_kern_ctx and bpf_rdonly_cast as KF_NOCSR Eduard Zingerman
2024-08-15 21:25   ` Andrii Nakryiko
2024-08-15 21:59     ` Eduard Zingerman
2024-08-15 22:12       ` Andrii Nakryiko
2024-08-15 22:14         ` Eduard Zingerman
2024-08-12 23:43 ` [PATCH bpf-next 3/3] selftests/bpf: check if nocsr pattern is recognized for kfuncs Eduard Zingerman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox