bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v2] bpf: make update_prog_stats always_inline
@ 2025-06-21  4:55 Menglong Dong
  2025-06-23 16:26 ` Alexei Starovoitov
  0 siblings, 1 reply; 3+ messages in thread
From: Menglong Dong @ 2025-06-21  4:55 UTC (permalink / raw)
  To: alexei.starovoitov
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
	yonghong.song, kpsingh, sdf, haoluo, jolsa, bpf, linux-kernel,
	Menglong Dong

The function update_prog_stats() is called from the bpf trampoline. In
most cases the compiler optimizes it by inlining it, but we can't rely
on the compiler every time, so mark it __always_inline to avoid the
possible call overhead.

Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v2:
- split out __update_prog_stats() and make update_prog_stats()
  __always_inline, per Alexei's advice
---
 kernel/bpf/trampoline.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index c4b1a98ff726..1f92246117eb 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -911,18 +911,16 @@ static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog, struct bpf_tram
 	return bpf_prog_start_time();
 }
 
-static void notrace update_prog_stats(struct bpf_prog *prog,
-				      u64 start)
+static void notrace __update_prog_stats(struct bpf_prog *prog, u64 start)
 {
 	struct bpf_prog_stats *stats;
 
-	if (static_branch_unlikely(&bpf_stats_enabled_key) &&
-	    /* static_key could be enabled in __bpf_prog_enter*
-	     * and disabled in __bpf_prog_exit*.
-	     * And vice versa.
-	     * Hence check that 'start' is valid.
-	     */
-	    start > NO_START_TIME) {
+	/* static_key could be enabled in __bpf_prog_enter*
+	 * and disabled in __bpf_prog_exit*.
+	 * And vice versa.
+	 * Hence check that 'start' is valid.
+	 */
+	if (start > NO_START_TIME) {
 		u64 duration = sched_clock() - start;
 		unsigned long flags;
 
@@ -934,6 +932,13 @@ static void notrace update_prog_stats(struct bpf_prog *prog,
 	}
 }
 
+static __always_inline void notrace update_prog_stats(struct bpf_prog *prog,
+						      u64 start)
+{
+	if (static_branch_unlikely(&bpf_stats_enabled_key))
+		__update_prog_stats(prog, start);
+}
+
 static void notrace __bpf_prog_exit_recur(struct bpf_prog *prog, u64 start,
 					  struct bpf_tramp_run_ctx *run_ctx)
 	__releases(RCU)
-- 
2.39.5



* Re: [PATCH bpf-next v2] bpf: make update_prog_stats always_inline
  2025-06-21  4:55 [PATCH bpf-next v2] bpf: make update_prog_stats always_inline Menglong Dong
@ 2025-06-23 16:26 ` Alexei Starovoitov
  2025-06-24  2:03   ` Menglong Dong
  0 siblings, 1 reply; 3+ messages in thread
From: Alexei Starovoitov @ 2025-06-23 16:26 UTC (permalink / raw)
  To: Menglong Dong
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	bpf, LKML, Menglong Dong

On Fri, Jun 20, 2025 at 9:57 PM Menglong Dong <menglong8.dong@gmail.com> wrote:
>
> The function update_prog_stats() is called from the bpf trampoline. In
> most cases the compiler optimizes it by inlining it, but we can't rely
> on the compiler every time, so mark it __always_inline to avoid the
> possible call overhead.
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> ---
> v2:
> - split out __update_prog_stats() and make update_prog_stats()
>   __always_inline, per Alexei's advice
> ---
>  kernel/bpf/trampoline.c | 23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index c4b1a98ff726..1f92246117eb 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -911,18 +911,16 @@ static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog, struct bpf_tram
>         return bpf_prog_start_time();
>  }
>
> -static void notrace update_prog_stats(struct bpf_prog *prog,
> -                                     u64 start)
> +static void notrace __update_prog_stats(struct bpf_prog *prog, u64 start)
>  {
>         struct bpf_prog_stats *stats;
>
> -       if (static_branch_unlikely(&bpf_stats_enabled_key) &&
> -           /* static_key could be enabled in __bpf_prog_enter*
> -            * and disabled in __bpf_prog_exit*.
> -            * And vice versa.
> -            * Hence check that 'start' is valid.
> -            */
> -           start > NO_START_TIME) {
> +       /* static_key could be enabled in __bpf_prog_enter*
> +        * and disabled in __bpf_prog_exit*.
> +        * And vice versa.
> +        * Hence check that 'start' is valid.
> +        */


Instead of the old networking style, I reformatted the above into a
normal kernel-style comment.

> +       if (start > NO_START_TIME) {

and refactored the check to <= and removed the extra indent below,
while applying.


* Re: [PATCH bpf-next v2] bpf: make update_prog_stats always_inline
  2025-06-23 16:26 ` Alexei Starovoitov
@ 2025-06-24  2:03   ` Menglong Dong
  0 siblings, 0 replies; 3+ messages in thread
From: Menglong Dong @ 2025-06-24  2:03 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	bpf, LKML, Menglong Dong

On Tue, Jun 24, 2025 at 12:26 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Jun 20, 2025 at 9:57 PM Menglong Dong <menglong8.dong@gmail.com> wrote:
> >
> > The function update_prog_stats() is called from the bpf trampoline. In
> > most cases the compiler optimizes it by inlining it, but we can't rely
> > on the compiler every time, so mark it __always_inline to avoid the
> > possible call overhead.
> >
> > Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> > ---
> > v2:
> > - split out __update_prog_stats() and make update_prog_stats()
> >   __always_inline, per Alexei's advice
> > ---
> >  kernel/bpf/trampoline.c | 23 ++++++++++++++---------
> >  1 file changed, 14 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index c4b1a98ff726..1f92246117eb 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
> > @@ -911,18 +911,16 @@ static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog, struct bpf_tram
> >         return bpf_prog_start_time();
> >  }
> >
> > -static void notrace update_prog_stats(struct bpf_prog *prog,
> > -                                     u64 start)
> > +static void notrace __update_prog_stats(struct bpf_prog *prog, u64 start)
> >  {
> >         struct bpf_prog_stats *stats;
> >
> > -       if (static_branch_unlikely(&bpf_stats_enabled_key) &&
> > -           /* static_key could be enabled in __bpf_prog_enter*
> > -            * and disabled in __bpf_prog_exit*.
> > -            * And vice versa.
> > -            * Hence check that 'start' is valid.
> > -            */
> > -           start > NO_START_TIME) {
> > +       /* static_key could be enabled in __bpf_prog_enter*
> > +        * and disabled in __bpf_prog_exit*.
> > +        * And vice versa.
> > +        * Hence check that 'start' is valid.
> > +        */
>
>
> Instead of the old networking style, I reformatted the above into a
> normal kernel-style comment.
>
> > +       if (start > NO_START_TIME) {
>
> and refactored the check to <= and removed the extra indent below,
> while applying.

Looks much better, thanks a lot ~


