* [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Andrii Nakryiko @ 2024-03-19 21:20 UTC
To: bpf, ast, daniel, martin.lau
Cc: andrii, kernel-team, Masami Hiramatsu, Peter Zijlstra
get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
is not free and it does pop up in performance profiles when
kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
Let's avoid using it if we know that fentry_ip - 4 can't cross page
boundary. We do that by masking lowest 12 bits and checking if they are
>= 4, in which case we can do direct memory read.
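As a quick sanity check of that arithmetic, here is a userspace sketch
(the macros below are local stand-ins for the kernel definitions, with
4K pages and the 4-byte endbr64 assumed):

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define ENDBR_INSN_SIZE 4	/* endbr64 is 4 bytes on x86 */

int main(void)
{
	/* low 12 bits >= 4: ip - 4 is still on the same page,
	 * so a direct read of the preceding 4 bytes is safe */
	uintptr_t ip = 0x1004;
	assert((ip & ~PAGE_MASK) >= ENDBR_INSN_SIZE);
	assert(((ip - ENDBR_INSN_SIZE) & PAGE_MASK) == (ip & PAGE_MASK));

	/* low 12 bits < 4: ip - 4 crosses into the previous page,
	 * which is the case that still needs a fault-safe read */
	ip = 0x2002;
	assert((ip & ~PAGE_MASK) < ENDBR_INSN_SIZE);
	assert(((ip - ENDBR_INSN_SIZE) & PAGE_MASK) != (ip & PAGE_MASK));
	return 0;
}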
Another benefit (and actually what caused a closer look at this part of
code) is that now LBR record is (typically) not wasted on
copy_from_kernel_nofault() call and code, which helps tools like
retsnoop that grab LBR records from inside BPF code in kretprobes.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/trace/bpf_trace.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0a5c4efc73c3..f81adabda38c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
{
u32 instr;
- /* Being extra safe in here in case entry ip is on the page-edge. */
- if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
- return fentry_ip;
+ /* We want to be extra safe in case entry ip is on the page edge,
+ * but otherwise we need to avoid get_kernel_nofault()'s overhead.
+ */
+ if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
+ if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
+ return fentry_ip;
+ } else {
+ instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
+ }
if (is_endbr(instr))
fentry_ip -= ENDBR_INSN_SIZE;
return fentry_ip;
--
2.43.0
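For readers outside the kernel tree, a rough self-contained userspace
model of the resulting get_entry_ip() flow; the endbr64 encoding
(f3 0f 1e fa), the 4-byte ENDBR_INSN_SIZE and 4K pages are assumptions
based on x86 IBT, and the get_kernel_nofault() fallback is reduced to a
comment:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_MASK	(~4095UL)	/* 4K pages assumed */
#define ENDBR_INSN_SIZE	4

/* model of is_endbr(): do these 4 bytes encode endbr64 (f3 0f 1e fa)? */
static int is_endbr_model(uint32_t insn)
{
	const uint8_t enc[4] = { 0xf3, 0x0f, 0x1e, 0xfa };
	uint32_t endbr64;

	memcpy(&endbr64, enc, sizeof(endbr64));
	return insn == endbr64;
}

/* model of the patched get_entry_ip(): direct read when the 4 bytes
 * before fentry_ip cannot cross a page boundary, otherwise bail out
 * (the kernel would do a fault-safe get_kernel_nofault() read there) */
static uintptr_t get_entry_ip_model(uintptr_t fentry_ip)
{
	uint32_t instr;

	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE)
		return fentry_ip;	/* kernel: get_kernel_nofault() here */

	instr = *(const uint32_t *)(fentry_ip - ENDBR_INSN_SIZE);
	if (is_endbr_model(instr))
		fentry_ip -= ENDBR_INSN_SIZE;
	return fentry_ip;
}

int main(void)
{
	/* page-aligned fake "function": endbr64 first, then the bytes the
	 * fentry/kprobe address actually points at */
	static uint8_t func[16] __attribute__((aligned(4096))) = {
		0xf3, 0x0f, 0x1e, 0xfa,		/* endbr64 */
		0x90, 0x90, 0x90, 0x90,		/* stand-in for the fentry site */
	};
	uintptr_t fentry_ip = (uintptr_t)&func[ENDBR_INSN_SIZE];

	printf("entry ip adjusted back by %zu bytes\n",
	       (size_t)(fentry_ip - get_entry_ip_model(fentry_ip)));
	return 0;
}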
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Masami Hiramatsu @ 2024-03-20 3:47 UTC
To: Andrii Nakryiko
Cc: bpf, ast, daniel, martin.lau, kernel-team, Masami Hiramatsu,
Peter Zijlstra
On Tue, 19 Mar 2024 14:20:13 -0700
Andrii Nakryiko <andrii@kernel.org> wrote:
> get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> is not free and it does pop up in performance profiles when
> kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
>
> Let's avoid using it if we know that fentry_ip - 4 can't cross page
> boundary. We do that by masking lowest 12 bits and checking if they are
> >= 4, in which case we can do direct memory read.
>
> Another benefit (and actually what caused a closer look at this part of
> code) is that now LBR record is (typically) not wasted on
> copy_from_kernel_nofault() call and code, which helps tools like
> retsnoop that grab LBR records from inside BPF code in kretprobes.
Hmm, it may be better to have this function on the kprobe side and
store a flag indicating that such an architecture-dependent offset was
added. That would be more natural.
Thanks!
>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> kernel/trace/bpf_trace.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0a5c4efc73c3..f81adabda38c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> {
> u32 instr;
>
> - /* Being extra safe in here in case entry ip is on the page-edge. */
> - if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> - return fentry_ip;
> + /* We want to be extra safe in case entry ip is on the page edge,
> + * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> + */
> + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> + return fentry_ip;
> + } else {
> + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> + }
> if (is_endbr(instr))
> fentry_ip -= ENDBR_INSN_SIZE;
> return fentry_ip;
> --
> 2.43.0
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Jiri Olsa @ 2024-03-20 8:34 UTC
To: Masami Hiramatsu
Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team,
Peter Zijlstra
On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> On Tue, 19 Mar 2024 14:20:13 -0700
> Andrii Nakryiko <andrii@kernel.org> wrote:
>
> > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > is not free and it does pop up in performance profiles when
> > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> >
> > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > boundary. We do that by masking lowest 12 bits and checking if they are
> > >= 4, in which case we can do direct memory read.
> >
> > Another benefit (and actually what caused a closer look at this part of
> > code) is that now LBR record is (typically) not wasted on
> > copy_from_kernel_nofault() call and code, which helps tools like
> > retsnoop that grab LBR records from inside BPF code in kretprobes.
I think this is a nice improvement
Acked-by: Jiri Olsa <jolsa@kernel.org>
>
> Hmm, we may better to have this function in kprobe side and
> store a flag which such architecture dependent offset is added.
> That is more natural.
I like the idea of a new flag saying the address was adjusted for endbr.
kprobe adjusts the address in arch_adjust_kprobe_addr(), so the flag could
easily be added in there, and then we'd adjust the address in get_entry_ip()
accordingly.
jirka
>
> Thanks!
>
> >
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > kernel/trace/bpf_trace.c | 12 +++++++++---
> > 1 file changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 0a5c4efc73c3..f81adabda38c 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > {
> > u32 instr;
> >
> > - /* Being extra safe in here in case entry ip is on the page-edge. */
> > - if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > - return fentry_ip;
> > + /* We want to be extra safe in case entry ip is on the page edge,
> > + * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > + */
> > + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > + return fentry_ip;
> > + } else {
> > + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > + }
> > if (is_endbr(instr))
> > fentry_ip -= ENDBR_INSN_SIZE;
> > return fentry_ip;
> > --
> > 2.43.0
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Andrii Nakryiko @ 2024-03-20 17:46 UTC
To: Jiri Olsa
Cc: Masami Hiramatsu, Andrii Nakryiko, bpf, ast, daniel, martin.lau,
kernel-team, Peter Zijlstra
On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > On Tue, 19 Mar 2024 14:20:13 -0700
> > Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > is not free and it does pop up in performance profiles when
> > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > >
> > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > >= 4, in which case we can do direct memory read.
> > >
> > > Another benefit (and actually what caused a closer look at this part of
> > > code) is that now LBR record is (typically) not wasted on
> > > copy_from_kernel_nofault() call and code, which helps tools like
> > > retsnoop that grab LBR records from inside BPF code in kretprobes.
>
> I think this is nice improvement
>
> Acked-by: Jiri Olsa <jolsa@kernel.org>
>
Masami, are you ok if we land this rather straightforward fix in the
bpf-next tree for now, and then you or someone a bit more familiar
with ftrace/kprobe internals can generalize it later?
> >
> > Hmm, we may better to have this function in kprobe side and
> > store a flag which such architecture dependent offset is added.
> > That is more natural.
>
> I like the idea of new flag saying the address was adjusted for endbr
>
instead of a flag, can the kprobe low-level infrastructure just provide
the "effective fentry ip" without any flags, so that the BPF side of
things doesn't have to care?
> kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> easily added in there and then we'd adjust the address in get_entry_ip
> accordingly
>
> jirka
>
> >
> > Thanks!
> >
> > >
> > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > ---
> > > kernel/trace/bpf_trace.c | 12 +++++++++---
> > > 1 file changed, 9 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > index 0a5c4efc73c3..f81adabda38c 100644
> > > --- a/kernel/trace/bpf_trace.c
> > > +++ b/kernel/trace/bpf_trace.c
> > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > > {
> > > u32 instr;
> > >
> > > - /* Being extra safe in here in case entry ip is on the page-edge. */
> > > - if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > - return fentry_ip;
> > > + /* We want to be extra safe in case entry ip is on the page edge,
> > > + * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > + */
> > > + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > + return fentry_ip;
> > > + } else {
> > > + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > + }
> > > if (is_endbr(instr))
> > > fentry_ip -= ENDBR_INSN_SIZE;
> > > return fentry_ip;
> > > --
> > > 2.43.0
> > >
> >
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Masami Hiramatsu @ 2024-03-20 23:46 UTC
To: Andrii Nakryiko
Cc: Jiri Olsa, Masami Hiramatsu, Andrii Nakryiko, bpf, ast, daniel,
martin.lau, kernel-team, Peter Zijlstra
On Wed, 20 Mar 2024 10:46:54 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > > On Tue, 19 Mar 2024 14:20:13 -0700
> > > Andrii Nakryiko <andrii@kernel.org> wrote:
> > >
> > > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > > is not free and it does pop up in performance profiles when
> > > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > > >
> > > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > > >= 4, in which case we can do direct memory read.
> > > >
> > > > Another benefit (and actually what caused a closer look at this part of
> > > > code) is that now LBR record is (typically) not wasted on
> > > > copy_from_kernel_nofault() call and code, which helps tools like
> > > > retsnoop that grab LBR records from inside BPF code in kretprobes.
> >
> > I think this is nice improvement
> >
> > Acked-by: Jiri Olsa <jolsa@kernel.org>
> >
>
> Masami, are you ok if we land this rather straightforward fix in
> bpf-next tree for now, and then you or someone a bit more familiar
> with ftrace/kprobe internals can generalize this in a more generic
> way?
I'm OK with this change as a short-term fix. As far as I can see, the
kprobe-side change may involve more kprobe-internal changes, so
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
> > >
> > > Hmm, we may better to have this function in kprobe side and
> > > store a flag which such architecture dependent offset is added.
> > > That is more natural.
> >
> > I like the idea of new flag saying the address was adjusted for endbr
> >
>
> instead of a flag, can kprobe low-level infrastructure just provide
> "effective fentry ip" without any flags, so that BPF side of things
> don't have to care?
It's possible, but it is a bit BPF-specific and not a good fit for kprobe
itself. I think we can add it in trace_kprobe instead of kprobe, which
can be accessed from struct kprobe *kp.
Thank you,
>
> > kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> > easily added in there and then we'd adjust the address in get_entry_ip
> > accordingly
> >
> > jirka
> >
> > >
> > > Thanks!
> > >
> > > >
> > > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > ---
> > > > kernel/trace/bpf_trace.c | 12 +++++++++---
> > > > 1 file changed, 9 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > index 0a5c4efc73c3..f81adabda38c 100644
> > > > --- a/kernel/trace/bpf_trace.c
> > > > +++ b/kernel/trace/bpf_trace.c
> > > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > > > {
> > > > u32 instr;
> > > >
> > > > - /* Being extra safe in here in case entry ip is on the page-edge. */
> > > > - if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > > - return fentry_ip;
> > > > + /* We want to be extra safe in case entry ip is on the page edge,
> > > > + * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > > + */
> > > > + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > > + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > > + return fentry_ip;
> > > > + } else {
> > > > + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > > + }
> > > > if (is_endbr(instr))
> > > > fentry_ip -= ENDBR_INSN_SIZE;
> > > > return fentry_ip;
> > > > --
> > > > 2.43.0
> > > >
> > >
> > >
> > > --
> > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > >
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: Andrii Nakryiko @ 2024-03-21 16:16 UTC
To: Masami Hiramatsu
Cc: Jiri Olsa, Andrii Nakryiko, bpf, ast, daniel, martin.lau,
kernel-team, Peter Zijlstra
On Wed, Mar 20, 2024 at 4:46 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 20 Mar 2024 10:46:54 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > > > On Tue, 19 Mar 2024 14:20:13 -0700
> > > > Andrii Nakryiko <andrii@kernel.org> wrote:
> > > >
> > > > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > > > is not free and it does pop up in performance profiles when
> > > > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > > > >
> > > > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > > > >= 4, in which case we can do direct memory read.
> > > > >
> > > > > Another benefit (and actually what caused a closer look at this part of
> > > > > code) is that now LBR record is (typically) not wasted on
> > > > > copy_from_kernel_nofault() call and code, which helps tools like
> > > > > retsnoop that grab LBR records from inside BPF code in kretprobes.
> > >
> > > I think this is nice improvement
> > >
> > > Acked-by: Jiri Olsa <jolsa@kernel.org>
> > >
> >
> > Masami, are you ok if we land this rather straightforward fix in
> > bpf-next tree for now, and then you or someone a bit more familiar
> > with ftrace/kprobe internals can generalize this in a more generic
> > way?
>
> I'm OK for this change for short term fix. As far as I can see, the
> kprobe-side change may involve more kprobe internal changes, so
>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
Great, thank you!
> >
> > > >
> > > > Hmm, we may better to have this function in kprobe side and
> > > > store a flag which such architecture dependent offset is added.
> > > > That is more natural.
> > >
> > > I like the idea of new flag saying the address was adjusted for endbr
> > >
> >
> > instead of a flag, can kprobe low-level infrastructure just provide
> > "effective fentry ip" without any flags, so that BPF side of things
> > don't have to care?
>
> It's possible. But it is a bit only for BPF and not fit to kprobe
> itself. I think we can add it in trace_kprobe instead of kprobe,
> which can be accessed from struct kprobe *kp.
sure, if it can be just an "endbr64 offset" instead of a true/false flag,
that would help avoid extra conditionals in the hot path (which waste
LBR records in some modes, and those matter in some applications)
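For illustration only, a very rough sketch of that direction; the struct
and field names below are hypothetical, not the existing kprobe or
trace_kprobe API:

/* hypothetical sketch: remember the ENDBR adjustment once, at probe
 * registration time (e.g. where arch_adjust_kprobe_addr() moves the
 * address), so the per-event hot path needs no branch and no read of
 * kernel text */
struct trace_kprobe_sketch {
	/* ... existing trace_kprobe fields ... */
	unsigned int entry_adjust;	/* 0, or ENDBR_INSN_SIZE if the
					 * probed address was moved past
					 * an endbr64 */
};

static unsigned long get_entry_ip_sketch(struct trace_kprobe_sketch *tk,
					 unsigned long fentry_ip)
{
	return fentry_ip - tk->entry_adjust;
}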
>
> Thank you,
>
> >
> > > kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> > > easily added in there and then we'd adjust the address in get_entry_ip
> > > accordingly
> > >
> > > jirka
> > >
> > > >
> > > > Thanks!
> > > >
> > > > >
> > > > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > > ---
> > > > > kernel/trace/bpf_trace.c | 12 +++++++++---
> > > > > 1 file changed, 9 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > > index 0a5c4efc73c3..f81adabda38c 100644
> > > > > --- a/kernel/trace/bpf_trace.c
> > > > > +++ b/kernel/trace/bpf_trace.c
> > > > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > > > > {
> > > > > u32 instr;
> > > > >
> > > > > - /* Being extra safe in here in case entry ip is on the page-edge. */
> > > > > - if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > > > - return fentry_ip;
> > > > > + /* We want to be extra safe in case entry ip is on the page edge,
> > > > > + * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > > > + */
> > > > > + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > > > + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > > > + return fentry_ip;
> > > > > + } else {
> > > > > + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > > > + }
> > > > > if (is_endbr(instr))
> > > > > fentry_ip -= ENDBR_INSN_SIZE;
> > > > > return fentry_ip;
> > > > > --
> > > > > 2.43.0
> > > > >
> > > >
> > > >
> > > > --
> > > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
From: patchwork-bot+netdevbpf @ 2024-03-25 16:10 UTC
To: Andrii Nakryiko
Cc: bpf, ast, daniel, martin.lau, kernel-team, mhiramat, peterz
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:
On Tue, 19 Mar 2024 14:20:13 -0700 you wrote:
> get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> is not free and it does pop up in performance profiles when
> kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
>
> Let's avoid using it if we know that fentry_ip - 4 can't cross page
> boundary. We do that by masking lowest 12 bits and checking if they are
> >= 4, in which case we can do direct memory read.
>
> [...]
Here is the summary with links:
- [bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
https://git.kernel.org/bpf/bpf-next/c/a8497506cd2c
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html