* [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling
2017-06-01 10:48 [PATCH v2 0/4] ftrace/kprobe fixes Naveen N. Rao
@ 2017-06-01 10:48 ` Naveen N. Rao
2017-06-08 13:43 ` Masami Hiramatsu
2017-06-19 12:22 ` [v2, " Michael Ellerman
2017-06-01 10:48 ` [PATCH v2 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS Naveen N. Rao
` (2 subsequent siblings)
From: Naveen N. Rao @ 2017-06-01 10:48 UTC
To: Michael Ellerman
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
This fixes a crash when function_graph and jprobes are used together.
This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
conflict between jprobes and function graph tracing"), but for powerpc.
Jprobes breaks function_graph tracing since the jprobe hook needs to use
jprobe_return(), which never returns to the hook but instead to the
original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and re-enable it
when returning to the original jprobe'd function.
Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
v2: No changes since v1.
arch/powerpc/kernel/kprobes.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e1fb95385a9e..0554d6e66194 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
#endif
+ /*
+ * jprobes use jprobe_return() which skips the normal return
+ * path of the function, and this messes up the accounting of the
+ * function graph tracer.
+ *
+ * Pause function graph tracing while performing the jprobe function.
+ */
+ pause_graph_tracing();
+
return 1;
}
NOKPROBE_SYMBOL(setjmp_pre_handler);
@@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
* saved regs...
*/
memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
+ /* It's OK to start function graph tracing again */
+ unpause_graph_tracing();
preempt_enable_no_resched();
return 1;
}
--
2.12.2
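For context, the jprobes usage pattern that trips the tracer looks roughly
like the sketch below, modelled on the in-tree
samples/kprobes/jprobe_example.c (the probed symbol, handler body and module
names here are illustrative). pause_graph_tracing()/unpause_graph_tracing()
just bump a per-task pause counter, so pairing them across
setjmp_pre_handler() and longjmp_break_handler() as the patch does is safe.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* The handler must mirror the probed function's signature
 * (_do_fork() as of v4.12 here). */
static long j_do_fork(unsigned long clone_flags, unsigned long stack_start,
		      unsigned long stack_size, int __user *parent_tidptr,
		      int __user *child_tidptr, unsigned long tls)
{
	pr_info("jprobe: clone_flags = 0x%lx\n", clone_flags);

	/*
	 * jprobe_return() never returns here: control goes back into the
	 * probed function, so the handler's function_graph entry is never
	 * paired with an exit -- which is why the patch pauses
	 * function_graph tracing around the jprobe hook.
	 */
	jprobe_return();
	return 0;	/* unreachable */
}

static struct jprobe my_jprobe = {
	.entry	= j_do_fork,
	.kp	= { .symbol_name = "_do_fork" },
};

static int __init jexample_init(void)
{
	return register_jprobe(&my_jprobe);
}

static void __exit jexample_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(jexample_init);
module_exit(jexample_exit);
MODULE_LICENSE("GPL");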
* Re: [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling
2017-06-01 10:48 ` [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
@ 2017-06-08 13:43 ` Masami Hiramatsu
2017-06-08 15:33 ` Steven Rostedt
2017-06-19 12:22 ` [v2, " Michael Ellerman
From: Masami Hiramatsu @ 2017-06-08 13:43 UTC
To: Naveen N. Rao
Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
Steven Rostedt, linuxppc-dev
On Thu, 1 Jun 2017 16:18:15 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> This fixes a crash when function_graph and jprobes are used together.
> This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> conflict between jprobes and function graph tracing"), but for powerpc.
>
> Jprobes breaks function_graph tracing since the jprobe hook needs to use
> jprobe_return(), which never returns to the hook but instead to the
> original jprobe'd function. The solution is to momentarily pause
> function_graph tracing before invoking the jprobe hook and re-enable it
> when returning to the original jprobe'd function.
>
> Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Thanks!
> ---
> v2: No changes since v1.
>
> arch/powerpc/kernel/kprobes.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e1fb95385a9e..0554d6e66194 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
> regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
> #endif
>
> + /*
> + * jprobes use jprobe_return() which skips the normal return
> + * path of the function, and this messes up the accounting of the
> + * function graph tracer.
> + *
> + * Pause function graph tracing while performing the jprobe function.
> + */
> + pause_graph_tracing();
> +
> return 1;
> }
> NOKPROBE_SYMBOL(setjmp_pre_handler);
> @@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
> * saved regs...
> */
> memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
> + /* It's OK to start function graph tracing again */
> + unpause_graph_tracing();
> preempt_enable_no_resched();
> return 1;
> }
> --
> 2.12.2
>
--
Masami Hiramatsu <mhiramat@kernel.org>
* Re: [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling
2017-06-08 13:43 ` Masami Hiramatsu
@ 2017-06-08 15:33 ` Steven Rostedt
From: Steven Rostedt @ 2017-06-08 15:33 UTC
To: Masami Hiramatsu
Cc: Naveen N. Rao, Michael Ellerman, Ananth N Mavinakayanahalli,
linuxppc-dev
On Thu, 8 Jun 2017 22:43:54 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:
> On Thu, 1 Jun 2017 16:18:15 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> > This fixes a crash when function_graph and jprobes are used together.
> > This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> > conflict between jprobes and function graph tracing"), but for powerpc.
> >
> > Jprobes breaks function_graph tracing since the jprobe hook needs to use
> > jprobe_return(), which never returns to the hook but instead to the
> > original jprobe'd function. The solution is to momentarily pause
> > function_graph tracing before invoking the jprobe hook and re-enable it
> > when returning to the original jprobe'd function.
> >
> > Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> > Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
For what it's worth...
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-- Steve
>
> Thanks!
>
> > ---
> > v2: No changes since v1.
> >
> > arch/powerpc/kernel/kprobes.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index e1fb95385a9e..0554d6e66194 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
> > regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
> > #endif
> >
> > + /*
> > + * jprobes use jprobe_return() which skips the normal return
> > + * path of the function, and this messes up the accounting of the
> > + * function graph tracer.
> > + *
> > + * Pause function graph tracing while performing the jprobe function.
> > + */
> > + pause_graph_tracing();
> > +
> > return 1;
> > }
> > NOKPROBE_SYMBOL(setjmp_pre_handler);
> > @@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
> > * saved regs...
> > */
> > memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
> > + /* It's OK to start function graph tracing again */
> > + unpause_graph_tracing();
> > preempt_enable_no_resched();
> > return 1;
> > }
> > --
> > 2.12.2
> >
>
>
* Re: [v2, 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling
2017-06-01 10:48 ` [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
2017-06-08 13:43 ` Masami Hiramatsu
@ 2017-06-19 12:22 ` Michael Ellerman
From: Michael Ellerman @ 2017-06-19 12:22 UTC
To: Naveen N. Rao; +Cc: linuxppc-dev, Masami Hiramatsu, Steven Rostedt
On Thu, 2017-06-01 at 10:48:15 UTC, "Naveen N. Rao" wrote:
> This fixes a crash when function_graph and jprobes are used together.
> This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> conflict between jprobes and function graph tracing"), but for powerpc.
>
> Jprobes breaks function_graph tracing since the jprobe hook needs to use
> jprobe_return(), which never returns to the hook but instead to the
> original jprobe'd function. The solution is to momentarily pause
> function_graph tracing before invoking the jprobe hook and re-enable it
> when returning to the original jprobe'd function.
>
> Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Applied to powerpc fixes, thanks.
https://git.kernel.org/powerpc/c/a9f8553e935f26cb5447f67e280946
cheers
* [PATCH v2 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS
2017-06-01 10:48 [PATCH v2 0/4] ftrace/kprobe fixes Naveen N. Rao
2017-06-01 10:48 ` [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
@ 2017-06-01 10:48 ` Naveen N. Rao
2017-06-19 12:22 ` [v2, " Michael Ellerman
2017-06-01 10:48 ` [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
2017-06-01 10:48 ` [PATCH v2 4/4] powerpc/xmon: Disable ftrace while in xmon Naveen N. Rao
From: Naveen N. Rao @ 2017-06-01 10:48 UTC
To: Michael Ellerman
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
For DYNAMIC_FTRACE_WITH_REGS, we should be passing in the original set
of registers in pt_regs, to capture the state _before_ ftrace_caller.
However, we are instead passing the stack pointer *after* allocating a
stack frame in ftrace_caller. Fix this by saving the proper value of r1
in pt_regs. Also, use SAVE_10GPRS() to simplify the code.
Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
v2: No changes since v1.
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 7c933a99f5d5..fa0921410fa4 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -45,10 +45,14 @@ _GLOBAL(ftrace_caller)
stdu r1,-SWITCH_FRAME_SIZE(r1)
/* Save all gprs to pt_regs */
- SAVE_8GPRS(0,r1)
- SAVE_8GPRS(8,r1)
- SAVE_8GPRS(16,r1)
- SAVE_8GPRS(24,r1)
+ SAVE_GPR(0, r1)
+ SAVE_10GPRS(2, r1)
+ SAVE_10GPRS(12, r1)
+ SAVE_10GPRS(22, r1)
+
+ /* Save previous stack pointer (r1) */
+ addi r8, r1, SWITCH_FRAME_SIZE
+ std r8, GPR1(r1)
/* Load special regs for save below */
mfmsr r8
@@ -103,10 +107,10 @@ ftrace_call:
#endif
/* Restore gprs */
- REST_8GPRS(0,r1)
- REST_8GPRS(8,r1)
- REST_8GPRS(16,r1)
- REST_8GPRS(24,r1)
+ REST_GPR(0,r1)
+ REST_10GPRS(2,r1)
+ REST_10GPRS(12,r1)
+ REST_10GPRS(22,r1)
/* Restore possibly modified LR */
ld r0, _LINK(r1)
--
2.12.2
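To see why the saved GPR1 value matters, consider a register-saving ftrace
callback that walks the stack from pt_regs -- a sketch, not code from this
series (the callback and ops names are made up). Before the fix,
regs->gpr[1] held the stack pointer *after* the stdu in ftrace_caller, so
the first back-chain hop landed in ftrace_caller's own SWITCH_FRAME_SIZE
frame rather than in the traced function's caller.

#include <linux/ftrace.h>
#include <linux/printk.h>

static void notrace sp_check_callback(unsigned long ip, unsigned long parent_ip,
				      struct ftrace_ops *ops, struct pt_regs *regs)
{
	/* With this fix, gpr[1] is the traced function's stack pointer. */
	unsigned long sp = regs->gpr[1];
	/* ppc64 ABI: the stack pointer points at the back-chain word. */
	unsigned long back_chain = *(unsigned long *)sp;

	pr_info("traced %ps: sp = 0x%lx, caller's frame = 0x%lx\n",
		(void *)ip, sp, back_chain);
}

static struct ftrace_ops sp_check_ops = {
	.func	= sp_check_callback,
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

Registering this with register_ftrace_function() on a pre-fix kernel would
show sp pointing one frame too deep.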
* Re: [v2, 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS
2017-06-01 10:48 ` [PATCH v2 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS Naveen N. Rao
@ 2017-06-19 12:22 ` Michael Ellerman
From: Michael Ellerman @ 2017-06-19 12:22 UTC
To: Naveen N. Rao; +Cc: linuxppc-dev, Masami Hiramatsu, Steven Rostedt
On Thu, 2017-06-01 at 10:48:16 UTC, "Naveen N. Rao" wrote:
> For DYNAMIC_FTRACE_WITH_REGS, we should be passing in the original set
> of registers in pt_regs, to capture the state _before_ ftrace_caller.
> However, we are instead passing the stack pointer *after* allocating a
> stack frame in ftrace_caller. Fix this by saving the proper value of r1
> in pt_regs. Also, use SAVE_10GPRS() to simplify the code.
>
> Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Applied to powerpc fixes, thanks.
https://git.kernel.org/powerpc/c/a4979a7e71eb8da976cbe4a0a1fa50
cheers
* [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
2017-06-01 10:48 [PATCH v2 0/4] ftrace/kprobe fixes Naveen N. Rao
2017-06-01 10:48 ` [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
2017-06-01 10:48 ` [PATCH v2 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS Naveen N. Rao
@ 2017-06-01 10:48 ` Naveen N. Rao
2017-06-08 13:59 ` Masami Hiramatsu
` (2 more replies)
2017-06-01 10:48 ` [PATCH v2 4/4] powerpc/xmon: Disable ftrace while in xmon Naveen N. Rao
From: Naveen N. Rao @ 2017-06-01 10:48 UTC
To: Michael Ellerman
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
ftrace_caller() depends on a modified regs->nip to detect if a certain
function has been livepatched. However, with KPROBES_ON_FTRACE, it is
possible for regs->nip to have been modified by the kprobes pre_handler
(jprobes, for instance). In this case, we do not want to invoke the
livepatch_handler so as not to consume the livepatch stack.
To distinguish between the two (kprobes and livepatch), we check if
there is an active kprobe on the current function. If there is, then we
know for sure that it must have modified the NIP as we don't support
livepatching a kprobe'd function. In this case, we simply skip the
livepatch_handler and branch to the new NIP. Otherwise, the
livepatch_handler is invoked.
Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
v2: Changed to use current_kprobe->addr to determine jprobe vs.
livepatch.
arch/powerpc/include/asm/kprobes.h | 1 +
arch/powerpc/kernel/kprobes.c | 6 +++++
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 32 ++++++++++++++++++++++----
3 files changed, 35 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index a83821f33ea3..8814a7249ceb 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -103,6 +103,7 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_handler(struct pt_regs *regs);
extern int kprobe_post_handler(struct pt_regs *regs);
+extern int is_current_kprobe_addr(unsigned long addr);
#ifdef CONFIG_KPROBES_ON_FTRACE
extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb);
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 0554d6e66194..ec9d94c82fbd 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -43,6 +43,12 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
+int is_current_kprobe_addr(unsigned long addr)
+{
+ struct kprobe *p = kprobe_running();
+ return (p && (unsigned long)p->addr == addr) ? 1 : 0;
+}
+
bool arch_within_kprobe_blacklist(unsigned long addr)
{
return (addr >= (unsigned long)__kprobes_text_start &&
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index fa0921410fa4..e6837e85ec28 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -99,13 +99,37 @@ ftrace_call:
bl ftrace_stub
nop
- /* Load ctr with the possibly modified NIP */
- ld r3, _NIP(r1)
- mtctr r3
+ /* Load the possibly modified NIP */
+ ld r15, _NIP(r1)
+
#ifdef CONFIG_LIVEPATCH
- cmpd r14,r3 /* has NIP been altered? */
+ cmpd r14, r15 /* has NIP been altered? */
+#endif
+
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
+ beq 1f
+
+ /* Check if there is an active kprobe on us */
+ subi r3, r14, 4
+ bl is_current_kprobe_addr
+ nop
+
+ /*
+ * If r3 == 1, then this is a kprobe/jprobe.
+ * else, this is livepatched function.
+ *
+ * The subsequent conditional branch for livepatch_handler
+ * will use the result of this compare. For kprobe/jprobe,
+ * we just need to branch to the new NIP, so nothing special
+ * is needed.
+ */
+ cmpdi r3, 1
+1:
#endif
+ /* Load CTR with the possibly modified NIP */
+ mtctr r15
+
/* Restore gprs */
REST_GPR(0,r1)
REST_10GPRS(2,r1)
--
2.12.2
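If the asm is hard to follow, the decision it encodes can be restated in C
along these lines (purely illustrative -- the real logic must stay in asm
because it runs on ftrace_caller's hand-built frame). Here r14 holds the
NIP saved at entry, r15 the possibly-modified NIP, and the probed ftrace
location sits 4 bytes before the saved NIP, which is what the
"subi r3, r14, 4" computes.

#include <linux/kprobes.h>	/* is_current_kprobe_addr(), per this patch */

static bool want_livepatch_handler(unsigned long orig_nip,	/* r14 */
				   unsigned long new_nip,	/* r15, _NIP(r1) */
				   unsigned long ftrace_ip)	/* r14 - 4 */
{
	if (new_nip == orig_nip)
		return false;	/* NIP untouched: plain return ("beq 1f") */

	if (is_current_kprobe_addr(ftrace_ip))
		return false;	/* a kprobe/jprobe moved NIP: just branch to it */

	return true;		/* livepatch moved NIP: take livepatch_handler */
}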
* Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
2017-06-01 10:48 ` [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
@ 2017-06-08 13:59 ` Masami Hiramatsu
2017-06-15 11:19 ` Michael Ellerman
2017-06-19 12:22 ` [v2, " Michael Ellerman
From: Masami Hiramatsu @ 2017-06-08 13:59 UTC
To: Naveen N. Rao
Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
Steven Rostedt, linuxppc-dev
On Thu, 1 Jun 2017 16:18:17 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> ftrace_caller() depends on a modified regs->nip to detect if a certain
> function has been livepatched. However, with KPROBES_ON_FTRACE, it is
> possible for regs->nip to have been modified by the kprobes pre_handler
> (jprobes, for instance). In this case, we do not want to invoke the
> livepatch_handler so as not to consume the livepatch stack.
>
> To distinguish between the two (kprobes and livepatch), we check if
> there is an active kprobe on the current function. If there is, then we
> know for sure that it must have modified the NIP as we don't support
> livepatching a kprobe'd function. In this case, we simply skip the
> livepatch_handler and branch to the new NIP. Otherwise, the
> livepatch_handler is invoked.
OK, this logic seems good to me, but I would like to leave it to the
PPC64 maintainer too.
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Thanks!
>
> Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
> v2: Changed to use current_kprobe->addr to determine jprobe vs.
> livepatch.
>
> arch/powerpc/include/asm/kprobes.h | 1 +
> arch/powerpc/kernel/kprobes.c | 6 +++++
> arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 32 ++++++++++++++++++++++----
> 3 files changed, 35 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index a83821f33ea3..8814a7249ceb 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -103,6 +103,7 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
> extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
> extern int kprobe_handler(struct pt_regs *regs);
> extern int kprobe_post_handler(struct pt_regs *regs);
> +extern int is_current_kprobe_addr(unsigned long addr);
> #ifdef CONFIG_KPROBES_ON_FTRACE
> extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
> struct kprobe_ctlblk *kcb);
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 0554d6e66194..ec9d94c82fbd 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -43,6 +43,12 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>
> struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>
> +int is_current_kprobe_addr(unsigned long addr)
> +{
> + struct kprobe *p = kprobe_running();
> + return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +}
> +
> bool arch_within_kprobe_blacklist(unsigned long addr)
> {
> return (addr >= (unsigned long)__kprobes_text_start &&
> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index fa0921410fa4..e6837e85ec28 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -99,13 +99,37 @@ ftrace_call:
> bl ftrace_stub
> nop
>
> - /* Load ctr with the possibly modified NIP */
> - ld r3, _NIP(r1)
> - mtctr r3
> + /* Load the possibly modified NIP */
> + ld r15, _NIP(r1)
> +
> #ifdef CONFIG_LIVEPATCH
> - cmpd r14,r3 /* has NIP been altered? */
> + cmpd r14, r15 /* has NIP been altered? */
> +#endif
> +
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> + beq 1f
> +
> + /* Check if there is an active kprobe on us */
> + subi r3, r14, 4
> + bl is_current_kprobe_addr
> + nop
> +
> + /*
> + * If r3 == 1, then this is a kprobe/jprobe.
> + * else, this is livepatched function.
> + *
> + * The subsequent conditional branch for livepatch_handler
> + * will use the result of this compare. For kprobe/jprobe,
> + * we just need to branch to the new NIP, so nothing special
> + * is needed.
> + */
> + cmpdi r3, 1
> +1:
> #endif
>
> + /* Load CTR with the possibly modified NIP */
> + mtctr r15
> +
> /* Restore gprs */
> REST_GPR(0,r1)
> REST_10GPRS(2,r1)
> --
> 2.12.2
>
--
Masami Hiramatsu <mhiramat@kernel.org>
* Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
2017-06-01 10:48 ` [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
2017-06-08 13:59 ` Masami Hiramatsu
@ 2017-06-15 11:19 ` Michael Ellerman
2017-06-15 15:32 ` Naveen N. Rao
2017-06-19 12:22 ` [v2, " Michael Ellerman
From: Michael Ellerman @ 2017-06-15 11:19 UTC
To: Naveen N. Rao
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index fa0921410fa4..e6837e85ec28 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -99,13 +99,37 @@ ftrace_call:
> bl ftrace_stub
> nop
>
> - /* Load ctr with the possibly modified NIP */
> - ld r3, _NIP(r1)
> - mtctr r3
> + /* Load the possibly modified NIP */
> + ld r15, _NIP(r1)
> +
> #ifdef CONFIG_LIVEPATCH
> - cmpd r14,r3 /* has NIP been altered? */
> + cmpd r14, r15 /* has NIP been altered? */
> +#endif
> +
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> + beq 1f
> +
> + /* Check if there is an active kprobe on us */
> + subi r3, r14, 4
> + bl is_current_kprobe_addr
> + nop
> +
> + /*
> + * If r3 == 1, then this is a kprobe/jprobe.
> + * else, this is livepatched function.
> + *
> + * The subsequent conditional branch for livepatch_handler
> + * will use the result of this compare. For kprobe/jprobe,
> + * we just need to branch to the new NIP, so nothing special
> + * is needed.
> + */
> + cmpdi r3, 1
> +1:
> #endif
I added some more comments. Hopefully I got them right :D
#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
+ /* NIP has not been altered, skip over further checks */
beq 1f
/* Check if there is an active kprobe on us */
@@ -118,10 +119,11 @@ ftrace_call:
* If r3 == 1, then this is a kprobe/jprobe.
* else, this is livepatched function.
*
- * The subsequent conditional branch for livepatch_handler
- * will use the result of this compare. For kprobe/jprobe,
- * we just need to branch to the new NIP, so nothing special
- * is needed.
+ * The conditional branch for livepatch_handler below will use the
+ * result of this comparison. For kprobe/jprobe, we just need to branch to
+ * the new NIP, not call livepatch_handler. The branch below is bne, so we
+ * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want
+ * CR0[EQ] = (r3 == 1).
*/
cmpdi r3, 1
1:
@@ -147,7 +149,10 @@ ftrace_call:
addi r1, r1, SWITCH_FRAME_SIZE
#ifdef CONFIG_LIVEPATCH
- /* Based on the cmpd above, if the NIP was altered handle livepatch */
+ /*
+ * Based on the cmpd or cmpdi above, if the NIP was altered and we're
+ * not on a kprobe/jprobe, then handle livepatch.
+ */
bne- livepatch_handler
#endif
cheers
* Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
2017-06-15 11:19 ` Michael Ellerman
@ 2017-06-15 15:32 ` Naveen N. Rao
From: Naveen N. Rao @ 2017-06-15 15:32 UTC
To: Michael Ellerman
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
On 2017/06/15 09:19PM, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>
> > diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > index fa0921410fa4..e6837e85ec28 100644
> > --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > @@ -99,13 +99,37 @@ ftrace_call:
> > bl ftrace_stub
> > nop
> >
> > - /* Load ctr with the possibly modified NIP */
> > - ld r3, _NIP(r1)
> > - mtctr r3
> > + /* Load the possibly modified NIP */
> > + ld r15, _NIP(r1)
> > +
> > #ifdef CONFIG_LIVEPATCH
> > - cmpd r14,r3 /* has NIP been altered? */
> > + cmpd r14, r15 /* has NIP been altered? */
> > +#endif
> > +
> > +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> > + beq 1f
> > +
> > + /* Check if there is an active kprobe on us */
> > + subi r3, r14, 4
> > + bl is_current_kprobe_addr
> > + nop
> > +
> > + /*
> > + * If r3 == 1, then this is a kprobe/jprobe.
> > + * else, this is livepatched function.
> > + *
> > + * The subsequent conditional branch for livepatch_handler
> > + * will use the result of this compare. For kprobe/jprobe,
> > + * we just need to branch to the new NIP, so nothing special
> > + * is needed.
> > + */
> > + cmpdi r3, 1
> > +1:
> > #endif
>
> I added some more comments. Hopefully I got them right :D
>
> #if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> + /* NIP has not been altered, skip over further checks */
> beq 1f
>
> /* Check if there is an active kprobe on us */
> @@ -118,10 +119,11 @@ ftrace_call:
> * If r3 == 1, then this is a kprobe/jprobe.
> * else, this is livepatched function.
> *
> - * The subsequent conditional branch for livepatch_handler
> - * will use the result of this compare. For kprobe/jprobe,
> - * we just need to branch to the new NIP, so nothing special
> - * is needed.
> + * The conditional branch for livepatch_handler below will use the
> + * result of this comparison. For kprobe/jprobe, we just need to branch to
> + * the new NIP, not call livepatch_handler. The branch below is bne, so we
> + * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want
> + * CR0[EQ] = (r3 == 1).
> */
> cmpdi r3, 1
> 1:
> @@ -147,7 +149,10 @@ ftrace_call:
> addi r1, r1, SWITCH_FRAME_SIZE
>
> #ifdef CONFIG_LIVEPATCH
> - /* Based on the cmpd above, if the NIP was altered handle livepatch */
> + /*
> + * Based on the cmpd or cmpdi above, if the NIP was altered and we're
> + * not on a kprobe/jprobe, then handle livepatch.
> + */
> bne- livepatch_handler
> #endif
Oh yes, this makes the code logic a whole lot clearer and more explicit.
Thanks for expanding on it!
- Naveen
* Re: [v2, 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
2017-06-01 10:48 ` [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
2017-06-08 13:59 ` Masami Hiramatsu
2017-06-15 11:19 ` Michael Ellerman
@ 2017-06-19 12:22 ` Michael Ellerman
From: Michael Ellerman @ 2017-06-19 12:22 UTC
To: Naveen N. Rao; +Cc: linuxppc-dev, Masami Hiramatsu, Steven Rostedt
On Thu, 2017-06-01 at 10:48:17 UTC, "Naveen N. Rao" wrote:
> ftrace_caller() depends on a modified regs->nip to detect if a certain
> function has been livepatched. However, with KPROBES_ON_FTRACE, it is
> possible for regs->nip to have been modified by the kprobes pre_handler
> (jprobes, for instance). In this case, we do not want to invoke the
> livepatch_handler so as not to consume the livepatch stack.
>
> To distinguish between the two (kprobes and livepatch), we check if
> there is an active kprobe on the current function. If there is, then we
> know for sure that it must have modified the NIP as we don't support
> livepatching a kprobe'd function. In this case, we simply skip the
> livepatch_handler and branch to the new NIP. Otherwise, the
> livepatch_handler is invoked.
>
> Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Applied to powerpc fixes, thanks.
https://git.kernel.org/powerpc/c/c05b8c4474c03026aaa7f8872e7836
cheers
* [PATCH v2 4/4] powerpc/xmon: Disable ftrace while in xmon
2017-06-01 10:48 [PATCH v2 0/4] ftrace/kprobe fixes Naveen N. Rao
` (2 preceding siblings ...)
2017-06-01 10:48 ` [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
@ 2017-06-01 10:48 ` Naveen N. Rao
From: Naveen N. Rao @ 2017-06-01 10:48 UTC
To: Michael Ellerman
Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
linuxppc-dev
Disable ftrace when we enter xmon so as not to clobber/pollute the
trace buffer. In addition, we cannot have function_graph enabled while
in xmon since we use setjmp/longjmp, which confuses the function call
history maintained by function_graph.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
v2: Disable ftrace, rather than just pausing function_graph.
arch/powerpc/xmon/xmon.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index a728e1919613..e51ce32d21fd 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -28,6 +28,7 @@
#include <linux/bug.h>
#include <linux/nmi.h>
#include <linux/ctype.h>
+#include <linux/ftrace.h>
#include <asm/debugfs.h>
#include <asm/ptrace.h>
@@ -456,10 +457,13 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
int cpu;
int secondary;
#endif
+ int save_ftrace_enabled;
local_irq_save(flags);
hard_irq_disable();
+ save_ftrace_enabled = __ftrace_enabled_save();
+
bp = in_breakpoint_table(regs->nip, &offset);
if (bp != NULL) {
regs->nip = bp->address + offset;
@@ -654,6 +658,7 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
insert_cpu_bpts();
touch_nmi_watchdog();
+ __ftrace_enabled_restore(save_ftrace_enabled);
local_irq_restore(flags);
return cmd != 'X' && cmd != EOF;
--
2.12.2
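The two helpers used here are generic ftrace API, not something added by
this patch; at the time they were defined in include/linux/ftrace.h
roughly as follows (reproduced as a sketch; the real header also provides
no-op stubs for !CONFIG_FUNCTION_TRACER):

#ifdef CONFIG_FUNCTION_TRACER
extern int ftrace_enabled;

static inline int __ftrace_enabled_save(void)
{
	int saved_ftrace_enabled = ftrace_enabled;

	ftrace_enabled = 0;
	return saved_ftrace_enabled;
}

static inline void __ftrace_enabled_restore(int enabled)
{
	ftrace_enabled = enabled;
}
#endif

So xmon_core() simply latches ftrace_enabled off for the duration of the
debugger session and restores the previous value on exit, which also keeps
function_graph from seeing xmon's setjmp/longjmp control flow.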