linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace
@ 2018-03-16 13:46 Michael Ellerman
  2018-03-16 13:46 ` [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static Michael Ellerman
  2018-03-16 14:40 ` [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace Steven Rostedt
  0 siblings, 2 replies; 6+ messages in thread
From: Michael Ellerman @ 2018-03-16 13:46 UTC (permalink / raw)
  To: rostedt; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

There is a small but non-zero amount of code required by arches to
support non-dynamic (static) ftrace, and more importantly there is the
added work of testing both configurations.

There are also almost no downsides to dynamic ftrace once it's well
tested, other than a small increase in code/data size.

So give arches the option to opt-out of supporting static ftrace.

This is implemented as a DYNAMIC_FTRACE_CHOICE option, which controls
whether DYNAMIC_FTRACE is presented as a user-selectable option or if
it is just enabled based on its dependencies being enabled (because
it's already default y).

Then the CHOICE option depends on an arch *not* selecting
HAVE_DYNAMIC_FTRACE_ONLY. This would be more natural in reverse, as a
HAVE_STATIC_FTRACE option, but that would require updating every arch.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 Documentation/trace/ftrace-design.txt | 12 ++++++++++++
 kernel/trace/Kconfig                  |  9 ++++++++-
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/Documentation/trace/ftrace-design.txt b/Documentation/trace/ftrace-design.txt
index a273dd0bbaaa..50c1d252f01d 100644
--- a/Documentation/trace/ftrace-design.txt
+++ b/Documentation/trace/ftrace-design.txt
@@ -391,3 +391,15 @@ Quick notes:
 	  ftrace_graph_call location with a call to ftrace_graph_caller()
 	- ftrace_disable_ftrace_graph_caller() will runtime patch the
 	  ftrace_graph_call location with nops
+
+HAVE_DYNAMIC_FTRACE_ONLY
+------------------------
+
+An arch can select this option to indicate that it only supports dynamic ftrace,
+and not non-dynamic (static) ftrace.
+
+Once dynamic ftrace is well tested, it is superior to static ftrace in
+basically all respects other than code/data size.
+
+Selecting this option allows an arch to support only dynamic ftrace, removing
+a small amount of code complexity required to support both static and dynamic.
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 0b249e2f0c3c..1998e30d2ea2 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -29,6 +29,9 @@ config HAVE_DYNAMIC_FTRACE
 	help
 	  See Documentation/trace/ftrace-design.txt
 
+config HAVE_DYNAMIC_FTRACE_ONLY
+	bool
+
 config HAVE_DYNAMIC_FTRACE_WITH_REGS
 	bool
 
@@ -488,8 +491,12 @@ config BPF_EVENTS
 config PROBE_EVENTS
 	def_bool n
 
+config DYNAMIC_FTRACE_CHOICE
+	def_bool y
+	depends on !HAVE_DYNAMIC_FTRACE_ONLY
+
 config DYNAMIC_FTRACE
-	bool "enable/disable function tracing dynamically"
+	bool "enable/disable function tracing dynamically" if DYNAMIC_FTRACE_CHOICE
 	depends on FUNCTION_TRACER
 	depends on HAVE_DYNAMIC_FTRACE
 	default y
-- 
2.14.1


* [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static
  2018-03-16 13:46 [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace Michael Ellerman
@ 2018-03-16 13:46 ` Michael Ellerman
  2018-03-16 14:42   ` Steven Rostedt
  2018-03-16 14:40 ` [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace Steven Rostedt
  1 sibling, 1 reply; 6+ messages in thread
From: Michael Ellerman @ 2018-03-16 13:46 UTC (permalink / raw)
  To: rostedt; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

We've had dynamic ftrace support for over 9 years since Steve first
wrote it, all the distros use dynamic, and static is basically
untested these days, so drop support for static ftrace.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig                           |  1 +
 arch/powerpc/include/asm/ftrace.h              |  4 +---
 arch/powerpc/include/asm/module.h              |  5 -----
 arch/powerpc/kernel/trace/ftrace.c             |  2 --
 arch/powerpc/kernel/trace/ftrace_32.S          | 20 ------------------
 arch/powerpc/kernel/trace/ftrace_64.S          | 29 --------------------------
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  3 ---
 arch/powerpc/kernel/trace/ftrace_64_pg.S       |  2 --
 8 files changed, 2 insertions(+), 64 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..23a325df784a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -189,6 +189,7 @@ config PPC
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_DMA_API_DEBUG
 	select HAVE_DYNAMIC_FTRACE
+	select HAVE_DYNAMIC_FTRACE_ONLY
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if MPROFILE_KERNEL
 	select HAVE_EBPF_JIT			if PPC64
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS	if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 9abddde372ab..e6c34d740ee9 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -48,7 +48,6 @@
 #else /* !__ASSEMBLY__ */
 extern void _mcount(void);
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 # define FTRACE_ADDR ((unsigned long)ftrace_caller)
 # define FTRACE_REGS_ADDR FTRACE_ADDR
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
@@ -60,13 +59,12 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 struct dyn_arch_ftrace {
 	struct module *mod;
 };
-#endif /*  CONFIG_DYNAMIC_FTRACE */
 #endif /* __ASSEMBLY__ */
 
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #endif
-#endif
+#endif /* CONFIG_FUNCTION_TRACER */
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && !defined(__ASSEMBLY__)
 #ifdef PPC64_ELF_ABI_v1
diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 7e28442827f1..e09c96d0db69 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -90,11 +90,6 @@ int module_trampoline_target(struct module *mod, unsigned long trampoline,
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs);
-#else
-static inline int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
-{
-	return 0;
-}
 #endif
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 4741fe112f05..c6d196d85260 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -538,7 +538,6 @@ int __init ftrace_dyn_arch_init(void)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 extern void ftrace_graph_call(void);
 extern void ftrace_graph_stub(void);
 
@@ -567,7 +566,6 @@ int ftrace_disable_ftrace_graph_caller(void)
 
 	return ftrace_modify_code(ip, old, new);
 }
-#endif /* CONFIG_DYNAMIC_FTRACE */
 
 /*
  * Hook the return address and push it in the stack of return addrs
diff --git a/arch/powerpc/kernel/trace/ftrace_32.S b/arch/powerpc/kernel/trace/ftrace_32.S
index afef2c076282..2c29098f630f 100644
--- a/arch/powerpc/kernel/trace/ftrace_32.S
+++ b/arch/powerpc/kernel/trace/ftrace_32.S
@@ -14,7 +14,6 @@
 #include <asm/ftrace.h>
 #include <asm/export.h>
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL(mcount)
 _GLOBAL(_mcount)
 	/*
@@ -47,26 +46,7 @@ _GLOBAL(ftrace_graph_stub)
 	MCOUNT_RESTORE_FRAME
 	/* old link register ends up in ctr reg */
 	bctr
-#else
-_GLOBAL(mcount)
-_GLOBAL(_mcount)
-
-	MCOUNT_SAVE_FRAME
 
-	subi	r3, r3, MCOUNT_INSN_SIZE
-	LOAD_REG_ADDR(r5, ftrace_trace_function)
-	lwz	r5,0(r5)
-
-	mtctr	r5
-	bctrl
-	nop
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	b	ftrace_graph_caller
-#endif
-	MCOUNT_RESTORE_FRAME
-	bctr
-#endif
 EXPORT_SYMBOL(_mcount)
 
 _GLOBAL(ftrace_stub)
diff --git a/arch/powerpc/kernel/trace/ftrace_64.S b/arch/powerpc/kernel/trace/ftrace_64.S
index e5ccea19821e..e25f77c10a72 100644
--- a/arch/powerpc/kernel/trace/ftrace_64.S
+++ b/arch/powerpc/kernel/trace/ftrace_64.S
@@ -14,7 +14,6 @@
 #include <asm/ppc-opcode.h>
 #include <asm/export.h>
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL(mcount)
 _GLOBAL(_mcount)
 EXPORT_SYMBOL(_mcount)
@@ -23,34 +22,6 @@ EXPORT_SYMBOL(_mcount)
 	mtlr	r0
 	bctr
 
-#else /* CONFIG_DYNAMIC_FTRACE */
-_GLOBAL_TOC(_mcount)
-EXPORT_SYMBOL(_mcount)
-	/* Taken from output of objdump from lib64/glibc */
-	mflr	r3
-	ld	r11, 0(r1)
-	stdu	r1, -112(r1)
-	std	r3, 128(r1)
-	ld	r4, 16(r11)
-
-	subi	r3, r3, MCOUNT_INSN_SIZE
-	LOAD_REG_ADDR(r5,ftrace_trace_function)
-	ld	r5,0(r5)
-	ld	r5,0(r5)
-	mtctr	r5
-	bctrl
-	nop
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	b	ftrace_graph_caller
-#endif
-	ld	r0, 128(r1)
-	mtlr	r0
-	addi	r1, r1, 112
-_GLOBAL(ftrace_stub)
-	blr
-#endif /* CONFIG_DYNAMIC_FTRACE */
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(return_to_handler)
 	/* need to save return values */
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 3f3e81852422..625f9b758da7 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -17,7 +17,6 @@
 #include <asm/bug.h>
 #include <asm/ptrace.h>
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 /*
  *
  * ftrace_caller() is the function that replaces _mcount() when ftrace is
@@ -236,8 +235,6 @@ livepatch_handler:
 	blr
 #endif /* CONFIG_LIVEPATCH */
 
-#endif /* CONFIG_DYNAMIC_FTRACE */
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(ftrace_graph_caller)
 	stdu	r1, -112(r1)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.S b/arch/powerpc/kernel/trace/ftrace_64_pg.S
index f095358da96e..f18762827e51 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_pg.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_pg.S
@@ -14,7 +14,6 @@
 #include <asm/ppc-opcode.h>
 #include <asm/export.h>
 
-#ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL_TOC(ftrace_caller)
 	/* Taken from output of objdump from lib64/glibc */
 	mflr	r3
@@ -39,7 +38,6 @@ _GLOBAL(ftrace_graph_stub)
 
 _GLOBAL(ftrace_stub)
 	blr
-#endif /* CONFIG_DYNAMIC_FTRACE */
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(ftrace_graph_caller)
-- 
2.14.1


* Re: [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace
  2018-03-16 13:46 [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace Michael Ellerman
  2018-03-16 13:46 ` [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static Michael Ellerman
@ 2018-03-16 14:40 ` Steven Rostedt
  2018-03-19  1:02   ` Michael Ellerman
  1 sibling, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2018-03-16 14:40 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

On Sat, 17 Mar 2018 00:46:32 +1100
Michael Ellerman <mpe@ellerman.id.au> wrote:

> There is a small but non-zero amount of code required by arches to
> support non-dynamic (static) ftrace, and more importantly there is the
> added work of testing both configurations.
> 
> There are also almost no downsides to dynamic ftrace once it's well
> tested, other than a small increase in code/data size.
> 
> So give arches the option to opt-out of supporting static ftrace.
> 
> This is implemented as a DYNAMIC_FTRACE_CHOICE option, which controls
> whether DYNAMIC_FTRACE is presented as a user-selectable option or if
> it is just enabled based on its dependencies being enabled (because
> it's already default y).
> 
> Then the CHOICE option depends on an arch *not* selecting
> HAVE_DYNAMIC_FTRACE_ONLY. This would be more natural in reverse, as a
> HAVE_STATIC_FTRACE option, but that would require updating every arch.
> 
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Why not just add in arch/powerpc/Kconfig:

config PPC
	[..]
	select DYNAMIC_FTRACE			if FUNCTION_TRACER

?

It seems to work for me.

-- Steve


* Re: [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static
  2018-03-16 13:46 ` [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static Michael Ellerman
@ 2018-03-16 14:42   ` Steven Rostedt
  2018-03-19  1:08     ` Michael Ellerman
  0 siblings, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2018-03-16 14:42 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

On Sat, 17 Mar 2018 00:46:33 +1100
Michael Ellerman <mpe@ellerman.id.au> wrote:

> We've had dynamic ftrace support for over 9 years since Steve first
> wrote it, all the distros use dynamic, and static is basically
> untested these days, so drop support for static ftrace.
> 
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  arch/powerpc/Kconfig                           |  1 +
>  arch/powerpc/include/asm/ftrace.h              |  4 +---
>  arch/powerpc/include/asm/module.h              |  5 -----
>  arch/powerpc/kernel/trace/ftrace.c             |  2 --
>  arch/powerpc/kernel/trace/ftrace_32.S          | 20 ------------------
>  arch/powerpc/kernel/trace/ftrace_64.S          | 29 --------------------------
>  arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  3 ---
>  arch/powerpc/kernel/trace/ftrace_64_pg.S       |  2 --
>  8 files changed, 2 insertions(+), 64 deletions(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 73ce5dd07642..23a325df784a 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -189,6 +189,7 @@ config PPC
>  	select HAVE_DEBUG_STACKOVERFLOW
>  	select HAVE_DMA_API_DEBUG
>  	select HAVE_DYNAMIC_FTRACE
> +	select HAVE_DYNAMIC_FTRACE_ONLY

I still think adding:

	select DYNAMIC_FTRACE			if FUNCTION_TRACER

is the better approach.

But I'm all for this patch. I've debated doing the same thing for x86,
but the only reason I have not, was because it's the only way I test
the !DYNAMIC_FTRACE code. I've broken the static function tracing
several times and only find out during my test suite that still tests
that case. But yeah, it would be nice to just nuke static function
tracing for all archs. Perhaps after we finish removing unused archs,
that may be the way to go forward.

-- Steve



>  	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if MPROFILE_KERNEL
>  	select HAVE_EBPF_JIT			if PPC64
>  	select HAVE_EFFICIENT_UNALIGNED_ACCESS	if !(CPU_LITTLE_ENDIAN && POWER7_CPU)


* Re: [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace
  2018-03-16 14:40 ` [RFC PATCH 1/2] ftrace: Allow arches to opt-out of static ftrace Steven Rostedt
@ 2018-03-19  1:02   ` Michael Ellerman
  0 siblings, 0 replies; 6+ messages in thread
From: Michael Ellerman @ 2018-03-19  1:02 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

Steven Rostedt <rostedt@goodmis.org> writes:

> On Sat, 17 Mar 2018 00:46:32 +1100
> Michael Ellerman <mpe@ellerman.id.au> wrote:
>
>> There is a small but non-zero amount of code required by arches to
>> support non-dynamic (static) ftrace, and more importantly there is the
>> added work of testing both configurations.
>> 
>> There are also almost no downsides to dynamic ftrace once it's well
>> tested, other than a small increase in code/data size.
>> 
>> So give arches the option to opt-out of supporting static ftrace.
>> 
>> This is implemented as a DYNAMIC_FTRACE_CHOICE option, which controls
>> whether DYNAMIC_FTRACE is presented as a user-selectable option or if
>> it is just enabled based on its dependencies being enabled (because
>> it's already default y).
>> 
>> Then the CHOICE option depends on an arch *not* selecting
>> HAVE_DYNAMIC_FTRACE_ONLY. This would be more natural in reverse, as a
>> HAVE_STATIC_FTRACE option, but that would require updating every arch.
>> 
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
> Why not just add in arch/powerpc/Kconfig:
>
> config PPC
> 	[..]
> 	select DYNAMIC_FTRACE			if FUNCTION_TRACER
>
> ?
>
> It seems to work for me.

It does work, but it's a bit fragile. It requires duplicating the
dependencies of DYNAMIC_FTRACE in the 'if' condition.

Currently that's:

config DYNAMIC_FTRACE
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE

So technically we should use:

 	select DYNAMIC_FTRACE			if FUNCTION_TRACER && HAVE_DYNAMIC_FTRACE

Though we happen to know we just selected HAVE_DYNAMIC_FTRACE so we can
leave that out.

As long as the dependencies of DYNAMIC_FTRACE don't change, or the 'if'
clause is updated when they do, then it's OK and it is certainly simpler.
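
To make the duplication concrete, here is a sketch (not from the patch) of
what the arch entry would look like under the select-based approach if it
carried DYNAMIC_FTRACE's full dependency list in the 'if' clause:

config PPC
	# ... existing selects ...
	select HAVE_DYNAMIC_FTRACE
	# Duplicate DYNAMIC_FTRACE's 'depends on' list here, so the select
	# never fires when DYNAMIC_FTRACE would be unavailable. If the
	# dependencies of DYNAMIC_FTRACE ever change, this line must be
	# updated to match by hand.
	select DYNAMIC_FTRACE		if FUNCTION_TRACER && HAVE_DYNAMIC_FTRACE

Since PPC selects HAVE_DYNAMIC_FTRACE unconditionally just above, the second
condition is redundant in practice, which is why the shorter form works.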

cheers


* Re: [RFC PATCH 2/2] powerpc: Only support DYNAMIC_FTRACE not static
  2018-03-16 14:42   ` Steven Rostedt
@ 2018-03-19  1:08     ` Michael Ellerman
  0 siblings, 0 replies; 6+ messages in thread
From: Michael Ellerman @ 2018-03-19  1:08 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linuxppc-dev, linux-kernel, linux-arch, naveen.n.rao

Steven Rostedt <rostedt@goodmis.org> writes:

> On Sat, 17 Mar 2018 00:46:33 +1100
> Michael Ellerman <mpe@ellerman.id.au> wrote:
>
>> We've had dynamic ftrace support for over 9 years since Steve first
>> wrote it, all the distros use dynamic, and static is basically
>> untested these days, so drop support for static ftrace.
>> 
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> ---
>>  arch/powerpc/Kconfig                           |  1 +
>>  arch/powerpc/include/asm/ftrace.h              |  4 +---
>>  arch/powerpc/include/asm/module.h              |  5 -----
>>  arch/powerpc/kernel/trace/ftrace.c             |  2 --
>>  arch/powerpc/kernel/trace/ftrace_32.S          | 20 ------------------
>>  arch/powerpc/kernel/trace/ftrace_64.S          | 29 --------------------------
>>  arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  3 ---
>>  arch/powerpc/kernel/trace/ftrace_64_pg.S       |  2 --
>>  8 files changed, 2 insertions(+), 64 deletions(-)
>> 
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index 73ce5dd07642..23a325df784a 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -189,6 +189,7 @@ config PPC
>>  	select HAVE_DEBUG_STACKOVERFLOW
>>  	select HAVE_DMA_API_DEBUG
>>  	select HAVE_DYNAMIC_FTRACE
>> +	select HAVE_DYNAMIC_FTRACE_ONLY
>
> I still think adding:
>
> 	select DYNAMIC_FTRACE			if FUNCTION_TRACER
>
> is the better approach.

OK. As I said in my other reply it's a bit fragile, but it does work.

I'll do a version for powerpc using the above approach.

> But I'm all for this patch. I've debated doing the same thing for x86,
> but the only reason I have not, was because it's the only way I test
> the !DYNAMIC_FTRACE code. I've broken the static function tracing
> several times and only find out during my test suite that still tests
> that case. But yeah, it would be nice to just nuke static function
> tracing for all archs. Perhaps after we finish removing unused archs,
> that may be the way to go forward.

Yeah I did look and we still have some arches that support ftrace but
not dynamic ftrace, but there's not many.

cheers

