* [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups
@ 2015-10-01 11:55 Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 01/25] kernel/trace_probe: is_good_name can be boolean Steven Rostedt
                   ` (24 more replies)
  0 siblings, 25 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

This series adds an options directory to the instance directories. Some options
are still global (the tracer-specific ones), but the formats are not.

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: 37aea98b84c0ce2ac638510fefeed9f8f920bd34


Steven Rostedt (Red Hat) (24):
      tracing: Move non perf code out of perf.h
      tracing: Remove ftrace_trace_stack_regs()
      tracing: Remove unused function trace_current_buffer_lock_reserve()
      tracing: Pass trace_array into trace_buffer_unlock_commit()
      tracing: Make ftrace_trace_stack() static
      tracing: Inject seq_print_userip_objs() into its only user
      tracing: Turn seq_print_user_ip() into a static function
      tracing: Move "display-graph" option to main options
      tracing: Remove unused tracing option "ftrace_preempt"
      tracing: Use enums instead of hard coded bitmasks for TRACE_ITER flags
      tracing: Use TRACE_FLAGS macro to keep enums and strings matched
      tracing: Only create function graph options when it is compiled in
      tracing: Only create branch tracer options when compiled in
      tracing: Do not create function tracer options when not compiled in
      tracing: Only create stacktrace option when STACKTRACE is configured
      tracing: Always show all tracer options in the options directory
      tracing: Add build bug if we have more trace_flags than bits
      tracing: Remove access to trace_flags in trace_printk.c
      tracing: Move sleep-time and graph-time options out of the core trace_flags
      tracing: Move trace_flags from global to a trace_array field
      tracing: Add a method to pass in trace_array descriptor to option files
      tracing: Make ftrace_trace_stack() depend on general trace_array flag
      tracing: Add trace options for core options to instances
      tracing: Add trace options for tracer options to instances

Yaowei Bai (1):
      kernel/trace_probe: is_good_name can be boolean

----
 include/linux/trace_events.h         |  13 +-
 include/trace/perf.h                 | 258 ----------------------
 include/trace/trace_events.h         | 258 ++++++++++++++++++++++
 kernel/trace/blktrace.c              |  11 +-
 kernel/trace/ftrace.c                |  19 +-
 kernel/trace/trace.c                 | 403 ++++++++++++++++++++++-------------
 kernel/trace/trace.h                 | 154 ++++++++-----
 kernel/trace/trace_events.c          |  12 +-
 kernel/trace/trace_functions_graph.c |  63 ++++--
 kernel/trace/trace_irqsoff.c         |  98 +++++----
 kernel/trace/trace_kdb.c             |   8 +-
 kernel/trace/trace_mmiotrace.c       |   4 +-
 kernel/trace/trace_output.c          |  97 ++++-----
 kernel/trace/trace_output.h          |   4 -
 kernel/trace/trace_printk.c          |  14 +-
 kernel/trace/trace_probe.h           |   8 +-
 kernel/trace/trace_sched_wakeup.c    | 112 +++++-----
 kernel/trace/trace_syscalls.c        |   3 +-
 18 files changed, 869 insertions(+), 670 deletions(-)


* [for-next][PATCH 01/25] kernel/trace_probe: is_good_name can be boolean
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 02/25] tracing: Move non perf code out of perf.h Steven Rostedt
                   ` (23 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Yaowei Bai

[-- Attachment #1: 0001-kernel-trace_probe-is_good_name-can-be-boolean.patch --]
[-- Type: text/plain, Size: 1189 bytes --]

From: Yaowei Bai <bywxiaobai@163.com>

This patch makes is_good_name() return bool to improve readability, since
this particular function only ever returns one or zero.

No functional change.

Link: http://lkml.kernel.org/r/1442929393-4753-2-git-send-email-bywxiaobai@163.com

Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_probe.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index b98dee914542..f6398db09114 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -302,15 +302,15 @@ static nokprobe_inline void call_fetch(struct fetch_param *fprm,
 }
 
 /* Check the name is good for event/group/fields */
-static inline int is_good_name(const char *name)
+static inline bool is_good_name(const char *name)
 {
 	if (!isalpha(*name) && *name != '_')
-		return 0;
+		return false;
 	while (*++name != '\0') {
 		if (!isalpha(*name) && !isdigit(*name) && *name != '_')
-			return 0;
+			return false;
 	}
-	return 1;
+	return true;
 }
 
 static inline struct event_file_link *
-- 
2.5.1



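For reference, the bool version above can be exercised on its own. Below is a
minimal user-space sketch of the same check, assuming the standard <ctype.h>
and <stdbool.h> headers in place of the kernel ones; the sample names in
main() are made up for illustration, and the unsigned char casts are only
user-space ctype hygiene:

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

/* Same rule as the kernel helper: '_' or a letter, then letters/digits/'_' */
static bool is_good_name(const char *name)
{
	if (!isalpha((unsigned char)*name) && *name != '_')
		return false;
	while (*++name != '\0') {
		if (!isalpha((unsigned char)*name) &&
		    !isdigit((unsigned char)*name) && *name != '_')
			return false;
	}
	return true;
}

int main(void)
{
	const char *names[] = { "sched_switch", "_probe1", "9bad", "bad-dash" };
	int i;

	/* Prints: good, good, bad, bad */
	for (i = 0; i < 4; i++)
		printf("%-12s -> %s\n", names[i],
		       is_good_name(names[i]) ? "good" : "bad");
	return 0;
}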

* [for-next][PATCH 02/25] tracing: Move non perf code out of perf.h
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 01/25] kernel/trace_probe: is_good_name can be boolean Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 03/25] tracing: Remove ftrace_trace_stack_regs() Steven Rostedt
                   ` (22 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0002-tracing-Move-non-perf-code-out-of-perf.h.patch --]
[-- Type: text/plain, Size: 16814 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Commit ee53bbd17257 ("tracing: Move the perf code out of trace_event.h")
moved more than just the perf code out of trace_event.h; it also pulled out
a bit of the tracing code with it. Move that code back.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/perf.h         | 258 -------------------------------------------
 include/trace/trace_events.h | 258 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 258 insertions(+), 258 deletions(-)

diff --git a/include/trace/perf.h b/include/trace/perf.h
index 1b5443cebedc..26486fcd74ce 100644
--- a/include/trace/perf.h
+++ b/include/trace/perf.h
@@ -1,261 +1,3 @@
-/*
- * Stage 4 of the trace events.
- *
- * Override the macros in <trace/trace_events.h> to include the following:
- *
- * For those macros defined with TRACE_EVENT:
- *
- * static struct trace_event_call event_<call>;
- *
- * static void trace_event_raw_event_<call>(void *__data, proto)
- * {
- *	struct trace_event_file *trace_file = __data;
- *	struct trace_event_call *event_call = trace_file->event_call;
- *	struct trace_event_data_offsets_<call> __maybe_unused __data_offsets;
- *	unsigned long eflags = trace_file->flags;
- *	enum event_trigger_type __tt = ETT_NONE;
- *	struct ring_buffer_event *event;
- *	struct trace_event_raw_<call> *entry; <-- defined in stage 1
- *	struct ring_buffer *buffer;
- *	unsigned long irq_flags;
- *	int __data_size;
- *	int pc;
- *
- *	if (!(eflags & EVENT_FILE_FL_TRIGGER_COND)) {
- *		if (eflags & EVENT_FILE_FL_TRIGGER_MODE)
- *			event_triggers_call(trace_file, NULL);
- *		if (eflags & EVENT_FILE_FL_SOFT_DISABLED)
- *			return;
- *	}
- *
- *	local_save_flags(irq_flags);
- *	pc = preempt_count();
- *
- *	__data_size = trace_event_get_offsets_<call>(&__data_offsets, args);
- *
- *	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
- *				  event_<call>->event.type,
- *				  sizeof(*entry) + __data_size,
- *				  irq_flags, pc);
- *	if (!event)
- *		return;
- *	entry	= ring_buffer_event_data(event);
- *
- *	{ <assign>; }  <-- Here we assign the entries by the __field and
- *			   __array macros.
- *
- *	if (eflags & EVENT_FILE_FL_TRIGGER_COND)
- *		__tt = event_triggers_call(trace_file, entry);
- *
- *	if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT,
- *		     &trace_file->flags))
- *		ring_buffer_discard_commit(buffer, event);
- *	else if (!filter_check_discard(trace_file, entry, buffer, event))
- *		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
- *
- *	if (__tt)
- *		event_triggers_post_call(trace_file, __tt);
- * }
- *
- * static struct trace_event ftrace_event_type_<call> = {
- *	.trace			= trace_raw_output_<call>, <-- stage 2
- * };
- *
- * static char print_fmt_<call>[] = <TP_printk>;
- *
- * static struct trace_event_class __used event_class_<template> = {
- *	.system			= "<system>",
- *	.define_fields		= trace_event_define_fields_<call>,
- *	.fields			= LIST_HEAD_INIT(event_class_##call.fields),
- *	.raw_init		= trace_event_raw_init,
- *	.probe			= trace_event_raw_event_##call,
- *	.reg			= trace_event_reg,
- * };
- *
- * static struct trace_event_call event_<call> = {
- *	.class			= event_class_<template>,
- *	{
- *		.tp			= &__tracepoint_<call>,
- *	},
- *	.event			= &ftrace_event_type_<call>,
- *	.print_fmt		= print_fmt_<call>,
- *	.flags			= TRACE_EVENT_FL_TRACEPOINT,
- * };
- * // its only safe to use pointers when doing linker tricks to
- * // create an array.
- * static struct trace_event_call __used
- * __attribute__((section("_ftrace_events"))) *__event_<call> = &event_<call>;
- *
- */
-
-#ifdef CONFIG_PERF_EVENTS
-
-#define _TRACE_PERF_PROTO(call, proto)					\
-	static notrace void						\
-	perf_trace_##call(void *__data, proto);
-
-#define _TRACE_PERF_INIT(call)						\
-	.perf_probe		= perf_trace_##call,
-
-#else
-#define _TRACE_PERF_PROTO(call, proto)
-#define _TRACE_PERF_INIT(call)
-#endif /* CONFIG_PERF_EVENTS */
-
-#undef __entry
-#define __entry entry
-
-#undef __field
-#define __field(type, item)
-
-#undef __field_struct
-#define __field_struct(type, item)
-
-#undef __array
-#define __array(type, item, len)
-
-#undef __dynamic_array
-#define __dynamic_array(type, item, len)				\
-	__entry->__data_loc_##item = __data_offsets.item;
-
-#undef __string
-#define __string(item, src) __dynamic_array(char, item, -1)
-
-#undef __assign_str
-#define __assign_str(dst, src)						\
-	strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)");
-
-#undef __bitmask
-#define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
-
-#undef __get_bitmask
-#define __get_bitmask(field) (char *)__get_dynamic_array(field)
-
-#undef __assign_bitmask
-#define __assign_bitmask(dst, src, nr_bits)					\
-	memcpy(__get_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))
-
-#undef TP_fast_assign
-#define TP_fast_assign(args...) args
-
-#undef __perf_addr
-#define __perf_addr(a)	(a)
-
-#undef __perf_count
-#define __perf_count(c)	(c)
-
-#undef __perf_task
-#define __perf_task(t)	(t)
-
-#undef DECLARE_EVENT_CLASS
-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
-									\
-static notrace void							\
-trace_event_raw_event_##call(void *__data, proto)			\
-{									\
-	struct trace_event_file *trace_file = __data;			\
-	struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
-	struct trace_event_buffer fbuffer;				\
-	struct trace_event_raw_##call *entry;				\
-	int __data_size;						\
-									\
-	if (trace_trigger_soft_disabled(trace_file))			\
-		return;							\
-									\
-	__data_size = trace_event_get_offsets_##call(&__data_offsets, args); \
-									\
-	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
-				 sizeof(*entry) + __data_size);		\
-									\
-	if (!entry)							\
-		return;							\
-									\
-	tstruct								\
-									\
-	{ assign; }							\
-									\
-	trace_event_buffer_commit(&fbuffer);				\
-}
-/*
- * The ftrace_test_probe is compiled out, it is only here as a build time check
- * to make sure that if the tracepoint handling changes, the ftrace probe will
- * fail to compile unless it too is updated.
- */
-
-#undef DEFINE_EVENT
-#define DEFINE_EVENT(template, call, proto, args)			\
-static inline void ftrace_test_probe_##call(void)			\
-{									\
-	check_trace_callback_type_##call(trace_event_raw_event_##template); \
-}
-
-#undef DEFINE_EVENT_PRINT
-#define DEFINE_EVENT_PRINT(template, name, proto, args, print)
-
-#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
-
-#undef __entry
-#define __entry REC
-
-#undef __print_flags
-#undef __print_symbolic
-#undef __print_hex
-#undef __get_dynamic_array
-#undef __get_dynamic_array_len
-#undef __get_str
-#undef __get_bitmask
-#undef __print_array
-
-#undef TP_printk
-#define TP_printk(fmt, args...) "\"" fmt "\", "  __stringify(args)
-
-#undef DECLARE_EVENT_CLASS
-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
-_TRACE_PERF_PROTO(call, PARAMS(proto));					\
-static char print_fmt_##call[] = print;					\
-static struct trace_event_class __used __refdata event_class_##call = { \
-	.system			= TRACE_SYSTEM_STRING,			\
-	.define_fields		= trace_event_define_fields_##call,	\
-	.fields			= LIST_HEAD_INIT(event_class_##call.fields),\
-	.raw_init		= trace_event_raw_init,			\
-	.probe			= trace_event_raw_event_##call,		\
-	.reg			= trace_event_reg,			\
-	_TRACE_PERF_INIT(call)						\
-};
-
-#undef DEFINE_EVENT
-#define DEFINE_EVENT(template, call, proto, args)			\
-									\
-static struct trace_event_call __used event_##call = {			\
-	.class			= &event_class_##template,		\
-	{								\
-		.tp			= &__tracepoint_##call,		\
-	},								\
-	.event.funcs		= &trace_event_type_funcs_##template,	\
-	.print_fmt		= print_fmt_##template,			\
-	.flags			= TRACE_EVENT_FL_TRACEPOINT,		\
-};									\
-static struct trace_event_call __used					\
-__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
-
-#undef DEFINE_EVENT_PRINT
-#define DEFINE_EVENT_PRINT(template, call, proto, args, print)		\
-									\
-static char print_fmt_##call[] = print;					\
-									\
-static struct trace_event_call __used event_##call = {			\
-	.class			= &event_class_##template,		\
-	{								\
-		.tp			= &__tracepoint_##call,		\
-	},								\
-	.event.funcs		= &trace_event_type_funcs_##call,	\
-	.print_fmt		= print_fmt_##call,			\
-	.flags			= TRACE_EVENT_FL_TRACEPOINT,		\
-};									\
-static struct trace_event_call __used					\
-__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
-
-#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
 
 #undef TRACE_SYSTEM_VAR
 
diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
index 43be3b0e44d3..de996cf61053 100644
--- a/include/trace/trace_events.h
+++ b/include/trace/trace_events.h
@@ -506,3 +506,261 @@ static inline notrace int trace_event_get_offsets_##call(		\
 
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
 
+/*
+ * Stage 4 of the trace events.
+ *
+ * Override the macros in <trace/trace_events.h> to include the following:
+ *
+ * For those macros defined with TRACE_EVENT:
+ *
+ * static struct trace_event_call event_<call>;
+ *
+ * static void trace_event_raw_event_<call>(void *__data, proto)
+ * {
+ *	struct trace_event_file *trace_file = __data;
+ *	struct trace_event_call *event_call = trace_file->event_call;
+ *	struct trace_event_data_offsets_<call> __maybe_unused __data_offsets;
+ *	unsigned long eflags = trace_file->flags;
+ *	enum event_trigger_type __tt = ETT_NONE;
+ *	struct ring_buffer_event *event;
+ *	struct trace_event_raw_<call> *entry; <-- defined in stage 1
+ *	struct ring_buffer *buffer;
+ *	unsigned long irq_flags;
+ *	int __data_size;
+ *	int pc;
+ *
+ *	if (!(eflags & EVENT_FILE_FL_TRIGGER_COND)) {
+ *		if (eflags & EVENT_FILE_FL_TRIGGER_MODE)
+ *			event_triggers_call(trace_file, NULL);
+ *		if (eflags & EVENT_FILE_FL_SOFT_DISABLED)
+ *			return;
+ *	}
+ *
+ *	local_save_flags(irq_flags);
+ *	pc = preempt_count();
+ *
+ *	__data_size = trace_event_get_offsets_<call>(&__data_offsets, args);
+ *
+ *	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
+ *				  event_<call>->event.type,
+ *				  sizeof(*entry) + __data_size,
+ *				  irq_flags, pc);
+ *	if (!event)
+ *		return;
+ *	entry	= ring_buffer_event_data(event);
+ *
+ *	{ <assign>; }  <-- Here we assign the entries by the __field and
+ *			   __array macros.
+ *
+ *	if (eflags & EVENT_FILE_FL_TRIGGER_COND)
+ *		__tt = event_triggers_call(trace_file, entry);
+ *
+ *	if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT,
+ *		     &trace_file->flags))
+ *		ring_buffer_discard_commit(buffer, event);
+ *	else if (!filter_check_discard(trace_file, entry, buffer, event))
+ *		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
+ *
+ *	if (__tt)
+ *		event_triggers_post_call(trace_file, __tt);
+ * }
+ *
+ * static struct trace_event ftrace_event_type_<call> = {
+ *	.trace			= trace_raw_output_<call>, <-- stage 2
+ * };
+ *
+ * static char print_fmt_<call>[] = <TP_printk>;
+ *
+ * static struct trace_event_class __used event_class_<template> = {
+ *	.system			= "<system>",
+ *	.define_fields		= trace_event_define_fields_<call>,
+ *	.fields			= LIST_HEAD_INIT(event_class_##call.fields),
+ *	.raw_init		= trace_event_raw_init,
+ *	.probe			= trace_event_raw_event_##call,
+ *	.reg			= trace_event_reg,
+ * };
+ *
+ * static struct trace_event_call event_<call> = {
+ *	.class			= event_class_<template>,
+ *	{
+ *		.tp			= &__tracepoint_<call>,
+ *	},
+ *	.event			= &ftrace_event_type_<call>,
+ *	.print_fmt		= print_fmt_<call>,
+ *	.flags			= TRACE_EVENT_FL_TRACEPOINT,
+ * };
+ * // its only safe to use pointers when doing linker tricks to
+ * // create an array.
+ * static struct trace_event_call __used
+ * __attribute__((section("_ftrace_events"))) *__event_<call> = &event_<call>;
+ *
+ */
+
+#ifdef CONFIG_PERF_EVENTS
+
+#define _TRACE_PERF_PROTO(call, proto)					\
+	static notrace void						\
+	perf_trace_##call(void *__data, proto);
+
+#define _TRACE_PERF_INIT(call)						\
+	.perf_probe		= perf_trace_##call,
+
+#else
+#define _TRACE_PERF_PROTO(call, proto)
+#define _TRACE_PERF_INIT(call)
+#endif /* CONFIG_PERF_EVENTS */
+
+#undef __entry
+#define __entry entry
+
+#undef __field
+#define __field(type, item)
+
+#undef __field_struct
+#define __field_struct(type, item)
+
+#undef __array
+#define __array(type, item, len)
+
+#undef __dynamic_array
+#define __dynamic_array(type, item, len)				\
+	__entry->__data_loc_##item = __data_offsets.item;
+
+#undef __string
+#define __string(item, src) __dynamic_array(char, item, -1)
+
+#undef __assign_str
+#define __assign_str(dst, src)						\
+	strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)");
+
+#undef __bitmask
+#define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
+
+#undef __get_bitmask
+#define __get_bitmask(field) (char *)__get_dynamic_array(field)
+
+#undef __assign_bitmask
+#define __assign_bitmask(dst, src, nr_bits)					\
+	memcpy(__get_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))
+
+#undef TP_fast_assign
+#define TP_fast_assign(args...) args
+
+#undef __perf_addr
+#define __perf_addr(a)	(a)
+
+#undef __perf_count
+#define __perf_count(c)	(c)
+
+#undef __perf_task
+#define __perf_task(t)	(t)
+
+#undef DECLARE_EVENT_CLASS
+#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
+									\
+static notrace void							\
+trace_event_raw_event_##call(void *__data, proto)			\
+{									\
+	struct trace_event_file *trace_file = __data;			\
+	struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
+	struct trace_event_buffer fbuffer;				\
+	struct trace_event_raw_##call *entry;				\
+	int __data_size;						\
+									\
+	if (trace_trigger_soft_disabled(trace_file))			\
+		return;							\
+									\
+	__data_size = trace_event_get_offsets_##call(&__data_offsets, args); \
+									\
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
+				 sizeof(*entry) + __data_size);		\
+									\
+	if (!entry)							\
+		return;							\
+									\
+	tstruct								\
+									\
+	{ assign; }							\
+									\
+	trace_event_buffer_commit(&fbuffer);				\
+}
+/*
+ * The ftrace_test_probe is compiled out, it is only here as a build time check
+ * to make sure that if the tracepoint handling changes, the ftrace probe will
+ * fail to compile unless it too is updated.
+ */
+
+#undef DEFINE_EVENT
+#define DEFINE_EVENT(template, call, proto, args)			\
+static inline void ftrace_test_probe_##call(void)			\
+{									\
+	check_trace_callback_type_##call(trace_event_raw_event_##template); \
+}
+
+#undef DEFINE_EVENT_PRINT
+#define DEFINE_EVENT_PRINT(template, name, proto, args, print)
+
+#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+
+#undef __entry
+#define __entry REC
+
+#undef __print_flags
+#undef __print_symbolic
+#undef __print_hex
+#undef __get_dynamic_array
+#undef __get_dynamic_array_len
+#undef __get_str
+#undef __get_bitmask
+#undef __print_array
+
+#undef TP_printk
+#define TP_printk(fmt, args...) "\"" fmt "\", "  __stringify(args)
+
+#undef DECLARE_EVENT_CLASS
+#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
+_TRACE_PERF_PROTO(call, PARAMS(proto));					\
+static char print_fmt_##call[] = print;					\
+static struct trace_event_class __used __refdata event_class_##call = { \
+	.system			= TRACE_SYSTEM_STRING,			\
+	.define_fields		= trace_event_define_fields_##call,	\
+	.fields			= LIST_HEAD_INIT(event_class_##call.fields),\
+	.raw_init		= trace_event_raw_init,			\
+	.probe			= trace_event_raw_event_##call,		\
+	.reg			= trace_event_reg,			\
+	_TRACE_PERF_INIT(call)						\
+};
+
+#undef DEFINE_EVENT
+#define DEFINE_EVENT(template, call, proto, args)			\
+									\
+static struct trace_event_call __used event_##call = {			\
+	.class			= &event_class_##template,		\
+	{								\
+		.tp			= &__tracepoint_##call,		\
+	},								\
+	.event.funcs		= &trace_event_type_funcs_##template,	\
+	.print_fmt		= print_fmt_##template,			\
+	.flags			= TRACE_EVENT_FL_TRACEPOINT,		\
+};									\
+static struct trace_event_call __used					\
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
+
+#undef DEFINE_EVENT_PRINT
+#define DEFINE_EVENT_PRINT(template, call, proto, args, print)		\
+									\
+static char print_fmt_##call[] = print;					\
+									\
+static struct trace_event_call __used event_##call = {			\
+	.class			= &event_class_##template,		\
+	{								\
+		.tp			= &__tracepoint_##call,		\
+	},								\
+	.event.funcs		= &trace_event_type_funcs_##call,	\
+	.print_fmt		= print_fmt_##call,			\
+	.flags			= TRACE_EVENT_FL_TRACEPOINT,		\
+};									\
+static struct trace_event_call __used					\
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
+
+#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
-- 
2.5.1




* [for-next][PATCH 03/25] tracing: Remove ftrace_trace_stack_regs()
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 01/25] kernel/trace_probe: is_good_name can be boolean Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 02/25] tracing: Move non perf code out of perf.h Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 04/25] tracing: Remove unused function trace_current_buffer_lock_reserve() Steven Rostedt
                   ` (21 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0003-tracing-Remove-ftrace_trace_stack_regs.patch --]
[-- Type: text/plain, Size: 2958 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

ftrace_trace_stack_regs() is used in only one place, and since it is such a
simple function, just move its code into its only caller,
trace_buffer_unlock_commit_regs().

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 24 ++++++++++++++----------
 kernel/trace/trace.h |  9 ---------
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6e79408674aa..50820887dce9 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -468,6 +468,18 @@ static inline void trace_access_lock_init(void)
 
 #endif
 
+#ifdef CONFIG_STACKTRACE
+static void __ftrace_trace_stack(struct ring_buffer *buffer,
+				 unsigned long flags,
+				 int skip, int pc, struct pt_regs *regs);
+#else
+static inline void __ftrace_trace_stack(struct ring_buffer *buffer,
+					unsigned long flags,
+					int skip, int pc, struct pt_regs *regs)
+{
+}
+#endif
+
 /* trace_flags holds trace_options default values */
 unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
 	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME |
@@ -1744,7 +1756,8 @@ void trace_buffer_unlock_commit_regs(struct ring_buffer *buffer,
 {
 	__buffer_unlock_commit(buffer, event);
 
-	ftrace_trace_stack_regs(buffer, flags, 0, pc, regs);
+	if (trace_flags & TRACE_ITER_STACKTRACE)
+		__ftrace_trace_stack(buffer, flags, 0, pc, regs);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit_regs);
@@ -1873,15 +1886,6 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 
 }
 
-void ftrace_trace_stack_regs(struct ring_buffer *buffer, unsigned long flags,
-			     int skip, int pc, struct pt_regs *regs)
-{
-	if (!(trace_flags & TRACE_ITER_STACKTRACE))
-		return;
-
-	__ftrace_trace_stack(buffer, flags, skip, pc, regs);
-}
-
 void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
 			int skip, int pc)
 {
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 74bde81601a9..3b2a950a6291 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -614,9 +614,6 @@ void update_max_tr_single(struct trace_array *tr,
 void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
 			int skip, int pc);
 
-void ftrace_trace_stack_regs(struct ring_buffer *buffer, unsigned long flags,
-			     int skip, int pc, struct pt_regs *regs);
-
 void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
 			    int pc);
 
@@ -628,12 +625,6 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 {
 }
 
-static inline void ftrace_trace_stack_regs(struct ring_buffer *buffer,
-					   unsigned long flags, int skip,
-					   int pc, struct pt_regs *regs)
-{
-}
-
 static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
 					  unsigned long flags, int pc)
 {
-- 
2.5.1



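The shape of this cleanup (a wrapper with a single caller, whose only job is
an option check, folded into that caller) can be reduced to a self-contained
sketch. The names OPT_STACKTRACE, emit_stack() and commit_event() below are
invented stand-ins, not kernel symbols:

#include <stdio.h>

#define OPT_STACKTRACE 0x1UL

static unsigned long opts = OPT_STACKTRACE;	/* stand-in for trace_flags */

static void emit_stack(int skip)
{
	printf("stack trace (skip %d)\n", skip);
}

/*
 * Previously a wrapper such as
 *
 *	static void emit_stack_if_enabled(int skip)
 *	{
 *		if (!(opts & OPT_STACKTRACE))
 *			return;
 *		emit_stack(skip);
 *	}
 *
 * sat between the commit path and emit_stack().  With only one caller,
 * the option check moves into that caller and the wrapper disappears.
 */
static void commit_event(void)
{
	printf("commit event\n");
	if (opts & OPT_STACKTRACE)
		emit_stack(0);
}

int main(void)
{
	commit_event();
	return 0;
}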

* [for-next][PATCH 04/25] tracing: Remove unused function trace_current_buffer_lock_reserve()
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (2 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 03/25] tracing: Remove ftrace_trace_stack_regs() Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 05/25] tracing: Pass trace_array into trace_buffer_unlock_commit() Steven Rostedt
                   ` (20 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0004-tracing-Remove-unused-function-trace_current_buffer_.patch --]
[-- Type: text/plain, Size: 1698 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

trace_current_buffer_lock_reserve() is not used by anything. Might as well
get rid of it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/trace_events.h | 3 ---
 kernel/trace/trace.c         | 8 --------
 2 files changed, 11 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index ed27917cabc9..71c1191b9954 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -168,9 +168,6 @@ struct ring_buffer_event *
 trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
 				  int type, unsigned long len,
 				  unsigned long flags, int pc);
-void trace_current_buffer_unlock_commit(struct ring_buffer *buffer,
-					struct ring_buffer_event *event,
-					unsigned long flags, int pc);
 void trace_buffer_unlock_commit(struct ring_buffer *buffer,
 				struct ring_buffer_event *event,
 				unsigned long flags, int pc);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 50820887dce9..a499ec95fc61 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1741,14 +1741,6 @@ trace_current_buffer_lock_reserve(struct ring_buffer **current_rb,
 }
 EXPORT_SYMBOL_GPL(trace_current_buffer_lock_reserve);
 
-void trace_current_buffer_unlock_commit(struct ring_buffer *buffer,
-					struct ring_buffer_event *event,
-					unsigned long flags, int pc)
-{
-	__trace_buffer_unlock_commit(buffer, event, flags, pc);
-}
-EXPORT_SYMBOL_GPL(trace_current_buffer_unlock_commit);
-
 void trace_buffer_unlock_commit_regs(struct ring_buffer *buffer,
 				     struct ring_buffer_event *event,
 				     unsigned long flags, int pc,
-- 
2.5.1




* [for-next][PATCH 05/25] tracing: Pass trace_array into trace_buffer_unlock_commit()
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (3 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 04/25] tracing: Remove unused function trace_current_buffer_lock_reserve() Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 06/25] tracing: Make ftrace_trace_stack() static Steven Rostedt
                   ` (19 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0005-tracing-Pass-trace_array-into-trace_buffer_unlock_co.patch --]
[-- Type: text/plain, Size: 7380 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In preparation for having trace options be per instance, the trace_array
needs to be passed to trace_buffer_unlock_commit(). The
trace_event_buffer_lock_reserve() call already passes in the
trace_event_file, from which the trace_array can be derived.

Also add "__init" to function_test_events_call(), the function tracer
callback used by the boot-up event self tests.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/trace_events.h      | 10 ++++++----
 kernel/trace/blktrace.c           |  4 ++--
 kernel/trace/trace.c              | 18 ++++++------------
 kernel/trace/trace_events.c       |  9 +++++++--
 kernel/trace/trace_mmiotrace.c    |  4 ++--
 kernel/trace/trace_sched_wakeup.c |  4 ++--
 6 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 71c1191b9954..f85693bbcdc3 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -168,10 +168,12 @@ struct ring_buffer_event *
 trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
 				  int type, unsigned long len,
 				  unsigned long flags, int pc);
-void trace_buffer_unlock_commit(struct ring_buffer *buffer,
+void trace_buffer_unlock_commit(struct trace_array *tr,
+				struct ring_buffer *buffer,
 				struct ring_buffer_event *event,
 				unsigned long flags, int pc);
-void trace_buffer_unlock_commit_regs(struct ring_buffer *buffer,
+void trace_buffer_unlock_commit_regs(struct trace_array *tr,
+				     struct ring_buffer *buffer,
 				     struct ring_buffer_event *event,
 				     unsigned long flags, int pc,
 				     struct pt_regs *regs);
@@ -505,7 +507,7 @@ event_trigger_unlock_commit(struct trace_event_file *file,
 	enum event_trigger_type tt = ETT_NONE;
 
 	if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
-		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
+		trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
 
 	if (tt)
 		event_triggers_post_call(file, tt);
@@ -537,7 +539,7 @@ event_trigger_unlock_commit_regs(struct trace_event_file *file,
 	enum event_trigger_type tt = ETT_NONE;
 
 	if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
-		trace_buffer_unlock_commit_regs(buffer, event,
+		trace_buffer_unlock_commit_regs(file->tr, buffer, event,
 						irq_flags, pc, regs);
 
 	if (tt)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 90e72a0c3047..973d41d81aa5 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -103,7 +103,7 @@ record_it:
 		memcpy((void *) t + sizeof(*t), data, len);
 
 		if (blk_tracer)
-			trace_buffer_unlock_commit(buffer, event, 0, pc);
+			trace_buffer_unlock_commit(blk_tr, buffer, event, 0, pc);
 	}
 }
 
@@ -278,7 +278,7 @@ record_it:
 			memcpy((void *) t + sizeof(*t), pdu_data, pdu_len);
 
 		if (blk_tracer) {
-			trace_buffer_unlock_commit(buffer, event, 0, pc);
+			trace_buffer_unlock_commit(blk_tr, buffer, event, 0, pc);
 			return;
 		}
 	}
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index a499ec95fc61..3329c8efb34f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1683,23 +1683,16 @@ __buffer_unlock_commit(struct ring_buffer *buffer, struct ring_buffer_event *eve
 	ring_buffer_unlock_commit(buffer, event);
 }
 
-static inline void
-__trace_buffer_unlock_commit(struct ring_buffer *buffer,
-			     struct ring_buffer_event *event,
-			     unsigned long flags, int pc)
+void trace_buffer_unlock_commit(struct trace_array *tr,
+				struct ring_buffer *buffer,
+				struct ring_buffer_event *event,
+				unsigned long flags, int pc)
 {
 	__buffer_unlock_commit(buffer, event);
 
 	ftrace_trace_stack(buffer, flags, 6, pc);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
-
-void trace_buffer_unlock_commit(struct ring_buffer *buffer,
-				struct ring_buffer_event *event,
-				unsigned long flags, int pc)
-{
-	__trace_buffer_unlock_commit(buffer, event, flags, pc);
-}
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit);
 
 static struct ring_buffer *temp_buffer;
@@ -1741,7 +1734,8 @@ trace_current_buffer_lock_reserve(struct ring_buffer **current_rb,
 }
 EXPORT_SYMBOL_GPL(trace_current_buffer_lock_reserve);
 
-void trace_buffer_unlock_commit_regs(struct ring_buffer *buffer,
+void trace_buffer_unlock_commit_regs(struct trace_array *tr,
+				     struct ring_buffer *buffer,
 				     struct ring_buffer_event *event,
 				     unsigned long flags, int pc,
 				     struct pt_regs *regs)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 7ca09cdc20c2..b2e3d8d80df8 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2891,7 +2891,9 @@ static __init void event_trace_self_tests(void)
 
 static DEFINE_PER_CPU(atomic_t, ftrace_test_event_disable);
 
-static void
+static struct trace_array *event_tr;
+
+static void __init
 function_test_events_call(unsigned long ip, unsigned long parent_ip,
 			  struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
@@ -2922,7 +2924,7 @@ function_test_events_call(unsigned long ip, unsigned long parent_ip,
 	entry->ip			= ip;
 	entry->parent_ip		= parent_ip;
 
-	trace_buffer_unlock_commit(buffer, event, flags, pc);
+	trace_buffer_unlock_commit(event_tr, buffer, event, flags, pc);
 
  out:
 	atomic_dec(&per_cpu(ftrace_test_event_disable, cpu));
@@ -2944,6 +2946,9 @@ static __init void event_trace_self_test_with_function(void)
 		return;
 	}
 	pr_info("Running tests again, along with the function tracer\n");
+	event_tr = top_trace_array();
+	if (WARN_ON(!event_tr))
+		return;
 	event_trace_self_tests();
 	unregister_ftrace_function(&trace_ops);
 }
diff --git a/kernel/trace/trace_mmiotrace.c b/kernel/trace/trace_mmiotrace.c
index 638e110c5bfd..2be8c4f2403d 100644
--- a/kernel/trace/trace_mmiotrace.c
+++ b/kernel/trace/trace_mmiotrace.c
@@ -314,7 +314,7 @@ static void __trace_mmiotrace_rw(struct trace_array *tr,
 	entry->rw			= *rw;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-		trace_buffer_unlock_commit(buffer, event, 0, pc);
+		trace_buffer_unlock_commit(tr, buffer, event, 0, pc);
 }
 
 void mmio_trace_rw(struct mmiotrace_rw *rw)
@@ -344,7 +344,7 @@ static void __trace_mmiotrace_map(struct trace_array *tr,
 	entry->map			= *map;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-		trace_buffer_unlock_commit(buffer, event, 0, pc);
+		trace_buffer_unlock_commit(tr, buffer, event, 0, pc);
 }
 
 void mmio_trace_mapping(struct mmiotrace_map *map)
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 12cbe77b4136..c29d49e0102b 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -388,7 +388,7 @@ tracing_sched_switch_trace(struct trace_array *tr,
 	entry->next_cpu	= task_cpu(next);
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-		trace_buffer_unlock_commit(buffer, event, flags, pc);
+		trace_buffer_unlock_commit(tr, buffer, event, flags, pc);
 }
 
 static void
@@ -416,7 +416,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
 	entry->next_cpu			= task_cpu(wakee);
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-		trace_buffer_unlock_commit(buffer, event, flags, pc);
+		trace_buffer_unlock_commit(tr, buffer, event, flags, pc);
 }
 
 static void notrace
-- 
2.5.1



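In miniature, the API change amounts to threading a per-instance descriptor
through the commit path instead of consulting global state. In the sketch
below, struct instance, OPT_STACKTRACE and commit() are illustrative
stand-ins for trace_array, the stacktrace option and the commit functions,
not kernel definitions:

#include <stdio.h>

/* Stand-in for struct trace_array: each instance carries its own flags. */
struct instance {
	const char	*name;
	unsigned long	 flags;
};

#define OPT_STACKTRACE 0x1UL

/*
 * Before, a commit helper like this consulted one global option word.
 * Passing the instance in makes per-instance options possible.
 */
static void commit(struct instance *inst, const char *event)
{
	printf("[%s] %s\n", inst->name, event);
	if (inst->flags & OPT_STACKTRACE)
		printf("[%s]   + stack trace\n", inst->name);
}

int main(void)
{
	struct instance top  = { .name = "top", .flags = OPT_STACKTRACE };
	struct instance inst = { .name = "foo", .flags = 0 };

	commit(&top,  "sched_switch");	/* prints the extra stack line */
	commit(&inst, "sched_switch");	/* does not */
	return 0;
}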

* [for-next][PATCH 06/25] tracing: Make ftrace_trace_stack() static
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (4 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 05/25] tracing: Pass trace_array into trace_buffer_unlock_commit() Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 07/25] tracing: Inject seq_print_userip_objs() into its only user Steven Rostedt
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0006-tracing-Make-ftrace_trace_stack-static.patch --]
[-- Type: text/plain, Size: 2280 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

ftrace_trace_stack() is not called outside of trace.c. Make it a static
function.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 10 +++++++++-
 kernel/trace/trace.h |  8 --------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 3329c8efb34f..5d3ce2900d64 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -472,12 +472,20 @@ static inline void trace_access_lock_init(void)
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
 				 unsigned long flags,
 				 int skip, int pc, struct pt_regs *regs);
+static void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
+			       int skip, int pc);
+
 #else
 static inline void __ftrace_trace_stack(struct ring_buffer *buffer,
 					unsigned long flags,
 					int skip, int pc, struct pt_regs *regs)
 {
 }
+static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+				      unsigned long flags, int skip, int pc)
+{
+}
+
 #endif
 
 /* trace_flags holds trace_options default values */
@@ -1872,7 +1880,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 
 }
 
-void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
+static void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
 			int skip, int pc)
 {
 	if (!(trace_flags & TRACE_ITER_STACKTRACE))
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 3b2a950a6291..107cf081ac27 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -611,20 +611,12 @@ void update_max_tr_single(struct trace_array *tr,
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
 #ifdef CONFIG_STACKTRACE
-void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
-			int skip, int pc);
-
 void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
 			    int pc);
 
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc);
 #else
-static inline void ftrace_trace_stack(struct ring_buffer *buffer,
-				      unsigned long flags, int skip, int pc)
-{
-}
-
 static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
 					  unsigned long flags, int pc)
 {
-- 
2.5.1



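The #ifdef arrangement used above (a real static function when the feature is
configured, an empty inline stub otherwise, so call sites need no #ifdefs of
their own) is a common kernel idiom. A compilable user-space sketch, with
HAVE_STACKTRACE and trace_stack() as placeholder names rather than kernel
symbols:

#include <stdio.h>

#define HAVE_STACKTRACE 1	/* flip to 0 to compile the feature out */

#if HAVE_STACKTRACE
static void trace_stack(int skip)
{
	printf("stack trace (skip %d)\n", skip);
}
#else
/* Empty stub: callers compile unchanged and the compiler drops the call. */
static inline void trace_stack(int skip) { (void)skip; }
#endif

int main(void)
{
	/* No #ifdef needed at the call site. */
	trace_stack(0);
	return 0;
}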

* [for-next][PATCH 07/25] tracing: Inject seq_print_userip_objs() into its only user
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (5 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 06/25] tracing: Make ftrace_trace_stack() static Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 08/25] tracing: Turn seq_print_user_ip() into a static function Steven Rostedt
                   ` (17 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0007-tracing-Inject-seq_print_userip_objs-into-its-only-u.patch --]
[-- Type: text/plain, Size: 3547 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

seq_print_userip_objs() is used in only one location, in one file. Instead
of keeping it as an external function, go one step further than making it
static and inject its code into its only user. Doing so does not make the
calling function much more complex.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_output.c | 81 ++++++++++++++++++++-------------------------
 kernel/trace/trace_output.h |  2 --
 2 files changed, 36 insertions(+), 47 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 8e481a84aeea..881cbdae1913 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -355,50 +355,6 @@ int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
 }
 
 int
-seq_print_userip_objs(const struct userstack_entry *entry, struct trace_seq *s,
-		      unsigned long sym_flags)
-{
-	struct mm_struct *mm = NULL;
-	unsigned int i;
-
-	if (trace_flags & TRACE_ITER_SYM_USEROBJ) {
-		struct task_struct *task;
-		/*
-		 * we do the lookup on the thread group leader,
-		 * since individual threads might have already quit!
-		 */
-		rcu_read_lock();
-		task = find_task_by_vpid(entry->tgid);
-		if (task)
-			mm = get_task_mm(task);
-		rcu_read_unlock();
-	}
-
-	for (i = 0; i < FTRACE_STACK_ENTRIES; i++) {
-		unsigned long ip = entry->caller[i];
-
-		if (ip == ULONG_MAX || trace_seq_has_overflowed(s))
-			break;
-
-		trace_seq_puts(s, " => ");
-
-		if (!ip) {
-			trace_seq_puts(s, "??");
-			trace_seq_putc(s, '\n');
-			continue;
-		}
-
-		seq_print_user_ip(s, mm, ip, sym_flags);
-		trace_seq_putc(s, '\n');
-	}
-
-	if (mm)
-		mmput(mm);
-
-	return !trace_seq_has_overflowed(s);
-}
-
-int
 seq_print_ip_sym(struct trace_seq *s, unsigned long ip, unsigned long sym_flags)
 {
 	if (!ip) {
@@ -1081,11 +1037,46 @@ static enum print_line_t trace_user_stack_print(struct trace_iterator *iter,
 {
 	struct userstack_entry *field;
 	struct trace_seq *s = &iter->seq;
+	struct mm_struct *mm = NULL;
+	unsigned int i;
 
 	trace_assign_type(field, iter->ent);
 
 	trace_seq_puts(s, "<user stack trace>\n");
-	seq_print_userip_objs(field, s, flags);
+
+	if (trace_flags & TRACE_ITER_SYM_USEROBJ) {
+		struct task_struct *task;
+		/*
+		 * we do the lookup on the thread group leader,
+		 * since individual threads might have already quit!
+		 */
+		rcu_read_lock();
+		task = find_task_by_vpid(field->tgid);
+		if (task)
+			mm = get_task_mm(task);
+		rcu_read_unlock();
+	}
+
+	for (i = 0; i < FTRACE_STACK_ENTRIES; i++) {
+		unsigned long ip = field->caller[i];
+
+		if (ip == ULONG_MAX || trace_seq_has_overflowed(s))
+			break;
+
+		trace_seq_puts(s, " => ");
+
+		if (!ip) {
+			trace_seq_puts(s, "??");
+			trace_seq_putc(s, '\n');
+			continue;
+		}
+
+		seq_print_user_ip(s, mm, ip, flags);
+		trace_seq_putc(s, '\n');
+	}
+
+	if (mm)
+		mmput(mm);
 
 	return trace_handle_return(s);
 }
diff --git a/kernel/trace/trace_output.h b/kernel/trace/trace_output.h
index 4cbfe85b99c8..b774c06cf423 100644
--- a/kernel/trace/trace_output.h
+++ b/kernel/trace/trace_output.h
@@ -14,8 +14,6 @@ trace_print_printk_msg_only(struct trace_iterator *iter);
 extern int
 seq_print_ip_sym(struct trace_seq *s, unsigned long ip,
 		unsigned long sym_flags);
-extern int seq_print_userip_objs(const struct userstack_entry *entry,
-				 struct trace_seq *s, unsigned long sym_flags);
 extern int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
 			     unsigned long ip, unsigned long sym_flags);
 
-- 
2.5.1




* [for-next][PATCH 08/25] tracing: Turn seq_print_user_ip() into a static function
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (6 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 07/25] tracing: Inject seq_print_userip_objs() into its only user Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 09/25] tracing: Move "display-graph" option to main options Steven Rostedt
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0008-tracing-Turn-seq_print_user_ip-into-a-static-functio.patch --]
[-- Type: text/plain, Size: 1664 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

seq_print_user_ip() is used in only one location in one file. Turn it into a
static function. We could inject its code into the caller, but that would
make the code a bit too complex. Keep the code separate.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_output.c | 4 ++--
 kernel/trace/trace_output.h | 2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 881cbdae1913..3b5dcdf19dea 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -322,8 +322,8 @@ seq_print_sym_offset(struct trace_seq *s, const char *fmt,
 # define IP_FMT "%016lx"
 #endif
 
-int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
-		      unsigned long ip, unsigned long sym_flags)
+static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
+			     unsigned long ip, unsigned long sym_flags)
 {
 	struct file *file = NULL;
 	unsigned long vmstart = 0;
diff --git a/kernel/trace/trace_output.h b/kernel/trace/trace_output.h
index b774c06cf423..fabc49bcd493 100644
--- a/kernel/trace/trace_output.h
+++ b/kernel/trace/trace_output.h
@@ -14,8 +14,6 @@ trace_print_printk_msg_only(struct trace_iterator *iter);
 extern int
 seq_print_ip_sym(struct trace_seq *s, unsigned long ip,
 		unsigned long sym_flags);
-extern int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
-			     unsigned long ip, unsigned long sym_flags);
 
 extern int trace_print_context(struct trace_iterator *iter);
 extern int trace_print_lat_context(struct trace_iterator *iter);
-- 
2.5.1




* [for-next][PATCH 09/25] tracing: Move "display-graph" option to main options
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (7 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 08/25] tracing: Turn seq_print_user_ip() into a static function Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 10/25] tracing: Remove unused tracing option "ftrace_preempt" Steven Rostedt
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0009-tracing-Move-display-graph-option-to-main-options.patch --]
[-- Type: text/plain, Size: 8693 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In order to facilitate making all tracer options visible even when the
tracer is not active, we need to get rid of duplicate options. Any option
that is shared between multiple tracers really should be a main option.

As the wakeup and irqsoff tracers both use the "display-graph" option, and
use it exactly the same way, move that option from the tracer options to the
main options and consolidate them.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c              |  1 +
 kernel/trace/trace.h              |  1 +
 kernel/trace/trace_irqsoff.c      | 46 ++++++++++++---------------------------
 kernel/trace/trace_sched_wakeup.c | 46 ++++++++++++---------------------------
 4 files changed, 30 insertions(+), 64 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 5d3ce2900d64..9a4ef5afb41c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -882,6 +882,7 @@ static const char *trace_options[] = {
 	"irq-info",
 	"markers",
 	"function-trace",
+	"display-graph",
 	NULL
 };
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 107cf081ac27..dfa3cd2feb22 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -913,6 +913,7 @@ enum trace_iterator_flags {
 	TRACE_ITER_IRQ_INFO		= 0x800000,
 	TRACE_ITER_MARKERS		= 0x1000000,
 	TRACE_ITER_FUNCTION		= 0x2000000,
+	TRACE_ITER_DISPLAY_GRAPH	= 0x4000000,
 };
 
 /*
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 8523ea345f2b..446480a86123 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -57,22 +57,16 @@ irq_trace(void)
 # define irq_trace() (0)
 #endif
 
-#define TRACE_DISPLAY_GRAPH	1
+#define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 
-static struct tracer_opt trace_opts[] = {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	/* display latency trace as call graph */
-	{ TRACER_OPT(display-graph, TRACE_DISPLAY_GRAPH) },
+static int irqsoff_display_graph(struct trace_array *tr, int set);
+#else
+static inline int irqsoff_display_graph(struct trace_array *tr, int set)
+{
+	return -EINVAL;
+}
 #endif
-	{ } /* Empty entry */
-};
-
-static struct tracer_flags tracer_flags = {
-	.val  = 0,
-	.opts = trace_opts,
-};
-
-#define is_graph() (tracer_flags.val & TRACE_DISPLAY_GRAPH)
 
 /*
  * Sequence count - we record it when starting a measurement and
@@ -152,14 +146,10 @@ irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip,
 #endif /* CONFIG_FUNCTION_TRACER */
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-static int
-irqsoff_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
+static int irqsoff_display_graph(struct trace_array *tr, int set)
 {
 	int cpu;
 
-	if (!(bit & TRACE_DISPLAY_GRAPH))
-		return -EINVAL;
-
 	if (!(is_graph() ^ set))
 		return 0;
 
@@ -259,12 +249,6 @@ __trace_function(struct trace_array *tr,
 #else
 #define __trace_function trace_function
 
-static int
-irqsoff_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
-{
-	return -EINVAL;
-}
-
 static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
 {
 	return -1;
@@ -556,12 +540,13 @@ static void unregister_irqsoff_function(struct trace_array *tr, int graph)
 	function_enabled = false;
 }
 
-static void irqsoff_function_set(struct trace_array *tr, int set)
+static int irqsoff_function_set(struct trace_array *tr, int set)
 {
 	if (set)
 		register_irqsoff_function(tr, is_graph(), 1);
 	else
 		unregister_irqsoff_function(tr, is_graph());
+	return 0;
 }
 
 static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
@@ -569,7 +554,10 @@ static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
 	struct tracer *tracer = tr->current_trace;
 
 	if (mask & TRACE_ITER_FUNCTION)
-		irqsoff_function_set(tr, set);
+		return irqsoff_function_set(tr, set);
+
+	if (mask & TRACE_ITER_DISPLAY_GRAPH)
+		return irqsoff_display_graph(tr, set);
 
 	return trace_keep_overwrite(tracer, mask, set);
 }
@@ -666,8 +654,6 @@ static struct tracer irqsoff_tracer __read_mostly =
 	.print_max	= true,
 	.print_header   = irqsoff_print_header,
 	.print_line     = irqsoff_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= irqsoff_set_flag,
 	.flag_changed	= irqsoff_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_irqsoff,
@@ -700,8 +686,6 @@ static struct tracer preemptoff_tracer __read_mostly =
 	.print_max	= true,
 	.print_header   = irqsoff_print_header,
 	.print_line     = irqsoff_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= irqsoff_set_flag,
 	.flag_changed	= irqsoff_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_preemptoff,
@@ -736,8 +720,6 @@ static struct tracer preemptirqsoff_tracer __read_mostly =
 	.print_max	= true,
 	.print_header   = irqsoff_print_header,
 	.print_line     = irqsoff_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= irqsoff_set_flag,
 	.flag_changed	= irqsoff_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_preemptirqsoff,
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index c29d49e0102b..f5d2e65e7c92 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -40,22 +40,17 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace);
 static int save_flags;
 static bool function_enabled;
 
-#define TRACE_DISPLAY_GRAPH     1
+#define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 
-static struct tracer_opt trace_opts[] = {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	/* display latency trace as call graph */
-	{ TRACER_OPT(display-graph, TRACE_DISPLAY_GRAPH) },
+static int wakeup_display_graph(struct trace_array *tr, int set);
+#else
+static inline int wakeup_display_graph(struct trace_array *tr, int set)
+{
+	return -EINVAL;
+}
 #endif
-	{ } /* Empty entry */
-};
 
-static struct tracer_flags tracer_flags = {
-	.val  = 0,
-	.opts = trace_opts,
-};
-
-#define is_graph() (tracer_flags.val & TRACE_DISPLAY_GRAPH)
 
 #ifdef CONFIG_FUNCTION_TRACER
 
@@ -163,12 +158,13 @@ static void unregister_wakeup_function(struct trace_array *tr, int graph)
 	function_enabled = false;
 }
 
-static void wakeup_function_set(struct trace_array *tr, int set)
+static int wakeup_function_set(struct trace_array *tr, int set)
 {
 	if (set)
 		register_wakeup_function(tr, is_graph(), 1);
 	else
 		unregister_wakeup_function(tr, is_graph());
+	return 0;
 }
 
 static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
@@ -176,7 +172,10 @@ static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
 	struct tracer *tracer = tr->current_trace;
 
 	if (mask & TRACE_ITER_FUNCTION)
-		wakeup_function_set(tr, set);
+		return wakeup_function_set(tr, set);
+
+	if (mask & TRACE_ITER_DISPLAY_GRAPH)
+		return wakeup_display_graph(tr, set);
 
 	return trace_keep_overwrite(tracer, mask, set);
 }
@@ -203,13 +202,8 @@ static void stop_func_tracer(struct trace_array *tr, int graph)
 }
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-static int
-wakeup_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
+static int wakeup_display_graph(struct trace_array *tr, int set)
 {
-
-	if (!(bit & TRACE_DISPLAY_GRAPH))
-		return -EINVAL;
-
 	if (!(is_graph() ^ set))
 		return 0;
 
@@ -306,12 +300,6 @@ __trace_function(struct trace_array *tr,
 #else
 #define __trace_function trace_function
 
-static int
-wakeup_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
-{
-	return -EINVAL;
-}
-
 static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
 {
 	return -1;
@@ -740,8 +728,6 @@ static struct tracer wakeup_tracer __read_mostly =
 	.print_max	= true,
 	.print_header	= wakeup_print_header,
 	.print_line	= wakeup_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= wakeup_set_flag,
 	.flag_changed	= wakeup_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_wakeup,
@@ -762,8 +748,6 @@ static struct tracer wakeup_rt_tracer __read_mostly =
 	.print_max	= true,
 	.print_header	= wakeup_print_header,
 	.print_line	= wakeup_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= wakeup_set_flag,
 	.flag_changed	= wakeup_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_wakeup,
@@ -784,8 +768,6 @@ static struct tracer wakeup_dl_tracer __read_mostly =
 	.print_max	= true,
 	.print_header	= wakeup_print_header,
 	.print_line	= wakeup_print_line,
-	.flags		= &tracer_flags,
-	.set_flag	= wakeup_set_flag,
 	.flag_changed	= wakeup_flag_changed,
 #ifdef CONFIG_FTRACE_SELFTEST
 	.selftest    = trace_selftest_startup_wakeup,
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 10/25] tracing: Remove unused tracing option "ftrace_preempt"
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (8 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 09/25] tracing: Move "display-graph" option to main options Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 11/25] tracing: Use enums instead of hard coded bitmasks for TRACE_ITER flags Steven Rostedt
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0010-tracing-Remove-unused-tracing-option-ftrace_preempt.patch --]
[-- Type: text/plain, Size: 2479 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

There was a time when function tracing would disable interrupts unless
specifically told not to, in which case it would only disable preemption.
With the new lockless code, function tracing never disables interrupts and
just disables preemption. Remove the option "ftrace_preempt" as it no
longer does anything.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |  1 -
 kernel/trace/trace.h | 33 ++++++++++++++++-----------------
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 9a4ef5afb41c..f2fbf610d20e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -866,7 +866,6 @@ static const char *trace_options[] = {
 	"block",
 	"stacktrace",
 	"trace_printk",
-	"ftrace_preempt",
 	"branch",
 	"annotate",
 	"userstacktrace",
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index dfa3cd2feb22..19d5c411d4ec 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -897,23 +897,22 @@ enum trace_iterator_flags {
 	TRACE_ITER_BLOCK		= 0x80,
 	TRACE_ITER_STACKTRACE		= 0x100,
 	TRACE_ITER_PRINTK		= 0x200,
-	TRACE_ITER_PREEMPTONLY		= 0x400,
-	TRACE_ITER_BRANCH		= 0x800,
-	TRACE_ITER_ANNOTATE		= 0x1000,
-	TRACE_ITER_USERSTACKTRACE       = 0x2000,
-	TRACE_ITER_SYM_USEROBJ          = 0x4000,
-	TRACE_ITER_PRINTK_MSGONLY	= 0x8000,
-	TRACE_ITER_CONTEXT_INFO		= 0x10000, /* Print pid/cpu/time */
-	TRACE_ITER_LATENCY_FMT		= 0x20000,
-	TRACE_ITER_SLEEP_TIME		= 0x40000,
-	TRACE_ITER_GRAPH_TIME		= 0x80000,
-	TRACE_ITER_RECORD_CMD		= 0x100000,
-	TRACE_ITER_OVERWRITE		= 0x200000,
-	TRACE_ITER_STOP_ON_FREE		= 0x400000,
-	TRACE_ITER_IRQ_INFO		= 0x800000,
-	TRACE_ITER_MARKERS		= 0x1000000,
-	TRACE_ITER_FUNCTION		= 0x2000000,
-	TRACE_ITER_DISPLAY_GRAPH	= 0x4000000,
+	TRACE_ITER_BRANCH		= 0x400,
+	TRACE_ITER_ANNOTATE		= 0x800,
+	TRACE_ITER_USERSTACKTRACE       = 0x1000,
+	TRACE_ITER_SYM_USEROBJ          = 0x2000,
+	TRACE_ITER_PRINTK_MSGONLY	= 0x4000,
+	TRACE_ITER_CONTEXT_INFO		= 0x8000, /* Print pid/cpu/time */
+	TRACE_ITER_LATENCY_FMT		= 0x10000,
+	TRACE_ITER_SLEEP_TIME		= 0x20000,
+	TRACE_ITER_GRAPH_TIME		= 0x40000,
+	TRACE_ITER_RECORD_CMD		= 0x80000,
+	TRACE_ITER_OVERWRITE		= 0x100000,
+	TRACE_ITER_STOP_ON_FREE		= 0x200000,
+	TRACE_ITER_IRQ_INFO		= 0x400000,
+	TRACE_ITER_MARKERS		= 0x800000,
+	TRACE_ITER_FUNCTION		= 0x1000000,
+	TRACE_ITER_DISPLAY_GRAPH	= 0x2000000,
 };
 
 /*
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 11/25] tracing: Use enums instead of hard coded bitmasks for TRACE_ITER flags
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (9 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 10/25] tracing: Remove unused tracing option "ftrace_preempt" Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 12/25] tracing: Use TRACE_FLAGS macro to keep enums and strings matched Steven Rostedt
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0011-tracing-Use-enums-instead-of-hard-coded-bitmasks-for.patch --]
[-- Type: text/plain, Size: 4077 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Using enums for the FLAG_BIT values and then defining each FLAG as
(1 << FLAG_BIT) is a bit more robust, as it guarantees that no bits are out
of order or skipped, keeping them matched to the file names that represent
the bits.
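
As a minimal standalone sketch of the pattern (the names here are
illustrative, not the kernel's): adding a new entry to the _BIT enum
automatically takes the next free bit, and the mask enum follows it.

 enum demo_bits {
 	DEMO_FOO_BIT,		/* 0 */
 	DEMO_BAR_BIT,		/* 1 */
 	DEMO_BAZ_BIT,		/* 2 -- new entries always take the next bit */
 };

 enum demo_flags {
 	DEMO_FOO	= (1 << DEMO_FOO_BIT),
 	DEMO_BAR	= (1 << DEMO_BAR_BIT),
 	DEMO_BAZ	= (1 << DEMO_BAZ_BIT),
 };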

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.h | 81 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 55 insertions(+), 26 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 19d5c411d4ec..31d8395c8dc5 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -886,33 +886,62 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
  * NOTE: These bits must match the trace_options array in
  *       trace.c.
  */
+enum trace_iterator_bits {
+	TRACE_ITER_PRINT_PARENT_BIT	= 0,
+	TRACE_ITER_SYM_OFFSET_BIT,
+	TRACE_ITER_SYM_ADDR_BIT,
+	TRACE_ITER_VERBOSE_BIT,
+	TRACE_ITER_RAW_BIT,
+	TRACE_ITER_HEX_BIT,
+	TRACE_ITER_BIN_BIT,
+	TRACE_ITER_BLOCK_BIT,
+	TRACE_ITER_STACKTRACE_BIT,
+	TRACE_ITER_PRINTK_BIT,
+	TRACE_ITER_BRANCH_BIT,
+	TRACE_ITER_ANNOTATE_BIT,
+	TRACE_ITER_USERSTACKTRACE_BIT,
+	TRACE_ITER_SYM_USEROBJ_BIT,
+	TRACE_ITER_PRINTK_MSGONLY_BIT,
+	TRACE_ITER_CONTEXT_INFO_BIT,	/* Print pid/cpu/time */
+	TRACE_ITER_LATENCY_FMT_BIT,
+	TRACE_ITER_SLEEP_TIME_BIT,
+	TRACE_ITER_GRAPH_TIME_BIT,
+	TRACE_ITER_RECORD_CMD_BIT,
+	TRACE_ITER_OVERWRITE_BIT,
+	TRACE_ITER_STOP_ON_FREE_BIT,
+	TRACE_ITER_IRQ_INFO_BIT,
+	TRACE_ITER_MARKERS_BIT,
+	TRACE_ITER_FUNCTION_BIT,
+	TRACE_ITER_DISPLAY_GRAPH_BIT,
+};
+
 enum trace_iterator_flags {
-	TRACE_ITER_PRINT_PARENT		= 0x01,
-	TRACE_ITER_SYM_OFFSET		= 0x02,
-	TRACE_ITER_SYM_ADDR		= 0x04,
-	TRACE_ITER_VERBOSE		= 0x08,
-	TRACE_ITER_RAW			= 0x10,
-	TRACE_ITER_HEX			= 0x20,
-	TRACE_ITER_BIN			= 0x40,
-	TRACE_ITER_BLOCK		= 0x80,
-	TRACE_ITER_STACKTRACE		= 0x100,
-	TRACE_ITER_PRINTK		= 0x200,
-	TRACE_ITER_BRANCH		= 0x400,
-	TRACE_ITER_ANNOTATE		= 0x800,
-	TRACE_ITER_USERSTACKTRACE       = 0x1000,
-	TRACE_ITER_SYM_USEROBJ          = 0x2000,
-	TRACE_ITER_PRINTK_MSGONLY	= 0x4000,
-	TRACE_ITER_CONTEXT_INFO		= 0x8000, /* Print pid/cpu/time */
-	TRACE_ITER_LATENCY_FMT		= 0x10000,
-	TRACE_ITER_SLEEP_TIME		= 0x20000,
-	TRACE_ITER_GRAPH_TIME		= 0x40000,
-	TRACE_ITER_RECORD_CMD		= 0x80000,
-	TRACE_ITER_OVERWRITE		= 0x100000,
-	TRACE_ITER_STOP_ON_FREE		= 0x200000,
-	TRACE_ITER_IRQ_INFO		= 0x400000,
-	TRACE_ITER_MARKERS		= 0x800000,
-	TRACE_ITER_FUNCTION		= 0x1000000,
-	TRACE_ITER_DISPLAY_GRAPH	= 0x2000000,
+	TRACE_ITER_PRINT_PARENT		= (1 << TRACE_ITER_PRINT_PARENT_BIT),
+	TRACE_ITER_SYM_OFFSET		= (1 << TRACE_ITER_SYM_OFFSET_BIT),
+	TRACE_ITER_SYM_ADDR		= (1 << TRACE_ITER_SYM_ADDR_BIT),
+	TRACE_ITER_VERBOSE		= (1 << TRACE_ITER_VERBOSE_BIT),
+	TRACE_ITER_RAW			= (1 << TRACE_ITER_RAW_BIT),
+	TRACE_ITER_HEX			= (1 << TRACE_ITER_HEX_BIT),
+	TRACE_ITER_BIN			= (1 << TRACE_ITER_BIN_BIT),
+	TRACE_ITER_BLOCK		= (1 << TRACE_ITER_BLOCK_BIT),
+	TRACE_ITER_STACKTRACE		= (1 << TRACE_ITER_STACKTRACE_BIT),
+	TRACE_ITER_PRINTK		= (1 << TRACE_ITER_PRINTK_BIT),
+	TRACE_ITER_BRANCH		= (1 << TRACE_ITER_BRANCH_BIT),
+	TRACE_ITER_ANNOTATE		= (1 << TRACE_ITER_ANNOTATE_BIT),
+	TRACE_ITER_USERSTACKTRACE       = (1 << TRACE_ITER_USERSTACKTRACE_BIT),
+	TRACE_ITER_SYM_USEROBJ          = (1 << TRACE_ITER_SYM_USEROBJ_BIT),
+	TRACE_ITER_PRINTK_MSGONLY	= (1 << TRACE_ITER_PRINTK_MSGONLY_BIT),
+	TRACE_ITER_CONTEXT_INFO		= (1 << TRACE_ITER_CONTEXT_INFO_BIT),
+	TRACE_ITER_LATENCY_FMT		= (1 << TRACE_ITER_LATENCY_FMT_BIT),
+	TRACE_ITER_SLEEP_TIME		= (1 << TRACE_ITER_SLEEP_TIME_BIT),
+	TRACE_ITER_GRAPH_TIME		= (1 << TRACE_ITER_GRAPH_TIME_BIT),
+	TRACE_ITER_RECORD_CMD		= (1 << TRACE_ITER_RECORD_CMD_BIT),
+	TRACE_ITER_OVERWRITE		= (1 << TRACE_ITER_OVERWRITE_BIT),
+	TRACE_ITER_STOP_ON_FREE		= (1 << TRACE_ITER_STOP_ON_FREE_BIT),
+	TRACE_ITER_IRQ_INFO		= (1 << TRACE_ITER_IRQ_INFO_BIT),
+	TRACE_ITER_MARKERS		= (1 << TRACE_ITER_MARKERS_BIT),
+	TRACE_ITER_FUNCTION		= (1 << TRACE_ITER_FUNCTION_BIT),
+	TRACE_ITER_DISPLAY_GRAPH	= (1 << TRACE_ITER_DISPLAY_GRAPH_BIT),
 };
 
 /*
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 12/25] tracing: Use TRACE_FLAGS macro to keep enums and strings matched
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (10 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 11/25] tracing: Use enums instead of hard coded bitmasks for TRACE_ITER flags Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 13/25] tracing: Only create function graph options when it is compiled in Steven Rostedt
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0012-tracing-Use-TRACE_FLAGS-macro-to-keep-enums-and-stri.patch --]
[-- Type: text/plain, Size: 7169 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Use a cute little macro trick to guarantee that the names of the trace
flags files match the corresponding masks.

The macro TRACE_FLAGS is defined as a series of enum names, each followed
by the string name of the file that matches it. For example:

 #define TRACE_FLAGS						\
		C(PRINT_PARENT,		"print-parent"),	\
		C(SYM_OFFSET,		"sym-offset"),		\
		C(SYM_ADDR,		"sym-addr"),		\
		C(VERBOSE,		"verbose"),

Now we can define the following:

 #undef C
 #define C(a, b) TRACE_ITER_##a##_BIT
 enum trace_iterator_bits { TRACE_FLAGS };

The above creates:

 enum trace_iterator_bits {
	TRACE_ITER_PRINT_PARENT_BIT,
	TRACE_ITER_SYM_OFFSET_BIT,
	TRACE_ITER_SYM_ADDR_BIT,
	TRACE_ITER_VERBOSE_BIT,
 };

Then we can redefine C as:

 #undef C
 #define C(a, b) TRACE_ITER_##a = (1 << TRACE_ITER_##a##_BIT)
 enum trace_iterator_flags { TRACE_FLAGS };

Which creates:

 enum trace_iterator_flags {
	TRACE_ITER_PRINT_PARENT	= (1 << TRACE_ITER_PRINT_PARENT_BIT),
	TRACE_ITER_SYM_OFFSET	= (1 << TRACE_ITER_SYM_OFFSET_BIT),
	TRACE_ITER_SYM_ADDR	= (1 << TRACE_ITER_SYM_ADDR_BIT),
	TRACE_ITER_VERBOSE	= (1 << TRACE_ITER_VERBOSE_BIT),
 };

Then finally we can create the list of file names:

 #undef C
 #define C(a, b) b
 static const char *trace_options[] = {
	TRACE_FLAGS
	NULL
 };

Which creates:
 static const char *trace_options[] = {
	"print-parent",
	"sym-offset",
	"sym-addr",
	"verbose",
	NULL
 };

The importance of this is that the strings match the bit index.

	trace_options[TRACE_ITER_SYM_ADDR_BIT] == "sym-addr"

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |  36 +++++-------------
 kernel/trace/trace.h | 102 +++++++++++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 83 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index f2fbf610d20e..e80e380d0238 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -854,34 +854,18 @@ unsigned long nsecs_to_usecs(unsigned long nsecs)
 	return nsecs / 1000;
 }
 
+/*
+ * TRACE_FLAGS is defined as a tuple matching bit masks with strings.
+ * It uses C(a, b) where 'a' is the enum name and 'b' is the string that
+ * matches it. By defining "C(a, b) b", TRACE_FLAGS becomes a list
+ * of strings in the order that the enums were defined.
+ */
+#undef C
+#define C(a, b) b
+
 /* These must match the bit postions in trace_iterator_flags */
 static const char *trace_options[] = {
-	"print-parent",
-	"sym-offset",
-	"sym-addr",
-	"verbose",
-	"raw",
-	"hex",
-	"bin",
-	"block",
-	"stacktrace",
-	"trace_printk",
-	"branch",
-	"annotate",
-	"userstacktrace",
-	"sym-userobj",
-	"printk-msg-only",
-	"context-info",
-	"latency-format",
-	"sleep-time",
-	"graph-time",
-	"record-cmd",
-	"overwrite",
-	"disable_on_free",
-	"irq-info",
-	"markers",
-	"function-trace",
-	"display-graph",
+	TRACE_FLAGS
 	NULL
 };
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 31d8395c8dc5..d164845edddd 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -884,65 +884,53 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
  * positions into trace_flags that controls the output.
  *
  * NOTE: These bits must match the trace_options array in
- *       trace.c.
+ *       trace.c (this macro guarantees it).
  */
-enum trace_iterator_bits {
-	TRACE_ITER_PRINT_PARENT_BIT	= 0,
-	TRACE_ITER_SYM_OFFSET_BIT,
-	TRACE_ITER_SYM_ADDR_BIT,
-	TRACE_ITER_VERBOSE_BIT,
-	TRACE_ITER_RAW_BIT,
-	TRACE_ITER_HEX_BIT,
-	TRACE_ITER_BIN_BIT,
-	TRACE_ITER_BLOCK_BIT,
-	TRACE_ITER_STACKTRACE_BIT,
-	TRACE_ITER_PRINTK_BIT,
-	TRACE_ITER_BRANCH_BIT,
-	TRACE_ITER_ANNOTATE_BIT,
-	TRACE_ITER_USERSTACKTRACE_BIT,
-	TRACE_ITER_SYM_USEROBJ_BIT,
-	TRACE_ITER_PRINTK_MSGONLY_BIT,
-	TRACE_ITER_CONTEXT_INFO_BIT,	/* Print pid/cpu/time */
-	TRACE_ITER_LATENCY_FMT_BIT,
-	TRACE_ITER_SLEEP_TIME_BIT,
-	TRACE_ITER_GRAPH_TIME_BIT,
-	TRACE_ITER_RECORD_CMD_BIT,
-	TRACE_ITER_OVERWRITE_BIT,
-	TRACE_ITER_STOP_ON_FREE_BIT,
-	TRACE_ITER_IRQ_INFO_BIT,
-	TRACE_ITER_MARKERS_BIT,
-	TRACE_ITER_FUNCTION_BIT,
-	TRACE_ITER_DISPLAY_GRAPH_BIT,
-};
+#define TRACE_FLAGS						\
+		C(PRINT_PARENT,		"print-parent"),	\
+		C(SYM_OFFSET,		"sym-offset"),		\
+		C(SYM_ADDR,		"sym-addr"),		\
+		C(VERBOSE,		"verbose"),		\
+		C(RAW,			"raw"),			\
+		C(HEX,			"hex"),			\
+		C(BIN,			"bin"),			\
+		C(BLOCK,		"block"),		\
+		C(STACKTRACE,		"stacktrace"),		\
+		C(PRINTK,		"trace_printk"),	\
+		C(BRANCH,		"branch"),		\
+		C(ANNOTATE,		"annotate"),		\
+		C(USERSTACKTRACE,	"userstacktrace"),	\
+		C(SYM_USEROBJ,		"sym-userobj"),		\
+		C(PRINTK_MSGONLY,	"printk-msg-only"),	\
+		C(CONTEXT_INFO,		"context-info"),   /* Print pid/cpu/time */ \
+		C(LATENCY_FMT,		"latency-format"),	\
+		C(SLEEP_TIME,		"sleep-time"),		\
+		C(GRAPH_TIME,		"graph-time"),		\
+		C(RECORD_CMD,		"record-cmd"),		\
+		C(OVERWRITE,		"overwrite"),		\
+		C(STOP_ON_FREE,		"disable_on_free"),	\
+		C(IRQ_INFO,		"irq-info"),		\
+		C(MARKERS,		"markers"),		\
+		C(FUNCTION,		"function-trace"),	\
+		C(DISPLAY_GRAPH,	"display-graph"),
 
-enum trace_iterator_flags {
-	TRACE_ITER_PRINT_PARENT		= (1 << TRACE_ITER_PRINT_PARENT_BIT),
-	TRACE_ITER_SYM_OFFSET		= (1 << TRACE_ITER_SYM_OFFSET_BIT),
-	TRACE_ITER_SYM_ADDR		= (1 << TRACE_ITER_SYM_ADDR_BIT),
-	TRACE_ITER_VERBOSE		= (1 << TRACE_ITER_VERBOSE_BIT),
-	TRACE_ITER_RAW			= (1 << TRACE_ITER_RAW_BIT),
-	TRACE_ITER_HEX			= (1 << TRACE_ITER_HEX_BIT),
-	TRACE_ITER_BIN			= (1 << TRACE_ITER_BIN_BIT),
-	TRACE_ITER_BLOCK		= (1 << TRACE_ITER_BLOCK_BIT),
-	TRACE_ITER_STACKTRACE		= (1 << TRACE_ITER_STACKTRACE_BIT),
-	TRACE_ITER_PRINTK		= (1 << TRACE_ITER_PRINTK_BIT),
-	TRACE_ITER_BRANCH		= (1 << TRACE_ITER_BRANCH_BIT),
-	TRACE_ITER_ANNOTATE		= (1 << TRACE_ITER_ANNOTATE_BIT),
-	TRACE_ITER_USERSTACKTRACE       = (1 << TRACE_ITER_USERSTACKTRACE_BIT),
-	TRACE_ITER_SYM_USEROBJ          = (1 << TRACE_ITER_SYM_USEROBJ_BIT),
-	TRACE_ITER_PRINTK_MSGONLY	= (1 << TRACE_ITER_PRINTK_MSGONLY_BIT),
-	TRACE_ITER_CONTEXT_INFO		= (1 << TRACE_ITER_CONTEXT_INFO_BIT),
-	TRACE_ITER_LATENCY_FMT		= (1 << TRACE_ITER_LATENCY_FMT_BIT),
-	TRACE_ITER_SLEEP_TIME		= (1 << TRACE_ITER_SLEEP_TIME_BIT),
-	TRACE_ITER_GRAPH_TIME		= (1 << TRACE_ITER_GRAPH_TIME_BIT),
-	TRACE_ITER_RECORD_CMD		= (1 << TRACE_ITER_RECORD_CMD_BIT),
-	TRACE_ITER_OVERWRITE		= (1 << TRACE_ITER_OVERWRITE_BIT),
-	TRACE_ITER_STOP_ON_FREE		= (1 << TRACE_ITER_STOP_ON_FREE_BIT),
-	TRACE_ITER_IRQ_INFO		= (1 << TRACE_ITER_IRQ_INFO_BIT),
-	TRACE_ITER_MARKERS		= (1 << TRACE_ITER_MARKERS_BIT),
-	TRACE_ITER_FUNCTION		= (1 << TRACE_ITER_FUNCTION_BIT),
-	TRACE_ITER_DISPLAY_GRAPH	= (1 << TRACE_ITER_DISPLAY_GRAPH_BIT),
-};
+/*
+ * By defining C, we can make TRACE_FLAGS a list of bit names
+ * that will define the bits for the flag masks.
+ */
+#undef C
+#define C(a, b) TRACE_ITER_##a##_BIT
+
+enum trace_iterator_bits { TRACE_FLAGS };
+
+/*
+ * By redefining C, we can make TRACE_FLAGS a list of masks that
+ * use the bits as defined above.
+ */
+#undef C
+#define C(a, b) TRACE_ITER_##a = (1 << TRACE_ITER_##a##_BIT)
+
+enum trace_iterator_flags { TRACE_FLAGS };
 
 /*
  * TRACE_ITER_SYM_MASK masks the options in trace_flags that
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 13/25] tracing: Only create function graph options when it is compiled in
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (11 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 12/25] tracing: Use TRACE_FLAGS macro to keep enums and strings matched Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 14/25] tracing: Only create branch tracer options when " Steven Rostedt
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0013-tracing-Only-create-function-graph-options-when-it-i.patch --]
[-- Type: text/plain, Size: 4997 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Do not create the function graph tracer options when the function graph
tracer is not even compiled in.
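
A simplified sketch of the mechanism (illustrative names only, not the
exact kernel macros): when the config option is off, the helper macro
expands to nothing, so its C() entries simply vanish from the TRACE_FLAGS
list and neither the enum bits nor the option files are created for them.

 #ifdef CONFIG_MY_FEATURE
 # define MY_FEATURE_FLAGS				\
 		C(MY_OPTION,	"my-option"),
 #else
 # define MY_FEATURE_FLAGS
 #endif

 #define TRACE_FLAGS					\
 		C(ALWAYS_THERE,	"always-there"),	\
 		MY_FEATURE_FLAGS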

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c              | 11 +++++++----
 kernel/trace/trace.h              | 20 +++++++++++++++++---
 kernel/trace/trace_irqsoff.c      |  6 ++++--
 kernel/trace/trace_sched_wakeup.c |  6 ++++--
 4 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e80e380d0238..68fcb40fc764 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -489,10 +489,13 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 #endif
 
 /* trace_flags holds trace_options default values */
-unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
-	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME |
-	TRACE_ITER_GRAPH_TIME | TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
-	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS | TRACE_ITER_FUNCTION;
+unsigned long trace_flags =
+	FUNCTION_GRAPH_DEFAULT_FLAGS |
+	TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
+	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO |
+	TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
+	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS | TRACE_ITER_FUNCTION
+	;
 
 static void tracer_tracing_on(struct trace_array *tr)
 {
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index d164845edddd..33cd09799ceb 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -880,6 +880,22 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 	size_t cnt, loff_t *ppos);
 
 /*
+ * Only create function graph options if function graph is configured.
+ */
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+# define FGRAPH_FLAGS						\
+		C(SLEEP_TIME,		"sleep-time"),		\
+		C(GRAPH_TIME,		"graph-time"),		\
+		C(DISPLAY_GRAPH,	"display-graph"),
+/* Initially set for trace_flags */
+# define FUNCTION_GRAPH_DEFAULT_FLAGS \
+	(TRACE_ITER_SLEEP_TIME | TRACE_ITER_GRAPH_TIME)
+#else
+# define FGRAPH_FLAGS
+# define FUNCTION_GRAPH_DEFAULT_FLAGS  0UL
+#endif
+
+/*
  * trace_iterator_flags is an enumeration that defines bit
  * positions into trace_flags that controls the output.
  *
@@ -904,15 +920,13 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(PRINTK_MSGONLY,	"printk-msg-only"),	\
 		C(CONTEXT_INFO,		"context-info"),   /* Print pid/cpu/time */ \
 		C(LATENCY_FMT,		"latency-format"),	\
-		C(SLEEP_TIME,		"sleep-time"),		\
-		C(GRAPH_TIME,		"graph-time"),		\
 		C(RECORD_CMD,		"record-cmd"),		\
 		C(OVERWRITE,		"overwrite"),		\
 		C(STOP_ON_FREE,		"disable_on_free"),	\
 		C(IRQ_INFO,		"irq-info"),		\
 		C(MARKERS,		"markers"),		\
 		C(FUNCTION,		"function-trace"),	\
-		C(DISPLAY_GRAPH,	"display-graph"),
+		FGRAPH_FLAGS
 
 /*
  * By defining C, we can make TRACE_FLAGS a list of bit names
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 446480a86123..bd9cd0e2c13c 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -57,15 +57,15 @@ irq_trace(void)
 # define irq_trace() (0)
 #endif
 
-#define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int irqsoff_display_graph(struct trace_array *tr, int set);
+# define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 #else
 static inline int irqsoff_display_graph(struct trace_array *tr, int set)
 {
 	return -EINVAL;
 }
+# define is_graph() false
 #endif
 
 /*
@@ -556,8 +556,10 @@ static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
 	if (mask & TRACE_ITER_FUNCTION)
 		return irqsoff_function_set(tr, set);
 
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (mask & TRACE_ITER_DISPLAY_GRAPH)
 		return irqsoff_display_graph(tr, set);
+#endif
 
 	return trace_keep_overwrite(tracer, mask, set);
 }
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index f5d2e65e7c92..a6c350c681cc 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -40,15 +40,15 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace);
 static int save_flags;
 static bool function_enabled;
 
-#define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int wakeup_display_graph(struct trace_array *tr, int set);
+# define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 #else
 static inline int wakeup_display_graph(struct trace_array *tr, int set)
 {
 	return -EINVAL;
 }
+# define is_graph() false
 #endif
 
 
@@ -174,8 +174,10 @@ static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
 	if (mask & TRACE_ITER_FUNCTION)
 		return wakeup_function_set(tr, set);
 
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (mask & TRACE_ITER_DISPLAY_GRAPH)
 		return wakeup_display_graph(tr, set);
+#endif
 
 	return trace_keep_overwrite(tracer, mask, set);
 }
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 14/25] tracing: Only create branch tracer options when compiled in
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (12 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 13/25] tracing: Only create function graph options when it is compiled in Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 15/25] tracing: Do not create function tracer options when not " Steven Rostedt
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0014-tracing-Only-create-branch-tracer-options-when-compi.patch --]
[-- Type: text/plain, Size: 1513 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

When the branch tracer is not compiled in, do not create the option files
associated with it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 33cd09799ceb..3f1cc45b7007 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -895,6 +895,13 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 # define FUNCTION_GRAPH_DEFAULT_FLAGS  0UL
 #endif
 
+#ifdef CONFIG_BRANCH_TRACER
+# define BRANCH_FLAGS					\
+		C(BRANCH,		"branch"),
+#else
+# define BRANCH_FLAGS
+#endif
+
 /*
  * trace_iterator_flags is an enumeration that defines bit
  * positions into trace_flags that controls the output.
@@ -913,7 +920,6 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(BLOCK,		"block"),		\
 		C(STACKTRACE,		"stacktrace"),		\
 		C(PRINTK,		"trace_printk"),	\
-		C(BRANCH,		"branch"),		\
 		C(ANNOTATE,		"annotate"),		\
 		C(USERSTACKTRACE,	"userstacktrace"),	\
 		C(SYM_USEROBJ,		"sym-userobj"),		\
@@ -926,7 +932,8 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(IRQ_INFO,		"irq-info"),		\
 		C(MARKERS,		"markers"),		\
 		C(FUNCTION,		"function-trace"),	\
-		FGRAPH_FLAGS
+		FGRAPH_FLAGS					\
+		BRANCH_FLAGS
 
 /*
  * By defining C, we can make TRACE_FLAGS a list of bit names
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 15/25] tracing: Do not create function tracer options when not compiled in
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (13 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 14/25] tracing: Only create branch tracer options when " Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 16/25] tracing: Only create stacktrace option when STACKTRACE is configured Steven Rostedt
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0015-tracing-Do-not-create-function-tracer-options-when-n.patch --]
[-- Type: text/plain, Size: 8211 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

When the function tracer is not compiled in, do not create the option files
for it.

Fix up both the sched_wakeup and irqsoff tracers to handle the change.
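
The fix-up in both tracers follows the same pattern, sketched below with
an illustrative name: the per-tracer helper consumes the flag and reports
whether it did, and it compiles down to a stub that reports "not handled"
when the function tracer is not configured.

 #ifdef CONFIG_FUNCTION_TRACER
 static int demo_function_set(struct trace_array *tr, u32 mask, int set)
 {
 	if (!(mask & TRACE_ITER_FUNCTION))
 		return 0;		/* not ours, let the caller continue */
 	/* register or unregister the function tracing callbacks here */
 	return 1;			/* handled */
 }
 #else
 static inline int demo_function_set(struct trace_array *tr, u32 mask, int set)
 {
 	return 0;			/* option does not exist, never handled */
 }
 #endif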

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c              |  4 ++--
 kernel/trace/trace.h              | 11 +++++++++-
 kernel/trace/trace_irqsoff.c      | 28 +++++++++++++++++++++-----
 kernel/trace/trace_sched_wakeup.c | 42 ++++++++++++++++++++++++++-------------
 4 files changed, 63 insertions(+), 22 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 68fcb40fc764..cb223ad51cdf 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -490,11 +490,11 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 
 /* trace_flags holds trace_options default values */
 unsigned long trace_flags =
-	FUNCTION_GRAPH_DEFAULT_FLAGS |
+	FUNCTION_DEFAULT_FLAGS | FUNCTION_GRAPH_DEFAULT_FLAGS |
 	TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
 	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO |
 	TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
-	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS | TRACE_ITER_FUNCTION
+	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS
 	;
 
 static void tracer_tracing_on(struct trace_array *tr)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 3f1cc45b7007..b389d409b952 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -902,6 +902,15 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 # define BRANCH_FLAGS
 #endif
 
+#ifdef CONFIG_FUNCTION_TRACER
+# define FUNCTION_FLAGS						\
+		C(FUNCTION,		"function-trace"),
+# define FUNCTION_DEFAULT_FLAGS		TRACE_ITER_FUNCTION
+#else
+# define FUNCTION_FLAGS
+# define FUNCTION_DEFAULT_FLAGS		0UL
+#endif
+
 /*
  * trace_iterator_flags is an enumeration that defines bit
  * positions into trace_flags that controls the output.
@@ -931,7 +940,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(STOP_ON_FREE,		"disable_on_free"),	\
 		C(IRQ_INFO,		"irq-info"),		\
 		C(MARKERS,		"markers"),		\
-		C(FUNCTION,		"function-trace"),	\
+		FUNCTION_FLAGS					\
 		FGRAPH_FLAGS					\
 		BRANCH_FLAGS
 
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index bd9cd0e2c13c..c834b95cbe0b 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -31,7 +31,6 @@ enum {
 static int trace_type __read_mostly;
 
 static int save_flags;
-static bool function_enabled;
 
 static void stop_irqsoff_tracer(struct trace_array *tr, int graph);
 static int start_irqsoff_tracer(struct trace_array *tr, int graph);
@@ -249,21 +248,23 @@ __trace_function(struct trace_array *tr,
 #else
 #define __trace_function trace_function
 
+#ifdef CONFIG_FUNCTION_TRACER
 static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
 {
 	return -1;
 }
+#endif
 
 static enum print_line_t irqsoff_print_line(struct trace_iterator *iter)
 {
 	return TRACE_TYPE_UNHANDLED;
 }
 
-static void irqsoff_graph_return(struct ftrace_graph_ret *trace) { }
 static void irqsoff_trace_open(struct trace_iterator *iter) { }
 static void irqsoff_trace_close(struct trace_iterator *iter) { }
 
 #ifdef CONFIG_FUNCTION_TRACER
+static void irqsoff_graph_return(struct ftrace_graph_ret *trace) { }
 static void irqsoff_print_header(struct seq_file *s)
 {
 	trace_default_header(s);
@@ -507,6 +508,9 @@ void trace_preempt_off(unsigned long a0, unsigned long a1)
 }
 #endif /* CONFIG_PREEMPT_TRACER */
 
+#ifdef CONFIG_FUNCTION_TRACER
+static bool function_enabled;
+
 static int register_irqsoff_function(struct trace_array *tr, int graph, int set)
 {
 	int ret;
@@ -540,21 +544,35 @@ static void unregister_irqsoff_function(struct trace_array *tr, int graph)
 	function_enabled = false;
 }
 
-static int irqsoff_function_set(struct trace_array *tr, int set)
+static int irqsoff_function_set(struct trace_array *tr, u32 mask, int set)
 {
+	if (!(mask & TRACE_ITER_FUNCTION))
+		return 0;
+
 	if (set)
 		register_irqsoff_function(tr, is_graph(), 1);
 	else
 		unregister_irqsoff_function(tr, is_graph());
+	return 1;
+}
+#else
+static int register_irqsoff_function(struct trace_array *tr, int graph, int set)
+{
 	return 0;
 }
+static void unregister_irqsoff_function(struct trace_array *tr, int graph) { }
+static inline int irqsoff_function_set(struct trace_array *tr, u32 mask, int set)
+{
+	return 0;
+}
+#endif /* CONFIG_FUNCTION_TRACER */
 
 static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
 {
 	struct tracer *tracer = tr->current_trace;
 
-	if (mask & TRACE_ITER_FUNCTION)
-		return irqsoff_function_set(tr, set);
+	if (irqsoff_function_set(tr, mask, set))
+		return 0;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (mask & TRACE_ITER_DISPLAY_GRAPH)
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index a6c350c681cc..4a20f61274d1 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -34,11 +34,8 @@ static arch_spinlock_t wakeup_lock =
 
 static void wakeup_reset(struct trace_array *tr);
 static void __wakeup_reset(struct trace_array *tr);
-static int wakeup_graph_entry(struct ftrace_graph_ent *trace);
-static void wakeup_graph_return(struct ftrace_graph_ret *trace);
 
 static int save_flags;
-static bool function_enabled;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int wakeup_display_graph(struct trace_array *tr, int set);
@@ -46,7 +43,7 @@ static int wakeup_display_graph(struct trace_array *tr, int set);
 #else
 static inline int wakeup_display_graph(struct trace_array *tr, int set)
 {
-	return -EINVAL;
+	return 0;
 }
 # define is_graph() false
 #endif
@@ -54,6 +51,11 @@ static inline int wakeup_display_graph(struct trace_array *tr, int set)
 
 #ifdef CONFIG_FUNCTION_TRACER
 
+static int wakeup_graph_entry(struct ftrace_graph_ent *trace);
+static void wakeup_graph_return(struct ftrace_graph_ret *trace);
+
+static bool function_enabled;
+
 /*
  * Prologue for the wakeup function tracers.
  *
@@ -123,7 +125,6 @@ wakeup_tracer_call(unsigned long ip, unsigned long parent_ip,
 	atomic_dec(&data->disabled);
 	preempt_enable_notrace();
 }
-#endif /* CONFIG_FUNCTION_TRACER */
 
 static int register_wakeup_function(struct trace_array *tr, int graph, int set)
 {
@@ -158,21 +159,35 @@ static void unregister_wakeup_function(struct trace_array *tr, int graph)
 	function_enabled = false;
 }
 
-static int wakeup_function_set(struct trace_array *tr, int set)
+static int wakeup_function_set(struct trace_array *tr, u32 mask, int set)
 {
+	if (!(mask & TRACE_ITER_FUNCTION))
+		return 0;
+
 	if (set)
 		register_wakeup_function(tr, is_graph(), 1);
 	else
 		unregister_wakeup_function(tr, is_graph());
+	return 1;
+}
+#else
+static int register_wakeup_function(struct trace_array *tr, int graph, int set)
+{
 	return 0;
 }
+static void unregister_wakeup_function(struct trace_array *tr, int graph) { }
+static int wakeup_function_set(struct trace_array *tr, u32 mask, int set)
+{
+	return 0;
+}
+#endif /* CONFIG_FUNCTION_TRACER */
 
 static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
 {
 	struct tracer *tracer = tr->current_trace;
 
-	if (mask & TRACE_ITER_FUNCTION)
-		return wakeup_function_set(tr, set);
+	if (wakeup_function_set(tr, mask, set))
+		return 0;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (mask & TRACE_ITER_DISPLAY_GRAPH)
@@ -302,21 +317,20 @@ __trace_function(struct trace_array *tr,
 #else
 #define __trace_function trace_function
 
-static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
-{
-	return -1;
-}
-
 static enum print_line_t wakeup_print_line(struct trace_iterator *iter)
 {
 	return TRACE_TYPE_UNHANDLED;
 }
 
-static void wakeup_graph_return(struct ftrace_graph_ret *trace) { }
 static void wakeup_trace_open(struct trace_iterator *iter) { }
 static void wakeup_trace_close(struct trace_iterator *iter) { }
 
 #ifdef CONFIG_FUNCTION_TRACER
+static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
+{
+	return -1;
+}
+static void wakeup_graph_return(struct ftrace_graph_ret *trace) { }
 static void wakeup_print_header(struct seq_file *s)
 {
 	trace_default_header(s);
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 16/25] tracing: Only create stacktrace option when STACKTRACE is configured
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (14 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 15/25] tracing: Do not create function tracer options when not " Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 17/25] tracing: Always show all tracer options in the options directory Steven Rostedt
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0016-tracing-Only-create-stacktrace-option-when-STACKTRAC.patch --]
[-- Type: text/plain, Size: 5075 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Only create the stacktrace trace option when CONFIG_STACKTRACE is
configured.

Also clean up the ftrace_trace_stack() calls a little to allow better
encapsulation of the stacktrace trace flag.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 28 +++++++++++++++-------------
 kernel/trace/trace.h |  9 ++++++++-
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index cb223ad51cdf..865f3fad9ff0 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -472,8 +472,9 @@ static inline void trace_access_lock_init(void)
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
 				 unsigned long flags,
 				 int skip, int pc, struct pt_regs *regs);
-static void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
-			       int skip, int pc);
+static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+				      unsigned long flags,
+				      int skip, int pc, struct pt_regs *regs);
 
 #else
 static inline void __ftrace_trace_stack(struct ring_buffer *buffer,
@@ -482,7 +483,8 @@ static inline void __ftrace_trace_stack(struct ring_buffer *buffer,
 {
 }
 static inline void ftrace_trace_stack(struct ring_buffer *buffer,
-				      unsigned long flags, int skip, int pc)
+				      unsigned long flags,
+				      int skip, int pc, struct pt_regs *regs)
 {
 }
 
@@ -571,7 +573,7 @@ int __trace_puts(unsigned long ip, const char *str, int size)
 		entry->buf[size] = '\0';
 
 	__buffer_unlock_commit(buffer, event);
-	ftrace_trace_stack(buffer, irq_flags, 4, pc);
+	ftrace_trace_stack(buffer, irq_flags, 4, pc, NULL);
 
 	return size;
 }
@@ -611,7 +613,7 @@ int __trace_bputs(unsigned long ip, const char *str)
 	entry->str			= str;
 
 	__buffer_unlock_commit(buffer, event);
-	ftrace_trace_stack(buffer, irq_flags, 4, pc);
+	ftrace_trace_stack(buffer, irq_flags, 4, pc, NULL);
 
 	return 1;
 }
@@ -1685,7 +1687,7 @@ void trace_buffer_unlock_commit(struct trace_array *tr,
 {
 	__buffer_unlock_commit(buffer, event);
 
-	ftrace_trace_stack(buffer, flags, 6, pc);
+	ftrace_trace_stack(buffer, flags, 6, pc, NULL);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit);
@@ -1737,8 +1739,7 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 {
 	__buffer_unlock_commit(buffer, event);
 
-	if (trace_flags & TRACE_ITER_STACKTRACE)
-		__ftrace_trace_stack(buffer, flags, 0, pc, regs);
+	ftrace_trace_stack(buffer, flags, 6, pc, regs);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit_regs);
@@ -1867,13 +1868,14 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 
 }
 
-static void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
-			int skip, int pc)
+static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+				      unsigned long flags,
+				      int skip, int pc, struct pt_regs *regs)
 {
 	if (!(trace_flags & TRACE_ITER_STACKTRACE))
 		return;
 
-	__ftrace_trace_stack(buffer, flags, skip, pc, NULL);
+	__ftrace_trace_stack(buffer, flags, skip, pc, regs);
 }
 
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
@@ -2158,7 +2160,7 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
 	memcpy(entry->buf, tbuffer, sizeof(u32) * len);
 	if (!call_filter_check_discard(call, entry, buffer, event)) {
 		__buffer_unlock_commit(buffer, event);
-		ftrace_trace_stack(buffer, flags, 6, pc);
+		ftrace_trace_stack(buffer, flags, 6, pc, NULL);
 	}
 
 out:
@@ -2210,7 +2212,7 @@ __trace_array_vprintk(struct ring_buffer *buffer,
 	memcpy(&entry->buf, tbuffer, len + 1);
 	if (!call_filter_check_discard(call, entry, buffer, event)) {
 		__buffer_unlock_commit(buffer, event);
-		ftrace_trace_stack(buffer, flags, 6, pc);
+		ftrace_trace_stack(buffer, flags, 6, pc, NULL);
 	}
  out:
 	preempt_enable_notrace();
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index b389d409b952..af34e1822dad 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -911,6 +911,13 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 # define FUNCTION_DEFAULT_FLAGS		0UL
 #endif
 
+#ifdef CONFIG_STACKTRACE
+# define STACK_FLAGS				\
+		C(STACKTRACE,		"stacktrace"),
+#else
+# define STACK_FLAGS
+#endif
+
 /*
  * trace_iterator_flags is an enumeration that defines bit
  * positions into trace_flags that controls the output.
@@ -927,7 +934,6 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(HEX,			"hex"),			\
 		C(BIN,			"bin"),			\
 		C(BLOCK,		"block"),		\
-		C(STACKTRACE,		"stacktrace"),		\
 		C(PRINTK,		"trace_printk"),	\
 		C(ANNOTATE,		"annotate"),		\
 		C(USERSTACKTRACE,	"userstacktrace"),	\
@@ -942,6 +948,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(MARKERS,		"markers"),		\
 		FUNCTION_FLAGS					\
 		FGRAPH_FLAGS					\
+		STACK_FLAGS					\
 		BRANCH_FLAGS
 
 /*
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 17/25] tracing: Always show all tracer options in the options directory
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (15 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 16/25] tracing: Only create stacktrace option when STACKTRACE is configured Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 18/25] tracing: Add build bug if we have more trace_flags than bits Steven Rostedt
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0017-tracing-Always-show-all-tracer-options-in-the-option.patch --]
[-- Type: text/plain, Size: 6153 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

There are options that are unique to a specific tracer (like function and
function graph). Currently, these options are only visible in the options
directory when the tracer is enabled.

This has been a pain, especially for something like the func_stack_trace
option that, if used inappropriately, could bring the system to a crawl.
But the only way to see it is to enable the function tracer.

For example, if one had done:

 # cd /sys/kernel/tracing
 # echo __schedule > set_ftrace_filter
 # echo 1 > options/func_stack_trace
 # echo function > current_tracer

The __schedule call will be traced and a stack trace will also be recorded
there. Then, when you are done, you may do...

 # echo nop > current_tracer
 # echo > set_ftrace_filter

But you forgot to disable the func_stack_trace. The only way to disable it
is to re-enable function tracing first. If you do not add a filter to
set_ftrace_filter and just do:

 # echo function > current_tracer

Now you would be performing a stack trace on *every* function! On some
systems, that causes a live lock. On others, it may take a few minutes to
recover from the mistake.

Having the func_stack_trace option visible allows you to check it and
disable it before enabling the function tracer.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 53 +++++++++++++++++++---------------------------------
 kernel/trace/trace.h |  8 ++++++++
 2 files changed, 27 insertions(+), 34 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 865f3fad9ff0..7446d4238f87 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1213,6 +1213,8 @@ static inline int run_tracer_selftest(struct tracer *type)
 }
 #endif /* CONFIG_FTRACE_STARTUP_TEST */
 
+static void add_tracer_options(struct trace_array *tr, struct tracer *t);
+
 /**
  * register_tracer - register a tracer with the ftrace system.
  * @type - the plugin for the tracer
@@ -1262,6 +1264,7 @@ int register_tracer(struct tracer *type)
 
 	type->next = trace_types;
 	trace_types = type;
+	add_tracer_options(&global_trace, type);
 
  out:
 	tracing_selftest_running = false;
@@ -4287,9 +4290,6 @@ struct trace_option_dentry;
 static struct trace_option_dentry *
 create_trace_option_files(struct trace_array *tr, struct tracer *tracer);
 
-static void
-destroy_trace_option_files(struct trace_option_dentry *topts);
-
 /*
  * Used to clear out the tracer before deletion of an instance.
  * Must have trace_types_lock held.
@@ -4307,10 +4307,8 @@ static void tracing_set_nop(struct trace_array *tr)
 	tr->current_trace = &nop_trace;
 }
 
-static void update_tracer_options(struct trace_array *tr, struct tracer *t)
+static void add_tracer_options(struct trace_array *tr, struct tracer *t)
 {
-	static struct trace_option_dentry *topts;
-
 	/* Only enable if the directory has been created already. */
 	if (!tr->dir)
 		return;
@@ -4319,8 +4317,11 @@ static void update_tracer_options(struct trace_array *tr, struct tracer *t)
 	if (!(tr->flags & TRACE_ARRAY_FL_GLOBAL))
 		return;
 
-	destroy_trace_option_files(topts);
-	topts = create_trace_option_files(tr, t);
+	/* Ignore if they were already created */
+	if (t->topts)
+		return;
+
+	t->topts = create_trace_option_files(tr, t);
 }
 
 static int tracing_set_tracer(struct trace_array *tr, const char *buf)
@@ -4389,7 +4390,6 @@ static int tracing_set_tracer(struct trace_array *tr, const char *buf)
 		free_snapshot(tr);
 	}
 #endif
-	update_tracer_options(tr, t);
 
 #ifdef CONFIG_TRACER_MAX_TRACE
 	if (t->use_max_tr && !had_max_tr) {
@@ -6119,13 +6119,6 @@ tracing_init_tracefs_percpu(struct trace_array *tr, long cpu)
 #include "trace_selftest.c"
 #endif
 
-struct trace_option_dentry {
-	struct tracer_opt		*opt;
-	struct tracer_flags		*flags;
-	struct trace_array		*tr;
-	struct dentry			*entry;
-};
-
 static ssize_t
 trace_options_read(struct file *filp, char __user *ubuf, size_t cnt,
 			loff_t *ppos)
@@ -6310,27 +6303,17 @@ create_trace_option_files(struct trace_array *tr, struct tracer *tracer)
 	if (!topts)
 		return NULL;
 
-	for (cnt = 0; opts[cnt].name; cnt++)
+	for (cnt = 0; opts[cnt].name; cnt++) {
 		create_trace_option_file(tr, &topts[cnt], flags,
 					 &opts[cnt]);
+		WARN_ONCE(topts[cnt].entry == NULL,
+			  "Failed to create trace option: %s",
+			  opts[cnt].name);
+	}
 
 	return topts;
 }
 
-static void
-destroy_trace_option_files(struct trace_option_dentry *topts)
-{
-	int cnt;
-
-	if (!topts)
-		return;
-
-	for (cnt = 0; topts[cnt].opt; cnt++)
-		tracefs_remove(topts[cnt].entry);
-
-	kfree(topts);
-}
-
 static struct dentry *
 create_trace_option_core_file(struct trace_array *tr,
 			      const char *option, long index)
@@ -6812,6 +6795,7 @@ static struct notifier_block trace_module_nb = {
 static __init int tracer_init_tracefs(void)
 {
 	struct dentry *d_tracer;
+	struct tracer *t;
 
 	trace_access_lock_init();
 
@@ -6850,9 +6834,10 @@ static __init int tracer_init_tracefs(void)
 
 	create_trace_options_dir(&global_trace);
 
-	/* If the tracer was started via cmdline, create options for it here */
-	if (global_trace.current_trace != &nop_trace)
-		update_tracer_options(&global_trace, global_trace.current_trace);
+	mutex_lock(&trace_types_lock);
+	for (t = trace_types; t; t = t->next)
+		add_tracer_options(&global_trace, t);
+	mutex_unlock(&trace_types_lock);
 
 	return 0;
 }
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index af34e1822dad..8ed97872b65b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -333,6 +333,13 @@ struct tracer_flags {
 #define TRACER_OPT(s, b)	.name = #s, .bit = b
 
 
+struct trace_option_dentry {
+	struct tracer_opt		*opt;
+	struct tracer_flags		*flags;
+	struct trace_array		*tr;
+	struct dentry			*entry;
+};
+
 /**
  * struct tracer - a specific tracer and its callbacks to interact with tracefs
  * @name: the name chosen to select it on the available_tracers file
@@ -387,6 +394,7 @@ struct tracer {
 						u32 mask, int set);
 	struct tracer		*next;
 	struct tracer_flags	*flags;
+	struct trace_option_dentry *topts;
 	int			enabled;
 	int			ref;
 	bool			print_max;
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 18/25] tracing: Add build bug if we have more trace_flags than bits
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (16 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 17/25] tracing: Always show all tracer options in the options directory Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 19/25] tracing: Remove access to trace_flags in trace_printk.c Steven Rostedt
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0018-tracing-Add-build-bug-if-we-have-more-trace_flags-th.patch --]
[-- Type: text/plain, Size: 1432 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Add an enum that denotes the last bit of the trace_flags and have a
BUILD_BUG_ON(last_bit > 32).

If we add more bits than trace_flags can hold, the kernel won't build.
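
The idea, as a standalone sketch (using a generic C11 static assertion
rather than the kernel's BUILD_BUG_ON(), and illustrative names):

 enum demo_bits {
 	DEMO_BIT_A,
 	DEMO_BIT_B,
 	/* ... one entry per trace option ... */
 	DEMO_LAST_BIT		/* counts how many bits are in use */
 };

 /* Fails to compile once more options are added than fit in 32 bits. */
 _Static_assert(DEMO_LAST_BIT <= 32, "too many trace options for 32 bits");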

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 6 ++++++
 kernel/trace/trace.h | 6 +++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 7446d4238f87..991bab9b79d2 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7046,6 +7046,12 @@ __init static int tracer_alloc_buffers(void)
 	int ring_buf_size;
 	int ret = -ENOMEM;
 
+	/*
+	 * Make sure we don't accidently add more trace options
+	 * than we have bits for.
+	 */
+	BUILD_BUG_ON(TRACE_ITER_LAST_BIT > 32);
+
 	if (!alloc_cpumask_var(&tracing_buffer_mask, GFP_KERNEL))
 		goto out;
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8ed97872b65b..07155652254d 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -966,7 +966,11 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 #undef C
 #define C(a, b) TRACE_ITER_##a##_BIT
 
-enum trace_iterator_bits { TRACE_FLAGS };
+enum trace_iterator_bits {
+	TRACE_FLAGS
+	/* Make sure we don't go more than we have bits for */
+	TRACE_ITER_LAST_BIT
+};
 
 /*
  * By redefining C, we can make TRACE_FLAGS a list of masks that
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 19/25] tracing: Remove access to trace_flags in trace_printk.c
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (17 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 18/25] tracing: Add build bug if we have more trace_flags than bits Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 20/25] tracing: Move sleep-time and graph-time options out of the core trace_flags Steven Rostedt
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0019-tracing-Remove-access-to-trace_flags-in-trace_printk.patch --]
[-- Type: text/plain, Size: 3018 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In the effort to move the global trace_flags to the tracing instances, the
direct access to trace_flags must be removed from trace_printk.c.

Instead, add a new trace_printk_enabled boolean that is set by a new access
function, trace_printk_control(), which enables or disables trace_printk.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c        |  4 +++-
 kernel/trace/trace.h        |  1 +
 kernel/trace/trace_printk.c | 14 ++++++++++----
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 991bab9b79d2..d98789b112c6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3555,8 +3555,10 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
 #endif
 	}
 
-	if (mask == TRACE_ITER_PRINTK)
+	if (mask == TRACE_ITER_PRINTK) {
 		trace_printk_start_stop_comm(enabled);
+		trace_printk_control(enabled);
+	}
 
 	return 0;
 }
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 07155652254d..33d1e5384481 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1318,6 +1318,7 @@ extern const char *__stop___trace_bprintk_fmt[];
 extern const char *__start___tracepoint_str[];
 extern const char *__stop___tracepoint_str[];
 
+void trace_printk_control(bool enabled);
 void trace_printk_init_buffers(void);
 void trace_printk_start_comm(void);
 int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
diff --git a/kernel/trace/trace_printk.c b/kernel/trace/trace_printk.c
index 36c1455b7567..1c2b28536feb 100644
--- a/kernel/trace/trace_printk.c
+++ b/kernel/trace/trace_printk.c
@@ -178,6 +178,12 @@ static inline void format_mod_start(void) { }
 static inline void format_mod_stop(void) { }
 #endif /* CONFIG_MODULES */
 
+static bool __read_mostly trace_printk_enabled = true;
+
+void trace_printk_control(bool enabled)
+{
+	trace_printk_enabled = enabled;
+}
 
 __initdata_or_module static
 struct notifier_block module_trace_bprintk_format_nb = {
@@ -192,7 +198,7 @@ int __trace_bprintk(unsigned long ip, const char *fmt, ...)
 	if (unlikely(!fmt))
 		return 0;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!trace_printk_enabled)
 		return 0;
 
 	va_start(ap, fmt);
@@ -207,7 +213,7 @@ int __ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap)
 	if (unlikely(!fmt))
 		return 0;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!trace_printk_enabled)
 		return 0;
 
 	return trace_vbprintk(ip, fmt, ap);
@@ -219,7 +225,7 @@ int __trace_printk(unsigned long ip, const char *fmt, ...)
 	int ret;
 	va_list ap;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!trace_printk_enabled)
 		return 0;
 
 	va_start(ap, fmt);
@@ -231,7 +237,7 @@ EXPORT_SYMBOL_GPL(__trace_printk);
 
 int __ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap)
 {
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!trace_printk_enabled)
 		return 0;
 
 	return trace_vprintk(ip, fmt, ap);
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 20/25] tracing: Move sleep-time and graph-time options out of the core trace_flags
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (18 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 19/25] tracing: Remove access to trace_flags in trace_printk.c Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 21/25] tracing: Move trace_flags from global to a trace_array field Steven Rostedt
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0020-tracing-Move-sleep-time-and-graph-time-options-out-o.patch --]
[-- Type: text/plain, Size: 5476 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The sleep-time and graph-time options are only for the function graph
tracer and are not used by anything else. As tracer options are now visible
even when the tracer is not activated, it is better to move the function
graph specific tracer options into the function graph tracer itself.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c                | 19 +++++++++++++++++--
 kernel/trace/trace.c                 |  2 +-
 kernel/trace/trace.h                 | 11 +++++------
 kernel/trace/trace_functions_graph.c | 13 ++++++++++++-
 4 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b0623ac785a2..e76384894147 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -243,6 +243,11 @@ static void ftrace_sync_ipi(void *data)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static void update_function_graph_func(void);
+
+/* Both enabled by default (can be cleared by function_graph tracer flags */
+static bool fgraph_sleep_time = true;
+static bool fgraph_graph_time = true;
+
 #else
 static inline void update_function_graph_func(void) { }
 #endif
@@ -917,7 +922,7 @@ static void profile_graph_return(struct ftrace_graph_ret *trace)
 
 	calltime = trace->rettime - trace->calltime;
 
-	if (!(trace_flags & TRACE_ITER_GRAPH_TIME)) {
+	if (!fgraph_graph_time) {
 		int index;
 
 		index = trace->depth;
@@ -5639,6 +5644,16 @@ static struct ftrace_ops graph_ops = {
 	ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash)
 };
 
+void ftrace_graph_sleep_time_control(bool enable)
+{
+	fgraph_sleep_time = enable;
+}
+
+void ftrace_graph_graph_time_control(bool enable)
+{
+	fgraph_graph_time = enable;
+}
+
 int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
 {
 	return 0;
@@ -5707,7 +5722,7 @@ ftrace_graph_probe_sched_switch(void *ignore,
 	 * Does the user want to count the time a function was asleep.
 	 * If so, do not update the time stamps.
 	 */
-	if (trace_flags & TRACE_ITER_SLEEP_TIME)
+	if (fgraph_sleep_time)
 		return;
 
 	timestamp = trace_clock_local();
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d98789b112c6..e26933c2edaa 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -492,7 +492,7 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 
 /* trace_flags holds trace_options default values */
 unsigned long trace_flags =
-	FUNCTION_DEFAULT_FLAGS | FUNCTION_GRAPH_DEFAULT_FLAGS |
+	FUNCTION_DEFAULT_FLAGS |
 	TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
 	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO |
 	TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 33d1e5384481..5219bf5f708a 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -714,9 +714,14 @@ extern char trace_find_mark(unsigned long long duration);
 #define TRACE_GRAPH_PRINT_ABS_TIME      0x20
 #define TRACE_GRAPH_PRINT_IRQS          0x40
 #define TRACE_GRAPH_PRINT_TAIL          0x80
+#define TRACE_GRAPH_SLEEP_TIME		0x100
+#define TRACE_GRAPH_GRAPH_TIME		0x200
 #define TRACE_GRAPH_PRINT_FILL_SHIFT	28
 #define TRACE_GRAPH_PRINT_FILL_MASK	(0x3 << TRACE_GRAPH_PRINT_FILL_SHIFT)
 
+extern void ftrace_graph_sleep_time_control(bool enable);
+extern void ftrace_graph_graph_time_control(bool enable);
+
 extern enum print_line_t
 print_graph_function_flags(struct trace_iterator *iter, u32 flags);
 extern void print_graph_headers_flags(struct seq_file *s, u32 flags);
@@ -892,15 +897,9 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
  */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 # define FGRAPH_FLAGS						\
-		C(SLEEP_TIME,		"sleep-time"),		\
-		C(GRAPH_TIME,		"graph-time"),		\
 		C(DISPLAY_GRAPH,	"display-graph"),
-/* Initially set for trace_flags */
-# define FUNCTION_GRAPH_DEFAULT_FLAGS \
-	(TRACE_ITER_SLEEP_TIME | TRACE_ITER_GRAPH_TIME)
 #else
 # define FGRAPH_FLAGS
-# define FUNCTION_GRAPH_DEFAULT_FLAGS  0UL
 #endif
 
 #ifdef CONFIG_BRANCH_TRACER
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index ca98445782ac..86e45c2658e4 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -83,13 +83,18 @@ static struct tracer_opt trace_opts[] = {
 	{ TRACER_OPT(funcgraph-irqs, TRACE_GRAPH_PRINT_IRQS) },
 	/* Display function name after trailing } */
 	{ TRACER_OPT(funcgraph-tail, TRACE_GRAPH_PRINT_TAIL) },
+	/* Include sleep time (scheduled out) between entry and return */
+	{ TRACER_OPT(sleep-time, TRACE_GRAPH_SLEEP_TIME) },
+	/* Include time within nested functions */
+	{ TRACER_OPT(graph-time, TRACE_GRAPH_GRAPH_TIME) },
 	{ } /* Empty entry */
 };
 
 static struct tracer_flags tracer_flags = {
 	/* Don't display overruns, proc, or tail by default */
 	.val = TRACE_GRAPH_PRINT_CPU | TRACE_GRAPH_PRINT_OVERHEAD |
-	       TRACE_GRAPH_PRINT_DURATION | TRACE_GRAPH_PRINT_IRQS,
+	       TRACE_GRAPH_PRINT_DURATION | TRACE_GRAPH_PRINT_IRQS |
+	       TRACE_GRAPH_SLEEP_TIME | TRACE_GRAPH_GRAPH_TIME,
 	.opts = trace_opts
 };
 
@@ -1362,6 +1367,12 @@ func_graph_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
 	if (bit == TRACE_GRAPH_PRINT_IRQS)
 		ftrace_graph_skip_irqs = !set;
 
+	if (bit == TRACE_GRAPH_SLEEP_TIME)
+		ftrace_graph_sleep_time_control(set);
+
+	if (bit == TRACE_GRAPH_GRAPH_TIME)
+		ftrace_graph_graph_time_control(set);
+
 	return 0;
 }
 
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 21/25] tracing: Move trace_flags from global to a trace_array field
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (19 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 20/25] tracing: Move sleep-time and graph-time options out of the core trace_flags Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 22/25] tracing: Add a method to pass in trace_array descriptor to option files Steven Rostedt
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0021-tracing-Move-trace_flags-from-global-to-a-trace_arra.patch --]
[-- Type: text/plain, Size: 35900 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In preparation for making trace options per instance, the global trace_flags
needs to be moved from being a global variable to a field within the trace
instance trace_array structure.

There's still more work to do, as some functions still use trace_flags
without being passed a way to reach the current trace_array. For those,
the global_trace is used directly (from trace.c). This includes setting
and clearing the trace_flags. This means that when a new instance is
created, it simply inherits the trace_flags of the global_trace and will
not be able to modify them. In the places where the global_trace is still
used directly, the flags of an instance may not affect parts of its own
trace. These will be fixed in future changes.
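
Where an iterator or event is available, the conversion is mechanical; a
minimal sketch of the new access pattern (the wrapper name is made up, the
fields are the ones used in the patch below):

  static bool example_wants_verbose(struct trace_iterator *iter)
  {
          struct trace_array *tr = iter->tr;

          return !!(tr->trace_flags & TRACE_ITER_VERBOSE);
  }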

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/blktrace.c              |  7 +--
 kernel/trace/trace.c                 | 92 +++++++++++++++++++++---------------
 kernel/trace/trace.h                 |  5 +-
 kernel/trace/trace_events.c          |  3 +-
 kernel/trace/trace_functions_graph.c | 50 ++++++++++++--------
 kernel/trace/trace_irqsoff.c         | 28 ++++++-----
 kernel/trace/trace_kdb.c             |  8 ++--
 kernel/trace/trace_output.c          | 14 ++++--
 kernel/trace/trace_sched_wakeup.c    | 26 +++++-----
 kernel/trace/trace_syscalls.c        |  3 +-
 10 files changed, 135 insertions(+), 101 deletions(-)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 973d41d81aa5..b2fcf472774e 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1343,6 +1343,7 @@ static const struct {
 static enum print_line_t print_one_line(struct trace_iterator *iter,
 					bool classic)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	const struct blk_io_trace *t;
 	u16 what;
@@ -1351,7 +1352,7 @@ static enum print_line_t print_one_line(struct trace_iterator *iter,
 
 	t	   = te_blk_io_trace(iter->ent);
 	what	   = t->action & ((1 << BLK_TC_SHIFT) - 1);
-	long_act   = !!(trace_flags & TRACE_ITER_VERBOSE);
+	long_act   = !!(tr->trace_flags & TRACE_ITER_VERBOSE);
 	log_action = classic ? &blk_log_action_classic : &blk_log_action;
 
 	if (t->action == BLK_TN_MESSAGE) {
@@ -1413,9 +1414,9 @@ blk_tracer_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
 	/* don't output context-info for blk_classic output */
 	if (bit == TRACE_BLK_OPT_CLASSIC) {
 		if (set)
-			trace_flags &= ~TRACE_ITER_CONTEXT_INFO;
+			tr->trace_flags &= ~TRACE_ITER_CONTEXT_INFO;
 		else
-			trace_flags |= TRACE_ITER_CONTEXT_INFO;
+			tr->trace_flags |= TRACE_ITER_CONTEXT_INFO;
 	}
 	return 0;
 }
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e26933c2edaa..4e82f4ad68dc 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -250,6 +250,14 @@ unsigned long long ns2usecs(cycle_t nsec)
 	return nsec;
 }
 
+/* trace_flags holds trace_options default values */
+#define TRACE_DEFAULT_FLAGS						\
+	(FUNCTION_DEFAULT_FLAGS |					\
+	 TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |			\
+	 TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO |		\
+	 TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |			\
+	 TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS)
+
 /*
  * The global_trace is the descriptor that holds the tracing
  * buffers for the live tracing. For each CPU, it contains
@@ -262,7 +270,9 @@ unsigned long long ns2usecs(cycle_t nsec)
  * pages for the buffer for that CPU. Each CPU has the same number
  * of pages allocated for its buffer.
  */
-static struct trace_array	global_trace;
+static struct trace_array global_trace = {
+	.trace_flags = TRACE_DEFAULT_FLAGS,
+};
 
 LIST_HEAD(ftrace_trace_arrays);
 
@@ -490,15 +500,6 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 
 #endif
 
-/* trace_flags holds trace_options default values */
-unsigned long trace_flags =
-	FUNCTION_DEFAULT_FLAGS |
-	TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
-	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO |
-	TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
-	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS
-	;
-
 static void tracer_tracing_on(struct trace_array *tr)
 {
 	if (tr->trace_buffer.buffer)
@@ -543,7 +544,7 @@ int __trace_puts(unsigned long ip, const char *str, int size)
 	int alloc;
 	int pc;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!(global_trace.trace_flags & TRACE_ITER_PRINTK))
 		return 0;
 
 	pc = preempt_count();
@@ -593,7 +594,7 @@ int __trace_bputs(unsigned long ip, const char *str)
 	int size = sizeof(struct bputs_entry);
 	int pc;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!(global_trace.trace_flags & TRACE_ITER_PRINTK))
 		return 0;
 
 	pc = preempt_count();
@@ -1875,7 +1876,7 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 				      unsigned long flags,
 				      int skip, int pc, struct pt_regs *regs)
 {
-	if (!(trace_flags & TRACE_ITER_STACKTRACE))
+	if (!(global_trace.trace_flags & TRACE_ITER_STACKTRACE))
 		return;
 
 	__ftrace_trace_stack(buffer, flags, skip, pc, regs);
@@ -1919,7 +1920,7 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 	struct userstack_entry *entry;
 	struct stack_trace trace;
 
-	if (!(trace_flags & TRACE_ITER_USERSTACKTRACE))
+	if (!(global_trace.trace_flags & TRACE_ITER_USERSTACKTRACE))
 		return;
 
 	/*
@@ -2236,7 +2237,7 @@ int trace_array_printk(struct trace_array *tr,
 	int ret;
 	va_list ap;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!(global_trace.trace_flags & TRACE_ITER_PRINTK))
 		return 0;
 
 	va_start(ap, fmt);
@@ -2251,7 +2252,7 @@ int trace_array_printk_buf(struct ring_buffer *buffer,
 	int ret;
 	va_list ap;
 
-	if (!(trace_flags & TRACE_ITER_PRINTK))
+	if (!(global_trace.trace_flags & TRACE_ITER_PRINTK))
 		return 0;
 
 	va_start(ap, fmt);
@@ -2592,7 +2593,7 @@ static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file
 void
 print_trace_header(struct seq_file *m, struct trace_iterator *iter)
 {
-	unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK);
+	unsigned long sym_flags = (global_trace.trace_flags & TRACE_ITER_SYM_MASK);
 	struct trace_buffer *buf = iter->trace_buffer;
 	struct trace_array_cpu *data = per_cpu_ptr(buf->data, buf->cpu);
 	struct tracer *type = iter->trace;
@@ -2654,8 +2655,9 @@ print_trace_header(struct seq_file *m, struct trace_iterator *iter)
 static void test_cpu_buff_start(struct trace_iterator *iter)
 {
 	struct trace_seq *s = &iter->seq;
+	struct trace_array *tr = iter->tr;
 
-	if (!(trace_flags & TRACE_ITER_ANNOTATE))
+	if (!(tr->trace_flags & TRACE_ITER_ANNOTATE))
 		return;
 
 	if (!(iter->iter_flags & TRACE_FILE_ANNOTATE))
@@ -2677,8 +2679,9 @@ static void test_cpu_buff_start(struct trace_iterator *iter)
 
 static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
-	unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK);
+	unsigned long sym_flags = (tr->trace_flags & TRACE_ITER_SYM_MASK);
 	struct trace_entry *entry;
 	struct trace_event *event;
 
@@ -2688,7 +2691,7 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
 
 	event = ftrace_find_event(entry->type);
 
-	if (trace_flags & TRACE_ITER_CONTEXT_INFO) {
+	if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
 		if (iter->iter_flags & TRACE_FILE_LAT_FMT)
 			trace_print_lat_context(iter);
 		else
@@ -2708,13 +2711,14 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
 
 static enum print_line_t print_raw_fmt(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *entry;
 	struct trace_event *event;
 
 	entry = iter->ent;
 
-	if (trace_flags & TRACE_ITER_CONTEXT_INFO)
+	if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO)
 		trace_seq_printf(s, "%d %d %llu ",
 				 entry->pid, iter->cpu, iter->ts);
 
@@ -2732,6 +2736,7 @@ static enum print_line_t print_raw_fmt(struct trace_iterator *iter)
 
 static enum print_line_t print_hex_fmt(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	unsigned char newline = '\n';
 	struct trace_entry *entry;
@@ -2739,7 +2744,7 @@ static enum print_line_t print_hex_fmt(struct trace_iterator *iter)
 
 	entry = iter->ent;
 
-	if (trace_flags & TRACE_ITER_CONTEXT_INFO) {
+	if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
 		SEQ_PUT_HEX_FIELD(s, entry->pid);
 		SEQ_PUT_HEX_FIELD(s, iter->cpu);
 		SEQ_PUT_HEX_FIELD(s, iter->ts);
@@ -2761,13 +2766,14 @@ static enum print_line_t print_hex_fmt(struct trace_iterator *iter)
 
 static enum print_line_t print_bin_fmt(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *entry;
 	struct trace_event *event;
 
 	entry = iter->ent;
 
-	if (trace_flags & TRACE_ITER_CONTEXT_INFO) {
+	if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
 		SEQ_PUT_FIELD(s, entry->pid);
 		SEQ_PUT_FIELD(s, iter->cpu);
 		SEQ_PUT_FIELD(s, iter->ts);
@@ -2816,6 +2822,8 @@ int trace_empty(struct trace_iterator *iter)
 /*  Called with trace_event_read_lock() held. */
 enum print_line_t print_trace_line(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
+	unsigned long trace_flags = tr->trace_flags;
 	enum print_line_t ret;
 
 	if (iter->lost_events) {
@@ -2861,6 +2869,7 @@ enum print_line_t print_trace_line(struct trace_iterator *iter)
 void trace_latency_header(struct seq_file *m)
 {
 	struct trace_iterator *iter = m->private;
+	struct trace_array *tr = iter->tr;
 
 	/* print nothing if the buffers are empty */
 	if (trace_empty(iter))
@@ -2869,13 +2878,15 @@ void trace_latency_header(struct seq_file *m)
 	if (iter->iter_flags & TRACE_FILE_LAT_FMT)
 		print_trace_header(m, iter);
 
-	if (!(trace_flags & TRACE_ITER_VERBOSE))
+	if (!(tr->trace_flags & TRACE_ITER_VERBOSE))
 		print_lat_help_header(m);
 }
 
 void trace_default_header(struct seq_file *m)
 {
 	struct trace_iterator *iter = m->private;
+	struct trace_array *tr = iter->tr;
+	unsigned long trace_flags = tr->trace_flags;
 
 	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
 		return;
@@ -3220,7 +3231,7 @@ static int tracing_open(struct inode *inode, struct file *file)
 		iter = __tracing_open(inode, file, false);
 		if (IS_ERR(iter))
 			ret = PTR_ERR(iter);
-		else if (trace_flags & TRACE_ITER_LATENCY_FMT)
+		else if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
 			iter->iter_flags |= TRACE_FILE_LAT_FMT;
 	}
 
@@ -3467,7 +3478,7 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
 	trace_opts = tr->current_trace->flags->opts;
 
 	for (i = 0; trace_options[i]; i++) {
-		if (trace_flags & (1 << i))
+		if (tr->trace_flags & (1 << i))
 			seq_printf(m, "%s\n", trace_options[i]);
 		else
 			seq_printf(m, "no%s\n", trace_options[i]);
@@ -3532,7 +3543,7 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
 int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
 {
 	/* do nothing if flag is already set */
-	if (!!(trace_flags & mask) == !!enabled)
+	if (!!(tr->trace_flags & mask) == !!enabled)
 		return 0;
 
 	/* Give the tracer a chance to approve the change */
@@ -3541,9 +3552,9 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
 			return -EINVAL;
 
 	if (enabled)
-		trace_flags |= mask;
+		tr->trace_flags |= mask;
 	else
-		trace_flags &= ~mask;
+		tr->trace_flags &= ~mask;
 
 	if (mask == TRACE_ITER_RECORD_CMD)
 		trace_event_enable_cmd_record(enabled);
@@ -4558,7 +4569,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
 	/* trace pipe does not show start of buffer */
 	cpumask_setall(iter->started);
 
-	if (trace_flags & TRACE_ITER_LATENCY_FMT)
+	if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
 		iter->iter_flags |= TRACE_FILE_LAT_FMT;
 
 	/* Output in nanoseconds only if we are using a clock in nanoseconds. */
@@ -4615,11 +4626,13 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
 static unsigned int
 trace_poll(struct trace_iterator *iter, struct file *filp, poll_table *poll_table)
 {
+	struct trace_array *tr = iter->tr;
+
 	/* Iterators are static, they should be filled or empty */
 	if (trace_buffer_iter(iter, iter->cpu_file))
 		return POLLIN | POLLRDNORM;
 
-	if (trace_flags & TRACE_ITER_BLOCK)
+	if (tr->trace_flags & TRACE_ITER_BLOCK)
 		/*
 		 * Always select as readable when in blocking mode
 		 */
@@ -5036,7 +5049,7 @@ tracing_free_buffer_release(struct inode *inode, struct file *filp)
 	struct trace_array *tr = inode->i_private;
 
 	/* disable tracing ? */
-	if (trace_flags & TRACE_ITER_STOP_ON_FREE)
+	if (tr->trace_flags & TRACE_ITER_STOP_ON_FREE)
 		tracer_tracing_off(tr);
 	/* resize the ring buffer to 0 */
 	tracing_resize_ring_buffer(tr, 0, RING_BUFFER_ALL_CPUS);
@@ -5069,7 +5082,7 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 	if (tracing_disabled)
 		return -EINVAL;
 
-	if (!(trace_flags & TRACE_ITER_MARKERS))
+	if (!(tr->trace_flags & TRACE_ITER_MARKERS))
 		return -EINVAL;
 
 	if (cnt > TRACE_BUF_SIZE)
@@ -6180,7 +6193,7 @@ trace_options_core_read(struct file *filp, char __user *ubuf, size_t cnt,
 	long index = (long)filp->private_data;
 	char *buf;
 
-	if (trace_flags & (1 << index))
+	if (global_trace.trace_flags & (1 << index))
 		buf = "1\n";
 	else
 		buf = "0\n";
@@ -6407,7 +6420,7 @@ allocate_trace_buffer(struct trace_array *tr, struct trace_buffer *buf, int size
 {
 	enum ring_buffer_flags rb_flags;
 
-	rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
+	rb_flags = tr->trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
 
 	buf->tr = tr;
 
@@ -6502,6 +6515,8 @@ static int instance_mkdir(const char *name)
 	if (!alloc_cpumask_var(&tr->tracing_cpumask, GFP_KERNEL))
 		goto out_free_tr;
 
+	tr->trace_flags = global_trace.trace_flags;
+
 	cpumask_copy(tr->tracing_cpumask, cpu_all_mask);
 
 	raw_spin_lock_init(&tr->start_lock);
@@ -6938,6 +6953,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
 	/* use static because iter can be a bit big for the stack */
 	static struct trace_iterator iter;
 	static atomic_t dump_running;
+	struct trace_array *tr = &global_trace;
 	unsigned int old_userobj;
 	unsigned long flags;
 	int cnt = 0, cpu;
@@ -6967,10 +6983,10 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
 		atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
 	}
 
-	old_userobj = trace_flags & TRACE_ITER_SYM_USEROBJ;
+	old_userobj = tr->trace_flags & TRACE_ITER_SYM_USEROBJ;
 
 	/* don't look at user memory in panic mode */
-	trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
+	tr->trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
 
 	switch (oops_dump_mode) {
 	case DUMP_ALL:
@@ -7033,7 +7049,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
 		printk(KERN_TRACE "---------------------------------\n");
 
  out_enable:
-	trace_flags |= old_userobj;
+	tr->trace_flags |= old_userobj;
 
 	for_each_tracing_cpu(cpu) {
 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 5219bf5f708a..eda4e6f8159b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -217,6 +217,7 @@ struct trace_array {
 	int			stop_count;
 	int			clock_id;
 	struct tracer		*current_trace;
+	unsigned int		trace_flags;
 	unsigned int		flags;
 	raw_spinlock_t		start_lock;
 	struct dentry		*dir;
@@ -698,8 +699,6 @@ int trace_array_printk_buf(struct ring_buffer *buffer,
 void trace_printk_seq(struct trace_seq *s);
 enum print_line_t print_trace_line(struct trace_iterator *iter);
 
-extern unsigned long trace_flags;
-
 extern char trace_find_mark(unsigned long long duration);
 
 /* Standard output formatting function used for function return traces */
@@ -994,7 +993,7 @@ extern int enable_branch_tracing(struct trace_array *tr);
 extern void disable_branch_tracing(void);
 static inline int trace_branch_enable(struct trace_array *tr)
 {
-	if (trace_flags & TRACE_ITER_BRANCH)
+	if (tr->trace_flags & TRACE_ITER_BRANCH)
 		return enable_branch_tracing(tr);
 	return 0;
 }
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index b2e3d8d80df8..0f394112a0a7 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -338,6 +338,7 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
 					 int enable, int soft_disable)
 {
 	struct trace_event_call *call = file->event_call;
+	struct trace_array *tr = file->tr;
 	int ret = 0;
 	int disable;
 
@@ -401,7 +402,7 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
 			if (soft_disable)
 				set_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);
 
-			if (trace_flags & TRACE_ITER_RECORD_CMD) {
+			if (tr->trace_flags & TRACE_ITER_RECORD_CMD) {
 				tracing_start_cmdline_record();
 				set_bit(EVENT_FILE_FL_RECORDED_CMD_BIT, &file->flags);
 			}
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 86e45c2658e4..92382af7a213 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -112,8 +112,8 @@ enum {
 };
 
 static void
-print_graph_duration(unsigned long long duration, struct trace_seq *s,
-		     u32 flags);
+print_graph_duration(struct trace_array *tr, unsigned long long duration,
+		     struct trace_seq *s, u32 flags);
 
 /* Add a function return address to the trace stack on thread info.*/
 int
@@ -658,6 +658,7 @@ static void
 print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 		enum trace_type type, int cpu, pid_t pid, u32 flags)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *ent = iter->ent;
 
@@ -665,7 +666,7 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 		addr >= (unsigned long)__irqentry_text_end)
 		return;
 
-	if (trace_flags & TRACE_ITER_CONTEXT_INFO) {
+	if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
 		/* Absolute time */
 		if (flags & TRACE_GRAPH_PRINT_ABS_TIME)
 			print_graph_abs_time(iter->ts, s);
@@ -681,19 +682,19 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 		}
 
 		/* Latency format */
-		if (trace_flags & TRACE_ITER_LATENCY_FMT)
+		if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
 			print_graph_lat_fmt(s, ent);
 	}
 
 	/* No overhead */
-	print_graph_duration(0, s, flags | FLAGS_FILL_START);
+	print_graph_duration(tr, 0, s, flags | FLAGS_FILL_START);
 
 	if (type == TRACE_GRAPH_ENT)
 		trace_seq_puts(s, "==========>");
 	else
 		trace_seq_puts(s, "<==========");
 
-	print_graph_duration(0, s, flags | FLAGS_FILL_END);
+	print_graph_duration(tr, 0, s, flags | FLAGS_FILL_END);
 	trace_seq_putc(s, '\n');
 }
 
@@ -731,11 +732,11 @@ trace_print_graph_duration(unsigned long long duration, struct trace_seq *s)
 }
 
 static void
-print_graph_duration(unsigned long long duration, struct trace_seq *s,
-		     u32 flags)
+print_graph_duration(struct trace_array *tr, unsigned long long duration,
+		     struct trace_seq *s, u32 flags)
 {
 	if (!(flags & TRACE_GRAPH_PRINT_DURATION) ||
-	    !(trace_flags & TRACE_ITER_CONTEXT_INFO))
+	    !(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
 		return;
 
 	/* No real adata, just filling the column with spaces */
@@ -769,6 +770,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 		struct trace_seq *s, u32 flags)
 {
 	struct fgraph_data *data = iter->private;
+	struct trace_array *tr = iter->tr;
 	struct ftrace_graph_ret *graph_ret;
 	struct ftrace_graph_ent *call;
 	unsigned long long duration;
@@ -797,7 +799,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 	}
 
 	/* Overhead and duration */
-	print_graph_duration(duration, s, flags);
+	print_graph_duration(tr, duration, s, flags);
 
 	/* Function */
 	for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++)
@@ -815,6 +817,7 @@ print_graph_entry_nested(struct trace_iterator *iter,
 {
 	struct ftrace_graph_ent *call = &entry->graph_ent;
 	struct fgraph_data *data = iter->private;
+	struct trace_array *tr = iter->tr;
 	int i;
 
 	if (data) {
@@ -830,7 +833,7 @@ print_graph_entry_nested(struct trace_iterator *iter,
 	}
 
 	/* No time */
-	print_graph_duration(0, s, flags | FLAGS_FILL_FULL);
+	print_graph_duration(tr, 0, s, flags | FLAGS_FILL_FULL);
 
 	/* Function */
 	for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++)
@@ -854,6 +857,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
 {
 	struct fgraph_data *data = iter->private;
 	struct trace_entry *ent = iter->ent;
+	struct trace_array *tr = iter->tr;
 	int cpu = iter->cpu;
 
 	/* Pid */
@@ -863,7 +867,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
 		/* Interrupt */
 		print_graph_irq(iter, addr, type, cpu, ent->pid, flags);
 
-	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
+	if (!(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
 		return;
 
 	/* Absolute time */
@@ -881,7 +885,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
 	}
 
 	/* Latency format */
-	if (trace_flags & TRACE_ITER_LATENCY_FMT)
+	if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
 		print_graph_lat_fmt(s, ent);
 
 	return;
@@ -1032,6 +1036,7 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
 {
 	unsigned long long duration = trace->rettime - trace->calltime;
 	struct fgraph_data *data = iter->private;
+	struct trace_array *tr = iter->tr;
 	pid_t pid = ent->pid;
 	int cpu = iter->cpu;
 	int func_match = 1;
@@ -1063,7 +1068,7 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
 	print_graph_prologue(iter, s, 0, 0, flags);
 
 	/* Overhead and duration */
-	print_graph_duration(duration, s, flags);
+	print_graph_duration(tr, duration, s, flags);
 
 	/* Closing brace */
 	for (i = 0; i < trace->depth * TRACE_GRAPH_INDENT; i++)
@@ -1096,7 +1101,8 @@ static enum print_line_t
 print_graph_comment(struct trace_seq *s, struct trace_entry *ent,
 		    struct trace_iterator *iter, u32 flags)
 {
-	unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK);
+	struct trace_array *tr = iter->tr;
+	unsigned long sym_flags = (tr->trace_flags & TRACE_ITER_SYM_MASK);
 	struct fgraph_data *data = iter->private;
 	struct trace_event *event;
 	int depth = 0;
@@ -1109,7 +1115,7 @@ print_graph_comment(struct trace_seq *s, struct trace_entry *ent,
 	print_graph_prologue(iter, s, 0, 0, flags);
 
 	/* No time */
-	print_graph_duration(0, s, flags | FLAGS_FILL_FULL);
+	print_graph_duration(tr, 0, s, flags | FLAGS_FILL_FULL);
 
 	/* Indentation */
 	if (depth > 0)
@@ -1250,9 +1256,10 @@ static void print_lat_header(struct seq_file *s, u32 flags)
 	seq_printf(s, "#%.*s||| /                      \n", size, spaces);
 }
 
-static void __print_graph_headers_flags(struct seq_file *s, u32 flags)
+static void __print_graph_headers_flags(struct trace_array *tr,
+					struct seq_file *s, u32 flags)
 {
-	int lat = trace_flags & TRACE_ITER_LATENCY_FMT;
+	int lat = tr->trace_flags & TRACE_ITER_LATENCY_FMT;
 
 	if (lat)
 		print_lat_header(s, flags);
@@ -1294,11 +1301,12 @@ static void print_graph_headers(struct seq_file *s)
 void print_graph_headers_flags(struct seq_file *s, u32 flags)
 {
 	struct trace_iterator *iter = s->private;
+	struct trace_array *tr = iter->tr;
 
-	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
+	if (!(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
 		return;
 
-	if (trace_flags & TRACE_ITER_LATENCY_FMT) {
+	if (tr->trace_flags & TRACE_ITER_LATENCY_FMT) {
 		/* print nothing if the buffers are empty */
 		if (trace_empty(iter))
 			return;
@@ -1306,7 +1314,7 @@ void print_graph_headers_flags(struct seq_file *s, u32 flags)
 		print_trace_header(s, iter);
 	}
 
-	__print_graph_headers_flags(s, flags);
+	__print_graph_headers_flags(tr, s, flags);
 }
 
 void graph_trace_open(struct trace_iterator *iter)
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index c834b95cbe0b..eaf5291bcf63 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -58,13 +58,13 @@ irq_trace(void)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int irqsoff_display_graph(struct trace_array *tr, int set);
-# define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
+# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 #else
 static inline int irqsoff_display_graph(struct trace_array *tr, int set)
 {
 	return -EINVAL;
 }
-# define is_graph() false
+# define is_graph(tr) false
 #endif
 
 /*
@@ -149,7 +149,7 @@ static int irqsoff_display_graph(struct trace_array *tr, int set)
 {
 	int cpu;
 
-	if (!(is_graph() ^ set))
+	if (!(is_graph(tr) ^ set))
 		return 0;
 
 	stop_irqsoff_tracer(irqsoff_trace, !set);
@@ -198,7 +198,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace)
 
 static void irqsoff_trace_open(struct trace_iterator *iter)
 {
-	if (is_graph())
+	if (is_graph(iter->tr))
 		graph_trace_open(iter);
 
 }
@@ -220,7 +220,7 @@ static enum print_line_t irqsoff_print_line(struct trace_iterator *iter)
 	 * In graph mode call the graph tracer output function,
 	 * otherwise go with the TRACE_FN event handler
 	 */
-	if (is_graph())
+	if (is_graph(iter->tr))
 		return print_graph_function_flags(iter, GRAPH_TRACER_FLAGS);
 
 	return TRACE_TYPE_UNHANDLED;
@@ -228,7 +228,9 @@ static enum print_line_t irqsoff_print_line(struct trace_iterator *iter)
 
 static void irqsoff_print_header(struct seq_file *s)
 {
-	if (is_graph())
+	struct trace_array *tr = irqsoff_trace;
+
+	if (is_graph(tr))
 		print_graph_headers_flags(s, GRAPH_TRACER_FLAGS);
 	else
 		trace_default_header(s);
@@ -239,7 +241,7 @@ __trace_function(struct trace_array *tr,
 		 unsigned long ip, unsigned long parent_ip,
 		 unsigned long flags, int pc)
 {
-	if (is_graph())
+	if (is_graph(tr))
 		trace_graph_function(tr, ip, parent_ip, flags, pc);
 	else
 		trace_function(tr, ip, parent_ip, flags, pc);
@@ -516,7 +518,7 @@ static int register_irqsoff_function(struct trace_array *tr, int graph, int set)
 	int ret;
 
 	/* 'set' is set if TRACE_ITER_FUNCTION is about to be set */
-	if (function_enabled || (!set && !(trace_flags & TRACE_ITER_FUNCTION)))
+	if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER_FUNCTION)))
 		return 0;
 
 	if (graph)
@@ -550,9 +552,9 @@ static int irqsoff_function_set(struct trace_array *tr, u32 mask, int set)
 		return 0;
 
 	if (set)
-		register_irqsoff_function(tr, is_graph(), 1);
+		register_irqsoff_function(tr, is_graph(tr), 1);
 	else
-		unregister_irqsoff_function(tr, is_graph());
+		unregister_irqsoff_function(tr, is_graph(tr));
 	return 1;
 }
 #else
@@ -610,7 +612,7 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
 	if (irqsoff_busy)
 		return -EBUSY;
 
-	save_flags = trace_flags;
+	save_flags = tr->trace_flags;
 
 	/* non overwrite screws up the latency tracers */
 	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
@@ -626,7 +628,7 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
 
 	/* Only toplevel instance supports graph tracing */
 	if (start_irqsoff_tracer(tr, (tr->flags & TRACE_ARRAY_FL_GLOBAL &&
-				      is_graph())))
+				      is_graph(tr))))
 		printk(KERN_ERR "failed to start irqsoff tracer\n");
 
 	irqsoff_busy = true;
@@ -638,7 +640,7 @@ static void irqsoff_tracer_reset(struct trace_array *tr)
 	int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
 	int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
 
-	stop_irqsoff_tracer(tr, is_graph());
+	stop_irqsoff_tracer(tr, is_graph(tr));
 
 	set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, lat_flag);
 	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
index 3ccf5c2c1320..57149bce6aad 100644
--- a/kernel/trace/trace_kdb.c
+++ b/kernel/trace/trace_kdb.c
@@ -21,20 +21,22 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
 	/* use static because iter can be a bit big for the stack */
 	static struct trace_iterator iter;
 	static struct ring_buffer_iter *buffer_iter[CONFIG_NR_CPUS];
+	struct trace_array *tr;
 	unsigned int old_userobj;
 	int cnt = 0, cpu;
 
 	trace_init_global_iter(&iter);
 	iter.buffer_iter = buffer_iter;
+	tr = iter.tr;
 
 	for_each_tracing_cpu(cpu) {
 		atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
 	}
 
-	old_userobj = trace_flags;
+	old_userobj = tr->trace_flags;
 
 	/* don't look at user memory in panic mode */
-	trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
+	tr->trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
 
 	kdb_printf("Dumping ftrace buffer:\n");
 
@@ -82,7 +84,7 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
 		kdb_printf("---------------------------------\n");
 
 out:
-	trace_flags = old_userobj;
+	tr->trace_flags = old_userobj;
 
 	for_each_tracing_cpu(cpu) {
 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 3b5dcdf19dea..282982195e09 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -476,7 +476,8 @@ char trace_find_mark(unsigned long long d)
 static int
 lat_print_timestamp(struct trace_iterator *iter, u64 next_ts)
 {
-	unsigned long verbose = trace_flags & TRACE_ITER_VERBOSE;
+	struct trace_array *tr = iter->tr;
+	unsigned long verbose = tr->trace_flags & TRACE_ITER_VERBOSE;
 	unsigned long in_ns = iter->iter_flags & TRACE_FILE_TIME_IN_NS;
 	unsigned long long abs_ts = iter->ts - iter->trace_buffer->time_start;
 	unsigned long long rel_ts = next_ts - iter->ts;
@@ -519,6 +520,7 @@ lat_print_timestamp(struct trace_iterator *iter, u64 next_ts)
 
 int trace_print_context(struct trace_iterator *iter)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *entry = iter->ent;
 	unsigned long long t;
@@ -530,7 +532,7 @@ int trace_print_context(struct trace_iterator *iter)
 	trace_seq_printf(s, "%16s-%-5d [%03d] ",
 			       comm, entry->pid, iter->cpu);
 
-	if (trace_flags & TRACE_ITER_IRQ_INFO)
+	if (tr->trace_flags & TRACE_ITER_IRQ_INFO)
 		trace_print_lat_fmt(s, entry);
 
 	if (iter->iter_flags & TRACE_FILE_TIME_IN_NS) {
@@ -546,14 +548,15 @@ int trace_print_context(struct trace_iterator *iter)
 
 int trace_print_lat_context(struct trace_iterator *iter)
 {
-	u64 next_ts;
+	struct trace_array *tr = iter->tr;
 	/* trace_find_next_entry will reset ent_size */
 	int ent_size = iter->ent_size;
 	struct trace_seq *s = &iter->seq;
+	u64 next_ts;
 	struct trace_entry *entry = iter->ent,
 			   *next_entry = trace_find_next_entry(iter, NULL,
 							       &next_ts);
-	unsigned long verbose = (trace_flags & TRACE_ITER_VERBOSE);
+	unsigned long verbose = (tr->trace_flags & TRACE_ITER_VERBOSE);
 
 	/* Restore the original ent_size */
 	iter->ent_size = ent_size;
@@ -1035,6 +1038,7 @@ static struct trace_event trace_stack_event = {
 static enum print_line_t trace_user_stack_print(struct trace_iterator *iter,
 						int flags, struct trace_event *event)
 {
+	struct trace_array *tr = iter->tr;
 	struct userstack_entry *field;
 	struct trace_seq *s = &iter->seq;
 	struct mm_struct *mm = NULL;
@@ -1044,7 +1048,7 @@ static enum print_line_t trace_user_stack_print(struct trace_iterator *iter,
 
 	trace_seq_puts(s, "<user stack trace>\n");
 
-	if (trace_flags & TRACE_ITER_SYM_USEROBJ) {
+	if (tr->trace_flags & TRACE_ITER_SYM_USEROBJ) {
 		struct task_struct *task;
 		/*
 		 * we do the lookup on the thread group leader,
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 4a20f61274d1..4661442de07d 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -39,13 +39,13 @@ static int save_flags;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int wakeup_display_graph(struct trace_array *tr, int set);
-# define is_graph() (trace_flags & TRACE_ITER_DISPLAY_GRAPH)
+# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER_DISPLAY_GRAPH)
 #else
 static inline int wakeup_display_graph(struct trace_array *tr, int set)
 {
 	return 0;
 }
-# define is_graph() false
+# define is_graph(tr) false
 #endif
 
 
@@ -131,7 +131,7 @@ static int register_wakeup_function(struct trace_array *tr, int graph, int set)
 	int ret;
 
 	/* 'set' is set if TRACE_ITER_FUNCTION is about to be set */
-	if (function_enabled || (!set && !(trace_flags & TRACE_ITER_FUNCTION)))
+	if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER_FUNCTION)))
 		return 0;
 
 	if (graph)
@@ -165,9 +165,9 @@ static int wakeup_function_set(struct trace_array *tr, u32 mask, int set)
 		return 0;
 
 	if (set)
-		register_wakeup_function(tr, is_graph(), 1);
+		register_wakeup_function(tr, is_graph(tr), 1);
 	else
-		unregister_wakeup_function(tr, is_graph());
+		unregister_wakeup_function(tr, is_graph(tr));
 	return 1;
 }
 #else
@@ -221,7 +221,7 @@ static void stop_func_tracer(struct trace_array *tr, int graph)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int wakeup_display_graph(struct trace_array *tr, int set)
 {
-	if (!(is_graph() ^ set))
+	if (!(is_graph(tr) ^ set))
 		return 0;
 
 	stop_func_tracer(tr, !set);
@@ -270,7 +270,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace)
 
 static void wakeup_trace_open(struct trace_iterator *iter)
 {
-	if (is_graph())
+	if (is_graph(iter->tr))
 		graph_trace_open(iter);
 }
 
@@ -290,7 +290,7 @@ static enum print_line_t wakeup_print_line(struct trace_iterator *iter)
 	 * In graph mode call the graph tracer output function,
 	 * otherwise go with the TRACE_FN event handler
 	 */
-	if (is_graph())
+	if (is_graph(iter->tr))
 		return print_graph_function_flags(iter, GRAPH_TRACER_FLAGS);
 
 	return TRACE_TYPE_UNHANDLED;
@@ -298,7 +298,7 @@ static enum print_line_t wakeup_print_line(struct trace_iterator *iter)
 
 static void wakeup_print_header(struct seq_file *s)
 {
-	if (is_graph())
+	if (is_graph(wakeup_trace))
 		print_graph_headers_flags(s, GRAPH_TRACER_FLAGS);
 	else
 		trace_default_header(s);
@@ -309,7 +309,7 @@ __trace_function(struct trace_array *tr,
 		 unsigned long ip, unsigned long parent_ip,
 		 unsigned long flags, int pc)
 {
-	if (is_graph())
+	if (is_graph(tr))
 		trace_graph_function(tr, ip, parent_ip, flags, pc);
 	else
 		trace_function(tr, ip, parent_ip, flags, pc);
@@ -639,7 +639,7 @@ static void start_wakeup_tracer(struct trace_array *tr)
 	 */
 	smp_wmb();
 
-	if (start_func_tracer(tr, is_graph()))
+	if (start_func_tracer(tr, is_graph(tr)))
 		printk(KERN_ERR "failed to start wakeup tracer\n");
 
 	return;
@@ -652,7 +652,7 @@ fail_deprobe:
 static void stop_wakeup_tracer(struct trace_array *tr)
 {
 	tracer_enabled = 0;
-	stop_func_tracer(tr, is_graph());
+	stop_func_tracer(tr, is_graph(tr));
 	unregister_trace_sched_switch(probe_wakeup_sched_switch, NULL);
 	unregister_trace_sched_wakeup_new(probe_wakeup, NULL);
 	unregister_trace_sched_wakeup(probe_wakeup, NULL);
@@ -663,7 +663,7 @@ static bool wakeup_busy;
 
 static int __wakeup_tracer_init(struct trace_array *tr)
 {
-	save_flags = trace_flags;
+	save_flags = tr->trace_flags;
 
 	/* non overwrite screws up the latency tracers */
 	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 7d567a4b9fa7..0655afbea83f 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -110,6 +110,7 @@ static enum print_line_t
 print_syscall_enter(struct trace_iterator *iter, int flags,
 		    struct trace_event *event)
 {
+	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *ent = iter->ent;
 	struct syscall_trace_enter *trace;
@@ -136,7 +137,7 @@ print_syscall_enter(struct trace_iterator *iter, int flags,
 			goto end;
 
 		/* parameter types */
-		if (trace_flags & TRACE_ITER_VERBOSE)
+		if (tr->trace_flags & TRACE_ITER_VERBOSE)
 			trace_seq_printf(s, "%s ", entry->types[i]);
 
 		/* parameter values */
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 22/25] tracing: Add a method to pass in trace_array descriptor to option files
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (20 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 21/25] tracing: Move trace_flags from global to a trace_array field Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 23/25] tracing: Make ftrace_trace_stack() depend on general trace_array flag Steven Rostedt
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Johannes Berg

[-- Attachment #1: 0022-tracing-Add-a-method-to-pass-in-trace_array-descript.patch --]
[-- Type: text/plain, Size: 5636 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In preparation for the multi-buffer instances having their own trace
option flags, the trace option files need a way to pass in not only the
flag they represent, but also the trace_array descriptor.

A new field called trace_flags_index is added to the trace_array
descriptor: a 32-byte character array in which each byte stands for one
flag bit. The array is simply filled with its own indexes, where

  index_array[n] = n;

Then the address of the corresponding element of this array is passed to
the file callbacks instead of the raw flag index. Then to retrieve both
the flag index and the
trace_array descriptor:

  data is the passed in argument.

  index = *(unsigned char *)data;

  data -= index;

  /* Now data points to the start of the array in the trace_array */

  tr = container_of(data, struct trace_array, trace_flags_index);
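
The same trick in isolation, as a minimal self-contained sketch
(hypothetical names; the real version is get_tr_index() in the patch
below):

  struct demo {
          unsigned int  flags;
          unsigned char index[32];        /* filled so that index[n] == n */
  };

  static void demo_get(void *data, struct demo **ptr, unsigned int *pindex)
  {
          /* data is &demo->index[n] for some n, so *data == n */
          *pindex = *(unsigned char *)data;

          /* stepping back n bytes lands on &demo->index[0], and
           * container_of() then recovers the enclosing structure */
          *ptr = container_of(data - *pindex, struct demo, index);
  }

So demo_get(&d.index[5], &dp, &i) yields dp == &d and i == 5.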

Suggested-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++------
 kernel/trace/trace.h |  3 +++
 2 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4e82f4ad68dc..5f481887e98b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6186,14 +6186,51 @@ static const struct file_operations trace_options_fops = {
 	.llseek	= generic_file_llseek,
 };
 
+/*
+ * In order to pass in both the trace_array descriptor as well as the index
+ * to the flag that the trace option file represents, the trace_array
+ * has a character array of trace_flags_index[], which holds the index
+ * of the bit for the flag it represents. index[0] == 0, index[1] == 1, etc.
+ * The address of this character array is passed to the flag option file
+ * read/write callbacks.
+ *
+ * In order to extract both the index and the trace_array descriptor,
+ * get_tr_index() uses the following algorithm.
+ *
+ *   idx = *ptr;
+ *
+ * As the pointer itself contains the address of the index (remember
+ * index[1] == 1).
+ *
+ * Then to get the trace_array descriptor, by subtracting that index
+ * from the ptr, we get to the start of the index itself.
+ *
+ *   ptr - idx == &index[0]
+ *
+ * Then a simple container_of() from that pointer gets us to the
+ * trace_array descriptor.
+ */
+static void get_tr_index(void *data, struct trace_array **ptr,
+			 unsigned int *pindex)
+{
+	*pindex = *(unsigned char *)data;
+
+	*ptr = container_of(data - *pindex, struct trace_array,
+			    trace_flags_index);
+}
+
 static ssize_t
 trace_options_core_read(struct file *filp, char __user *ubuf, size_t cnt,
 			loff_t *ppos)
 {
-	long index = (long)filp->private_data;
+	void *tr_index = filp->private_data;
+	struct trace_array *tr;
+	unsigned int index;
 	char *buf;
 
-	if (global_trace.trace_flags & (1 << index))
+	get_tr_index(tr_index, &tr, &index);
+
+	if (tr->trace_flags & (1 << index))
 		buf = "1\n";
 	else
 		buf = "0\n";
@@ -6205,11 +6242,14 @@ static ssize_t
 trace_options_core_write(struct file *filp, const char __user *ubuf, size_t cnt,
 			 loff_t *ppos)
 {
-	struct trace_array *tr = &global_trace;
-	long index = (long)filp->private_data;
+	void *tr_index = filp->private_data;
+	struct trace_array *tr;
+	unsigned int index;
 	unsigned long val;
 	int ret;
 
+	get_tr_index(tr_index, &tr, &index);
+
 	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
 	if (ret)
 		return ret;
@@ -6339,8 +6379,9 @@ create_trace_option_core_file(struct trace_array *tr,
 	if (!t_options)
 		return NULL;
 
-	return trace_create_file(option, 0644, t_options, (void *)index,
-				    &trace_options_core_fops);
+	return trace_create_file(option, 0644, t_options,
+				 (void *)&tr->trace_flags_index[index],
+				 &trace_options_core_fops);
 }
 
 static __init void create_trace_options_dir(struct trace_array *tr)
@@ -6490,6 +6531,15 @@ static void free_trace_buffers(struct trace_array *tr)
 #endif
 }
 
+static void init_trace_flags_index(struct trace_array *tr)
+{
+	int i;
+
+	/* Used by the trace options files */
+	for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++)
+		tr->trace_flags_index[i] = i;
+}
+
 static int instance_mkdir(const char *name)
 {
 	struct trace_array *tr;
@@ -6542,6 +6592,7 @@ static int instance_mkdir(const char *name)
 	}
 
 	init_tracer_tracefs(tr, tr->dir);
+	init_trace_flags_index(tr);
 
 	list_add(&tr->list, &ftrace_trace_arrays);
 
@@ -7068,7 +7119,7 @@ __init static int tracer_alloc_buffers(void)
 	 * Make sure we don't accidently add more trace options
 	 * than we have bits for.
 	 */
-	BUILD_BUG_ON(TRACE_ITER_LAST_BIT > 32);
+	BUILD_BUG_ON(TRACE_ITER_LAST_BIT > TRACE_FLAGS_MAX_SIZE);
 
 	if (!alloc_cpumask_var(&tracing_buffer_mask, GFP_KERNEL))
 		goto out;
@@ -7128,6 +7179,8 @@ __init static int tracer_alloc_buffers(void)
 
 	ftrace_init_global_array_ops(&global_trace);
 
+	init_trace_flags_index(&global_trace);
+
 	register_tracer(&nop_trace);
 
 	/* All seems OK, enable tracing */
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index eda4e6f8159b..423cb48a1d6d 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -168,6 +168,8 @@ struct trace_buffer {
 	int				cpu;
 };
 
+#define TRACE_FLAGS_MAX_SIZE		32
+
 /*
  * The trace array - an array of per-CPU trace arrays. This is the
  * highest level data structure that individual tracers deal with.
@@ -218,6 +220,7 @@ struct trace_array {
 	int			clock_id;
 	struct tracer		*current_trace;
 	unsigned int		trace_flags;
+	unsigned char		trace_flags_index[TRACE_FLAGS_MAX_SIZE];
 	unsigned int		flags;
 	raw_spinlock_t		start_lock;
 	struct dentry		*dir;
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 23/25] tracing: Make ftrace_trace_stack() depend on general trace_array flag
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (21 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 22/25] tracing: Add a method to pass in trace_array descriptor to option files Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 24/25] tracing: Add trace options for core options to instances Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 25/25] tracing: Add trace options for tracer " Steven Rostedt
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0023-tracing-Make-ftrace_trace_stack-depend-on-general-tr.patch --]
[-- Type: text/plain, Size: 4758 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

In preparation for the multi-buffer instances having their own trace_flags,
the check in ftrace_trace_stack() needs to test the flag in the trace_array
descriptor for the current event, not in the global_trace descriptor.
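
A minimal sketch of what the check becomes (the wrapper is hypothetical;
the real test is inlined in ftrace_trace_stack() below, and callers that
cannot reach an instance fall back to &global_trace):

  static inline bool wants_stacktrace(struct trace_array *tr)
  {
          /* per-instance decision instead of global_trace.trace_flags */
          return !!(tr->trace_flags & TRACE_ITER_STACKTRACE);
  }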

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c        | 23 +++++++++++++----------
 kernel/trace/trace_events.c |  6 +++---
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 5f481887e98b..51697b41f5d4 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -482,7 +482,8 @@ static inline void trace_access_lock_init(void)
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
 				 unsigned long flags,
 				 int skip, int pc, struct pt_regs *regs);
-static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+static inline void ftrace_trace_stack(struct trace_array *tr,
+				      struct ring_buffer *buffer,
 				      unsigned long flags,
 				      int skip, int pc, struct pt_regs *regs);
 
@@ -492,7 +493,8 @@ static inline void __ftrace_trace_stack(struct ring_buffer *buffer,
 					int skip, int pc, struct pt_regs *regs)
 {
 }
-static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+static inline void ftrace_trace_stack(struct trace_array *tr,
+				      struct ring_buffer *buffer,
 				      unsigned long flags,
 				      int skip, int pc, struct pt_regs *regs)
 {
@@ -574,7 +576,7 @@ int __trace_puts(unsigned long ip, const char *str, int size)
 		entry->buf[size] = '\0';
 
 	__buffer_unlock_commit(buffer, event);
-	ftrace_trace_stack(buffer, irq_flags, 4, pc, NULL);
+	ftrace_trace_stack(&global_trace, buffer, irq_flags, 4, pc, NULL);
 
 	return size;
 }
@@ -614,7 +616,7 @@ int __trace_bputs(unsigned long ip, const char *str)
 	entry->str			= str;
 
 	__buffer_unlock_commit(buffer, event);
-	ftrace_trace_stack(buffer, irq_flags, 4, pc, NULL);
+	ftrace_trace_stack(&global_trace, buffer, irq_flags, 4, pc, NULL);
 
 	return 1;
 }
@@ -1691,7 +1693,7 @@ void trace_buffer_unlock_commit(struct trace_array *tr,
 {
 	__buffer_unlock_commit(buffer, event);
 
-	ftrace_trace_stack(buffer, flags, 6, pc, NULL);
+	ftrace_trace_stack(tr, buffer, flags, 6, pc, NULL);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit);
@@ -1743,7 +1745,7 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 {
 	__buffer_unlock_commit(buffer, event);
 
-	ftrace_trace_stack(buffer, flags, 6, pc, regs);
+	ftrace_trace_stack(tr, buffer, flags, 6, pc, regs);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit_regs);
@@ -1872,11 +1874,12 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 
 }
 
-static inline void ftrace_trace_stack(struct ring_buffer *buffer,
+static inline void ftrace_trace_stack(struct trace_array *tr,
+				      struct ring_buffer *buffer,
 				      unsigned long flags,
 				      int skip, int pc, struct pt_regs *regs)
 {
-	if (!(global_trace.trace_flags & TRACE_ITER_STACKTRACE))
+	if (!(tr->trace_flags & TRACE_ITER_STACKTRACE))
 		return;
 
 	__ftrace_trace_stack(buffer, flags, skip, pc, regs);
@@ -2164,7 +2167,7 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
 	memcpy(entry->buf, tbuffer, sizeof(u32) * len);
 	if (!call_filter_check_discard(call, entry, buffer, event)) {
 		__buffer_unlock_commit(buffer, event);
-		ftrace_trace_stack(buffer, flags, 6, pc, NULL);
+		ftrace_trace_stack(tr, buffer, flags, 6, pc, NULL);
 	}
 
 out:
@@ -2216,7 +2219,7 @@ __trace_array_vprintk(struct ring_buffer *buffer,
 	memcpy(&entry->buf, tbuffer, len + 1);
 	if (!call_filter_check_discard(call, entry, buffer, event)) {
 		__buffer_unlock_commit(buffer, event);
-		ftrace_trace_stack(buffer, flags, 6, pc, NULL);
+		ftrace_trace_stack(&global_trace, buffer, flags, 6, pc, NULL);
 	}
  out:
 	preempt_enable_notrace();
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 0f394112a0a7..57c9e709772c 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2941,15 +2941,15 @@ static struct ftrace_ops trace_ops __initdata  =
 static __init void event_trace_self_test_with_function(void)
 {
 	int ret;
+	event_tr = top_trace_array();
+	if (WARN_ON(!event_tr))
+		return;
 	ret = register_ftrace_function(&trace_ops);
 	if (WARN_ON(ret < 0)) {
 		pr_info("Failed to enable function tracer for event tests\n");
 		return;
 	}
 	pr_info("Running tests again, along with the function tracer\n");
-	event_tr = top_trace_array();
-	if (WARN_ON(!event_tr))
-		return;
 	event_trace_self_tests();
 	unregister_ftrace_function(&trace_ops);
 }
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 24/25] tracing: Add trace options for core options to instances
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (22 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 23/25] tracing: Make ftrace_trace_stack() depend on general trace_array flag Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  2015-10-01 11:55 ` [for-next][PATCH 25/25] tracing: Add trace options for tracer " Steven Rostedt
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0024-tracing-Add-trace-options-for-core-options-to-instan.patch --]
[-- Type: text/plain, Size: 2437 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Allow instances to have their own options, at least for the core options
(non-tracer-specific ones). There are a few global options that should not
be added to instances, such as enabling trace_printk and recording of sched
comms, which do not have a specific trace instance associated with them.
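
The rule applied when populating an instance's options directory, as a
sketch (hypothetical helper name; the real check is open-coded in
create_trace_options_dir() below):

  static bool core_option_allowed(struct trace_array *tr, int bit)
  {
          bool top_level = tr == &global_trace;

          /* instances skip the flags listed in TOP_LEVEL_TRACE_FLAGS */
          return top_level || !((1 << bit) & TOP_LEVEL_TRACE_FLAGS);
  }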

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 51697b41f5d4..7b99e36b8973 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -258,6 +258,11 @@ unsigned long long ns2usecs(cycle_t nsec)
 	 TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |			\
 	 TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS)
 
+/* trace_options that are only supported by global_trace */
+#define TOP_LEVEL_TRACE_FLAGS (TRACE_ITER_PRINTK |			\
+	       TRACE_ITER_PRINTK_MSGONLY | TRACE_ITER_RECORD_CMD)
+
+
 /*
  * The global_trace is the descriptor that holds the tracing
  * buffers for the live tracing. For each CPU, it contains
@@ -6387,17 +6392,21 @@ create_trace_option_core_file(struct trace_array *tr,
 				 &trace_options_core_fops);
 }
 
-static __init void create_trace_options_dir(struct trace_array *tr)
+static void create_trace_options_dir(struct trace_array *tr)
 {
 	struct dentry *t_options;
+	bool top_level = tr == &global_trace;
 	int i;
 
 	t_options = trace_options_init_dentry(tr);
 	if (!t_options)
 		return;
 
-	for (i = 0; trace_options[i]; i++)
-		create_trace_option_core_file(tr, trace_options[i], i);
+	for (i = 0; trace_options[i]; i++) {
+		if (top_level ||
+		    !((1 << i) & TOP_LEVEL_TRACE_FLAGS))
+			create_trace_option_core_file(tr, trace_options[i], i);
+	}
 }
 
 static ssize_t
@@ -6707,6 +6716,8 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
 	trace_create_file("tracing_on", 0644, d_tracer,
 			  tr, &rb_simple_fops);
 
+	create_trace_options_dir(tr);
+
 #ifdef CONFIG_TRACER_MAX_TRACE
 	trace_create_file("tracing_max_latency", 0644, d_tracer,
 			&tr->max_latency, &tracing_max_lat_fops);
@@ -6903,8 +6914,6 @@ static __init int tracer_init_tracefs(void)
 
 	create_trace_instances(d_tracer);
 
-	create_trace_options_dir(&global_trace);
-
 	mutex_lock(&trace_types_lock);
 	for (t = trace_types; t; t = t->next)
 		add_tracer_options(&global_trace, t);
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [for-next][PATCH 25/25] tracing: Add trace options for tracer options to instances
  2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
                   ` (23 preceding siblings ...)
  2015-10-01 11:55 ` [for-next][PATCH 24/25] tracing: Add trace options for core options to instances Steven Rostedt
@ 2015-10-01 11:55 ` Steven Rostedt
  24 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2015-10-01 11:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0025-tracing-Add-trace-options-for-tracer-options-to-inst.patch --]
[-- Type: text/plain, Size: 6058 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Add the tracer options to the instances' options directories as well. Only
add the options for tracers that an instance is allowed to enable. Note,
however, that tracer options are global: a tracer option enabled in an
instance also takes effect at the top level and in other instances.
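
As an illustrative aside (not part of the patch), here is a stand-alone C
sketch of the bookkeeping this introduces: a per-trace_array list of
trace_options entries grown on demand, a duplicate check for tracers that
share a tracer_flags struct, and the teardown that instance_rmdir() performs.
The types are simplified stand-ins, and the wakeup/wakeup_rt pairing is just
one example of two tracers sharing flags:

  #include <stdio.h>
  #include <stdlib.h>

  struct tracer_flags { const char *name; };                  /* stand-in */
  struct tracer { const char *name; struct tracer_flags *flags; };
  struct trace_option_dentry { int dummy; };                  /* stand-in */

  struct trace_options {
          struct tracer              *tracer;
          struct trace_option_dentry *topts;
  };

  struct trace_array {
          struct trace_options *topts;
          int                   nr_topts;
  };

  /* Mirrors the duplicate check and krealloc() growth in
   * create_trace_option_files() */
  static void add_tracer_topts(struct trace_array *tr, struct tracer *t)
  {
          for (int i = 0; i < tr->nr_topts; i++) {
                  /* Some tracers share flags; one set of option files suffices */
                  if (tr->topts[i].tracer->flags == t->flags) {
                          printf("%s shares flags with %s, skipping\n",
                                 t->name, tr->topts[i].tracer->name);
                          return;
                  }
          }

          struct trace_options *grown =
                  realloc(tr->topts, sizeof(*tr->topts) * (tr->nr_topts + 1));
          if (!grown)
                  return;

          tr->topts = grown;
          tr->topts[tr->nr_topts].tracer = t;
          tr->topts[tr->nr_topts].topts  =
                  calloc(1, sizeof(struct trace_option_dentry));
          tr->nr_topts++;
          printf("created option files for %s\n", t->name);
  }

  int main(void)
  {
          struct trace_array tr = { 0 };
          struct tracer_flags shared = { "wakeup" };
          struct tracer wakeup    = { "wakeup",    &shared };
          struct tracer wakeup_rt = { "wakeup_rt", &shared };  /* shares flags */

          add_tracer_topts(&tr, &wakeup);
          add_tracer_topts(&tr, &wakeup_rt);

          /* What instance_rmdir() does on teardown */
          for (int i = 0; i < tr.nr_topts; i++)
                  free(tr.topts[i].topts);
          free(tr.topts);
          return 0;
  }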

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 80 ++++++++++++++++++++++++++++++++++++++--------------
 kernel/trace/trace.h |  9 +++++-
 2 files changed, 67 insertions(+), 22 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 7b99e36b8973..78022c1a125f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4308,7 +4308,7 @@ int tracing_update_buffers(void)
 
 struct trace_option_dentry;
 
-static struct trace_option_dentry *
+static void
 create_trace_option_files(struct trace_array *tr, struct tracer *tracer);
 
 /*
@@ -4334,15 +4334,7 @@ static void add_tracer_options(struct trace_array *tr, struct tracer *t)
 	if (!tr->dir)
 		return;
 
-	/* Currently, only the top instance has options */
-	if (!(tr->flags & TRACE_ARRAY_FL_GLOBAL))
-		return;
-
-	/* Ignore if they were already created */
-	if (t->topts)
-		return;
-
-	t->topts = create_trace_option_files(tr, t);
+	create_trace_option_files(tr, t);
 }
 
 static int tracing_set_tracer(struct trace_array *tr, const char *buf)
@@ -6341,21 +6333,39 @@ create_trace_option_file(struct trace_array *tr,
 
 }
 
-static struct trace_option_dentry *
+static void
 create_trace_option_files(struct trace_array *tr, struct tracer *tracer)
 {
 	struct trace_option_dentry *topts;
+	struct trace_options *tr_topts;
 	struct tracer_flags *flags;
 	struct tracer_opt *opts;
 	int cnt;
+	int i;
 
 	if (!tracer)
-		return NULL;
+		return;
 
 	flags = tracer->flags;
 
 	if (!flags || !flags->opts)
-		return NULL;
+		return;
+
+	/*
+	 * If this is an instance, only create flags for tracers
+	 * the instance may have.
+	 */
+	if (!trace_ok_for_array(tracer, tr))
+		return;
+
+	for (i = 0; i < tr->nr_topts; i++) {
+		/*
+		 * Check if these flags have already been added.
+		 * Some tracers share flags.
+		 */
+		if (tr->topts[i].tracer->flags == tracer->flags)
+			return;
+	}
 
 	opts = flags->opts;
 
@@ -6364,7 +6374,19 @@ create_trace_option_files(struct trace_array *tr, struct tracer *tracer)
 
 	topts = kcalloc(cnt + 1, sizeof(*topts), GFP_KERNEL);
 	if (!topts)
-		return NULL;
+		return;
+
+	tr_topts = krealloc(tr->topts, sizeof(*tr->topts) * (tr->nr_topts + 1),
+			    GFP_KERNEL);
+	if (!tr_topts) {
+		kfree(topts);
+		return;
+	}
+
+	tr->topts = tr_topts;
+	tr->topts[tr->nr_topts].tracer = tracer;
+	tr->topts[tr->nr_topts].topts = topts;
+	tr->nr_topts++;
 
 	for (cnt = 0; opts[cnt].name; cnt++) {
 		create_trace_option_file(tr, &topts[cnt], flags,
@@ -6373,8 +6395,6 @@ create_trace_option_files(struct trace_array *tr, struct tracer *tracer)
 			  "Failed to create trace option: %s",
 			  opts[cnt].name);
 	}
-
-	return topts;
 }
 
 static struct dentry *
@@ -6552,6 +6572,21 @@ static void init_trace_flags_index(struct trace_array *tr)
 		tr->trace_flags_index[i] = i;
 }
 
+static void __update_tracer_options(struct trace_array *tr)
+{
+	struct tracer *t;
+
+	for (t = trace_types; t; t = t->next)
+		add_tracer_options(tr, t);
+}
+
+static void update_tracer_options(struct trace_array *tr)
+{
+	mutex_lock(&trace_types_lock);
+	__update_tracer_options(tr);
+	mutex_unlock(&trace_types_lock);
+}
+
 static int instance_mkdir(const char *name)
 {
 	struct trace_array *tr;
@@ -6605,6 +6640,7 @@ static int instance_mkdir(const char *name)
 
 	init_tracer_tracefs(tr, tr->dir);
 	init_trace_flags_index(tr);
+	__update_tracer_options(tr);
 
 	list_add(&tr->list, &ftrace_trace_arrays);
 
@@ -6630,6 +6666,7 @@ static int instance_rmdir(const char *name)
 	struct trace_array *tr;
 	int found = 0;
 	int ret;
+	int i;
 
 	mutex_lock(&trace_types_lock);
 
@@ -6655,6 +6692,11 @@ static int instance_rmdir(const char *name)
 	debugfs_remove_recursive(tr->dir);
 	free_trace_buffers(tr);
 
+	for (i = 0; i < tr->nr_topts; i++) {
+		kfree(tr->topts[i].topts);
+	}
+	kfree(tr->topts);
+
 	kfree(tr->name);
 	kfree(tr);
 
@@ -6877,7 +6919,6 @@ static struct notifier_block trace_module_nb = {
 static __init int tracer_init_tracefs(void)
 {
 	struct dentry *d_tracer;
-	struct tracer *t;
 
 	trace_access_lock_init();
 
@@ -6914,10 +6955,7 @@ static __init int tracer_init_tracefs(void)
 
 	create_trace_instances(d_tracer);
 
-	mutex_lock(&trace_types_lock);
-	for (t = trace_types; t; t = t->next)
-		add_tracer_options(&global_trace, t);
-	mutex_unlock(&trace_types_lock);
+	update_tracer_options(&global_trace);
 
 	return 0;
 }
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 423cb48a1d6d..fb8a61c710ea 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -159,6 +159,7 @@ struct trace_array_cpu {
 };
 
 struct tracer;
+struct trace_option_dentry;
 
 struct trace_buffer {
 	struct trace_array		*tr;
@@ -170,6 +171,11 @@ struct trace_buffer {
 
 #define TRACE_FLAGS_MAX_SIZE		32
 
+struct trace_options {
+	struct tracer			*tracer;
+	struct trace_option_dentry	*topts;
+};
+
 /*
  * The trace array - an array of per-CPU trace arrays. This is the
  * highest level data structure that individual tracers deal with.
@@ -218,6 +224,7 @@ struct trace_array {
 #endif
 	int			stop_count;
 	int			clock_id;
+	int			nr_topts;
 	struct tracer		*current_trace;
 	unsigned int		trace_flags;
 	unsigned char		trace_flags_index[TRACE_FLAGS_MAX_SIZE];
@@ -227,6 +234,7 @@ struct trace_array {
 	struct dentry		*options;
 	struct dentry		*percpu_dir;
 	struct dentry		*event_dir;
+	struct trace_options	*topts;
 	struct list_head	systems;
 	struct list_head	events;
 	cpumask_var_t		tracing_cpumask; /* only trace on set CPUs */
@@ -398,7 +406,6 @@ struct tracer {
 						u32 mask, int set);
 	struct tracer		*next;
 	struct tracer_flags	*flags;
-	struct trace_option_dentry *topts;
 	int			enabled;
 	int			ref;
 	bool			print_max;
-- 
2.5.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2015-10-01 12:05 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-01 11:55 [for-next][PATCH 00/25] tracing: Have instances have their own options + clean ups Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 01/25] kernel/trace_probe: is_good_name can be boolean Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 02/25] tracing: Move non perf code out of perf.h Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 03/25] tracing: Remove ftrace_trace_stack_regs() Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 04/25] tracing: Remove unused function trace_current_buffer_lock_reserve() Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 05/25] tracing: Pass trace_array into trace_buffer_unlock_commit() Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 06/25] tracing: Make ftrace_trace_stack() static Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 07/25] tracing: Inject seq_print_userip_objs() into its only user Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 08/25] tracing: Turn seq_print_user_ip() into a static function Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 09/25] tracing: Move "display-graph" option to main options Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 10/25] tracing: Remove unused tracing option "ftrace_preempt" Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 11/25] tracing: Use enums instead of hard coded bitmasks for TRACE_ITER flags Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 12/25] tracing: Use TRACE_FLAGS macro to keep enums and strings matched Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 13/25] tracing: Only create function graph options when it is compiled in Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 14/25] tracing: Only create branch tracer options when " Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 15/25] tracing: Do not create function tracer options when not " Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 16/25] tracing: Only create stacktrace option when STACKTRACE is configured Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 17/25] tracing: Always show all tracer options in the options directory Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 18/25] tracing: Add build bug if we have more trace_flags than bits Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 19/25] tracing: Remove access to trace_flags in trace_printk.c Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 20/25] tracing: Move sleep-time and graph-time options out of the core trace_flags Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 21/25] tracing: Move trace_flags from global to a trace_array field Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 22/25] tracing: Add a method to pass in trace_array descriptor to option files Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 23/25] tracing: Make ftrace_trace_stack() depend on general trace_array flag Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 24/25] tracing: Add trace options for core options to instances Steven Rostedt
2015-10-01 11:55 ` [for-next][PATCH 25/25] tracing: Add trace options for tracer " Steven Rostedt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).