* [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools
From: Steven Rostedt @ 2013-03-08 2:59 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
Here are more updates for linux-next.
I rebased the series on top of my last pull request of
tip/perf/urgent.
Here I made the multi-buffers work with the snapshots, as well as
added a per-cpu directory for the snapshots.
I added a snapshot_raw file to allow tools to pull in the binary data
of the snapshot so that it can be analyzed better.
I removed the max_tr and added two trace buffers into the trace_array.
This actually makes things much simpler; I really should have designed
ftrace to do this from day one.
I added a pair of internal tracing functions: tracing_snapshot() and
tracing_snapshot_alloc(). These let a developer place a "snapshot here"
call in the kernel and let the tracing continue, in case there's a
better place to take a snapshot.
Note, tracing_snapshot() will fail if the snapshot buffer was never
allocated, but tracing_snapshot_alloc() needs to be called from
a context that can sleep (or early boot up). When debugging, it's
best to throw in a tracing_snapshot_alloc() early on so that
it will be ready to use. Hmm, I should also add a kernel command
line parameter to allocate it on boot up. Darn, I thought I was
done for 3.10. That will need to be next :-/
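As a rough sketch of how I picture it being used (the caller names and
the condition here are made up for illustration; only the two
tracing_snapshot*() calls are real):

	#include <linux/kernel.h>

	static int __init my_debug_init(void)	/* hypothetical init */
	{
		/* sleepable (init) context: get the spare buffer ready */
		tracing_snapshot_alloc();
		return 0;
	}

	static void my_suspect_fast_path(void)	/* hypothetical hot path */
	{
		if (something_looks_wrong())	/* hypothetical check */
			tracing_snapshot();	/* save the trace, keep tracing */
	}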
Enjoy,
-- Steve
Steven Rostedt (Red Hat) (17):
ring-buffer: Init waitqueue for blocked readers
tracing: Add comment for trace event flag IGNORE_ENABLE
tracing: Only clear trace buffer on module unload if event was traced
tracing: Clear all trace buffers when unloaded module event was used
tracing: Enable snapshot when any latency tracer is enabled
tracing: Consolidate max_tr into main trace_array structure
tracing: Add snapshot in the per_cpu trace directories
tracing: Add config option to allow snapshot to swap per cpu
tracing: Add snapshot_raw to extract the raw data from snapshot
tracing: Have trace_array keep track if snapshot buffer is allocated
tracing: Consolidate buffer allocation code
tracing: Add snapshot feature to instances
tracing: Add per_cpu directory into tracing instances
tracing: Prevent deleting instances when they are being read
tracing: Add internal tracing_snapshot() functions
ring-buffer: Do not use schedule_work_on() for current CPU
tracing: Move the tracing selftest code into its own function
----
include/linux/ftrace_event.h | 8 +
include/linux/kernel.h | 4 +
kernel/trace/Kconfig | 26 +
kernel/trace/blktrace.c | 4 +-
kernel/trace/ring_buffer.c | 35 +-
kernel/trace/trace.c | 997 ++++++++++++++++++++++------------
kernel/trace/trace.h | 40 +-
kernel/trace/trace_events.c | 20 +-
kernel/trace/trace_functions.c | 8 +-
kernel/trace/trace_functions_graph.c | 12 +-
kernel/trace/trace_irqsoff.c | 10 +-
kernel/trace/trace_kdb.c | 8 +-
kernel/trace/trace_mmiotrace.c | 12 +-
kernel/trace/trace_output.c | 2 +-
kernel/trace/trace_sched_switch.c | 8 +-
kernel/trace/trace_sched_wakeup.c | 16 +-
kernel/trace/trace_selftest.c | 42 +-
kernel/trace/trace_syscalls.c | 4 +-
18 files changed, 824 insertions(+), 432 deletions(-)
* [for-next][PATCH 01/17] ring-buffer: Init waitqueue for blocked readers
From: Steven Rostedt @ 2013-03-08 2:59 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
The move of blocked readers to the ring buffer left out the
init of the wait queue that is used. Tests missed this due to running
stress tests against the buffers, which didn't allow for any
readers to end up waiting. Running a simple read and wait triggered
a bug.
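For reference, hitting this takes nothing fancy; with debugfs mounted
at the usual location, just read the pipe file of a quiet buffer:

	# cat /sys/kernel/debug/tracing/trace_pipe

With nothing writing to the buffer, the reader goes to sleep on the
waitqueue that this patch now initializes.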
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/ring_buffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 56b6ea3..65fe2a4 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1185,6 +1185,7 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int nr_pages, int cpu)
INIT_WORK(&cpu_buffer->update_pages_work, update_pages_handler);
init_completion(&cpu_buffer->update_done);
init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
+ init_waitqueue_head(&cpu_buffer->irq_work.waiters);
bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
GFP_KERNEL, cpu_to_node(cpu));
@@ -1281,6 +1282,7 @@ struct ring_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
buffer->reader_lock_key = key;
init_irq_work(&buffer->irq_work.work, rb_wake_up_waiters);
+ init_waitqueue_head(&buffer->irq_work.waiters);
/* need at least two pages */
if (nr_pages < 2)
--
1.7.10.4
* [for-next][PATCH 02/17] tracing: Add comment for trace event flag IGNORE_ENABLE
From: Steven Rostedt @ 2013-03-08 3:00 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
All the trace event flags have comments except the IGNORE_ENABLE flag,
which is set for ftrace internal events that should not be enabled via
the debugfs "enable" file. That is, when the top level enable file is
set, it enables all events except those with this flag. The code used
to just check the ftrace event call descriptor's "reg" field and skip
those without it, but now some ftrace internal events have a reg field
yet still need to be skipped. The flag was created to ignore those
events.
Now document it.
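For reference, this is roughly how the enable path consumes the flag
(a paraphrased sketch of the existing loop in __ftrace_set_clr_event(),
not part of this patch):

	list_for_each_entry(call, &ftrace_events, list) {

		/* events without a registration function can not be enabled */
		if (!call->name || !call->class || !call->class->reg)
			continue;

		/* internal events may have a reg field but must be skipped */
		if (call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)
			continue;
		...
	}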
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4d79d2d..0b0814d 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -204,6 +204,7 @@ enum {
* FILTERED - The event has a filter attached
* CAP_ANY - Any user can enable for perf
* NO_SET_FILTER - Set when filter has error and is to be ignored
+ * IGNORE_ENABLE - For ftrace internal events, do not enable with debugfs file
*/
enum {
TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
--
1.7.10.4
* [for-next][PATCH 03/17] tracing: Only clear trace buffer on module unload if event was traced
From: Steven Rostedt @ 2013-03-08 3:00 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Currently, when a module with events is unloaded, the trace buffer is
cleared. This is just a safety net in case the module has some strange
callback used when its events are output. But there's no reason to
reset the buffer if none of the module's events were ever traced.
Add a flag called WAS_ENABLED to the event "call" structure that gets
set when the event is ever enabled; this flag never gets cleared. When
a module gets unloaded, the trace buffer is cleared only if any of its
events have this flag set.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 5 +++++
kernel/trace/trace_events.c | 12 ++++++++----
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 0b0814d..d696424 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -197,6 +197,7 @@ enum {
TRACE_EVENT_FL_CAP_ANY_BIT,
TRACE_EVENT_FL_NO_SET_FILTER_BIT,
TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
+ TRACE_EVENT_FL_WAS_ENABLED_BIT,
};
/*
@@ -205,12 +206,16 @@ enum {
* CAP_ANY - Any user can enable for perf
* NO_SET_FILTER - Set when filter has error and is to be ignored
* IGNORE_ENABLE - For ftrace internal events, do not enable with debugfs file
+ * WAS_ENABLED - Set and stays set when an event was ever enabled
+ * (used for module unloading, if a module event is enabled,
+ * it is best to clear the buffers that used it).
*/
enum {
TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
TRACE_EVENT_FL_CAP_ANY = (1 << TRACE_EVENT_FL_CAP_ANY_BIT),
TRACE_EVENT_FL_NO_SET_FILTER = (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT),
TRACE_EVENT_FL_IGNORE_ENABLE = (1 << TRACE_EVENT_FL_IGNORE_ENABLE_BIT),
+ TRACE_EVENT_FL_WAS_ENABLED = (1 << TRACE_EVENT_FL_WAS_ENABLED_BIT),
};
struct ftrace_event_call {
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 0f1307a..9a7dc4b 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -245,6 +245,9 @@ static int ftrace_event_enable_disable(struct ftrace_event_file *file,
break;
}
file->flags |= FTRACE_EVENT_FL_ENABLED;
+
+ /* WAS_ENABLED gets set but never cleared. */
+ call->flags |= TRACE_EVENT_FL_WAS_ENABLED;
}
break;
}
@@ -1626,12 +1629,13 @@ static void trace_module_remove_events(struct module *mod)
{
struct ftrace_module_file_ops *file_ops;
struct ftrace_event_call *call, *p;
- bool found = false;
+ bool clear_trace = false;
down_write(&trace_event_mutex);
list_for_each_entry_safe(call, p, &ftrace_events, list) {
if (call->mod == mod) {
- found = true;
+ if (call->flags & TRACE_EVENT_FL_WAS_ENABLED)
+ clear_trace = true;
__trace_remove_event_call(call);
}
}
@@ -1648,9 +1652,9 @@ static void trace_module_remove_events(struct module *mod)
/*
* It is safest to reset the ring buffer if the module being unloaded
- * registered any events.
+ * registered any events that were used.
*/
- if (found)
+ if (clear_trace)
tracing_reset_current_online_cpus();
up_write(&trace_event_mutex);
}
--
1.7.10.4
* [for-next][PATCH 04/17] tracing: Clear all trace buffers when unloaded module event was used
From: Steven Rostedt @ 2013-03-08 3:00 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Currently we do not know what buffer a module event was enabled in.
On unload, it is safest to clear all buffer instances, not just the
top level buffer.
Todo: Clear only the buffer that the event was used in. The
infrastructure is there to do this, but it makes the code a bit
more complex. Let's get the current code vetted before we add that.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 10 ++++++++--
kernel/trace/trace.h | 2 +-
kernel/trace/trace_events.c | 10 +++++++---
3 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index f743121..d852165 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -911,9 +911,15 @@ void tracing_reset_current(int cpu)
tracing_reset(&global_trace, cpu);
}
-void tracing_reset_current_online_cpus(void)
+void tracing_reset_all_online_cpus(void)
{
- tracing_reset_online_cpus(&global_trace);
+ struct trace_array *tr;
+
+ mutex_lock(&trace_types_lock);
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ tracing_reset_online_cpus(tr);
+ }
+ mutex_unlock(&trace_types_lock);
}
#define SAVED_CMDLINES 128
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 50a9c81..e9486c74d 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -492,7 +492,7 @@ int tracing_is_enabled(void);
void tracing_reset(struct trace_array *tr, int cpu);
void tracing_reset_online_cpus(struct trace_array *tr);
void tracing_reset_current(int cpu);
-void tracing_reset_current_online_cpus(void);
+void tracing_reset_all_online_cpus(void);
int tracing_open_generic(struct inode *inode, struct file *filp);
struct dentry *trace_create_file(const char *name,
umode_t mode,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 9a7dc4b..a376ab5 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1649,14 +1649,18 @@ static void trace_module_remove_events(struct module *mod)
list_del(&file_ops->list);
kfree(file_ops);
}
+ up_write(&trace_event_mutex);
/*
* It is safest to reset the ring buffer if the module being unloaded
- * registered any events that were used.
+ * registered any events that were used. The only worry is if
+ * a new module gets loaded, and takes on the same id as the events
+ * of this module. When printing out the buffer, traced events left
+ * over from this module may be passed to the new module events and
+ * unexpected results may occur.
*/
if (clear_trace)
- tracing_reset_current_online_cpus();
- up_write(&trace_event_mutex);
+ tracing_reset_all_online_cpus();
}
static int trace_module_notify(struct notifier_block *self,
--
1.7.10.4
* [for-next][PATCH 05/17] tracing: Enable snapshot when any latency tracer is enabled
From: Steven Rostedt @ 2013-03-08 3:00 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
The snapshot utility is extremely useful, and adds no extra memory
overhead when a latency tracer is enabled, since the latency tracers
use the snapshot buffer underneath. There's no reason to hide the
snapshot file when a latency tracer has been enabled in the kernel.
If any of the latency tracers (irq, preempt or wakeup) is enabled,
then also select the snapshot facility.
Note, snapshot can still be enabled without the latency tracers enabled.
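Either way, the snapshot can be driven by hand through the snapshot
file (with debugfs mounted at the usual location):

	# echo 1 > /sys/kernel/debug/tracing/snapshot	(allocate and take a snapshot)
	# cat /sys/kernel/debug/tracing/snapshot	(read the saved trace)
	# echo 0 > /sys/kernel/debug/tracing/snapshot	(free the spare buffer)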
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/Kconfig | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index b516a8e..590a27f 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -191,6 +191,7 @@ config IRQSOFF_TRACER
select GENERIC_TRACER
select TRACER_MAX_TRACE
select RING_BUFFER_ALLOW_SWAP
+ select TRACER_SNAPSHOT
help
This option measures the time spent in irqs-off critical
sections, with microsecond accuracy.
@@ -213,6 +214,7 @@ config PREEMPT_TRACER
select GENERIC_TRACER
select TRACER_MAX_TRACE
select RING_BUFFER_ALLOW_SWAP
+ select TRACER_SNAPSHOT
help
This option measures the time spent in preemption-off critical
sections, with microsecond accuracy.
@@ -232,6 +234,7 @@ config SCHED_TRACER
select GENERIC_TRACER
select CONTEXT_SWITCH_TRACER
select TRACER_MAX_TRACE
+ select TRACER_SNAPSHOT
help
This tracer tracks the latency of the highest priority task
to be scheduled in, starting from the point it has woken up.
--
1.7.10.4
* [for-next][PATCH 06/17] tracing: Consolidate max_tr into main trace_array structure
From: Steven Rostedt @ 2013-03-08 3:00 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Currently, the way the latency tracers and the snapshot feature work
is to have a separate trace_array called "max_tr" that holds the
snapshot buffer. For latency tracers, the running buffer is swapped
with this snapshot buffer to save the current max latency.
The only items needed for the max_tr are really just a copy of the buffer
itself, the per_cpu data pointers, the time_start timestamp that states
when the max latency was triggered, and the cpu that the max latency
was triggered on. All other fields in trace_array are unused by the
max_tr, making the max_tr mostly bloat.
This change removes the max_tr completely and adds a new structure
called trace_buffer that holds the buffer pointer, the per_cpu data
pointers, the time_start timestamp, and the cpu where the latency
occurred.
The trace_array now has two trace_buffers, one for the normal trace
and one for the max trace (snapshot). By doing this, not only do we
remove the bloat from the max_tr, but the trace instances can now use
their own snapshot feature, instead of only the top level global_trace
having the snapshot feature and latency tracers.
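The new structure, as described above, looks roughly like this (a
sketch; see the trace.h hunk for the real definition):

	struct trace_buffer {
		struct ring_buffer		*buffer;
		struct trace_array_cpu __percpu	*data;
		cycle_t				time_start;
		int				cpu;
	};

with struct trace_array holding a "trace_buffer" for the live trace
and, under CONFIG_TRACER_MAX_TRACE, a "max_buffer" for the max
trace / snapshot.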
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 2 +
kernel/trace/blktrace.c | 4 +-
kernel/trace/trace.c | 484 +++++++++++++++++++---------------
kernel/trace/trace.h | 35 ++-
kernel/trace/trace_functions.c | 8 +-
kernel/trace/trace_functions_graph.c | 12 +-
kernel/trace/trace_irqsoff.c | 10 +-
kernel/trace/trace_kdb.c | 8 +-
kernel/trace/trace_mmiotrace.c | 12 +-
kernel/trace/trace_output.c | 2 +-
kernel/trace/trace_sched_switch.c | 8 +-
kernel/trace/trace_sched_wakeup.c | 16 +-
kernel/trace/trace_selftest.c | 42 +--
kernel/trace/trace_syscalls.c | 4 +-
14 files changed, 363 insertions(+), 284 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index d696424..d84c4a5 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -8,6 +8,7 @@
#include <linux/perf_event.h>
struct trace_array;
+struct trace_buffer;
struct tracer;
struct dentry;
@@ -67,6 +68,7 @@ struct trace_entry {
struct trace_iterator {
struct trace_array *tr;
struct tracer *trace;
+ struct trace_buffer *trace_buffer;
void *private;
int cpu_file;
struct mutex mutex;
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 71259e2..90a5505 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -72,7 +72,7 @@ static void trace_note(struct blk_trace *bt, pid_t pid, int action,
bool blk_tracer = blk_tracer_enabled;
if (blk_tracer) {
- buffer = blk_tr->buffer;
+ buffer = blk_tr->trace_buffer.buffer;
pc = preempt_count();
event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
sizeof(*t) + len,
@@ -218,7 +218,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
if (blk_tracer) {
tracing_record_cmdline(current);
- buffer = blk_tr->buffer;
+ buffer = blk_tr->trace_buffer.buffer;
pc = preempt_count();
event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
sizeof(*t) + pdu_len,
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d852165..cd15864 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -195,27 +195,15 @@ cycle_t ftrace_now(int cpu)
u64 ts;
/* Early boot up does not have a buffer yet */
- if (!global_trace.buffer)
+ if (!global_trace.trace_buffer.buffer)
return trace_clock_local();
- ts = ring_buffer_time_stamp(global_trace.buffer, cpu);
- ring_buffer_normalize_time_stamp(global_trace.buffer, cpu, &ts);
+ ts = ring_buffer_time_stamp(global_trace.trace_buffer.buffer, cpu);
+ ring_buffer_normalize_time_stamp(global_trace.trace_buffer.buffer, cpu, &ts);
return ts;
}
-/*
- * The max_tr is used to snapshot the global_trace when a maximum
- * latency is reached. Some tracers will use this to store a maximum
- * trace while it continues examining live traces.
- *
- * The buffers for the max_tr are set up the same as the global_trace.
- * When a snapshot is taken, the link list of the max_tr is swapped
- * with the link list of the global_trace and the buffers are reset for
- * the global_trace so the tracing can continue.
- */
-static struct trace_array max_tr;
-
int tracing_is_enabled(void)
{
return tracing_is_on();
@@ -339,8 +327,8 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
*/
void tracing_on(void)
{
- if (global_trace.buffer)
- ring_buffer_record_on(global_trace.buffer);
+ if (global_trace.trace_buffer.buffer)
+ ring_buffer_record_on(global_trace.trace_buffer.buffer);
/*
* This flag is only looked at when buffers haven't been
* allocated yet. We don't really care about the race
@@ -361,8 +349,8 @@ EXPORT_SYMBOL_GPL(tracing_on);
*/
void tracing_off(void)
{
- if (global_trace.buffer)
- ring_buffer_record_off(global_trace.buffer);
+ if (global_trace.trace_buffer.buffer)
+ ring_buffer_record_off(global_trace.trace_buffer.buffer);
/*
* This flag is only looked at when buffers haven't been
* allocated yet. We don't really care about the race
@@ -378,8 +366,8 @@ EXPORT_SYMBOL_GPL(tracing_off);
*/
int tracing_is_on(void)
{
- if (global_trace.buffer)
- return ring_buffer_record_is_on(global_trace.buffer);
+ if (global_trace.trace_buffer.buffer)
+ return ring_buffer_record_is_on(global_trace.trace_buffer.buffer);
return !global_trace.buffer_disabled;
}
EXPORT_SYMBOL_GPL(tracing_is_on);
@@ -637,13 +625,14 @@ unsigned long __read_mostly tracing_max_latency;
static void
__update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
{
- struct trace_array_cpu *data = per_cpu_ptr(tr->data, cpu);
- struct trace_array_cpu *max_data;
+ struct trace_buffer *trace_buf = &tr->trace_buffer;
+ struct trace_buffer *max_buf = &tr->max_buffer;
+ struct trace_array_cpu *data = per_cpu_ptr(trace_buf->data, cpu);
+ struct trace_array_cpu *max_data = per_cpu_ptr(max_buf->data, cpu);
- max_tr.cpu = cpu;
- max_tr.time_start = data->preempt_timestamp;
+ max_buf->cpu = cpu;
+ max_buf->time_start = data->preempt_timestamp;
- max_data = per_cpu_ptr(max_tr.data, cpu);
max_data->saved_latency = tracing_max_latency;
max_data->critical_start = data->critical_start;
max_data->critical_end = data->critical_end;
@@ -671,7 +660,7 @@ __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
void
update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
{
- struct ring_buffer *buf = tr->buffer;
+ struct ring_buffer *buf = tr->trace_buffer.buffer;
if (tr->stop_count)
return;
@@ -686,8 +675,8 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
arch_spin_lock(&ftrace_max_lock);
- tr->buffer = max_tr.buffer;
- max_tr.buffer = buf;
+ tr->trace_buffer.buffer = tr->max_buffer.buffer;
+ tr->max_buffer.buffer = buf;
__update_max_tr(tr, tsk, cpu);
arch_spin_unlock(&ftrace_max_lock);
@@ -715,7 +704,7 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
arch_spin_lock(&ftrace_max_lock);
- ret = ring_buffer_swap_cpu(max_tr.buffer, tr->buffer, cpu);
+ ret = ring_buffer_swap_cpu(tr->max_buffer.buffer, tr->trace_buffer.buffer, cpu);
if (ret == -EBUSY) {
/*
@@ -724,7 +713,7 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
* the max trace buffer (no one writes directly to it)
* and flag that it failed.
*/
- trace_array_printk(&max_tr, _THIS_IP_,
+ trace_array_printk_buf(tr->max_buffer.buffer, _THIS_IP_,
"Failed to swap buffers due to commit in progress\n");
}
@@ -741,7 +730,7 @@ static void default_wait_pipe(struct trace_iterator *iter)
if (trace_buffer_iter(iter, iter->cpu_file))
return;
- ring_buffer_wait(iter->tr->buffer, iter->cpu_file);
+ ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file);
}
/**
@@ -802,17 +791,19 @@ int register_tracer(struct tracer *type)
* internal tracing to verify that everything is in order.
* If we fail, we do not register this tracer.
*/
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
tr->current_trace = type;
+#ifdef CONFIG_TRACER_MAX_TRACE
if (type->use_max_tr) {
/* If we expanded the buffers, make sure the max is expanded too */
if (ring_buffer_expanded)
- ring_buffer_resize(max_tr.buffer, trace_buf_size,
+ ring_buffer_resize(tr->max_buffer.buffer, trace_buf_size,
RING_BUFFER_ALL_CPUS);
type->allocated_snapshot = true;
}
+#endif
/* the test is responsible for initializing and enabling */
pr_info("Testing tracer %s: ", type->name);
@@ -826,16 +817,18 @@ int register_tracer(struct tracer *type)
goto out;
}
/* Only reset on passing, to avoid touching corrupted buffers */
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
+#ifdef CONFIG_TRACER_MAX_TRACE
if (type->use_max_tr) {
type->allocated_snapshot = false;
/* Shrink the max buffer again */
if (ring_buffer_expanded)
- ring_buffer_resize(max_tr.buffer, 1,
+ ring_buffer_resize(tr->max_buffer.buffer, 1,
RING_BUFFER_ALL_CPUS);
}
+#endif
printk(KERN_CONT "PASSED\n");
}
@@ -869,9 +862,9 @@ int register_tracer(struct tracer *type)
return ret;
}
-void tracing_reset(struct trace_array *tr, int cpu)
+void tracing_reset(struct trace_buffer *buf, int cpu)
{
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = buf->buffer;
if (!buffer)
return;
@@ -885,9 +878,9 @@ void tracing_reset(struct trace_array *tr, int cpu)
ring_buffer_record_enable(buffer);
}
-void tracing_reset_online_cpus(struct trace_array *tr)
+void tracing_reset_online_cpus(struct trace_buffer *buf)
{
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = buf->buffer;
int cpu;
if (!buffer)
@@ -898,7 +891,7 @@ void tracing_reset_online_cpus(struct trace_array *tr)
/* Make sure all commits have finished */
synchronize_sched();
- tr->time_start = ftrace_now(tr->cpu);
+ buf->time_start = ftrace_now(buf->cpu);
for_each_online_cpu(cpu)
ring_buffer_reset_cpu(buffer, cpu);
@@ -908,7 +901,7 @@ void tracing_reset_online_cpus(struct trace_array *tr)
void tracing_reset_current(int cpu)
{
- tracing_reset(&global_trace, cpu);
+ tracing_reset(&global_trace.trace_buffer, cpu);
}
void tracing_reset_all_online_cpus(void)
@@ -917,7 +910,10 @@ void tracing_reset_all_online_cpus(void)
mutex_lock(&trace_types_lock);
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
+#ifdef CONFIG_TRACER_MAX_TRACE
+ tracing_reset_online_cpus(&tr->max_buffer);
+#endif
}
mutex_unlock(&trace_types_lock);
}
@@ -987,13 +983,15 @@ void tracing_start(void)
/* Prevent the buffers from switching */
arch_spin_lock(&ftrace_max_lock);
- buffer = global_trace.buffer;
+ buffer = global_trace.trace_buffer.buffer;
if (buffer)
ring_buffer_record_enable(buffer);
- buffer = max_tr.buffer;
+#ifdef CONFIG_TRACER_MAX_TRACE
+ buffer = global_trace.max_buffer.buffer;
if (buffer)
ring_buffer_record_enable(buffer);
+#endif
arch_spin_unlock(&ftrace_max_lock);
@@ -1025,7 +1023,7 @@ static void tracing_start_tr(struct trace_array *tr)
goto out;
}
- buffer = tr->buffer;
+ buffer = tr->trace_buffer.buffer;
if (buffer)
ring_buffer_record_enable(buffer);
@@ -1052,13 +1050,15 @@ void tracing_stop(void)
/* Prevent the buffers from switching */
arch_spin_lock(&ftrace_max_lock);
- buffer = global_trace.buffer;
+ buffer = global_trace.trace_buffer.buffer;
if (buffer)
ring_buffer_record_disable(buffer);
- buffer = max_tr.buffer;
+#ifdef CONFIG_TRACER_MAX_TRACE
+ buffer = global_trace.max_buffer.buffer;
if (buffer)
ring_buffer_record_disable(buffer);
+#endif
arch_spin_unlock(&ftrace_max_lock);
@@ -1079,7 +1079,7 @@ static void tracing_stop_tr(struct trace_array *tr)
if (tr->stop_count++)
goto out;
- buffer = tr->buffer;
+ buffer = tr->trace_buffer.buffer;
if (buffer)
ring_buffer_record_disable(buffer);
@@ -1245,7 +1245,7 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_rb,
int type, unsigned long len,
unsigned long flags, int pc)
{
- *current_rb = ftrace_file->tr->buffer;
+ *current_rb = ftrace_file->tr->trace_buffer.buffer;
return trace_buffer_lock_reserve(*current_rb,
type, len, flags, pc);
}
@@ -1256,7 +1256,7 @@ trace_current_buffer_lock_reserve(struct ring_buffer **current_rb,
int type, unsigned long len,
unsigned long flags, int pc)
{
- *current_rb = global_trace.buffer;
+ *current_rb = global_trace.trace_buffer.buffer;
return trace_buffer_lock_reserve(*current_rb,
type, len, flags, pc);
}
@@ -1295,7 +1295,7 @@ trace_function(struct trace_array *tr,
int pc)
{
struct ftrace_event_call *call = &event_function;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ring_buffer_event *event;
struct ftrace_entry *entry;
@@ -1436,7 +1436,7 @@ void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
int pc)
{
- __ftrace_trace_stack(tr->buffer, flags, skip, pc, NULL);
+ __ftrace_trace_stack(tr->trace_buffer.buffer, flags, skip, pc, NULL);
}
/**
@@ -1452,7 +1452,8 @@ void trace_dump_stack(void)
local_save_flags(flags);
/* skipping 3 traces, seems to get us at the caller of this function */
- __ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(), NULL);
+ __ftrace_trace_stack(global_trace.trace_buffer.buffer, flags, 3,
+ preempt_count(), NULL);
}
static DEFINE_PER_CPU(int, user_stack_count);
@@ -1622,7 +1623,7 @@ void trace_printk_init_buffers(void)
* directly here. If the global_trace.buffer is already
* allocated here, then this was called by module code.
*/
- if (global_trace.buffer)
+ if (global_trace.trace_buffer.buffer)
tracing_start_cmdline_record();
}
@@ -1682,7 +1683,7 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
local_save_flags(flags);
size = sizeof(*entry) + sizeof(u32) * len;
- buffer = tr->buffer;
+ buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer, TRACE_BPRINT, size,
flags, pc);
if (!event)
@@ -1705,27 +1706,12 @@ out:
}
EXPORT_SYMBOL_GPL(trace_vbprintk);
-int trace_array_printk(struct trace_array *tr,
- unsigned long ip, const char *fmt, ...)
-{
- int ret;
- va_list ap;
-
- if (!(trace_flags & TRACE_ITER_PRINTK))
- return 0;
-
- va_start(ap, fmt);
- ret = trace_array_vprintk(tr, ip, fmt, ap);
- va_end(ap);
- return ret;
-}
-
-int trace_array_vprintk(struct trace_array *tr,
- unsigned long ip, const char *fmt, va_list args)
+static int
+__trace_array_vprintk(struct ring_buffer *buffer,
+ unsigned long ip, const char *fmt, va_list args)
{
struct ftrace_event_call *call = &event_print;
struct ring_buffer_event *event;
- struct ring_buffer *buffer;
int len = 0, size, pc;
struct print_entry *entry;
unsigned long flags;
@@ -1753,7 +1739,6 @@ int trace_array_vprintk(struct trace_array *tr,
local_save_flags(flags);
size = sizeof(*entry) + len + 1;
- buffer = tr->buffer;
event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, size,
flags, pc);
if (!event)
@@ -1774,6 +1759,42 @@ int trace_array_vprintk(struct trace_array *tr,
return len;
}
+int trace_array_vprintk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, va_list args)
+{
+ return __trace_array_vprintk(tr->trace_buffer.buffer, ip, fmt, args);
+}
+
+int trace_array_printk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, ...)
+{
+ int ret;
+ va_list ap;
+
+ if (!(trace_flags & TRACE_ITER_PRINTK))
+ return 0;
+
+ va_start(ap, fmt);
+ ret = trace_array_vprintk(tr, ip, fmt, ap);
+ va_end(ap);
+ return ret;
+}
+
+int trace_array_printk_buf(struct ring_buffer *buffer,
+ unsigned long ip, const char *fmt, ...)
+{
+ int ret;
+ va_list ap;
+
+ if (!(trace_flags & TRACE_ITER_PRINTK))
+ return 0;
+
+ va_start(ap, fmt);
+ ret = __trace_array_vprintk(buffer, ip, fmt, ap);
+ va_end(ap);
+ return ret;
+}
+
int trace_vprintk(unsigned long ip, const char *fmt, va_list args)
{
return trace_array_vprintk(&global_trace, ip, fmt, args);
@@ -1799,7 +1820,7 @@ peek_next_entry(struct trace_iterator *iter, int cpu, u64 *ts,
if (buf_iter)
event = ring_buffer_iter_peek(buf_iter, ts);
else
- event = ring_buffer_peek(iter->tr->buffer, cpu, ts,
+ event = ring_buffer_peek(iter->trace_buffer->buffer, cpu, ts,
lost_events);
if (event) {
@@ -1814,7 +1835,7 @@ static struct trace_entry *
__find_next_entry(struct trace_iterator *iter, int *ent_cpu,
unsigned long *missing_events, u64 *ent_ts)
{
- struct ring_buffer *buffer = iter->tr->buffer;
+ struct ring_buffer *buffer = iter->trace_buffer->buffer;
struct trace_entry *ent, *next = NULL;
unsigned long lost_events = 0, next_lost = 0;
int cpu_file = iter->cpu_file;
@@ -1891,7 +1912,7 @@ void *trace_find_next_entry_inc(struct trace_iterator *iter)
static void trace_consume(struct trace_iterator *iter)
{
- ring_buffer_consume(iter->tr->buffer, iter->cpu, &iter->ts,
+ ring_buffer_consume(iter->trace_buffer->buffer, iter->cpu, &iter->ts,
&iter->lost_events);
}
@@ -1924,13 +1945,12 @@ static void *s_next(struct seq_file *m, void *v, loff_t *pos)
void tracing_iter_reset(struct trace_iterator *iter, int cpu)
{
- struct trace_array *tr = iter->tr;
struct ring_buffer_event *event;
struct ring_buffer_iter *buf_iter;
unsigned long entries = 0;
u64 ts;
- per_cpu_ptr(tr->data, cpu)->skipped_entries = 0;
+ per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = 0;
buf_iter = trace_buffer_iter(iter, cpu);
if (!buf_iter)
@@ -1944,13 +1964,13 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
* by the timestamp being before the start of the buffer.
*/
while ((event = ring_buffer_iter_peek(buf_iter, &ts))) {
- if (ts >= iter->tr->time_start)
+ if (ts >= iter->trace_buffer->time_start)
break;
entries++;
ring_buffer_read(buf_iter, NULL);
}
- per_cpu_ptr(tr->data, cpu)->skipped_entries = entries;
+ per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = entries;
}
/*
@@ -1977,8 +1997,10 @@ static void *s_start(struct seq_file *m, loff_t *pos)
*iter->trace = *tr->current_trace;
mutex_unlock(&trace_types_lock);
+#ifdef CONFIG_TRACER_MAX_TRACE
if (iter->snapshot && iter->trace->use_max_tr)
return ERR_PTR(-EBUSY);
+#endif
if (!iter->snapshot)
atomic_inc(&trace_record_cmdline_disabled);
@@ -2020,17 +2042,21 @@ static void s_stop(struct seq_file *m, void *p)
{
struct trace_iterator *iter = m->private;
+#ifdef CONFIG_TRACER_MAX_TRACE
if (iter->snapshot && iter->trace->use_max_tr)
return;
+#endif
if (!iter->snapshot)
atomic_dec(&trace_record_cmdline_disabled);
+
trace_access_unlock(iter->cpu_file);
trace_event_read_unlock();
}
static void
-get_total_entries(struct trace_array *tr, unsigned long *total, unsigned long *entries)
+get_total_entries(struct trace_buffer *buf,
+ unsigned long *total, unsigned long *entries)
{
unsigned long count;
int cpu;
@@ -2039,19 +2065,19 @@ get_total_entries(struct trace_array *tr, unsigned long *total, unsigned long *e
*entries = 0;
for_each_tracing_cpu(cpu) {
- count = ring_buffer_entries_cpu(tr->buffer, cpu);
+ count = ring_buffer_entries_cpu(buf->buffer, cpu);
/*
* If this buffer has skipped entries, then we hold all
* entries for the trace and we need to ignore the
* ones before the time stamp.
*/
- if (per_cpu_ptr(tr->data, cpu)->skipped_entries) {
- count -= per_cpu_ptr(tr->data, cpu)->skipped_entries;
+ if (per_cpu_ptr(buf->data, cpu)->skipped_entries) {
+ count -= per_cpu_ptr(buf->data, cpu)->skipped_entries;
/* total is the same as the entries */
*total += count;
} else
*total += count +
- ring_buffer_overrun_cpu(tr->buffer, cpu);
+ ring_buffer_overrun_cpu(buf->buffer, cpu);
*entries += count;
}
}
@@ -2068,27 +2094,27 @@ static void print_lat_help_header(struct seq_file *m)
seq_puts(m, "# \\ / ||||| \\ | / \n");
}
-static void print_event_info(struct trace_array *tr, struct seq_file *m)
+static void print_event_info(struct trace_buffer *buf, struct seq_file *m)
{
unsigned long total;
unsigned long entries;
- get_total_entries(tr, &total, &entries);
+ get_total_entries(buf, &total, &entries);
seq_printf(m, "# entries-in-buffer/entries-written: %lu/%lu #P:%d\n",
entries, total, num_online_cpus());
seq_puts(m, "#\n");
}
-static void print_func_help_header(struct trace_array *tr, struct seq_file *m)
+static void print_func_help_header(struct trace_buffer *buf, struct seq_file *m)
{
- print_event_info(tr, m);
+ print_event_info(buf, m);
seq_puts(m, "# TASK-PID CPU# TIMESTAMP FUNCTION\n");
seq_puts(m, "# | | | | |\n");
}
-static void print_func_help_header_irq(struct trace_array *tr, struct seq_file *m)
+static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file *m)
{
- print_event_info(tr, m);
+ print_event_info(buf, m);
seq_puts(m, "# _-----=> irqs-off\n");
seq_puts(m, "# / _----=> need-resched\n");
seq_puts(m, "# | / _---=> hardirq/softirq\n");
@@ -2102,8 +2128,8 @@ void
print_trace_header(struct seq_file *m, struct trace_iterator *iter)
{
unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK);
- struct trace_array *tr = iter->tr;
- struct trace_array_cpu *data = per_cpu_ptr(tr->data, tr->cpu);
+ struct trace_buffer *buf = iter->trace_buffer;
+ struct trace_array_cpu *data = per_cpu_ptr(buf->data, buf->cpu);
struct tracer *type = iter->trace;
unsigned long entries;
unsigned long total;
@@ -2111,7 +2137,7 @@ print_trace_header(struct seq_file *m, struct trace_iterator *iter)
name = type->name;
- get_total_entries(tr, &total, &entries);
+ get_total_entries(buf, &total, &entries);
seq_printf(m, "# %s latency trace v1.1.5 on %s\n",
name, UTS_RELEASE);
@@ -2122,7 +2148,7 @@ print_trace_header(struct seq_file *m, struct trace_iterator *iter)
nsecs_to_usecs(data->saved_latency),
entries,
total,
- tr->cpu,
+ buf->cpu,
#if defined(CONFIG_PREEMPT_NONE)
"server",
#elif defined(CONFIG_PREEMPT_VOLUNTARY)
@@ -2173,7 +2199,7 @@ static void test_cpu_buff_start(struct trace_iterator *iter)
if (cpumask_test_cpu(iter->cpu, iter->started))
return;
- if (per_cpu_ptr(iter->tr->data, iter->cpu)->skipped_entries)
+ if (per_cpu_ptr(iter->trace_buffer->data, iter->cpu)->skipped_entries)
return;
cpumask_set_cpu(iter->cpu, iter->started);
@@ -2303,7 +2329,7 @@ int trace_empty(struct trace_iterator *iter)
if (!ring_buffer_iter_empty(buf_iter))
return 0;
} else {
- if (!ring_buffer_empty_cpu(iter->tr->buffer, cpu))
+ if (!ring_buffer_empty_cpu(iter->trace_buffer->buffer, cpu))
return 0;
}
return 1;
@@ -2315,7 +2341,7 @@ int trace_empty(struct trace_iterator *iter)
if (!ring_buffer_iter_empty(buf_iter))
return 0;
} else {
- if (!ring_buffer_empty_cpu(iter->tr->buffer, cpu))
+ if (!ring_buffer_empty_cpu(iter->trace_buffer->buffer, cpu))
return 0;
}
}
@@ -2393,9 +2419,9 @@ void trace_default_header(struct seq_file *m)
} else {
if (!(trace_flags & TRACE_ITER_VERBOSE)) {
if (trace_flags & TRACE_ITER_IRQ_INFO)
- print_func_help_header_irq(iter->tr, m);
+ print_func_help_header_irq(iter->trace_buffer, m);
else
- print_func_help_header(iter->tr, m);
+ print_func_help_header(iter->trace_buffer, m);
}
}
}
@@ -2514,11 +2540,15 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
if (!zalloc_cpumask_var(&iter->started, GFP_KERNEL))
goto fail;
+ iter->tr = tr;
+
+#ifdef CONFIG_TRACER_MAX_TRACE
/* Currently only the top directory has a snapshot */
if (tr->current_trace->print_max || snapshot)
- iter->tr = &max_tr;
+ iter->trace_buffer = &tr->max_buffer;
else
- iter->tr = tr;
+#endif
+ iter->trace_buffer = &tr->trace_buffer;
iter->snapshot = snapshot;
iter->pos = -1;
mutex_init(&iter->mutex);
@@ -2529,7 +2559,7 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
iter->trace->open(iter);
/* Annotate start of buffers if we had overruns */
- if (ring_buffer_overruns(iter->tr->buffer))
+ if (ring_buffer_overruns(iter->trace_buffer->buffer))
iter->iter_flags |= TRACE_FILE_ANNOTATE;
/* Output in nanoseconds only if we are using a clock in nanoseconds. */
@@ -2543,7 +2573,7 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
for_each_tracing_cpu(cpu) {
iter->buffer_iter[cpu] =
- ring_buffer_read_prepare(iter->tr->buffer, cpu);
+ ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
}
ring_buffer_read_prepare_sync();
for_each_tracing_cpu(cpu) {
@@ -2553,7 +2583,7 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
} else {
cpu = iter->cpu_file;
iter->buffer_iter[cpu] =
- ring_buffer_read_prepare(iter->tr->buffer, cpu);
+ ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
ring_buffer_read_prepare_sync();
ring_buffer_read_start(iter->buffer_iter[cpu]);
tracing_iter_reset(iter, cpu);
@@ -2592,12 +2622,7 @@ static int tracing_release(struct inode *inode, struct file *file)
return 0;
iter = m->private;
-
- /* Only the global tracer has a matching max_tr */
- if (iter->tr == &max_tr)
- tr = &global_trace;
- else
- tr = iter->tr;
+ tr = iter->tr;
mutex_lock(&trace_types_lock);
for_each_tracing_cpu(cpu) {
@@ -2633,9 +2658,9 @@ static int tracing_open(struct inode *inode, struct file *file)
struct trace_array *tr = tc->tr;
if (tc->cpu == RING_BUFFER_ALL_CPUS)
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
else
- tracing_reset(tr, tc->cpu);
+ tracing_reset(&tr->trace_buffer, tc->cpu);
}
if (file->f_mode & FMODE_READ) {
@@ -2804,13 +2829,13 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
*/
if (cpumask_test_cpu(cpu, tracing_cpumask) &&
!cpumask_test_cpu(cpu, tracing_cpumask_new)) {
- atomic_inc(&per_cpu_ptr(tr->data, cpu)->disabled);
- ring_buffer_record_disable_cpu(tr->buffer, cpu);
+ atomic_inc(&per_cpu_ptr(tr->trace_buffer.data, cpu)->disabled);
+ ring_buffer_record_disable_cpu(tr->trace_buffer.buffer, cpu);
}
if (!cpumask_test_cpu(cpu, tracing_cpumask) &&
cpumask_test_cpu(cpu, tracing_cpumask_new)) {
- atomic_dec(&per_cpu_ptr(tr->data, cpu)->disabled);
- ring_buffer_record_enable_cpu(tr->buffer, cpu);
+ atomic_dec(&per_cpu_ptr(tr->trace_buffer.data, cpu)->disabled);
+ ring_buffer_record_enable_cpu(tr->trace_buffer.buffer, cpu);
}
}
arch_spin_unlock(&ftrace_max_lock);
@@ -2915,7 +2940,7 @@ static void set_tracer_flags(unsigned int mask, int enabled)
trace_event_enable_cmd_record(enabled);
if (mask == TRACE_ITER_OVERWRITE)
- ring_buffer_change_overwrite(global_trace.buffer, enabled);
+ ring_buffer_change_overwrite(global_trace.trace_buffer.buffer, enabled);
if (mask == TRACE_ITER_PRINTK)
trace_printk_start_stop_comm(enabled);
@@ -3091,42 +3116,44 @@ tracing_set_trace_read(struct file *filp, char __user *ubuf,
int tracer_init(struct tracer *t, struct trace_array *tr)
{
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
return t->init(tr);
}
-static void set_buffer_entries(struct trace_array *tr, unsigned long val)
+static void set_buffer_entries(struct trace_buffer *buf, unsigned long val)
{
int cpu;
for_each_tracing_cpu(cpu)
- per_cpu_ptr(tr->data, cpu)->entries = val;
+ per_cpu_ptr(buf->data, cpu)->entries = val;
}
+#ifdef CONFIG_TRACER_MAX_TRACE
/* resize @tr's buffer to the size of @size_tr's entries */
-static int resize_buffer_duplicate_size(struct trace_array *tr,
- struct trace_array *size_tr, int cpu_id)
+static int resize_buffer_duplicate_size(struct trace_buffer *trace_buf,
+ struct trace_buffer *size_buf, int cpu_id)
{
int cpu, ret = 0;
if (cpu_id == RING_BUFFER_ALL_CPUS) {
for_each_tracing_cpu(cpu) {
- ret = ring_buffer_resize(tr->buffer,
- per_cpu_ptr(size_tr->data, cpu)->entries, cpu);
+ ret = ring_buffer_resize(trace_buf->buffer,
+ per_cpu_ptr(size_buf->data, cpu)->entries, cpu);
if (ret < 0)
break;
- per_cpu_ptr(tr->data, cpu)->entries =
- per_cpu_ptr(size_tr->data, cpu)->entries;
+ per_cpu_ptr(trace_buf->data, cpu)->entries =
+ per_cpu_ptr(size_buf->data, cpu)->entries;
}
} else {
- ret = ring_buffer_resize(tr->buffer,
- per_cpu_ptr(size_tr->data, cpu_id)->entries, cpu_id);
+ ret = ring_buffer_resize(trace_buf->buffer,
+ per_cpu_ptr(size_buf->data, cpu_id)->entries, cpu_id);
if (ret == 0)
- per_cpu_ptr(tr->data, cpu_id)->entries =
- per_cpu_ptr(size_tr->data, cpu_id)->entries;
+ per_cpu_ptr(trace_buf->data, cpu_id)->entries =
+ per_cpu_ptr(size_buf->data, cpu_id)->entries;
}
return ret;
}
+#endif /* CONFIG_TRACER_MAX_TRACE */
static int __tracing_resize_ring_buffer(struct trace_array *tr,
unsigned long size, int cpu)
@@ -3141,20 +3168,22 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
ring_buffer_expanded = 1;
/* May be called before buffers are initialized */
- if (!tr->buffer)
+ if (!tr->trace_buffer.buffer)
return 0;
- ret = ring_buffer_resize(tr->buffer, size, cpu);
+ ret = ring_buffer_resize(tr->trace_buffer.buffer, size, cpu);
if (ret < 0)
return ret;
+#ifdef CONFIG_TRACER_MAX_TRACE
if (!(tr->flags & TRACE_ARRAY_FL_GLOBAL) ||
!tr->current_trace->use_max_tr)
goto out;
- ret = ring_buffer_resize(max_tr.buffer, size, cpu);
+ ret = ring_buffer_resize(tr->max_buffer.buffer, size, cpu);
if (ret < 0) {
- int r = resize_buffer_duplicate_size(tr, tr, cpu);
+ int r = resize_buffer_duplicate_size(&tr->trace_buffer,
+ &tr->trace_buffer, cpu);
if (r < 0) {
/*
* AARGH! We are left with different
@@ -3177,15 +3206,17 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
}
if (cpu == RING_BUFFER_ALL_CPUS)
- set_buffer_entries(&max_tr, size);
+ set_buffer_entries(&tr->max_buffer, size);
else
- per_cpu_ptr(max_tr.data, cpu)->entries = size;
+ per_cpu_ptr(tr->max_buffer.data, cpu)->entries = size;
out:
+#endif /* CONFIG_TRACER_MAX_TRACE */
+
if (cpu == RING_BUFFER_ALL_CPUS)
- set_buffer_entries(tr, size);
+ set_buffer_entries(&tr->trace_buffer, size);
else
- per_cpu_ptr(tr->data, cpu)->entries = size;
+ per_cpu_ptr(tr->trace_buffer.data, cpu)->entries = size;
return ret;
}
@@ -3252,7 +3283,9 @@ static int tracing_set_tracer(const char *buf)
static struct trace_option_dentry *topts;
struct trace_array *tr = &global_trace;
struct tracer *t;
+#ifdef CONFIG_TRACER_MAX_TRACE
bool had_max_tr;
+#endif
int ret = 0;
mutex_lock(&trace_types_lock);
@@ -3280,7 +3313,10 @@ static int tracing_set_tracer(const char *buf)
if (tr->current_trace->reset)
tr->current_trace->reset(tr);
+#ifdef CONFIG_TRACER_MAX_TRACE
had_max_tr = tr->current_trace->allocated_snapshot;
+
+ /* Current trace needs to be nop_trace before synchronize_sched */
tr->current_trace = &nop_trace;
if (had_max_tr && !t->use_max_tr) {
@@ -3297,22 +3333,28 @@ static int tracing_set_tracer(const char *buf)
* The max_tr ring buffer has some state (e.g. ring->clock) and
* we want preserve it.
*/
- ring_buffer_resize(max_tr.buffer, 1, RING_BUFFER_ALL_CPUS);
- set_buffer_entries(&max_tr, 1);
- tracing_reset_online_cpus(&max_tr);
+ ring_buffer_resize(tr->max_buffer.buffer, 1, RING_BUFFER_ALL_CPUS);
+ set_buffer_entries(&tr->max_buffer, 1);
+ tracing_reset_online_cpus(&tr->max_buffer);
tr->current_trace->allocated_snapshot = false;
}
+#else
+ tr->current_trace = &nop_trace;
+#endif
destroy_trace_option_files(topts);
topts = create_trace_option_files(tr, t);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
if (t->use_max_tr && !had_max_tr) {
/* we need to make per cpu buffer sizes equivalent */
- ret = resize_buffer_duplicate_size(&max_tr, &global_trace,
+ ret = resize_buffer_duplicate_size(&tr->max_buffer, &tr->trace_buffer,
RING_BUFFER_ALL_CPUS);
if (ret < 0)
goto out;
t->allocated_snapshot = true;
}
+#endif
if (t->init) {
ret = tracer_init(t, tr);
@@ -3439,6 +3481,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
iter->cpu_file = tc->cpu;
iter->tr = tc->tr;
+ iter->trace_buffer = &tc->tr->trace_buffer;
mutex_init(&iter->mutex);
filp->private_data = iter;
@@ -3489,7 +3532,7 @@ trace_poll(struct trace_iterator *iter, struct file *filp, poll_table *poll_tabl
*/
return POLLIN | POLLRDNORM;
else
- return ring_buffer_poll_wait(iter->tr->buffer, iter->cpu_file,
+ return ring_buffer_poll_wait(iter->trace_buffer->buffer, iter->cpu_file,
filp, poll_table);
}
@@ -3828,8 +3871,8 @@ tracing_entries_read(struct file *filp, char __user *ubuf,
for_each_tracing_cpu(cpu) {
/* fill in the size from first enabled cpu */
if (size == 0)
- size = per_cpu_ptr(tr->data, cpu)->entries;
- if (size != per_cpu_ptr(tr->data, cpu)->entries) {
+ size = per_cpu_ptr(tr->trace_buffer.data, cpu)->entries;
+ if (size != per_cpu_ptr(tr->trace_buffer.data, cpu)->entries) {
buf_size_same = 0;
break;
}
@@ -3845,7 +3888,7 @@ tracing_entries_read(struct file *filp, char __user *ubuf,
} else
r = sprintf(buf, "X\n");
} else
- r = sprintf(buf, "%lu\n", per_cpu_ptr(tr->data, tc->cpu)->entries >> 10);
+ r = sprintf(buf, "%lu\n", per_cpu_ptr(tr->trace_buffer.data, tc->cpu)->entries >> 10);
mutex_unlock(&trace_types_lock);
@@ -3892,7 +3935,7 @@ tracing_total_entries_read(struct file *filp, char __user *ubuf,
mutex_lock(&trace_types_lock);
for_each_tracing_cpu(cpu) {
- size += per_cpu_ptr(tr->data, cpu)->entries >> 10;
+ size += per_cpu_ptr(tr->trace_buffer.data, cpu)->entries >> 10;
if (!ring_buffer_expanded)
expanded_size += trace_buf_size >> 10;
}
@@ -3997,7 +4040,7 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
local_save_flags(irq_flags);
size = sizeof(*entry) + cnt + 2; /* possible \n added */
- buffer = global_trace.buffer;
+ buffer = global_trace.trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, size,
irq_flags, preempt_count());
if (!event) {
@@ -4082,16 +4125,19 @@ static ssize_t tracing_clock_write(struct file *filp, const char __user *ubuf,
tr->clock_id = i;
- ring_buffer_set_clock(tr->buffer, trace_clocks[i].func);
- if (tr->flags & TRACE_ARRAY_FL_GLOBAL && max_tr.buffer)
- ring_buffer_set_clock(max_tr.buffer, trace_clocks[i].func);
+ ring_buffer_set_clock(tr->trace_buffer.buffer, trace_clocks[i].func);
/*
* New clock may not be consistent with the previous clock.
* Reset the buffer so that it doesn't have incomparable timestamps.
*/
- tracing_reset_online_cpus(&global_trace);
- tracing_reset_online_cpus(&max_tr);
+ tracing_reset_online_cpus(&global_trace.trace_buffer);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (tr->flags & TRACE_ARRAY_FL_GLOBAL && tr->max_buffer.buffer)
+ ring_buffer_set_clock(tr->max_buffer.buffer, trace_clocks[i].func);
+ tracing_reset_online_cpus(&global_trace.max_buffer);
+#endif
mutex_unlock(&trace_types_lock);
@@ -4131,6 +4177,7 @@ static int tracing_snapshot_open(struct inode *inode, struct file *file)
return -ENOMEM;
}
iter->tr = tc->tr;
+ iter->trace_buffer = &tc->tr->max_buffer;
m->private = iter;
file->private_data = m;
}
@@ -4167,18 +4214,18 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
case 0:
if (tr->current_trace->allocated_snapshot) {
/* free spare buffer */
- ring_buffer_resize(max_tr.buffer, 1,
+ ring_buffer_resize(tr->max_buffer.buffer, 1,
RING_BUFFER_ALL_CPUS);
- set_buffer_entries(&max_tr, 1);
- tracing_reset_online_cpus(&max_tr);
+ set_buffer_entries(&tr->max_buffer, 1);
+ tracing_reset_online_cpus(&tr->max_buffer);
tr->current_trace->allocated_snapshot = false;
}
break;
case 1:
if (!tr->current_trace->allocated_snapshot) {
/* allocate spare buffer */
- ret = resize_buffer_duplicate_size(&max_tr,
- &global_trace, RING_BUFFER_ALL_CPUS);
+ ret = resize_buffer_duplicate_size(&tr->max_buffer,
+ &tr->trace_buffer, RING_BUFFER_ALL_CPUS);
if (ret < 0)
break;
tr->current_trace->allocated_snapshot = true;
@@ -4191,7 +4238,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
break;
default:
if (tr->current_trace->allocated_snapshot)
- tracing_reset_online_cpus(&max_tr);
+ tracing_reset_online_cpus(&tr->max_buffer);
break;
}
@@ -4309,6 +4356,7 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
info->iter.tr = tr;
info->iter.cpu_file = tc->cpu;
info->iter.trace = tr->current_trace;
+ info->iter.trace_buffer = &tr->trace_buffer;
info->spare = NULL;
/* Force reading ring buffer for first read */
info->read = (unsigned int)-1;
@@ -4340,7 +4388,8 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
return 0;
if (!info->spare)
- info->spare = ring_buffer_alloc_read_page(iter->tr->buffer, iter->cpu_file);
+ info->spare = ring_buffer_alloc_read_page(iter->trace_buffer->buffer,
+ iter->cpu_file);
if (!info->spare)
return -ENOMEM;
@@ -4350,7 +4399,7 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
again:
trace_access_lock(iter->cpu_file);
- ret = ring_buffer_read_page(iter->tr->buffer,
+ ret = ring_buffer_read_page(iter->trace_buffer->buffer,
&info->spare,
count,
iter->cpu_file, 0);
@@ -4392,7 +4441,7 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
struct trace_iterator *iter = &info->iter;
if (info->spare)
- ring_buffer_free_read_page(iter->tr->buffer, info->spare);
+ ring_buffer_free_read_page(iter->trace_buffer->buffer, info->spare);
kfree(info);
return 0;
@@ -4492,7 +4541,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
again:
trace_access_lock(iter->cpu_file);
- entries = ring_buffer_entries_cpu(iter->tr->buffer, iter->cpu_file);
+ entries = ring_buffer_entries_cpu(iter->trace_buffer->buffer, iter->cpu_file);
for (i = 0; i < pipe->buffers && len && entries; i++, len -= PAGE_SIZE) {
struct page *page;
@@ -4503,7 +4552,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
break;
ref->ref = 1;
- ref->buffer = iter->tr->buffer;
+ ref->buffer = iter->trace_buffer->buffer;
ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file);
if (!ref->page) {
kfree(ref);
@@ -4535,7 +4584,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
spd.nr_pages++;
*ppos += PAGE_SIZE;
- entries = ring_buffer_entries_cpu(iter->tr->buffer, iter->cpu_file);
+ entries = ring_buffer_entries_cpu(iter->trace_buffer->buffer, iter->cpu_file);
}
trace_access_unlock(iter->cpu_file);
@@ -4576,6 +4625,7 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
{
struct trace_cpu *tc = filp->private_data;
struct trace_array *tr = tc->tr;
+ struct trace_buffer *trace_buf = &tr->trace_buffer;
struct trace_seq *s;
unsigned long cnt;
unsigned long long t;
@@ -4588,41 +4638,41 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
trace_seq_init(s);
- cnt = ring_buffer_entries_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_entries_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "entries: %ld\n", cnt);
- cnt = ring_buffer_overrun_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_overrun_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "overrun: %ld\n", cnt);
- cnt = ring_buffer_commit_overrun_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_commit_overrun_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "commit overrun: %ld\n", cnt);
- cnt = ring_buffer_bytes_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_bytes_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "bytes: %ld\n", cnt);
if (trace_clocks[trace_clock_id].in_ns) {
/* local or global for trace_clock */
- t = ns2usecs(ring_buffer_oldest_event_ts(tr->buffer, cpu));
+ t = ns2usecs(ring_buffer_oldest_event_ts(trace_buf->buffer, cpu));
usec_rem = do_div(t, USEC_PER_SEC);
trace_seq_printf(s, "oldest event ts: %5llu.%06lu\n",
t, usec_rem);
- t = ns2usecs(ring_buffer_time_stamp(tr->buffer, cpu));
+ t = ns2usecs(ring_buffer_time_stamp(trace_buf->buffer, cpu));
usec_rem = do_div(t, USEC_PER_SEC);
trace_seq_printf(s, "now ts: %5llu.%06lu\n", t, usec_rem);
} else {
/* counter or tsc mode for trace_clock */
trace_seq_printf(s, "oldest event ts: %llu\n",
- ring_buffer_oldest_event_ts(tr->buffer, cpu));
+ ring_buffer_oldest_event_ts(trace_buf->buffer, cpu));
trace_seq_printf(s, "now ts: %llu\n",
- ring_buffer_time_stamp(tr->buffer, cpu));
+ ring_buffer_time_stamp(trace_buf->buffer, cpu));
}
- cnt = ring_buffer_dropped_events_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_dropped_events_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "dropped events: %ld\n", cnt);
- cnt = ring_buffer_read_events_cpu(tr->buffer, cpu);
+ cnt = ring_buffer_read_events_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "read events: %ld\n", cnt);
count = simple_read_from_buffer(ubuf, count, ppos, s->buffer, s->len);
@@ -4725,7 +4775,7 @@ static struct dentry *tracing_dentry_percpu(struct trace_array *tr, int cpu)
static void
tracing_init_debugfs_percpu(struct trace_array *tr, long cpu)
{
- struct trace_array_cpu *data = per_cpu_ptr(tr->data, cpu);
+ struct trace_array_cpu *data = per_cpu_ptr(tr->trace_buffer.data, cpu);
struct dentry *d_percpu = tracing_dentry_percpu(tr, cpu);
struct dentry *d_cpu;
char cpu_dir[30]; /* 30 characters should be more than enough */
@@ -5002,7 +5052,7 @@ rb_simple_read(struct file *filp, char __user *ubuf,
size_t cnt, loff_t *ppos)
{
struct trace_array *tr = filp->private_data;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
char buf[64];
int r;
@@ -5021,7 +5071,7 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos)
{
struct trace_array *tr = filp->private_data;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
unsigned long val;
int ret;
@@ -5093,18 +5143,18 @@ static int new_instance_create(const char *name)
rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
- tr->buffer = ring_buffer_alloc(trace_buf_size, rb_flags);
- if (!tr->buffer)
+ tr->trace_buffer.buffer = ring_buffer_alloc(trace_buf_size, rb_flags);
+ if (!tr->trace_buffer.buffer)
goto out_free_tr;
- tr->data = alloc_percpu(struct trace_array_cpu);
- if (!tr->data)
+ tr->trace_buffer.data = alloc_percpu(struct trace_array_cpu);
+ if (!tr->trace_buffer.data)
goto out_free_tr;
for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(tr->data, i), 0, sizeof(struct trace_array_cpu));
- per_cpu_ptr(tr->data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(tr->data, i)->trace_cpu.tr = tr;
+ memset(per_cpu_ptr(tr->trace_buffer.data, i), 0, sizeof(struct trace_array_cpu));
+ per_cpu_ptr(tr->trace_buffer.data, i)->trace_cpu.cpu = i;
+ per_cpu_ptr(tr->trace_buffer.data, i)->trace_cpu.tr = tr;
}
/* Holder for file callbacks */
@@ -5128,8 +5178,8 @@ static int new_instance_create(const char *name)
return 0;
out_free_tr:
- if (tr->buffer)
- ring_buffer_free(tr->buffer);
+ if (tr->trace_buffer.buffer)
+ ring_buffer_free(tr->trace_buffer.buffer);
kfree(tr->name);
kfree(tr);
@@ -5162,8 +5212,8 @@ static int instance_delete(const char *name)
event_trace_del_tracer(tr);
debugfs_remove_recursive(tr->dir);
- free_percpu(tr->data);
- ring_buffer_free(tr->buffer);
+ free_percpu(tr->trace_buffer.data);
+ ring_buffer_free(tr->trace_buffer.buffer);
kfree(tr->name);
kfree(tr);
@@ -5403,6 +5453,7 @@ void trace_init_global_iter(struct trace_iterator *iter)
iter->tr = &global_trace;
iter->trace = iter->tr->current_trace;
iter->cpu_file = RING_BUFFER_ALL_CPUS;
+ iter->trace_buffer = &global_trace.trace_buffer;
}
static void
@@ -5440,7 +5491,7 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
trace_init_global_iter(&iter);
for_each_tracing_cpu(cpu) {
- atomic_inc(&per_cpu_ptr(iter.tr->data, cpu)->disabled);
+ atomic_inc(&per_cpu_ptr(iter.tr->trace_buffer.data, cpu)->disabled);
}
old_userobj = trace_flags & TRACE_ITER_SYM_USEROBJ;
@@ -5508,7 +5559,7 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
trace_flags |= old_userobj;
for_each_tracing_cpu(cpu) {
- atomic_dec(&per_cpu_ptr(iter.tr->data, cpu)->disabled);
+ atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
}
tracing_on();
}
@@ -5558,58 +5609,59 @@ __init static int tracer_alloc_buffers(void)
raw_spin_lock_init(&global_trace.start_lock);
/* TODO: make the number of buffers hot pluggable with CPUS */
- global_trace.buffer = ring_buffer_alloc(ring_buf_size, rb_flags);
- if (!global_trace.buffer) {
+ global_trace.trace_buffer.buffer = ring_buffer_alloc(ring_buf_size, rb_flags);
+ if (!global_trace.trace_buffer.buffer) {
printk(KERN_ERR "tracer: failed to allocate ring buffer!\n");
WARN_ON(1);
goto out_free_cpumask;
}
- global_trace.data = alloc_percpu(struct trace_array_cpu);
+ global_trace.trace_buffer.data = alloc_percpu(struct trace_array_cpu);
- if (!global_trace.data) {
+ if (!global_trace.trace_buffer.data) {
printk(KERN_ERR "tracer: failed to allocate percpu memory!\n");
WARN_ON(1);
goto out_free_cpumask;
}
for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(global_trace.data, i), 0, sizeof(struct trace_array_cpu));
- per_cpu_ptr(global_trace.data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(global_trace.data, i)->trace_cpu.tr = &global_trace;
+ memset(per_cpu_ptr(global_trace.trace_buffer.data, i), 0,
+ sizeof(struct trace_array_cpu));
+ per_cpu_ptr(global_trace.trace_buffer.data, i)->trace_cpu.cpu = i;
+ per_cpu_ptr(global_trace.trace_buffer.data, i)->trace_cpu.tr = &global_trace;
}
if (global_trace.buffer_disabled)
tracing_off();
#ifdef CONFIG_TRACER_MAX_TRACE
- max_tr.data = alloc_percpu(struct trace_array_cpu);
- if (!max_tr.data) {
+ global_trace.max_buffer.data = alloc_percpu(struct trace_array_cpu);
+ if (!global_trace.max_buffer.data) {
printk(KERN_ERR "tracer: failed to allocate percpu memory!\n");
WARN_ON(1);
goto out_free_cpumask;
}
- max_tr.buffer = ring_buffer_alloc(1, rb_flags);
- raw_spin_lock_init(&max_tr.start_lock);
- if (!max_tr.buffer) {
+ global_trace.max_buffer.buffer = ring_buffer_alloc(1, rb_flags);
+ if (!global_trace.max_buffer.buffer) {
printk(KERN_ERR "tracer: failed to allocate max ring buffer!\n");
WARN_ON(1);
- ring_buffer_free(global_trace.buffer);
+ ring_buffer_free(global_trace.trace_buffer.buffer);
goto out_free_cpumask;
}
for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(max_tr.data, i), 0, sizeof(struct trace_array_cpu));
- per_cpu_ptr(max_tr.data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(max_tr.data, i)->trace_cpu.tr = &max_tr;
+ memset(per_cpu_ptr(global_trace.max_buffer.data, i), 0,
+ sizeof(struct trace_array_cpu));
+ per_cpu_ptr(global_trace.max_buffer.data, i)->trace_cpu.cpu = i;
+ per_cpu_ptr(global_trace.max_buffer.data, i)->trace_cpu.tr = &global_trace;
}
#endif
/* Allocate the first page for all buffers */
- set_buffer_entries(&global_trace,
- ring_buffer_size(global_trace.buffer, 0));
+ set_buffer_entries(&global_trace.trace_buffer,
+ ring_buffer_size(global_trace.trace_buffer.buffer, 0));
#ifdef CONFIG_TRACER_MAX_TRACE
- set_buffer_entries(&max_tr, 1);
+ set_buffer_entries(&global_trace.max_buffer, 1);
#endif
trace_init_cmdlines();
@@ -5646,8 +5698,10 @@ __init static int tracer_alloc_buffers(void)
return 0;
out_free_cpumask:
- free_percpu(global_trace.data);
- free_percpu(max_tr.data);
+ free_percpu(global_trace.trace_buffer.data);
+#ifdef CONFIG_TRACER_MAX_TRACE
+ free_percpu(global_trace.max_buffer.data);
+#endif
free_cpumask_var(tracing_cpumask);
out_free_buffer_mask:
free_cpumask_var(tracing_buffer_mask);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index e9486c74d..18f7403 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -167,16 +167,37 @@ struct trace_array_cpu {
struct tracer;
+struct trace_buffer {
+ struct trace_array *tr;
+ struct ring_buffer *buffer;
+ struct trace_array_cpu __percpu *data;
+ cycle_t time_start;
+ int cpu;
+};
+
/*
* The trace array - an array of per-CPU trace arrays. This is the
* highest level data structure that individual tracers deal with.
* They have on/off state as well:
*/
struct trace_array {
- struct ring_buffer *buffer;
struct list_head list;
char *name;
- int cpu;
+ struct trace_buffer trace_buffer;
+#ifdef CONFIG_TRACER_MAX_TRACE
+ /*
+ * The max_buffer is used to snapshot the trace when a maximum
+ * latency is reached, or when the user initiates a snapshot.
+ * Some tracers will use this to store a maximum trace while
+ * it continues examining live traces.
+ *
+ * The buffers for the max_buffer are set up the same as the trace_buffer
+ * When a snapshot is taken, the buffer of the max_buffer is swapped
+ * with the buffer of the trace_buffer and the buffers are reset for
+ * the trace_buffer so the tracing can continue.
+ */
+ struct trace_buffer max_buffer;
+#endif
int buffer_disabled;
struct trace_cpu trace_cpu; /* place holder */
#ifdef CONFIG_FTRACE_SYSCALLS
@@ -189,7 +210,6 @@ struct trace_array {
int clock_id;
struct tracer *current_trace;
unsigned int flags;
- cycle_t time_start;
raw_spinlock_t start_lock;
struct dentry *dir;
struct dentry *options;
@@ -198,7 +218,6 @@ struct trace_array {
struct list_head systems;
struct list_head events;
struct task_struct *waiter;
- struct trace_array_cpu __percpu *data;
};
enum {
@@ -342,8 +361,10 @@ struct tracer {
struct tracer *next;
struct tracer_flags *flags;
bool print_max;
+#ifdef CONFIG_TRACER_MAX_TRACE
bool use_max_tr;
bool allocated_snapshot;
+#endif
};
@@ -489,8 +510,8 @@ trace_buffer_iter(struct trace_iterator *iter, int cpu)
int tracer_init(struct tracer *t, struct trace_array *tr);
int tracing_is_enabled(void);
-void tracing_reset(struct trace_array *tr, int cpu);
-void tracing_reset_online_cpus(struct trace_array *tr);
+void tracing_reset(struct trace_buffer *buf, int cpu);
+void tracing_reset_online_cpus(struct trace_buffer *buf);
void tracing_reset_current(int cpu);
void tracing_reset_all_online_cpus(void);
int tracing_open_generic(struct inode *inode, struct file *filp);
@@ -670,6 +691,8 @@ trace_array_vprintk(struct trace_array *tr,
unsigned long ip, const char *fmt, va_list args);
int trace_array_printk(struct trace_array *tr,
unsigned long ip, const char *fmt, ...);
+int trace_array_printk_buf(struct ring_buffer *buffer,
+ unsigned long ip, const char *fmt, ...);
void trace_printk_seq(struct trace_seq *s);
enum print_line_t print_trace_line(struct trace_iterator *iter);
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 9d73861..e467c0c 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -28,7 +28,7 @@ static void tracing_stop_function_trace(void);
static int function_trace_init(struct trace_array *tr)
{
func_trace = tr;
- tr->cpu = get_cpu();
+ tr->trace_buffer.cpu = get_cpu();
put_cpu();
tracing_start_cmdline_record();
@@ -44,7 +44,7 @@ static void function_trace_reset(struct trace_array *tr)
static void function_trace_start(struct trace_array *tr)
{
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
}
/* Our option */
@@ -76,7 +76,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
goto out;
cpu = smp_processor_id();
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
if (!atomic_read(&data->disabled)) {
local_save_flags(flags);
trace_function(tr, ip, parent_ip, flags, pc);
@@ -107,7 +107,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
*/
local_irq_save(flags);
cpu = raw_smp_processor_id();
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
disabled = atomic_inc_return(&data->disabled);
if (likely(disabled == 1)) {
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index ca986d6..8388bc9 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -218,7 +218,7 @@ int __trace_graph_entry(struct trace_array *tr,
{
struct ftrace_event_call *call = &event_funcgraph_entry;
struct ring_buffer_event *event;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ftrace_graph_ent_entry *entry;
if (unlikely(__this_cpu_read(ftrace_cpu_disabled)))
@@ -265,7 +265,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
local_irq_save(flags);
cpu = raw_smp_processor_id();
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
disabled = atomic_inc_return(&data->disabled);
if (likely(disabled == 1)) {
pc = preempt_count();
@@ -323,7 +323,7 @@ void __trace_graph_return(struct trace_array *tr,
{
struct ftrace_event_call *call = &event_funcgraph_exit;
struct ring_buffer_event *event;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ftrace_graph_ret_entry *entry;
if (unlikely(__this_cpu_read(ftrace_cpu_disabled)))
@@ -350,7 +350,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace)
local_irq_save(flags);
cpu = raw_smp_processor_id();
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
disabled = atomic_inc_return(&data->disabled);
if (likely(disabled == 1)) {
pc = preempt_count();
@@ -560,9 +560,9 @@ get_return_for_leaf(struct trace_iterator *iter,
* We need to consume the current entry to see
* the next one.
*/
- ring_buffer_consume(iter->tr->buffer, iter->cpu,
+ ring_buffer_consume(iter->trace_buffer->buffer, iter->cpu,
NULL, NULL);
- event = ring_buffer_peek(iter->tr->buffer, iter->cpu,
+ event = ring_buffer_peek(iter->trace_buffer->buffer, iter->cpu,
NULL, NULL);
}
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 7137a0f..f4f1f06 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -121,7 +121,7 @@ static int func_prolog_dec(struct trace_array *tr,
if (!irqs_disabled_flags(*flags))
return 0;
- *data = per_cpu_ptr(tr->data, cpu);
+ *data = per_cpu_ptr(tr->trace_buffer.data, cpu);
disabled = atomic_inc_return(&(*data)->disabled);
if (likely(disabled == 1))
@@ -175,7 +175,7 @@ static int irqsoff_set_flag(u32 old_flags, u32 bit, int set)
per_cpu(tracing_cpu, cpu) = 0;
tracing_max_latency = 0;
- tracing_reset_online_cpus(irqsoff_trace);
+ tracing_reset_online_cpus(&irqsoff_trace->trace_buffer);
return start_irqsoff_tracer(irqsoff_trace, set);
}
@@ -380,7 +380,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
if (per_cpu(tracing_cpu, cpu))
return;
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
if (unlikely(!data) || atomic_read(&data->disabled))
return;
@@ -418,7 +418,7 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
if (!tracer_enabled)
return;
- data = per_cpu_ptr(tr->data, cpu);
+ data = per_cpu_ptr(tr->trace_buffer.data, cpu);
if (unlikely(!data) ||
!data->critical_start || atomic_read(&data->disabled))
@@ -565,7 +565,7 @@ static void __irqsoff_tracer_init(struct trace_array *tr)
irqsoff_trace = tr;
/* make sure that the tracer is visible */
smp_wmb();
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
if (start_irqsoff_tracer(tr, is_graph()))
printk(KERN_ERR "failed to start irqsoff tracer\n");
diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
index 349f694..bd90e1b 100644
--- a/kernel/trace/trace_kdb.c
+++ b/kernel/trace/trace_kdb.c
@@ -26,7 +26,7 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
trace_init_global_iter(&iter);
for_each_tracing_cpu(cpu) {
- atomic_inc(&per_cpu_ptr(iter.tr->data, cpu)->disabled);
+ atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
}
old_userobj = trace_flags;
@@ -46,14 +46,14 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
if (cpu_file == RING_BUFFER_ALL_CPUS) {
for_each_tracing_cpu(cpu) {
iter.buffer_iter[cpu] =
- ring_buffer_read_prepare(iter.tr->buffer, cpu);
+ ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu);
ring_buffer_read_start(iter.buffer_iter[cpu]);
tracing_iter_reset(&iter, cpu);
}
} else {
iter.cpu_file = cpu_file;
iter.buffer_iter[cpu_file] =
- ring_buffer_read_prepare(iter.tr->buffer, cpu_file);
+ ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu_file);
ring_buffer_read_start(iter.buffer_iter[cpu_file]);
tracing_iter_reset(&iter, cpu_file);
}
@@ -83,7 +83,7 @@ out:
trace_flags = old_userobj;
for_each_tracing_cpu(cpu) {
- atomic_dec(&per_cpu_ptr(iter.tr->data, cpu)->disabled);
+ atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
}
for_each_tracing_cpu(cpu)
diff --git a/kernel/trace/trace_mmiotrace.c b/kernel/trace/trace_mmiotrace.c
index 2472f6f..a5e8f48 100644
--- a/kernel/trace/trace_mmiotrace.c
+++ b/kernel/trace/trace_mmiotrace.c
@@ -31,7 +31,7 @@ static void mmio_reset_data(struct trace_array *tr)
overrun_detected = false;
prev_overruns = 0;
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
}
static int mmio_trace_init(struct trace_array *tr)
@@ -128,7 +128,7 @@ static void mmio_close(struct trace_iterator *iter)
static unsigned long count_overruns(struct trace_iterator *iter)
{
unsigned long cnt = atomic_xchg(&dropped_count, 0);
- unsigned long over = ring_buffer_overruns(iter->tr->buffer);
+ unsigned long over = ring_buffer_overruns(iter->trace_buffer->buffer);
if (over > prev_overruns)
cnt += over - prev_overruns;
@@ -309,7 +309,7 @@ static void __trace_mmiotrace_rw(struct trace_array *tr,
struct mmiotrace_rw *rw)
{
struct ftrace_event_call *call = &event_mmiotrace_rw;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ring_buffer_event *event;
struct trace_mmiotrace_rw *entry;
int pc = preempt_count();
@@ -330,7 +330,7 @@ static void __trace_mmiotrace_rw(struct trace_array *tr,
void mmio_trace_rw(struct mmiotrace_rw *rw)
{
struct trace_array *tr = mmio_trace_array;
- struct trace_array_cpu *data = per_cpu_ptr(tr->data, smp_processor_id());
+ struct trace_array_cpu *data = per_cpu_ptr(tr->trace_buffer.data, smp_processor_id());
__trace_mmiotrace_rw(tr, data, rw);
}
@@ -339,7 +339,7 @@ static void __trace_mmiotrace_map(struct trace_array *tr,
struct mmiotrace_map *map)
{
struct ftrace_event_call *call = &event_mmiotrace_map;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ring_buffer_event *event;
struct trace_mmiotrace_map *entry;
int pc = preempt_count();
@@ -363,7 +363,7 @@ void mmio_trace_mapping(struct mmiotrace_map *map)
struct trace_array_cpu *data;
preempt_disable();
- data = per_cpu_ptr(tr->data, smp_processor_id());
+ data = per_cpu_ptr(tr->trace_buffer.data, smp_processor_id());
__trace_mmiotrace_map(tr, data, map);
preempt_enable();
}
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index aa92ac3..2edc722 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -643,7 +643,7 @@ lat_print_timestamp(struct trace_iterator *iter, u64 next_ts)
{
unsigned long verbose = trace_flags & TRACE_ITER_VERBOSE;
unsigned long in_ns = iter->iter_flags & TRACE_FILE_TIME_IN_NS;
- unsigned long long abs_ts = iter->ts - iter->tr->time_start;
+ unsigned long long abs_ts = iter->ts - iter->trace_buffer->time_start;
unsigned long long rel_ts = next_ts - iter->ts;
struct trace_seq *s = &iter->seq;
diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 1ffe39a..4e98e3b 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -28,7 +28,7 @@ tracing_sched_switch_trace(struct trace_array *tr,
unsigned long flags, int pc)
{
struct ftrace_event_call *call = &event_context_switch;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct ring_buffer_event *event;
struct ctx_switch_entry *entry;
@@ -69,7 +69,7 @@ probe_sched_switch(void *ignore, struct task_struct *prev, struct task_struct *n
pc = preempt_count();
local_irq_save(flags);
cpu = raw_smp_processor_id();
- data = per_cpu_ptr(ctx_trace->data, cpu);
+ data = per_cpu_ptr(ctx_trace->trace_buffer.data, cpu);
if (likely(!atomic_read(&data->disabled)))
tracing_sched_switch_trace(ctx_trace, prev, next, flags, pc);
@@ -86,7 +86,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
struct ftrace_event_call *call = &event_wakeup;
struct ring_buffer_event *event;
struct ctx_switch_entry *entry;
- struct ring_buffer *buffer = tr->buffer;
+ struct ring_buffer *buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer, TRACE_WAKE,
sizeof(*entry), flags, pc);
@@ -123,7 +123,7 @@ probe_sched_wakeup(void *ignore, struct task_struct *wakee, int success)
pc = preempt_count();
local_irq_save(flags);
cpu = raw_smp_processor_id();
- data = per_cpu_ptr(ctx_trace->data, cpu);
+ data = per_cpu_ptr(ctx_trace->trace_buffer.data, cpu);
if (likely(!atomic_read(&data->disabled)))
tracing_sched_wakeup_trace(ctx_trace, wakee, current,
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index e6725c8..fbe950a 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -89,7 +89,7 @@ func_prolog_preempt_disable(struct trace_array *tr,
if (cpu != wakeup_current_cpu)
goto out_enable;
- *data = per_cpu_ptr(tr->data, cpu);
+ *data = per_cpu_ptr(tr->trace_buffer.data, cpu);
disabled = atomic_inc_return(&(*data)->disabled);
if (unlikely(disabled != 1))
goto out;
@@ -353,7 +353,7 @@ probe_wakeup_sched_switch(void *ignore,
/* disable local data, not wakeup_cpu data */
cpu = raw_smp_processor_id();
- disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->data, cpu)->disabled);
+ disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->trace_buffer.data, cpu)->disabled);
if (likely(disabled != 1))
goto out;
@@ -365,7 +365,7 @@ probe_wakeup_sched_switch(void *ignore,
goto out_unlock;
/* The task we are waiting for is waking up */
- data = per_cpu_ptr(wakeup_trace->data, wakeup_cpu);
+ data = per_cpu_ptr(wakeup_trace->trace_buffer.data, wakeup_cpu);
__trace_function(wakeup_trace, CALLER_ADDR0, CALLER_ADDR1, flags, pc);
tracing_sched_switch_trace(wakeup_trace, prev, next, flags, pc);
@@ -387,7 +387,7 @@ out_unlock:
arch_spin_unlock(&wakeup_lock);
local_irq_restore(flags);
out:
- atomic_dec(&per_cpu_ptr(wakeup_trace->data, cpu)->disabled);
+ atomic_dec(&per_cpu_ptr(wakeup_trace->trace_buffer.data, cpu)->disabled);
}
static void __wakeup_reset(struct trace_array *tr)
@@ -405,7 +405,7 @@ static void wakeup_reset(struct trace_array *tr)
{
unsigned long flags;
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
local_irq_save(flags);
arch_spin_lock(&wakeup_lock);
@@ -435,7 +435,7 @@ probe_wakeup(void *ignore, struct task_struct *p, int success)
return;
pc = preempt_count();
- disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->data, cpu)->disabled);
+ disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->trace_buffer.data, cpu)->disabled);
if (unlikely(disabled != 1))
goto out;
@@ -458,7 +458,7 @@ probe_wakeup(void *ignore, struct task_struct *p, int success)
local_save_flags(flags);
- data = per_cpu_ptr(wakeup_trace->data, wakeup_cpu);
+ data = per_cpu_ptr(wakeup_trace->trace_buffer.data, wakeup_cpu);
data->preempt_timestamp = ftrace_now(cpu);
tracing_sched_wakeup_trace(wakeup_trace, p, current, flags, pc);
@@ -472,7 +472,7 @@ probe_wakeup(void *ignore, struct task_struct *p, int success)
out_locked:
arch_spin_unlock(&wakeup_lock);
out:
- atomic_dec(&per_cpu_ptr(wakeup_trace->data, cpu)->disabled);
+ atomic_dec(&per_cpu_ptr(wakeup_trace->trace_buffer.data, cpu)->disabled);
}
static void start_wakeup_tracer(struct trace_array *tr)
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 51c819c..8672c40 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -21,13 +21,13 @@ static inline int trace_valid_entry(struct trace_entry *entry)
return 0;
}
-static int trace_test_buffer_cpu(struct trace_array *tr, int cpu)
+static int trace_test_buffer_cpu(struct trace_buffer *buf, int cpu)
{
struct ring_buffer_event *event;
struct trace_entry *entry;
unsigned int loops = 0;
- while ((event = ring_buffer_consume(tr->buffer, cpu, NULL, NULL))) {
+ while ((event = ring_buffer_consume(buf->buffer, cpu, NULL, NULL))) {
entry = ring_buffer_event_data(event);
/*
@@ -58,7 +58,7 @@ static int trace_test_buffer_cpu(struct trace_array *tr, int cpu)
* Test the trace buffer to see if all the elements
* are still sane.
*/
-static int trace_test_buffer(struct trace_array *tr, unsigned long *count)
+static int trace_test_buffer(struct trace_buffer *buf, unsigned long *count)
{
unsigned long flags, cnt = 0;
int cpu, ret = 0;
@@ -67,7 +67,7 @@ static int trace_test_buffer(struct trace_array *tr, unsigned long *count)
local_irq_save(flags);
arch_spin_lock(&ftrace_max_lock);
- cnt = ring_buffer_entries(tr->buffer);
+ cnt = ring_buffer_entries(buf->buffer);
/*
* The trace_test_buffer_cpu runs a while loop to consume all data.
@@ -78,7 +78,7 @@ static int trace_test_buffer(struct trace_array *tr, unsigned long *count)
*/
tracing_off();
for_each_possible_cpu(cpu) {
- ret = trace_test_buffer_cpu(tr, cpu);
+ ret = trace_test_buffer_cpu(buf, cpu);
if (ret)
break;
}
@@ -355,7 +355,7 @@ int trace_selftest_startup_dynamic_tracing(struct tracer *trace,
msleep(100);
/* we should have nothing in the buffer */
- ret = trace_test_buffer(tr, &count);
+ ret = trace_test_buffer(&tr->trace_buffer, &count);
if (ret)
goto out;
@@ -376,7 +376,7 @@ int trace_selftest_startup_dynamic_tracing(struct tracer *trace,
ftrace_enabled = 0;
/* check the trace buffer */
- ret = trace_test_buffer(tr, &count);
+ ret = trace_test_buffer(&tr->trace_buffer, &count);
tracing_start();
/* we should only have one item */
@@ -666,7 +666,7 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr)
ftrace_enabled = 0;
/* check the trace buffer */
- ret = trace_test_buffer(tr, &count);
+ ret = trace_test_buffer(&tr->trace_buffer, &count);
trace->reset(tr);
tracing_start();
@@ -737,7 +737,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
* Simulate the init() callback but we attach a watchdog callback
* to detect and recover from possible hangs
*/
- tracing_reset_online_cpus(tr);
+ tracing_reset_online_cpus(&tr->trace_buffer);
set_graph_array(tr);
ret = register_ftrace_graph(&trace_graph_return,
&trace_graph_entry_watchdog);
@@ -760,7 +760,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
tracing_stop();
/* check the trace buffer */
- ret = trace_test_buffer(tr, &count);
+ ret = trace_test_buffer(&tr->trace_buffer, &count);
trace->reset(tr);
tracing_start();
@@ -815,9 +815,9 @@ trace_selftest_startup_irqsoff(struct tracer *trace, struct trace_array *tr)
/* stop the tracing. */
tracing_stop();
/* check both trace buffers */
- ret = trace_test_buffer(tr, NULL);
+ ret = trace_test_buffer(&tr->trace_buffer, NULL);
if (!ret)
- ret = trace_test_buffer(&max_tr, &count);
+ ret = trace_test_buffer(&tr->max_buffer, &count);
trace->reset(tr);
tracing_start();
@@ -877,9 +877,9 @@ trace_selftest_startup_preemptoff(struct tracer *trace, struct trace_array *tr)
/* stop the tracing. */
tracing_stop();
/* check both trace buffers */
- ret = trace_test_buffer(tr, NULL);
+ ret = trace_test_buffer(&tr->trace_buffer, NULL);
if (!ret)
- ret = trace_test_buffer(&max_tr, &count);
+ ret = trace_test_buffer(&tr->max_buffer, &count);
trace->reset(tr);
tracing_start();
@@ -943,11 +943,11 @@ trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *
/* stop the tracing. */
tracing_stop();
/* check both trace buffers */
- ret = trace_test_buffer(tr, NULL);
+ ret = trace_test_buffer(&tr->trace_buffer, NULL);
if (ret)
goto out;
- ret = trace_test_buffer(&max_tr, &count);
+ ret = trace_test_buffer(&tr->max_buffer, &count);
if (ret)
goto out;
@@ -973,11 +973,11 @@ trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *
/* stop the tracing. */
tracing_stop();
/* check both trace buffers */
- ret = trace_test_buffer(tr, NULL);
+ ret = trace_test_buffer(&tr->trace_buffer, NULL);
if (ret)
goto out;
- ret = trace_test_buffer(&max_tr, &count);
+ ret = trace_test_buffer(&tr->max_buffer, &count);
if (!ret && !count) {
printk(KERN_CONT ".. no entries found ..");
@@ -1084,10 +1084,10 @@ trace_selftest_startup_wakeup(struct tracer *trace, struct trace_array *tr)
/* stop the tracing. */
tracing_stop();
/* check both trace buffers */
- ret = trace_test_buffer(tr, NULL);
+ ret = trace_test_buffer(&tr->trace_buffer, NULL);
printk("ret = %d\n", ret);
if (!ret)
- ret = trace_test_buffer(&max_tr, &count);
+ ret = trace_test_buffer(&tr->max_buffer, &count);
trace->reset(tr);
@@ -1126,7 +1126,7 @@ trace_selftest_startup_sched_switch(struct tracer *trace, struct trace_array *tr
/* stop the tracing. */
tracing_stop();
/* check the trace buffer */
- ret = trace_test_buffer(tr, &count);
+ ret = trace_test_buffer(&tr->trace_buffer, &count);
trace->reset(tr);
tracing_start();
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 1cd37ff..68f3f34 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -321,7 +321,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
- buffer = tr->buffer;
+ buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer,
sys_data->enter_event->event.type, size, 0, 0);
if (!event)
@@ -355,7 +355,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (!sys_data)
return;
- buffer = tr->buffer;
+ buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer,
sys_data->exit_event->event.type, sizeof(*entry), 0, 0);
if (!event)
--
1.7.10.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [for-next][PATCH 07/17] tracing: Add snapshot in the per_cpu trace directories
2013-03-08 2:59 [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools Steven Rostedt
` (5 preceding siblings ...)
2013-03-08 3:00 ` [for-next][PATCH 06/17] tracing: Consolidate max_tr into main trace_array structure Steven Rostedt
@ 2013-03-08 3:00 ` Steven Rostedt
2013-03-08 3:00 ` [for-next][PATCH 08/17] tracing: Add config option to allow snapshot to swap per cpu Steven Rostedt
` (9 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
[-- Attachment #1: 0007-tracing-Add-snapshot-in-the-per_cpu-trace-directorie.patch --]
[-- Type: text/plain, Size: 5566 bytes --]
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Add the snapshot file to the per_cpu tracing directories so that the
snapshot can be read for an individual cpu. This also allows clearing
an individual cpu's data from the snapshot buffer.
If the kernel allows it (CONFIG_RING_BUFFER_ALLOW_SWAP is set), then
echoing '1' into one of the per_cpu snapshot files will swap that
individual cpu's buffer instead of swapping the entire buffer.
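For illustration, a minimal userspace sketch of driving the new file
(not part of this patch; the debugfs mount point and cpu number are
assumptions):

/* Take and then read cpu1's snapshot from userspace.
 * Assumes debugfs is mounted at /sys/kernel/debug and cpu1 exists.
 */
#include <stdio.h>

int main(void)
{
	const char *snap = "/sys/kernel/debug/tracing/per_cpu/cpu1/snapshot";
	char line[4096];
	FILE *f = fopen(snap, "w");

	if (!f) {
		perror("open snapshot for write");
		return 1;
	}
	fputs("1\n", f);	/* allocate (if needed) and take a snapshot */
	fclose(f);

	f = fopen(snap, "r");	/* reading shows cpu1's snapshot contents */
	if (!f) {
		perror("open snapshot for read");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}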
Cc: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 66 ++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 56 insertions(+), 10 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index cd15864..96cd165 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2435,6 +2435,31 @@ static void test_ftrace_alive(struct seq_file *m)
}
#ifdef CONFIG_TRACER_MAX_TRACE
+static void show_snapshot_main_help(struct seq_file *m)
+{
+ seq_printf(m, "# echo 0 > snapshot : Clears and frees snapshot buffer\n");
+ seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n");
+ seq_printf(m, "# Takes a snapshot of the main buffer.\n");
+ seq_printf(m, "# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)\n");
+ seq_printf(m, "# (Doesn't have to be '2' works with any number that\n");
+ seq_printf(m, "# is not a '0' or '1')\n");
+}
+
+static void show_snapshot_percpu_help(struct seq_file *m)
+{
+ seq_printf(m, "# echo 0 > snapshot : Invalid for per_cpu snapshot file.\n");
+#ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
+ seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n");
+ seq_printf(m, "# Takes a snapshot of the main buffer for this cpu.\n");
+#else
+ seq_printf(m, "# echo 1 > snapshot : Not supported with this kernel.\n");
+ seq_printf(m, "# Must use main snapshot file to allocate.\n");
+#endif
+ seq_printf(m, "# echo 2 > snapshot : Clears this cpu's snapshot buffer (but does not allocate)\n");
+ seq_printf(m, "# (Doesn't have to be '2' works with any number that\n");
+ seq_printf(m, "# is not a '0' or '1')\n");
+}
+
static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter)
{
if (iter->trace->allocated_snapshot)
@@ -2443,12 +2468,10 @@ static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter)
seq_printf(m, "#\n# * Snapshot is freed *\n#\n");
seq_printf(m, "# Snapshot commands:\n");
- seq_printf(m, "# echo 0 > snapshot : Clears and frees snapshot buffer\n");
- seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n");
- seq_printf(m, "# Takes a snapshot of the main buffer.\n");
- seq_printf(m, "# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)\n");
- seq_printf(m, "# (Doesn't have to be '2' works with any number that\n");
- seq_printf(m, "# is not a '0' or '1')\n");
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
+ show_snapshot_main_help(m);
+ else
+ show_snapshot_percpu_help(m);
}
#else
/* Should never be called */
@@ -4178,6 +4201,7 @@ static int tracing_snapshot_open(struct inode *inode, struct file *file)
}
iter->tr = tc->tr;
iter->trace_buffer = &tc->tr->max_buffer;
+ iter->cpu_file = tc->cpu;
m->private = iter;
file->private_data = m;
}
@@ -4212,6 +4236,10 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
switch (val) {
case 0:
+ if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+ ret = -EINVAL;
+ break;
+ }
if (tr->current_trace->allocated_snapshot) {
/* free spare buffer */
ring_buffer_resize(tr->max_buffer.buffer, 1,
@@ -4222,6 +4250,13 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
}
break;
case 1:
+/* Only allow per-cpu swap if the ring buffer supports it */
+#ifndef CONFIG_RING_BUFFER_ALLOW_SWAP
+ if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+ ret = -EINVAL;
+ break;
+ }
+#endif
if (!tr->current_trace->allocated_snapshot) {
/* allocate spare buffer */
ret = resize_buffer_duplicate_size(&tr->max_buffer,
@@ -4230,15 +4265,21 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
break;
tr->current_trace->allocated_snapshot = true;
}
-
local_irq_disable();
/* Now, we're going to swap */
- update_max_tr(&global_trace, current, smp_processor_id());
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
+ update_max_tr(&global_trace, current, smp_processor_id());
+ else
+ update_max_tr_single(&global_trace, current, iter->cpu_file);
local_irq_enable();
break;
default:
- if (tr->current_trace->allocated_snapshot)
- tracing_reset_online_cpus(&tr->max_buffer);
+ if (tr->current_trace->allocated_snapshot) {
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
+ tracing_reset_online_cpus(&tr->max_buffer);
+ else
+ tracing_reset(&tr->max_buffer, iter->cpu_file);
+ }
break;
}
@@ -4806,6 +4847,11 @@ tracing_init_debugfs_percpu(struct trace_array *tr, long cpu)
trace_create_file("buffer_size_kb", 0444, d_cpu,
(void *)&data->trace_cpu, &tracing_entries_fops);
+
+#ifdef CONFIG_TRACER_SNAPSHOT
+ trace_create_file("snapshot", 0644, d_cpu,
+ (void *)&data->trace_cpu, &snapshot_fops);
+#endif
}
#ifdef CONFIG_FTRACE_SELFTEST
--
1.7.10.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [for-next][PATCH 08/17] tracing: Add config option to allow snapshot to swap per cpu
2013-03-08 2:59 [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools Steven Rostedt
` (6 preceding siblings ...)
2013-03-08 3:00 ` [for-next][PATCH 07/17] tracing: Add snapshot in the per_cpu trace directories Steven Rostedt
@ 2013-03-08 3:00 ` Steven Rostedt
2013-03-08 3:00 ` [for-next][PATCH 09/17] tracing: Add snapshot_raw to extract the raw data from snapshot Steven Rostedt
` (8 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
[-- Attachment #1: 0008-tracing-Add-config-option-to-allow-snapshot-to-swap-.patch --]
[-- Type: text/plain, Size: 2843 bytes --]
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
When the preempt or irq latency tracers are enabled, they require
the ring buffer to be able to swap the per cpu sub buffers between
two main buffers. This adds a slight overhead to tracing, as the
trace recording needs to perform some checks to synchronize
between recording and swaps that might be happening on other CPUs.
The config RING_BUFFER_ALLOW_SWAP is set when a user of the ring
buffer needs the "swap cpu" feature; otherwise the extra checks
are compiled out and add nothing to the tracing overhead.
The snapshot feature will swap per CPU if the RING_BUFFER_ALLOW_SWAP
config is set. But that config only gets selected by things like
OPROFILE and the irq and preempt latency tracers.
This config is added to let the user decide whether to include the
per-cpu swap feature for the snapshot, independent of whether any
other user of the ring buffer selects RING_BUFFER_ALLOW_SWAP.
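As a rough sketch of the dependency this option expresses (the helper
name below is made up for illustration; ring_buffer_swap_cpu() is the
real API, and it is only built when RING_BUFFER_ALLOW_SWAP is set):

/* Hypothetical helper: a per-cpu snapshot swap is only possible
 * when the ring buffer was built with the swap checks compiled in.
 */
#ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
static int snapshot_swap_one_cpu(struct trace_array *tr, int cpu)
{
	/* Swap only @cpu's sub-buffer between the live and max buffers */
	return ring_buffer_swap_cpu(tr->max_buffer.buffer,
				    tr->trace_buffer.buffer, cpu);
}
#else
static int snapshot_swap_one_cpu(struct trace_array *tr, int cpu)
{
	return -EINVAL;	/* per-cpu swap support not compiled in */
}
#endif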
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/Kconfig | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 590a27f..f78eab2 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -192,6 +192,7 @@ config IRQSOFF_TRACER
select TRACER_MAX_TRACE
select RING_BUFFER_ALLOW_SWAP
select TRACER_SNAPSHOT
+ select TRACER_SNAPSHOT_PER_CPU_SWAP
help
This option measures the time spent in irqs-off critical
sections, with microsecond accuracy.
@@ -215,6 +216,7 @@ config PREEMPT_TRACER
select TRACER_MAX_TRACE
select RING_BUFFER_ALLOW_SWAP
select TRACER_SNAPSHOT
+ select TRACER_SNAPSHOT_PER_CPU_SWAP
help
This option measures the time spent in preemption-off critical
sections, with microsecond accuracy.
@@ -266,6 +268,27 @@ config TRACER_SNAPSHOT
echo 1 > /sys/kernel/debug/tracing/snapshot
cat snapshot
+config TRACER_SNAPSHOT_PER_CPU_SWAP
+ bool "Allow snapshot to swap per CPU"
+ depends on TRACER_SNAPSHOT
+ select RING_BUFFER_ALLOW_SWAP
+ help
+ Allow doing a snapshot of a single CPU buffer instead of a
+ full swap (all buffers). If this is set, then the following is
+ allowed:
+
+ echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot
+
+ After which, only the tracing buffer for CPU 2 was swapped with
+ the main tracing buffer, and the other CPU buffers remain the same.
+
+ When this is enabled, this adds a little more overhead to the
+ trace recording, as it needs to add some checks to synchronize
+ recording with swaps. But this does not affect the performance
+ of the overall system. This is enabled by default when the preempt
+ or irq latency tracers are enabled, as those need to swap as well
+ and already adds the overhead (plus a lot more).
+
config TRACE_BRANCH_PROFILING
bool
select GENERIC_TRACER
--
1.7.10.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [for-next][PATCH 09/17] tracing: Add snapshot_raw to extract the raw data from snapshot
2013-03-08 2:59 [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools Steven Rostedt
` (7 preceding siblings ...)
2013-03-08 3:00 ` [for-next][PATCH 08/17] tracing: Add config option to allow snapshot to swap per cpu Steven Rostedt
@ 2013-03-08 3:00 ` Steven Rostedt
2013-03-08 3:00 ` [for-next][PATCH 10/17] tracing: Have trace_array keep track if snapshot buffer is allocated Steven Rostedt
` (7 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
[-- Attachment #1: 0009-tracing-Add-snapshot_raw-to-extract-the-raw-data-fro.patch --]
[-- Type: text/plain, Size: 5615 bytes --]
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Add a 'snapshot_raw' per_cpu file that allows tools to read the raw
binary data of the snapshot buffer.
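For illustration, a rough userspace consumer might look like the
following (the path and the fixed page size are assumptions for the
sketch, not defined by this patch):

/* Dump cpu0's snapshot_raw to stdout, one ring-buffer page at a time.
 * Like trace_pipe_raw, reads return raw binary sub-buffer pages that
 * a separate decoder would parse.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path =
		"/sys/kernel/debug/tracing/per_cpu/cpu0/snapshot_raw";
	unsigned char page[4096];	/* assumed sub-buffer page size */
	ssize_t r;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open snapshot_raw");
		return 1;
	}
	while ((r = read(fd, page, sizeof(page))) > 0)
		fwrite(page, 1, r, stdout);	/* hand off to a decoder */
	close(fd);
	return 0;
}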
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 113 ++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 95 insertions(+), 18 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 96cd165..e5ce4dd 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4177,6 +4177,12 @@ static int tracing_clock_open(struct inode *inode, struct file *file)
return single_open(file, tracing_clock_show, inode->i_private);
}
+struct ftrace_buffer_info {
+ struct trace_iterator iter;
+ void *spare;
+ unsigned int read;
+};
+
#ifdef CONFIG_TRACER_SNAPSHOT
static int tracing_snapshot_open(struct inode *inode, struct file *file)
{
@@ -4307,6 +4313,35 @@ static int tracing_snapshot_release(struct inode *inode, struct file *file)
return 0;
}
+static int tracing_buffers_open(struct inode *inode, struct file *filp);
+static ssize_t tracing_buffers_read(struct file *filp, char __user *ubuf,
+ size_t count, loff_t *ppos);
+static int tracing_buffers_release(struct inode *inode, struct file *file);
+static ssize_t tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len, unsigned int flags);
+
+static int snapshot_raw_open(struct inode *inode, struct file *filp)
+{
+ struct ftrace_buffer_info *info;
+ int ret;
+
+ ret = tracing_buffers_open(inode, filp);
+ if (ret < 0)
+ return ret;
+
+ info = filp->private_data;
+
+ if (info->iter.trace->use_max_tr) {
+ tracing_buffers_release(inode, filp);
+ return -EBUSY;
+ }
+
+ info->iter.snapshot = true;
+ info->iter.trace_buffer = &info->iter.tr->max_buffer;
+
+ return ret;
+}
+
#endif /* CONFIG_TRACER_SNAPSHOT */
@@ -4373,14 +4408,17 @@ static const struct file_operations snapshot_fops = {
.llseek = tracing_seek,
.release = tracing_snapshot_release,
};
-#endif /* CONFIG_TRACER_SNAPSHOT */
-struct ftrace_buffer_info {
- struct trace_iterator iter;
- void *spare;
- unsigned int read;
+static const struct file_operations snapshot_raw_fops = {
+ .open = snapshot_raw_open,
+ .read = tracing_buffers_read,
+ .release = tracing_buffers_release,
+ .splice_read = tracing_buffers_splice_read,
+ .llseek = no_llseek,
};
+#endif /* CONFIG_TRACER_SNAPSHOT */
+
static int tracing_buffers_open(struct inode *inode, struct file *filp)
{
struct trace_cpu *tc = inode->i_private;
@@ -4423,16 +4461,26 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
struct ftrace_buffer_info *info = filp->private_data;
struct trace_iterator *iter = &info->iter;
ssize_t ret;
- size_t size;
+ ssize_t size;
if (!count)
return 0;
+ mutex_lock(&trace_types_lock);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (iter->snapshot && iter->tr->current_trace->use_max_tr) {
+ size = -EBUSY;
+ goto out_unlock;
+ }
+#endif
+
if (!info->spare)
info->spare = ring_buffer_alloc_read_page(iter->trace_buffer->buffer,
iter->cpu_file);
+ size = -ENOMEM;
if (!info->spare)
- return -ENOMEM;
+ goto out_unlock;
/* Do we have previous read data to read? */
if (info->read < PAGE_SIZE)
@@ -4448,31 +4496,42 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
if (ret < 0) {
if (trace_empty(iter)) {
- if ((filp->f_flags & O_NONBLOCK))
- return -EAGAIN;
+ if ((filp->f_flags & O_NONBLOCK)) {
+ size = -EAGAIN;
+ goto out_unlock;
+ }
+ mutex_unlock(&trace_types_lock);
iter->trace->wait_pipe(iter);
- if (signal_pending(current))
- return -EINTR;
+ mutex_lock(&trace_types_lock);
+ if (signal_pending(current)) {
+ size = -EINTR;
+ goto out_unlock;
+ }
goto again;
}
- return 0;
+ size = 0;
+ goto out_unlock;
}
info->read = 0;
-
read:
size = PAGE_SIZE - info->read;
if (size > count)
size = count;
ret = copy_to_user(ubuf, info->spare + info->read, size);
- if (ret == size)
- return -EFAULT;
+ if (ret == size) {
+ size = -EFAULT;
+ goto out_unlock;
+ }
size -= ret;
*ppos += size;
info->read += size;
+ out_unlock:
+ mutex_unlock(&trace_types_lock);
+
return size;
}
@@ -4562,10 +4621,21 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
};
struct buffer_ref *ref;
int entries, size, i;
- size_t ret;
+ ssize_t ret;
- if (splice_grow_spd(pipe, &spd))
- return -ENOMEM;
+ mutex_lock(&trace_types_lock);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (iter->snapshot && iter->tr->current_trace->use_max_tr) {
+ ret = -EBUSY;
+ goto out;
+ }
+#endif
+
+ if (splice_grow_spd(pipe, &spd)) {
+ ret = -ENOMEM;
+ goto out;
+ }
if (*ppos & (PAGE_SIZE - 1)) {
ret = -EINVAL;
@@ -4637,7 +4707,9 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
ret = -EAGAIN;
goto out;
}
+ mutex_unlock(&trace_types_lock);
iter->trace->wait_pipe(iter);
+ mutex_lock(&trace_types_lock);
if (signal_pending(current)) {
ret = -EINTR;
goto out;
@@ -4648,6 +4720,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
ret = splice_to_pipe(pipe, &spd);
splice_shrink_spd(&spd);
out:
+ mutex_unlock(&trace_types_lock);
+
return ret;
}
@@ -4851,6 +4925,9 @@ tracing_init_debugfs_percpu(struct trace_array *tr, long cpu)
#ifdef CONFIG_TRACER_SNAPSHOT
trace_create_file("snapshot", 0644, d_cpu,
(void *)&data->trace_cpu, &snapshot_fops);
+
+ trace_create_file("snapshot_raw", 0444, d_cpu,
+ (void *)&data->trace_cpu, &snapshot_raw_fops);
#endif
}
--
1.7.10.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [for-next][PATCH 10/17] tracing: Have trace_array keep track if snapshot buffer is allocated
2013-03-08 2:59 [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools Steven Rostedt
` (8 preceding siblings ...)
2013-03-08 3:00 ` [for-next][PATCH 09/17] tracing: Add snapshot_raw to extract the raw data from snapshot Steven Rostedt
@ 2013-03-08 3:00 ` Steven Rostedt
2013-03-08 3:00 ` [for-next][PATCH 11/17] tracing: Consolidate buffer allocation code Steven Rostedt
` (6 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
[-- Attachment #1: 0010-tracing-Have-trace_array-keep-track-if-snapshot-buff.patch --]
[-- Type: text/plain, Size: 5294 bytes --]
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
The snapshot buffer belongs to the trace array, not to the tracer that
is currently running. The trace array should therefore be the data
structure that keeps track of whether or not the snapshot buffer is
allocated, not the tracer descriptor. Having the trace array keep
track of it makes future modifications much easier.
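A simplified sketch of the resulting check (the wrapper function is
invented for illustration; update_max_tr() and the allocated_snapshot
field are from this patch):

/* Invented wrapper showing the new ownership: the trace array, not
 * the tracer, knows whether the spare snapshot buffer exists.
 */
static int take_snapshot(struct trace_array *tr)
{
	/* was: tr->current_trace->allocated_snapshot */
	if (!tr->allocated_snapshot)
		return -EINVAL;

	local_irq_disable();
	update_max_tr(tr, current, smp_processor_id());
	local_irq_enable();
	return 0;
}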
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 32 +++++++++++++++-----------------
kernel/trace/trace.h | 2 +-
2 files changed, 16 insertions(+), 18 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e5ce4dd..3213f1e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -667,7 +667,7 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
WARN_ON_ONCE(!irqs_disabled());
- if (!tr->current_trace->allocated_snapshot) {
+ if (!tr->allocated_snapshot) {
/* Only the nop tracer should hit this when disabling */
WARN_ON_ONCE(tr->current_trace != &nop_trace);
return;
@@ -699,7 +699,7 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
return;
WARN_ON_ONCE(!irqs_disabled());
- if (WARN_ON_ONCE(!tr->current_trace->allocated_snapshot))
+ if (WARN_ON_ONCE(!tr->allocated_snapshot))
return;
arch_spin_lock(&ftrace_max_lock);
@@ -801,7 +801,7 @@ int register_tracer(struct tracer *type)
if (ring_buffer_expanded)
ring_buffer_resize(tr->max_buffer.buffer, trace_buf_size,
RING_BUFFER_ALL_CPUS);
- type->allocated_snapshot = true;
+ tr->allocated_snapshot = true;
}
#endif
@@ -821,7 +821,7 @@ int register_tracer(struct tracer *type)
#ifdef CONFIG_TRACER_MAX_TRACE
if (type->use_max_tr) {
- type->allocated_snapshot = false;
+ tr->allocated_snapshot = false;
/* Shrink the max buffer again */
if (ring_buffer_expanded)
@@ -2462,7 +2462,7 @@ static void show_snapshot_percpu_help(struct seq_file *m)
static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter)
{
- if (iter->trace->allocated_snapshot)
+ if (iter->tr->allocated_snapshot)
seq_printf(m, "#\n# * Snapshot is allocated *\n#\n");
else
seq_printf(m, "#\n# * Snapshot is freed *\n#\n");
@@ -3336,12 +3336,12 @@ static int tracing_set_tracer(const char *buf)
if (tr->current_trace->reset)
tr->current_trace->reset(tr);
-#ifdef CONFIG_TRACER_MAX_TRACE
- had_max_tr = tr->current_trace->allocated_snapshot;
-
/* Current trace needs to be nop_trace before synchronize_sched */
tr->current_trace = &nop_trace;
+#ifdef CONFIG_TRACER_MAX_TRACE
+ had_max_tr = tr->allocated_snapshot;
+
if (had_max_tr && !t->use_max_tr) {
/*
* We need to make sure that the update_max_tr sees that
@@ -3359,10 +3359,8 @@ static int tracing_set_tracer(const char *buf)
ring_buffer_resize(tr->max_buffer.buffer, 1, RING_BUFFER_ALL_CPUS);
set_buffer_entries(&tr->max_buffer, 1);
tracing_reset_online_cpus(&tr->max_buffer);
- tr->current_trace->allocated_snapshot = false;
+ tr->allocated_snapshot = false;
}
-#else
- tr->current_trace = &nop_trace;
#endif
destroy_trace_option_files(topts);
@@ -3375,7 +3373,7 @@ static int tracing_set_tracer(const char *buf)
RING_BUFFER_ALL_CPUS);
if (ret < 0)
goto out;
- t->allocated_snapshot = true;
+ tr->allocated_snapshot = true;
}
#endif
@@ -4246,13 +4244,13 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
ret = -EINVAL;
break;
}
- if (tr->current_trace->allocated_snapshot) {
+ if (tr->allocated_snapshot) {
/* free spare buffer */
ring_buffer_resize(tr->max_buffer.buffer, 1,
RING_BUFFER_ALL_CPUS);
set_buffer_entries(&tr->max_buffer, 1);
tracing_reset_online_cpus(&tr->max_buffer);
- tr->current_trace->allocated_snapshot = false;
+ tr->allocated_snapshot = false;
}
break;
case 1:
@@ -4263,13 +4261,13 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
break;
}
#endif
- if (!tr->current_trace->allocated_snapshot) {
+ if (!tr->allocated_snapshot) {
/* allocate spare buffer */
ret = resize_buffer_duplicate_size(&tr->max_buffer,
&tr->trace_buffer, RING_BUFFER_ALL_CPUS);
if (ret < 0)
break;
- tr->current_trace->allocated_snapshot = true;
+ tr->allocated_snapshot = true;
}
local_irq_disable();
/* Now, we're going to swap */
@@ -4280,7 +4278,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
local_irq_enable();
break;
default:
- if (tr->current_trace->allocated_snapshot) {
+ if (tr->allocated_snapshot) {
if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
tracing_reset_online_cpus(&tr->max_buffer);
else
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 18f7403..6111933 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -197,6 +197,7 @@ struct trace_array {
* the trace_buffer so the tracing can continue.
*/
struct trace_buffer max_buffer;
+ bool allocated_snapshot;
#endif
int buffer_disabled;
struct trace_cpu trace_cpu; /* place holder */
@@ -363,7 +364,6 @@ struct tracer {
bool print_max;
#ifdef CONFIG_TRACER_MAX_TRACE
bool use_max_tr;
- bool allocated_snapshot;
#endif
};
--
1.7.10.4
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [for-next][PATCH 11/17] tracing: Consolidate buffer allocation code
2013-03-08 2:59 [for-next][PATCH 00/17] tracing: multi-buffers with snapshots, per_cpu and some debugging tools Steven Rostedt
` (9 preceding siblings ...)
2013-03-08 3:00 ` [for-next][PATCH 10/17] tracing: Have trace_array keep track if snapshot buffer is allocated Steven Rostedt
@ 2013-03-08 3:00 ` Steven Rostedt
2013-03-08 3:00 ` [for-next][PATCH 12/17] tracing: Add snapshot feature to instances Steven Rostedt
` (5 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
[-- Attachment #1: 0011-tracing-Consolidate-buffer-allocation-code.patch --]
[-- Type: text/plain, Size: 6298 bytes --]
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
There's a bit of duplicate code in creating the trace buffers: the
normal trace buffer and the max trace buffer are set up the same way
for the instances and for the main global_trace. Consolidate this code
into a single helper, making it cleaner, more readable, and less
duplicated.
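After this change, both the global buffer setup and
new_instance_create() reduce to a single call; roughly (the caller
below is a simplified composite for illustration, not actual code):

/* Simplified composite of the two call sites after consolidation */
static int setup_trace_array(struct trace_array *tr, int size)
{
	/* One helper allocates both trace_buffer and, when
	 * CONFIG_TRACER_MAX_TRACE is set, max_buffer, and it unwinds
	 * its own partial allocations on error.
	 */
	if (allocate_trace_buffers(tr, size) < 0)
		return -ENOMEM;

	/* caller-specific init (debugfs files, lists, ...) follows */
	return 0;
}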
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 130 ++++++++++++++++++++++++--------------------------
1 file changed, 63 insertions(+), 67 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 3213f1e..1dec636 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3146,6 +3146,7 @@ int tracer_init(struct tracer *t, struct trace_array *tr)
static void set_buffer_entries(struct trace_buffer *buf, unsigned long val)
{
int cpu;
+
for_each_tracing_cpu(cpu)
per_cpu_ptr(buf->data, cpu)->entries = val;
}
@@ -5231,12 +5232,70 @@ struct dentry *trace_instance_dir;
static void
init_tracer_debugfs(struct trace_array *tr, struct dentry *d_tracer);
-static int new_instance_create(const char *name)
+static void init_trace_buffers(struct trace_array *tr, struct trace_buffer *buf)
+{
+ int cpu;
+
+ for_each_tracing_cpu(cpu) {
+ memset(per_cpu_ptr(buf->data, cpu), 0, sizeof(struct trace_array_cpu));
+ per_cpu_ptr(buf->data, cpu)->trace_cpu.cpu = cpu;
+ per_cpu_ptr(buf->data, cpu)->trace_cpu.tr = tr;
+ }
+}
+
+static int allocate_trace_buffers(struct trace_array *tr, int size)
{
enum ring_buffer_flags rb_flags;
+
+ rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
+
+ tr->trace_buffer.buffer = ring_buffer_alloc(size, rb_flags);
+ if (!tr->trace_buffer.buffer)
+ goto out_free;
+
+ tr->trace_buffer.data = alloc_percpu(struct trace_array_cpu);
+ if (!tr->trace_buffer.data)
+ goto out_free;
+
+ init_trace_buffers(tr, &tr->trace_buffer);
+
+ /* Allocate the first page for all buffers */
+ set_buffer_entries(&tr->trace_buffer,
+ ring_buffer_size(tr->trace_buffer.buffer, 0));
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+
+ tr->max_buffer.buffer = ring_buffer_alloc(1, rb_flags);
+ if (!tr->max_buffer.buffer)
+ goto out_free;
+
+ tr->max_buffer.data = alloc_percpu(struct trace_array_cpu);
+ if (!tr->max_buffer.data)
+ goto out_free;
+
+ init_trace_buffers(tr, &tr->max_buffer);
+
+ set_buffer_entries(&tr->max_buffer, 1);
+#endif
+ return 0;
+
+ out_free:
+ if (tr->trace_buffer.buffer)
+ ring_buffer_free(tr->trace_buffer.buffer);
+ free_percpu(tr->trace_buffer.data);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (tr->max_buffer.buffer)
+ ring_buffer_free(tr->max_buffer.buffer);
+ free_percpu(tr->max_buffer.data);
+#endif
+ return -ENOMEM;
+}
+
+static int new_instance_create(const char *name)
+{
struct trace_array *tr;
int ret;
- int i;
mutex_lock(&trace_types_lock);
@@ -5262,22 +5321,9 @@ static int new_instance_create(const char *name)
INIT_LIST_HEAD(&tr->systems);
INIT_LIST_HEAD(&tr->events);
- rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
-
- tr->trace_buffer.buffer = ring_buffer_alloc(trace_buf_size, rb_flags);
- if (!tr->trace_buffer.buffer)
- goto out_free_tr;
-
- tr->trace_buffer.data = alloc_percpu(struct trace_array_cpu);
- if (!tr->trace_buffer.data)
+ if (allocate_trace_buffers(tr, trace_buf_size) < 0)
goto out_free_tr;
- for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(tr->trace_buffer.data, i), 0, sizeof(struct trace_array_cpu));
- per_cpu_ptr(tr->trace_buffer.data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(tr->trace_buffer.data, i)->trace_cpu.tr = tr;
- }
-
/* Holder for file callbacks */
tr->trace_cpu.cpu = RING_BUFFER_ALL_CPUS;
tr->trace_cpu.tr = tr;
@@ -5700,8 +5746,6 @@ EXPORT_SYMBOL_GPL(ftrace_dump);
__init static int tracer_alloc_buffers(void)
{
int ring_buf_size;
- enum ring_buffer_flags rb_flags;
- int i;
int ret = -ENOMEM;
@@ -5722,69 +5766,21 @@ __init static int tracer_alloc_buffers(void)
else
ring_buf_size = 1;
- rb_flags = trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;
-
cpumask_copy(tracing_buffer_mask, cpu_possible_mask);
cpumask_copy(tracing_cpumask, cpu_all_mask);
raw_spin_lock_init(&global_trace.start_lock);
/* TODO: make the number of buffers hot pluggable with CPUS */
- global_trace.trace_buffer.buffer = ring_buffer_alloc(ring_buf_size, rb_flags);
- if (!global_trace.trace_buffer.buffer) {
+ if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) {
printk(KERN_ERR "tracer: failed to allocate ring buffer!\n");
WARN_ON(1);
goto out_free_cpumask;
}
- global_trace.trace_buffer.data = alloc_percpu(struct trace_array_cpu);
-
- if (!global_trace.trace_buffer.data) {
- printk(KERN_ERR "tracer: failed to allocate percpu memory!\n");
- WARN_ON(1);
- goto out_free_cpumask;
- }
-
- for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(global_trace.trace_buffer.data, i), 0,
- sizeof(struct trace_array_cpu));
- per_cpu_ptr(global_trace.trace_buffer.data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(global_trace.trace_buffer.data, i)->trace_cpu.tr = &global_trace;
- }
-
if (global_trace.buffer_disabled)
tracing_off();
-#ifdef CONFIG_TRACER_MAX_TRACE
- global_trace.max_buffer.data = alloc_percpu(struct trace_array_cpu);
- if (!global_trace.max_buffer.data) {
- printk(KERN_ERR "tracer: failed to allocate percpu memory!\n");
- WARN_ON(1);
- goto out_free_cpumask;
- }
- global_trace.max_buffer.buffer = ring_buffer_alloc(1, rb_flags);
- if (!global_trace.max_buffer.buffer) {
- printk(KERN_ERR "tracer: failed to allocate max ring buffer!\n");
- WARN_ON(1);
- ring_buffer_free(global_trace.trace_buffer.buffer);
- goto out_free_cpumask;
- }
-
- for_each_tracing_cpu(i) {
- memset(per_cpu_ptr(global_trace.max_buffer.data, i), 0,
- sizeof(struct trace_array_cpu));
- per_cpu_ptr(global_trace.max_buffer.data, i)->trace_cpu.cpu = i;
- per_cpu_ptr(global_trace.max_buffer.data, i)->trace_cpu.tr = &global_trace;
- }
-#endif
-
- /* Allocate the first page for all buffers */
- set_buffer_entries(&global_trace.trace_buffer,
- ring_buffer_size(global_trace.trace_buffer.buffer, 0));
-#ifdef CONFIG_TRACER_MAX_TRACE
- set_buffer_entries(&global_trace.max_buffer, 1);
-#endif
-
trace_init_cmdlines();
register_tracer(&nop_trace);
--
1.7.10.4
* [for-next][PATCH 12/17] tracing: Add snapshot feature to instances
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Add the "snapshot" file to the the multi-buffer instances.
cd /sys/kernel/debug/tracing/instances
mkdir foo
ls foo
buffer_size_kb buffer_total_size_kb events free_buffer set_event
snapshot trace trace_clock trace_marker trace_options trace_pipe
tracing_on
cat foo/snapshot
# tracer: nop
#
#
# * Snapshot is freed *
#
# Snapshot commands:
# echo 0 > snapshot : Clears and frees snapshot buffer
# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.
# Takes a snapshot of the main buffer.
# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)
# (Doesn't have to be '2'; works with any number that
# is not a '0' or '1')
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 1dec636..163743a 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4273,9 +4273,9 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
local_irq_disable();
/* Now, we're going to swap */
if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
- update_max_tr(&global_trace, current, smp_processor_id());
+ update_max_tr(tr, current, smp_processor_id());
else
- update_max_tr_single(&global_trace, current, iter->cpu_file);
+ update_max_tr_single(tr, current, iter->cpu_file);
local_irq_enable();
break;
default:
@@ -5497,6 +5497,11 @@ init_tracer_debugfs(struct trace_array *tr, struct dentry *d_tracer)
trace_create_file("tracing_on", 0644, d_tracer,
tr, &rb_simple_fops);
+
+#ifdef CONFIG_TRACER_SNAPSHOT
+ trace_create_file("snapshot", 0644, d_tracer,
+ (void *)&tr->trace_cpu, &snapshot_fops);
+#endif
}
static __init int tracer_init_debugfs(void)
@@ -5538,11 +5543,6 @@ static __init int tracer_init_debugfs(void)
&ftrace_update_tot_cnt, &tracing_dyn_info_fops);
#endif
-#ifdef CONFIG_TRACER_SNAPSHOT
- trace_create_file("snapshot", 0644, d_tracer,
- (void *)&global_trace.trace_cpu, &snapshot_fops);
-#endif
-
create_trace_instances(d_tracer);
create_trace_options_dir(&global_trace);
--
1.7.10.4
* [for-next][PATCH 13/17] tracing: Add per_cpu directory into tracing instances
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Add the per_cpu directory to the created tracing instances:
cd /sys/kernel/debug/tracing/instances
mkdir foo
ls foo/per_cpu/cpu0
buffer_size_kb snapshot_raw trace trace_pipe_raw
snapshot stats trace_pipe
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 163743a..e6547ea 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5470,6 +5470,7 @@ static __init void create_trace_instances(struct dentry *d_tracer)
static void
init_tracer_debugfs(struct trace_array *tr, struct dentry *d_tracer)
{
+ int cpu;
trace_create_file("trace_options", 0644, d_tracer,
tr, &tracing_iter_fops);
@@ -5502,12 +5503,15 @@ init_tracer_debugfs(struct trace_array *tr, struct dentry *d_tracer)
trace_create_file("snapshot", 0644, d_tracer,
(void *)&tr->trace_cpu, &snapshot_fops);
#endif
+
+ for_each_tracing_cpu(cpu)
+ tracing_init_debugfs_percpu(tr, cpu);
+
}
static __init int tracer_init_debugfs(void)
{
struct dentry *d_tracer;
- int cpu;
trace_access_lock_init();
@@ -5547,9 +5551,6 @@ static __init int tracer_init_debugfs(void)
create_trace_options_dir(&global_trace);
- for_each_tracing_cpu(cpu)
- tracing_init_debugfs_percpu(&global_trace, cpu);
-
return 0;
}
--
1.7.10.4
* [for-next][PATCH 14/17] tracing: Prevent deleting instances when they are being read
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Add a ref count to the trace_array structure and prevent removal
of instances that have open file descriptors: deleting such an
instance now fails with -EBUSY until every reader has closed its
file. A condensed sketch of the scheme follows.
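The sketch below uses hypothetical names (struct instance,
instance_get() and friends); in the patch itself the count lives in
struct trace_array and everything runs under trace_types_lock:

#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(lock);	/* stands in for trace_types_lock */

struct instance {
	int ref;	/* readers currently holding this instance open */
};

/* open: taken while the reader's file is being set up */
static void instance_get(struct instance *inst)
{
	mutex_lock(&lock);
	inst->ref++;
	mutex_unlock(&lock);
}

/* release: dropped when the reader closes its file */
static void instance_put(struct instance *inst)
{
	mutex_lock(&lock);
	inst->ref--;
	mutex_unlock(&lock);
}

/* delete: refuse to tear down while any reference remains */
static int instance_delete(struct instance *inst)
{
	int ret = 0;

	mutex_lock(&lock);
	if (inst->ref)
		ret = -EBUSY;
	mutex_unlock(&lock);
	return ret;
}

Because the count is only ever changed under the same mutex that the
delete path takes, a zero count seen by instance_delete() cannot race
with a new open.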
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 23 +++++++++++++++++++++++
kernel/trace/trace.h | 1 +
2 files changed, 24 insertions(+)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e6547ea..8ede7d1 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2612,6 +2612,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
tracing_iter_reset(iter, cpu);
}
+ tr->ref++;
+
mutex_unlock(&trace_types_lock);
return iter;
@@ -2648,6 +2650,10 @@ static int tracing_release(struct inode *inode, struct file *file)
tr = iter->tr;
mutex_lock(&trace_types_lock);
+
+ WARN_ON(!tr->ref);
+ tr->ref--;
+
for_each_tracing_cpu(cpu) {
if (iter->buffer_iter[cpu])
ring_buffer_read_finish(iter->buffer_iter[cpu]);
@@ -4431,6 +4437,10 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
if (!info)
return -ENOMEM;
+ mutex_lock(&trace_types_lock);
+
+ tr->ref++;
+
info->iter.tr = tr;
info->iter.cpu_file = tc->cpu;
info->iter.trace = tr->current_trace;
@@ -4441,6 +4451,8 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
filp->private_data = info;
+ mutex_unlock(&trace_types_lock);
+
return nonseekable_open(inode, filp);
}
@@ -4539,10 +4551,17 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
struct ftrace_buffer_info *info = file->private_data;
struct trace_iterator *iter = &info->iter;
+ mutex_lock(&trace_types_lock);
+
+ WARN_ON(!iter->tr->ref);
+ iter->tr->ref--;
+
if (info->spare)
ring_buffer_free_read_page(iter->trace_buffer->buffer, info->spare);
kfree(info);
+ mutex_unlock(&trace_types_lock);
+
return 0;
}
@@ -5375,6 +5394,10 @@ static int instance_delete(const char *name)
if (!found)
goto out_unlock;
+ ret = -EBUSY;
+ if (tr->ref)
+ goto out_unlock;
+
list_del(&tr->list);
event_trace_del_tracer(tr);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 6111933..a19459d 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -219,6 +219,7 @@ struct trace_array {
struct list_head systems;
struct list_head events;
struct task_struct *waiter;
+ int ref;
};
enum {
--
1.7.10.4
* [for-next][PATCH 15/17] tracing: Add internal tracing_snapshot() functions
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka, Peter Zijlstra
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
The new snapshot feature is quite handy. It lets the user take
advantage of the spare buffer that, until now, only the latency
tracers used to "snapshot" the buffer when it hit a max latency.
Users can trigger a "snapshot" manually when some condition is hit
in a program, but a snapshot currently can not be triggered by a
condition inside the kernel.

With the addition of tracing_snapshot() and tracing_snapshot_alloc(),
a snapshot can now be taken when a condition is hit and the developer
wants to capture the case without stopping the trace. Note, any
snapshot will overwrite the previous one, so take care in how this
is done.

These new functions are to be used like tracing_on(), tracing_off()
and trace_printk() are. That is, they should never be called in the
mainline Linux kernel; they are solely for the purpose of debugging.

tracing_snapshot() will not allocate a buffer, but it is safe to call
from any context (except NMI). If no snapshot buffer is allocated
when it is called, it will write a complaint about the missing
snapshot buffer to the live buffer and then stop tracing (giving you
a "permanent snapshot").

tracing_snapshot_alloc() will allocate the snapshot buffer if it was
not already allocated, and then take the snapshot. This routine
*may sleep* and must be called from a context that can sleep; the
allocation is done with GFP_KERNEL, not atomically.

If you need a snapshot in an atomic context, say in early boot, it is
best to call tracing_snapshot_alloc() beforehand to allocate the
buffer; then you can call tracing_snapshot() anywhere you want and
still get snapshots.
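To illustrate, a minimal sketch of how these calls might be placed
while debugging; my_driver_init() and my_suspect_error() are
hypothetical names, not part of this series:

#include <linux/init.h>
#include <linux/kernel.h>

/* Sleepable context (e.g. boot or module init): make sure the
 * snapshot buffer exists before the interesting code runs. */
static int __init my_driver_init(void)
{
	tracing_snapshot_alloc();
	return 0;
}

/* Suspect path, possibly atomic context: freeze the current trace
 * when the condition of interest fires, and keep on tracing. */
static void my_suspect_error(int err)
{
	if (err)
		tracing_snapshot();
}

Remember that a later call to tracing_snapshot() overwrites the
buffer, so the interesting snapshot is always the most recent one.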
Cc: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/kernel.h | 4 +++
kernel/trace/trace.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 88 insertions(+)
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index c566927..bc5392a 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -483,6 +483,8 @@ enum ftrace_dump_mode {
void tracing_on(void);
void tracing_off(void);
int tracing_is_on(void);
+void tracing_snapshot(void);
+void tracing_snapshot_alloc(void);
extern void tracing_start(void);
extern void tracing_stop(void);
@@ -570,6 +572,8 @@ static inline void trace_dump_stack(void) { }
static inline void tracing_on(void) { }
static inline void tracing_off(void) { }
static inline int tracing_is_on(void) { return 0; }
+static inline void tracing_snapshot(void) { }
+static inline void tracing_snapshot_alloc(void) { }
static inline __printf(1, 2)
int trace_printk(const char *fmt, ...)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8ede7d1..c15ba8b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -339,6 +339,90 @@ void tracing_on(void)
}
EXPORT_SYMBOL_GPL(tracing_on);
+#ifdef CONFIG_TRACER_SNAPSHOT
+/**
+ * tracing_snapshot - take a snapshot of the current buffer.
+ *
+ * This causes a swap between the snapshot buffer and the current live
+ * tracing buffer. You can use this to take snapshots of the live
+ * trace when some condition is triggered, but continue to trace.
+ *
+ * Note, make sure to allocate the snapshot with either
+ * a tracing_snapshot_alloc(), or by doing it manually
+ * with: echo 1 > /sys/kernel/debug/tracing/snapshot
+ *
+ * If the snapshot buffer is not allocated, it will stop tracing.
+ * Basically making a permanent snapshot.
+ */
+void tracing_snapshot(void)
+{
+ struct trace_array *tr = &global_trace;
+ struct tracer *tracer = tr->current_trace;
+ unsigned long flags;
+
+ if (!tr->allocated_snapshot) {
+ trace_printk("*** SNAPSHOT NOT ALLOCATED ***\n");
+ trace_printk("*** stopping trace here! ***\n");
+ tracing_off();
+ return;
+ }
+
+ /* Note, snapshot can not be used when the tracer uses it */
+ if (tracer->use_max_tr) {
+ trace_printk("*** LATENCY TRACER ACTIVE ***\n");
+ trace_printk("*** Can not use snapshot (sorry) ***\n");
+ return;
+ }
+
+ local_irq_save(flags);
+ update_max_tr(tr, current, smp_processor_id());
+ local_irq_restore(flags);
+}
+
+static int resize_buffer_duplicate_size(struct trace_buffer *trace_buf,
+ struct trace_buffer *size_buf, int cpu_id);
+
+/**
+ * tracing_snapshot_alloc - allocate and take a snapshot of the current buffer.
+ *
+ * This is similar to tracing_snapshot(), but it will allocate the
+ * snapshot buffer if it isn't already allocated. Use this only
+ * where it is safe to sleep, as the allocation may sleep.
+ *
+ * This causes a swap between the snapshot buffer and the current live
+ * tracing buffer. You can use this to take snapshots of the live
+ * trace when some condition is triggered, but continue to trace.
+ */
+void tracing_snapshot_alloc(void)
+{
+ struct trace_array *tr = &global_trace;
+ int ret;
+
+ if (!tr->allocated_snapshot) {
+
+ /* allocate spare buffer */
+ ret = resize_buffer_duplicate_size(&tr->max_buffer,
+ &tr->trace_buffer, RING_BUFFER_ALL_CPUS);
+ if (WARN_ON(ret < 0))
+ return;
+
+ tr->allocated_snapshot = true;
+ }
+
+ tracing_snapshot();
+}
+#else
+void tracing_snapshot(void)
+{
+ WARN_ONCE(1, "Snapshot feature not enabled, but internal snapshot used");
+}
+void tracing_snapshot_alloc(void)
+{
+ /* Give warning */
+ tracing_snapshot();
+}
+#endif /* CONFIG_TRACER_SNAPSHOT */
+
/**
* tracing_off - turn off tracing buffers
*
--
1.7.10.4
* [for-next][PATCH 16/17] ring-buffer: Do not use schedule_work_on() for current CPU
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Ring buffer updates, when done while the ring buffer is active, need
to be completed on the CPU that owns the per_cpu buffer being
updated. To accomplish this, schedule_work_on() is used to schedule
the work on the given CPU.

But there's no reason to use schedule_work_on() if the process doing
the update already happens to be running on that CPU; the requirement
is already fulfilled. Instead, just do the work directly and
continue.

This is needed for tracing_snapshot_alloc(), which may be called very
early in boot, before the workqueues have been set up.
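As a rough sketch of the pattern the patch applies, with do_update()
and the work item as hypothetical stand-ins for rb_update_pages()
and update_pages_work:

#include <linux/cpu.h>
#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static void do_update(void);	/* hypothetical per-cpu buffer update */

static void run_update_on(int cpu, struct work_struct *work)
{
	/* Pin this task so the smp_processor_id() check stays valid */
	preempt_disable();
	if (cpu == smp_processor_id() || !cpu_online(cpu)) {
		/* Already on the target CPU (or it is offline): run inline */
		do_update();
		preempt_enable();
	} else {
		/*
		 * schedule_work_on() may block on PREEMPT_RT, so preemption
		 * has to be re-enabled before calling it.
		 */
		preempt_enable();
		schedule_work_on(cpu, work);
	}
}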
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/ring_buffer.c | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 65fe2a4..d1c85c5 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1679,11 +1679,22 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
if (!cpu_buffer->nr_pages_to_update)
continue;
- if (cpu_online(cpu))
+ /* The update must run on the CPU that is being updated. */
+ preempt_disable();
+ if (cpu == smp_processor_id() || !cpu_online(cpu)) {
+ rb_update_pages(cpu_buffer);
+ cpu_buffer->nr_pages_to_update = 0;
+ } else {
+ /*
+ * Can not disable preemption for schedule_work_on()
+ * on PREEMPT_RT.
+ */
+ preempt_enable();
schedule_work_on(cpu,
&cpu_buffer->update_pages_work);
- else
- rb_update_pages(cpu_buffer);
+ preempt_disable();
+ }
+ preempt_enable();
}
/* wait for all the updates to complete */
@@ -1721,12 +1732,22 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
get_online_cpus();
- if (cpu_online(cpu_id)) {
+ preempt_disable();
+ /* The update must run on the CPU that is being updated. */
+ if (cpu_id == smp_processor_id() || !cpu_online(cpu_id))
+ rb_update_pages(cpu_buffer);
+ else {
+ /*
+ * Can not disable preemption for schedule_work_on()
+ * on PREEMPT_RT.
+ */
+ preempt_enable();
schedule_work_on(cpu_id,
&cpu_buffer->update_pages_work);
wait_for_completion(&cpu_buffer->update_done);
- } else
- rb_update_pages(cpu_buffer);
+ preempt_disable();
+ }
+ preempt_enable();
cpu_buffer->nr_pages_to_update = 0;
put_online_cpus();
--
1.7.10.4
* [for-next][PATCH 17/17] tracing: Move the tracing selftest code into its own function
From: Steven Rostedt @ 2013-03-08 3:00 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Frederic Weisbecker,
Masami Hiramatsu, David Sharp, Vaibhav Nagarnaik, hcochran,
Hiraku Toyooka
From: "Steven Rostedt (Red Hat)" <srostedt@redhat.com>
Move the tracing startup selftest code into its own function, and
when the selftests are not configured in, have that function simply
return success.
This makes register_tracer() much more readable.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/trace/trace.c | 124 ++++++++++++++++++++++++++++----------------------
1 file changed, 69 insertions(+), 55 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index c15ba8b..86a6d7a 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -817,6 +817,72 @@ static void default_wait_pipe(struct trace_iterator *iter)
ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file);
}
+#ifdef CONFIG_FTRACE_STARTUP_TEST
+static int run_tracer_selftest(struct tracer *type)
+{
+ struct trace_array *tr = &global_trace;
+ struct tracer *saved_tracer = tr->current_trace;
+ int ret;
+
+ if (!type->selftest || tracing_selftest_disabled)
+ return 0;
+
+ /*
+ * Run a selftest on this tracer.
+ * Here we reset the trace buffer, and set the current
+ * tracer to be this tracer. The tracer can then run some
+ * internal tracing to verify that everything is in order.
+ * If we fail, we do not register this tracer.
+ */
+ tracing_reset_online_cpus(&tr->trace_buffer);
+
+ tr->current_trace = type;
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (type->use_max_tr) {
+ /* If we expanded the buffers, make sure the max is expanded too */
+ if (ring_buffer_expanded)
+ ring_buffer_resize(tr->max_buffer.buffer, trace_buf_size,
+ RING_BUFFER_ALL_CPUS);
+ tr->allocated_snapshot = true;
+ }
+#endif
+
+ /* the test is responsible for initializing and enabling */
+ pr_info("Testing tracer %s: ", type->name);
+ ret = type->selftest(type, tr);
+ /* the test is responsible for resetting too */
+ tr->current_trace = saved_tracer;
+ if (ret) {
+ printk(KERN_CONT "FAILED!\n");
+ /* Add the warning after printing 'FAILED' */
+ WARN_ON(1);
+ return -1;
+ }
+ /* Only reset on passing, to avoid touching corrupted buffers */
+ tracing_reset_online_cpus(&tr->trace_buffer);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+ if (type->use_max_tr) {
+ tr->allocated_snapshot = false;
+
+ /* Shrink the max buffer again */
+ if (ring_buffer_expanded)
+ ring_buffer_resize(tr->max_buffer.buffer, 1,
+ RING_BUFFER_ALL_CPUS);
+ }
+#endif
+
+ printk(KERN_CONT "PASSED\n");
+ return 0;
+}
+#else
+static inline int run_tracer_selftest(struct tracer *type)
+{
+ return 0;
+}
+#endif /* CONFIG_FTRACE_STARTUP_TEST */
+
/**
* register_tracer - register a tracer with the ftrace system.
* @type - the plugin for the tracer
@@ -862,61 +928,9 @@ int register_tracer(struct tracer *type)
if (!type->wait_pipe)
type->wait_pipe = default_wait_pipe;
-
-#ifdef CONFIG_FTRACE_STARTUP_TEST
- if (type->selftest && !tracing_selftest_disabled) {
- struct trace_array *tr = &global_trace;
- struct tracer *saved_tracer = tr->current_trace;
-
- /*
- * Run a selftest on this tracer.
- * Here we reset the trace buffer, and set the current
- * tracer to be this tracer. The tracer can then run some
- * internal tracing to verify that everything is in order.
- * If we fail, we do not register this tracer.
- */
- tracing_reset_online_cpus(&tr->trace_buffer);
-
- tr->current_trace = type;
-
-#ifdef CONFIG_TRACER_MAX_TRACE
- if (type->use_max_tr) {
- /* If we expanded the buffers, make sure the max is expanded too */
- if (ring_buffer_expanded)
- ring_buffer_resize(tr->max_buffer.buffer, trace_buf_size,
- RING_BUFFER_ALL_CPUS);
- tr->allocated_snapshot = true;
- }
-#endif
-
- /* the test is responsible for initializing and enabling */
- pr_info("Testing tracer %s: ", type->name);
- ret = type->selftest(type, tr);
- /* the test is responsible for resetting too */
- tr->current_trace = saved_tracer;
- if (ret) {
- printk(KERN_CONT "FAILED!\n");
- /* Add the warning after printing 'FAILED' */
- WARN_ON(1);
- goto out;
- }
- /* Only reset on passing, to avoid touching corrupted buffers */
- tracing_reset_online_cpus(&tr->trace_buffer);
-
-#ifdef CONFIG_TRACER_MAX_TRACE
- if (type->use_max_tr) {
- tr->allocated_snapshot = false;
-
- /* Shrink the max buffer again */
- if (ring_buffer_expanded)
- ring_buffer_resize(tr->max_buffer.buffer, 1,
- RING_BUFFER_ALL_CPUS);
- }
-#endif
-
- printk(KERN_CONT "PASSED\n");
- }
-#endif
+ ret = run_tracer_selftest(type);
+ if (ret < 0)
+ goto out;
type->next = trace_types;
trace_types = type;
--
1.7.10.4