* [PATCH 0/7] [GIT PULL] tracing: updates
@ 2010-07-22 20:24 Steven Rostedt
2010-07-22 20:24 ` [PATCH 1/7] tracing: Allow to disable cmdline recording Steven Rostedt
` (7 more replies)
0 siblings, 8 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker
Ingo,
Please pull the latest tip/perf/core tree, which can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
tip/perf/core
Dan Carpenter (1):
trace: strlen() return doesn't account for the NULL
David Daney (1):
tracing: Fix $mcount_regex for MIPS in recordmcount.pl
KOSAKI Motohiro (1):
tracing: Shrink max latency ringbuffer if unnecessary
Lai Jiangshan (1):
tracing: Reduce latency and remove percpu trace_seq
Li Zefan (1):
tracing: Allow to disable cmdline recording
Mike Frysinger (1):
tracing/documentation: Document dynamic ftracer internals
Richard Kennedy (1):
trace: Reorder struct ring_buffer_per_cpu to remove padding on 64bit
----
Documentation/trace/ftrace-design.txt | 153 +++++++++++++++++++++++++++++++-
include/linux/ftrace.h | 5 +
include/linux/ftrace_event.h | 12 ++-
include/trace/ftrace.h | 12 +--
kernel/trace/ring_buffer.c | 2 +-
kernel/trace/trace.c | 46 ++++++++--
kernel/trace/trace.h | 4 +
kernel/trace/trace_events.c | 30 ++++++-
kernel/trace/trace_irqsoff.c | 3 +
kernel/trace/trace_output.c | 3 -
kernel/trace/trace_sched_wakeup.c | 2 +
scripts/recordmcount.pl | 2 +-
12 files changed, 241 insertions(+), 33 deletions(-)
* [PATCH 1/7] tracing: Allow to disable cmdline recording
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 2/7] trace: Reorder struct ring_buffer_per_cpu to remove padding on 64bit Steven Rostedt
` (6 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Li Zefan
[-- Attachment #1: 0001-tracing-Allow-to-disable-cmdline-recording.patch --]
[-- Type: text/plain, Size: 5313 bytes --]
From: Li Zefan <lizf@cn.fujitsu.com>
We found that even enabling a single trace event that will rarely be
triggered can add significant overhead to context switches.
(lmbench context switch test)
-------------------------------------------------
2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
------ ------ ------ ------ ------ ------- -------
2.19 2.3 2.21 2.56 2.13 2.54 2.07
2.39 2.51 2.35 2.75 2.27 2.81 2.24
The overhead is 6% ~ 11%.
This is because when a trace event is enabled, three tracepoints (sched_switch,
sched_wakeup, sched_wakeup_new) are activated to map pids to command names.
To avoid this overhead, add a trace option '(no)record-cmd' that allows
cmdline recording to be disabled.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4C2D57F4.2050204@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
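As a usage sketch (assuming debugfs is mounted at the usual
/sys/kernel/debug, so the tracing directory lives under it), the new option
is toggled like any other trace option:

	# keep events enabled but skip the pid->comm bookkeeping
	echo 0 > /sys/kernel/debug/tracing/options/record-cmd

	# re-enable cmdline recording (the default)
	echo 1 > /sys/kernel/debug/tracing/options/record-cmd

While record-cmd is off, tasks whose cmdline was never recorded will
typically show up as "<...>" in the trace output.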
include/linux/ftrace_event.h | 7 +++++--
kernel/trace/trace.c | 6 +++++-
kernel/trace/trace.h | 3 +++
kernel/trace/trace_events.c | 30 ++++++++++++++++++++++++++++--
4 files changed, 41 insertions(+), 5 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 01df7ca..2b7b139 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -152,11 +152,13 @@ extern int ftrace_event_reg(struct ftrace_event_call *event,
enum {
TRACE_EVENT_FL_ENABLED_BIT,
TRACE_EVENT_FL_FILTERED_BIT,
+ TRACE_EVENT_FL_RECORDED_CMD_BIT,
};
enum {
- TRACE_EVENT_FL_ENABLED = (1 << TRACE_EVENT_FL_ENABLED_BIT),
- TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
+ TRACE_EVENT_FL_ENABLED = (1 << TRACE_EVENT_FL_ENABLED_BIT),
+ TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
+ TRACE_EVENT_FL_RECORDED_CMD = (1 << TRACE_EVENT_FL_RECORDED_CMD_BIT),
};
struct ftrace_event_call {
@@ -174,6 +176,7 @@ struct ftrace_event_call {
* 32 bit flags:
* bit 1: enabled
* bit 2: filter_active
+ * bit 3: enabled cmd record
*
* Changes to flags must hold the event_mutex.
*
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8683dec..af90429 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -344,7 +344,7 @@ static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
/* trace_flags holds trace_options default values */
unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME |
- TRACE_ITER_GRAPH_TIME;
+ TRACE_ITER_GRAPH_TIME | TRACE_ITER_RECORD_CMD;
static int trace_stop_count;
static DEFINE_SPINLOCK(tracing_start_lock);
@@ -428,6 +428,7 @@ static const char *trace_options[] = {
"latency-format",
"sleep-time",
"graph-time",
+ "record-cmd",
NULL
};
@@ -2561,6 +2562,9 @@ static void set_tracer_flags(unsigned int mask, int enabled)
trace_flags |= mask;
else
trace_flags &= ~mask;
+
+ if (mask == TRACE_ITER_RECORD_CMD)
+ trace_event_enable_cmd_record(enabled);
}
static ssize_t
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 84d3f12..7778f06 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -591,6 +591,7 @@ enum trace_iterator_flags {
TRACE_ITER_LATENCY_FMT = 0x20000,
TRACE_ITER_SLEEP_TIME = 0x40000,
TRACE_ITER_GRAPH_TIME = 0x80000,
+ TRACE_ITER_RECORD_CMD = 0x100000,
};
/*
@@ -723,6 +724,8 @@ filter_check_discard(struct ftrace_event_call *call, void *rec,
return 0;
}
+extern void trace_event_enable_cmd_record(bool enable);
+
extern struct mutex event_mutex;
extern struct list_head ftrace_events;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index e8e6043..09b4fa6 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -170,6 +170,26 @@ int ftrace_event_reg(struct ftrace_event_call *call, enum trace_reg type)
}
EXPORT_SYMBOL_GPL(ftrace_event_reg);
+void trace_event_enable_cmd_record(bool enable)
+{
+ struct ftrace_event_call *call;
+
+ mutex_lock(&event_mutex);
+ list_for_each_entry(call, &ftrace_events, list) {
+ if (!(call->flags & TRACE_EVENT_FL_ENABLED))
+ continue;
+
+ if (enable) {
+ tracing_start_cmdline_record();
+ call->flags |= TRACE_EVENT_FL_RECORDED_CMD;
+ } else {
+ tracing_stop_cmdline_record();
+ call->flags &= ~TRACE_EVENT_FL_RECORDED_CMD;
+ }
+ }
+ mutex_unlock(&event_mutex);
+}
+
static int ftrace_event_enable_disable(struct ftrace_event_call *call,
int enable)
{
@@ -179,13 +199,19 @@ static int ftrace_event_enable_disable(struct ftrace_event_call *call,
case 0:
if (call->flags & TRACE_EVENT_FL_ENABLED) {
call->flags &= ~TRACE_EVENT_FL_ENABLED;
- tracing_stop_cmdline_record();
+ if (call->flags & TRACE_EVENT_FL_RECORDED_CMD) {
+ tracing_stop_cmdline_record();
+ call->flags &= ~TRACE_EVENT_FL_RECORDED_CMD;
+ }
call->class->reg(call, TRACE_REG_UNREGISTER);
}
break;
case 1:
if (!(call->flags & TRACE_EVENT_FL_ENABLED)) {
- tracing_start_cmdline_record();
+ if (trace_flags & TRACE_ITER_RECORD_CMD) {
+ tracing_start_cmdline_record();
+ call->flags |= TRACE_EVENT_FL_RECORDED_CMD;
+ }
ret = call->class->reg(call, TRACE_REG_REGISTER);
if (ret) {
tracing_stop_cmdline_record();
--
1.7.1
* [PATCH 2/7] trace: Reorder struct ring_buffer_per_cpu to remove padding on 64bit
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
2010-07-22 20:24 ` [PATCH 1/7] tracing: Allow to disable cmdline recording Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 3/7] tracing: Reduce latency and remove percpu trace_seq Steven Rostedt
` (5 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Richard Kennedy
[-- Attachment #1: 0002-trace-Reorder-struct-ring_buffer_per_cpu-to-remove-p.patch --]
[-- Type: text/plain, Size: 1093 bytes --]
From: Richard Kennedy <richard@rsk.demon.co.uk>
Reorder the structure to remove 8 bytes of padding on 64 bit builds.
This shrinks the structure to 128 bytes, allowing it to be allocated from
a smaller slab and to touch one fewer cache line.
Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
LKML-Reference: <1269516456.2054.8.camel@localhost>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
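The general rule the patch exploits: on 64-bit, the compiler pads each
member to its natural alignment, so interleaving 4-byte and 8-byte fields
leaves holes. A minimal sketch (illustrative only, not the actual
ring_buffer_per_cpu layout):

	struct before {			/* sizeof == 24 on 64-bit */
		int	cpu;		/* 4 bytes */
					/* 4 bytes of padding */
		void	*buffer;	/* 8 bytes */
		int	disabled;	/* 4 bytes */
					/* 4 bytes of tail padding */
	};

	struct after {			/* sizeof == 16 on 64-bit */
		int	cpu;		/* 4 bytes */
		int	disabled;	/* 4 bytes, fills the hole */
		void	*buffer;	/* 8 bytes */
	};

Moving the 4-byte atomic_t record_disabled up next to the int cpu member
fills the hole after it, which is what the hunk below does.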
kernel/trace/ring_buffer.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 28d0615..3632ce8 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -443,6 +443,7 @@ int ring_buffer_print_page_header(struct trace_seq *s)
*/
struct ring_buffer_per_cpu {
int cpu;
+ atomic_t record_disabled;
struct ring_buffer *buffer;
spinlock_t reader_lock; /* serialize readers */
arch_spinlock_t lock;
@@ -462,7 +463,6 @@ struct ring_buffer_per_cpu {
unsigned long read;
u64 write_stamp;
u64 read_stamp;
- atomic_t record_disabled;
};
struct ring_buffer {
--
1.7.1
* [PATCH 3/7] tracing: Reduce latency and remove percpu trace_seq
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
2010-07-22 20:24 ` [PATCH 1/7] tracing: Allow to disable cmdline recording Steven Rostedt
2010-07-22 20:24 ` [PATCH 2/7] trace: Reorder struct ring_buffer_per_cpu to remove padding on 64bit Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 4/7] tracing: Shrink max latency ringbuffer if unnecessary Steven Rostedt
` (4 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Lai Jiangshan
[-- Attachment #1: 0003-tracing-Reduce-latency-and-remove-percpu-trace_seq.patch --]
[-- Type: text/plain, Size: 4208 bytes --]
From: Lai Jiangshan <laijs@cn.fujitsu.com>
__print_flags() and __print_symbolic() use a percpu trace_seq, which has
several drawbacks:
1) Its memory is allocated at compile time, so it wastes memory even when
   tracing is not in use.
2) Being percpu data, it wastes even more memory on multi-cpu systems.
3) It disables preemption while executing its core routine
   "trace_seq_printf(s, "%s: ", #call);", which introduces latency.
So move this trace_seq into struct trace_iterator instead.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
LKML-Reference: <4C078350.7090106@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
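Condensed from the macro expansion (not a literal hunk), the change to the
output path looks roughly like this:

	/* before: per-cpu scratch buffer, preemption disabled around it */
	struct trace_seq *p = &get_cpu_var(ftrace_event_seq);
	trace_seq_init(p);
	ret = trace_seq_printf(s, "%s: ", event->name);
	/* ... */
	put_cpu();

	/* after: scratch buffer lives in the iterator, no preemption games */
	struct trace_seq *p = &iter->tmp_seq;
	trace_seq_init(p);
	ret = trace_seq_printf(s, "%s: ", event->name);

Since there is roughly one trace_iterator per reader rather than one
trace_seq per cpu, the static memory cost goes away as well.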
include/linux/ftrace_event.h | 5 +++--
include/trace/ftrace.h | 12 +++---------
kernel/trace/trace_output.c | 3 ---
3 files changed, 6 insertions(+), 14 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 2b7b139..02b8b24 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -11,8 +11,6 @@ struct trace_array;
struct tracer;
struct dentry;
-DECLARE_PER_CPU(struct trace_seq, ftrace_event_seq);
-
struct trace_print_flags {
unsigned long mask;
const char *name;
@@ -58,6 +56,9 @@ struct trace_iterator {
struct ring_buffer_iter *buffer_iter[NR_CPUS];
unsigned long iter_flags;
+ /* trace_seq for __print_flags() and __print_symbolic() etc. */
+ struct trace_seq tmp_seq;
+
/* The below is zeroed out in pipe_read */
struct trace_seq seq;
struct trace_entry *ent;
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 55c1fd1..fb783d9 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -145,7 +145,7 @@
* struct trace_seq *s = &iter->seq;
* struct ftrace_raw_<call> *field; <-- defined in stage 1
* struct trace_entry *entry;
- * struct trace_seq *p;
+ * struct trace_seq *p = &iter->tmp_seq;
* int ret;
*
* entry = iter->ent;
@@ -157,12 +157,10 @@
*
* field = (typeof(field))entry;
*
- * p = &get_cpu_var(ftrace_event_seq);
* trace_seq_init(p);
* ret = trace_seq_printf(s, "%s: ", <call>);
* if (ret)
* ret = trace_seq_printf(s, <TP_printk> "\n");
- * put_cpu();
* if (!ret)
* return TRACE_TYPE_PARTIAL_LINE;
*
@@ -216,7 +214,7 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
struct trace_seq *s = &iter->seq; \
struct ftrace_raw_##call *field; \
struct trace_entry *entry; \
- struct trace_seq *p; \
+ struct trace_seq *p = &iter->tmp_seq; \
int ret; \
\
event = container_of(trace_event, struct ftrace_event_call, \
@@ -231,12 +229,10 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
\
field = (typeof(field))entry; \
\
- p = &get_cpu_var(ftrace_event_seq); \
trace_seq_init(p); \
ret = trace_seq_printf(s, "%s: ", event->name); \
if (ret) \
ret = trace_seq_printf(s, print); \
- put_cpu(); \
if (!ret) \
return TRACE_TYPE_PARTIAL_LINE; \
\
@@ -255,7 +251,7 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
struct trace_seq *s = &iter->seq; \
struct ftrace_raw_##template *field; \
struct trace_entry *entry; \
- struct trace_seq *p; \
+ struct trace_seq *p = &iter->tmp_seq; \
int ret; \
\
entry = iter->ent; \
@@ -267,12 +263,10 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
\
field = (typeof(field))entry; \
\
- p = &get_cpu_var(ftrace_event_seq); \
trace_seq_init(p); \
ret = trace_seq_printf(s, "%s: ", #call); \
if (ret) \
ret = trace_seq_printf(s, print); \
- put_cpu(); \
if (!ret) \
return TRACE_TYPE_PARTIAL_LINE; \
\
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 57c1b45..1ba64d3 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -16,9 +16,6 @@
DECLARE_RWSEM(trace_event_mutex);
-DEFINE_PER_CPU(struct trace_seq, ftrace_event_seq);
-EXPORT_PER_CPU_SYMBOL(ftrace_event_seq);
-
static struct hlist_head event_hash[EVENT_HASHSIZE] __read_mostly;
static int next_event_type = __TRACE_LAST_TYPE + 1;
--
1.7.1
* [PATCH 4/7] tracing: Shrink max latency ringbuffer if unnecessary
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
` (2 preceding siblings ...)
2010-07-22 20:24 ` [PATCH 3/7] tracing: Reduce latency and remove percpu trace_seq Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 5/7] tracing/documentation: Document dynamic ftracer internals Steven Rostedt
` (3 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, KOSAKI Motohiro
[-- Attachment #1: 0004-tracing-Shrink-max-latency-ringbuffer-if-unnecessary.patch --]
[-- Type: text/plain, Size: 6654 bytes --]
From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Documentation/trace/ftrace.txt says
buffer_size_kb:
This sets or displays the number of kilobytes each CPU
buffer can hold. The tracer buffers are the same size
for each CPU. The displayed number is the size of the
CPU buffer and not total size of all buffers. The
trace buffers are allocated in pages (blocks of memory
that the kernel uses for allocation, usually 4 KB in size).
If the last page allocated has room for more bytes
than requested, the rest of the page will be used,
making the actual allocation bigger than requested.
( Note, the size may not be a multiple of the page size
due to buffer management overhead. )
This can only be updated when the current_tracer
is set to "nop".
But this is not quite accurate: the total memory consumption is actually
'buffer_size_kb x CPUs x 2'.
Why the factor of two? Because ftrace implicitly allocates a second buffer
of the same size for the max latency tracers.
That is an unpleasant surprise when an admin wants a large buffer for full
logging and detailed analysis. For example, on a 24 CPU machine, writing
200MB to buffer_size_kb makes the system consume ~10GB of memory
(200MB x 24 x 2), and wasting ~5GB of that is usually unacceptable.
Fortunately, almost all users never touch the max latency feature, so the
max latency buffer can easily be kept small.
This patch shrinks the max latency ring buffer whenever it is not needed.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
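A rough picture of the resulting behaviour (paths assume the usual debugfs
mount point):

	# with current_tracer == nop, only the main buffer is resized;
	# max_tr stays at a single entry
	echo 200000 > /sys/kernel/debug/tracing/buffer_size_kb

	# selecting a latency tracer grows max_tr to match ...
	echo irqsoff > /sys/kernel/debug/tracing/current_tracer

	# ... and switching back to nop shrinks it again
	echo nop > /sys/kernel/debug/tracing/current_tracer

So the 'x 2' memory cost is only paid while one of the max-latency tracers
(irqsoff, preemptoff, preemptirqsoff, wakeup, wakeup_rt) is in use.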
kernel/trace/trace.c | 38 +++++++++++++++++++++++++++++++-----
kernel/trace/trace.h | 1 +
kernel/trace/trace_irqsoff.c | 3 ++
kernel/trace/trace_sched_wakeup.c | 2 +
4 files changed, 38 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index af90429..f7488f4 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -660,6 +660,10 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
return;
WARN_ON_ONCE(!irqs_disabled());
+ if (!current_trace->use_max_tr) {
+ WARN_ON_ONCE(1);
+ return;
+ }
arch_spin_lock(&ftrace_max_lock);
tr->buffer = max_tr.buffer;
@@ -686,6 +690,11 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
return;
WARN_ON_ONCE(!irqs_disabled());
+ if (!current_trace->use_max_tr) {
+ WARN_ON_ONCE(1);
+ return;
+ }
+
arch_spin_lock(&ftrace_max_lock);
ftrace_disable_cpu();
@@ -2801,6 +2810,9 @@ static int tracing_resize_ring_buffer(unsigned long size)
if (ret < 0)
return ret;
+ if (!current_trace->use_max_tr)
+ goto out;
+
ret = ring_buffer_resize(max_tr.buffer, size);
if (ret < 0) {
int r;
@@ -2828,11 +2840,14 @@ static int tracing_resize_ring_buffer(unsigned long size)
return ret;
}
+ max_tr.entries = size;
+ out:
global_trace.entries = size;
return ret;
}
+
/**
* tracing_update_buffers - used by tracing facility to expand ring buffers
*
@@ -2893,12 +2908,26 @@ static int tracing_set_tracer(const char *buf)
trace_branch_disable();
if (current_trace && current_trace->reset)
current_trace->reset(tr);
-
+ if (current_trace && current_trace->use_max_tr) {
+ /*
+ * We don't free the ring buffer. instead, resize it because
+ * The max_tr ring buffer has some state (e.g. ring->clock) and
+ * we want preserve it.
+ */
+ ring_buffer_resize(max_tr.buffer, 1);
+ max_tr.entries = 1;
+ }
destroy_trace_option_files(topts);
current_trace = t;
topts = create_trace_option_files(current_trace);
+ if (current_trace->use_max_tr) {
+ ret = ring_buffer_resize(max_tr.buffer, global_trace.entries);
+ if (ret < 0)
+ goto out;
+ max_tr.entries = global_trace.entries;
+ }
if (t->init) {
ret = tracer_init(t, tr);
@@ -3480,7 +3509,6 @@ tracing_entries_write(struct file *filp, const char __user *ubuf,
}
tracing_start();
- max_tr.entries = global_trace.entries;
mutex_unlock(&trace_types_lock);
return cnt;
@@ -4578,16 +4606,14 @@ __init static int tracer_alloc_buffers(void)
#ifdef CONFIG_TRACER_MAX_TRACE
- max_tr.buffer = ring_buffer_alloc(ring_buf_size,
- TRACE_BUFFER_FLAGS);
+ max_tr.buffer = ring_buffer_alloc(1, TRACE_BUFFER_FLAGS);
if (!max_tr.buffer) {
printk(KERN_ERR "tracer: failed to allocate max ring buffer!\n");
WARN_ON(1);
ring_buffer_free(global_trace.buffer);
goto out_free_cpumask;
}
- max_tr.entries = ring_buffer_size(max_tr.buffer);
- WARN_ON(max_tr.entries != global_trace.entries);
+ max_tr.entries = 1;
#endif
/* Allocate the first page for all buffers */
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 7778f06..cb629b3 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -276,6 +276,7 @@ struct tracer {
struct tracer *next;
int print_max;
struct tracer_flags *flags;
+ int use_max_tr;
};
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 6fd486e..73a6b06 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -649,6 +649,7 @@ static struct tracer irqsoff_tracer __read_mostly =
#endif
.open = irqsoff_trace_open,
.close = irqsoff_trace_close,
+ .use_max_tr = 1,
};
# define register_irqsoff(trace) register_tracer(&trace)
#else
@@ -681,6 +682,7 @@ static struct tracer preemptoff_tracer __read_mostly =
#endif
.open = irqsoff_trace_open,
.close = irqsoff_trace_close,
+ .use_max_tr = 1,
};
# define register_preemptoff(trace) register_tracer(&trace)
#else
@@ -715,6 +717,7 @@ static struct tracer preemptirqsoff_tracer __read_mostly =
#endif
.open = irqsoff_trace_open,
.close = irqsoff_trace_close,
+ .use_max_tr = 1,
};
# define register_preemptirqsoff(trace) register_tracer(&trace)
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index c9fd5bd..4086eae 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -382,6 +382,7 @@ static struct tracer wakeup_tracer __read_mostly =
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
+ .use_max_tr = 1,
};
static struct tracer wakeup_rt_tracer __read_mostly =
@@ -396,6 +397,7 @@ static struct tracer wakeup_rt_tracer __read_mostly =
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
+ .use_max_tr = 1,
};
__init static int init_wakeup_tracer(void)
--
1.7.1
* [PATCH 5/7] tracing/documentation: Document dynamic ftracer internals
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
` (3 preceding siblings ...)
2010-07-22 20:24 ` [PATCH 4/7] tracing: Shrink max latency ringbuffer if unnecessary Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 6/7] tracing: Fix $mcount_regex for MIPS in recordmcount.pl Steven Rostedt
` (2 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Mike Frysinger
[-- Attachment #1: 0005-tracing-documentation-Document-dynamic-ftracer-inter.patch --]
[-- Type: text/plain, Size: 8218 bytes --]
From: Mike Frysinger <vapier@gentoo.org>
Add more details to the dynamic function tracing design implementation.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
LKML-Reference: <1279610015-10250-1-git-send-email-vapier@gentoo.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
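For quick reference while reading the new text, the arch hooks it walks
through have roughly these prototypes (check include/linux/ftrace.h in your
tree for the authoritative versions):

	/* patch an mcount call site into a nop */
	int ftrace_make_nop(struct module *mod,
			    struct dyn_ftrace *rec, unsigned long addr);

	/* patch an mcount call site into a call to addr */
	int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr);

	/* repoint the call inside ftrace_caller() at the active tracer */
	int ftrace_update_ftrace_func(ftrace_func_t func);

	/* one-time arch init; result passed back through *data */
	int ftrace_dyn_arch_init(void *data);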
Documentation/trace/ftrace-design.txt | 153 +++++++++++++++++++++++++++++++-
include/linux/ftrace.h | 5 +
2 files changed, 153 insertions(+), 5 deletions(-)
diff --git a/Documentation/trace/ftrace-design.txt b/Documentation/trace/ftrace-design.txt
index f1f81af..dc52bd4 100644
--- a/Documentation/trace/ftrace-design.txt
+++ b/Documentation/trace/ftrace-design.txt
@@ -13,6 +13,9 @@ Note that this focuses on architecture implementation details only. If you
want more explanation of a feature in terms of common code, review the common
ftrace.txt file.
+Ideally, everyone who wishes to retain performance while supporting tracing in
+their kernel should make it all the way to dynamic ftrace support.
+
Prerequisites
-------------
@@ -215,7 +218,7 @@ An arch may pass in a unique value (frame pointer) to both the entering and
exiting of a function. On exit, the value is compared and if it does not
match, then it will panic the kernel. This is largely a sanity check for bad
code generation with gcc. If gcc for your port sanely updates the frame
-pointer under different opitmization levels, then ignore this option.
+pointer under different optimization levels, then ignore this option.
However, adding support for it isn't terribly difficult. In your assembly code
that calls prepare_ftrace_return(), pass the frame pointer as the 3rd argument.
@@ -234,7 +237,7 @@ If you can't trace NMI functions, then skip this option.
HAVE_SYSCALL_TRACEPOINTS
----------------------
+------------------------
You need very few things to get the syscalls tracing in an arch.
@@ -250,12 +253,152 @@ You need very few things to get the syscalls tracing in an arch.
HAVE_FTRACE_MCOUNT_RECORD
-------------------------
-See scripts/recordmcount.pl for more info.
+See scripts/recordmcount.pl for more info. Just fill in the arch-specific
+details for how to locate the addresses of mcount call sites via objdump.
+This option doesn't make much sense without also implementing dynamic ftrace.
+
+HAVE_DYNAMIC_FTRACE
+-------------------
+
+You will first need HAVE_FTRACE_MCOUNT_RECORD and HAVE_FUNCTION_TRACER, so
+scroll your reader back up if you got over eager.
+
+Once those are out of the way, you will need to implement:
+ - asm/ftrace.h:
+ - MCOUNT_ADDR
+ - ftrace_call_adjust()
+ - struct dyn_arch_ftrace{}
+ - asm code:
+ - mcount() (new stub)
+ - ftrace_caller()
+ - ftrace_call()
+ - ftrace_stub()
+ - C code:
+ - ftrace_dyn_arch_init()
+ - ftrace_make_nop()
+ - ftrace_make_call()
+ - ftrace_update_ftrace_func()
+
+First you will need to fill out some arch details in your asm/ftrace.h.
+
+Define MCOUNT_ADDR as the address of your mcount symbol similar to:
+ #define MCOUNT_ADDR ((unsigned long)mcount)
+Since no one else will have a decl for that function, you will need to:
+ extern void mcount(void);
+
+You will also need the helper function ftrace_call_adjust(). Most people
+will be able to stub it out like so:
+ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+ {
+ return addr;
+ }
<details to be filled>
+Lastly you will need the custom dyn_arch_ftrace structure. If you need
+some extra state when runtime patching arbitrary call sites, this is the
+place. For now though, create an empty struct:
+ struct dyn_arch_ftrace {
+ /* No extra data needed */
+ };
+
+With the header out of the way, we can fill out the assembly code. While we
+did already create a mcount() function earlier, dynamic ftrace only wants a
+stub function. This is because the mcount() will only be used during boot
+and then all references to it will be patched out never to return. Instead,
+the guts of the old mcount() will be used to create a new ftrace_caller()
+function. Because the two are hard to merge, it will most likely be a lot
+easier to have two separate definitions split up by #ifdefs. Same goes for
+the ftrace_stub() as that will now be inlined in ftrace_caller().
+
+Before we get confused anymore, let's check out some pseudo code so you can
+implement your own stuff in assembly:
-HAVE_DYNAMIC_FTRACE
----------------------
+void mcount(void)
+{
+ return;
+}
+
+void ftrace_caller(void)
+{
+ /* implement HAVE_FUNCTION_TRACE_MCOUNT_TEST if you desire */
+
+ /* save all state needed by the ABI (see paragraph above) */
+
+ unsigned long frompc = ...;
+ unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
+
+ftrace_call:
+ ftrace_stub(frompc, selfpc);
+
+ /* restore all state needed by the ABI */
+
+ftrace_stub:
+ return;
+}
+
+This might look a little odd at first, but keep in mind that we will be runtime
+patching multiple things. First, only functions that we actually want to trace
+will be patched to call ftrace_caller(). Second, since we only have one tracer
+active at a time, we will patch the ftrace_caller() function itself to call the
+specific tracer in question. That is the point of the ftrace_call label.
+
+With that in mind, let's move on to the C code that will actually be doing the
+runtime patching. You'll need a little knowledge of your arch's opcodes in
+order to make it through the next section.
+
+Every arch has an init callback function. If you need to do something early on
+to initialize some state, this is the time to do that. Otherwise, this simple
+function below should be sufficient for most people:
+
+int __init ftrace_dyn_arch_init(void *data)
+{
+ /* return value is done indirectly via data */
+ *(unsigned long *)data = 0;
+
+ return 0;
+}
+
+There are two functions that are used to do runtime patching of arbitrary
+functions. The first is used to turn the mcount call site into a nop (which
+is what helps us retain runtime performance when not tracing). The second is
+used to turn the mcount call site into a call to an arbitrary location (but
+typically that is ftracer_caller()). See the general function definition in
+linux/ftrace.h for the functions:
+ ftrace_make_nop()
+ ftrace_make_call()
+The rec->ip value is the address of the mcount call site that was collected
+by the scripts/recordmcount.pl during build time.
+
+The last function is used to do runtime patching of the active tracer. This
+will be modifying the assembly code at the location of the ftrace_call symbol
+inside of the ftrace_caller() function. So you should have sufficient padding
+at that location to support the new function calls you'll be inserting. Some
+people will be using a "call" type instruction while others will be using a
+"branch" type instruction. Specifically, the function is:
+ ftrace_update_ftrace_func()
+
+
+HAVE_DYNAMIC_FTRACE + HAVE_FUNCTION_GRAPH_TRACER
+------------------------------------------------
+
+The function grapher needs a few tweaks in order to work with dynamic ftrace.
+Basically, you will need to:
+ - update:
+ - ftrace_caller()
+ - ftrace_graph_call()
+ - ftrace_graph_caller()
+ - implement:
+ - ftrace_enable_ftrace_graph_caller()
+ - ftrace_disable_ftrace_graph_caller()
<details to be filled>
+Quick notes:
+ - add a nop stub after the ftrace_call location named ftrace_graph_call;
+ stub needs to be large enough to support a call to ftrace_graph_caller()
+ - update ftrace_graph_caller() to work with being called by the new
+ ftrace_caller() since some semantics may have changed
+ - ftrace_enable_ftrace_graph_caller() will runtime patch the
+ ftrace_graph_call location with a call to ftrace_graph_caller()
+ - ftrace_disable_ftrace_graph_caller() will runtime patch the
+ ftrace_graph_call location with nops
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 41e4633..dcd6a7c 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1,3 +1,8 @@
+/*
+ * Ftrace header. For implementation details beyond the random comments
+ * scattered below, see: Documentation/trace/ftrace-design.txt
+ */
+
#ifndef _LINUX_FTRACE_H
#define _LINUX_FTRACE_H
--
1.7.1
* [PATCH 6/7] tracing: Fix $mcount_regex for MIPS in recordmcount.pl
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
` (4 preceding siblings ...)
2010-07-22 20:24 ` [PATCH 5/7] tracing/documentation: Document dynamic ftracer internals Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-22 20:24 ` [PATCH 7/7] trace: strlen() return doesn't account for the NULL Steven Rostedt
2010-07-23 7:12 ` [PATCH 0/7] [GIT PULL] tracing: updates Ingo Molnar
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Ralf Baechle,
Wu Zhangjin, David Daney, Li Hong, Matt Fleming
[-- Attachment #1: 0006-tracing-Fix-mcount_regex-for-MIPS-in-recordmcount.pl.patch --]
[-- Type: text/plain, Size: 1969 bytes --]
From: David Daney <ddaney@caviumnetworks.com>
I found this issue in a locally patched 2.6.32.x kernel; current kernels
have moved the offending code to an __init function, which is skipped by
recordmcount.pl, so the bug is not currently being exercised.
However, I think the patch is still a good idea, to avoid future
problems if _mcount were to ever have its address taken in normal
code.
This is what I originally saw:
Although arch/mips/kernel/ftrace.c is built without -pg, and thus
contains no calls to _mcount, it does use the address of _mcount
in ftrace_make_nop(). This was causing relocations to be emitted
for _mcount which recordmcount.pl erronously took to be _mcount
call sites. The result was that the text of ftrace_make_nop()
would be patched with garbage leading to a system crash.
In non-module code, all _mcount call sites will have R_MIPS_26
relocations, so we restrict $mcount_regex to only match on these.
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Wu Zhangjin <wuzhangjin@gmail.com>
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
LKML-Reference: <1278712325-12050-1-git-send-email-ddaney@caviumnetworks.com>
Cc: Li Hong <lihong.hi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
scripts/recordmcount.pl | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
index f3c9c0a..0171060 100755
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -326,7 +326,7 @@ if ($arch eq "x86_64") {
# 14: R_MIPS_NONE *ABS*
# 18: 00020021 nop
if ($is_module eq "0") {
- $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s_mcount\$";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+): R_MIPS_26\\s+_mcount\$";
} else {
$mcount_regex = "^\\s*([0-9a-fA-F]+): R_MIPS_HI16\\s+_mcount\$";
}
--
1.7.1
* [PATCH 7/7] trace: strlen() return doesn't account for the NULL
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
` (5 preceding siblings ...)
2010-07-22 20:24 ` [PATCH 6/7] tracing: Fix $mcount_regex for MIPS in recordmcount.pl Steven Rostedt
@ 2010-07-22 20:24 ` Steven Rostedt
2010-07-23 7:12 ` [PATCH 0/7] [GIT PULL] tracing: updates Ingo Molnar
7 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2010-07-22 20:24 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Dan Carpenter
[-- Attachment #1: 0007-trace-strlen-return-doesn-t-account-for-the-NULL.patch --]
[-- Type: text/plain, Size: 919 bytes --]
From: Dan Carpenter <error27@gmail.com>
We need to add one to the strlen() return because of the NULL
character. The type->name here generally comes from the kernel, and I
don't think any of them come close to being MAX_TRACER_SIZE (100)
characters long, so this is basically a cleanup.
Signed-off-by: Dan Carpenter <error27@gmail.com>
LKML-Reference: <20100710100644.GV19184@bicker>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
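The off-by-one in question, sketched with an illustrative buffer (not code
from the tracer itself):

	char buf[MAX_TRACER_SIZE];	/* 100 bytes */

	/* strlen() does not count the terminating NUL, so a name of
	 * exactly 100 characters passes a '>' check but needs 101 bytes
	 * once the '\0' is copied in -- hence the '>=' below. */
	if (strlen(name) >= MAX_TRACER_SIZE)
		return -1;
	strcpy(buf, name);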
kernel/trace/trace.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index f7488f4..cacb6f0 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -739,7 +739,7 @@ __acquires(kernel_lock)
return -1;
}
- if (strlen(type->name) > MAX_TRACER_SIZE) {
+ if (strlen(type->name) >= MAX_TRACER_SIZE) {
pr_info("Tracer has a name longer than %d\n", MAX_TRACER_SIZE);
return -1;
}
--
1.7.1
* Re: [PATCH 0/7] [GIT PULL] tracing: updates
2010-07-22 20:24 [PATCH 0/7] [GIT PULL] tracing: updates Steven Rostedt
` (6 preceding siblings ...)
2010-07-22 20:24 ` [PATCH 7/7] trace: strlen() return doesn't account for the NULL Steven Rostedt
@ 2010-07-23 7:12 ` Ingo Molnar
7 siblings, 0 replies; 9+ messages in thread
From: Ingo Molnar @ 2010-07-23 7:12 UTC (permalink / raw)
To: Steven Rostedt; +Cc: linux-kernel, Andrew Morton, Frederic Weisbecker
* Steven Rostedt <rostedt@goodmis.org> wrote:
>
> Ingo,
>
> Please pull the latest tip/perf/core tree, which can be found at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
> tip/perf/core
>
>
> Dan Carpenter (1):
> trace: strlen() return doesn't account for the NULL
>
> David Daney (1):
> tracing: Fix $mcount_regex for MIPS in recordmcount.pl
>
> KOSAKI Motohiro (1):
> tracing: Shrink max latency ringbuffer if unnecessary
>
> Lai Jiangshan (1):
> tracing: Reduce latency and remove percpu trace_seq
>
> Li Zefan (1):
> tracing: Allow to disable cmdline recording
>
> Mike Frysinger (1):
> tracing/documentation: Document dynamic ftracer internals
>
> Richard Kennedy (1):
> trace: Reorder struct ring_buffer_per_cpu to remove padding on 64bit
>
> ----
> Documentation/trace/ftrace-design.txt | 153 +++++++++++++++++++++++++++++++-
> include/linux/ftrace.h | 5 +
> include/linux/ftrace_event.h | 12 ++-
> include/trace/ftrace.h | 12 +--
> kernel/trace/ring_buffer.c | 2 +-
> kernel/trace/trace.c | 46 ++++++++--
> kernel/trace/trace.h | 4 +
> kernel/trace/trace_events.c | 30 ++++++-
> kernel/trace/trace_irqsoff.c | 3 +
> kernel/trace/trace_output.c | 3 -
> kernel/trace/trace_sched_wakeup.c | 2 +
> scripts/recordmcount.pl | 2 +-
> 12 files changed, 241 insertions(+), 33 deletions(-)
Pulled, thanks a lot Steve!
Ingo