* [PATCH 0/5] ftrace clean ups
@ 2008-05-22 4:22 Steven Rostedt
2008-05-22 4:22 ` [PATCH 1/5] ftrace: limit use of check pages Steven Rostedt
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra
This patch series contains various ftrace clean ups.
-- Steve
* [PATCH 1/5] ftrace: limit use of check pages
2008-05-22 4:22 [PATCH 0/5] ftrace clean ups Steven Rostedt
@ 2008-05-22 4:22 ` Steven Rostedt
2008-05-22 17:38 ` Pekka Paalanen
2008-05-27 15:34 ` [PATCH 1/5] ftrace: limit use of check pages Thomas Gleixner
2008-05-22 4:22 ` [PATCH 2/5] ftrace: remove wakeup functions from sched_switch tracer Steven Rostedt
` (3 subsequent siblings)
4 siblings, 2 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, Steven Rostedt
[-- Attachment #1: ftrace-limit-checkpages.patch --]
[-- Type: text/plain, Size: 3359 bytes --]
The check_pages function is called often enough that it can cause problems
with trace outputs or even bring the system to a halt.
This patch limits check_pages to the places that are most likely to
have problems. The check is made at the flip between the global array and
the max save array, as well as when the size of the buffers changes and
during the self tests.
This patch also removes the BUG_ON from check_pages and replaces it with
a WARN_ON and disabling of the tracer.
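The warn-and-disable pattern the patch introduces can be sketched in plain userspace C (a simplified, hypothetical sketch: `tracing_disabled`, `CHECK_COND`, and the list layout mirror the kernel names but nothing here is the kernel's actual code). The point is that a corrupted list now disables tracing and reports an error instead of halting the machine with BUG_ON:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's circular doubly linked list. */
struct list_head {
	struct list_head *next, *prev;
};

static int tracing_disabled;

/* On corruption: disable tracing and bail out, rather than BUG(). */
#define CHECK_COND(cond)		\
	do {				\
		if (cond) {		\
			tracing_disabled = 1;	\
			return -1;	\
		}			\
	} while (0)

/* Verify the circular list is consistent in both directions. */
static int check_list(struct list_head *head)
{
	struct list_head *p;

	CHECK_COND(head->next->prev != head);
	CHECK_COND(head->prev->next != head);

	for (p = head->next; p != head; p = p->next)
		CHECK_COND(p->next->prev != p);

	return 0;
}
```

A caller that sees -1 can then report the failure (the patch returns -ENOMEM from the write handler) while the rest of the system keeps running.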
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace.c | 32 +++++++++++++++++++++++---------
kernel/trace/trace_selftest.c | 1 +
2 files changed, 24 insertions(+), 9 deletions(-)
Index: linux-tip.git/kernel/trace/trace.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace.c 2008-05-21 13:25:34.000000000 -0400
+++ linux-tip.git/kernel/trace/trace.c 2008-05-21 21:32:23.000000000 -0400
@@ -249,24 +249,32 @@ __update_max_tr(struct trace_array *tr,
tracing_record_cmdline(current);
}
+#define CHECK_COND(cond) \
+ if (unlikely(cond)) { \
+ tracing_disabled = 0; \
+ WARN_ON(1); \
+ return -1; \
+ }
+
/**
* check_pages - integrity check of trace buffers
*
* As a safty measure we check to make sure the data pages have not
- * been corrupted. TODO: configure to disable this because it adds
- * a bit of overhead.
+ * been corrupted.
*/
-void check_pages(struct trace_array_cpu *data)
+int check_pages(struct trace_array_cpu *data)
{
struct page *page, *tmp;
- BUG_ON(data->trace_pages.next->prev != &data->trace_pages);
- BUG_ON(data->trace_pages.prev->next != &data->trace_pages);
+ CHECK_COND(data->trace_pages.next->prev != &data->trace_pages);
+ CHECK_COND(data->trace_pages.prev->next != &data->trace_pages);
list_for_each_entry_safe(page, tmp, &data->trace_pages, lru) {
- BUG_ON(page->lru.next->prev != &page->lru);
- BUG_ON(page->lru.prev->next != &page->lru);
+ CHECK_COND(page->lru.next->prev != &page->lru);
+ CHECK_COND(page->lru.prev->next != &page->lru);
}
+
+ return 0;
}
/**
@@ -280,7 +288,6 @@ void *head_page(struct trace_array_cpu *
{
struct page *page;
- check_pages(data);
if (list_empty(&data->trace_pages))
return NULL;
@@ -2608,7 +2615,7 @@ tracing_entries_write(struct file *filp,
{
unsigned long val;
char buf[64];
- int ret;
+ int i, ret;
if (cnt >= sizeof(buf))
return -EINVAL;
@@ -2677,8 +2684,15 @@ tracing_entries_write(struct file *filp,
trace_free_page();
}
+ /* check integrity */
+ for_each_tracing_cpu(i)
+ check_pages(global_trace.data[i]);
+
filp->f_pos += cnt;
+ /* If check pages failed, return ENOMEM */
+ if (tracing_disabled)
+ cnt = -ENOMEM;
out:
max_tr.entries = global_trace.entries;
mutex_unlock(&trace_types_lock);
Index: linux-tip.git/kernel/trace/trace_selftest.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace_selftest.c 2008-05-21 19:06:11.000000000 -0400
+++ linux-tip.git/kernel/trace/trace_selftest.c 2008-05-21 20:25:05.000000000 -0400
@@ -28,6 +28,7 @@ trace_test_buffer_cpu(struct trace_array
page = list_entry(data->trace_pages.next, struct page, lru);
entries = page_address(page);
+ check_pages(data);
if (head_page(data) != entries)
goto failed;
--
* [PATCH 2/5] ftrace: remove wakeup functions from sched_switch tracer
2008-05-22 4:22 [PATCH 0/5] ftrace clean ups Steven Rostedt
2008-05-22 4:22 ` [PATCH 1/5] ftrace: limit use of check pages Steven Rostedt
@ 2008-05-22 4:22 ` Steven Rostedt
2008-05-22 9:46 ` Peter Zijlstra
2008-05-22 4:22 ` [PATCH 3/5] ftrace: move ftrace_special to trace.c Steven Rostedt
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, Steven Rostedt
[-- Attachment #1: ftrace-remove-wakeup-from-sched.patch --]
[-- Type: text/plain, Size: 3930 bytes --]
The added wakeup functions inside the sched_switch tracer dirty the trace
quite a bit. The sched_switch tracer is used to watch context switches.
Adding wakeups to it makes the flow of this trace much more complex to
understand.
Perhaps we can add a way to turn this tracing on or off, but for now I'm
reverting sched_switch back to its original behavior.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace_sched_switch.c | 92 --------------------------------------
1 file changed, 1 insertion(+), 91 deletions(-)
Index: linux-tip.git/kernel/trace/trace_sched_switch.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace_sched_switch.c 2008-05-21 13:25:34.000000000 -0400
+++ linux-tip.git/kernel/trace/trace_sched_switch.c 2008-05-21 22:24:13.000000000 -0400
@@ -72,59 +72,6 @@ sched_switch_callback(void *probe_data,
sched_switch_func(probe_data, __rq, prev, next);
}
-static void
-wakeup_func(void *private, void *__rq, struct task_struct *wakee, struct
- task_struct *curr)
-{
- struct trace_array **ptr = private;
- struct trace_array *tr = *ptr;
- struct trace_array_cpu *data;
- unsigned long flags;
- long disabled;
- int cpu;
-
- if (!tracer_enabled)
- return;
-
- tracing_record_cmdline(curr);
-
- local_irq_save(flags);
- cpu = raw_smp_processor_id();
- data = tr->data[cpu];
- disabled = atomic_inc_return(&data->disabled);
-
- if (likely(disabled == 1))
- tracing_sched_wakeup_trace(tr, data, wakee, curr, flags);
-
- atomic_dec(&data->disabled);
- local_irq_restore(flags);
-}
-
-static notrace void
-wake_up_callback(void *probe_data, void *call_data,
- const char *format, va_list *args)
-{
- struct task_struct *curr;
- struct task_struct *task;
- struct rq *__rq;
-
- if (likely(!tracer_enabled))
- return;
-
- /* Skip pid %d state %ld */
- (void)va_arg(*args, int);
- (void)va_arg(*args, long);
- /* now get the meat: "rq %p task %p rq->curr %p" */
- __rq = va_arg(*args, typeof(__rq));
- task = va_arg(*args, typeof(task));
- curr = va_arg(*args, typeof(curr));
-
- tracing_record_cmdline(task);
- tracing_record_cmdline(curr);
-
- wakeup_func(probe_data, __rq, task, curr);
-}
-
void
ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3)
{
@@ -163,47 +110,16 @@ static int tracing_sched_register(void)
{
int ret;
- ret = marker_probe_register("kernel_sched_wakeup",
- "pid %d state %ld ## rq %p task %p rq->curr %p",
- wake_up_callback,
- &ctx_trace);
- if (ret) {
- pr_info("wakeup trace: Couldn't add marker"
- " probe to kernel_sched_wakeup\n");
- return ret;
- }
-
- ret = marker_probe_register("kernel_sched_wakeup_new",
- "pid %d state %ld ## rq %p task %p rq->curr %p",
- wake_up_callback,
- &ctx_trace);
- if (ret) {
- pr_info("wakeup trace: Couldn't add marker"
- " probe to kernel_sched_wakeup_new\n");
- goto fail_deprobe;
- }
-
ret = marker_probe_register("kernel_sched_schedule",
"prev_pid %d next_pid %d prev_state %ld "
"## rq %p prev %p next %p",
sched_switch_callback,
&ctx_trace);
- if (ret) {
+ if (ret)
pr_info("sched trace: Couldn't add marker"
" probe to kernel_sched_schedule\n");
- goto fail_deprobe_wake_new;
- }
return ret;
-fail_deprobe_wake_new:
- marker_probe_unregister("kernel_sched_wakeup_new",
- wake_up_callback,
- &ctx_trace);
-fail_deprobe:
- marker_probe_unregister("kernel_sched_wakeup",
- wake_up_callback,
- &ctx_trace);
- return ret;
}
static void tracing_sched_unregister(void)
@@ -211,12 +127,6 @@ static void tracing_sched_unregister(voi
marker_probe_unregister("kernel_sched_schedule",
sched_switch_callback,
&ctx_trace);
- marker_probe_unregister("kernel_sched_wakeup_new",
- wake_up_callback,
- &ctx_trace);
- marker_probe_unregister("kernel_sched_wakeup",
- wake_up_callback,
- &ctx_trace);
}
void tracing_start_sched_switch(void)
--
* [PATCH 3/5] ftrace: move ftrace_special to trace.c
2008-05-22 4:22 [PATCH 0/5] ftrace clean ups Steven Rostedt
2008-05-22 4:22 ` [PATCH 1/5] ftrace: limit use of check pages Steven Rostedt
2008-05-22 4:22 ` [PATCH 2/5] ftrace: remove wakeup functions from sched_switch tracer Steven Rostedt
@ 2008-05-22 4:22 ` Steven Rostedt
2008-05-22 4:22 ` [PATCH 4/5] ftrace: add function tracing to wake up tracing Steven Rostedt
2008-05-22 4:22 ` [PATCH 5/5] ftrace: remove printks from irqsoff trace Steven Rostedt
4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, Steven Rostedt
[-- Attachment #1: ftrace-move-ftrace-special-out-of-sched-switch.patch --]
[-- Type: text/plain, Size: 2829 bytes --]
Move the ftrace_special out of sched_switch to trace.c.
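The moved ftrace_special keeps the same reentrancy guard it had in sched_switch: a per-CPU "disabled" counter is atomically incremented on entry, and the event is recorded only when this is the outermost call (counter went 0 to 1). A sketch of just that guard, in userspace C with C11 atomics (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

static _Atomic long disabled;	/* stands in for the per-CPU counter */
static long events_recorded;

static void trace_special_event(void)
{
	/* atomic_inc_return() equivalent: bump and read the new value */
	long d = atomic_fetch_add(&disabled, 1) + 1;

	if (d == 1)		/* outermost entry only: safe to record */
		events_recorded++;

	atomic_fetch_sub(&disabled, 1);
}
```

A nested call (e.g. from an interrupt landing while tracing is already in progress) sees a count greater than 1 and records nothing, which prevents the tracer from recursing into itself.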
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace.c | 27 +++++++++++++++++++++++++--
kernel/trace/trace_sched_switch.c | 24 ------------------------
2 files changed, 25 insertions(+), 26 deletions(-)
Index: linux-tip.git/kernel/trace/trace.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace.c 2008-05-21 21:32:23.000000000 -0400
+++ linux-tip.git/kernel/trace/trace.c 2008-05-21 23:19:55.000000000 -0400
@@ -983,6 +983,30 @@ tracing_sched_wakeup_trace(struct trace_
trace_wake_up();
}
+void
+ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3)
+{
+ struct trace_array *tr = &global_trace;
+ struct trace_array_cpu *data;
+ unsigned long flags;
+ long disabled;
+ int cpu;
+
+ if (tracing_disabled || current_trace == &no_tracer || !tr->ctrl)
+ return;
+
+ local_irq_save(flags);
+ cpu = raw_smp_processor_id();
+ data = tr->data[cpu];
+ disabled = atomic_inc_return(&data->disabled);
+
+ if (likely(disabled == 1))
+ __trace_special(tr, data, arg1, arg2, arg3);
+
+ atomic_dec(&data->disabled);
+ local_irq_restore(flags);
+}
+
#ifdef CONFIG_FTRACE
static void
function_trace_call(unsigned long ip, unsigned long parent_ip)
@@ -2986,8 +3010,6 @@ __init static int tracer_alloc_buffers(v
int ret = -ENOMEM;
int i;
- global_trace.ctrl = tracer_enabled;
-
/* TODO: make the number of buffers hot pluggable with CPUS */
tracing_nr_buffers = num_possible_cpus();
tracing_buffer_mask = cpu_possible_map;
@@ -3057,6 +3079,7 @@ __init static int tracer_alloc_buffers(v
current_trace = &no_tracer;
/* All seems OK, enable tracing */
+ global_trace.ctrl = tracer_enabled;
tracing_disabled = 0;
return 0;
Index: linux-tip.git/kernel/trace/trace_sched_switch.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace_sched_switch.c 2008-05-21 22:24:13.000000000 -0400
+++ linux-tip.git/kernel/trace/trace_sched_switch.c 2008-05-21 22:41:50.000000000 -0400
@@ -72,30 +72,6 @@ sched_switch_callback(void *probe_data,
sched_switch_func(probe_data, __rq, prev, next);
}
-void
-ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3)
-{
- struct trace_array *tr = ctx_trace;
- struct trace_array_cpu *data;
- unsigned long flags;
- long disabled;
- int cpu;
-
- if (!tracer_enabled)
- return;
-
- local_irq_save(flags);
- cpu = raw_smp_processor_id();
- data = tr->data[cpu];
- disabled = atomic_inc_return(&data->disabled);
-
- if (likely(disabled == 1))
- __trace_special(tr, data, arg1, arg2, arg3);
-
- atomic_dec(&data->disabled);
- local_irq_restore(flags);
-}
-
static void sched_switch_reset(struct trace_array *tr)
{
int cpu;
--
* [PATCH 4/5] ftrace: add function tracing to wake up tracing
2008-05-22 4:22 [PATCH 0/5] ftrace clean ups Steven Rostedt
` (2 preceding siblings ...)
2008-05-22 4:22 ` [PATCH 3/5] ftrace: move ftrace_special to trace.c Steven Rostedt
@ 2008-05-22 4:22 ` Steven Rostedt
2008-05-22 4:22 ` [PATCH 5/5] ftrace: remove printks from irqsoff trace Steven Rostedt
4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, Steven Rostedt
[-- Attachment #1: ftrace-add-function-trace-to-wakeup.patch --]
[-- Type: text/plain, Size: 2810 bytes --]
This patch adds tracing of the functions that are called
on the CPU of the task being traced.
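The filter at the heart of wakeup_tracer_call can be sketched on its own (a simplified, hypothetical userspace sketch: `wakeup_task`, the `cpu` field, and the callback shape are stand-ins for the kernel's). The function callback fires on every CPU, but only events on the CPU where the task being traced runs are recorded:

```c
#include <assert.h>
#include <stddef.h>

struct task { int cpu; };	/* stand-in for task_struct + task_cpu() */

static struct task *wakeup_task;	/* task whose wakeup latency is traced */
static int recorded;

static void wakeup_tracer_call(int this_cpu)
{
	if (!wakeup_task)
		return;			/* no wakeup trace in progress */

	if (wakeup_task->cpu != this_cpu)
		return;			/* event on an unrelated CPU: skip */

	recorded++;			/* would call trace_function() here */
}
```

In the real patch this check sits inside the preempt-disable and wakeup_lock protection shown in the diff, which is what makes reading `wakeup_task` safe.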
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace_sched_wakeup.c | 67 +++++++++++++++++++++++++++++++++++++-
1 file changed, 66 insertions(+), 1 deletion(-)
Index: linux-tip.git/kernel/trace/trace_sched_wakeup.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace_sched_wakeup.c 2008-05-21 20:22:27.000000000 -0400
+++ linux-tip.git/kernel/trace/trace_sched_wakeup.c 2008-05-22 00:01:50.000000000 -0400
@@ -30,6 +30,69 @@ static DEFINE_SPINLOCK(wakeup_lock);
static void __wakeup_reset(struct trace_array *tr);
+#ifdef CONFIG_FTRACE
+/*
+ * irqsoff uses its own tracer function to keep the overhead down:
+ */
+static void
+wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
+{
+ struct trace_array *tr = wakeup_trace;
+ struct trace_array_cpu *data;
+ unsigned long flags;
+ long disabled;
+ int resched;
+ int cpu;
+
+ if (likely(!wakeup_task))
+ return;
+
+ resched = need_resched();
+ preempt_disable_notrace();
+
+ cpu = raw_smp_processor_id();
+ data = tr->data[cpu];
+ disabled = atomic_inc_return(&data->disabled);
+ if (unlikely(disabled != 1))
+ goto out;
+
+ spin_lock_irqsave(&wakeup_lock, flags);
+
+ if (unlikely(!wakeup_task))
+ goto unlock;
+
+ /*
+ * The task can't disappear because it needs to
+ * wake up first, and we have the wakeup_lock.
+ */
+ if (task_cpu(wakeup_task) != cpu)
+ goto unlock;
+
+ trace_function(tr, data, ip, parent_ip, flags);
+
+ unlock:
+ spin_unlock_irqrestore(&wakeup_lock, flags);
+
+ out:
+ atomic_dec(&data->disabled);
+
+ /*
+ * To prevent recursion from the scheduler, if the
+ * resched flag was set before we entered, then
+ * don't reschedule.
+ */
+ if (resched)
+ preempt_enable_no_resched_notrace();
+ else
+ preempt_enable_notrace();
+}
+
+static struct ftrace_ops trace_ops __read_mostly =
+{
+ .func = wakeup_tracer_call,
+};
+#endif /* CONFIG_FTRACE */
+
/*
* Should this new latency be reported/recorded?
*/
@@ -73,7 +136,7 @@ wakeup_sched_switch(void *private, void
if (next != wakeup_task)
return;
- /* The task we are waitng for is waking up */
+ /* The task we are waiting for is waking up */
data = tr->data[wakeup_cpu];
/* disable local data, not wakeup_cpu data */
@@ -290,6 +353,7 @@ static void start_wakeup_tracer(struct t
smp_wmb();
tracer_enabled = 1;
+ register_ftrace_function(&trace_ops);
return;
fail_deprobe_wake_new:
@@ -305,6 +369,7 @@ fail_deprobe:
static void stop_wakeup_tracer(struct trace_array *tr)
{
tracer_enabled = 0;
+ unregister_ftrace_function(&trace_ops);
marker_probe_unregister("kernel_sched_schedule",
sched_switch_callback,
&wakeup_trace);
--
* [PATCH 5/5] ftrace: remove printks from irqsoff trace
2008-05-22 4:22 [PATCH 0/5] ftrace clean ups Steven Rostedt
` (3 preceding siblings ...)
2008-05-22 4:22 ` [PATCH 4/5] ftrace: add function tracing to wake up tracing Steven Rostedt
@ 2008-05-22 4:22 ` Steven Rostedt
4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 4:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, Steven Rostedt
[-- Attachment #1: ftrace-dont-printk-on-max.patch --]
[-- Type: text/plain, Size: 1458 bytes --]
Printing out new max latencies was fine for the old RT tracer, but for
mainline it is a bit messy. We also need to test whether the run queue
is locked before we can print, which means latencies may not be printed
when the run queue is locked on another CPU. This produces
inconsistencies in the output.
This patch simply removes the print altogether.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace_irqsoff.c | 16 ----------------
1 file changed, 16 deletions(-)
Index: linux-tip.git/kernel/trace/trace_irqsoff.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace_irqsoff.c 2008-05-21 20:22:27.000000000 -0400
+++ linux-tip.git/kernel/trace/trace_irqsoff.c 2008-05-22 00:11:10.000000000 -0400
@@ -165,22 +165,6 @@ check_critical_timing(struct trace_array
update_max_tr_single(tr, current, cpu);
- if (!runqueue_is_locked()) {
- if (tracing_thresh) {
- printk(KERN_INFO "(%16s-%-5d|#%d): %lu us critical"
- " section violates %lu us threshold.\n",
- current->comm, current->pid,
- raw_smp_processor_id(),
- latency, nsecs_to_usecs(tracing_thresh));
- } else {
- printk(KERN_INFO "(%16s-%-5d|#%d): new %lu us"
- " maximum-latency critical section.\n",
- current->comm, current->pid,
- raw_smp_processor_id(),
- latency);
- }
- }
-
max_sequence++;
out_unlock:
--
* Re: [PATCH 2/5] ftrace: remove wakeup functions from sched_switch tracer
2008-05-22 4:22 ` [PATCH 2/5] ftrace: remove wakeup functions from sched_switch tracer Steven Rostedt
@ 2008-05-22 9:46 ` Peter Zijlstra
0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2008-05-22 9:46 UTC (permalink / raw)
To: Steven Rostedt
Cc: Ingo Molnar, linux-kernel, pq, proski, sandmann, Steven Rostedt
On Thu, 2008-05-22 at 00:22 -0400, Steven Rostedt wrote:
> plain text document attachment (ftrace-remove-wakeup-from-sched.patch)
> The added wakeup functions inside the sched_switch tracer dirty the trace
> quite a bit. The sched_switch tracer is used to watch context switches.
> Adding wakeups to this makes the flow of this trace much more complex to
> understand.
>
> Perhaps we can add a way to turn on or off this tracing but for now I'm
> reverting the sched_switch back to its original behavior.
Not liking this - this is very useful.
use grep already.
* Re: [PATCH 1/5] ftrace: limit use of check pages
2008-05-22 4:22 ` [PATCH 1/5] ftrace: limit use of check pages Steven Rostedt
@ 2008-05-22 17:38 ` Pekka Paalanen
2008-05-22 17:43 ` Steven Rostedt
2008-05-22 18:06 ` [PATCH] ftrace: fix check_pages disable set Steven Rostedt
2008-05-27 15:34 ` [PATCH 1/5] ftrace: limit use of check pages Thomas Gleixner
1 sibling, 2 replies; 11+ messages in thread
From: Pekka Paalanen @ 2008-05-22 17:38 UTC (permalink / raw)
To: Steven Rostedt
Cc: Ingo Molnar, linux-kernel, proski, sandmann, a.p.zijlstra,
Steven Rostedt
On Thu, 22 May 2008 00:22:16 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:
> The check_pages function is called often enough that it can cause problems
> with trace outputs or even bring the system to a halt.
>
> This patch limits the check_pages to the places that are most likely to
> have problems. The check is made at the flip between the global array and
> the max save array, as well as when the size of the buffers changes and
> the self tests.
>
> This patch also removes the BUG_ON from check_pages and replaces it with
> a WARN_ON and disabling of the tracer.
>
> Signed-off-by: Steven Rostedt <srostedt@redhat.com>
> ---
> kernel/trace/trace.c | 32 +++++++++++++++++++++++---------
> kernel/trace/trace_selftest.c | 1 +
> 2 files changed, 24 insertions(+), 9 deletions(-)
>
> Index: linux-tip.git/kernel/trace/trace.c
> ===================================================================
> --- linux-tip.git.orig/kernel/trace/trace.c 2008-05-21 13:25:34.000000000 -0400
> +++ linux-tip.git/kernel/trace/trace.c 2008-05-21 21:32:23.000000000 -0400
> @@ -249,24 +249,32 @@ __update_max_tr(struct trace_array *tr,
> tracing_record_cmdline(current);
> }
>
> +#define CHECK_COND(cond) \
> + if (unlikely(cond)) { \
> + tracing_disabled = 0; \
-----------------------------------^
This looks like a typo.
I also quickly read through the other patches and didn't see any
obvious problems in them.
> + WARN_ON(1); \
> + return -1; \
> + }
> +
> /**
> * check_pages - integrity check of trace buffers
> *
> * As a safty measure we check to make sure the data pages have not
> - * been corrupted. TODO: configure to disable this because it adds
> - * a bit of overhead.
> + * been corrupted.
> */
> -void check_pages(struct trace_array_cpu *data)
> +int check_pages(struct trace_array_cpu *data)
> {
> struct page *page, *tmp;
>
> - BUG_ON(data->trace_pages.next->prev != &data->trace_pages);
> - BUG_ON(data->trace_pages.prev->next != &data->trace_pages);
> + CHECK_COND(data->trace_pages.next->prev != &data->trace_pages);
> + CHECK_COND(data->trace_pages.prev->next != &data->trace_pages);
>
> list_for_each_entry_safe(page, tmp, &data->trace_pages, lru) {
> - BUG_ON(page->lru.next->prev != &page->lru);
> - BUG_ON(page->lru.prev->next != &page->lru);
> + CHECK_COND(page->lru.next->prev != &page->lru);
> + CHECK_COND(page->lru.prev->next != &page->lru);
> }
> +
> + return 0;
> }
--
Pekka Paalanen
http://www.iki.fi/pq/
* Re: [PATCH 1/5] ftrace: limit use of check pages
2008-05-22 17:38 ` Pekka Paalanen
@ 2008-05-22 17:43 ` Steven Rostedt
2008-05-22 18:06 ` [PATCH] ftrace: fix check_pages disable set Steven Rostedt
1 sibling, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 17:43 UTC (permalink / raw)
To: Pekka Paalanen
Cc: Ingo Molnar, linux-kernel, proski, sandmann, a.p.zijlstra,
Steven Rostedt
On Thu, 22 May 2008, Pekka Paalanen wrote:
> On Thu, 22 May 2008 00:22:16 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > +#define CHECK_COND(cond) \
> > + if (unlikely(cond)) { \
> > + tracing_disabled = 0; \
>
> -----------------------------------^
> This looks like a typo.
>
> I also quickly read through the other patches and didn't see any
> obvious problems in them.
Indeed it is! I haven't hit the cond so I wouldn't have seen it in my
tests ;-)
Thanks!
-- Steve
* [PATCH] ftrace: fix check_pages disable set
2008-05-22 17:38 ` Pekka Paalanen
2008-05-22 17:43 ` Steven Rostedt
@ 2008-05-22 18:06 ` Steven Rostedt
1 sibling, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2008-05-22 18:06 UTC (permalink / raw)
To: Pekka Paalanen
Cc: Ingo Molnar, linux-kernel, proski, sandmann, a.p.zijlstra,
Steven Rostedt
Pekka Paalanen pointed out that I didn't set the tracing_disabled flag
when check_pages failed.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/trace/trace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-tip.git/kernel/trace/trace.c
===================================================================
--- linux-tip.git.orig/kernel/trace/trace.c 2008-05-22 11:44:51.000000000 -0400
+++ linux-tip.git/kernel/trace/trace.c 2008-05-22 14:03:51.000000000 -0400
@@ -251,7 +251,7 @@ __update_max_tr(struct trace_array *tr,
#define CHECK_COND(cond) \
if (unlikely(cond)) { \
- tracing_disabled = 0; \
+ tracing_disabled = 1; \
WARN_ON(1); \
return -1; \
}
* Re: [PATCH 1/5] ftrace: limit use of check pages
2008-05-22 4:22 ` [PATCH 1/5] ftrace: limit use of check pages Steven Rostedt
2008-05-22 17:38 ` Pekka Paalanen
@ 2008-05-27 15:34 ` Thomas Gleixner
1 sibling, 0 replies; 11+ messages in thread
From: Thomas Gleixner @ 2008-05-27 15:34 UTC (permalink / raw)
To: Steven Rostedt
Cc: Ingo Molnar, linux-kernel, pq, proski, sandmann, a.p.zijlstra,
Steven Rostedt
On Thu, 22 May 2008, Steven Rostedt wrote:
> The check_pages function is called often enough that it can cause problems
> with trace outputs or even bring the system to a halt.
>
> This patch limits the check_pages to the places that are most likely to
> have problems. The check is made at the flip between the global array and
> the max save array, as well as when the size of the buffers changes and
> the self tests.
>
> This patch also removes the BUG_ON from check_pages and replaces it with
> a WARN_ON and disabling of the tracer.
Applied and fixed the typo.
> + tracing_disabled = 0; \
Thanks,
tglx