public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/3] tracing/stat: change dummy_cmp() to return -1
@ 2009-05-27  3:04 Li Zefan
  2009-05-27  3:04 ` [PATCH 2/3] tracing/stat: remember to free root node Li Zefan
  2009-05-27  3:05 ` [PATCH 3/3] tracing/stat: do some cleanups Li Zefan
  0 siblings, 2 replies; 6+ messages in thread
From: Li Zefan @ 2009-05-27  3:04 UTC (permalink / raw)
  To: Frederic Weisbecker, Steven Rostedt; +Cc: Ingo Molnar, LKML

Currently the output of trace_stat/workqueues is totally reversed:

 # cat /debug/tracing/trace_stat/workqueues
    ...
    1       17       17      210       37   `-blk_unplug_work+0x0/0x57
    1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
    1     3796     3796                     kblockd/1:120
    ...

The correct output should be:

    1     3796     3796                     kblockd/1:120
    1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
    1       17       17      210       37   `-blk_unplug_work+0x0/0x57

It's caused by "tracing/stat: replace linked list by an rbtree for sorting"
(53059c9b67a62a3dc8c80204d3da42b9267ea5a0).

dummy_cmp() should return -1, so each new node is always inserted as the
right-most node in the rbtree. This preserves insertion order, and the
output is thus sorted in ascending order.

[ Impact: fix the output of trace_stat/workqueues ]

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
---
 kernel/trace/trace_stat.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/trace/trace_stat.c b/kernel/trace/trace_stat.c
index 2e849b5..8b7420a 100644
--- a/kernel/trace/trace_stat.c
+++ b/kernel/trace/trace_stat.c
@@ -127,7 +127,7 @@ insert_stat(struct rb_root *root, struct stat_node *data, cmp_stat_t cmp)
  */
 static int dummy_cmp(void *p1, void *p2)
 {
-	return 1;
+	return -1;
 }
 
 /*
-- 
1.5.4.rc3



* [PATCH 0/3] tracing/stat: fixes, cleanups
@ 2009-05-28 16:42 Frederic Weisbecker
  2009-05-28 16:42 ` [PATCH 3/3] tracing/stat: do some cleanups Frederic Weisbecker
  0 siblings, 1 reply; 6+ messages in thread
From: Frederic Weisbecker @ 2009-05-28 16:42 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Frederic Weisbecker, Li Zefan, Steven Rostedt, Zhaolei

Hi,

Here are the latest histogram tracing core updates.
Those are about output fixes, memory leak fixes and code clarity.

Thanks,
Frederic.

The following changes since commit 53059c9b67a62a3dc8c80204d3da42b9267ea5a0:
  Frederic Weisbecker (1):
        tracing/stat: replace linked list by an rbtree for sorting

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git tracing/stat

Li Zefan (3):
      tracing/stat: change dummy_cmp() to return -1
      tracing/stat: remember to free root node
      tracing/stat: do some cleanups

 kernel/trace/trace_stat.c |   60 ++++++++++++++++++---------------------------
 1 files changed, 24 insertions(+), 36 deletions(-)

* [PATCH 1/3] tracing/stat: sort in ascending order
@ 2009-05-25  8:46 Li Zefan
  2009-05-25  8:46 ` [PATCH 3/3] tracing/stat: do some cleanups Li Zefan
  0 siblings, 1 reply; 6+ messages in thread
From: Li Zefan @ 2009-05-25  8:46 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Frederic Weisbecker, Steven Rostedt, LKML

Currently the output of trace_stat/workqueues is totally reversed:

 # cat /debug/tracing/trace_stat/workqueues
    ...
    1       17       17      210       37   `-blk_unplug_work+0x0/0x57
    1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
    1     3796     3796                     kblockd/1:120
    ...

The correct output should be:

    1     3796     3796                     kblockd/1:120
    1     3779     3779      181       11   |-cfq_kick_queue+0x0/0x2f
    1       17       17      210       37   `-blk_unplug_work+0x0/0x57

It's caused by "tracing/stat: replace linked list by an rbtree for sorting"
(53059c9b67a62a3dc8c80204d3da42b9267ea5a0).

Though we can simply change dummy_cmp() to return -1 instead of 1, IMO
it's better to always do ascending sorting in trace_stat.c, and leave each
stat tracer to decide whether to sort in descending or ascending order.

[ Impact: fix the output of trace_stat/workqueues ]

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
---
 kernel/trace/ftrace.c       |   12 ++++++------
 kernel/trace/trace_branch.c |    5 +++--
 kernel/trace/trace_stat.c   |    6 +-----
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 140699a..3dd16bd 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -315,29 +315,29 @@ static void *function_stat_start(struct tracer_stat *trace)
 }
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-/* function graph compares on total time */
+/* function graph compares on total time in reverse order */
 static int function_stat_cmp(void *p1, void *p2)
 {
 	struct ftrace_profile *a = p1;
 	struct ftrace_profile *b = p2;
 
-	if (a->time < b->time)
-		return -1;
 	if (a->time > b->time)
+		return -1;
+	if (a->time < b->time)
 		return 1;
 	else
 		return 0;
 }
 #else
-/* not function graph compares against hits */
+/* not function graph compares against hits in reverse order */
 static int function_stat_cmp(void *p1, void *p2)
 {
 	struct ftrace_profile *a = p1;
 	struct ftrace_profile *b = p2;
 
-	if (a->counter < b->counter)
-		return -1;
 	if (a->counter > b->counter)
+		return -1;
+	if (a->counter < b->counter)
 		return 1;
 	else
 		return 0;
diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
index 7a7a9fd..df58411 100644
--- a/kernel/trace/trace_branch.c
+++ b/kernel/trace/trace_branch.c
@@ -301,9 +301,10 @@ static int annotated_branch_stat_cmp(void *p1, void *p2)
 	percent_a = get_incorrect_percent(a);
 	percent_b = get_incorrect_percent(b);
 
-	if (percent_a < percent_b)
-		return -1;
+	/* sort in descending order */
 	if (percent_a > percent_b)
+		return -1;
+	if (percent_a < percent_b)
 		return 1;
 	else
 		return 0;
diff --git a/kernel/trace/trace_stat.c b/kernel/trace/trace_stat.c
index 2e849b5..6efbcb4 100644
--- a/kernel/trace/trace_stat.c
+++ b/kernel/trace/trace_stat.c
@@ -98,10 +98,6 @@ insert_stat(struct rb_root *root, struct stat_node *data, cmp_stat_t cmp)
 {
 	struct rb_node **new = &(root->rb_node), *parent = NULL;
 
-	/*
-	 * Figure out where to put new node
-	 * This is a descendent sorting
-	 */
 	while (*new) {
 		struct stat_node *this;
 		int result;
@@ -110,7 +106,7 @@ insert_stat(struct rb_root *root, struct stat_node *data, cmp_stat_t cmp)
 		result = cmp(data->stat, this->stat);
 
 		parent = *new;
-		if (result >= 0)
+		if (result < 0)
 			new = &((*new)->rb_left);
 		else
 			new = &((*new)->rb_right);
-- 
1.5.4.rc3




Thread overview: 6+ messages
2009-05-27  3:04 [PATCH 1/3] tracing/stat: change dummy_cmp() to return -1 Li Zefan
2009-05-27  3:04 ` [PATCH 2/3] tracing/stat: remember to free root node Li Zefan
2009-05-27  3:05 ` [PATCH 3/3] tracing/stat: do some cleanups Li Zefan
2009-05-27  3:42   ` [PATCH 3/3 -v2] " Li Zefan
  -- strict thread matches above, loose matches on Subject: below --
2009-05-28 16:42 [PATCH 0/3] tracing/stat: fixes, cleanups Frederic Weisbecker
2009-05-28 16:42 ` [PATCH 3/3] tracing/stat: do some cleanups Frederic Weisbecker
2009-05-25  8:46 [PATCH 1/3] tracing/stat: sort in ascending order Li Zefan
2009-05-25  8:46 ` [PATCH 3/3] tracing/stat: do some cleanups Li Zefan
