public inbox for linux-kernel@vger.kernel.org
* [PATCH -tip] x86,perf: P4 PMU -- protect sensible procedures from preemption
@ 2010-05-05 15:07 Cyrill Gorcunov
  2010-05-05 16:57 ` Frederic Weisbecker
  0 siblings, 1 reply; 16+ messages in thread
From: Cyrill Gorcunov @ 2010-05-05 15:07 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Peter Zijlstra, Steven Rostedt, Frederic Weisbecker

Steven reported
|
| I'm getting:
|
| Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
| Call Trace:
|  [<ffffffff811c7565>] debug_smp_processor_id+0xd5/0xf0
|  [<ffffffff81019874>] p4_hw_config+0x2b/0x15c
|  [<ffffffff8107acbc>] ? trace_hardirqs_on_caller+0x12b/0x14f
|  [<ffffffff81019143>] hw_perf_event_init+0x468/0x7be
|  [<ffffffff810782fd>] ? debug_mutex_init+0x31/0x3c
|  [<ffffffff810c68b2>] T.850+0x273/0x42e
|  [<ffffffff810c6cab>] sys_perf_event_open+0x23e/0x3f1
|  [<ffffffff81009e6a>] ? sysret_check+0x2e/0x69
|  [<ffffffff81009e32>] system_call_fastpath+0x16/0x1b
|
| When running perf record in latest tip/perf/core
|

Because P4 counters are shared between HT threads, we synthetically
divide the whole set of counters into two non-intersecting subsets,
and while we're borrowing counters from these subsets we must not
be preempted. So use a get_cpu/put_cpu pair.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
CC: Steven Rostedt <rostedt@goodmis.org>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@elte.hu>
CC: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
---
 arch/x86/kernel/cpu/perf_event_p4.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Index: linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
=====================================================================
--- linux-2.6.git.orig/arch/x86/kernel/cpu/perf_event_p4.c
+++ linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
@@ -421,7 +421,7 @@ static u64 p4_pmu_event_map(int hw_event
 
 static int p4_hw_config(struct perf_event *event)
 {
-	int cpu = raw_smp_processor_id();
+	int cpu = get_cpu();
 	u32 escr, cccr;
 
 	/*
@@ -440,7 +440,7 @@ static int p4_hw_config(struct perf_even
 		event->hw.config = p4_set_ht_bit(event->hw.config);
 
 	if (event->attr.type != PERF_TYPE_RAW)
-		return 0;
+		goto out;
 
 	/*
 	 * We don't control raw events so it's up to the caller
@@ -455,6 +455,8 @@ static int p4_hw_config(struct perf_even
 		(p4_config_pack_escr(P4_ESCR_MASK_HT) |
 		 p4_config_pack_cccr(P4_CCCR_MASK_HT));
 
+out:
+	put_cpu();
 	return 0;
 }
 
@@ -741,7 +743,7 @@ static int p4_pmu_schedule_events(struct
 {
 	unsigned long used_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	unsigned long escr_mask[BITS_TO_LONGS(ARCH_P4_TOTAL_ESCR)];
-	int cpu = raw_smp_processor_id();
+	int cpu = get_cpu();
 	struct hw_perf_event *hwc;
 	struct p4_event_bind *bind;
 	unsigned int i, thread, num;
@@ -777,6 +779,7 @@ reserve:
 	}
 
 done:
+	put_cpu();
 	return num ? -ENOSPC : 0;
 }
 

* [PATCH -tip] x86,perf: P4 PMU -- protect sensible procedures from preemption
@ 2010-05-07 15:05 Cyrill Gorcunov
  2010-05-08  8:06 ` Ingo Molnar
  0 siblings, 1 reply; 16+ messages in thread
From: Cyrill Gorcunov @ 2010-05-07 15:05 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Steven Rostedt, Peter Zijlstra, Frederic Weisbecker,
	Lin Ming

Steven reported
|
| I'm getting:
|
| Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
| Call Trace:
|  [<ffffffff811c7565>] debug_smp_processor_id+0xd5/0xf0
|  [<ffffffff81019874>] p4_hw_config+0x2b/0x15c
|  [<ffffffff8107acbc>] ? trace_hardirqs_on_caller+0x12b/0x14f
|  [<ffffffff81019143>] hw_perf_event_init+0x468/0x7be
|  [<ffffffff810782fd>] ? debug_mutex_init+0x31/0x3c
|  [<ffffffff810c68b2>] T.850+0x273/0x42e
|  [<ffffffff810c6cab>] sys_perf_event_open+0x23e/0x3f1
|  [<ffffffff81009e6a>] ? sysret_check+0x2e/0x69
|  [<ffffffff81009e32>] system_call_fastpath+0x16/0x1b
|
| When running perf record in latest tip/perf/core
|

Because P4 counters are shared between HT threads, we synthetically
divide the whole set of counters into two non-intersecting subsets,
and while we're borrowing counters from these subsets we must not
be preempted. So use a get_cpu/put_cpu pair.

Also, p4_pmu_schedule_events() should use smp_processor_id() rather
than the raw_ version. This allows us to catch preemption issues
(if there ever are any).

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
CC: Steven Rostedt <rostedt@goodmis.org>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@elte.hu>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
---

N.B. I'm still investigating whether preemption really needs to be
disabled in p4_pmu_schedule_events(). Steven has tested this version
and the preemption issue didn't reveal itself, but I think it's
valuable to close the former issue in p4_hw_config() first.

 arch/x86/kernel/cpu/perf_event_p4.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Index: linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
=====================================================================
--- linux-2.6.git.orig/arch/x86/kernel/cpu/perf_event_p4.c
+++ linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
@@ -421,7 +421,7 @@ static u64 p4_pmu_event_map(int hw_event
 
 static int p4_hw_config(struct perf_event *event)
 {
-	int cpu = raw_smp_processor_id();
+	int cpu = get_cpu();
 	u32 escr, cccr;
 
 	/*
@@ -440,7 +440,7 @@ static int p4_hw_config(struct perf_even
 		event->hw.config = p4_set_ht_bit(event->hw.config);
 
 	if (event->attr.type != PERF_TYPE_RAW)
-		return 0;
+		goto out;
 
 	/*
 	 * We don't control raw events so it's up to the caller
@@ -455,6 +455,8 @@ static int p4_hw_config(struct perf_even
 		(p4_config_pack_escr(P4_ESCR_MASK_HT) |
 		 p4_config_pack_cccr(P4_CCCR_MASK_HT));
 
+out:
+	put_cpu();
 	return 0;
 }
 
@@ -741,7 +743,7 @@ static int p4_pmu_schedule_events(struct
 {
 	unsigned long used_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	unsigned long escr_mask[BITS_TO_LONGS(ARCH_P4_TOTAL_ESCR)];
-	int cpu = raw_smp_processor_id();
+	int cpu = smp_processor_id();
 	struct hw_perf_event *hwc;
 	struct p4_event_bind *bind;
 	unsigned int i, thread, num;


Thread overview: 16+ messages
2010-05-05 15:07 [PATCH -tip] x86,perf: P4 PMU -- protect sensible procedures from preemption Cyrill Gorcunov
2010-05-05 16:57 ` Frederic Weisbecker
2010-05-05 17:42   ` Cyrill Gorcunov
2010-05-05 17:58     ` Frederic Weisbecker
2010-05-06  6:44     ` Ingo Molnar
2010-05-06  7:39       ` Cyrill Gorcunov
2010-05-06  7:42         ` Ingo Molnar
2010-05-06  7:45           ` Cyrill Gorcunov
2010-05-06 13:45             ` Steven Rostedt
2010-05-06 14:48               ` Cyrill Gorcunov
2010-05-06 15:26                 ` Cyrill Gorcunov
2010-05-06 18:32                   ` Steven Rostedt
2010-05-06 18:36                     ` Cyrill Gorcunov
  -- strict thread matches above, loose matches on Subject: below --
2010-05-07 15:05 Cyrill Gorcunov
2010-05-08  8:06 ` Ingo Molnar
2010-05-08  8:09   ` Cyrill Gorcunov
