public inbox for linux-kernel@vger.kernel.org
* cpu_clock() in NMIs
@ 2009-12-14  2:25 David Miller
  2009-12-14  9:02 ` Peter Zijlstra
  2009-12-15  8:11 ` [tip:sched/urgent] sched: Fix cpu_clock() in NMIs, on !CONFIG_HAVE_UNSTABLE_SCHED_CLOCK tip-bot for David Miller
  0 siblings, 2 replies; 7+ messages in thread
From: David Miller @ 2009-12-14  2:25 UTC (permalink / raw)
  To: mingo; +Cc: tglx, peterz, linux-kernel


The background is that I was trying to resolve a sparc64 perf
issue when I discovered this problem.

On sparc64 I implement pseudo NMIs by simply running the kernel
at IRQ level 14 when local_irq_disable() is called, this allows
performance counter events to still come in at IRQ level 15.

This doesn't work if any code in an NMI handler does local_irq_save()
or local_irq_disable(): the "disable" drops us back to cpu IRQ level
14, which lets NMIs back in, and we recurse.

The only path in the perf event IRQ handling code that does that is
the code supporting frequency based events.  It uses cpu_clock().

cpu_clock() simply invokes sched_clock_cpu() with IRQs disabled.

And that's a fundamental bug all on its own, particularly for the
HAVE_UNSTABLE_SCHED_CLOCK case: NMIs can get into the sched_clock()
code and interrupt its local-IRQ-disabled sections.

Furthermore, for the !HAVE_UNSTABLE_SCHED_CLOCK case, the IRQ
disabling done by cpu_clock() is pure overhead and completely
unnecessary.

So the core problem is that sched_clock() is not NMI safe, but we
are invoking it from NMI contexts in the perf events code (via
cpu_clock()).

A less important issue is the overhead of IRQ disabling when it isn't
necessary in cpu_clock().  Maybe something simple like the patch below
could handle that.

diff --git a/kernel/sched_clock.c b/kernel/sched_clock.c
index 479ce56..5b49613 100644
--- a/kernel/sched_clock.c
+++ b/kernel/sched_clock.c
@@ -236,6 +236,18 @@ void sched_clock_idle_wakeup_event(u64 delta_ns)
 }
 EXPORT_SYMBOL_GPL(sched_clock_idle_wakeup_event);
 
+unsigned long long cpu_clock(int cpu)
+{
+	unsigned long long clock;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	clock = sched_clock_cpu(cpu);
+	local_irq_restore(flags);
+
+	return clock;
+}
+
 #else /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
 void sched_clock_init(void)
@@ -251,17 +263,12 @@ u64 sched_clock_cpu(int cpu)
 	return sched_clock();
 }
 
-#endif /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
 unsigned long long cpu_clock(int cpu)
 {
-	unsigned long long clock;
-	unsigned long flags;
+	return sched_clock_cpu(cpu);
+}
 
-	local_irq_save(flags);
-	clock = sched_clock_cpu(cpu);
-	local_irq_restore(flags);
+#endif /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
-	return clock;
-}
 EXPORT_SYMBOL_GPL(cpu_clock);


Thread overview: 7+ messages
2009-12-14  2:25 cpu_clock() in NMIs David Miller
2009-12-14  9:02 ` Peter Zijlstra
2009-12-14 19:09   ` David Miller
2009-12-14 19:32     ` Peter Zijlstra
2009-12-15  5:36       ` David Miller
2009-12-14 19:57     ` Mike Frysinger
2009-12-15  8:11 ` [tip:sched/urgent] sched: Fix cpu_clock() in NMIs, on !CONFIG_HAVE_UNSTABLE_SCHED_CLOCK tip-bot for David Miller
