linux-arch.vger.kernel.org archive mirror
* [PATCH/RFC] synchronize_rcu(): high latency on idle system
@ 2008-01-12  1:26 Benjamin LaHaise
  2008-01-12  2:37 ` Andi Kleen
  2008-01-12  9:23 ` Peter Zijlstra
  0 siblings, 2 replies; 7+ messages in thread
From: Benjamin LaHaise @ 2008-01-12  1:26 UTC (permalink / raw)
  To: dipankar, Andrew Morton; +Cc: linux-kernel, linux-arch

Hello folks,

I'd like to put the patch below out for comments to see if folks think the 
approach is a valid fix to reduce the latency of synchronize_rcu().  The 
motivation is that an otherwise idle system takes about 3 ticks per network 
interface in unregister_netdev() due to multiple calls to synchronize_rcu(), 
which adds up to quite a few seconds for tearing down thousands of 
interfaces.  By flushing pending rcu callbacks in the idle loop, the system 
makes progress hundreds of times faster.  If this is indeed a sane thing to do, 
it probably needs to be done for other architectures than x86.  And yes, the 
network stack shouldn't call synchronize_rcu() quite so much, but fixing that 
is a little more involved.

		-ben

diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 9663c2a..592f6e4 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -188,6 +188,9 @@ void cpu_idle(void)
 			rmb();
 			idle = pm_idle;
 
+			if (rcu_pending(cpu))
+				rcu_check_callbacks(cpu, 0);
+
 			if (!idle)
 				idle = default_idle;
 



Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-01-12  1:26 [PATCH/RFC] synchronize_rcu(): high latency on idle system Benjamin LaHaise
2008-01-12  2:37 ` Andi Kleen
2008-01-12 17:51   ` Benjamin LaHaise
2008-01-12 18:35     ` Andi Kleen
2008-01-12  9:23 ` Peter Zijlstra
2008-01-12 16:55   ` Paul E. McKenney
2008-01-12 17:33   ` Andi Kleen
