Message-ID: <4B962D57.1000406@cn.fujitsu.com>
Date: Tue, 09 Mar 2010 19:13:27 +0800
From: Lai Jiangshan
To: "Paul E. McKenney"
CC: Ingo Molnar, Peter Zijlstra, Steven Rostedt, Mathieu Desnoyers,
    josh@joshtriplett.org, LKML, Frederic Weisbecker
Subject: [RFC PATCH] rcu: don't ignore preempt_disable() in the idle loop

Currently, synchronize_sched() ignores preempt_disable() sequences in the
idle loop.  This makes synchronize_sched() less pure than it should be,
and it hurts tracing.

Paul made a proposal for this before:

http://lkml.org/lkml/2009/4/5/140
http://lkml.org/lkml/2009/4/6/496

But that fix required hacking into every architecture's idle loop.  This
is another attempt: it relies on the fact that the idle loops execute with
preempt_count() == 1.  (I have not looked deeply into all the idle loops,
though.)

Signed-off-by: Lai Jiangshan
---
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 3ec8160..0761723 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -80,6 +80,10 @@ DEFINE_PER_CPU(struct rcu_data, rcu_sched_data);
 struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh_state);
 DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
 
+#ifndef IDLE_CORE_LOOP_PREEMPT_COUNT
+#define IDLE_CORE_LOOP_PREEMPT_COUNT (1)
+#endif
+
 /*
  * Return true if an RCU grace period is in progress.  The ACCESS_ONCE()s
  * permit this function to be invoked without holding the root rcu_node
@@ -1114,6 +1118,26 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
 	raise_softirq(RCU_SOFTIRQ);
 }
 
+static inline int rcu_idle_qs(int cpu)
+{
+	if (!idle_cpu(cpu))
+		return 0;
+
+	if (!rcu_scheduler_active)
+		return 0;
+
+	if (in_softirq())
+		return 0;
+
+	if (hardirq_count() > (1 << HARDIRQ_SHIFT))
+		return 0;
+
+	if ((preempt_count() & PREEMPT_MASK) > IDLE_CORE_LOOP_PREEMPT_COUNT)
+		return 0;
+
+	return 1;
+}
+
 /*
  * Check to see if this CPU is in a non-context-switch quiescent state
  * (user mode or idle loop for rcu, non-softirq execution for rcu_bh).
@@ -1127,9 +1151,7 @@ void rcu_check_callbacks(int cpu, int user)
 {
 	if (!rcu_pending(cpu))
 		return; /* if nothing for RCU to do. */
-	if (user ||
-	    (idle_cpu(cpu) && rcu_scheduler_active &&
-	     !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
+	if (user || rcu_idle_qs(cpu)) {
 
 		/*
 		 * Get here if this CPU took its interrupt from user