From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1760419Ab2EWQFt (ORCPT );
	Wed, 23 May 2012 12:05:49 -0400
Received: from e31.co.us.ibm.com ([32.97.110.149]:51280 "EHLO e31.co.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752042Ab2EWQFr
	(ORCPT ); Wed, 23 May 2012 12:05:47 -0400
Date: Wed, 23 May 2012 09:01:21 -0700
From: "Paul E. McKenney"
To: Frederic Weisbecker
Cc: LKML, linaro-sched-sig@lists.linaro.org, Alessio Igor Bogani,
	Andrew Morton, Avi Kivity, Chris Metcalf, Christoph Lameter,
	Daniel Lezcano, Geoff Levand, Gilad Ben Yossef, Hakan Akkan,
	Ingo Molnar, Kevin Hilman, Max Krasnyansky, Peter Zijlstra,
	Stephen Hemminger, Steven Rostedt, Sven-Thorsten Dietrich,
	Thomas Gleixner
Subject: Re: [PATCH 17/41] rcu: Restart tick if we enqueue a callback in a nohz/cpuset CPU
Message-ID: <20120523160121.GC2402@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <1335830115-14335-1-git-send-email-fweisbec@gmail.com>
	<1335830115-14335-18-git-send-email-fweisbec@gmail.com>
	<20120522172714.GC8087@linux.vnet.ibm.com>
	<20120523140010.GD1663@somewhere>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20120523140010.GD1663@somewhere>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12052316-7282-0000-0000-00000946649D
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, May 23, 2012 at 04:00:15PM +0200, Frederic Weisbecker wrote:
> On Tue, May 22, 2012 at 10:27:14AM -0700, Paul E. McKenney wrote:
> > On Tue, May 01, 2012 at 01:54:51AM +0200, Frederic Weisbecker wrote:
> > > If we enqueue an rcu callback, we need the CPU tick to stay
> > > alive until we take care of those by completing the appropriate
> > > grace period.
> > >
> > > Thus, when we call_rcu(), send a self IPI that checks rcu_needs_cpu()
> > > so that we restore a periodic tick behaviour that can take care of
> > > everything.
> >
> > Ouch, I hadn't considered RCU callbacks being posted from within an
> > extended quiescent state.  I guess I need to make __call_rcu() either
> > complain about this or handle it correctly...  It would -usually- be
> > harmless, but there is getting to be quite a bit of active machinery
> > in the various idle loops, so just assuming that it cannot happen is
> > probably getting to be an obsolete assumption.
>
> Maybe first provide some detection to warn in such cases. And if it happens
> to warn too much, perhaps you can allow it?

Heh.  It is just as simple to allow it as it is to warn about it,
so I am just transitioning immediately to allowing it.

							Thanx, Paul

> > > Signed-off-by: Frederic Weisbecker
> > > Cc: Alessio Igor Bogani
> > > Cc: Andrew Morton
> > > Cc: Avi Kivity
> > > Cc: Chris Metcalf
> > > Cc: Christoph Lameter
> > > Cc: Daniel Lezcano
> > > Cc: Geoff Levand
> > > Cc: Gilad Ben Yossef
> > > Cc: Hakan Akkan
> > > Cc: Ingo Molnar
> > > Cc: Kevin Hilman
> > > Cc: Max Krasnyansky
> > > Cc: Paul E. McKenney
> > > Cc: Peter Zijlstra
> > > Cc: Stephen Hemminger
> > > Cc: Steven Rostedt
> > > Cc: Sven-Thorsten Dietrich
> > > Cc: Thomas Gleixner
> > > ---
> > >  kernel/rcutree.c |    7 +++++++
> > >  1 files changed, 7 insertions(+), 0 deletions(-)
> > >
> > > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > > index 3fffc26..b8d300c 100644
> > > --- a/kernel/rcutree.c
> > > +++ b/kernel/rcutree.c
> > > @@ -1749,6 +1749,13 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
> > >  	else
> > >  		trace_rcu_callback(rsp->name, head, rdp->qlen);
> > >
> > > +	/* Restart the timer if needed to handle the callbacks */
> > > +	if (cpuset_adaptive_nohz()) {
> > > +		/* Make updates on nxtlist visible to self IPI */
> > > +		barrier();
> > > +		smp_cpuset_update_nohz(smp_processor_id());
> > > +	}
> > > +
> > >  	/* If interrupts were disabled, don't dive into RCU core. */
> > >  	if (irqs_disabled_flags(flags)) {
> > >  		local_irq_restore(flags);
> > > --
> > > 1.7.5.4