From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 7 Oct 2015 11:11:17 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, dvhart@linux.intel.com, fweisbec@gmail.com,
	oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH tip/core/rcu 04/18] rcu: Use single-stage IPI algorithm for RCU expedited grace period
Message-ID: <20151007181117.GS3910@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20151006162907.GA12020@linux.vnet.ibm.com>
 <1444148977-14108-1-git-send-email-paulmck@linux.vnet.ibm.com>
 <1444148977-14108-4-git-send-email-paulmck@linux.vnet.ibm.com>
 <20151007132454.GA3604@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151007132454.GA3604@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Oct 07, 2015 at 03:24:54PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 06, 2015 at 09:29:23AM -0700, Paul E. McKenney wrote:
> > @@ -3494,19 +3483,21 @@ static int sync_rcu_preempt_exp_done(struct rcu_node *rnp)
> >   * recursively up the tree.  (Calm down, calm down, we do the recursion
> >   * iteratively!)
> >   *
> > - * Caller must hold the root rcu_node's exp_funnel_mutex.
> > + * Caller must hold the root rcu_node's exp_funnel_mutex and the
> > + * specified rcu_node structure's ->lock.
> >   */
> > -static void __maybe_unused rcu_report_exp_rnp(struct rcu_state *rsp,
> > -					       struct rcu_node *rnp, bool wake)
> > +static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
> > +				 bool wake, unsigned long flags)
> > +	__releases(rnp->lock)
> >  {
> > -	unsigned long flags;
> >  	unsigned long mask;
> >  
> > -	raw_spin_lock_irqsave(&rnp->lock, flags);
> > -	smp_mb__after_unlock_lock();
> 
> 	lockdep_assert_held(&rnp->lock);
> 
> > +/*
> > + * Report expedited quiescent state for specified node.  This is a
> > + * lock-acquisition wrapper function for __rcu_report_exp_rnp().
> > + *
> > + * Caller must hold the root rcu_node's exp_funnel_mutex.
> > + */
> > +static void __maybe_unused rcu_report_exp_rnp(struct rcu_state *rsp,
> > +					      struct rcu_node *rnp, bool wake)
> > +{
> > +	unsigned long flags;
> 
> 	lockdep_assert_held(&rcu_get_root(rsp)->exp_funnel_mutex);
> 
> > +
> > +	raw_spin_lock_irqsave(&rnp->lock, flags);
> > +	smp_mb__after_unlock_lock();
> > +	__rcu_report_exp_rnp(rsp, rnp, wake, flags);
> > +}
> 
> Etc.. these are much harder to ignore than comments.

Good point!  I probably should do the same for interrupt disabling.

							Thanx, Paul
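
P.S.  For the interrupt-disabling case, a minimal untested sketch of
what I have in mind.  It uses WARN_ON_ONCE()/irqs_disabled() because
lockdep does not hand us a ready-made irqs-disabled assertion, and the
rcu_assert_irqs_disabled() wrapper name is purely illustrative:

#include <linux/irqflags.h>	/* irqs_disabled() */
#include <linux/bug.h>		/* WARN_ON_ONCE() */
#include <linux/lockdep.h>	/* lockdep_assert_held() */

/*
 * Illustrative helper paralleling lockdep_assert_held():  Splat once
 * if invoked with interrupts enabled, instead of relying on a
 * "Caller must disable interrupts" comment that readers can ignore.
 */
#define rcu_assert_irqs_disabled() WARN_ON_ONCE(!irqs_disabled())

static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
				 bool wake, unsigned long flags)
	__releases(rnp->lock)
{
	lockdep_assert_held(&rnp->lock);
	rcu_assert_irqs_disabled();	/* Replaces the comment-only rule. */

	/* ... body unchanged ... */
}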