From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: Byungchul Park <byungchul.park@lge.com>,
	jiangshanlai@gmail.com, josh@joshtriplett.org,
	mathieu.desnoyers@efficios.com, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, joel@joelfernandes.org
Subject: Re: [PATCH] rcu: Refactor rcu_{nmi,irq}_{enter,exit}()
Date: Mon, 25 Jun 2018 08:43:38 -0700
Message-ID: <20180625154338.GP3593@linux.vnet.ibm.com>
In-Reply-To: <20180625110248.5e679a8d@gandalf.local.home>

On Mon, Jun 25, 2018 at 11:02:48AM -0400, Steven Rostedt wrote:
> On Mon, 25 Jun 2018 07:48:49 -0700
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > > > @@ -923,7 +932,7 @@ void rcu_user_exit(void)
> > > >  #endif /* CONFIG_NO_HZ_FULL */
> > > >  
> > > >  /**
> > > > - * rcu_nmi_enter - inform RCU of entry to NMI context
> > > > + * rcu_nmi_enter_common - inform RCU of entry to NMI context
> > > >   *
> > > >   * If the CPU was idle from RCU's viewpoint, update rdtp->dynticks and
> > > >   * rdtp->dynticks_nmi_nesting to let the RCU grace-period handling know
> > > > @@ -931,10 +940,10 @@ void rcu_user_exit(void)
> > > >   * long as the nesting level does not overflow an int.  (You will probably
> > > >   * run out of stack space first.)
> > > >   *
> > > > - * If you add or remove a call to rcu_nmi_enter(), be sure to test
> > > > + * If you add or remove a call to rcu_nmi_enter_common(), be sure to test
> > > >   * with CONFIG_RCU_EQS_DEBUG=y.
> > > >   */
> > > > -void rcu_nmi_enter(void)
> > > > +static __always_inline void rcu_nmi_enter_common(bool irq)
> > > >  {
> > > >  	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> > > >  	long incby = 2;
> > > > @@ -951,7 +960,15 @@ void rcu_nmi_enter(void)
> > > >  	 * period (observation due to Andy Lutomirski).
> > > >  	 */
> > > >  	if (rcu_dynticks_curr_cpu_in_eqs()) {
> > > > +
> > > > +		if (irq)
> > > > +			rcu_dynticks_task_exit();
> > > > +
> > > >  		rcu_dynticks_eqs_exit();
> > > > +
> > > > +		if (irq)
> > > > +			rcu_cleanup_after_idle();
> > > > +
> > > >  		incby = 1;
> > > >  	}
> > > >  	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),  
> > > 
> > > 
> > > There is a slight change here, although I don't think it is an issue,
> > > but I want to bring it up just in case.
> > > 
> > > The old way had:
> > > 
> > > 	rcu_dynticks_task_exit();
> > > 	rcu_dynticks_eqs_exit();
> > > 	trace_rcu_dyntick();
> > > 	rcu_cleanup_after_idle();
> > > 
> > > The new way has:
> > > 
> > > 	rcu_dynticks_task_exit();
> > > 	rcu_dynticks_eqs_exit();
> > > 	rcu_cleanup_after_idle();
> > > 	trace_rcu_dyntick();
> > > 
> > > As that tracepoint will use RCU, will this cause any side effects?
> > > 
> > > My thought is that the new way is actually more correct, as I'm not
> > > sure we wanted RCU usage before the rcu_cleanup_after_idle().  
> > 
> > I believe that this is OK because it is the position of the call to
> > rcu_dynticks_eqs_exit() that really matters.  Before this call, RCU
> > is not yet watching, and after this call it is watching.  Reversing
> > the calls to rcu_cleanup_after_idle() and trace_rcu_dyntick() has them
> > both being invoked while RCU is watching.
> > 
> > All that rcu_cleanup_after_idle() does is to account for the time that
> > passed while the CPU was idle, for example, advancing callbacks to allow
> > for however many RCU grace periods completed during that idle period.
> > 
> > Or am I missing something subtle?
> 
> As I stated above, I actually think the new way is more correct. That's
> because the trace event is the first user of RCU here and it probably
> won't be the last. It makes more sense to do it after the call to
> rcu_cleanup_after_idle(), just because it keeps all the RCU users after
> the RCU internal code for coming out of idle. Sure,
> rcu_cleanup_after_idle() doesn't do anything now that could affect
> this, but maybe it will in the future?

If rcu_cleanup_after_idle()'s job changes, then yes, changes might be
needed here and perhaps elsewhere as well.  ;-)
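
For concreteness, here is a simplified sketch of the exit-from-EQS path
as it ends up after this patch (names as in the hunk above; the nesting
bookkeeping and the tracepoint's arguments are elided):

	if (rcu_dynticks_curr_cpu_in_eqs()) {
		/* RCU is not yet watching this CPU. */
		if (irq)
			rcu_dynticks_task_exit();

		rcu_dynticks_eqs_exit();	/* RCU watching from here on. */

		if (irq)
			rcu_cleanup_after_idle(); /* OK: RCU already watching. */

		incby = 1;
	}
	/* The tracepoint now runs after rcu_cleanup_after_idle(), but
	 * still after rcu_dynticks_eqs_exit(), so its RCU usage is safe. */
	trace_rcu_dyntick(...);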

> > (At the very least, you would be quite right to ask that this be added
> > to the commit log!)
> 
> Yes, I agree. There should be a comment in the change log about this
> simply because this is technically a functional change.

Very good, will do!

							Thanx, Paul

