From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Arjan van de Ven <arjan@infradead.org>
Cc: dipankar@in.ibm.com, linux-input@vger.kernel.org,
dmitry.torokhov@gmail.com, linux-kernel@vger.kernel.org
Subject: Re: Question about usage of RCU in the input layer
Date: Fri, 20 Mar 2009 18:27:46 -0700
Message-ID: <20090321012746.GM6698@linux.vnet.ibm.com>
In-Reply-To: <20090320111354.679ab53d@infradead.org>
On Fri, Mar 20, 2009 at 11:13:54AM -0700, Arjan van de Ven wrote:
> On Fri, 20 Mar 2009 07:31:04 -0700
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> > >
> > > that'd be throwing out the baby with the bathwater... I'm trying to
> > > use the other cpus to do some of the boot work (so that the total
> > > goes faster); not using the other cpus would be counter productive
> > > to that. (As is just sitting in synchronize_rcu() when the other
> > > cpu is working.. hence this discussion ;-)
> >
> > OK, so you are definitely running multiple CPUs when the offending
> > synchronize_rcu() executes, then?
>
> absolutely.
> (and I'm using bootgraph.pl in scripts to track who's stalling etc)
> >
> > If so, here are some follow-on questions:
> >
> > 1. How many synchronize_rcu() calls are you seeing on the
> > critical boot path
>
> I've seen only this (input) one to take a long time
Ouch!!! A -single- synchronize_rcu() taking a full second??? That
indicates breakage.
> > and what value of HZ are you running?
>
> 1000
OK, in the absence of readers, CLASSIC_RCU should get through
synchronize_rcu() in a handful of milliseconds.
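If you want to double-check that on your setup, a quick hack along the
following lines (untested sketch, any convenient initcall would do)
should print how many jiffies a single grace period takes -- and at
HZ=1000, one jiffy is one millisecond:

#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>

/* Untested sketch: time a single synchronize_rcu() during boot. */
static int __init rcu_gp_timing_test(void)
{
        unsigned long start = jiffies;
        unsigned long delta;

        synchronize_rcu();
        delta = jiffies - start;
        pr_info("synchronize_rcu(): %lu jiffies (%u ms at HZ=%d)\n",
                delta, jiffies_to_msecs(delta), HZ);
        return 0;
}
late_initcall(rcu_gp_timing_test);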
> > If each synchronize_rcu() is taking (say) tens of jiffies,
> > then, as Peter Zijlstra notes earlier in this thread, we need to focus
> > on what is taking too long to get through its RCU read-side
> > critical sections
>
> I know that "the other guy" is not optimal and takes waaay too long.
That could explain why Peter focused on this case. ;-)
> > Otherwise, if each synchronize_rcu() is
> > in the 3-5 jiffy range, I may finally be forced to create an
> > expedited version of the synchronize_rcu() API.
>
> I think a simplified API for the "add to a list" case might make sense.
> Because the request isn't for a full sync for sure...
>
> (independent of that .. the open question is if this specific case is
> even needed; I think the code confused "send to others" with "wait
> until everyone sees"; afaik synchronize_rcu() has no pushing behavior
> at all, nor should it)
Quite possibly, perhaps Dmitry will come up with something.
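For reference, if all that code really needs is to publish a new
element, then adding to an RCU-protected list requires no grace period
at all; the grace period matters only on the removal/free side. A
rough sketch with made-up names (not the actual input-layer code):

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical element type, standing in for the input-layer handle. */
struct handler {
        struct list_head node;
        void (*fn)(void *data);
};

static LIST_HEAD(handler_list);
static DEFINE_SPINLOCK(handler_lock);

/* Publish: readers see the new element without anyone waiting. */
void handler_add(struct handler *h)
{
        spin_lock(&handler_lock);
        list_add_rcu(&h->node, &handler_list);
        spin_unlock(&handler_lock);
}

/* Only removal must wait out pre-existing readers before freeing. */
void handler_del(struct handler *h)
{
        spin_lock(&handler_lock);
        list_del_rcu(&h->node);
        spin_unlock(&handler_lock);
        synchronize_rcu();      /* or use call_rcu() to avoid blocking */
        kfree(h);
}

The spinlock serializes updaters only; readers just do rcu_read_lock()
and list_for_each_entry_rcu(), never blocking the adder.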
> > 2. If expediting is required, then the code calling
> > synchronize_rcu() might or might not have any idea whether or not
> > expediting is appropriate. If it does not, then we would need some
> > sort of way to tell synchronize_rcu() that it should act more
> > aggressively, perhaps via a /proc flag or a kernel global variable
> > indicating that boot is in progress.
> >
> > No, we do not want to make synchronize_rcu() aggressive all
> > the time, as this would harm performance and energy efficiency in
> > the normal runtime situation.
> >
> > So, if it turns out that synchronize_rcu()'s caller does not
> > know whether or not expediting is appropriate, can the boot
> > path manipulate such a flag or variable?
> >
> > 3. Which RCU implementation are you using? CONFIG_CLASSIC_RCU,
> > CONFIG_TREE_RCU, or CONFIG_PREEMPT_RCU?
>
> CLASSIC
OK, it usually has the fastest synchronize_rcu() at the moment, though
I will be giving TREE_RCU some more help.
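Coming back to my item 2 above, the boot-path knob could be as simple
as the following -- sketch only, rcu_boot_expedite and
rcu_force_grace_period() are made-up names, not existing symbols:

#include <linux/rcupdate.h>

/* Both names below are hypothetical placeholders. */
extern int rcu_boot_expedite;             /* set early, cleared late in boot */
extern void rcu_force_grace_period(void); /* aggressive grace-period push */

void synchronize_rcu_maybe_expedited(void)
{
        if (rcu_boot_expedite)
                rcu_force_grace_period();  /* boot: push the grace period hard */
        else
                synchronize_rcu();         /* runtime: normal, power-friendly path */
}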
Sounds like I should hold off in favor of Dmitry's and Peter's efforts.
Thanx, Paul