From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Josh Triplett <josh@joshtriplett.org>,
linux-kernel@vger.kernel.org, mingo@elte.hu,
laijs@cn.fujitsu.com, dipankar@in.ibm.com,
akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
niv@us.ibm.com, tglx@linutronix.de, rostedt@goodmis.org,
Valdis.Kletnieks@vt.edu, dhowells@redhat.com,
edumazet@google.com, darren@dvhart.com, fweisbec@gmail.com,
sbw@mit.edu
Subject: Re: [PATCH tip/core/rcu 6/7] rcu: Drive quiescent-state-forcing delay from HZ
Date: Tue, 21 May 2013 09:54:59 -0700
Message-ID: <20130521165459.GO3578@linux.vnet.ibm.com>
In-Reply-To: <20130521094531.GE26912@twins.programming.kicks-ass.net>
On Tue, May 21, 2013 at 11:45:31AM +0200, Peter Zijlstra wrote:
> On Thu, May 16, 2013 at 06:22:10AM -0700, Paul E. McKenney wrote:
> > > But somehow I imagined making a CPU part of the GP would be easier than taking
> > > it out. After all, taking it out is dangerous and careful work; one must not
> > > accidentally execute a callback or otherwise end a GP before its time.
> > >
> > > When entering the GP cycle there is no such concern, the CPU state is clean
> > > after all.
> >
> > But that would increase the overhead of GP initialization. Right now,
> > GP initialization touches only the leaf rcu_node structures, of which
> > there are by default one per 16 CPUs (and can be configured up to one per
> > 64 CPUs, which it is on really big systems). So on busy mixed-workload
> > systems, this approach increases GP initialization overhead for no
> > good reason -- and on systems running these sorts of workloads, there
> > usually aren't "sacrificial lamb" timekeeping CPUs whose utilization
> > doesn't matter.
>
> Right, so I read through some of the fqs code to get a better feel for
> things and I suppose I see what you're talking about :-)
>
> The only thing I could come up with is making fqslock a global/local
> style lock, so that individual CPUs can adjust their own state without
> bouncing the lock around.
Maybe... The current design uses bitmasks at each level, and avoiding the
upper-level locks would mean making RCU work with out-of-date bitmasks
at the upper levels. That might be possible, but it is not clear to me
that it would be a win.
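To make the structure concrete, here is a toy userspace model of that
hierarchical-bitmask design. This is a sketch only, not the rcutree
code: the field names (qsmask, grpmask, parent) merely echo the real
struct rcu_node, the fanout is shrunk to 4 CPUs per leaf, and pthread
mutexes stand in for the raw spinlocks. It is single-threaded, so the
unlocked read of root.qsmask at the end is safe here.

#include <stdio.h>
#include <pthread.h>

#define CPUS_PER_LEAF	4
#define NR_LEAVES	2
#define NR_CPUS		(CPUS_PER_LEAF * NR_LEAVES)

struct node {
	pthread_mutex_t lock;
	unsigned long qsmask;	/* bits still blocking the current GP */
	unsigned long grpmask;	/* this node's bit in its parent's qsmask */
	struct node *parent;	/* NULL for the root */
};

static struct node root = { .lock = PTHREAD_MUTEX_INITIALIZER };
static struct node leaves[NR_LEAVES];

/* CPU "cpu" reports a quiescent state for the current grace period. */
static void report_qs(int cpu)
{
	struct node *np = &leaves[cpu / CPUS_PER_LEAF];
	unsigned long mask = 1UL << (cpu % CPUS_PER_LEAF);

	/*
	 * Clear our bit at the leaf.  If that empties the leaf's mask,
	 * walk up and clear the leaf's bit in its parent, and so on.
	 * Each step up takes the next level's lock, which is exactly
	 * why skipping the upper-level locks would leave the upper
	 * masks out of date.
	 */
	for (;;) {
		pthread_mutex_lock(&np->lock);
		np->qsmask &= ~mask;
		if (np->qsmask || !np->parent) {
			pthread_mutex_unlock(&np->lock);
			break;
		}
		mask = np->grpmask;
		pthread_mutex_unlock(&np->lock);
		np = np->parent;
	}
	if (!root.qsmask)
		printf("grace period may end after CPU %d reports\n", cpu);
}

int main(void)
{
	int i;

	/* GP initialization touches only the leaves plus the root. */
	root.qsmask = (1UL << NR_LEAVES) - 1;
	for (i = 0; i < NR_LEAVES; i++) {
		pthread_mutex_init(&leaves[i].lock, NULL);
		leaves[i].qsmask = (1UL << CPUS_PER_LEAF) - 1;
		leaves[i].grpmask = 1UL << i;
		leaves[i].parent = &root;
	}
	for (i = 0; i < NR_CPUS; i++)
		report_qs(i);
	return 0;
}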
I could also maintain yet another bitmask at the bottom level to record
the idle CPUs, but it is not clear that this is a win, either, especially
on systems with frequent idle/busy transitions.
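For concreteness, the extra bitmask might look something like the
fragment below. This is purely hypothetical (the kernel keeps no such
idlemask); the point it illustrates is the cost: force-quiescent-state
processing could skip CPUs marked idle, but every idle entry and exit
now takes the leaf lock on what is otherwise a fast path.

#include <pthread.h>

#define CPUS_PER_LEAF	4

struct leaf {
	pthread_mutex_t lock;
	unsigned long idlemask;	/* hypothetical: one bit per idle CPU */
};

static struct leaf leaves[2] = {
	{ .lock = PTHREAD_MUTEX_INITIALIZER },
	{ .lock = PTHREAD_MUTEX_INITIALIZER },
};

static void idle_transition(int cpu, int entering_idle)
{
	struct leaf *lp = &leaves[cpu / CPUS_PER_LEAF];
	unsigned long mask = 1UL << (cpu % CPUS_PER_LEAF);

	pthread_mutex_lock(&lp->lock);	/* paid on EVERY transition */
	if (entering_idle)
		lp->idlemask |= mask;	/* fqs could skip this CPU */
	else
		lp->idlemask &= ~mask;
	pthread_mutex_unlock(&lp->lock);
}

int main(void)
{
	idle_transition(0, 1);	/* CPU 0 goes idle... */
	idle_transition(0, 0);	/* ...and back: two lock round trips */
	return 0;
}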
> It would make the fqs itself a 'bit' more expensive, but ideally those
> don't happen that often, ha!
>
> But yeah, every time you let the fqs propagate 'idle' state up the tree
> your join becomes more expensive too.
Yep! :-/
Thanx, Paul