From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Christoph Lameter <cl@linux.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>, linux-kernel@vger.kernel.org
Subject: Re: [RFC] dynticks: dynticks_idle is only modified locally use this_cpu ops
Date: Thu, 4 Sep 2014 09:16:29 -0700 [thread overview]
Message-ID: <20140904161629.GF5001@linux.vnet.ibm.com> (raw)
In-Reply-To: <alpine.DEB.2.11.1409040944220.4565@gentwo.org>
On Thu, Sep 04, 2014 at 10:04:17AM -0500, Christoph Lameter wrote:
>
> On Wed, 3 Sep 2014, Paul E. McKenney wrote:
>
> > As noted earlier, in theory, the atomic operations could be nonatomic,
>
> Well, as demonstrated by the patch earlier: the atomic operations are only
> used on the local CPU. There is no synchronization in that sense needed
> between processors, because there is never a remote atomic operation.

Easy to say!  ;-)
> > > The code looks fragile and bound to have issues in the future given the
> > > barriers/atomics etc. Its going to be cleaner without that.
> >
> > What exactly looks fragile about it, and exactly what issues do you
> > anticipate?
>
> I am concerned about creating unnecessary synchronization issues. In
> this case we have already discovered that the atomic operations on per-CPU
> variables are only used to modify the contents from the local CPU.
>
> This means at minimum we can give up on the use of atomics and keep the
> barriers to enforce visibility.

Sounds like a desire for a potential optimization rather than any
sort of fragility. And in this case, it is not clear that your desire
of replacing a value-returning atomic operation with a normal memory
reference and a pair of memory barriers actually makes anything go faster.
So in short, you don't see the potential for this use case actually
breaking anything, correct?
Besides, RCU is not the only place where atomics are used on per-CPU
variables. For one thing, there are a number of per-CPU spinlocks in use
in various places throughout the kernel. For another thing, there is also
a large number of per-CPU structures (not pointers to structures, actual
structures), and I bet that a fair number of these feature cross-CPU
writes and cross-CPU atomics. RCU's rcu_data structures certainly do.
> > > And we are right now focusing on the simplest case. The atomics scheme is
> > > used multiple times in the RCU subsystem. There is more weird looking code
> > > there like atomic_add using zero etc.
> >
> > The atomic_add_return(0,...) reads the value out, forcing full ordering.
> > Again, in theory, this could be a volatile read with explicit memory-barrier
> > instructions on either side, but it is not clear which wins. (Keep in
> > mind that almost all of the atomic_add_return(0,...) calls for a given
> > dynticks counter are executed from a single kthread.)
> >
> > If systems continue to add CPUs like they have over the past decade or
> > so, I expect that you will be seeing more code like RCU's, not less.
>
> We have other code like this in multiple subsystems, but it does not have
> the barrier issues: per-CPU variables are always updated without the use
> of atomics, and inspection of the per-CPU state from remote CPUs also
> works just fine without them.

Including the per-CPU spinlocks? That seems a bit unlikely. And again,
I expect that a fair number of the per-CPU structures involve cross-CPU
synchronization.
> I'd like to simplify this as much as possible and make it consistent
> throughout the kernel.

It already is consistent, just not in the manner that you want.  ;-)

But -why- do you want these restrictions? How does it help anything?
Thanx, Paul