From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Dmitry Shmidt <dimitrysh@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>, Tejun Heo <tj@kernel.org>,
John Stultz <john.stultz@linaro.org>,
Ingo Molnar <mingo@redhat.com>,
lkml <linux-kernel@vger.kernel.org>,
Rom Lemarchand <romlem@google.com>,
Colin Cross <ccross@google.com>, Todd Kjos <tkjos@google.com>,
Oleg Nesterov <oleg@redhat.com>
Subject: Re: Severe performance regression w/ 4.4+ on Android due to cgroup locking changes
Date: Wed, 13 Jul 2016 14:05:56 -0700 [thread overview]
Message-ID: <20160713210556.GP7094@linux.vnet.ibm.com> (raw)
In-Reply-To: <CAH7ZN-yMyaF+UYKajVhhMdoq3Yn+GHFDmsO01sFHQsv_oVUbNw@mail.gmail.com>
On Wed, Jul 13, 2016 at 02:01:27PM -0700, Dmitry Shmidt wrote:
> On Wed, Jul 13, 2016 at 1:52 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 13, 2016 at 10:26:57PM +0200, Peter Zijlstra wrote:
> >> On Wed, Jul 13, 2016 at 04:18:23PM -0400, Tejun Heo wrote:
> >> > Hello, John.
> >> >
> >> > On Wed, Jul 13, 2016 at 01:13:11PM -0700, John Stultz wrote:
> >> > > On Wed, Jul 13, 2016 at 11:33 AM, Tejun Heo <tj@kernel.org> wrote:
> >> > > > On Wed, Jul 13, 2016 at 02:21:02PM -0400, Tejun Heo wrote:
> >> > > >> One interesting thing to try would be replacing it with a regular
> >> > > >> non-percpu rwsem and see how it behaves. That should easily tell us
> >> > > >> whether this is from actual contention or artifacts from percpu_rwsem
> >> > > >> implementation.
> >> > > >
> >> > > > So, something like the following. Can you please see whether this
> >> > > > makes any difference?
> >> > >
> >> > > Yea. So this brings it down for me closer to what we're seeing with
> >> > > the Dmitry's patch reverting the two problematic commits, usually
> >> > > 10-50us with one early spike at 18ms.
> >> >
> >> > So, it's a percpu rwsem issue then.  I haven't really followed the
> >> > percpu rwsem changes closely.  Oleg, are multi-millisecond delays
> >> > on down_write expected with the current implementation of
> >> > percpu_rwsem?
> >>
> >> There is a synchronize_sched() in there, so sorta. That thing is heavily
> >> geared towards readers, which is the only 'sane' choice for global locks.
> >
> > Then one diagnostic step to take would be to replace that
> > synchronize_sched() with synchronize_sched_expedited(), and see if that
> > gets rid of the delays.
> >
> > Not a particularly real-time-friendly fix, but certainly a good check
> > on our various assumptions.
>
> All delays are <200 us, except for one of 3 ms.
That does indicate that synchronize_sched() is the latency culprit.
Tejun's lglock-like approach seems to me to be the next thing to try.

							Thanx, Paul
> > ------------------------------------------------------------------------
> >
> > diff --git a/kernel/rcu/sync.c b/kernel/rcu/sync.c
> > index be922c9f3d37..211acddc7e21 100644
> > --- a/kernel/rcu/sync.c
> > +++ b/kernel/rcu/sync.c
> > @@ -38,19 +38,19 @@ static const struct {
> > #endif
> > } gp_ops[] = {
> > [RCU_SYNC] = {
> > - .sync = synchronize_rcu,
> > + .sync = synchronize_rcu_expedited,
> > .call = call_rcu,
> > .wait = rcu_barrier,
> > __INIT_HELD(rcu_read_lock_held)
> > },
> > [RCU_SCHED_SYNC] = {
> > - .sync = synchronize_sched,
> > + .sync = synchronize_sched_expedited,
> > .call = call_rcu_sched,
> > .wait = rcu_barrier_sched,
> > __INIT_HELD(rcu_read_lock_sched_held)
> > },
> > [RCU_BH_SYNC] = {
> > - .sync = synchronize_rcu_bh,
> > + .sync = synchronize_rcu_bh_expedited,
> > .call = call_rcu_bh,
> > .wait = rcu_barrier_bh,
> > __INIT_HELD(rcu_read_lock_bh_held)
> >
>