From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Josh Triplett <josh@joshtriplett.org>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	Andi Kleen <ak@linux.intel.com>, Christoph Lameter <cl@linux.com>
Subject: Re: [bisected] pre-3.16 regression on open() scalability
Date: Wed, 18 Jun 2014 05:58:31 -0700
Message-ID: <20140618125831.GB4669@linux.vnet.ibm.com>
In-Reply-To: <53A132D4.60408@intel.com>

On Tue, Jun 17, 2014 at 11:33:56PM -0700, Dave Hansen wrote:
> On 06/17/2014 05:18 PM, Paul E. McKenney wrote:
> > So if I understand correctly, a goodly part of the regression is due not
> > to the overhead added to cond_resched(), but rather to grace periods now
> > happening faster, thus incurring more overhead.  Is that correct?
> 
> Yes, that's the theory at least.
> 
> > If this is the case, could you please let me know roughly how sensitive
> > the performance is to the time delay in RCU_COND_RESCHED_EVERY_THIS_JIFFIES?
> 
> This is the previous kernel, plus RCU tracing, so it's not 100%
> apples-to-apples (and it peaks a bit lower than the other kernel).  But
> here's the will-it-scale open1 throughput on the y axis vs
> RCU_COND_RESCHED_EVERY_THIS_JIFFIES on x:
> 
> 	http://sr71.net/~dave/intel/jiffies-vs-openops.png
> 
> This was a quick and dirty single run with very little averaging, so I
> expect there to be a good amount of noise.  I ran it from 1->100, but it
> seemed to peak at about 30.

OK, so a default setting on the order of 20-30 jiffies looks promising.
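
For anyone following along, the knob being swept above is essentially a
jiffies-based throttle on how often cond_resched() does extra RCU work.
The sketch below only illustrates that idea; the per-CPU variable name and
the rcu_note_quiescent_state() hook are assumptions for illustration, not
the actual patch:

	#include <linux/jiffies.h>
	#include <linux/percpu.h>

	#define RCU_COND_RESCHED_EVERY_THIS_JIFFIES	30	/* ~20-30 per the sweep above */

	/* Next time this CPU is allowed to poke the RCU core (assumed name). */
	static DEFINE_PER_CPU(unsigned long, rcu_cond_resched_jiffies);

	static inline void rcu_cond_resched_note_qs(void)
	{
		/* Fast path: one per-CPU load plus a jiffies comparison. */
		if (time_after(jiffies, __this_cpu_read(rcu_cond_resched_jiffies))) {
			__this_cpu_write(rcu_cond_resched_jiffies,
					 jiffies + RCU_COND_RESCHED_EVERY_THIS_JIFFIES);
			rcu_note_quiescent_state();	/* hypothetical hook into the RCU core */
		}
	}

A larger RCU_COND_RESCHED_EVERY_THIS_JIFFIES stretches grace periods back
out, trading grace-period latency for less per-call overhead, which is
consistent with the open1 throughput peaking near the ~30-jiffy mark in the
plot above.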

> > The patch looks promising.  I will probably drive the time-setup deeper
> > into the guts of RCU, which should allow moving the access to jiffies
> > and the comparison off of the fast path as well, but this appears to
> > me to be good and sufficient for others encountering this same problem
> > in the meantime.
> 
> Yeah, the more overhead we can push out of cond_resched(), the better.
> I had no idea how much we call it!

Me neither!
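
To make "moving the access to jiffies and the comparison off of the fast
path" concrete, one possible shape (a sketch under assumed names, not the
actual follow-up patch; same headers as the earlier sketch plus
<linux/cpumask.h>) is to let the grace-period kthread, which already wakes
up periodically, mark holdout CPUs, so that cond_resched() itself is
reduced to a single per-CPU flag test:

	/* Per-CPU "please report a quiescent state" flag, name assumed. */
	static DEFINE_PER_CPU(bool, rcu_urgent_qs_needed);

	/* Fast path, called from cond_resched(): one load, usually not taken. */
	static inline void rcu_cond_resched_check(void)
	{
		if (unlikely(__this_cpu_read(rcu_urgent_qs_needed))) {
			__this_cpu_write(rcu_urgent_qs_needed, false);
			rcu_note_quiescent_state();	/* hypothetical hook into the RCU core */
		}
	}

	/*
	 * Slow path, run from the grace-period kthread: flag all online CPUs
	 * once the current grace period has been pending for too long.
	 */
	static void rcu_flag_holdout_cpus(unsigned long gp_start)
	{
		int cpu;

		if (!time_after(jiffies, gp_start + RCU_COND_RESCHED_EVERY_THIS_JIFFIES))
			return;
		for_each_online_cpu(cpu)
			per_cpu(rcu_urgent_qs_needed, cpu) = true;
	}

That keeps the common case down to testing one per-CPU variable and pushes
both the jiffies read and the comparison into code that runs only a few
times per grace period.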

							Thanx, Paul


Thread overview: 49+ messages
2014-06-13 20:04 [bisected] pre-3.16 regression on open() scalability Dave Hansen
2014-06-13 22:45 ` Paul E. McKenney
2014-06-13 23:35   ` Dave Hansen
2014-06-14  2:03     ` Paul E. McKenney
2014-06-17 23:10   ` Dave Hansen
2014-06-18  0:00     ` Josh Triplett
2014-06-18  0:15     ` Andi Kleen
2014-06-18  1:04       ` Paul E. McKenney
2014-06-18  2:27         ` Andi Kleen
2014-06-18  4:47           ` Paul E. McKenney
2014-06-18 12:40             ` Andi Kleen
2014-06-18 12:56               ` Paul E. McKenney
2014-06-18 14:29       ` Christoph Lameter
2014-06-18  0:18     ` Paul E. McKenney
2014-06-18  6:33       ` Dave Hansen
2014-06-18 12:58         ` Paul E. McKenney [this message]
2014-06-18 17:36           ` Dave Hansen
2014-06-18 20:30             ` Paul E. McKenney
2014-06-18 23:51               ` Paul E. McKenney
2014-06-19  1:42                 ` Andi Kleen
2014-06-19  2:13                   ` Paul E. McKenney
2014-06-19  2:29                     ` Paul E. McKenney
2014-06-19  2:50                     ` Mike Galbraith
2014-06-19  4:19                       ` Paul E. McKenney
2014-06-19  3:38                     ` Andi Kleen
2014-06-19  4:19                       ` Paul E. McKenney
2014-06-19  5:24                         ` Mike Galbraith
2014-06-19 18:14                           ` Paul E. McKenney
2014-06-19  4:52                       ` Eric Dumazet
2014-06-19  5:23                         ` Paul E. McKenney
2014-06-19 14:42                   ` Christoph Lameter
2014-06-19 18:09                     ` Paul E. McKenney
2014-06-19 20:31                       ` Christoph Lameter
2014-06-19 20:42                         ` Paul E. McKenney
2014-06-19 20:50                           ` Andi Kleen
2014-06-19 21:03                             ` Paul E. McKenney
2014-06-19 21:13                           ` Christoph Lameter
2014-06-19 21:16                             ` Christoph Lameter
2014-06-19 21:32                               ` josh
2014-06-19 23:07                                 ` Paul E. McKenney
2014-06-20 15:20                                   ` Christoph Lameter
2014-06-20 15:38                                     ` Paul E. McKenney
2014-06-20 16:07                                       ` Christoph Lameter
2014-06-20 16:30                                         ` Paul E. McKenney
2014-06-20 17:39                                           ` Dave Hansen
2014-06-20 18:15                                             ` Paul E. McKenney
2014-06-18 21:48 ` Paul E. McKenney
2014-06-18 22:03   ` Dave Hansen
2014-06-18 22:52     ` Paul E. McKenney
