From: Dave Hansen <dave.hansen@intel.com>
To: paulmck@linux.vnet.ibm.com
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org,
laijs@cn.fujitsu.com, dipankar@in.ibm.com,
akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
dvhart@linux.intel.com, fweisbec@gmail.com, oleg@redhat.com,
ak@linux.intel.com, cl@gentwo.org, umgwanakikbuti@gmail.com
Subject: Re: [PATCH tip/core/rcu] Reduce overhead of cond_resched() checks for RCU
Date: Mon, 23 Jun 2014 16:30:12 -0700
Message-ID: <53A8B884.6000600@intel.com>
In-Reply-To: <20140623180945.GL4603@linux.vnet.ibm.com>
On 06/23/2014 11:09 AM, Paul E. McKenney wrote:
> So let's see... The open1 benchmark sits in a loop doing open()
> and close(), and probably spends most of its time in the kernel.
> It doesn't do much context switching. I am guessing that you don't
> have CONFIG_NO_HZ_FULL=y, or the boot/sysfs parameter would not have
> much effect because then the first quiescent-state-forcing attempt would
> likely finish the grace period.
>
> So, given that short grace periods help other workloads (I have the
> scars to prove it), and given that the patch fixes some real problems,
I'm not arguing that short grace periods _can_ help some workloads, or
that one setting is better than the other in general. The patch in
question changes existing behavior by shortening grace periods, and that
change removes some of the benefit that my system gets out of RCU. I
suspect this affects a lot more systems than mine, but my core count
makes it easier to see.
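For context on what the numbers below are measuring: the open1 test from
will-it-scale is essentially the loop sketched here. This is written from
the description above, not copied from the actual harness; the function
name and the use of /dev/null are illustrative only.

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch of the open1 hot loop: each iteration enters the kernel
 * twice (open + close) and almost never context-switches, so any
 * extra RCU grace-period forcing work shows up directly as added
 * per-syscall cost. Returns the number of completed iterations. */
long open_close_iterations(const char *path, long iters)
{
	long completed = 0;

	for (long i = 0; i < iters; i++) {
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			break;
		close(fd);
		completed++;
	}
	return completed;
}
```

The real benchmark runs one such loop per thread/process and reports
iterations per second as the thread count scales up.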
Perhaps I'm misunderstanding the original patch's intent, but it seemed
to me to be working around an overactive debug message. While often a
_useful_ debug message, it was firing falsely in the case being
addressed in the patch.
> and given that the large number for rcutree.jiffies_till_sched_qs got
> us within 3%, shouldn't we consider this issue closed?
With the default value for the tunable, the regression is still solidly
over 10%. I think we can have a reasonable argument about it once the
default delta is down to the small single digits.
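For anyone trying to reproduce the comparison, the tunable can be set
either on the kernel command line or, on kernels where the module
parameter is exported writable, at runtime; the value 12 is the one used
in the runs below:

```
# At boot, on the kernel command line:
#   rcutree.jiffies_till_sched_qs=12
#
# Or at runtime (assuming the parameter is writable on this kernel):
echo 12 > /sys/module/rcutree/parameters/jiffies_till_sched_qs
```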
One more thing I just realized: this isn't a scalability problem, at
least with rcutree.jiffies_till_sched_qs=12. There's a pretty
consistent delta in throughput throughout the entire range of threads
from 1->160. See the "processes" column in the data files:
plain 3.15:
  https://www.sr71.net/~dave/intel/willitscale/systems/bigbox/3.15/open1.csv
e552592e0383bc:
  https://www.sr71.net/~dave/intel/willitscale/systems/bigbox/3.16.0-rc1-pf2/open1.csv
or visually:
  https://www.sr71.net/~dave/intel/array-join.html?1=willitscale/systems/bigbox/3.15&2=willitscale/systems/bigbox/3.16.0-rc1-pf2&hide=linear,threads_idle,processes_idle