From: Daniel J Blueman <daniel@numascale.com>
To: paulmck@linux.vnet.ibm.com
Cc: Steffen Persvold <sp@numascale.com>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: RCU fanout leaf balancing
Date: Mon, 27 Oct 2014 09:08:07 +0800 [thread overview]
Message-ID: <544D9AF7.2080206@numascale.com> (raw)
In-Reply-To: <20141025134835.GD28247@linux.vnet.ibm.com>
On 25/10/2014 21:48, Paul E. McKenney wrote:
> On Sat, Oct 25, 2014 at 02:59:24PM +0800, Daniel J Blueman wrote:
>> Hi Paul,
>>
>> Finding earlier reference to increasing RCU fanout leaf for the
>> purpose of "decrease[ing] cache-miss overhead for large systems",
>> would your suggestion be to increase the value to the next hierarchy
>> core-count above 16?
>>
>> If we have say 32 interconnected 48-core servers; 3 sockets of
>> dual-node 8-core Opteron 6300s, so 1536 cores in all. Latency across
>> the coherent interconnect is O(100x) higher than the internal
>> Hypertransport interconnect, so if we set RCU_FANOUT_LEAF to 48 to
>> keep leaf-checking local to one Hypertransport fabric, what wisdom
>> would one use for RCU_FANOUT? 4x leaf?
>>
>> Or, would it be more cache-friendly to set RCU_FANOUT_LEAF to 8 and
>> RCU_FANOUT to 48?
>
> The easiest approach would be to use the default of 16. Assuming
> consecutive CPU numbering within each 48-core server, this would mean that
> you would have three rcu_node structures per 48-core server. The next
> level up would of course span servers, but that level is accessed much
> less frequently than is the root level, so this should still work.
>
> If you also have hyperthreading, so that there are 96 hardware threads
> per server, and if you are using the same "interesting" numbering scheme
> that Intel uses, then this still works. You have three leaf rcu_node
> structure for the first set of hardware threads and another set of three
> for the second set of hardware threads.
>
> Or are you seeing some problem with the default? If so, please tell me
> what that problem is.
>
> You can of course increase RCU_FANOUT to 24 or 48 (this latter assuming
> a 64-bit kernel), at least if you are using a recent kernel. However,
> the penalty for too large a value for RCU_FANOUT is lock contention at
> scheduling-clock-interrupt time. So if you are setting RCU_FANOUT to 48,
> you probably also want to boot with skew_tick set.
>
> But the best approach is to try it. I bet that the default will work
> just fine for you. ;-)
Good info. I'll stick with the defaults of 16/64, will schedule some
tuning later, and let you know if I find anything significant.
Thanks Paul!
Daniel
--
Daniel J Blueman
Principal Software Engineer, Numascale
prev parent reply other threads:[~2014-10-27 1:08 UTC|newest]
Thread overview: 3+ messages / expand[flat|nested] mbox.gz Atom feed top
2014-10-25 6:59 RCU fanout leaf balancing Daniel J Blueman
2014-10-25 13:48 ` Paul E. McKenney
2014-10-27 1:08 ` Daniel J Blueman [this message]
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=544D9AF7.2080206@numascale.com \
--to=daniel@numascale.com \
--cc=linux-kernel@vger.kernel.org \
--cc=paulmck@linux.vnet.ibm.com \
--cc=sp@numascale.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).