From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: linux-kernel@vger.kernel.org, josh@joshtriplett.org,
rostedt@goodmis.org, mathieu.desnoyers@efficios.com,
jiangshanlai@gmail.com
Subject: Re: dyntick-idle CPU and node's qsmask
Date: Sat, 10 Nov 2018 15:04:36 -0800 [thread overview]
Message-ID: <20181110230436.GL4170@linux.ibm.com> (raw)
In-Reply-To: <20181110214659.GA96924@google.com>
On Sat, Nov 10, 2018 at 01:46:59PM -0800, Joel Fernandes wrote:
> Hi Paul and everyone,
>
> I was tracing/studying the RCU code today in paul/dev branch and noticed that
> for dyntick-idle CPUs, the RCU GP thread is clearing the rnp->qsmask
> corresponding to the leaf node for the idle CPU, and reporting a QS on their
> behalf.
>
> rcu_sched-10 [003] 40.008039: rcu_fqs: rcu_sched 792 0 dti
> rcu_sched-10 [003] 40.008039: rcu_fqs: rcu_sched 801 2 dti
> rcu_sched-10 [003] 40.008041: rcu_quiescent_state_report: rcu_sched 805 5>0 0 0 3 0
>
> That's all good, but I was wondering if we can do better for the idle CPUs
> if we can somehow not set the qsmask of the node in the first place. Then no
> reporting of quiescent states would be needed for idle CPUs, right?
> And we would also not need to acquire the rnp lock, I think.
>
> At least for a single node tree RCU system, it seems that would avoid needing
> to acquire the lock without complications. Anyway let me know your thoughts
> and happy to discuss this at the hallways of the LPC as well for folks
> attending :)
We could, but that would require consulting the rcu_data structure for
each CPU while initializing the grace period, thus increasing the number
of cache misses during grace-period initialization and also shortly after
for any non-idle CPUs. This seems backwards on busy systems, where each
CPU will, with high probability, report its own quiescent state before
three jiffies pass, in which case the cache misses on the rcu_data
structures would be wasted motion.
Now, this does increase overhead on mostly idle systems, but the theory
is that mostly idle systems are most able to absorb this extra overhead.
Thoughts?
Thanx, Paul
Thread overview: 14+ messages
2018-11-10 21:46 dyntick-idle CPU and node's qsmask Joel Fernandes
2018-11-10 23:04 ` Paul E. McKenney [this message]
2018-11-11 3:09 ` Joel Fernandes
2018-11-11 4:22 ` Paul E. McKenney
2018-11-11 18:09 ` Joel Fernandes
2018-11-11 18:36 ` Paul E. McKenney
2018-11-11 21:04 ` Joel Fernandes
2018-11-20 20:42 ` Joel Fernandes
2018-11-20 22:28 ` Paul E. McKenney
2018-11-20 22:34 ` Paul E. McKenney
2018-11-21 2:06 ` Joel Fernandes
2018-11-21 2:41 ` Paul E. McKenney
2018-11-21 4:37 ` Joel Fernandes
2018-11-21 14:39 ` Paul E. McKenney