From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Byungchul Park <byungchul.park@lge.com>
Cc: jiangshanlai@gmail.com, josh@joshtriplett.org,
rostedt@goodmis.org, mathieu.desnoyers@efficios.com,
linux-kernel@vger.kernel.org, kernel-team@lge.com,
joel@joelfernandes.org
Subject: Re: [RFC 1/2] rcu: Do prepare and cleanup idle depending on in_nmi()
Date: Wed, 20 Jun 2018 07:50:58 -0700 [thread overview]
Message-ID: <20180620145058.GP3593@linux.vnet.ibm.com> (raw)
In-Reply-To: <1529484440-20634-1-git-send-email-byungchul.park@lge.com>
On Wed, Jun 20, 2018 at 05:47:19PM +0900, Byungchul Park wrote:
> Get rid of dependency on ->dynticks_nmi_nesting.
>
> Signed-off-by: Byungchul Park <byungchul.park@lge.com>
> ---
> kernel/rcu/tree.c | 22 ++++++++++------------
> 1 file changed, 10 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index deb2508..59ae94e 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -797,6 +797,11 @@ void rcu_nmi_exit(void)
> return;
> }
>
> + if (!in_nmi()) {
Is in_nmi() sufficiently reliable for use here? In the past, there have
been tracepoints that invoked these functions between the time that the
handlers were entered and the time that software updated the state so that
the various handler-check functions (such as in_nmi()) would return true.
Steve, has there been any change in this situation?
Thanx, Paul
> + rcu_prepare_for_idle();
> + rcu_dynticks_task_enter();
> + }
> +
> /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
> trace_rcu_dyntick(TPS("Startirq"), rdtp->dynticks_nmi_nesting, 0, rdtp->dynticks);
> WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
> @@ -824,14 +829,8 @@ void rcu_nmi_exit(void)
> */
> void rcu_irq_exit(void)
> {
> - struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> -
> lockdep_assert_irqs_disabled();
> - if (rdtp->dynticks_nmi_nesting == 1)
> - rcu_prepare_for_idle();
> rcu_nmi_exit();
> - if (rdtp->dynticks_nmi_nesting == 0)
> - rcu_dynticks_task_enter();
> }
>
> /*
> @@ -944,6 +943,11 @@ void rcu_nmi_enter(void)
> if (rcu_dynticks_curr_cpu_in_eqs()) {
> rcu_dynticks_eqs_exit();
> incby = 1;
> +
> + if (!in_nmi()) {
> + rcu_dynticks_task_exit();
> + rcu_cleanup_after_idle();
> + }
> }
> trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
> rdtp->dynticks_nmi_nesting,
> @@ -977,14 +981,8 @@ void rcu_nmi_enter(void)
> */
> void rcu_irq_enter(void)
> {
> - struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> -
> lockdep_assert_irqs_disabled();
> - if (rdtp->dynticks_nmi_nesting == 0)
> - rcu_dynticks_task_exit();
> rcu_nmi_enter();
> - if (rdtp->dynticks_nmi_nesting == 1)
> - rcu_cleanup_after_idle();
> }
>
> /*
> --
> 1.9.1
>
Thread overview: 58+ messages
2018-06-20 8:47 [RFC 1/2] rcu: Do prepare and cleanup idle depending on in_nmi() Byungchul Park
2018-06-20 8:47 ` [RFC 2/2] rcu: Remove ->dynticks_nmi_nesting from struct rcu_dynticks Byungchul Park
2018-06-20 14:58 ` Paul E. McKenney
2018-06-20 16:05 ` Byungchul Park
2018-06-20 16:49 ` Paul E. McKenney
2018-06-20 17:15 ` Byungchul Park
2018-06-20 17:40 ` Paul E. McKenney
2018-06-21 6:39 ` Byungchul Park
2018-06-21 6:48 ` Byungchul Park
2018-06-21 10:08 ` Byungchul Park
2018-06-21 15:05 ` Paul E. McKenney
2018-06-21 15:04 ` Paul E. McKenney
2018-06-22 3:00 ` Byungchul Park
2018-06-22 13:36 ` Paul E. McKenney
2018-06-22 5:56 ` Joel Fernandes
2018-06-22 13:28 ` Paul E. McKenney
2018-06-22 14:19 ` Andy Lutomirski
2018-06-22 16:12 ` Paul E. McKenney
2018-06-22 16:01 ` Steven Rostedt
2018-06-22 18:14 ` Paul E. McKenney
2018-06-22 18:19 ` Joel Fernandes
2018-06-22 18:32 ` Steven Rostedt
2018-06-22 20:05 ` Joel Fernandes
2018-06-25 8:28 ` Byungchul Park
2018-06-25 16:39 ` Joel Fernandes
2018-06-25 17:19 ` Paul E. McKenney
2018-06-25 19:15 ` Joel Fernandes
2018-06-25 20:25 ` Steven Rostedt
2018-06-25 20:47 ` Paul E. McKenney
2018-06-25 20:47 ` Andy Lutomirski
2018-06-25 22:16 ` Steven Rostedt
2018-06-25 23:30 ` Andy Lutomirski
2018-06-25 22:15 ` Steven Rostedt
2018-06-25 23:32 ` Andy Lutomirski
2018-06-25 21:25 ` Joel Fernandes
2018-06-22 20:58 ` Paul E. McKenney
2018-06-22 20:58 ` Paul E. McKenney
2018-06-22 21:00 ` Steven Rostedt
2018-06-22 21:16 ` Paul E. McKenney
2018-06-22 22:03 ` Andy Lutomirski
2018-06-23 17:53 ` Paul E. McKenney
2018-06-28 20:02 ` Paul E. McKenney
2018-06-28 21:13 ` Joel Fernandes
2018-06-28 21:41 ` Paul E. McKenney
2018-06-23 15:48 ` Joel Fernandes
2018-06-23 17:56 ` Paul E. McKenney
2018-06-24 3:02 ` Joel Fernandes
2018-06-20 13:33 ` [RFC 1/2] rcu: Do prepare and cleanup idle depending on in_nmi() Steven Rostedt
2018-06-20 14:58 ` Paul E. McKenney
2018-06-20 15:25 ` Byungchul Park
2018-06-20 14:50 ` Paul E. McKenney [this message]
2018-06-20 15:43 ` Steven Rostedt
2018-06-20 15:56 ` Paul E. McKenney
2018-06-20 16:11 ` Byungchul Park
2018-06-20 16:14 ` Steven Rostedt
2018-06-20 16:37 ` Byungchul Park
2018-06-20 16:11 ` Steven Rostedt
2018-06-20 16:30 ` Paul E. McKenney