From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Pranith Kumar <bobby.prani@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>,
"open list:READ-COPY UPDATE..." <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/1] rcu: remove remaining read-modify-write ACCESS_ONCE() calls
Date: Tue, 8 Jul 2014 15:09:27 -0700
Message-ID: <20140708220927.GC4603@linux.vnet.ibm.com>
In-Reply-To: <1404856010-7506-1-git-send-email-bobby.prani@gmail.com>
On Tue, Jul 08, 2014 at 05:46:50PM -0400, Pranith Kumar wrote:
> Change the remaining uses of ACCESS_ONCE() so that each ACCESS_ONCE() either does a load or a store, but not both.
>
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Queued for 3.18, thank you Pranith!
Thanx, Paul
> ---
> kernel/rcu/tree.c | 6 ++++--
> kernel/rcu/tree_plugin.h | 8 +++++---
> 2 files changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index dac6d20..c356bf6 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1700,7 +1700,8 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
> if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
> raw_spin_lock_irq(&rnp->lock);
> smp_mb__after_unlock_lock();
> - ACCESS_ONCE(rsp->gp_flags) &= ~RCU_GP_FLAG_FQS;
> + ACCESS_ONCE(rsp->gp_flags) =
> + ACCESS_ONCE(rsp->gp_flags) & ~RCU_GP_FLAG_FQS;
> raw_spin_unlock_irq(&rnp->lock);
> }
> return fqs_state;
> @@ -2514,7 +2515,8 @@ static void force_quiescent_state(struct rcu_state *rsp)
> raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
> return; /* Someone beat us to it. */
> }
> - ACCESS_ONCE(rsp->gp_flags) |= RCU_GP_FLAG_FQS;
> + ACCESS_ONCE(rsp->gp_flags) =
> + ACCESS_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS;
> raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
> wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */
> }
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 637a8a9..f87b88c 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -897,7 +897,8 @@ void synchronize_rcu_expedited(void)
>
> /* Clean up and exit. */
> smp_mb(); /* ensure expedited GP seen before counter increment. */
> - ACCESS_ONCE(sync_rcu_preempt_exp_count)++;
> + ACCESS_ONCE(sync_rcu_preempt_exp_count) =
> + sync_rcu_preempt_exp_count + 1;
> unlock_mb_ret:
> mutex_unlock(&sync_rcu_preempt_exp_mutex);
> mb_ret:
> @@ -2307,8 +2308,9 @@ static int rcu_nocb_kthread(void *arg)
> list = next;
> }
> trace_rcu_batch_end(rdp->rsp->name, c, !!list, 0, 0, 1);
> - ACCESS_ONCE(rdp->nocb_p_count) -= c;
> - ACCESS_ONCE(rdp->nocb_p_count_lazy) -= cl;
> + ACCESS_ONCE(rdp->nocb_p_count) = rdp->nocb_p_count - c;
> + ACCESS_ONCE(rdp->nocb_p_count_lazy) =
> + rdp->nocb_p_count_lazy - cl;
> rdp->n_nocbs_invoked += c;
> }
> return 0;
> --
> 1.9.1
>
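For context, the pattern being converted is a read-modify-write done through the
ACCESS_ONCE() volatile cast. The sketch below is plain C, not kernel code: the
gp_flags variable, the flag value, and the two helper functions are stand-ins
chosen only to illustrate the before/after shape of the change; the ACCESS_ONCE()
definition matches the kernel's compiler.h of that era.

    /* Standalone illustration of the conversion in this patch. */
    #include <stdio.h>

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    static unsigned long gp_flags;      /* stand-in for rsp->gp_flags */
    #define RCU_GP_FLAG_FQS 0x2UL       /* value assumed for illustration */

    static void clear_fqs_old(void)
    {
            /* Before: a single expression performs both a volatile load
             * and a volatile store (a read-modify-write). */
            ACCESS_ONCE(gp_flags) &= ~RCU_GP_FLAG_FQS;
    }

    static void clear_fqs_new(void)
    {
            /* After: the load and the store are separate ACCESS_ONCE()
             * uses, so each one does exactly one access. */
            ACCESS_ONCE(gp_flags) = ACCESS_ONCE(gp_flags) & ~RCU_GP_FLAG_FQS;
    }

    int main(void)
    {
            gp_flags = RCU_GP_FLAG_FQS;
            clear_fqs_old();
            printf("old form, flags after clear: %lx\n", gp_flags);

            gp_flags = RCU_GP_FLAG_FQS;
            clear_fqs_new();
            printf("new form, flags after clear: %lx\n", gp_flags);
            return 0;
    }

Later kernels spell this same split with READ_ONCE()/WRITE_ONCE(), but the
ACCESS_ONCE() form shown above is what this patch targets.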