linux-kernel.vger.kernel.org archive mirror
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
	rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
	dvhart@linux.intel.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 07/19] rcu: Abstract sequence counting from synchronize_sched_expedited()
Date: Fri, 17 Jul 2015 16:29:12 -0700	[thread overview]
Message-ID: <1437175764-24096-7-git-send-email-paulmck@linux.vnet.ibm.com> (raw)
In-Reply-To: <1437175764-24096-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit creates rcu_exp_gp_seq_start() and rcu_exp_gp_seq_end() to
bracket an expedited grace period, rcu_exp_gp_seq_snap() to snapshot the
sequence counter, and rcu_exp_gp_seq_done() to check to see if a full
expedited grace period has elapsed since the snapshot.  These will be
applied to synchronize_rcu_expedited().  They are defined in terms of the
underlying rcu_seq_start(), rcu_seq_end(), rcu_seq_snap(), and rcu_seq_done()
primitives, which will later be applied to _rcu_barrier().

One reason that this commit does not use the seqcount primitives themselves
is that the smp_wmb() in those primitives is insufficient, given that
expedited grace periods do reads as well as writes.  In addition, the
read-side seqcount primitives detect a potentially partial change, whereas
the expedited primitives instead need a guaranteed full change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 58 insertions(+), 10 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c5c8509054ef..67fe75725486 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3255,6 +3255,60 @@ void cond_synchronize_rcu(unsigned long oldstate)
 }
 EXPORT_SYMBOL_GPL(cond_synchronize_rcu);
 
+/* Adjust sequence number for start of update-side operation. */
+static void rcu_seq_start(unsigned long *sp)
+{
+	WRITE_ONCE(*sp, *sp + 1);
+	smp_mb(); /* Ensure update-side operation after counter increment. */
+	WARN_ON_ONCE(!(*sp & 0x1));
+}
+
+/* Adjust sequence number for end of update-side operation. */
+static void rcu_seq_end(unsigned long *sp)
+{
+	smp_mb(); /* Ensure update-side operation before counter increment. */
+	WRITE_ONCE(*sp, *sp + 1);
+	WARN_ON_ONCE(*sp & 0x1);
+}
+
+/* Take a snapshot of the update side's sequence number. */
+static unsigned long rcu_seq_snap(unsigned long *sp)
+{
+	unsigned long s;
+
+	smp_mb(); /* Caller's modifications seen first by other CPUs. */
+	s = (READ_ONCE(*sp) + 3) & ~0x1;
+	smp_mb(); /* Above access must not bleed into critical section. */
+	return s;
+}
+
+/*
+ * Given a snapshot from rcu_seq_snap(), determine whether or not a
+ * full update-side operation has occurred.
+ */
+static bool rcu_seq_done(unsigned long *sp, unsigned long s)
+{
+	return ULONG_CMP_GE(READ_ONCE(*sp), s);
+}
+
+/* Wrapper functions for expedited grace periods.  */
+static void rcu_exp_gp_seq_start(struct rcu_state *rsp)
+{
+	rcu_seq_start(&rsp->expedited_sequence);
+}
+static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
+{
+	rcu_seq_end(&rsp->expedited_sequence);
+}
+static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
+{
+	return rcu_seq_snap(&rsp->expedited_sequence);
+}
+static bool rcu_exp_gp_seq_done(struct rcu_state *rsp, unsigned long s)
+{
+	return rcu_seq_done(&rsp->expedited_sequence, s);
+}
+
 static int synchronize_sched_expedited_cpu_stop(void *data)
 {
 	struct rcu_state *rsp = data;
@@ -3269,7 +3323,7 @@ static int synchronize_sched_expedited_cpu_stop(void *data)
 static bool sync_sched_exp_wd(struct rcu_state *rsp, struct rcu_node *rnp,
 			      atomic_long_t *stat, unsigned long s)
 {
-	if (ULONG_CMP_GE(READ_ONCE(rsp->expedited_sequence), s)) {
+	if (rcu_exp_gp_seq_done(rsp, s)) {
 		if (rnp)
 			mutex_unlock(&rnp->exp_funnel_mutex);
 		/* Ensure test happens before caller kfree(). */
@@ -3306,9 +3360,7 @@ void synchronize_sched_expedited(void)
 	struct rcu_state *rsp = &rcu_sched_state;
 
 	/* Take a snapshot of the sequence number.  */
-	smp_mb(); /* Caller's modifications seen first by other CPUs. */
-	s = (READ_ONCE(rsp->expedited_sequence) + 3) & ~0x1;
-	smp_mb(); /* Above access must not bleed into critical section. */
+	s = rcu_exp_gp_seq_snap(rsp);
 
 	if (!try_get_online_cpus()) {
 		/* CPU hotplug operation in flight, fall back to normal GP. */
@@ -3339,9 +3391,7 @@ void synchronize_sched_expedited(void)
 	if (sync_sched_exp_wd(rsp, rnp0, &rsp->expedited_workdone2, s))
 		return;
 
-	WRITE_ONCE(rsp->expedited_sequence, rsp->expedited_sequence + 1);
-	smp_mb(); /* Ensure expedited GP seen after counter increment. */
-	WARN_ON_ONCE(!(rsp->expedited_sequence & 0x1));
+	rcu_exp_gp_seq_start(rsp);
 
 	/* Stop each CPU that is online, non-idle, and not us. */
 	init_waitqueue_head(&rsp->expedited_wq);
@@ -3364,9 +3414,7 @@ void synchronize_sched_expedited(void)
 		wait_event(rsp->expedited_wq,
 			   !atomic_read(&rsp->expedited_need_qs));
 
-	smp_mb(); /* Ensure expedited GP seen before counter increment. */
-	WRITE_ONCE(rsp->expedited_sequence, rsp->expedited_sequence + 1);
-	WARN_ON_ONCE(rsp->expedited_sequence & 0x1);
+	rcu_exp_gp_seq_end(rsp);
 	mutex_unlock(&rnp0->exp_funnel_mutex);
 	smp_mb(); /* ensure subsequent action seen after grace period. */
 
-- 
1.8.1.5

