public inbox for linux-kernel@vger.kernel.org
* [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13
@ 2013-09-25  1:49 Paul E. McKenney
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
  2013-09-25  4:08 ` [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13 Josh Triplett
  0 siblings, 2 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw

Hello!

This series updates RCU's idle entry/exit processing:

1.	Remove redundant code from rcu_cleanup_after_idle().

2.	Throttle rcu_try_advance_all_cbs() execution to avoid kbuild
	slowdowns.

3.	Throttle non-lazy-callback-induced invoke_rcu_core() invocations.

4.	Add primitive to determine whether it is safe to enter an RCU
	read-side critical section.

5.	Upgrade EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL().

6.	Change rcu_is_cpu_idle() function to __rcu_is_watching() for
	naming consistency.

							Thanx, Paul


 b/include/linux/rcupdate.h |   26 +++++++++++-----------
 b/include/linux/rcutiny.h  |   25 ++++++++++++++++++----
 b/include/linux/rcutree.h  |    4 ++-
 b/kernel/lockdep.c         |    4 +--
 b/kernel/rcupdate.c        |    2 -
 b/kernel/rcutiny.c         |   10 ++++----
 b/kernel/rcutree.c         |   51 ++++++++++++++++++++++++++++-----------------
 b/kernel/rcutree.h         |    2 +
 b/kernel/rcutree_plugin.h  |   24 +++++++++++----------
 9 files changed, 92 insertions(+), 56 deletions(-)


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle()
  2013-09-25  1:49 [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13 Paul E. McKenney
@ 2013-09-25  1:50 ` Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 2/6] rcu: Throttle rcu_try_advance_all_cbs() execution Paul E. McKenney
                     ` (4 more replies)
  2013-09-25  4:08 ` [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13 Josh Triplett
  1 sibling, 5 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The rcu_try_advance_all_cbs() function returns a bool saying whether or
not there are callbacks ready to invoke, but rcu_cleanup_after_idle()
rechecks this regardless.  This commit therefore uses the value returned
by rcu_try_advance_all_cbs() instead of making rcu_cleanup_after_idle()
do this recheck.

Reported-by: Tibor Billes <tbilles@gmx.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Tibor Billes <tbilles@gmx.com>
---
 kernel/rcutree_plugin.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 130c97b..18d9c91 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1768,17 +1768,11 @@ static void rcu_prepare_for_idle(int cpu)
  */
 static void rcu_cleanup_after_idle(int cpu)
 {
-	struct rcu_data *rdp;
-	struct rcu_state *rsp;
 
 	if (rcu_is_nocb_cpu(cpu))
 		return;
-	rcu_try_advance_all_cbs();
-	for_each_rcu_flavor(rsp) {
-		rdp = per_cpu_ptr(rsp->rda, cpu);
-		if (cpu_has_callbacks_ready_to_invoke(rdp))
-			invoke_rcu_core();
-	}
+	if (rcu_try_advance_all_cbs())
+		invoke_rcu_core();
 }
 
 /*
-- 
1.8.1.5



* [PATCH tip/core/rcu 2/6] rcu: Throttle rcu_try_advance_all_cbs() execution
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
@ 2013-09-25  1:50   ` Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 3/6] rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks Paul E. McKenney
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The rcu_try_advance_all_cbs() function is invoked on each attempted
entry to and every exit from idle.  If this function determines that
there are callbacks ready to invoke, the caller will invoke the RCU
core, which in turn will result in a pair of context switches.  If a
CPU enters and exits idle extremely frequently, this can result in
an excessive number of context switches and high CPU overhead.

This commit therefore causes rcu_try_advance_all_cbs() to throttle
itself, refusing to do work more than once per jiffy.

Reported-by: Tibor Billes <tbilles@gmx.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Tibor Billes <tbilles@gmx.com>
---
 kernel/rcutree.h        |  2 ++
 kernel/rcutree_plugin.h | 12 +++++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index 5f97eab..52be957 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -104,6 +104,8 @@ struct rcu_dynticks {
 				    /* idle-period nonlazy_posted snapshot. */
 	unsigned long last_accelerate;
 				    /* Last jiffy CBs were accelerated. */
+	unsigned long last_advance_all;
+				    /* Last jiffy CBs were all advanced. */
 	int tick_nohz_enabled_snap; /* Previously seen value from sysfs. */
 #endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
 };
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 18d9c91..d81e385 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1630,17 +1630,23 @@ module_param(rcu_idle_lazy_gp_delay, int, 0644);
 extern int tick_nohz_enabled;
 
 /*
- * Try to advance callbacks for all flavors of RCU on the current CPU.
- * Afterwards, if there are any callbacks ready for immediate invocation,
- * return true.
+ * Try to advance callbacks for all flavors of RCU on the current CPU, but
+ * only if it has been awhile since the last time we did so.  Afterwards,
+ * if there are any callbacks ready for immediate invocation, return true.
  */
 static bool rcu_try_advance_all_cbs(void)
 {
 	bool cbs_ready = false;
 	struct rcu_data *rdp;
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 	struct rcu_node *rnp;
 	struct rcu_state *rsp;
 
+	/* Exit early if we advanced recently. */
+	if (jiffies == rdtp->last_advance_all)
+		return 0;
+	rdtp->last_advance_all = jiffies;
+
 	for_each_rcu_flavor(rsp) {
 		rdp = this_cpu_ptr(rsp->rda);
 		rnp = rdp->mynode;
-- 
1.8.1.5



* [PATCH tip/core/rcu 3/6] rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 2/6] rcu: Throttle rcu_try_advance_all_cbs() execution Paul E. McKenney
@ 2013-09-25  1:50   ` Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 4/6] rcu: Is it safe to enter an RCU read-side critical section? Paul E. McKenney
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

If a non-lazy callback arrives on a CPU that has previously gone idle
with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
run.  However, it does not update the conditions, which could result
in several closely spaced invocations of the RCU core, which in turn
could result in an excessively high context-switch rate and resulting
high overhead.

This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
fields to prevent closely spaced invocations.

Reported-by: Tibor Billes <tbilles@gmx.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Tibor Billes <tbilles@gmx.com>
---
 kernel/rcutree_plugin.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index d81e385..2c15d7c 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
 	 */
 	if (rdtp->all_lazy &&
 	    rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
+		rdtp->all_lazy = false;
+		rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
 		invoke_rcu_core();
 		return;
 	}
-- 
1.8.1.5



* [PATCH tip/core/rcu 4/6] rcu: Is it safe to enter an RCU read-side critical section?
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 2/6] rcu: Throttle rcu_try_advance_all_cbs() execution Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 3/6] rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks Paul E. McKenney
@ 2013-09-25  1:50   ` Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 5/6] rcu: Change EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL() Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 6/6] rcu: Consistent rcu_is_watching() naming Paul E. McKenney
  4 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

There is currently no way for kernel code to determine whether it
is safe to enter an RCU read-side critical section, in other words,
whether or not RCU is paying attention to the currently running CPU.
Given the large and increasing quantity of code shared by the idle loop
and non-idle code, this shortcoming is becoming increasingly painful.

This commit therefore adds __rcu_is_watching(), which returns true if
it is safe to enter an RCU read-side critical section on the currently
running CPU.  This function is quite fast, using only a __this_cpu_read().
However, the caller must disable preemption.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h |  8 ++++----
 include/linux/rcutiny.h  |  9 +++++++++
 include/linux/rcutree.h  |  2 ++
 kernel/rcutiny.c         |  4 ++--
 kernel/rcutree.c         | 13 +++++++++++++
 5 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f1f1bc3..a53a21a 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -261,6 +261,10 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 		rcu_irq_exit(); \
 	} while (0)
 
+#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
+extern int rcu_is_cpu_idle(void);
+#endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
+
 /*
  * Infrastructure to implement the synchronize_() primitives in
  * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
@@ -297,10 +301,6 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head)
 }
 #endif	/* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
 
-#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_SMP)
-extern int rcu_is_cpu_idle(void);
-#endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_SMP) */
-
 #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU)
 bool rcu_lockdep_current_cpu_online(void);
 #else /* #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index e31005e..bee6659 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -132,4 +132,13 @@ static inline void rcu_scheduler_starting(void)
 }
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+#ifdef CONFIG_RCU_TRACE
+
+static inline bool __rcu_is_watching(void)
+{
+	return !rcu_is_cpu_idle();
+}
+
+#endif /* #ifdef CONFIG_RCU_TRACE */
+
 #endif /* __LINUX_RCUTINY_H */
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 226169d..293613d 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -90,4 +90,6 @@ extern void exit_rcu(void);
 extern void rcu_scheduler_starting(void);
 extern int rcu_scheduler_active __read_mostly;
 
+extern bool __rcu_is_watching(void);
+
 #endif /* __LINUX_RCUTREE_H */
diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
index 9ed6075..b4bc618 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcutiny.c
@@ -174,7 +174,7 @@ void rcu_irq_enter(void)
 }
 EXPORT_SYMBOL_GPL(rcu_irq_enter);
 
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE)
 
 /*
  * Test whether RCU thinks that the current CPU is idle.
@@ -185,7 +185,7 @@ int rcu_is_cpu_idle(void)
 }
 EXPORT_SYMBOL(rcu_is_cpu_idle);
 
-#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
 
 /*
  * Test whether the current CPU was interrupted from idle.  Nested
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 32618b3..910d868 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -671,6 +671,19 @@ int rcu_is_cpu_idle(void)
 }
 EXPORT_SYMBOL(rcu_is_cpu_idle);
 
+/**
+ * __rcu_is_watching - are RCU read-side critical sections safe?
+ *
+ * Return true if RCU is watching the running CPU, which means that
+ * this CPU can safely enter RCU read-side critical sections.  Unlike
+ * rcu_is_cpu_idle(), the caller of __rcu_is_watching() must have at
+ * least disabled preemption.
+ */
+bool __rcu_is_watching(void)
+{
+	return !!(atomic_read(this_cpu_ptr(&rcu_dynticks.dynticks)) & 0x1);
+}
+
 #if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU)
 
 /*
-- 
1.8.1.5



* [PATCH tip/core/rcu 5/6] rcu: Change EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL()
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
                     ` (2 preceding siblings ...)
  2013-09-25  1:50   ` [PATCH tip/core/rcu 4/6] rcu: Is it safe to enter an RCU read-side critical section? Paul E. McKenney
@ 2013-09-25  1:50   ` Paul E. McKenney
  2013-09-25  1:50   ` [PATCH tip/core/rcu 6/6] rcu: Consistent rcu_is_watching() naming Paul E. McKenney
  4 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Commit e6b80a3b (rcu: Detect illegal rcu dereference in extended
quiescent state) exported the pre-existing rcu_is_cpu_idle() function
using EXPORT_SYMBOL().  However, this is inconsistent with the remaining
exports from RCU, which are all EXPORT_SYMBOL_GPL().  The current state
of affairs means that a non-GPL module could use rcu_is_cpu_idle(),
but in a CONFIG_TREE_PREEMPT_RCU=y kernel would be unable to invoke
rcu_read_lock() and rcu_read_unlock().

This commit therefore makes rcu_is_cpu_idle()'s export be consistent
with the rest of RCU, namely EXPORT_SYMBOL_GPL().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/rcutree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 910d868..1b123e1 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -669,7 +669,7 @@ int rcu_is_cpu_idle(void)
 	preempt_enable();
 	return ret;
 }
-EXPORT_SYMBOL(rcu_is_cpu_idle);
+EXPORT_SYMBOL_GPL(rcu_is_cpu_idle);
 
 /**
  * __rcu_is_watching - are RCU read-side critical sections safe?
-- 
1.8.1.5



* [PATCH tip/core/rcu 6/6] rcu: Consistent rcu_is_watching() naming
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
                     ` (3 preceding siblings ...)
  2013-09-25  1:50   ` [PATCH tip/core/rcu 5/6] rcu: Change EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL() Paul E. McKenney
@ 2013-09-25  1:50   ` Paul E. McKenney
  4 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25  1:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The old rcu_is_cpu_idle() function is just __rcu_is_watching() with
preemption disabled.  This commit therefore renames rcu_is_cpu_idle()
to rcu_is_watching().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h | 18 +++++++++---------
 include/linux/rcutiny.h  | 16 ++++++++++++----
 include/linux/rcutree.h  |  2 +-
 kernel/lockdep.c         |  4 ++--
 kernel/rcupdate.c        |  2 +-
 kernel/rcutiny.c         |  6 +++---
 kernel/rcutree.c         | 36 ++++++++++++++++++------------------
 7 files changed, 46 insertions(+), 38 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index a53a21a..39cbb88 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -262,7 +262,7 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 	} while (0)
 
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
-extern int rcu_is_cpu_idle(void);
+extern bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
 
 /*
@@ -351,7 +351,7 @@ static inline int rcu_read_lock_held(void)
 {
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
-	if (rcu_is_cpu_idle())
+	if (!rcu_is_watching())
 		return 0;
 	if (!rcu_lockdep_current_cpu_online())
 		return 0;
@@ -402,7 +402,7 @@ static inline int rcu_read_lock_sched_held(void)
 
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
-	if (rcu_is_cpu_idle())
+	if (!rcu_is_watching())
 		return 0;
 	if (!rcu_lockdep_current_cpu_online())
 		return 0;
@@ -771,7 +771,7 @@ static inline void rcu_read_lock(void)
 	__rcu_read_lock();
 	__acquire(RCU);
 	rcu_lock_acquire(&rcu_lock_map);
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_lock() used illegally while idle");
 }
 
@@ -792,7 +792,7 @@ static inline void rcu_read_lock(void)
  */
 static inline void rcu_read_unlock(void)
 {
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_unlock() used illegally while idle");
 	rcu_lock_release(&rcu_lock_map);
 	__release(RCU);
@@ -821,7 +821,7 @@ static inline void rcu_read_lock_bh(void)
 	local_bh_disable();
 	__acquire(RCU_BH);
 	rcu_lock_acquire(&rcu_bh_lock_map);
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_lock_bh() used illegally while idle");
 }
 
@@ -832,7 +832,7 @@ static inline void rcu_read_lock_bh(void)
  */
 static inline void rcu_read_unlock_bh(void)
 {
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_unlock_bh() used illegally while idle");
 	rcu_lock_release(&rcu_bh_lock_map);
 	__release(RCU_BH);
@@ -857,7 +857,7 @@ static inline void rcu_read_lock_sched(void)
 	preempt_disable();
 	__acquire(RCU_SCHED);
 	rcu_lock_acquire(&rcu_sched_lock_map);
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_lock_sched() used illegally while idle");
 }
 
@@ -875,7 +875,7 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
  */
 static inline void rcu_read_unlock_sched(void)
 {
-	rcu_lockdep_assert(!rcu_is_cpu_idle(),
+	rcu_lockdep_assert(rcu_is_watching(),
 			   "rcu_read_unlock_sched() used illegally while idle");
 	rcu_lock_release(&rcu_sched_lock_map);
 	__release(RCU_SCHED);
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index bee6659..09ebcbe 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -132,13 +132,21 @@ static inline void rcu_scheduler_starting(void)
 }
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
-#ifdef CONFIG_RCU_TRACE
+#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE)
 
-static inline bool __rcu_is_watching(void)
+static inline bool rcu_is_watching(void)
 {
-	return !rcu_is_cpu_idle();
+	return __rcu_is_watching();
 }
 
-#endif /* #ifdef CONFIG_RCU_TRACE */
+#else /* defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
+
+static inline bool rcu_is_watching(void)
+{
+	return true;
+}
+
+
+#endif /* #else defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
 
 #endif /* __LINUX_RCUTINY_H */
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 293613d..4b9c815 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -90,6 +90,6 @@ extern void exit_rcu(void);
 extern void rcu_scheduler_starting(void);
 extern int rcu_scheduler_active __read_mostly;
 
-extern bool __rcu_is_watching(void);
+extern bool rcu_is_watching(void);
 
 #endif /* __LINUX_RCUTREE_H */
diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index e16c45b..4e8e14c 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -4224,7 +4224,7 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	printk("\n%srcu_scheduler_active = %d, debug_locks = %d\n",
 	       !rcu_lockdep_current_cpu_online()
 			? "RCU used illegally from offline CPU!\n"
-			: rcu_is_cpu_idle()
+			: !rcu_is_watching()
 				? "RCU used illegally from idle CPU!\n"
 				: "",
 	       rcu_scheduler_active, debug_locks);
@@ -4247,7 +4247,7 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	 * So complain bitterly if someone does call rcu_read_lock(),
 	 * rcu_read_lock_bh() and so on from extended quiescent states.
 	 */
-	if (rcu_is_cpu_idle())
+	if (!rcu_is_watching())
 		printk("RCU used illegally from extended quiescent state!\n");
 
 	lockdep_print_held_locks(curr);
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index b02a339..3b3c046 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -148,7 +148,7 @@ int rcu_read_lock_bh_held(void)
 {
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
-	if (rcu_is_cpu_idle())
+	if (!rcu_is_watching())
 		return 0;
 	if (!rcu_lockdep_current_cpu_online())
 		return 0;
diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
index b4bc618..0fa061d 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcutiny.c
@@ -179,11 +179,11 @@ EXPORT_SYMBOL_GPL(rcu_irq_enter);
 /*
  * Test whether RCU thinks that the current CPU is idle.
  */
-int rcu_is_cpu_idle(void)
+bool __rcu_is_watching(void)
 {
-	return !rcu_dynticks_nesting;
+	return rcu_dynticks_nesting;
 }
-EXPORT_SYMBOL(rcu_is_cpu_idle);
+EXPORT_SYMBOL(__rcu_is_watching);
 
 #endif /* defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
 
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 1b123e1..981d0c1 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -655,34 +655,34 @@ void rcu_nmi_exit(void)
 }
 
 /**
- * rcu_is_cpu_idle - see if RCU thinks that the current CPU is idle
+ * __rcu_is_watching - are RCU read-side critical sections safe?
+ *
+ * Return true if RCU is watching the running CPU, which means that
+ * this CPU can safely enter RCU read-side critical sections.  Unlike
+ * rcu_is_watching(), the caller of __rcu_is_watching() must have at
+ * least disabled preemption.
+ */
+bool __rcu_is_watching(void)
+{
+	return atomic_read(this_cpu_ptr(&rcu_dynticks.dynticks)) & 0x1;
+}
+
+/**
+ * rcu_is_watching - see if RCU thinks that the current CPU is idle
  *
  * If the current CPU is in its idle loop and is neither in an interrupt
  * or NMI handler, return true.
  */
-int rcu_is_cpu_idle(void)
+bool rcu_is_watching(void)
 {
 	int ret;
 
 	preempt_disable();
-	ret = (atomic_read(&__get_cpu_var(rcu_dynticks).dynticks) & 0x1) == 0;
+	ret = __rcu_is_watching();
 	preempt_enable();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(rcu_is_cpu_idle);
-
-/**
- * __rcu_is_watching - are RCU read-side critical sections safe?
- *
- * Return true if RCU is watching the running CPU, which means that
- * this CPU can safely enter RCU read-side critical sections.  Unlike
- * rcu_is_cpu_idle(), the caller of __rcu_is_watching() must have at
- * least disabled preemption.
- */
-bool __rcu_is_watching(void)
-{
-	return !!(atomic_read(this_cpu_ptr(&rcu_dynticks.dynticks)) & 0x1);
-}
+EXPORT_SYMBOL_GPL(rcu_is_watching);
 
 #if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU)
 
@@ -2268,7 +2268,7 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
 	 * If called from an extended quiescent state, invoke the RCU
 	 * core in order to force a re-evaluation of RCU's idleness.
 	 */
-	if (rcu_is_cpu_idle() && cpu_online(smp_processor_id()))
+	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
 		invoke_rcu_core();
 
 	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
-- 
1.8.1.5



* Re: [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13
  2013-09-25  1:49 [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13 Paul E. McKenney
  2013-09-25  1:50 ` [PATCH tip/core/rcu 1/6] rcu: Remove redundant code from rcu_cleanup_after_idle() Paul E. McKenney
@ 2013-09-25  4:08 ` Josh Triplett
  2013-09-25 13:45   ` Paul E. McKenney
  1 sibling, 1 reply; 9+ messages in thread
From: Josh Triplett @ 2013-09-25  4:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Tue, Sep 24, 2013 at 06:49:55PM -0700, Paul E. McKenney wrote:
> Hello!
> 
> This series updates RCU's idle entry/exit processing:
> 
> 1.	Remove redundant code from rcu_cleanup_after_idle().
> 
> 2.	Throttle rcu_try_advance_all_cbs() execution to avoid kbuild
> 	slowdowns.
> 
> 3.	Throttle non-lazy-callback-induced invoke_rcu_core() invocations.
> 
> 4.	Add primitive to determine whether it is safe to enter an RCU
> 	read-side critical section.
> 
> 5.	Upgrade EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL().
> 
> 6.	Change rcu_is_cpu_idle() function to __rcu_is_watching() for
> 	naming consistency.

For all six:
Reviewed-by: Josh Triplett <josh@joshtriplett.org>


* Re: [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13
  2013-09-25  4:08 ` [PATCH tip/core/rcu 0/6] Idle entry/exit changes for 3.13 Josh Triplett
@ 2013-09-25 13:45   ` Paul E. McKenney
  0 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-09-25 13:45 UTC (permalink / raw)
  To: Josh Triplett
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Tue, Sep 24, 2013 at 09:08:10PM -0700, Josh Triplett wrote:
> On Tue, Sep 24, 2013 at 06:49:55PM -0700, Paul E. McKenney wrote:
> > Hello!
> > 
> > This series updates RCU's idle entry/exit processing:
> > 
> > 1.	Remove redundant code from rcu_cleanup_after_idle().
> > 
> > 2.	Throttle rcu_try_advance_all_cbs() execution to avoid kbuild
> > 	slowdowns.
> > 
> > 3.	Throttle non-lazy-callback-induced invoke_rcu_core() invocations.
> > 
> > 4.	Add primitive to determine whether it is safe to enter an RCU
> > 	read-side critical section.
> > 
> > 5.	Upgrade EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL().
> > 
> > 6.	Change rcu_is_cpu_idle() function to __rcu_is_watching() for
> > 	naming consistency.
> 
> For all six:
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>

Got it, thank you for reviewing!

							Thanx, Paul


