linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched: remove rcu_read_lock from wake_affine
@ 2011-06-07 10:13 Nikunj A. Dadhania
  2011-06-07 10:26 ` Peter Zijlstra
  2011-07-01 15:16 ` [tip:sched/core] sched: Remove rcu_read_lock() from wake_affine() tip-bot for Nikunj A. Dadhania
  0 siblings, 2 replies; 6+ messages in thread
From: Nikunj A. Dadhania @ 2011-06-07 10:13 UTC (permalink / raw)
  To: peterz, mingo; +Cc: linux-kernel

wake_affine() is only called from one path: select_task_rq_fair(), which
already has the RCU read lock held.

Signed-off-by: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
---
 kernel/sched_fair.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 354e26b..0bfec93 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
 
 #endif
 
+/* Assumes rcu_read_lock is held */
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 {
 	s64 this_load, load;
@@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
-	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
-	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within



* Re: [PATCH] sched: remove rcu_read_lock from wake_affine
  2011-06-07 10:13 [PATCH] sched: remove rcu_read_lock from wake_affine Nikunj A. Dadhania
@ 2011-06-07 10:26 ` Peter Zijlstra
  2011-06-07 17:26   ` Paul E. McKenney
  2011-07-01 15:16 ` [tip:sched/core] sched: Remove rcu_read_lock() from wake_affine() tip-bot for Nikunj A. Dadhania
  1 sibling, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2011-06-07 10:26 UTC (permalink / raw)
  To: Nikunj A. Dadhania; +Cc: mingo, linux-kernel

On Tue, 2011-06-07 at 15:43 +0530, Nikunj A. Dadhania wrote:
> wake_affine() is only called from one path: select_task_rq_fair(), which
> already has the RCU read lock held.
> 
> Signed-off-by: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
> ---
>  kernel/sched_fair.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 354e26b..0bfec93 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
>  
>  #endif
>  
> +/* Assumes rcu_read_lock is held */

Not a big fan of such comments; especially with CONFIG_PROVE_RCU it's
better to use those facilities, which is to say: if we're missing an
rcu_read_lock() the thing will yell bloody murder.

>  static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  {
>  	s64 this_load, load;
> @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	 * effect of the currently running task from the load
>  	 * of the current CPU:
>  	 */
> -	rcu_read_lock();
>  	if (sync) {
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
> @@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  		balanced = this_eff_load <= prev_eff_load;
>  	} else
>  		balanced = true;
> -	rcu_read_unlock();
>  
>  	/*
>  	 * If the currently running task will sleep within
> 

OK, took the patch and removed the comment, thanks!


* Re: [PATCH] sched: remove rcu_read_lock from wake_affine
  2011-06-07 10:26 ` Peter Zijlstra
@ 2011-06-07 17:26   ` Paul E. McKenney
  2011-06-07 17:29     ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Paul E. McKenney @ 2011-06-07 17:26 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Nikunj A. Dadhania, mingo, linux-kernel

On Tue, Jun 07, 2011 at 12:26:51PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-06-07 at 15:43 +0530, Nikunj A. Dadhania wrote:
> > wake_affine() is only called from one path: select_task_rq_fair(), which
> > already has the RCU read lock held.
> > 
> > Signed-off-by: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
> > ---
> >  kernel/sched_fair.c |    3 +--
> >  1 files changed, 1 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > index 354e26b..0bfec93 100644
> > --- a/kernel/sched_fair.c
> > +++ b/kernel/sched_fair.c
> > @@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
> >  
> >  #endif
> >  
> > +/* Assumes rcu_read_lock is held */
> 
> Not a big fan of such comments; especially with CONFIG_PROVE_RCU it's
> better to use those facilities, which is to say: if we're missing an
> rcu_read_lock() the thing will yell bloody murder.

Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".

This will complain if this function is called without an rcu_read_lock()
in effect, but only if CONFIG_PROVE_RCU=y.

							Thanx, Paul

> >  static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >  {
> >  	s64 this_load, load;
> > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >  	 * effect of the currently running task from the load
> >  	 * of the current CPU:
> >  	 */
> > -	rcu_read_lock();
> >  	if (sync) {
> >  		tg = task_group(current);
> >  		weight = current->se.load.weight;
> > @@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >  		balanced = this_eff_load <= prev_eff_load;
> >  	} else
> >  		balanced = true;
> > -	rcu_read_unlock();
> >  
> >  	/*
> >  	 * If the currently running task will sleep within
> > 
> 
> OK, took the patch and removed the comment, thanks!


* Re: [PATCH] sched: remove rcu_read_lock from wake_affine
  2011-06-07 17:26   ` Paul E. McKenney
@ 2011-06-07 17:29     ` Peter Zijlstra
  2011-06-07 18:11       ` Paul E. McKenney
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2011-06-07 17:29 UTC (permalink / raw)
  To: paulmck; +Cc: Nikunj A. Dadhania, mingo, linux-kernel

On Tue, 2011-06-07 at 10:26 -0700, Paul E. McKenney wrote:
> 
> > Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".
> 
> This will complain if this function is called without an rcu_read_lock()
> in effect, but only if CONFIG_PROVE_RCU=y.

rcu_lockdep_assert(rcu_read_lock_held()) would be nicer, however, since
the below:

> > >  static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > >  {
> > >     s64 this_load, load;
> > > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > >      * effect of the currently running task from the load
> > >      * of the current CPU:
> > >      */
> > > -   rcu_read_lock();
> > >     if (sync) {
> > >             tg = task_group(current);
> > >             weight = current->se.load.weight; 

task_group() has an rcu_dereference_check() in it, so it's really not
needed; the thing will yell if we get this wrong.


* Re: [PATCH] sched: remove rcu_read_lock from wake_affine
  2011-06-07 17:29     ` Peter Zijlstra
@ 2011-06-07 18:11       ` Paul E. McKenney
  0 siblings, 0 replies; 6+ messages in thread
From: Paul E. McKenney @ 2011-06-07 18:11 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Nikunj A. Dadhania, mingo, linux-kernel

On Tue, Jun 07, 2011 at 07:29:23PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-06-07 at 10:26 -0700, Paul E. McKenney wrote:
> > 
> > Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".
> > 
> > This will complain if this function is called without an rcu_read_lock()
> > in effect, but only if CONFIG_PROVE_RCU=y.
> 
> rcu_lockdep_assert(rcu_read_lock_held()) would be nicer,

Good point!

>                                                          however, since
> the below:
> 
> > > >  static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > >  {
> > > >     s64 this_load, load;
> > > > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > >      * effect of the currently running task from the load
> > > >      * of the current CPU:
> > > >      */
> > > > -   rcu_read_lock();
> > > >     if (sync) {
> > > >             tg = task_group(current);
> > > >             weight = current->se.load.weight; 
> 
> task_group() has an rcu_dereference_check() in it, so it's really not
> needed; the thing will yell if we get this wrong.

Fair enough!  The main reason for adding it at this level as well is
to prevent people from "fixing" splats by adding rcu_read_lock() and
rcu_read_unlock() at this level.  But you would see any such patch, so
such a "fix" would not get far.  ;-)

							Thanx, Paul


* [tip:sched/core] sched: Remove rcu_read_lock() from wake_affine()
  2011-06-07 10:13 [PATCH] sched: remove rcu_read_lock from wake_affine Nikunj A. Dadhania
  2011-06-07 10:26 ` Peter Zijlstra
@ 2011-07-01 15:16 ` tip-bot for Nikunj A. Dadhania
  1 sibling, 0 replies; 6+ messages in thread
From: tip-bot for Nikunj A. Dadhania @ 2011-07-01 15:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, paulmck, hpa, mingo, a.p.zijlstra, nikunj, tglx,
	mingo

Commit-ID:  2a46dae38087e62dd5fb08a6dadf1407717ed13c
Gitweb:     http://git.kernel.org/tip/2a46dae38087e62dd5fb08a6dadf1407717ed13c
Author:     Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
AuthorDate: Tue, 7 Jun 2011 15:43:22 +0530
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 1 Jul 2011 10:39:06 +0200

sched: Remove rcu_read_lock() from wake_affine()

wake_affine() is only called from one path: select_task_rq_fair(),
which already has the RCU read lock held.

Signed-off-by: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20110607101251.777.34547.stgit@IBM-009124035060.in.ibm.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched_fair.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 433491c..eb98f77 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1481,7 +1481,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
-	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1517,7 +1516,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
-	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within


