public inbox for linux-kernel@vger.kernel.org
* [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
@ 2008-07-24 22:11 Dmitry Adamushko
  2008-07-25 11:39 ` Peter Zijlstra
  0 siblings, 1 reply; 5+ messages in thread
From: Dmitry Adamushko @ 2008-07-24 22:11 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML


From: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Subject: sched, hotplug: safe use of rq->migration_thread
and find_busiest_queue()

---

    sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
    
    (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
    after releasing the rq-lock;
    
    (2) in load_balance() and load_balance_newidle()
    
    ensure that we don't get 'busiest' which can disappear as a result of cpu_down()
    while we are manipulating it. For this goal, we choose 'busiest' only amongst
    'cpu_active_map' cpus.
    
    load_balance() and load_balance_newidle() get called with preemption being disabled
    so synchronize_sched() in cpu_down() should get us synced.
    
    IOW, as soon as synchronize_sched() has been done in cpu_down(cpu), the run-queue for 'cpu'
    can't be manipulated/accessed by the load-balancer.
    
    Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>

diff --git a/kernel/sched.c b/kernel/sched.c
index 6acf749..b4ccc8b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	struct rq *busiest;
 	unsigned long flags;
 
-	cpus_setall(*cpus);
+	/*
+	 * Ensure that we don't get 'busiest' which can disappear
+	 * as a result of cpu_down() while we are manipulating it.
+	 *
+	 * load_balance() gets called with preemption being disabled
+	 * so synchronize_sched() in cpu_down() should get us synced.
+	 */
+	*cpus = cpu_active_map;
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -3571,7 +3578,14 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd,
 	int sd_idle = 0;
 	int all_pinned = 0;
 
-	cpus_setall(*cpus);
+	/*
+	 * Ensure that we don't get 'busiest' which can disappear
+	 * as a result of cpu_down() while we are manipulating it.
+	 *
+	 * load_balance_newidle() gets called with preemption being disabled
+	 * so synchronize_sched() in cpu_down() should get us synced.
+	 */
+	*cpus = cpu_active_map;
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -5764,9 +5778,14 @@ int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
 		goto out;
 
 	if (migrate_task(p, any_online_cpu(*new_mask), &req)) {
-		/* Need help from migration thread: drop lock and wait. */
+		/* Need to wait for migration thread (might exit: take ref). */
+		struct task_struct *mt = rq->migration_thread;
+
+		get_task_struct(mt);
 		task_rq_unlock(rq, &flags);
-		wake_up_process(rq->migration_thread);
+		wake_up_process(mt);
+		put_task_struct(mt);
+
 		wait_for_completion(&req.done);
 		tlb_migrate_finish(p->mm);
 		return 0;
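
(For reference, the cpu_down() side that this scheme leans on does, in a
simplified and paraphrased form, roughly the following; the function name
below is just a sketch, not the literal kernel/cpu.c code:)

  /* sketch only -- the ordering is what matters */
  static void cpu_down_ordering_sketch(unsigned int cpu)
  {
  	/* 1) stop the load-balancer from choosing this cpu as 'busiest' */
  	cpu_clear(cpu, cpu_active_map);

  	/*
  	 * 2) wait for all in-flight preemption-disabled sections;
  	 *    load_balance()/load_balance_newidle() run with preemption
  	 *    disabled, so after this they can no longer be holding a
  	 *    pointer to this cpu's run-queue.
  	 */
  	synchronize_sched();

  	/* 3) only now is the run-queue / migration thread torn down */
  }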



^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
  2008-07-24 22:11 [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue() Dmitry Adamushko
@ 2008-07-25 11:39 ` Peter Zijlstra
  2008-07-25 11:52   ` Dmitry Adamushko
  0 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2008-07-25 11:39 UTC (permalink / raw)
  To: Dmitry Adamushko; +Cc: Ingo Molnar, LKML, Steven Rostedt, Thomas Gleixner

On Fri, 2008-07-25 at 00:11 +0200, Dmitry Adamushko wrote:
> From: Dmitry Adamushko <dmitry.adamushko@gmail.com>
> Subject: sched, hotplug: safe use of rq->migration_thread
> and find_busiest_queue()
> 
> ---
> 
>     sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
>     
>     (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
>     after releasing the rq-lock;
>     
>     (2) in load_balance() and load_balance_newidle()
>     
>     ensure that we don't get 'busiest' which can disappear as a result of cpu_down()
>     while we are manipulating it. For this goal, we choose 'busiest' only amongst
>     'cpu_active_map' cpus.
>     
>     load_balance() and load_balance_newidle() get called with preemption being disabled
>     so synchronize_sched() in cpu_down() should get us synced.
>     
>     IOW, as soon as synchronize_sched() has been done in cpu_down(cpu), the run-queue for 'cpu'
>     can't be manipulated/accessed by the load-balancer.
>     
>     Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

> diff --git a/kernel/sched.c b/kernel/sched.c
> index 6acf749..b4ccc8b 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>  	struct rq *busiest;
>  	unsigned long flags;
>  
> -	cpus_setall(*cpus);
> +	/*
> +	 * Ensure that we don't get 'busiest' which can disappear
> +	 * as a result of cpu_down() while we are manipulating it.
> +	 *
> +	 * load_balance() gets called with preemption being disabled
> +	 * so synchronize_sched() in cpu_down() should get us synced.
> +	 */
> +	*cpus = cpu_active_map;

This is going to be painful on -rt... there it can be preempted. I guess
we can put get_online_cpus() around it or something..
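
Roughly along these lines, just to make the shape concrete (illustrative
only -- where exactly the lock can be taken is the open question, since
get_online_cpus() may sleep):

  static void balance_under_hotplug_lock_sketch(void)
  {
  	get_online_cpus();	/* may block while a hotplug op is running */
  	/* ... the load_balance()/load_balance_newidle() pass ... */
  	put_online_cpus();
  }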

>  	/*
>  	 * When power savings policy is enabled for the parent domain, idle
> @@ -3571,7 +3578,14 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd,
>  	int sd_idle = 0;
>  	int all_pinned = 0;
>  
> -	cpus_setall(*cpus);
> +	/*
> +	 * Ensure that we don't get 'busiest' which can disappear
> +	 * as a result of cpu_down() while we are manipulating it.
> +	 *
> +	 * load_balance_newidle() gets called with preemption being disabled
> +	 * so synchronize_sched() in cpu_down() should get us synced.
> +	 */
> +	*cpus = cpu_active_map;
>  
>  	/*
>  	 * When power savings policy is enabled for the parent domain, idle
> @@ -5764,9 +5778,14 @@ int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
>  		goto out;
>  
>  	if (migrate_task(p, any_online_cpu(*new_mask), &req)) {
> -		/* Need help from migration thread: drop lock and wait. */
> +		/* Need to wait for migration thread (might exit: take ref). */
> +		struct task_struct *mt = rq->migration_thread;
> +
> +		get_task_struct(mt);
>  		task_rq_unlock(rq, &flags);
> -		wake_up_process(rq->migration_thread);
> +		wake_up_process(mt);
> +		put_task_struct(mt);
> +
>  		wait_for_completion(&req.done);
>  		tlb_migrate_finish(p->mm);
>  		return 0;
> 
> 


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
  2008-07-25 11:39 ` Peter Zijlstra
@ 2008-07-25 11:52   ` Dmitry Adamushko
  2008-07-25 12:13     ` Peter Zijlstra
  2008-07-25 22:31     ` Gautham R Shenoy
  0 siblings, 2 replies; 5+ messages in thread
From: Dmitry Adamushko @ 2008-07-25 11:52 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Steven Rostedt, Thomas Gleixner

2008/7/25 Peter Zijlstra <a.p.zijlstra@chello.nl>:
> On Fri, 2008-07-25 at 00:11 +0200, Dmitry Adamushko wrote:
>> From: Dmitry Adamushko <dmitry.adamushko@gmail.com>
>> Subject: sched, hotplug: safe use of rq->migration_thread
>> and find_busiest_queue()
>>
>> ---
>>
>>     sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
>>
>>     (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
>>     after releasing the rq-lock;
>>
>>     (2) in load_balance() and load_balance_newidle()
>>
>>     ensure that we don't get 'busiest' which can disappear as a result of cpu_down()
>>     while we are manipulating it. For this goal, we choose 'busiest' only amongst
>>     'cpu_active_map' cpus.
>>
>>     load_balance() and load_balance_newidle() get called with preemption being disabled
>>     so synchronize_sched() in cpu_down() should get us synced.
>>
>>     IOW, as soon as synchronize_sched() has been done in cpu_down(cpu), the run-queue for 'cpu'
>>     can't be manipulated/accessed by the load-balancer.
>>
>>     Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
>
> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

Thanks.

>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 6acf749..b4ccc8b 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>       struct rq *busiest;
>>       unsigned long flags;
>>
>> -     cpus_setall(*cpus);
>> +     /*
>> +      * Ensure that we don't get 'busiest' which can disappear
>> +      * as a result of cpu_down() while we are manipulating it.
>> +      *
>> +      * load_balance() gets called with preemption being disabled
>> +      * so synchronize_sched() in cpu_down() should get us synced.
>> +      */
>> +     *cpus = cpu_active_map;
>
> This is going to be painful on -rt... there it can be preempted. I guess
> we can put get_online_cpus() around it or something..

I've considered using get_online_cpus() for a moment but dropped this
idea exactly because I thought it would harm us latency-wise.
cpu_down() and cpu_up() may take quite a long time to complete and
load_balance() && load_balance_newidle() would need to wait all this
time. And they both are kind of generic (primary) scheduler
operations.

but yea, my scheme relies on the fact that load_balance() &&
load_balance_newidle() are atomic one way or another wrt. cpu_clear() +
synchronize_sched() in cpu_down().

[ speculating here ] I'd rather add an additional mechanism which
would be light-weight for load_balance() and add
synch_this_mechanism() (akin to synchronize_sched()) in cpu_down(), as
perhaps we don't care that much about how fast the latter one is.
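
Purely to illustrate the shape of such a mechanism (names invented here;
and a bare per-cpu counter like this would still need care on -rt, where
the balance pass could be preempted and migrated between enter and exit):

  /* invented names -- just the shape, not a proposal */
  static DEFINE_PER_CPU(int, lb_in_progress);

  static inline void lb_read_enter(void)		/* cheap read side */
  {
  	__get_cpu_var(lb_in_progress)++;
  	smp_mb();	/* enter before touching remote run-queues */
  }

  static inline void lb_read_exit(void)
  {
  	smp_mb();	/* done with run-queues before dropping the marker */
  	__get_cpu_var(lb_in_progress)--;
  }

  /* slow side, called from cpu_down() after clearing cpu_active_map */
  static void lb_synchronize(void)
  {
  	int cpu;

  	for_each_online_cpu(cpu)
  		while (per_cpu(lb_in_progress, cpu))
  			cpu_relax();
  }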


-- 
Best regards,
Dmitry Adamushko

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
  2008-07-25 11:52   ` Dmitry Adamushko
@ 2008-07-25 12:13     ` Peter Zijlstra
  2008-07-25 22:31     ` Gautham R Shenoy
  1 sibling, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2008-07-25 12:13 UTC (permalink / raw)
  To: Dmitry Adamushko; +Cc: Ingo Molnar, LKML, Steven Rostedt, Thomas Gleixner

On Fri, 2008-07-25 at 13:52 +0200, Dmitry Adamushko wrote:
> 2008/7/25 Peter Zijlstra <a.p.zijlstra@chello.nl>:
> > On Fri, 2008-07-25 at 00:11 +0200, Dmitry Adamushko wrote:
> >> From: Dmitry Adamushko <dmitry.adamushko@gmail.com>
> >> Subject: sched, hotplug: safe use of rq->migration_thread
> >> and find_busiest_queue()
> >>
> >> ---
> >>
> >>     sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
> >>
> >>     (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
> >>     after releasing the rq-lock;
> >>
> >>     (2) in load_balance() and load_balance_newidle()
> >>
> >>     ensure that we don't get 'busiest' which can disappear as a result of cpu_down()
> >>     while we are manipulating it. For this goal, we choose 'busiest' only amongst
> >>     'cpu_active_map' cpus.
> >>
> >>     load_balance() and load_balance_newidle() get called with preemption being disabled
> >>     so synchronize_sched() in cpu_down() should get us synced.
> >>
> >>     IOW, as soon as synchronize_sched() has been done in cpu_down(cpu), the run-queue for 'cpu'
> >>     can't be manipulated/accessed by the load-balancer.
> >>
> >>     Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
> >
> > Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> 
> Thanks.
> 
> >
> >> diff --git a/kernel/sched.c b/kernel/sched.c
> >> index 6acf749..b4ccc8b 100644
> >> --- a/kernel/sched.c
> >> +++ b/kernel/sched.c
> >> @@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> >>       struct rq *busiest;
> >>       unsigned long flags;
> >>
> >> -     cpus_setall(*cpus);
> >> +     /*
> >> +      * Ensure that we don't get 'busiest' which can disappear
> >> +      * as a result of cpu_down() while we are manipulating it.
> >> +      *
> >> +      * load_balance() gets called with preemption being disabled
> >> +      * so synchronize_sched() in cpu_down() should get us synced.
> >> +      */
> >> +     *cpus = cpu_active_map;
> >
> > This is going to be painful on -rt... there it can be preempted. I guess
> > we can put get_online_cpus() around it or something..
> 
> I've considered using get_online_cpus() for a moment but dropped this
> idea exactly because I thought it would harm us latency-wise.
> cpu_down() and cpu_up() may take quite a long time to complete and
> load_balance() && load_balance_newidle() would need to wait all this
> time. And they both are kind of generic (primary) scheduler
> operations.
> 
> but yea, my scheme relies on the fact that load_balance() &&
> load_balance_newidle() are atomic one way or another wrt. cpu_clear() +
> synchronize_sched() in cpu_down().
> 
> [ speculating here ] I'd rather add an additional mechanism which
> would be light-weight for load_balance() and add
> synch_this_mechanism() (akin to synchronize_sched()) in cpu_down(), as
> perhaps we don't care that much about how fast the latter one is.


Right, I suppose we could stick in an SRCU domain or something to do
that.

So we do:

  srcu_read_lock();
  load_balance();
  srcu_read_unlock();


and then in cpu_down():

  synchronize_srcu();
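
With the actual SRCU primitives that would be roughly the sketch below
(the srcu_struct, its name and where it gets initialised are made up for
illustration):

  static struct srcu_struct sched_hotplug_srcu;	/* init_srcu_struct() at boot */

  /* read side, around the balance pass */
  static void balance_pass_srcu_sketch(void)
  {
  	int idx = srcu_read_lock(&sched_hotplug_srcu);
  	/* ... load_balance() / load_balance_newidle() ... */
  	srcu_read_unlock(&sched_hotplug_srcu, idx);
  }

  /* in cpu_down(), after the cpu has been cleared from cpu_active_map */
  static void cpu_down_srcu_sync_sketch(void)
  {
  	synchronize_srcu(&sched_hotplug_srcu);
  }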


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
  2008-07-25 11:52   ` Dmitry Adamushko
  2008-07-25 12:13     ` Peter Zijlstra
@ 2008-07-25 22:31     ` Gautham R Shenoy
  1 sibling, 0 replies; 5+ messages in thread
From: Gautham R Shenoy @ 2008-07-25 22:31 UTC (permalink / raw)
  To: Dmitry Adamushko
  Cc: Peter Zijlstra, Ingo Molnar, LKML, Steven Rostedt,
	Thomas Gleixner

On Fri, Jul 25, 2008 at 01:52:17PM +0200, Dmitry Adamushko wrote:
> 2008/7/25 Peter Zijlstra <a.p.zijlstra@chello.nl>:
> > On Fri, 2008-07-25 at 00:11 +0200, Dmitry Adamushko wrote:
> >> From: Dmitry Adamushko <dmitry.adamushko@gmail.com>
> >> Subject: sched, hotplug: safe use of rq->migration_thread
> >> and find_busiest_queue()
> >>
> >> ---
> >>
> >>     sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
> >>
> >>     (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
> >>     after releasing the rq-lock;
> >>
> >>     (2) in load_balance() and load_balance_newidle()
> >>
> >>     ensure that we don't get 'busiest' which can disappear as a result of cpu_down()
> >>     while we are manipulating it. For this goal, we choose 'busiest' only amongst
> >>     'cpu_active_map' cpus.
> >>
> >>     load_balance() and load_balance_newidle() get called with preemption being disabled
> >>     so synchronize_sched() in cpu_down() should get us synced.
> >>
> >>     IOW, as soon as synchronize_sched() has been done in cpu_down(cpu), the run-queue for 'cpu'
> >>     can't be manipulated/accessed by the load-balancer.
> >>
> >>     Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
> >
> > Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> 
> Thanks.
> 
> >
> >> diff --git a/kernel/sched.c b/kernel/sched.c
> >> index 6acf749..b4ccc8b 100644
> >> --- a/kernel/sched.c
> >> +++ b/kernel/sched.c
> >> @@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> >>       struct rq *busiest;
> >>       unsigned long flags;
> >>
> >> -     cpus_setall(*cpus);
> >> +     /*
> >> +      * Ensure that we don't get 'busiest' which can disappear
> >> +      * as a result of cpu_down() while we are manipulating it.
> >> +      *
> >> +      * load_balance() gets called with preemption being disabled
> >> +      * so synchronize_sched() in cpu_down() should get us synced.
> >> +      */
> >> +     *cpus = cpu_active_map;
> >
> > This is going to be painful on -rt... there it can be preempted. I guess
> > we can put get_online_cpus() around it or something..
> 
> I've considered using get_online_cpus() for a moment but dropped this
> idea exactly because I thought it would harm us latency-wise.

get_online_cpus() can be made extremely lightweight (as simple as
updating a per_cpu variable). But yes, if a cpu-hotplug operation is in
progress one might block there. So we probably need the try_ variant
here.
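
I.e. something like the sketch below, assuming a (so far hypothetical)
non-blocking try_get_online_cpus() that fails instead of sleeping while a
hotplug operation holds the lock:

  /* sketch; try_get_online_cpus() does not exist yet */
  static int balance_pass_try_sketch(void)
  {
  	if (!try_get_online_cpus())
  		return 0;	/* hotplug in progress: skip this pass */

  	/* ... the load_balance()/load_balance_newidle() pass ... */

  	put_online_cpus();
  	return 1;
  }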


> cpu_down() and cpu_up() may take quite a long time to complete and
> load_balance() && load_balance_newidle() would need to wait all this
> time. And they both are kind of generic (primary) scheduler
> operations.



> 
> but yea, my scheme relies on the fact that load_balance() &&
> load_balance_newidle() are atomic one way or another wrt. cpu_clear() +
> synchronize_sched() in cpu_down().
> 
> [ speculating here ] I'd rather add an additional mechanism which
> would be light-weight for load_balance() and add
> synch_this_mechanism() (akin to synchronize_sched()) in cpu_down(), as
> perhaps we don't care that much about how fast the latter one is.
> 
> 
> -- 
> Best regards,
> Dmitry Adamushko

-- 
Thanks and Regards
gautham

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread

Thread overview: 5+ messages
2008-07-24 22:11 [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread and find_busiest_queue() Dmitry Adamushko
2008-07-25 11:39 ` Peter Zijlstra
2008-07-25 11:52   ` Dmitry Adamushko
2008-07-25 12:13     ` Peter Zijlstra
2008-07-25 22:31     ` Gautham R Shenoy
