linux-rt-users.vger.kernel.org archive mirror
* [PATCH 0/2] sched: misc fixes for stable-25.y and 25-rt
@ 2008-07-03 21:37 Gregory Haskins
  2008-07-03 21:37 ` [PATCH 1/2] sched: remove extraneous load manipulations Gregory Haskins
  2008-07-03 21:37 ` [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked Gregory Haskins
  0 siblings, 2 replies; 8+ messages in thread
From: Gregory Haskins @ 2008-07-03 21:37 UTC
  To: stable, linux-rt-users, rostedt, mingo; +Cc: peterz, ghaskins, linux-kernel

Hi Ingo, Steven, Peter,
  I found a few minor issues w.r.t. the CFS load values during certain
  dequeue/enqueue operations in the 2.6.25 kernel.  I found these via
  code review as opposed to observing a bug in the field.  I believe
  at least the first issue is already fixed in 26-rcx.  I have
  confirmed that the issue is present in both 25.8-rt7 and
  stable-2.6.25.10.  These patches are against 25.8-rt7, though they
  should apply trivially to stable as well.  Please consider them for
  inclusion in the next -rt if the -stable team does not pick them up
  immediately.

Stable team,
  Please consider these patches for 25.11 (assuming the appropriate acks
  from Ingo et al. are received), though I do not believe they carry
  significant risk and are therefore not urgent.

Feedback welcome.

Thanks!
-Greg


* [PATCH 1/2] sched: remove extraneous load manipulations
  2008-07-03 21:37 [PATCH 0/2] sched: misc fixes for stable-25.y and 25-rt Gregory Haskins
@ 2008-07-03 21:37 ` Gregory Haskins
  2008-07-18 12:39   ` Peter Zijlstra
  2008-07-03 21:37 ` [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked Gregory Haskins
  1 sibling, 1 reply; 8+ messages in thread
From: Gregory Haskins @ 2008-07-03 21:37 UTC
  To: stable, linux-rt-users, rostedt, mingo; +Cc: peterz, ghaskins, linux-kernel

commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
in the scheduler, but it looks like it may have left a few redundant
calls to inc_load/dec_load in set_user_nice() (since
dequeue_task()/enqueue_task() take care of the load).  This could
result in the load values being off, since the load may change while
the task is dequeued.
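
For reference, inc_load()/dec_load() are thin wrappers that add or
subtract the task's current weight to/from the runqueue load (a sketch
of the 2.6.25-era helpers, roughly as they appear in kernel/sched.c):

static void inc_load(struct rq *rq, const struct task_struct *p)
{
	/* rq->load += p's current weight */
	update_load_add(&rq->load, p->se.load.weight);
}

static void dec_load(struct rq *rq, const struct task_struct *p)
{
	/* rq->load -= p's current weight */
	update_load_sub(&rq->load, p->se.load.weight);
}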

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@elte.hu>
---

 kernel/sched.c |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 31f91d9..b046754 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
 		goto out_unlock;
 	}
 	on_rq = p->se.on_rq;
-	if (on_rq) {
+	if (on_rq)
 		dequeue_task(rq, p, 0);
-		dec_load(rq, p);
-	}
 
 	p->static_prio = NICE_TO_PRIO(nice);
 	set_load_weight(p);
@@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
 
 	if (on_rq) {
 		enqueue_task(rq, p, 0);
-		inc_load(rq, p);
+
 		/*
 		 * If the task increased its priority or is running and
 		 * lowered its priority, then reschedule its CPU:



* [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked
  2008-07-03 21:37 [PATCH 0/2] sched: misc fixes for stable-25.y and 25-rt Gregory Haskins
  2008-07-03 21:37 ` [PATCH 1/2] sched: remove extraneous load manipulations Gregory Haskins
@ 2008-07-03 21:37 ` Gregory Haskins
  2008-07-18 12:43   ` Peter Zijlstra
  1 sibling, 1 reply; 8+ messages in thread
From: Gregory Haskins @ 2008-07-03 21:37 UTC
  To: stable, linux-rt-users, rostedt, mingo; +Cc: peterz, ghaskins, linux-kernel

The load may change with the priority, so be sure to recompute its value.
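
For background, set_load_weight() derives the weight from the task's
priority, so the weight a queued task contributes can move whenever its
priority does (a sketch of the 2.6.25-era mapping; the numbers come
from the prio_to_weight[] table in kernel/sched.c):

	/*
	 * nice 0 maps to a weight of 1024; each nice level scales the
	 * weight by ~1.25 so that one step is worth about 10% of CPU,
	 * e.g. nice -5 -> 3121, nice 0 -> 1024, nice +5 -> 335.
	 */
	p->se.load.weight = prio_to_weight[p->static_prio - MAX_RT_PRIO];
	p->se.load.inv_weight = prio_to_wmult[p->static_prio - MAX_RT_PRIO];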

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@elte.hu>
---

 kernel/sched.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index b046754..c3f41b9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4637,6 +4637,7 @@ void task_setprio(struct task_struct *p, int prio)
 		p->sched_class = &fair_sched_class;
 
 	p->prio = prio;
+	set_load_weight(p);
 
 //	trace_special_pid(p->pid, __PRIO(oldprio), PRIO(p));
 	prev_resched = _need_resched();



* Re: [PATCH 1/2] sched: remove extraneous load manipulations
  2008-07-03 21:37 ` [PATCH 1/2] sched: remove extraneous load manipulations Gregory Haskins
@ 2008-07-18 12:39   ` Peter Zijlstra
  2008-07-18 12:53     ` Gregory Haskins
  2008-07-21 22:06     ` Gregory Haskins
  0 siblings, 2 replies; 8+ messages in thread
From: Peter Zijlstra @ 2008-07-18 12:39 UTC
  To: Gregory Haskins; +Cc: stable, linux-rt-users, rostedt, mingo, linux-kernel

On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
> commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
> in the scheduler, but it looks like it may have left a few redundant
> calls to inc_load/dec_load in set_user_nice() (since
> dequeue_task()/enqueue_task() take care of the load).  This could
> result in the load values being off, since the load may change while
> the task is dequeued.

I just checked out v2.6.25.10 but cannot see dequeue_task() do it.

deactivate_task() otoh does do it.

static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
{
	p->sched_class->dequeue_task(rq, p, sleep);
	p->se.on_rq = 0;
}

vs

static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep)
{
	if (task_contributes_to_load(p))
		rq->nr_uninterruptible++;

	dequeue_task(rq, p, sleep);
	dec_nr_running(p, rq);
}

where

static void dec_nr_running(struct task_struct *p, struct rq *rq)
{
	rq->nr_running--;
	dec_load(rq, p);
}

And since set_user_nice() actually changes the load we'd better not
forget to do this dec/inc load stuff.
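
Spelled out, set_user_nice() relies on the explicit pair precisely
because the weight changes in between (sketch; dec_load() must see the
old weight and inc_load() the new one):

	dequeue_task(rq, p, 0);
	dec_load(rq, p);		/* rq->load -= old weight */

	p->static_prio = NICE_TO_PRIO(nice);
	set_load_weight(p);		/* p->se.load.weight updated */

	enqueue_task(rq, p, 0);
	inc_load(rq, p);		/* rq->load += new weight */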

So I'm thinking this patch would actually break stuff.

> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> CC: Peter Zijlstra <peterz@infradead.org>
> CC: Ingo Molnar <mingo@elte.hu>
> ---
> 
>  kernel/sched.c |    6 ++----
>  1 files changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 31f91d9..b046754 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
>  		goto out_unlock;
>  	}
>  	on_rq = p->se.on_rq;
> -	if (on_rq) {
> +	if (on_rq)
>  		dequeue_task(rq, p, 0);
> -		dec_load(rq, p);
> -	}
>  
>  	p->static_prio = NICE_TO_PRIO(nice);
>  	set_load_weight(p);
> @@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
>  
>  	if (on_rq) {
>  		enqueue_task(rq, p, 0);
> -		inc_load(rq, p);
> +
>  		/*
>  		 * If the task increased its priority or is running and
>  		 * lowered its priority, then reschedule its CPU:
> 



* Re: [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked
  2008-07-03 21:37 ` [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked Gregory Haskins
@ 2008-07-18 12:43   ` Peter Zijlstra
  2008-07-18 13:01     ` [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked Gregory Haskins
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2008-07-18 12:43 UTC
  To: Gregory Haskins; +Cc: stable, linux-rt-users, rostedt, mingo, linux-kernel

On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
> The load may change with the priority, so be sure to recompute its value.
> 
> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> CC: Peter Zijlstra <peterz@infradead.org>
> CC: Ingo Molnar <mingo@elte.hu>

Right, but in this case we'd need to do the dec/inc load game again
because otherwise we'll skew stuff - see the previous mail on how
dequeue_task() doesn't actually do that.
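
I.e. a full fix would need something like this in task_setprio()
(untested sketch, following the set_user_nice() pattern; rq->load only
accounts for queued tasks, hence the on_rq checks):

	if (on_rq)
		dec_load(rq, p);	/* drop the old weight */

	p->prio = prio;
	set_load_weight(p);		/* recompute p->se.load.weight */

	if (on_rq)
		inc_load(rq, p);	/* add back the new weight */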

Also, it looks like current mainline still has this issue.

OTOH - since prio boosting is a temporary thing, not changing the load
isn't too bad; we ought to get back to where we came from pretty
quickly.



> ---
> 
>  kernel/sched.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/sched.c b/kernel/sched.c
> index b046754..c3f41b9 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4637,6 +4637,7 @@ void task_setprio(struct task_struct *p, int prio)
>  		p->sched_class = &fair_sched_class;
>  
>  	p->prio = prio;
> +	set_load_weight(p);
>  
>  //	trace_special_pid(p->pid, __PRIO(oldprio), PRIO(p));
>  	prev_resched = _need_resched();
> 



* Re: [PATCH 1/2] sched: remove extraneous load manipulations
  2008-07-18 12:39   ` Peter Zijlstra
@ 2008-07-18 12:53     ` Gregory Haskins
  2008-07-21 22:06     ` Gregory Haskins
  1 sibling, 0 replies; 8+ messages in thread
From: Gregory Haskins @ 2008-07-18 12:53 UTC
  To: Peter Zijlstra; +Cc: mingo, rostedt, linux-kernel, linux-rt-users, stable

>>> On Fri, Jul 18, 2008 at  8:39 AM, in message <1216384754.28405.31.camel@twins>,
Peter Zijlstra <peterz@infradead.org> wrote: 
> On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
>> commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
>> in the scheduler, but it looks like it may have left a few redundant
>> calls to inc_load/dec_load in set_user_nice() (since
>> dequeue_task()/enqueue_task() take care of the load).  This could
>> result in the load values being off, since the load may change while
>> the task is dequeued.
> 
> I just checked out v2.6.25.10 but cannot see dequeue_task() do it.

Perhaps I was trying to hit a moving target, or did not have enough coffee that day ;)

I will look again to see if I made a mistake.

Thanks Peter,

-Greg

> 
> deactivate_task() otoh does do it.
> 
> static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	p->sched_class->dequeue_task(rq, p, sleep);
> 	p->se.on_rq = 0;
> }
> 
> vs
> 
> static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	if (task_contributes_to_load(p))
> 		rq->nr_uninterruptible++;
> 
> 	dequeue_task(rq, p, sleep);
> 	dec_nr_running(p, rq);
> }
> 
> where
> 
> static void dec_nr_running(struct task_struct *p, struct rq *rq)
> {
> 	rq->nr_running--;
> 	dec_load(rq, p);
> }
> 
> And since set_user_nice() actually changes the load we'd better not
> forget to do this dec/inc load stuff.
> 
> So I'm thinking this patch would actually break stuff.
> 
>> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
>> CC: Peter Zijlstra <peterz@infradead.org>
>> CC: Ingo Molnar <mingo@elte.hu>
>> ---
>> 
>>  kernel/sched.c |    6 ++----
>>  1 files changed, 2 insertions(+), 4 deletions(-)
>> 
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 31f91d9..b046754 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
>>  		goto out_unlock;
>>  	}
>>  	on_rq = p->se.on_rq;
>> -	if (on_rq) {
>> +	if (on_rq)
>>  		dequeue_task(rq, p, 0);
>> -		dec_load(rq, p);
>> -	}
>>  
>>  	p->static_prio = NICE_TO_PRIO(nice);
>>  	set_load_weight(p);
>> @@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
>>  
>>  	if (on_rq) {
>>  		enqueue_task(rq, p, 0);
>> -		inc_load(rq, p);
>> +
>>  		/*
>>  		 * If the task increased its priority or is running and
>>  		 * lowered its priority, then reschedule its CPU:
>> 
> 




* Re: [PATCH 2/2] sched: readjust the load whenever task_setprio() is invoked
  2008-07-18 12:43   ` Peter Zijlstra
@ 2008-07-18 13:01     ` Gregory Haskins
  0 siblings, 0 replies; 8+ messages in thread
From: Gregory Haskins @ 2008-07-18 13:01 UTC
  To: Peter Zijlstra; +Cc: mingo, rostedt, linux-kernel, linux-rt-users, stable

>>> On Fri, Jul 18, 2008 at  8:43 AM, in message <1216384984.28405.36.camel@twins>,
Peter Zijlstra <peterz@infradead.org> wrote: 
> On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
>> The load may change with the priority, so be sure to recompute its value.
>> 
>> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
>> CC: Peter Zijlstra <peterz@infradead.org>
>> CC: Ingo Molnar <mingo@elte.hu>
> 
> Right, but in this case we'd need to do the dec/inc load game again
> because otherwise we'll skew stuff - see the previous mail on how
> dequeue_task() doesn't actually do that.
> 
> Also, it looks like current mainline still has this issue.
> 
> OTOH - since prio boosting is a temporary thing, not changing the load
> isn't too bad; we ought to get back to where we came from pretty
> quickly.

Yeah, I agree.  This issue probably didn't actually matter much in practice.

It just "looked" wrong, so I figured I'd fix it ;)

-Greg

> 
> 
> 
>> ---
>> 
>>  kernel/sched.c |    1 +
>>  1 files changed, 1 insertions(+), 0 deletions(-)
>> 
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index b046754..c3f41b9 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4637,6 +4637,7 @@ void task_setprio(struct task_struct *p, int prio)
>>  		p->sched_class = &fair_sched_class;
>>  
>>  	p->prio = prio;
>> +	set_load_weight(p);
>>  
>>  //	trace_special_pid(p->pid, __PRIO(oldprio), PRIO(p));
>>  	prev_resched = _need_resched();
>> 





* Re: [PATCH 1/2] sched: remove extraneous load manipulations
  2008-07-18 12:39   ` Peter Zijlstra
  2008-07-18 12:53     ` Gregory Haskins
@ 2008-07-21 22:06     ` Gregory Haskins
  1 sibling, 0 replies; 8+ messages in thread
From: Gregory Haskins @ 2008-07-21 22:06 UTC
  To: Peter Zijlstra; +Cc: linux-rt-users, rostedt, mingo, linux-kernel


Peter Zijlstra wrote:
> On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
>   
>> commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
>> in the scheduler, but it looks like it may have left a few redundant
>> calls to inc_load/dec_load in set_user_nice() (since
>> dequeue_task()/enqueue_task() take care of the load).  This could
>> result in the load values being off, since the load may change while
>> the task is dequeued.
>>     
>
> I just checked out v2.6.25.10 but cannot see dequeue_task() do it.
>   

Indeed.  I think my eyes glazed over the dequeue vs. deactivate and
enqueue vs. activate distinction.  Good catch, and sorry for the noise.
I was wrong.

Please ignore this patch.

-Greg


> deactivate_task() otoh does do it.
>
> static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	p->sched_class->dequeue_task(rq, p, sleep);
> 	p->se.on_rq = 0;
> }
>
> vs
>
> static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	if (task_contributes_to_load(p))
> 		rq->nr_uninterruptible++;
>
> 	dequeue_task(rq, p, sleep);
> 	dec_nr_running(p, rq);
> }
>
> where
>
> static void dec_nr_running(struct task_struct *p, struct rq *rq)
> {
> 	rq->nr_running--;
> 	dec_load(rq, p);
> }
>
> And since set_user_nice() actually changes the load we'd better not
> forget to do this dec/inc load stuff.
>
> So I'm thinking this patch would actually break stuff.
>
>   
>> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
>> CC: Peter Zijlstra <peterz@infradead.org>
>> CC: Ingo Molnar <mingo@elte.hu>
>> ---
>>
>>  kernel/sched.c |    6 ++----
>>  1 files changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 31f91d9..b046754 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
>>  		goto out_unlock;
>>  	}
>>  	on_rq = p->se.on_rq;
>> -	if (on_rq) {
>> +	if (on_rq)
>>  		dequeue_task(rq, p, 0);
>> -		dec_load(rq, p);
>> -	}
>>  
>>  	p->static_prio = NICE_TO_PRIO(nice);
>>  	set_load_weight(p);
>> @@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
>>  
>>  	if (on_rq) {
>>  		enqueue_task(rq, p, 0);
>> -		inc_load(rq, p);
>> +
>>  		/*
>>  		 * If the task increased its priority or is running and
>>  		 * lowered its priority, then reschedule its CPU:
>>
>>     
>





