linuxppc-dev.lists.ozlabs.org archive mirror
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: "Michel Dänzer" <michel@tungstengraphics.com>
Cc: Ingo Molnar <mingo@elte.hu>,
	vatsa@linux.vnet.ibm.com, linuxppc-dev@ozlabs.org
Subject: Re: ppc32: Weird process scheduling behaviour with 2.6.24-rc
Date: Mon, 28 Jan 2008 09:50:36 +0100
Message-ID: <1201510236.6149.24.camel@lappy>
In-Reply-To: <1201450409.1931.23.camel@thor.sulgenrain.local>


On Sun, 2008-01-27 at 17:13 +0100, Michel Dänzer wrote:

> In summary, there are two separate problems with similar symptoms, which
> had me confused at times:
> 
>       * With CONFIG_FAIR_USER_SCHED disabled, there are severe
>         interactivity hiccups with a niced CPU hog and top running. This
>         started with commit 810e95ccd58d91369191aa4ecc9e6d4a10d8d0c8. 

The revert at the bottom causes the wakeup granularity to shrink for +
nice and to grow for - nice. That is, it becomes easier to preempt a +
nice task, and harder to preempt a - nice task.
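
To make that concrete, here is a stand-alone sketch (not the kernel code) of
what the removed calc_delta_fair() lines did to the granularity, assuming
calc_delta_fair(gran, &se->load) boils down to gran * NICE_0_LOAD /
se->load.weight, and using the usual CFS prio_to_weight values:

#include <stdio.h>

#define NICE_0_LOAD 1024ULL

/* roughly what calc_delta_fair(gran, &se->load) does to the granularity */
static unsigned long long scaled_gran(unsigned long long gran,
				      unsigned long long weight)
{
	return gran * NICE_0_LOAD / weight;
}

int main(void)
{
	unsigned long long gran = 10000000ULL;	/* e.g. a 10ms wakeup granularity */

	/* nice +19, weight 15: effective granularity grows ~68x, hard to preempt */
	printf("nice +19: %llu ns\n", scaled_gran(gran, 15));
	/* nice 0, weight 1024: unchanged */
	printf("nice   0: %llu ns\n", scaled_gran(gran, 1024));
	/* nice -20, weight 88761: effective granularity shrinks ~87x, easy to preempt */
	printf("nice -20: %llu ns\n", scaled_gran(gran, 88761));
	return 0;
}

So with the scaling in place a + nice current task was the hard one to
preempt; removing it flips that around, as described above.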

I think we originally had that; we didn't comment it, forgot the reason,
and then changed it because the units didn't match. Another reason might
have been the more difficult preemption of - nice tasks: that might allow
- niced tasks to cause horrible latencies. Ingo, any recollection?

Are you perhaps running with a very low HZ (HZ=100)? (If wakeup
preemption fails, tick preemption has to take over, and at HZ=100 the
tick only fires every 10ms.)

Also, could you try lowering:
  /proc/sys/kernel/sched_wakeup_granularity_ns
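  (the value is in nanoseconds, so e.g. as root:
   echo 2000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
   would try 2ms - anything clearly below what it currently reads)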

>       * With CONFIG_FAIR_USER_SCHED enabled, X becomes basically
>         unusable with a niced CPU hog, with or without top running. I
>         don't know when this started, possibly when this option was
>         first introduced.

Srivatsa found an issue that might explain the very bad behaviour under
group scheduling. But I gather you're not at all interested in this
feature?

> FWIW, the patch below (which reverts commit
> 810e95ccd58d91369191aa4ecc9e6d4a10d8d0c8) restores 2.6.24 interactivity
> to the same level as 2.6.23 here with CONFIG_FAIR_USER_SCHED disabled
> (my previous report to the contrary was with CONFIG_FAIR_USER_SCHED
> enabled because I didn't yet realize the difference it makes), but I
> don't know if that's the real fix.
> 
> 
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index da7c061..a7cc22a 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -843,7 +843,6 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p)
>  	struct task_struct *curr = rq->curr;
>  	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
>  	struct sched_entity *se = &curr->se, *pse = &p->se;
> -	unsigned long gran;
>  
>  	if (unlikely(rt_prio(p->prio))) {
>  		update_rq_clock(rq);
> @@ -866,11 +865,8 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p)
>  		pse = parent_entity(pse);
>  	}
>  
> -	gran = sysctl_sched_wakeup_granularity;
> -	if (unlikely(se->load.weight != NICE_0_LOAD))
> -		gran = calc_delta_fair(gran, &se->load);
>  
> -	if (pse->vruntime + gran < se->vruntime)
> +	if (pse->vruntime + sysctl_sched_wakeup_granularity < se->vruntime)
>  		resched_task(curr);
>  }

Thread overview: 33+ messages
2008-01-18 12:34 ppc32: Weird process scheduling behaviour with 2.6.24-rc Michel Dänzer
2008-01-22 14:56 ` Michel Dänzer
2008-01-23 12:18   ` Michel Dänzer
2008-01-23 12:36     ` Peter Zijlstra
2008-01-23 13:14       ` Michel Dänzer
2008-01-24  8:18         ` Benjamin Herrenschmidt
2008-01-24  8:46       ` Benjamin Herrenschmidt
2008-01-25 10:57         ` Michel Dänzer
2008-01-23 12:42     ` Peter Zijlstra
2008-01-25  6:54       ` Benjamin Herrenschmidt
2008-01-25  7:03         ` Benjamin Herrenschmidt
2008-01-25  7:25           ` Benjamin Herrenschmidt
2008-01-25  8:50             ` Peter Zijlstra
2008-01-26  4:07               ` Srivatsa Vaddagiri
2008-01-26  4:13                 ` Benjamin Herrenschmidt
2008-01-26  5:07                   ` Srivatsa Vaddagiri
2008-01-26  5:15                     ` Benjamin Herrenschmidt
2008-01-26  9:26                       ` Srivatsa Vaddagiri
2008-01-26  5:07                   ` Srivatsa Vaddagiri
2008-01-27 16:13                     ` Michel Dänzer
2008-01-28  4:25                       ` Benjamin Herrenschmidt
2008-01-28  8:16                         ` Michel Dänzer
2008-01-28  8:50                       ` Peter Zijlstra [this message]
2008-01-28  9:14                         ` Michel Dänzer
2008-01-28 12:11                           ` Srivatsa Vaddagiri
2008-01-28 12:32                         ` Ingo Molnar
2008-01-28 12:53                           ` Peter Zijlstra
2008-01-28 12:56                             ` Ingo Molnar
2008-01-29 10:14                               ` Michel Dänzer
2008-01-28 13:11                           ` Srivatsa Vaddagiri
2008-01-25 11:34         ` Michel Dänzer
2008-01-25 15:04           ` Michel Dänzer
2008-01-25 21:10             ` Benjamin Herrenschmidt
