From: Con Kolivas <kernel@kolivas.org>
To: Bernardo Innocenti <bernie@develer.com>
Cc: Arjan van de Ven <arjanv@redhat.com>,
Nalin Dahyabhai <nalin@redhat.com>,
lkml <linux-kernel@vger.kernel.org>
Subject: Re: RFA: Changing scheduler quantum (Was: REQUEST: OpenLDAP 2.3.7)
Date: Mon, 19 Sep 2005 10:46:59 +1000
Message-ID: <200509191046.59801.kernel@kolivas.org>
In-Reply-To: <432DE1C9.5050809@develer.com>
On Mon, 19 Sep 2005 07:53, Bernardo Innocenti wrote:
> Con Kolivas wrote:
> > On Sun, 18 Sep 2005 21:37, Bernardo Innocenti wrote:
> >>The DEF_TIMESLICE of 400ms looks a bit too gross for
> >>most applications and the maximum 800ms is just
> >>ridiculously high.
> >
> > Not quite.
> >
> > The default timeslice of nice 0 tasks is 100ms. The timeslice is not
> > altered the way you have read sched.c. It is altered thus:
> > 1. For 'nice' levels it varies from 5ms at nice 19 to 800ms at nice -20.
> > 2. For interactive tasks, it is cut up into smaller pieces down to 10ms
> > and round robins with other tasks at the same dynamic priority, but still
> > is based on the nice levels for the full length of cpu time before
> > expiration overall.
Please do not cc lkml together with mailing lists that auto-reply with "your
email is awaiting moderator approval".
> I see. Then there must be something else to explain
> the behavior I'm observing with slapd.
>
> Each and every call to sched_yield() makes the process
> sleep for over *50ms* while a "nice make bootstrap" is
> running in the background:
Why this preoccupation with how long sched_yield takes? We've already
established that it takes a variable unpredictable (yet long) time for
SCHED_NORMAL tasks. No, cancel that question or we'll start having people
tell us what the kernel should do all over again.
You're almost certainly seeing the effect of fork during 'make bootstrap':
multiple tasks are running on the active runqueue prior to expiration.
SCHED_NORMAL tasks that have called sched_yield() will keep yielding until
nothing else on the active runqueue wants cpu time.
> Actually, I'm now noticing that several slapd threads were
> involved here. Depending on how strace handles relative
> timestamps of multiple processes, it may mean both 8780 and
> 8781 slept too much, or just 8781 did and 8780 was quick.
>
> Any idea? I'm planning to patch my kernel to print the
> time_slice value in /proc/*/stat. This way I can check
> it's being computed as intended for both slapd and gcc.
Feel free to do as much checking on kernel code as you like.
Cheers,
Con
Thread overview (4+ messages):
2005-09-18 11:37 ` RFA: Changing scheduler quantum (Was: REQUEST: OpenLDAP 2.3.7) Bernardo Innocenti
2005-09-18 11:44 ` Con Kolivas
2005-09-18 21:53 ` Bernardo Innocenti
2005-09-19  0:46 ` Con Kolivas [this message]