From: Arjan van de Ven <arjan@infradead.org>
To: Frans Pop <elendil@planet.nl>
Cc: Mike Galbraith <efault@gmx.de>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <peterz@infradead.org>,
linux-wireless@vger.kernel.org
Subject: Re: [.32-rc3] scheduler: iwlagn consistently high in "waiting for CPU"
Date: Thu, 8 Oct 2009 08:09:36 -0700
Message-ID: <20091008080936.5f3b0e1b@infradead.org>
In-Reply-To: <200910081655.37485.elendil@planet.nl>
On Thu, 8 Oct 2009 16:55:36 +0200
Frans Pop <elendil@planet.nl> wrote:
> > It turns out that on x86, these two 'opportunistic' timers only
> > get checked when another "real" timer happens.
> > These opportunistic timers are meant to save power by hitchhiking on
> > other wakeups, so as to avoid triggering CPU wakeups of their own as
> > much as possible.
>
> This patch makes quite a difference for me. iwlagn and phy0 now
> consistently show ~10 ms or lower.
most excellent
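(Aside: the 'opportunistic' timers described above behave much like
deferrable kernel timers, which are not allowed to wake an idle CPU on
their own. A minimal sketch of that idea, using the modern
timer_setup()/TIMER_DEFERRABLE API rather than the 2.6.32-era one, with
hypothetical names, and not the actual scheduler code under discussion:

#include <linux/jiffies.h>
#include <linux/timer.h>

/* Hypothetical example; illustrates "hitchhiking on other wakeups" only. */
static struct timer_list opportunistic_timer;

static void opportunistic_cb(struct timer_list *t)
{
	/*
	 * Because the timer is deferrable, this runs only once the CPU is
	 * awake for some other reason; rearm for roughly a second from now.
	 */
	mod_timer(&opportunistic_timer, round_jiffies(jiffies + HZ));
}

static void opportunistic_init(void)
{
	/*
	 * TIMER_DEFERRABLE: an idle CPU is never woken just to service this
	 * timer; it gets handled the next time something else wakes the CPU.
	 */
	timer_setup(&opportunistic_timer, opportunistic_cb, TIMER_DEFERRABLE);
	mod_timer(&opportunistic_timer, round_jiffies(jiffies + HZ));
}

The power win is exactly the behaviour in the quote: the callback never
forces a wakeup of its own, which also means it can sit unserviced for a
long time if no "real" timer fires.)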
> I do still get occasional high latencies, but those are for things
> like "[rpc_wait_bit_killable]" or "Writing a page to disk", where I
> guess you'd expect them. Those high latencies are mostly only listed
> for "Global" and don't translate to individual processes.
and they're very different types of latencies, caused by disk and such.
> The ~10 ms I still get for iwlagn and phy0 (and sometimes higher (~30
> ms) for others like Xorg and artsd) is still "Scheduler: waiting for
> cpu". If it is actually due to (un)interruptible sleep, isn't that a
> misleading label? I directly associated that with scheduler latency.
it's actually the time between wakeup and running, as measured by the
scheduler statistics.
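(For reference: that wakeup-to-run delay is also exported per task, when
schedstats are enabled, as the second field of /proc/<pid>/schedstat:
time on the CPU in ns, time spent runnable but waiting for a CPU in ns,
and number of timeslices. A small, hypothetical userspace reader, not
part of latencytop itself:

#include <stdio.h>

int main(int argc, char **argv)
{
	unsigned long long on_cpu_ns, wait_ns, slices;
	char path[64];
	FILE *f;

	/* Read /proc/<pid>/schedstat, defaulting to the current process. */
	snprintf(path, sizeof(path), "/proc/%s/schedstat",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu", &on_cpu_ns, &wait_ns, &slices) != 3) {
		fprintf(stderr, "%s: unexpected format\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("%llu ns on CPU, %llu ns waiting for CPU, %llu timeslices\n",
	       on_cpu_ns, wait_ns, slices);
	return 0;
}

Running it against a pid (e.g. the Xorg pid) shows the cumulative wait;
latencytop reports essentially the same delay, but per wakeup rather
than as a running total.)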
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
Thread overview: 31+ messages
2009-10-05 13:00 [.32-rc3] scheduler: iwlagn consistently high in "waiting for CPU" Frans Pop
2009-10-05 14:13 ` Frans Pop
2009-10-05 14:24 ` Arjan van de Ven
2009-10-06 15:49 ` Frans Pop
2009-10-07 17:10 ` Frans Pop
2009-10-07 18:10 ` Mike Galbraith
2009-10-07 18:34 ` Frans Pop
2009-10-08 4:05 ` Mike Galbraith
2009-10-08 6:23 ` Mike Galbraith
2009-10-08 13:40 ` Arjan van de Ven
2009-10-08 14:13 ` Mike Galbraith
2009-10-08 14:54 ` Mike Galbraith
2009-10-08 14:55 ` Frans Pop
2009-10-08 15:09 ` Arjan van de Ven [this message]
2009-10-08 18:23 ` Mike Galbraith
2009-10-08 20:34 ` Markus Trippelsdorf
2009-10-09 3:35 ` Mike Galbraith
2009-10-09 3:51 ` Markus Trippelsdorf
2009-10-08 20:59 ` Frans Pop
2009-10-09 3:04 ` Mike Galbraith
2009-10-09 6:35 ` Mike Galbraith
2009-10-09 7:13 ` Peter Zijlstra
2009-10-09 7:55 ` Sedat Dilek
2009-10-09 8:06 ` Peter Zijlstra
2009-10-09 16:27 ` Frans Pop
2009-10-09 20:06 ` Mike Galbraith
2009-10-08 11:24 ` Mike Galbraith
2009-10-08 13:09 ` Frans Pop
2009-10-08 13:18 ` Mike Galbraith
2009-10-08 13:45 ` Arjan van de Ven
2009-10-08 14:15 ` Mike Galbraith