From: Mark Gross <mgross@linux.intel.com>
To: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: linux-pm@lists.linux-foundation.org
Subject: Re: pm qos and cpufreq interaction [Was: pm qos infrastructure and interface]
Date: Thu, 25 Oct 2007 13:53:47 -0700 [thread overview]
Message-ID: <20071025205347.GA21681@linux.intel.com> (raw)
In-Reply-To: <20071025185428.GA1692@isilmar.linta.de>
On Thu, Oct 25, 2007 at 08:54:28PM +0200, Dominik Brodowski wrote:
> Hi Mark,
>
> On Wed, Oct 24, 2007 at 02:21:50PM -0700, Mark Gross wrote:
> > > On Thu, Oct 04, 2007 at 02:51:39PM -0700, Mark Gross wrote:
> > > What about cpu_throughput{_min,_max}, as being something considered to be
> > > proportional to the CPU frequency? This way, the cpufreq policy notifiers
> > > might be able to utilize the pm_qos infrastructure; but maybe even also the
> > > userspace interface (at least the min freq/max freq one)... Haven't thought
> > > this through, but maybe you (or someone else) has.
> >
> > I've only thought it through enough to choose to avoid cpufreq
> > interactions.
> >
> > Sadly core frequency is not proportional to throughput on X86
> > processors. I don't know how one would reliably quantify cpu throughput
> > in this context, other than defining latencies.
>
> Well it's not exactly throughput, but the CPU frequency surely has an
> influence on it and also affects the quality of the service provided...
This is true.  I just worry that going down this path would be a
rat-hole.  FWIW, I suppose one could define bogo-mips as the throughput
parameter and use it as a way for a CPUFREQ driver to constrain its
throttling.  That way, an application that knows it needs higher
throughput and can't tolerate the latencies of the governor changing
core voltage and frequency could register that requirement with PM_QOS.
In theory this could open the door to more aggressive power-saving
governor policies.  I can see that working well for specific workloads,
but not so much for desktop / general-purpose workloads.
>
> > I could see something like this to prevent cpufreq throttling at bad
> > times, but how common of an issue is this any more?
>
> Hopefully none :) I was just wondering whether this generalization would
> make sense in the big scheme of things (i.e. grand plan of unified power
> management)...
>
Maybe you are right.
Is anyone interested in creating a new or enhanced CPUFREQ governor
that takes advantage of a PM_QOS bogo-mips throughput parameter to
limit how low a P-state it will drop into?
If so, I'd be happy to work with you to see what we can accomplish.
--mgross
Thread overview: 7+ messages
[not found] <20071004215139.GA20078@linux.intel.com>
2007-10-11 5:17 ` pm qos infrastructure and interface Andrew Morton
[not found] ` <20071010221704.6e438c71.akpm@linux-foundation.org>
2007-10-11 15:08 ` Mark Gross
2007-10-11 15:38 ` Arjan van de Ven
2007-10-23 18:03 ` pm qos and cpufreq interaction [Was: pm qos infrastructure and interface] Dominik Brodowski
2007-10-24 21:21 ` Mark Gross
2007-10-25 18:54 ` Dominik Brodowski
2007-10-25 20:53 ` Mark Gross [this message]