From: Matthew Garrett <mjg59@srcf.ucam.org>
To: Amit Kucheria <amit.kucheria@nokia.com>
Cc: ipw2100-devel@lists.sourceforge.net, linux-pm@lists.osdl.org,
netdev@vger.kernel.org
Subject: Re: [linux-pm] [RFC] Runtime power management on ipw2100
Date: Wed, 31 Jan 2007 09:48:58 +0000
Message-ID: <20070131094858.GA23842@srcf.ucam.org>
In-Reply-To: <1170234787.6746.22.camel@amit-laptop>

On Wed, Jan 31, 2007 at 11:13:07AM +0200, Amit Kucheria wrote:
> What is the latency in changing between different PCI power states for
> peripherals?

I'm not sure in the general case, but the power-down path for the
ipw2100 involves a static wait of 100ms in ipw2100_hw_stop_adapter().

> Would it be possible e.g. to put the peripheral into a low power state
> after each Tx/Rx (with reasonable hysteresis)?

Most wireless drivers support some degree of power management at this
scale, but (in ipw2100 at least) it's implemented in the firmware, so I
have absolutely no idea what it's actually doing.

> <snip>
>
> > The situation is slightly more complicated for wired interfaces. As
> > previously discussed, we potentially want three interface states (on,
> > low power, off) with the intermediate one powering down as much of the
> > hardware as possible while still implementing link detection.
>
> And this low power state is what the HW should be in all the time,
> except when it has work to do.

PCI seems to require a delay of 10ms when sequencing from D3 to D0,
which probably isn't acceptable latency for an "up" state. While there's
definitely a benefit to the sort of PM you're describing (it's a model
we've already started using on the desktop as far as the CPU goes), I
think we still want to be able to expose as much power saving as
possible.
--
Matthew Garrett | mjg59@srcf.ucam.org
Thread overview: 19+ messages
2007-01-31 7:52 [RFC] Runtime power management on ipw2100 Matthew Garrett
2007-01-31 9:13 ` Amit Kucheria
2007-01-31 9:48 ` Matthew Garrett [this message]
2007-01-31 11:04 ` Amit Kucheria
2007-01-31 11:13 ` Andi Kleen
2007-01-31 10:27 ` [linux-pm] " Matthew Garrett
2007-01-31 10:48 ` Andi Kleen
2007-01-31 11:53 ` Amit Kucheria
2007-01-31 13:04 ` Pavel Machek
2007-01-31 13:12 ` [linux-pm] " Oliver Neukum
2007-01-31 13:13 ` samuel
2007-01-31 13:24 ` Amit Kucheria
2007-01-31 13:44 ` [linux-pm] " Pavel Machek
2007-01-31 14:11 ` Matthew Garrett
2007-01-31 10:39 ` Pavel Machek
2007-02-01 1:47 ` [Ipw2100-devel] " Zhu Yi
2007-02-06 21:44 ` Matthew Garrett
2007-02-08 9:01 ` Zhu Yi
2007-02-19 21:08 ` [linux-pm] [Ipw2100-devel] " David Brownell