public inbox for linux-pm@vger.kernel.org
* PM-QOS hot path discussion.
@ 2009-12-23 17:27 640E9920
  2010-01-26 17:36 ` Ai Li
  0 siblings, 1 reply; 4+ messages in thread
From: 640E9920 @ 2009-12-23 17:27 UTC (permalink / raw)
  To: linux-pm



This is a bit of a rant and call for better collaboration on pm-qos
applications.

Recently I've done some work to modify PM-QOS to implement a handle
based interface, to avoid some performance issues identified by others,
indirectly and directly, regarding the list walks and string compares
baked into the initial pm-qos implementation.

BTW I'm still disappointed in the folks that indirectly raised
pm-qos performance issues.  Perhaps better collaboration could be had in
the future.

Anyway, at the time of the latest direct collaboration about the
performance problem of hitting the request API from a driver hot path, I
worried that making the API handle based would still not address the
scaling issue when more than a few pm-qos requests need to be aggregated
on every parameter change.  i.e. making the API handle based may not be
enough.

I did the patch anyway, but as I feared this scaling issue came up.  I
don't have specifics other than an email saying I was correct to worry.
Ok, so now what do we do, and why?
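To make the concern concrete, here is a minimal userspace model (all
names hypothetical, not the actual kernel interface) of why a handle
hands the caller its request directly, killing the name lookup and
string compares, yet recomputing the aggregate still walks every
outstanding request on each update:

```c
#include <limits.h>
#include <stddef.h>

/* Hypothetical model of one pm-qos parameter whose requests are
 * aggregated by taking the maximum requested value. */
struct qos_request {
	int value;
	struct qos_request *next;
};

struct qos_param {
	struct qos_request *requests;	/* all outstanding requests */
	int aggregate;			/* cached max over all requests */
};

/* With a handle the caller skips any name lookup, but the aggregate
 * must still be recomputed over every request: O(n) on the hot path. */
static void qos_update(struct qos_param *p, struct qos_request *handle,
		       int new_value)
{
	struct qos_request *r;
	int max = INT_MIN;

	handle->value = new_value;
	for (r = p->requests; r; r = r->next)
		if (r->value > max)
			max = r->value;
	p->aggregate = max;
}
```

So a driver updating its request from a hot path pays for every other
request on the parameter, which is exactly the scaling worry above.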

What seems to be asked for now is something akin to the android
WAKE_LOCK_IDLE "thing".

Last night I gave some thought to this.  One could put a boolean or bit
mask flag into the aggregated value; when polled, the poller would know
not to go to any lower performing state, or to kick the system into
higher performance states.  Setting the flag would only happen from
kernel APIs, and no notification trees would be called as it gets
toggled.  (that would kill fast path performance)
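A rough sketch of that idea as a userspace model with C11 atomics
(hypothetical names, not actual kernel code): setting the flag is a
plain atomic store, the poller's read is cheap, and neither side runs
the notifier chain; only ordinary slow-path request updates would.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical fast-path flag folded into a parameter's aggregated
 * state.  Toggling it deliberately bypasses the notifier chain. */
static atomic_bool fastpath_busy;
static int notifier_calls;	/* counts notifier-chain invocations */

static void notify_chain(void)
{
	notifier_calls++;	/* stands in for the notification tree */
}

/* Kernel-internal fast-path setter: just a store, no notification. */
static void qos_fastpath_set(bool on)
{
	atomic_store_explicit(&fastpath_busy, on, memory_order_release);
}

/* The idle-loop poller: may the system enter a deep (slow) state? */
static bool deep_idle_allowed(void)
{
	return !atomic_load_explicit(&fastpath_busy, memory_order_acquire);
}

/* An ordinary request update would still walk the notifier chain. */
static void qos_slowpath_update(int value)
{
	(void)value;
	notify_chain();
}
```

The asymmetry is the whole point: the flag path is a single store/load,
while anything that needs to fan out to listeners stays on the slow path.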

But as I thought about it, it became clear the correct behavior is
really a function of the parameter class.  If the parameter is
cpu_dma_latency, then the flag would simply be used to disable CPUIDLE
from entering any high latency idle states.  However, such a flag for
network bandwidth (a flag hit in some fast code path) shouldn't be
telling the NIC to kick up from 100Mb/s to 1000Mb/s or go back.
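One way to express "the right behavior is a function of the parameter
class" would be a per-class hook deciding whether the flag means
anything to that class's consumer.  Again a hypothetical sketch, not an
existing interface:

```c
#include <stdbool.h>

/* Hypothetical per-class hook: each pm-qos class decides what a raised
 * fast-path flag means for its consumer. */
struct qos_class {
	const char *name;
	bool (*flag_matters)(void);	/* should the consumer react? */
};

/* cpu_dma_latency: a raised flag vetoes high-latency idle states. */
static bool cpu_dma_latency_flag(void)
{
	return true;
}

/* network throughput: re-clocking a NIC between 100Mb/s and 1000Mb/s
 * on a fast-path flag makes no sense, so this class ignores it. */
static bool network_tp_flag(void)
{
	return false;
}

static const struct qos_class classes[] = {
	{ "cpu_dma_latency",    cpu_dma_latency_flag },
	{ "network_throughput", network_tp_flag },
};
```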

So what would a fast path pm-qos parameter look like and how would it be
used, outside of CPUIDLE?

Is this fast path problem even worth solving?  Looking at the android
kernel for clients of WAKE_LOCK_IDLE makes me think, perhaps not.
If there are users of this thing, they are not public, and I don't
think I should care about them unless they are collaborating well.

For the more shy folks on the list, this is an invitation to collaborate
on some evolutionary changes to pm-qos by identifying problems and
applications you have that need help.

--mgross





Thread overview: 4+ messages
2009-12-23 17:27 PM-QOS hot path discussion 640E9920
2010-01-26 17:36 ` Ai Li
2010-01-27  2:00   ` Mike Chan
2010-01-27 19:08     ` Ai Li
