* PM-QOS hot path discussion.
@ 2009-12-23 17:27 640E9920
2010-01-26 17:36 ` Ai Li
0 siblings, 1 reply; 4+ messages in thread
From: 640E9920 @ 2009-12-23 17:27 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1.1: Type: text/plain, Size: 2550 bytes --]
This is a bit of a rant and a call for better collaboration on pm-qos
applications.
Recently I've done some work to modify PM-QOS to implement a
handle-based interface, to avoid some performance issues identified by
others, directly and indirectly, regarding the list and string compares
baked into the initial pm-qos implementation.
BTW I'm still disappointed in the folks who raised the pm-qos
performance issues only indirectly. Perhaps better collaboration could
be had in the future.
Anyway, at the time of the latest direct collaboration on the
performance, the problem was hitting the request API from a driver hot
path. I worried that making the API handle based still would not
address the scaling issue when more than a few pm-qos requests need to
be aggregated on every parameter change, i.e. making the API handle
based may not be enough.
I did the patch anyway, but as I feared this scaling issue came up. I
don't have specifics other than an email saying I was correct to worry.
Ok, so now what do we do, and why?
Now what seems to be asked for is something akin to the android
WAKE_LOCK_IDLE "thing".
Last night I gave some thought to this. One could put a boolean or
bit-mask flag into the aggregated value; when polled, the poller would
know not to go to any lower-performing state, or to kick the system
into higher performance states. Setting the flag would only happen
from kernel APIs, and no notification trees would be called as it gets
toggled. (that would kill fast path performance)
But as I thought about it, it became clear the correct behavior is
really a function of the parameter class. If the parameter is
cpu_dma_latency, then the flag would simply be used to disable CPUIDLE
from entering any high-latency idle states. However, such a flag for
network bandwidth (a flag hit in some fast code path) shouldn't be
telling the NIC to kick up from 100Mb/s to 1000Mb/s or back.
So what would a fast path pm-qos parameter look like and how would it be
used, outside of CPUIDLE?
Is this fast path problem even worth solving? Looking at the android
kernel for clients of WAKE_LOCK_IDLE makes me think perhaps not.
If there are users of this thing, they are not public, and I don't
think I should care about them unless they are collaborating well.
For the more shy folks on the list, this is an invitation to collaborate
on some evolutionary changes to pm-qos by identifying problems and
applications you have that need help.
--mgross
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: PM-QOS hot path discussion.
2009-12-23 17:27 PM-QOS hot path discussion 640E9920
@ 2010-01-26 17:36 ` Ai Li
2010-01-27 2:00 ` Mike Chan
0 siblings, 1 reply; 4+ messages in thread
From: Ai Li @ 2010-01-26 17:36 UTC (permalink / raw)
To: '640E9920', linux-pm
> Now what seems to be asked for is something akin to the
> android WAKE_LOCK_IDLE "thing".
>
> Last night I gave some thought to this. One could put a boolean
> or bit-mask flag into the aggregated value; when polled, the
> poller would know not to go to any lower-performing state, or
> to kick the system into higher performance states. Setting the
> flag would only happen from kernel APIs, and no notification
> trees would be called as it gets toggled. (that would kill fast
> path performance)
Android WAKE_LOCK_IDLE is a very coarse setting. It does not take
advantage of the multiple performance states with their corresponding
latency values. WAKE_LOCK_IDLE potentially disallows some low-power
states whose latencies may be quite acceptable. In contrast, an
integer setting allows a better match between the requested latency
value and the appropriate low-power states. pm_qos already provides
the integer setting. IMO, the question is how to enable hot-path flow
through pm_qos...
> But as I thought about it, it became clear the correct behavior
> is really a function of the parameter class. If the parameter
> is cpu_dma_latency, then the flag would simply be used to disable
> CPUIDLE from entering any high-latency idle states. However,
> such a flag for network bandwidth (a flag hit in some fast code
> path) shouldn't be telling the NIC to kick up from 100Mb/s to
> 1000Mb/s or back.
>
> So what would a fast path pm-qos parameter look like and how
> would it be used, outside of CPUIDLE?
>
Stating it another way: the correct hot path behavior for each
parameter class depends on what the parameter class is. To
accommodate this, we could add a hot_path_fn to struct pm_qos_object.
It would specify the hot path behavior for the parameter class. In
the hot path flow, update_target() would call
pm_qos_array[pm_qos_class]->hot_path_fn() instead of the notifier
chain.
With regard to NICs, I'm not sure what hot_path_fn() would do. Maybe
the default hot_path_fn for PM_QOS_NETWORK_THROUGHPUT does nothing;
the platform code or NIC driver code can register a different
hot_path_fn.
~Ai
* Re: PM-QOS hot path discussion.
2010-01-26 17:36 ` Ai Li
@ 2010-01-27 2:00 ` Mike Chan
2010-01-27 19:08 ` Ai Li
0 siblings, 1 reply; 4+ messages in thread
From: Mike Chan @ 2010-01-27 2:00 UTC (permalink / raw)
To: Ai Li; +Cc: linux-pm, 640E9920
On Tue, Jan 26, 2010 at 9:36 AM, Ai Li <aili@codeaurora.org> wrote:
>> Now what seems to be asked for is something akin to the
>> android WAKE_LOCK_IDLE "thing".
>>
>> Last night I gave some thought to this. One could put a boolean
>> or bit-mask flag into the aggregated value; when polled, the
>> poller would know not to go to any lower-performing state, or
>> to kick the system into higher performance states. Setting the
>> flag would only happen from kernel APIs, and no notification
>> trees would be called as it gets toggled. (that would kill fast
>> path performance)
>
> Android WAKE_LOCK_IDLE is a very coarse setting. It does not take
> advantage of the multiple performance states with their corresponding
> latency values. WAKE_LOCK_IDLE potentially disallows some low
> performance states when their latencies may be quite acceptable. In
> contrast, an integer setting would allow a better match between the
> requested latency value and the appropriate low power states. pm_qos
> already provides the integer setting. IMO, the question is how to
> enable hot-path flow through pm_qos...
>
There are a few lower power states on msm that we don't differentiate
between when taking an idle lock.
We don't use idle locks on omap. The resource framework in the omap
tree is sufficient, with the latency requirement calls in
resource34xx.c/h.
I might be wrong here, but it sounds like you're trying to solve a
similar problem by working it into the pm-qos framework? Or am I
confused about what the subject of this thread is?
-- Mike
>
>> But as I thought about it, it became clear the correct behavior
>> is really a function of the parameter class. If the parameter
>> is cpu_dma_latency, then the flag would simply be used to disable
>> CPUIDLE from entering any high-latency idle states. However,
>> such a flag for network bandwidth (a flag hit in some fast code
>> path) shouldn't be telling the NIC to kick up from 100Mb/s to
>> 1000Mb/s or back.
>>
>> So what would a fast path pm-qos parameter look like and how
>> would it be used, outside of CPUIDLE?
>>
>
> Stating it in another way: the correct hot path behavior for each
> parameter class is dependent on what the parameter class is. To
> accommodate this, we could add a hot_path_fn in struct pm_qos_object.
> It specifies the hot path behavior for the parameter class. In the
> hot path flow, update_target() will call
> pm_qos_array[pm_qos_class]->hot_path_fn() instead of the notifier
> chain.
>
> With regards to NIC cards, I'm not sure what hot_path_fn() would
> perform. Maybe the default hot_path_fn for PM_QOS_NETWORK_THROUGHPUT
> does nothing; the platform code or NIC card code can register a
> different hot_path_fn.
>
> ~Ai
* Re: PM-QOS hot path discussion.
2010-01-27 2:00 ` Mike Chan
@ 2010-01-27 19:08 ` Ai Li
0 siblings, 0 replies; 4+ messages in thread
From: Ai Li @ 2010-01-27 19:08 UTC (permalink / raw)
To: 'Mike Chan'; +Cc: linux-pm, '640E9920'
> There are a few lower power states on msm that we don't
> differentiate between when taking an idle lock.
>
> We don't use idle locks on omap. The resource framework in the
> omap tree is sufficient with the latency requirement calls in
> resource34xx.c/h
>
> I might be wrong here but it sounds like you're trying to solve
> a similar problem but working it into the pm-qos framework? Or
> am I confused on what the subject of this thread is?
The pm_qos framework handles latency requirements already and can be
used on all platforms. Our code at codeaurora.org for MSM chips uses
pm_qos for latency. This thread discusses improving efficient
execution of pm_qos in the hot path, i.e. when pm_qos is called very
frequently. The improvement would apply to all pm_qos parameters,
not just latency. The latency implementations in pm_qos and in
android's WAKE_LOCK_IDLE are references for what has been done
already.
There is a related discussion in the thread "[linux-pm] platform
specific pm_qos parameters". That thread focuses on how to add
platform-specific parameters to pm_qos, i.e. creating a mechanism so
that each platform (e.g. x86, OMAP, MSM) can append its own set of
pm_qos parameters and associated behavior.
~Ai