public inbox for linux-pm@vger.kernel.org
* Re: Adding PM QoS parameters
@ 2009-04-30 12:28 Patrick Bellasi
  0 siblings, 0 replies; 14+ messages in thread
From: Patrick Bellasi @ 2009-04-30 12:28 UTC (permalink / raw)
  To: linux-pm



mark gross <mgross <at> linux.intel.com> writes:

> I don't want to see PM-QoS or constraint based PM to degenerate into a
> DPM OppPoint type of thing.  This statement reads that you are not
> aggregating the PMQoS requests in a sensible manner.  (i.e. attempting
> to code DPM styled PM using pmqos interfaces)
>
> One thing that is core to PMQoS and Constraint based PM is that there is
> a assumed partial ordering of the PM states.

Is that assumption really feasible?

Of course each device can define a _local_ partial ordering of its own power
states, but what can be assumed about the system-wide power state?
If we consider device interdependencies, it may be that a _local_
optimization has an indirect impact on some other device's performance, and
thus the system-wide state could turn out not to be the overall optimal one.

Let us consider two devices (D1 and D2) whose _local_ optimization policies
are both influenced by the same QoS parameter (C1).
Let us also consider someone asserting a constraint (C1 < c) on that
parameter.
It could happen, for instance, that D1 is able to fulfill this requirement by
reconfiguring itself into a compatible operating mode, while D2 cannot
respect the constraint.
In this case it could happen that, finally, the required QoS level (C1 < c)
cannot be granted by the system (i.e. due to D2's inability to satisfy it);
nevertheless D1's local policy has configured its device to work at the
required service level... at the end, perhaps, we spend "more power for
nothing".

This is just a toy example, and we are still trying to map it onto a real
scenario, but we actually have the feeling that the "composition" of locally
optimal configurations (based on local partial orderings) may not be
sufficient to obtain a globally optimal configuration as well.
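The D1/D2 scenario above can be sketched in a few lines of userspace C; all
names here (struct dev_caps, pick_local_mode, the mode tables) are invented
for illustration and are not part of the pm_qos API:

```c
#include <assert.h>
#include <limits.h>

/*
 * Toy model: every device exposes a set of operating modes, each
 * delivering some value of the QoS parameter C1 at some power cost.
 */
struct dev_caps {
	int n_modes;
	const int *c1_of_mode;		/* C1 value delivered in each mode */
	const int *power_of_mode;	/* power cost of each mode */
};

/* Locally optimal policy: cheapest mode satisfying C1 < c, or -1. */
static int pick_local_mode(const struct dev_caps *d, int c)
{
	int best = -1, best_power = INT_MAX;

	for (int i = 0; i < d->n_modes; i++)
		if (d->c1_of_mode[i] < c && d->power_of_mode[i] < best_power) {
			best = i;
			best_power = d->power_of_mode[i];
		}
	return best;
}

/* D1 can meet C1 < 10 only in its expensive mode; D2 cannot at all. */
static const int d1_c1[] = { 20, 5 }, d1_pw[] = { 1, 4 };
static const int d2_c1[] = { 30, 15 }, d2_pw[] = { 1, 3 };
static const struct dev_caps d1 = { 2, d1_c1, d1_pw };
static const struct dev_caps d2 = { 2, d2_c1, d2_pw };
```

With this data, pick_local_mode(&d1, 10) selects D1's expensive mode while
pick_local_mode(&d2, 10) returns -1: D1 has already paid the extra power even
though the system-wide constraint C1 < 10 is infeasible.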

> Absolute or specified performance settings are explicitly not part of
> PMQoS.

I agree with your view of what PMQoS should be: it should definitely support
a distributed control model. In this model each driver has a local
optimization policy, usually aimed at reducing power consumption, and PMQoS
just delivers some information system-wide about the expected QoS, in order
to constrain the local policies.
Therefore, such a framework will not be in charge of directly specifying
performance settings in the DPM style.

However, I think it should also provide some kind of support for the
identification of feasible system-wide optimal configurations.
We have some ideas on how such support could be effectively implemented, and
in this sense, of course, every contribution that comes from this discussion
is welcome.
Certainly our idea is not to overcome the best-effort approach of pm_qos,
which is actually also at the base of its simplicity, but at least to
additionally provide support for a "distributed agreement" approach.
Such an approach, even if generally not necessary, could be better exploited
in some specific embedded application contexts, e.g. on complex,
multi-functional, new-generation SoC-based multimedia mobile devices.


> > Once a requested level is achieved the requester should be
> > notified for possible reconfiguration. It could be via an
> > optional registration.
>
> performance / power state entry notification?
> I think we should be careful with that idea.

A notification after entering a state could be difficult, and could also
imply energy wastage whenever the required state turns out not to be
feasible.
Perhaps it would be simpler to first verify whether the required performance
level can be granted and _only then_ either:
- notify the affected subsystems to grant the required service level, or
- notify the service-level requester that the required configuration is not
feasible.
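A minimal sketch of this verify-then-notify idea (all names invented, not an
existing kernel interface): each affected subsystem exposes a feasibility
check and an apply hook, and nothing is reconfigured unless everybody agrees.

```c
#include <assert.h>

/* Each affected subsystem: a feasibility check plus an apply hook. */
struct subsystem {
	int (*can_grant)(int level);
	void (*apply)(int level);
};

/*
 * Returns 1 and applies the level everywhere only if every subsystem
 * can grant it; otherwise returns 0 (the requester should then be
 * notified of the infeasibility) without touching any device.
 */
static int request_level(struct subsystem *subs, int n, int level)
{
	for (int i = 0; i < n; i++)
		if (!subs[i].can_grant(level))
			return 0;	/* infeasible: nobody reconfigured */
	for (int i = 0; i < n; i++)
		subs[i].apply(level);	/* feasible: commit everywhere */
	return 1;
}

/* Two dummy subsystems: one accepts levels up to 100, one up to 50. */
static int applied_count;
static int s1_can(int l) { return l <= 100; }
static int s2_can(int l) { return l <= 50; }
static void apply_noop(int l) { (void)l; applied_count++; }
static struct subsystem subs[] = { { s1_can, apply_noop },
				   { s2_can, apply_noop } };
```

Here request_level(subs, 2, 40) succeeds and commits on both subsystems,
while request_level(subs, 2, 80) fails up front, so no device is
reconfigured and no power is wasted on an unreachable service level.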


> > We could start with a smaller set, e.g.:
> > - Interrupt latency (in lieu of DMA latency)
> > - Sleep latency (to control sleep in absence of cpuidle)
> > - Cpu frequency
> > - Cpu voltage
>
> These are bottom up performance parameters.  Lets first go top down and
> keep in mind the partial ordering component of the system.  You can only
> constrain a min or max value not both.  Typically you constrain the
> lowest platform power a parameter is allowed to enter.

If we consider sufficiently abstract parameters (top-down approach), it could
be difficult to understand "what" maps to lower power consumption.
Perhaps we could identify that mapping if we considered just one device.
But if we consider a whole platform, and the interdependencies among its
devices, then it may be complex to foresee how a local optimization impacts
system-wide power consumption.
For instance, the "race-to-idle" approach adopted by the ondemand cpufreq
governor shows that a locally non-optimal policy decision can have
system-wide benefits (e.g. longer idle times).

> If you are looking to constrain the highest power setting the platform
> can go too, then you are talking PMQoS when really thinking DPM.

I agree.

> > I am not sure if I understood this completely, but I believe
> > that abstract -> specific mapping should be done at system level.
> > Letting drivers define them, may not be portable; and might lead
> > to more confusion.
>
> Parameters need to be defined at the application (solution) level and
> exposed to the drivers to enable them to make the best choice.  Not the
> other way around.  So if by system level you mean "solution" level then
> we agree, but over email I'm not sure if we do.

Along with "solution"-level parameters (abstract parameters), it could be
interesting to also have system-level parameters.
These would be defined by drivers and platform code in order to track
devices' functional dependencies.

Let's take an example:
- solution param: mems-sample-rate
    an abstract parameter used, for instance, by applications to
    assert a QoS level
    (e.g. how frequently I expect to read the mems' accelerations)
- system-level param: i2c-bus-bandwidth
    - mems driver: asserts a constraint on that param to translate the
        abstract parameter request according to the specific device
        capabilities (e.g. we are attached to the I2C bus)
    - platform code: specifies the platform-specific I2C channel
        corresponding to that device. This can be done, for instance, by
        defining the system-level param as a device resource.

Such a solution should allow abstract "solution" parameters to be translated
into platform-specific ones in a sufficiently general and portable way.
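As an illustration only, the driver side of this translation could look like
the following sketch; the conversion constants, the "i2c-1" channel name, and
all function names are assumptions, not real driver code:

```c
#include <assert.h>

/*
 * Driver-side translation from the abstract "solution" parameter
 * (mems-sample-rate, in Hz) to the system-level parameter
 * (i2c-bus-bandwidth, in bits/s).
 */
#define MEMS_SAMPLE_BITS	48	/* e.g. 3 axes x 16-bit samples */
#define I2C_FRAMING_BITS	20	/* e.g. addressing + ack overhead */

/* Device-specific knowledge: bus bandwidth needed for a sample rate. */
static long mems_rate_to_i2c_bw(long sample_rate_hz)
{
	return sample_rate_hz * (MEMS_SAMPLE_BITS + I2C_FRAMING_BITS);
}

/*
 * The platform code binds the device to a concrete channel (here
 * "i2c-1"), e.g. by exporting the system-level param as a device
 * resource; the driver then asserts the derived constraint on it.
 */
struct bus_constraint {
	const char *channel;
	long min_bandwidth;
};

static struct bus_constraint mems_assert_qos(long sample_rate_hz)
{
	struct bus_constraint c = { "i2c-1",
				    mems_rate_to_i2c_bw(sample_rate_hz) };
	return c;
}
```

The point is that the application only speaks mems-sample-rate; the bandwidth
arithmetic stays inside the driver and the channel binding inside the
platform code.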

> > Generic params that impact the apps could/should be an array;
> > while arch/plat specific ones could be a linked list.
>
> I'll look at this.

I would personally prefer a common solution: "solution" params could be
statically defined and pre-loaded into the same dynamic data structure that
will also host the platform-specific params.
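A toy sketch of such a common registry follows; the first two parameter names
mirror the pm_qos misc-device parameters, while the registry itself
(MAX_PARAMS, register_param, find_param) is invented for illustration:

```c
#include <assert.h>
#include <string.h>

/*
 * One structure holds both kinds of parameters: the generic
 * "solution" params are pre-loaded at init, platform-specific
 * ones are registered into the same table later.
 */
#define MAX_PARAMS 16

struct qos_param {
	const char *name;
};

static struct qos_param params[MAX_PARAMS] = {
	{ "cpu_dma_latency" },	/* generic, pre-loaded */
	{ "network_latency" },	/* generic, pre-loaded */
};
static int n_params = 2;

/* Platform code adds its own params through the very same registry. */
static int register_param(const char *name)
{
	if (n_params >= MAX_PARAMS)
		return -1;
	params[n_params].name = name;
	return n_params++;
}

static int find_param(const char *name)
{
	for (int i = 0; i < n_params; i++)
		if (strcmp(params[i].name, name) == 0)
			return i;
	return -1;
}
```

Lookups then work identically for a pre-loaded generic param and for one the
platform registered at runtime, which is the point of the common structure.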

> Keep in mind I am against the DPM-ification of this design.  At a basic
> level the Linux OS is a best effort OS and although you can always bang
> some CR's to set power states, that isn't useful as platform independent
> or common code.

That's true, but for some time now Linux has been courting real-time and
embedded systems. On such resource-constrained systems, searching for
system-wide optimal solutions that are "granted as far as possible" is one
of the main challenges.
If we are able to extend pm_qos to be suitable not only for general-purpose
systems but also to fit these application scenarios well, I think we will
see interesting returns on our efforts.

> --mgross
Patrick

-- 
#include <best/regards.h>

Patrick Bellasi <bellasi at elet dot polimi dot it>
PhD student at Politecnico di Milano


GnuPG     0x72ABC1EE (keyserver.linux.it)
    pub      1024D/72ABC1EE 2003-12-04
    Key fingerprint = 3958 7B5F 36EC D1F8 C752
                             9589 C3B7 FD49 72AB C1EE

* Adding PM QoS parameters
@ 2009-04-02 20:25 Premi, Sanjeev
  2009-04-06 21:12 ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Premi, Sanjeev @ 2009-04-02 20:25 UTC (permalink / raw)
  To: linux-pm@lists.linux-foundation.org

I have just started looking at the PM QoS implementation; I came across this
text in "pm_qos_interface.txt"

[quote]
The infrastructure exposes multiple misc device nodes one per implemented
parameter.  The set of parameters implement is defined by pm_qos_power_init()
and pm_qos_params.h.  This is done because having the available parameters
being runtime configurable or changeable from a driver was seen as too easy to
abuse.
[/quote]

Though I have understood the intent, I feel it may also be limiting the use
in cases where there is a genuine need specific to an arch/platform.

Can we allow the number of these params to grow up to a reasonable limit
(say 8)?
If an arch/platform does not specify more params, everything remains the
same; but we get an opportunity to add arch/platform-specific requirements.

Not sure if this has already been discussed earlier, but I would like to
hear more thoughts.

Best regards,
Sanjeev


end of thread, other threads:[~2009-04-30 12:28 UTC | newest]

Thread overview: 14+ messages
-- links below jump to the message on this page --
     [not found] <mailman.459.1240339694.10269.linux-pm@lists.linux-foundation.org>
2009-04-21 20:02 ` Adding PM QoS parameters Premi, Sanjeev
2009-04-22 16:35   ` mark gross
2009-04-27 12:41   ` Matteo Carnevali
2009-04-30 12:28 Patrick Bellasi
  -- strict thread matches above, loose matches on Subject: below --
2009-04-02 20:25 Premi, Sanjeev
2009-04-06 21:12 ` mark gross
2009-04-07  9:00   ` Premi, Sanjeev
2009-04-09 18:57     ` mark gross
2009-04-14 12:24       ` Patrick Bellasi
2009-04-15 18:35         ` mark gross
2009-04-21  8:08           ` Derkling
2009-04-21 23:43             ` mark gross
2009-04-27 12:50               ` Matteo Carnevali
2009-04-27 20:46                 ` mark gross
