cpufreq.vger.kernel.org archive mirror
* RFC: Leave sysfs nodes alone during hotplug
@ 2014-07-03 21:59 Saravana Kannan
  2014-07-07 11:01 ` Viresh Kumar
  0 siblings, 1 reply; 10+ messages in thread
From: Saravana Kannan @ 2014-07-03 21:59 UTC (permalink / raw)
  To: cpufreq, Linux PM mailing list, Rafael J. Wysocki, Viresh Kumar
  Cc: linux-arm-msm@vger.kernel.org, linux-arm-kernel

The adding and removing of sysfs nodes in cpufreq causes a ton of pain. 
There's always some stability or deadlock issue every few weeks on our 
internal tree. We sync up our internal tree fairly often with the 
upstream cpufreq code. And more of these issues are popping up as we 
start exercising the cpufreq framework for b.L systems or HMP systems.

It looks like we're adding a lot of unnecessary complexity by adding and 
removing these sysfs nodes. Other per-CPU sysfs nodes, such as 
/sys/devices/system/cpu/cpu1/power or cpuidle, are left alone during 
hotplug. So why aren't we doing the same for cpufreq?

Any objections to leaving them alone during hotplug? If those files are 
read/written to when the entire cluster is hotplugged off, we could just 
return an error. I'm not saying it would be impossible to fix all these 
deadlock and race issues in the current code -- but it seems like a lot 
of pointless effort to remove/add sysfs nodes.

Examples of issues caused by this:
1. Race when changing governor really quickly from userspace. The 
governors end up getting 2 STOP or 2 START events. This was introduced 
by [1] when it tried to fix another deadlock issue.

2. Incorrect policy/sysfs handling during suspend/resume. Suspend takes 
out CPUs in the order n, n+1, n+2, etc., and resume adds them back in 
the same order. Neither sysfs nor policy ownership transfer is handled 
correctly in this case. This obviously applies even outside 
suspend/resume if the same sequence is repeated using just hotplug.

I'd be willing to take a shot at this if there aren't any objections. 
It's a lot of work/refactoring -- so I don't want to spend a lot of 
time on it if there's a strong case for removing these sysfs nodes.

Thoughts?

-Saravana
P.S: I always find myself sending emails to the lists close to one 
holiday or another. Sigh.

[1] - 
https://kernel.googlesource.com/pub/scm/linux/kernel/git/rafael/linux-pm/+/955ef4833574636819cd269cfbae12f79cbde63a%5E!/

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-03 21:59 RFC: Leave sysfs nodes alone during hotplug Saravana Kannan
@ 2014-07-07 11:01 ` Viresh Kumar
  2014-07-07 19:30   ` Saravana Kannan
  0 siblings, 1 reply; 10+ messages in thread
From: Viresh Kumar @ 2014-07-07 11:01 UTC (permalink / raw)
  To: Saravana Kannan, Srivatsa S. Bhat
  Cc: cpufreq@vger.kernel.org, Linux PM mailing list,
	linux-arm-msm@vger.kernel.org, linux-arm-kernel,
	Rafael J. Wysocki

Cc'ing Srivatsa and fixing Rafael's id.

On 4 July 2014 03:29, Saravana Kannan <skannan@codeaurora.org> wrote:
> The adding and removing of sysfs nodes in cpufreq causes a ton of pain.
> There's always some stability or deadlock issue every few weeks on our
> internal tree. We sync up our internal tree fairly often with the upstream
> cpufreq code. And more of these issues are popping up as we start exercising
> the cpufreq framework for b.L systems or HMP systems.
>
> It looks like we're adding a lot of unnecessary complexity by adding and
> removing these sysfs nodes. Other per-CPU sysfs nodes, such as
> /sys/devices/system/cpu/cpu1/power or cpuidle, are left alone during hotplug.
> So why aren't we doing the same for cpufreq?

This is how it has always been; I don't know which method is correct.
Though these are the requirements I have for them:
- On hotplug, file values should get reset.
- On suspend/resume, values must be retained.

> Any objections to leaving them alone during hotplug? If those files are
> read/written to when the entire cluster is hotplugged off, we could just
> return an error. I'm not saying it would be impossible to fix all these
> deadlock and race issues in the current code -- but it seems like a lot of
> pointless effort to remove/add sysfs nodes.

Let's understand the problem first; then we can make the right decision.

> Examples of issues caused by this:
> 1. Race when changing governor really quickly from userspace. The governors
> end up getting 2 STOP or 2 START events. This was introduced by [1] when it
> tried to fix another deadlock issue.

I was talking about [1] offline with Srivatsa, and one of us might look in
detail at why [1] was actually required.

But I don't know how exactly we can get 2 STOPs/STARTs in the latest
mainline code, as we have enough protection against that now.

So, we would really like to see some reports against mainline for this.

> 2. Incorrect policy/sysfs handling during suspend/resume. Suspend takes out
> CPU in the order n, n+1, n+2, etc and resume adds them back in the same
> order. Both sysfs and policy ownership transfer aren't handled correctly in
> this case.

I know a few of these, but can you please tell me what you have in mind?

> This obviously applies even outside suspend/resume if the same
> sequence is repeated using just hotplug.

Again, what's the issue?

> I'd be willing to take a shot at this if there isn't any objection to this.
> It's a lot of work/refactor -- so I don't want to spend a lot of time on it
> if there's a strong case for removing these sysfs nodes.

Sure, I fully understand this, but I still want to understand the issue first.

> P.S: I always find myself sending emails to the lists close to one holiday
> or another. Sigh.

Sorry for being late to reply to this. I saw it on Friday but couldn't reply
all day. I was chasing something in the tick core. :(

> [1] -
> https://kernel.googlesource.com/pub/scm/linux/kernel/git/rafael/linux-pm/+/955ef4833574636819cd269cfbae12f79cbde63a%5E!/


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-07 11:01 ` Viresh Kumar
@ 2014-07-07 19:30   ` Saravana Kannan
  2014-07-07 22:40     ` Todd Poynor
  2014-07-08  2:36     ` Viresh Kumar
  0 siblings, 2 replies; 10+ messages in thread
From: Saravana Kannan @ 2014-07-07 19:30 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Srivatsa S. Bhat, cpufreq@vger.kernel.org, Linux PM mailing list,
	linux-arm-msm@vger.kernel.org, linux-arm-kernel,
	Rafael J. Wysocki

Rafael's email bounced. Looks like I used a stale ID. Fixing it in 
this email.

On 07/07/2014 04:01 AM, Viresh Kumar wrote:
> Cc'ing Srivatsa and fixing Rafael's id.
>
> On 4 July 2014 03:29, Saravana Kannan <skannan@codeaurora.org> wrote:
>> The adding and removing of sysfs nodes in cpufreq causes a ton of pain.
>> There's always some stability or deadlock issue every few weeks on our
>> internal tree. We sync up our internal tree fairly often with the upstream
>> cpufreq code. And more of these issues are popping up as we start exercising
>> the cpufreq framework for b.L systems or HMP systems.
>>
>> It looks like we're adding a lot of unnecessary complexity by adding and
>> removing these sysfs nodes. Other per-CPU sysfs nodes, such as
>> /sys/devices/system/cpu/cpu1/power or cpuidle, are left alone during hotplug.
>> So why aren't we doing the same for cpufreq?
>
> This is how it has always been; I don't know which method is correct.

Rafael, if you don't have any objections, I would like to simplify it.

> Though these are the requirements I have for them:
> - On hotplug, file values should get reset.
> - On suspend/resume, values must be retained.

Hmm... There's actually enough interest in NOT resetting across hotplug, 
because hotplug is also used by thermal when a CPU gets too hot and is 
plugged back in later. In that case, userspace has no way to cleanly 
restore the values. But that's a separate topic.

>> Any objections to leaving them alone during hotplug? If those files are
>> read/written to when the entire cluster is hotplugged off, we could just
>> return an error. I'm not saying it would be impossible to fix all these
>> deadlock and race issues in the current code -- but it seems like a lot of
>> pointless effort to remove/add sysfs nodes.
>
> Let's understand the problem first; then we can make the right decision.

My point is more of: do we even need to allow this problem to happen, 
instead of trying to fix it? I see no benefit at all in removing/adding 
the files during hotplug.

>
>> Examples of issues caused by this:
>> 1. Race when changing governor really quickly from userspace. The governors
>> end up getting 2 STOP or 2 START events. This was introduced by [1] when it
>> tried to fix another deadlock issue.
>
> I was talking about [1] offline with Srivatsa, and one of us might look in
> detail at why [1] was actually required.

I believe it has to do with a race along these lines:
1. An echo/store into a sysfs node grabs a sysfs lock (inside the sysfs 
code) before trying to grab one of the cpufreq locks.
2. The hotplug path grabs one of the cpufreq locks before trying to 
remove the sysfs group.

The two STARTs/STOPs happen because of [1]. It can happen when userspace 
quickly changes governors back to back, or when multiple threads try to 
store the same governor. The double START/STOP happens because the 2nd 
request is able to slip in when you unlock the rwsem to send the policy 
INIT/EXIT.

>
> But I don't know how exactly we can get 2 STOPs/STARTs in the latest
> mainline code, as we have enough protection against that now.
>
> So, we would really like to see some reports against mainline for this.

Is the above sufficient? It's really easy to trigger if you have a b.L 
system. Just slam scaling_governor between performance and another 
governor (or it might have been the same value). b.L is important 
because if you don't have multi-cluster, you don't get POLICY_EXIT often.

>> 2. Incorrect policy/sysfs handling during suspend/resume. Suspend takes out
>> CPU in the order n, n+1, n+2, etc and resume adds them back in the same
>> order. Both sysfs and policy ownership transfer aren't handled correctly in
>> this case.
>
> I know a few of these, but can you please tell me what you have in mind?

Not sure what you mean by "what you have in mind". Just simple 
suspend/resume is broken. Again, it's easier to reproduce on a b.L 
system since you need POLICY_EXIT to actually happen.

>> This obviously applies even outside suspend/resume if the same
>> sequence is repeated using just hotplug.
>
> Again, what's the issue?

Just hotplug all the CPUs in a cluster in one order and bring them up in 
another. It should crash or panic about sysfs. Basically, the kobj of 
the real sysfs directory stays with the last CPU that went down, but the 
first CPU that comes up owns the policy without owning the real kobj.

>> I'd be willing to take a shot at this if there isn't any objection to this.
>> It's a lot of work/refactor -- so I don't want to spend a lot of time on it
>> if there's a strong case for removing these sysfs nodes.
>
> Sure, I fully understand this, but I still want to understand the issue first.

We might be able to throw around more locks, etc. to fix this. But 
again, my main point is that all this seems pointless.

We should just leave the sysfs nodes alone. We won't break any backward 
compatibility, since userspace can't depend on these files being present 
while a CPU is OFFLINE. Anything that depended on that would be broken 
userspace code anyway.

>> P.S: I always find myself sending emails to the lists close to one holiday
>> or another. Sigh.
>
> Sorry for being late to reply to this. I saw it on Friday but couldn't reply
> all day. I was chasing something in the tick core. :(

No worries. This was fast enough. :)

>
>> [1] -
>> https://kernel.googlesource.com/pub/scm/linux/kernel/git/rafael/linux-pm/+/955ef4833574636819cd269cfbae12f79cbde63a%5E!/

-Saravana

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-07 19:30   ` Saravana Kannan
@ 2014-07-07 22:40     ` Todd Poynor
  2014-07-08  1:18       ` Saravana Kannan
  2014-07-08  2:40       ` Viresh Kumar
  2014-07-08  2:36     ` Viresh Kumar
  1 sibling, 2 replies; 10+ messages in thread
From: Todd Poynor @ 2014-07-07 22:40 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: Viresh Kumar, Srivatsa S. Bhat, cpufreq@vger.kernel.org,
	Linux PM mailing list, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel, Rafael J. Wysocki, Ruchi Kandoi

On Mon, Jul 7, 2014 at 12:30 PM, Saravana Kannan <skannan@codeaurora.org> wrote:
...
>> Though these are the requirements I have for them:
>> - On hotplug, file values should get reset.
>> - On suspend/resume, values must be retained.
>
> Hmm... There's actually enough interest in NOT resetting across hotplug,
> because hotplug is also used by thermal when a CPU gets too hot and is
> plugged back in later. In that case, userspace has no way to cleanly
> restore the values. But that's a separate topic.

For Android's usage we're also interested in both:

1. not removing and recreating the cpufreq sysfs files for a CPU on
hotplug events (we currently use hotplug uevents to reset file
ownership such that power policy can be controlled by non-root).

2. not resetting the contents of policy files such as scaling_max_freq
(also fixed up from uevents) or stats files (we currently keep a
separate persistent time_in_state for battery accounting purposes).

>>> Any objections to leaving them alone during hotplug? If those files are
>>> read/written to when the entire cluster is hotplugged off, we could just
>>> return an error. I'm not saying it would be impossible to fix all these
>>> deadlock and race issues in the current code -- but it seems like a lot
>>> of
>>> pointless effort to remove/add sysfs nodes.

No objections from our standpoint.


Thanks -- Todd


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-07 22:40     ` Todd Poynor
@ 2014-07-08  1:18       ` Saravana Kannan
  2014-07-08  2:40       ` Viresh Kumar
  1 sibling, 0 replies; 10+ messages in thread
From: Saravana Kannan @ 2014-07-08  1:18 UTC (permalink / raw)
  To: Todd Poynor
  Cc: Viresh Kumar, Srivatsa S. Bhat, cpufreq@vger.kernel.org,
	Linux PM mailing list, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel, Rafael J. Wysocki, Ruchi Kandoi

On 07/07/2014 03:40 PM, Todd Poynor wrote:
> On Mon, Jul 7, 2014 at 12:30 PM, Saravana Kannan <skannan@codeaurora.org> wrote:
> ...
>>> Though these are the requirements I have for them:
>>> - On hotplug, file values should get reset.
>>> - On suspend/resume, values must be retained.
>>
>> Hmm... There's actually enough interest in NOT resetting across hotplug,
>> because hotplug is also used by thermal when a CPU gets too hot and is
>> plugged back in later. In that case, userspace has no way to cleanly
>> restore the values. But that's a separate topic.
>
> For Android's usage we're also interested in both:
>
> 1. not removing and recreating the cpufreq sysfs files for a CPU on
> hotplug events (we currently use hotplug uevents to reset file
> ownership such that power policy can be controlled by non-root).

Ah, thanks! This is another good point in favor of leaving the sysfs 
nodes alone that I had forgotten about.

> 2. not resetting the contents of policy files such as scaling_max_freq
> (also fixed up from uevents) or stats files (we currently keep a
> separate persistent time_in_state for battery accounting purposes).
>
>>>> Any objections to leaving them alone during hotplug? If those files are
>>>> read/written to when the entire cluster is hotplugged off, we could just
>>>> return an error. I'm not saying it would be impossible to fix all these
>>>> deadlock and race issues in the current code -- but it seems like a lot
>>>> of
>>>> pointless effort to remove/add sysfs nodes.
>
> No objections from our standpoint.
>
Thanks,

-Saravana

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-07 19:30   ` Saravana Kannan
  2014-07-07 22:40     ` Todd Poynor
@ 2014-07-08  2:36     ` Viresh Kumar
  1 sibling, 0 replies; 10+ messages in thread
From: Viresh Kumar @ 2014-07-08  2:36 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: Srivatsa S. Bhat, cpufreq@vger.kernel.org, Linux PM mailing list,
	linux-arm-msm@vger.kernel.org, linux-arm-kernel,
	Rafael J. Wysocki

On 8 July 2014 01:00, Saravana Kannan <skannan@codeaurora.org> wrote:
> Rafael's email bounced. Looks like I used a stale ID. Fixing it in this
> email.

Because you had the wrong address initially, which I fixed and pointed 
out below. You missed that and added the wrong address again :)

Fixing it yet again. Don't change it now :)

> On 07/07/2014 04:01 AM, Viresh Kumar wrote:
>> Though these are the requirements I have for them:
>> - On hotplug, file values should get reset.
>> - On suspend/resume, values must be retained.
>
>
> Hmm... There's actually enough interest in NOT resetting across hotplug,
> because hotplug is also used by thermal when a CPU gets too hot and is
> plugged back in later. In that case, userspace has no way to cleanly
> restore the values. But that's a separate topic.

Okay, I will think about it more and see what the right thing to do is, 
apart from the requirements we have.

> My point is more of: do we even need to allow this problem to happen,
> instead of trying to fix it? I see no benefit at all in removing/adding
> the files during hotplug.

We can surely think separately about whether leaving these sysfs 
directories is the right thing to do, but we need to see why this bug 
has shown up again.

>> I was talking about [1] offline with Srivatsa, and one of us might look in
>> detail at why [1] was actually required.
>
>
> I believe it has to do with a race along these lines:
> 1. An echo/store into a sysfs node grabs a sysfs lock (inside the sysfs
> code) before trying to grab one of the cpufreq locks.
> 2. The hotplug path grabs one of the cpufreq locks before trying to
> remove the sysfs group.
>
> The two STARTs/STOPs happen because of [1]. It can happen when userspace
> quickly changes governors back to back, or when multiple threads try to
> store the same governor. The double START/STOP happens because the 2nd
> request is able to slip in when you unlock the rwsem to send the policy
> INIT/EXIT.

Yeah, we still need to see how we can avoid dropping the locks before EXIT.

>> But I don't know how exactly we can get 2 STOPs/STARTs in the latest
>> mainline code, as we have enough protection against that now.
>>
>> So, we would really like to see some reports against mainline for this.
>
>
> Is the above sufficient? It's really easy to trigger if you have a b.L
> system. Just slam scaling_governor between performance and another
> governor (or it might have been the same value). b.L is important because
> if you don't have multi-cluster, you don't get POLICY_EXIT often.

Hmm, b.L isn't new to cpufreq. I worked a lot with it over a year and a 
half ago while working on TC2, and that's when I got very familiar with 
the cpufreq internals.

There were lots of problems around hotplug paths, etc., but they are all 
fixed now, and nothing is pending to my knowledge.

And inside Linaro we have automated hotplug/suspend/resume testing 
running on TC2, and nothing has been reported for some time. We also 
have Juno (64-bit b.L) on board now, and nothing has been reported on 
that either.

> Not sure what you mean by "what you have in mind". Just simple

I meant the problem you are trying to describe.

> suspend/resume is broken. Again, it's easier to reproduce on a b.L system
> since you need POLICY_EXIT to actually happen.

Oh, I'd like to believe that, but I can't. This has been tested well on 
TC2. Can you show something against mainline to prove this issue? 
I tend to believe that you are working with an old version of the tree.

> Just hotplug all the CPUs in a cluster in one order and bring them up in
> another. It should crash or panic about sysfs. Basically, the kobj of the
> real sysfs directory stays with the last CPU that went down, but the first
> CPU that comes up owns the policy without owning the real kobj.

These issues were already fixed; can you try mainline, please?

> We might be able to throw around more locks, etc. to fix this. But again,
> my main point is that all this seems pointless.

No more locks. But I do agree that we need some simplicity in the code 
here. I will address that separately; let's fix your issues first.


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-07 22:40     ` Todd Poynor
  2014-07-08  1:18       ` Saravana Kannan
@ 2014-07-08  2:40       ` Viresh Kumar
  2014-07-08  3:30         ` skannan
  1 sibling, 1 reply; 10+ messages in thread
From: Viresh Kumar @ 2014-07-08  2:40 UTC (permalink / raw)
  To: Todd Poynor
  Cc: Saravana Kannan, Srivatsa S. Bhat, cpufreq@vger.kernel.org,
	Linux PM mailing list, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel, Rafael J. Wysocki, Ruchi Kandoi

On 8 July 2014 04:10, Todd Poynor <toddpoynor@google.com> wrote:
> For Android's usage we're also interested in both:
>
> 1. not removing and recreating the cpufreq sysfs files for a CPU on
> hotplug events (we currently use hotplug uevents to reset file
> ownership such that power policy can be controlled by non-root).
>
> 2. not resetting the contents of policy files such as scaling_max_freq
> (also fixed up from uevents) or stats files (we currently keep a
> separate persistent time_in_state for battery accounting purposes).

So, we actually need to retain all the files. I will look into this 
separately. Will add it to my todo list.


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-08  2:40       ` Viresh Kumar
@ 2014-07-08  3:30         ` skannan
  2014-07-08  4:21           ` Viresh Kumar
  0 siblings, 1 reply; 10+ messages in thread
From: skannan @ 2014-07-08  3:30 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Todd Poynor, Saravana Kannan, Srivatsa S. Bhat,
	cpufreq@vger.kernel.org, Linux PM mailing list,
	linux-arm-msm@vger.kernel.org, linux-arm-kernel,
	Rafael J. Wysocki, Ruchi Kandoi


Viresh Kumar wrote:
> On 8 July 2014 04:10, Todd Poynor <toddpoynor@google.com> wrote:
>> For Android's usage we're also interested in both:
>>
>> 1. not removing and recreating the cpufreq sysfs files for a CPU on
>> hotplug events (we currently use hotplug uevents to reset file
>> ownership such that power policy can be controlled by non-root).
>>
>> 2. not resetting the contents of policy files such as scaling_max_freq
>> (also fixed up from uevents) or stats files (we currently keep a
>> separate persistent time_in_state for battery accounting purposes).
>
> So, we actually need to retain all the files. I will look into this
> separately. Will add it to my todo list.
>

Looks like there's enough interest. I'll also try to send out a patch of
what I think a simplified CPUfreq should look like.

-Saravana

-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-08  3:30         ` skannan
@ 2014-07-08  4:21           ` Viresh Kumar
  2014-07-10  2:40             ` Saravana Kannan
  0 siblings, 1 reply; 10+ messages in thread
From: Viresh Kumar @ 2014-07-08  4:21 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: Todd Poynor, Srivatsa S. Bhat, cpufreq@vger.kernel.org,
	Linux PM mailing list, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel, Rafael J. Wysocki, Ruchi Kandoi

On 8 July 2014 09:00,  <skannan@codeaurora.org> wrote:
> Looks like there's enough interest. I'll also try to send out a patch of
> what I think a simplified CPUfreq should look like.

Okay, will wait for it then.


* Re: RFC: Leave sysfs nodes alone during hotplug
  2014-07-08  4:21           ` Viresh Kumar
@ 2014-07-10  2:40             ` Saravana Kannan
  0 siblings, 0 replies; 10+ messages in thread
From: Saravana Kannan @ 2014-07-10  2:40 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Todd Poynor, Srivatsa S. Bhat, cpufreq@vger.kernel.org,
	Linux PM mailing list, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel, Rafael J. Wysocki, Ruchi Kandoi

On 07/07/2014 09:21 PM, Viresh Kumar wrote:
> On 8 July 2014 09:00,  <skannan@codeaurora.org> wrote:
>> Looks like there's enough interest. I'll also try to send out a patch of
>> what I think a simplified CPUfreq should look like.
>
> Okay, will wait for it then.
>

Sent a super preliminary version to give an idea of what I'm going for. 
Future patches would be replies to that thread.

-Saravana

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation

