From: skannan@codeaurora.org
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Saravana Kannan <skannan@codeaurora.org>,
"Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-arm-msm@vger.kernel.org,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH] cpufreq: Set policy to non-NULL only after all hotplug online work is done
Date: Mon, 24 Feb 2014 08:47:15 -0000
Message-ID: <b59f4a49af4ebd12c2d61e58dfb27a39.squirrel@www.codeaurora.org>
In-Reply-To: <CAKohpomRqboKjHRF8ev9erJ6MdRWcsX2jO7A=g47gMmewm_ZiQ@mail.gmail.com>
Viresh Kumar wrote:
> On 24 February 2014 14:11, <skannan@codeaurora.org> wrote:
>> I just replied to the other email. I think I answered both your
>> questions
>> there. Sorry about mixing up CPU and policy. In my case, each CPU is
>> independently scalable -- so for now take CPU == policy. I'll fix it up
>> once we agree on the fix.
>
> But why do you say this then?
Sorry, I'm not sure I understand what you mean.
I agree the wording in my commit text might be unclear. I'll fix it after we
agree on the code fix. In the MSM case, each CPU has its own policy.
I'm assuming your original complaint was about my confusing wording. Maybe
that's not what you were pointing out?
-Saravana
--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
Thread overview: 27+ messages
2014-02-24 6:57 [PATCH] cpufreq: Set policy to non-NULL only after all hotplug online work is done Saravana Kannan
2014-02-24 7:42 ` Srivatsa S. Bhat
2014-02-24 8:11 ` Viresh Kumar
2014-02-24 8:41 ` skannan
2014-02-24 8:43 ` Viresh Kumar
2014-02-24  8:47 ` skannan [this message]
2014-02-24 8:50 ` Viresh Kumar
2014-02-24 9:00 ` skannan
2014-02-24 8:39 ` skannan
2014-02-24 10:55 ` Viresh Kumar
2014-02-24 20:23 ` Saravana Kannan
2014-02-25 8:50 ` Viresh Kumar
2014-02-25 13:04 ` Rafael J. Wysocki
2014-02-25 14:40 ` Viresh Kumar
2014-02-25 21:11 ` Saravana Kannan
2014-02-25 22:41 ` Rafael J. Wysocki
2014-02-26 1:48 ` Saravana Kannan
2014-02-26 6:02 ` Viresh Kumar
2014-02-26 20:20 ` Saravana Kannan
2014-02-26 3:38 ` [PATCH 1/3] cpufreq: stats: Remove redundant cpufreq_cpu_get() call Saravana Kannan
2014-02-26 5:06 ` Viresh Kumar
2014-02-26 20:04 ` Saravana Kannan
2014-02-26 3:38 ` [PATCH 2/3] cpufreq: stats: Fix error handling in __cpufreq_stats_create_table() Saravana Kannan
2014-02-26 5:11 ` Viresh Kumar
2014-02-26 3:38 ` [PATCH 3/3] cpufreq: Set policy to non-NULL only after all hotplug online work is done Saravana Kannan
2014-02-26 6:14 ` Viresh Kumar
2014-02-26 5:20 ` [PATCH] " Viresh Kumar