From: Saravana Kannan <skannan@codeaurora.org>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Rafael Wysocki <rjw@rjwysocki.net>,
Linaro Kernel Mailman List <linaro-kernel@lists.linaro.org>,
"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
Stephen Boyd <sboyd@codeaurora.org>,
Prarit Bhargava <prarit@redhat.com>
Subject: Re: [PATCH V2 06/20] cpufreq: Create for_each_{in}active_policy()
Date: Fri, 20 Mar 2015 12:18:49 -0700
Message-ID: <550C7299.7030308@codeaurora.org>
In-Reply-To: <CAKohpokanT+6scDTYotp90GNhyatURQNyjKpkfDKxgU9vhbWqQ@mail.gmail.com>
On 03/19/2015 09:41 PM, Viresh Kumar wrote:
> On 20 March 2015 at 06:31, Saravana Kannan <skannan@codeaurora.org> wrote:
>> On 02/19/2015 03:32 AM, Viresh Kumar wrote:
>
>>> +static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
>>> + bool active)
>>> +{
>>> + while (1) {
>>
>>
>> I don't like while(1) unless it's really meant to be an infinite loop. I
>
> I don't hate it that much, and neither do other parts of the kernel :)
>
>> think a do while would work here and also be more compact and readable.
>
> So you want this:
>
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index d3f9ce3b94d3..ecbd8c2118c2 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -47,7 +47,7 @@ static inline bool policy_is_inactive(struct cpufreq_policy *policy)
> static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
> bool active)
> {
> - while (1) {
> + do {
> if (likely(policy))
> policy = list_next_entry(policy, policy_list);
> else
> @@ -69,9 +69,9 @@ static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
> * 1 0 policy
> * 1 1 next
> */
> - if (active ^ policy_is_inactive(policy))
> - return policy;
> - };
> + } while (!(active ^ policy_is_inactive(policy)));
> +
> + return policy;
> }
Yes, please! The other uses of while (1), like inside a thread's main loop, seem
more reasonable to me.
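
For my own reference, stitching the hunks above back together, I believe the
helper ends up reading like this (my reassembly, not compile-tested):

	static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
						  bool active)
	{
		do {
			if (likely(policy))
				policy = list_next_entry(policy, policy_list);
			else
				policy = list_first_entry(&cpufreq_policy_list,
							  typeof(*policy),
							  policy_list);

			/* No more policies: this is the list head, not a real policy */
			if (&policy->policy_list == &cpufreq_policy_list)
				return policy;

			/* Keep walking until the active/inactive state matches */
		} while (!(active ^ policy_is_inactive(policy)));

		return policy;
	}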
>
> Not sure which one looked better :)
>
>>> + if (likely(policy))
>>> + policy = list_next_entry(policy, policy_list);
>>> + else
>>> + policy = list_first_entry(&cpufreq_policy_list,
>>> + typeof(*policy), policy_list);
>>
>>
>> Can't you just move this part into expr1? That would make it much
>> clearer/easier to understand.
>
> No, because we want that for-loop to iterate over only the active (or only
> the inactive) policies, and we need to run this routine to find the next
> matching one.
>
> For example:
> - We want to iterate over active policies only
> - The first policy in the list is inactive
> - The change you are suggesting would break things here.
Ah, right. Makes sense.
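
(Spelling it out for myself: if the iterator is something along the lines of
the below -- my guess at the macro's shape, not the exact patch text -- then
expr1 can't simply be list_first_entry(), because the loop body would then see
the first policy even when it doesn't match the requested state, whereas
next_policy(NULL, true) skips straight to the first *active* policy.)

	/* Rough shape I'm assuming for the iterator -- not the exact patch text */
	#define for_each_active_policy(__policy)				\
		for (__policy = next_policy(NULL, true);			\
		     &__policy->policy_list != &cpufreq_policy_list;		\
		     __policy = next_policy(__policy, true))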
>
>>> +
>>> + /* No more policies */
>>> + if (&policy->policy_list == &cpufreq_policy_list)
>>> + return policy;
>>
>>
>> I'm kinda confused by the fact that you return policy here unconditionally.
>> I think it's a bug. No? E.g.: if there's only one policy in the system and
>> you are looking for an inactive policy, wouldn't this return the only
>> policy -- the one that's active?
>
> No. What we are returning here isn't actually a real policy but a
> container_of of the HEAD of the list, so it only has a valid ->policy_list
> value; touching the other fields might give us a crash. See how
> list_next_entry() works :)
I thought it was the last valid entry that would point back to the list head,
not that we'd get the list head itself back as an "entry". I'll check again.
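
(Sketching it out for myself before I double-check -- assuming list_next_entry()
is the usual list_entry()/container_of() wrapper from include/linux/list.h:)

	/* list_next_entry(), as I remember it: */
	#define list_next_entry(pos, member) \
		list_entry((pos)->member.next, typeof(*(pos)), member)

	/*
	 * cpufreq_policy_list is a bare list_head, not embedded in a real
	 * struct cpufreq_policy.  For the last real policy in the list,
	 * ->policy_list.next points at &cpufreq_policy_list itself, so
	 * list_next_entry() hands back
	 * container_of(&cpufreq_policy_list, struct cpufreq_policy, policy_list),
	 * i.e. a bogus pointer whose only usable field is ->policy_list.
	 * That is why the "no more policies" check compares
	 * &policy->policy_list against &cpufreq_policy_list before anything
	 * else dereferences the struct.
	 */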
-Saravana
--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project