From: Leonard Crestez <leonard.crestez@nxp.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: linux-pm@vger.kernel.org,
Vincent Guittot <vincent.guittot@linaro.org>,
linux@dominikbrodowski.net, linux-kernel@vger.kernel.org,
Rafael Wysocki <rjw@rjwysocki.net>
Subject: Re: [PATCH V3 3/9] cpufreq: Cap the default transition delay value to 10 ms
Date: Thu, 27 Jul 2017 19:54:13 +0300
Message-ID: <1501174453.7957.30.camel@nxp.com>
In-Reply-To: <20170726060634.GY352@vireshk-i7>
On Wed, 2017-07-26 at 11:36 +0530, Viresh Kumar wrote:
> On 25-07-17, 14:54, Leonard Crestez wrote:
> > This patch made its way into linux-next and it seems to cause imx
> > SoCs to almost always hang around their max frequency with the
> > ondemand governor, even when almost completely idle. The lowest
> > frequency is never reached. This seems wrong?
> > This driver calculates transition_latency at probe time; the value
> > is not terribly accurate, but it reaches values like latency = 109 us, so
> So this is the value that is stored in the global variable
> "transition_latency" in the imx6q-cpufreq.c file, i.e.
> transition_latency = 109000 (ns) to be exact?
Yes.
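
For reference, this is my reading of how the driver arrives at that
number, as a simplified sketch (not the verbatim probe code; min_uV
and max_uV below are placeholders for the OPP voltage bounds the
driver actually computes):

/*
 * Simplified sketch of the transition_latency calculation in the
 * imx6q-cpufreq.c probe path; error handling elided.
 */
static u32 imx6q_calc_transition_latency(struct device_node *np,
					 struct regulator *arm_reg)
{
	u32 latency;
	int ret;

	/* Base clock-switching latency from DT, in ns */
	if (of_property_read_u32(np, "clock-latency", &latency))
		latency = CPUFREQ_ETERNAL;

	/*
	 * Add the worst-case regulator ramp time between the lowest and
	 * highest OPP voltages; regulator_set_voltage_time() returns us.
	 */
	ret = regulator_set_voltage_time(arm_reg, min_uV, max_uV);
	if (ret > 0)
		latency += ret * 1000;	/* us -> ns */

	return latency;
}

The point is that the result mixes a static DT property with a
computed regulator ramp estimate, and neither term accounts for the
PLL relock time measured below.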
> - Don't use this patch and try to change ondemand's sampling rate from
> sysfs. Try setting it to 10000 and see if the behavior is identical
> to after this patch.
Yes, it seems to be. Also, explicitly setting it to 100000 fixes this.
I also tried switching from HZ=100 to HZ=1000, but that did not make a
difference.
> - Find how much time it really takes to change the frequency of
> the CPU. I don't really think 109 us is the right transition
> latency. Use the attached patch for that and look for the print message.
Your patch measures latencies of around 2.5 ms, though they vary
between 1.6 ms and 3 ms from boot to boot. This is a lot more than
what the driver reports. Most transitions seem to be faster.
I did a little digging and it seems that the majority of the time is
spent inside clk_pllv3_wait_lock(), which polls a HW bit while doing
usleep_range(50, 500); see the sketch below. I originally thought the
regulators were responsible, but the delays involved there are smaller.
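
For reference, the loop in question looks roughly like this (a
simplified paraphrase of clk_pllv3_wait_lock() in
drivers/clk/imx/clk-pllv3.c, not the verbatim code):

/* Simplified paraphrase of clk_pllv3_wait_lock(); details elided. */
static int pllv3_wait_lock(void __iomem *base)
{
	unsigned long timeout = jiffies + msecs_to_jiffies(10);

	/* Poll the HW lock bit until it sets or we time out */
	while (!(readl_relaxed(base) & BM_PLL_LOCK)) {
		if (time_after(jiffies, timeout))
			break;
		/* This sleep is where most of the measured time goes */
		usleep_range(50, 500);
	}

	return (readl_relaxed(base) & BM_PLL_LOCK) ? 0 : -ETIMEDOUT;
}

Each usleep_range() can overshoot by up to the 500 us upper bound (or
more under load), so a handful of iterations easily adds up to
milliseconds.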
Measuring wall time in a context that can sleep seems dubious; isn't
this vulnerable to random delays caused by other tasks?
> Without this patch the sampling rate of the ondemand governor will be
> 109 ms, and after this patch it would be capped at 10 ms. Why would
> that screw up anyone's setup? I don't have an answer to that right now.
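
For context, the arithmetic behind those two numbers: ondemand derives
its default sampling rate from the reported latency, so
109 us * LATENCY_MULTIPLIER (1000) = 109 ms before this patch, and at
most 10 ms after it. A sketch of the capped derivation, as I
understand the patch (not a verbatim quote of it):

/* Sketch of the default transition-delay derivation after the cap. */
static unsigned int default_transition_delay_us(struct cpufreq_policy *policy)
{
	/* cpuinfo.transition_latency is in ns */
	unsigned int latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;

	if (latency)
		/* 109 us * 1000 = 109000 us, now capped at 10000 us (10 ms) */
		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);

	return LATENCY_MULTIPLIER;	/* 1 ms fallback */
}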
On a closer look, though, it seems that most of the time is actually
spent at the lowest cpufreq (90%+).

Your change makes it so that even something like "sleep 1; cat
scaling_cur_freq" raises the frequency to the maximum. This happens
reliably enough that even running it in a loop never shows the minimum
frequency. It seems the internal bookkeeping on such a wakeup takes
more than 10 ms, which is enough for cpufreq to be reevaluated before
cat returns the value?!
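
My working theory, with a minimal sketch of an ondemand-style load
estimate (the real logic lives in dbs_update() in
drivers/cpufreq/cpufreq_governor.c; this is only illustrative):

#include <linux/math64.h>

/*
 * Illustrative ondemand-style load estimate: the busy fraction of the
 * elapsed sampling window, as a percentage. ondemand ramps straight to
 * the maximum frequency when this exceeds up_threshold (default 95).
 */
static unsigned int load_estimate(u64 wall_us, u64 idle_us)
{
	return (unsigned int)div64_u64(100 * (wall_us - idle_us), wall_us);
}

With a 10 ms window, a wakeup whose handling keeps the CPU busy for
most of the window reads as close to 100% load, so the governor jumps
to maximum before cat even returns; with the old 109 ms window the
same wakeup was diluted to a few percent.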
I found this by enabling the power:cpu_frequency tracepoint event and
checking for deltas with a script. Enabling CPU_FREQ_STAT shows this:
time_in_state (kHz : units of 10 ms):
396000  1609
792000    71
996000    54

trans_table:
   From  :    To
         :  396000  792000  996000
  396000:       0      10       7
  792000:      16       0      12
  996000:       1      18       0
This is very unexpected but not necessarily wrong.
--
Regards,
Leonard
Thread overview: 24+ messages
2017-07-19 10:12 [PATCH V3 0/9] cpufreq: transition-latency cleanups Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 1/9] cpufreq: governor: Drop min_sampling_rate Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 2/9] cpufreq: Use transition_delay_us for legacy governors as well Viresh Kumar
2017-07-24 16:17 ` Peter Zijlstra
2017-07-28 4:48 ` Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 3/9] cpufreq: Cap the default transition delay value to 10 ms Viresh Kumar
2017-07-25 11:54 ` Leonard Crestez
2017-07-26 0:19 ` Rafael J. Wysocki
2017-07-26 6:06 ` Viresh Kumar
2017-07-27 16:54 ` Leonard Crestez [this message]
2017-07-28 5:28 ` Viresh Kumar
2017-08-01 17:48 ` Leonard Crestez
2017-08-02 3:23 ` Viresh Kumar
2017-08-16 6:34 ` Viresh Kumar
2017-08-16 9:42 ` Leonard Crestez
2017-08-17 3:38 ` Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 4/9] cpufreq: Don't set transition_latency for setpolicy drivers Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 5/9] cpufreq: arm_big_little: Make ->get_transition_latency() mandatory Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 6/9] cpufreq: Replace "max_transition_latency" with "dynamic_switching" Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 7/9] cpufreq: schedutil: Set dynamic_switching to true Viresh Kumar
2017-07-19 10:12 ` [PATCH V3 8/9] cpufreq: Add CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING cpufreq driver flag Viresh Kumar
2017-07-19 16:30 ` Dominik Brodowski
2017-07-19 10:12 ` [PATCH V3 9/9] cpufreq: Allow dynamic switching with CPUFREQ_ETERNAL latency Viresh Kumar
2017-07-19 12:42 ` [PATCH V3 0/9] cpufreq: transition-latency cleanups Rafael J. Wysocki