From: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
To: rjw@rjwysocki.net
Cc: linux-pm@vger.kernel.org,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Subject: [PATCH 2/2] cpufreq: intel_pstate: Fix set_policy interface for no_turbo
Date: Tue, 7 Jun 2016 17:38:53 -0700 [thread overview]
Message-ID: <1465346333-3104-2-git-send-email-srinivas.pandruvada@linux.intel.com> (raw)
In-Reply-To: <1465346333-3104-1-git-send-email-srinivas.pandruvada@linux.intel.com>
When turbo is disabled, the set_policy interface is broken. For example,
when turbo is disabled and
cpuinfo.max = 2900000 KHz (the full max turbo frequency),
setting the limits results in a frequency lower than the setting:
Setting 1000000 KHz results in 700000 KHz
Setting 1500000 KHz results in 1100000 KHz
Setting 2000000 KHz results in 1500000 KHz
This is because the limits->max_perf fraction is calculated with the max
turbo frequency as the reference, but when the max P-State is capped in
intel_pstate_get_min_max(), the reference is no longer the max turbo
P-State. This results in a reduced max P-State.
One option would be to always use the max turbo frequency as the
reference for calculating limits, but that would not be correct. By
definition, the intel_pstate sysfs limits show the percentage of
available performance. So when the BIOS has disabled turbo, the
available performance is the max non-turbo frequency, and max_perf_pct
should still show 100%.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
---
drivers/cpufreq/intel_pstate.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 724b905..2116666 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -1561,8 +1561,14 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
/* cpuinfo and default policy values */
policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
- policy->cpuinfo.max_freq =
- cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+ update_turbo_state();
+ if (limits->turbo_disabled)
+ policy->cpuinfo.max_freq =
+ cpu->pstate.max_pstate * cpu->pstate.scaling;
+ else
+ policy->cpuinfo.max_freq =
+ cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+
intel_pstate_init_acpi_perf_limits(policy);
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
cpumask_set_cpu(policy->cpu, policy->cpus);
--
2.5.0
Thread overview: 10+ messages
2016-06-08 0:38 [PATCH 1/2] cpufreq: intel_pstate: Move limits->max_perf to correct position Srinivas Pandruvada
2016-06-08 0:38 ` Srinivas Pandruvada [this message]
2016-06-08 0:42 ` [PATCH 2/2] cpufreq: intel_pstate: Fix set_policy interface for no_turbo Rafael J. Wysocki
2016-06-08 0:48 ` Srinivas Pandruvada
2016-06-08 0:50 ` Rafael J. Wysocki
2016-06-08 0:55 ` Srinivas Pandruvada
2016-06-08 1:06 ` Rafael J. Wysocki
2016-06-08 1:10 ` Rafael J. Wysocki
2016-06-08 1:24 ` Rafael J. Wysocki
2016-06-08 15:39 ` Srinivas Pandruvada