From: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
To: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>, rjw@rjwysocki.net
Cc: linux-pm@vger.kernel.org
Subject: Re: [PATCH] cpufreq: intel_pstate: Enforce _PPC limits
Date: Wed, 20 Apr 2016 10:28:02 -0700	[thread overview]
Message-ID: <1461173282.8946.125.camel@linux.intel.com> (raw)
In-Reply-To: <57178C09.50603@yandex-team.ru>

On Wed, 2016-04-20 at 17:02 +0300, Konstantin Khlebnikov wrote:
> On 05.04.2016 02:22, Srinivas Pandruvada wrote:
> > 
> > Use ACPI _PPC notification to limit the max P-state the driver
> > will request.
[...]
> I guess this is a consequence of my post a year ago: "[PATCH RFC]
> intel_pstate: play well with frequency limits set by acpi". It would
> be nice if you keep me in Cc.

Sure. This patchset is a mini version of the patch which you reviewed
and commented on:
"Re: [RFC PATCH] cpufreq: intel_pstate: Use ACPI perf".


> <couple notes below>
[...]

> > +	turbo_pss_ctl = convert_to_native_pstate_format(cpu, 0);
> > +	if (turbo_pss_ctl > cpu->pstate.max_pstate)
> > +		cpu->acpi_perf_data.states[0].core_frequency =
> > +			policy->cpuinfo.max_freq / 1000;
> I'm afraid the first entry is not the only one that could have a bogus
> frequency.
> 
> Maybe just ignore them all and recalculate the frequencies from the
> P-states?
> (frequency = clamp(pstate, min_pstate, turbo_pstate) * scaling / 1000)
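
If I read your suggestion right, it would be something along these
lines (a rough sketch only, reusing the helpers and struct fields from
the patch, not tested):

	/*
	 * Sketch: ignore the frequencies reported in _PSS and
	 * recompute them from the control (P-state) values, clamped
	 * to the known min..turbo range. scaling is in kHz per
	 * P-state step, core_frequency is in MHz.
	 */
	for (i = 0; i < cpu->acpi_perf_data.state_count; i++) {
		int pstate = convert_to_native_pstate_format(cpu, i);

		pstate = clamp_t(int, pstate, cpu->pstate.min_pstate,
				 cpu->pstate.turbo_pstate);
		cpu->acpi_perf_data.states[i].core_frequency =
			pstate * cpu->pstate.scaling / 1000;
	}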

We would have to trust the control values in _PSS then.
That was the mistake we made when we merged the patch into 4.4-rc1: we
were overly optimistic about the control values in the _PSS table, and
on some systems those values were 0xff.
The _PSS table is supposed to report the turbo entry, and only the
turbo entry, as max_non_turbo_freq in MHz + 1. If Windows ever ran on
that system, the table should have correct values.
Still, there will be some systems where, as you suggested, there can be
junk in the frequency or control values, which is why I am no longer
turning this feature on by default.
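
To give an idea of the kind of validity check I have in mind (a
hypothetical sketch, not the exact code in the patch):

	/*
	 * Hypothetical check: mark the _PSS table invalid if any
	 * control value maps to a P-state outside the known
	 * min..turbo range (e.g. the bogus 0xff case above).
	 */
	for (i = 0; i < cpu->acpi_perf_data.state_count; i++) {
		int pstate = convert_to_native_pstate_format(cpu, i);

		if (pstate < cpu->pstate.min_pstate ||
		    pstate > cpu->pstate.turbo_pstate) {
			cpu->valid_pss_table = false;
			return;
		}
	}
	cpu->valid_pss_table = true;
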
> 
> > 
> > +	cpu->valid_pss_table = true;

[...]

> >   static struct cpufreq_driver intel_pstate_driver = {
> >   	.flags		= CPUFREQ_CONST_LOOPS,
> >   	.verify		= intel_pstate_verify_policy,
> >   	.setpolicy	= intel_pstate_set_policy,
> >   	.get		= intel_pstate_get,
> >   	.init		= intel_pstate_cpu_init,
> If you add
> 
> #if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
>          .bios_limit     = acpi_processor_get_bios_limit,
> #endif
> 
> the current limit will be shown in
> /sys/devices/system/cpu/cpu*/cpufreq/bios_limit
> 
The BIOS limit no longer represents the max frequency you will get; it
is the start of the turbo range when the configurable TDP feature is
present, on processors after Sandy Bridge.
Does it add value when it no longer means what it used to?

Thanks,
Srinivas



Thread overview: 5+ messages
2016-04-04 23:22 [PATCH] cpufreq: intel_pstate: Enforce _PPC limits Srinivas Pandruvada
2016-04-20  1:20 ` Rafael J. Wysocki
2016-04-20 20:11   ` Srinivas Pandruvada
2016-04-20 14:02 ` Konstantin Khlebnikov
2016-04-20 17:28   ` Srinivas Pandruvada [this message]
