From: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
Len Brown <lenb@kernel.org>, Linux PM <linux-pm@vger.kernel.org>
Subject: Re: [PATCH] cpufreq: intel_pstate: Improve IO performance
Date: Thu, 03 Aug 2017 18:47:09 -0700
Message-ID: <1501811229.4920.93.camel@linux.intel.com>
In-Reply-To: <CAJZ5v0henjb2Y0rJdc92BJVgGHg1Nb=eWLnd75eVBP5PJVmCog@mail.gmail.com>
On Fri, 2017-08-04 at 02:34 +0200, Rafael J. Wysocki wrote:
> On Wed, Aug 2, 2017 at 5:45 AM, Srinivas Pandruvada
> <srinivas.pandruvada@linux.intel.com> wrote:
> >
> > In the current implementation, the latency from SCHED_CPUFREQ_IOWAIT
> > being set to the actual P-state adjustment can be up to 10 ms. This
> > can be improved by reacting to SCHED_CPUFREQ_IOWAIT by jumping to the
> > max P-state immediately. With this change the IO performance improves
> > significantly.
> >
> > With a simple "grep -r . linux" (here "linux" is the kernel source
> > folder), dropping caches before each run, on a Broadwell Xeon
> > workstation with per-core P-states, the user and system time improve
> > by as much as 30% to 40%.
> >
> > The same performance difference was not observed on client platforms,
> > which don't have per-core P-state support.
> >
> > Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
> > ---
> > drivers/cpufreq/intel_pstate.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
> > index 8c67b77..7762255 100644
> > --- a/drivers/cpufreq/intel_pstate.c
> > +++ b/drivers/cpufreq/intel_pstate.c
> > @@ -1527,6 +1527,15 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
> >
> >         if (flags & SCHED_CPUFREQ_IOWAIT) {
> >                 cpu->iowait_boost = int_tofp(1);
> > +               /*
> > +                * Busy was 100% last time, so the P-state was at max
> > +                * anyway; avoid the overhead of recomputing it.
> > +                */
> > +               if (fp_toint(cpu->sample.busy_scaled) == 100) {
> > +                       cpu->last_update = time;
> > +                       return;
> > +               }
> > +               goto set_pstate;
> cpu->last_update should also be updated when you jump to set_pstate,
> shouldn't it?
Yes. It should be updated.
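
For reference, a minimal sketch of how the branch could look with that
fix folded in (untested; it simply moves the last_update assignment
ahead of both early-exit paths, keeping the names from the patch above):

        if (flags & SCHED_CPUFREQ_IOWAIT) {
                cpu->iowait_boost = int_tofp(1);
                /* Record the update time on every exit path. */
                cpu->last_update = time;
                /*
                 * Busy was 100% last time, so the P-state was at max
                 * anyway; skip the recomputation entirely.
                 */
                if (fp_toint(cpu->sample.busy_scaled) == 100)
                        return;

                goto set_pstate;
        }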
Thanks,
Srinivas
>
> >
> >         } else if (cpu->iowait_boost) {
> >                 /* Clear iowait_boost if the CPU may have been idle. */
> >                 delta_ns = time - cpu->last_update;
> > @@ -1538,6 +1547,7 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
> >         if ((s64)delta_ns < INTEL_PSTATE_DEFAULT_SAMPLING_INTERVAL)
> >                 return;
> >
> > +set_pstate:
> >         if (intel_pstate_sample(cpu, time)) {
> >                 int target_pstate;
> >
> > --
> > 2.7.4
> >