From: Andreas Herrmann <aherrmann@suse.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
Stratos Karafotis <stratosk@semaphore.gr>,
Thomas Renninger <trenn@suse.com>
Subject: Re: [PATCH 1/1] cpufreq: pcc-cpufreq: Re-introduce deadband effect to reduce number of frequency changes
Date: Tue, 11 Oct 2016 08:28:47 +0200
Message-ID: <20161011062847.GA4102@suselix.suse.de>
In-Reply-To: <20161005051753.GK4664@vireshk-i7>

On Wed, Oct 05, 2016 at 10:47:53AM +0530, Viresh Kumar wrote:
> Sorry for being late, Andreas :(
>
> On 14-09-16, 16:56, Andreas Herrmann wrote:
>
> First of all, thanks for your hard work in getting these numbers out.
> Really appreciate it.
>
> > Below is some trace data. I hope it is of some help.
> >
> > (A) - 10 s sampling period while the system is idle
> > (B) - 10 s sampling period while the system is partially loaded
> >       (kernel compilation with 2 jobs)
> >
> > (1) 4.8-rc5
> > (2) 4.8-rc5 with my patch (reintroduction of the deadband effect
> >     within pcc-cpufreq)
> > (3) 4.8-rc5 with commit 6393d6a102 ("cpufreq: ondemand: Eliminate
> >     the deadband effect") reverted
> >
> > Let me know whether you need other trace data regarding this
> > issue.
> >
> >
> > Thanks,
> >
> > Andreas
> >
> > ---
> >
> > (A)-(1)
> >
> > # Total Lost Samples: 0
> > # Samples: 41 of event 'power:cpu_frequency'
> > # Event count (approx.): 41
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 39.02% kworker/14:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 29.27% kworker/0:0 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 19.51% kworker/10:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 7.32% kworker/5:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 2.44% kworker/23:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 2.44% kworker/40:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> >
> > (A)-(2)
> >
> > # Total Lost Samples: 0
> > # Samples: 6 of event 'power:cpu_frequency'
> > # Event count (approx.): 6
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 33.33% kworker/1:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 16.67% kworker/16:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 16.67% kworker/22:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 16.67% kworker/26:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 16.67% kworker/33:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> >
> > (A)-(3)
> >
> > # Total Lost Samples: 0
> > # Samples: 7 of event 'power:cpu_frequency'
> > # Event count (approx.): 7
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 28.57% kworker/58:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 14.29% kworker/19:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 14.29% kworker/20:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 14.29% kworker/22:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 14.29% kworker/23:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 14.29% kworker/35:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> >
> > ---
> >
> > (B)-(1)
> >
> > # Total Lost Samples: 0
> > # Samples: 2K of event 'power:cpu_frequency'
> > # Event count (approx.): 2382
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 5.75% kworker/0:0 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.16% kworker/12:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 3.11% kworker/17:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 2.94% kworker/2:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 2.73% kworker/19:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > ...
> >
> > (B)-(2)
> >
> > # Total Lost Samples: 0
> > # Samples: 320 of event 'power:cpu_frequency'
> > # Event count (approx.): 320
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 4.69% kworker/56:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.06% kworker/12:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.06% kworker/28:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.06% kworker/6:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 3.75% kworker/32:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > ...
> >
> > (B)-(3)
> >
> > # Total Lost Samples: 0
> > # Samples: 333 of event 'power:cpu_frequency'
> > # Event count (approx.): 333
> > # Overhead Command Shared Object Symbol
> > # ........ ............ ................ .............................
> > 4.80% kworker/51:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.50% kworker/39:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 4.20% kworker/47:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 3.90% kworker/59:1 [kernel.vmlinux] [k] cpufreq_notify_transition
> > 3.90% kworker/7:2 [kernel.vmlinux] [k] cpufreq_notify_transition
> > ...
>
> I am worried by all of these. So it's not the DVFS transition that
> takes time, but cpufreq_notify_transition(). Probably one of the
> drivers in your setup that has registered with
> cpufreq_register_notifier() is misbehaving here.
>
> Can you please take a look at that? Just check which routines get
> called as part of srcu_notifier_call_chain().
I'll look into this.
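
For reference, my plan is to use the function_graph tracer to see what
actually runs below cpufreq_notify_transition(). A minimal sketch,
assuming debugfs is mounted at /sys/kernel/debug and the kernel has
CONFIG_FUNCTION_GRAPH_TRACER enabled:

  cd /sys/kernel/debug/tracing
  echo cpufreq_notify_transition > set_graph_function
  echo function_graph > current_tracer
  echo 1 > tracing_on
  # ... let some frequency transitions happen, then:
  cat trace

The resulting call graph should show which notifier callbacks
srcu_notifier_call_chain() invokes and how long each of them takes.
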
> Also, a simple (hacky) solution to the problem you have is to divide
> the frequency range into steps (which AFAICT you have already done
> in one of your patches) and to leave out the frequencies that were
> previously part of the deadband. That will keep the CPU at either
> the lowest frequency or one of a few higher frequencies.
Yes, this seems worth trying; a rough sketch of what I have in mind
is below.
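
For background: before commit 6393d6a102, ondemand computed (IIRC)
freq_next = load * max_f / 100, so any load below min_f * 100 / max_f
produced a target below the minimum frequency and was clamped to
min_f -- that is the deadband. A rough sketch of the step idea
(function name and step count are made up for illustration; this is
not the actual patch):

  /*
   * Map load (0..100) to one of a few discrete frequencies,
   * skipping the frequencies the old deadband never selected.
   */
  static unsigned int map_load_to_step_freq(unsigned int load,
                                            unsigned int min_f,
                                            unsigned int max_f)
  {
          unsigned int span = max_f - min_f;
          unsigned int step = span / 4;   /* 4 steps, arbitrary */
          unsigned int target;

          /* Loads the old code kept in the deadband stay at min_f. */
          if (load * max_f < min_f * 100)
                  return min_f;

          if (!step)                      /* degenerate range */
                  return max_f;

          /* Above the deadband, snap the linear target to a step. */
          target = load * span / 100;
          return min_f + (target - target % step);
  }

That keeps the governor requesting only a handful of distinct
frequencies instead of one per load value, which should cut down the
number of transitions.
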
> But I am quite sure that we have an abuser here, and we had better
> find it now.
Thanks,
Andreas
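
P.S. For completeness: the trace data above was collected along these
lines (invocation reconstructed from memory, details may differ):

  perf record -e power:cpu_frequency -a -- sleep 10
  perf report --stdio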