From: "Javi Merino" <javi.merino@arm.com>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Punit Agrawal <Punit.Agrawal@arm.com>,
"broonie@kernel.org" <broonie@kernel.org>,
Zhang Rui <rui.zhang@intel.com>,
Eduardo Valentin <edubezval@gmail.com>,
Frederic Weisbecker <fweisbec@gmail.com>,
Ingo Molnar <mingo@redhat.com>
Subject: Re: [RFC PATCH v5 09/10] thermal: add trace events to the power allocator governor
Date: Thu, 10 Jul 2014 17:20:14 +0100 [thread overview]
Message-ID: <20140710162014.GB2622@e104805> (raw)
In-Reply-To: <20140710114451.4bbf6785@gandalf.local.home>
On Thu, Jul 10, 2014 at 04:44:51PM +0100, Steven Rostedt wrote:
> On Thu, 10 Jul 2014 15:18:47 +0100
> "Javi Merino" <javi.merino@arm.com> wrote:
>
> > Add trace events for the power allocator governor and the power actor
> > interface of the cpu cooling device.
> >
> > Cc: Zhang Rui <rui.zhang@intel.com>
> > Cc: Eduardo Valentin <edubezval@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Signed-off-by: Javi Merino <javi.merino@arm.com>
> > ---
> > drivers/thermal/cpu_actor.c | 17 ++-
> > drivers/thermal/power_allocator.c | 22 +++-
> > include/trace/events/thermal_power_allocator.h | 138 +++++++++++++++++++++++++
> > 3 files changed, 173 insertions(+), 4 deletions(-)
> > create mode 100644 include/trace/events/thermal_power_allocator.h
> >
> > diff --git a/drivers/thermal/cpu_actor.c b/drivers/thermal/cpu_actor.c
> > index 45ea4fa92ea0..b5ed2e80e288 100644
> > --- a/drivers/thermal/cpu_actor.c
> > +++ b/drivers/thermal/cpu_actor.c
> > @@ -28,6 +28,8 @@
> > #include <linux/printk.h>
> > #include <linux/slab.h>
> >
> > +#include <trace/events/thermal_power_allocator.h>
> > +
> > /**
> > * struct power_table - frequency to power conversion
> > * @frequency: frequency in KHz
> > @@ -184,11 +186,12 @@ static u32 get_static_power(struct cpu_actor *cpu_actor,
> > */
> > static u32 get_dynamic_power(struct cpu_actor *cpu_actor, unsigned long freq)
> > {
> > - int cpu;
> > - u32 power = 0, raw_cpu_power, total_load = 0;
> > + int i, cpu;
> > + u32 power = 0, raw_cpu_power, total_load = 0, load_cpu[NR_CPUS];
>
> When NR_CPUS == 1024, you just killed the stack, as you added 4K to it.
> We upped the stack recently to 16k, but still.
True, this array should not live on the stack; it should be allocated
once (or made static) instead.
> >
> > raw_cpu_power = cpu_freq_to_power(cpu_actor, freq);
> >
> > + i = 0;
> > for_each_cpu(cpu, &cpu_actor->cpumask) {
> > u32 load;
> >
> > @@ -198,8 +201,15 @@ static u32 get_dynamic_power(struct cpu_actor *cpu_actor, unsigned long freq)
> > load = get_load(cpu_actor, cpu);
> > power += (raw_cpu_power * load) / 100;
> > total_load += load;
> > + load_cpu[i] = load;
> > +
> > + i++;
> > }
> >
> > + trace_thermal_power_actor_cpu_get_dyn_power(&cpu_actor->cpumask, freq,
> > + raw_cpu_power, load_cpu, i,
> > + power);
>
> How many CPUs are you saving load_cpu on? A trace event can't be bigger
> than a page. And the data is actually a little less than that with the
> required headers.
The biggest system I've tested it on is an 8-cpu system (with
NR_CPUS==8). So yes, small, and we haven't seen any issues.

Are you saying that we are siphoning too much data through ftrace? We
find it really valuable to collect information during a run and process
it afterwards, but I can see how this may not be feasible for systems
with thousands of cpus.
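One way to keep the event within the per-event size limit regardless of
NR_CPUS would be to declare load_cpu as a variable-length field with
__dynamic_array(), so only the CPUs actually in the actor's cpumask are
copied into the ring buffer. A rough sketch (field names illustrative,
not the final event definition):

```c
TRACE_EVENT(thermal_power_actor_cpu_get_dyn_power,

	TP_PROTO(unsigned long freq, u32 raw_cpu_power,
		 u32 *load, size_t load_len, u32 power),

	TP_ARGS(freq, raw_cpu_power, load, load_len, power),

	TP_STRUCT__entry(
		__field(unsigned long, freq)
		__field(u32, raw_cpu_power)
		/* sized per event: load_len entries, not NR_CPUS */
		__dynamic_array(u32, load, load_len)
		__field(size_t, load_len)
		__field(u32, power)
	),

	TP_fast_assign(
		__entry->freq = freq;
		__entry->raw_cpu_power = raw_cpu_power;
		memcpy(__get_dynamic_array(load), load,
		       load_len * sizeof(*load));
		__entry->load_len = load_len;
		__entry->power = power;
	),

	TP_printk("freq=%lu raw_cpu_power=%u power=%u",
		  __entry->freq, __entry->raw_cpu_power, __entry->power)
);
```

This only bounds the copy to the CPUs actually traced; a system with
thousands of cpus in one cpumask could still approach the page limit,
so a cap or a split into per-CPU events might still be needed there.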