From: Namhyung Kim <namhyung@kernel.org>
To: Jiri Olsa <jolsa@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
lkml <linux-kernel@vger.kernel.org>,
David Ahern <dsahern@gmail.com>, Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
"Liang, Kan" <kan.liang@intel.com>
Subject: Re: [PATCH 7/8] perf script: Add python support for stat events
Date: Wed, 23 Dec 2015 22:42:47 +0900
Message-ID: <20151223134247.GC23199@danjae.kornet>
In-Reply-To: <1450799014-31469-8-git-send-email-jolsa@kernel.org>
On Tue, Dec 22, 2015 at 04:43:33PM +0100, Jiri Olsa wrote:
> Add support to get stat events data in perf python scripts.
>
> The python script should implement the following
> new interface to process stat data:
>
> def stat__<event_name>_[<modifier>](cpu, thread, time, val, ena, run):
>
> - is called for every stat event for the given counter;
> if the user monitors 'cycles,instructions:u', the following
> callbacks should be defined:
>
> def stat__cycles(cpu, thread, time, val, ena, run):
> def stat__instructions_u(cpu, thread, time, val, ena, run):
>
> def stat__interval(time):
>
> - is called for every interval with its time;
> in non-interval mode it's called after the last
> stat event with the total measured time in ns
>
> The rest of the current interface stays untouched.
>
> Please check the example CPI metrics script in the following
> patch, with command line examples in the changelogs.
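For reference, a minimal sketch of a script using the interface described above. The handler names assume the user monitors 'cycles,instructions:u'; the CPI printout is just to exercise the callbacks (the real stat-cpi.py script comes in patch 8):

```python
# Minimal perf-script stat handlers (sketch). When run under perf,
# perf itself calls these functions; the names follow the
# stat__<event_name>_[<modifier>] convention.

totals = {"cycles": 0, "instructions_u": 0}

def stat__cycles(cpu, thread, time, val, ena, run):
    # val is the raw counter value; ena/run are the enabled/running
    # times usable for scaling multiplexed counters.
    totals["cycles"] += val

def stat__instructions_u(cpu, thread, time, val, ena, run):
    totals["instructions_u"] += val

def stat__interval(time):
    # Called per interval, or once at the end (non-interval mode)
    # with the total measured time in ns.
    cycles = totals["cycles"]
    insns = totals["instructions_u"]
    if insns:
        print("%14f: cpi %f (%d/%d)" % (time / 1e9,
                                        float(cycles) / insns,
                                        cycles, insns))
```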
>
> Tested-by: Kan Liang <kan.liang@intel.com>
> Link: http://lkml.kernel.org/n/tip-jojiaelyckrw6040wqc06q1j@git.kernel.org
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> .../util/scripting-engines/trace-event-python.c | 114 +++++++++++++++++++--
> 1 file changed, 108 insertions(+), 6 deletions(-)
>
> diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> index a8e825fca42a..8436eb23eb16 100644
> --- a/tools/perf/util/scripting-engines/trace-event-python.c
> +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> @@ -41,6 +41,9 @@
> #include "../thread-stack.h"
> #include "../trace-event.h"
> #include "../machine.h"
> +#include "thread_map.h"
> +#include "cpumap.h"
> +#include "stat.h"
>
> PyMODINIT_FUNC initperf_trace_context(void);
>
> @@ -859,6 +862,103 @@ static void python_process_event(union perf_event *event,
> }
> }
>
> +static void get_handler_name(char *str, size_t size,
> + struct perf_evsel *evsel)
> +{
> + char *p = str;
> +
> + scnprintf(str, size, "stat__%s", perf_evsel__name(evsel));
> +
> + while ((p = strchr(p, ':'))) {
> + *p = '_';
> + p++;
> + }
> +}
> +
> +static void
> +process_stat(struct perf_evsel *counter, int cpu, int thread, u64 time,
> + struct perf_counts_values *count)
> +{
> + PyObject *handler, *t;
> + static char handler_name[256];
> + int n = 0;
> +
> + t = PyTuple_New(MAX_FIELDS);
> + if (!t)
> + Py_FatalError("couldn't create Python tuple");
> +
> + get_handler_name(handler_name, sizeof(handler_name),
> + counter);
> +
> + handler = get_handler(handler_name);
> + if (!handler) {
> + pr_debug("can't find python handler %s\n", handler_name);
> + return;
> + }
> +
> + PyTuple_SetItem(t, n++, PyInt_FromLong(cpu));
> + PyTuple_SetItem(t, n++, PyInt_FromLong(thread));
> + PyTuple_SetItem(t, n++, PyLong_FromLong(time));
> + PyTuple_SetItem(t, n++, PyLong_FromLong(count->val));
> + PyTuple_SetItem(t, n++, PyLong_FromLong(count->ena));
> + PyTuple_SetItem(t, n++, PyLong_FromLong(count->run));
What about 32-bit systems? PyLong_FromLong() takes a long, but the
counts are u64, so the values can be truncated when long is 32 bits.
PyLong_FromUnsignedLongLong() would be safer here.
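To illustrate the truncation (an illustration only, not perf code: it models a 32-bit long with ctypes.c_int32 and a made-up counter value):

```python
# Model what PyLong_FromLong() effectively does with a u64 count
# on a system where long is 32 bits wide.
import ctypes

val = 5000000000                 # a plausible u64 cycle count, > 2**32
truncated = ctypes.c_int32(val & 0xFFFFFFFF).value
print(val, truncated)            # the two values no longer match
```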
> +
> + if (_PyTuple_Resize(&t, n) == -1)
> + Py_FatalError("error resizing Python tuple");
> +
> + call_object(handler, t, handler_name);
> +
> + Py_DECREF(t);
> +}
> +
> +static void python_process_stat(struct perf_stat_config *config,
> + struct perf_evsel *counter, u64 time)
> +{
> + struct thread_map *threads = counter->threads;
> + struct cpu_map *cpus = counter->cpus;
> + int cpu, thread;
> +
> + if (config->aggr_mode == AGGR_GLOBAL) {
> + process_stat(counter, -1, -1, time,
> + &counter->counts->aggr);
> + return;
> + }
> +
> + for (thread = 0; thread < threads->nr; thread++) {
> + for (cpu = 0; cpu < cpus->nr; cpu++) {
> + process_stat(counter, cpus->map[cpu],
> + thread_map__pid(threads, thread), time,
> + perf_counts(counter->counts, cpu, thread));
> + }
> + }
> +}
> +
> +static void python_process_stat_interval(u64 time)
> +{
> + PyObject *handler, *t;
> + static const char handler_name[] = "stat__interval";
> + int n = 0;
> +
> + t = PyTuple_New(MAX_FIELDS);
> + if (!t)
> + Py_FatalError("couldn't create Python tuple");
> +
> + handler = get_handler(handler_name);
> + if (!handler) {
> + pr_debug("can't find python handler %s\n", handler_name);
> + return;
> + }
> +
> + PyTuple_SetItem(t, n++, PyLong_FromLong(time));
Ditto.
Thanks,
Namhyung
> +
> + if (_PyTuple_Resize(&t, n) == -1)
> + Py_FatalError("error resizing Python tuple");
> +
> + call_object(handler, t, handler_name);
> +
> + Py_DECREF(t);
> +}
> +
> static int run_start_sub(void)
> {
> main_module = PyImport_AddModule("__main__");
> @@ -1201,10 +1301,12 @@ static int python_generate_script(struct pevent *pevent, const char *outfile)
> }
>
> struct scripting_ops python_scripting_ops = {
> - .name = "Python",
> - .start_script = python_start_script,
> - .flush_script = python_flush_script,
> - .stop_script = python_stop_script,
> - .process_event = python_process_event,
> - .generate_script = python_generate_script,
> + .name = "Python",
> + .start_script = python_start_script,
> + .flush_script = python_flush_script,
> + .stop_script = python_stop_script,
> + .process_event = python_process_event,
> + .process_stat = python_process_stat,
> + .process_stat_interval = python_process_stat_interval,
> + .generate_script = python_generate_script,
> };
> --
> 2.4.3
>