From: Arnaldo Carvalho de Melo <acme@redhat.com>
To: Stephane Eranian <eranian@google.com>
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
	mingo@elte.hu, ak@linux.intel.com, jolsa@redhat.com,
	namhyung.kim@lge.com
Subject: Re: [PATCH v2 1/3] perf stat: refactor aggregation code
Date: Mon, 25 Mar 2013 13:22:47 -0300
Message-ID: <20130325162247.GA14604@infradead.org>
In-Reply-To: <1360846649-6411-2-git-send-email-eranian@google.com>

On Thu, Feb 14, 2013 at 01:57:27PM +0100, Stephane Eranian wrote:
> Refactor aggregation code by introducing a single aggr_mode variable
> and an enum for aggregation.

<SNIP>
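
For reference, the refactor collapses the separate aggr_socket/no_aggr
flags into a single aggr_mode switch. A minimal sketch of the enum the
changelog describes, assuming AGGR_GLOBAL as the default system-wide
mode (only AGGR_SOCKET and AGGR_NONE appear in the hunk below):

	enum aggr_mode {
		AGGR_NONE,	/* one line per CPU, as with no_aggr/-A */
		AGGR_GLOBAL,	/* system-wide totals (assumed default) */
		AGGR_SOCKET,	/* per-socket totals, as with --aggr-socket */
	};

	static enum aggr_mode aggr_mode = AGGR_GLOBAL;
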
 
> @@ -542,26 +553,37 @@ static void print_noise(struct perf_evsel *evsel, double avg)
>  	print_noise_pct(stddev_stats(&ps->res_stats[0]), avg);
>  }
>  
> -static void nsec_printout(int cpu, int nr, struct perf_evsel *evsel, double avg)
> +static void aggr_printout(int cpu, int nr)
>  {
> -	double msecs = avg / 1e6;
> -	char cpustr[16] = { '\0', };
> -	const char *fmt = csv_output ? "%s%.6f%s%s" : "%s%18.6f%s%-25s";
> -
> -	if (aggr_socket)
> -		sprintf(cpustr, "S%*d%s%*d%s",
> +	switch (aggr_mode) {
> +	case AGGR_SOCKET:
> +		fprintf(output, "S%*d%s%*d%s",
>  			csv_output ? 0 : -5,
>  			cpu,
>  			csv_sep,
>  			csv_output ? 0 : 4,
>  			nr,
>  			csv_sep);
> -	else if (no_aggr)
> -		sprintf(cpustr, "CPU%*d%s",
> +			break;
> +	case AGGR_NONE:
> +		fprintf(output, "CPU%*d%s",
>  			csv_output ? 0 : -4,
>  			perf_evsel__cpus(evsel)->map[cpu], csv_sep);

I'm fixing this up: how would evsel work here if it is not passed in to
aggr_printout()?
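
What I have in mind is simply passing the evsel through, something like
this (a sketch against the hunk above, not tested; the AGGR_GLOBAL arm
printing nothing is an assumption):

	static void aggr_printout(struct perf_evsel *evsel, int cpu, int nr)
	{
		switch (aggr_mode) {
		case AGGR_SOCKET:
			/* socket id plus the number of aggregated CPUs */
			fprintf(output, "S%*d%s%*d%s",
				csv_output ? 0 : -5, cpu, csv_sep,
				csv_output ? 0 : 4, nr, csv_sep);
			break;
		case AGGR_NONE:
			/* needs evsel to resolve the per-evsel cpu map */
			fprintf(output, "CPU%*d%s",
				csv_output ? 0 : -4,
				perf_evsel__cpus(evsel)->map[cpu], csv_sep);
			break;
		case AGGR_GLOBAL:
		default:
			break;
		}
	}

with the callers updated accordingly, i.e. aggr_printout(evsel, cpu, nr).
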

I'm also fixing it up wrt the --forever patch and the one Namhyung sent
that makes perf stat use perf_evlist__prepare_workload, which I had
applied before this one. I will run some tests and put the result on a
separate branch for you to check that everything is still ok and works
as expected,

thanks,

- Arnaldo


Thread overview: 11+ messages
2013-02-14 12:57 [PATCH v2 0/3] perf stat: add per-core count aggregation Stephane Eranian
2013-02-14 12:57 ` [PATCH v2 1/3] perf stat: refactor aggregation code Stephane Eranian
2013-03-07 21:38   ` Jiri Olsa
2013-03-25 16:22   ` Arnaldo Carvalho de Melo [this message]
2013-04-02  9:33   ` [tip:perf/core] perf stat: Refactor " tip-bot for Stephane Eranian
2013-02-14 12:57 ` [PATCH v2 2/3] perf stat: rename --aggr-socket to --per-socket Stephane Eranian
2013-04-02  9:34   ` [tip:perf/core] perf stat: Rename " tip-bot for Stephane Eranian
2013-02-14 12:57 ` [PATCH v2 3/3] perf stat: add per-core aggregation Stephane Eranian
2013-04-02  9:36   ` [tip:perf/core] perf stat: Add " tip-bot for Stephane Eranian
2013-03-07 16:22 ` [PATCH v2 0/3] perf stat: add per-core count aggregation Stephane Eranian
2013-03-25 13:57   ` Stephane Eranian
