public inbox for linux-kernel@vger.kernel.org
From: Jiri Olsa <jolsa@redhat.com>
To: Jin Yao <yao.jin@linux.intel.com>
Cc: acme@kernel.org, jolsa@kernel.org, peterz@infradead.org,
	mingo@redhat.com, alexander.shishkin@linux.intel.com,
	Linux-kernel@vger.kernel.org, ak@linux.intel.com,
	kan.liang@intel.com, yao.jin@intel.com
Subject: Re: [PATCH v1 3/5] perf tools: Check if mem_events is supported for hybrid
Date: Mon, 24 May 2021 19:19:25 +0200	[thread overview]
Message-ID: <YKvgHfNdi7U/sEVg@krava> (raw)
In-Reply-To: <20210520070040.710-4-yao.jin@linux.intel.com>

On Thu, May 20, 2021 at 03:00:38PM +0800, Jin Yao wrote:
> Check if the mem_events ('mem-loads' and 'mem-stores') exist
> in the sysfs path.
> 
> For Alderlake, the hybrid CPU PMUs are "cpu_core" and "cpu_atom".
> Check the existence of the following paths:
> /sys/devices/cpu_atom/events/mem-loads
> /sys/devices/cpu_atom/events/mem-stores
> /sys/devices/cpu_core/events/mem-loads
> /sys/devices/cpu_core/events/mem-stores
> 
> If the path exists, the mem_event is supported.
> 
> Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
> ---
>  tools/perf/util/mem-events.c | 43 +++++++++++++++++++++++++++++-------
>  1 file changed, 35 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
> index c736eaded06c..e8f6e745eaf0 100644
> --- a/tools/perf/util/mem-events.c
> +++ b/tools/perf/util/mem-events.c
> @@ -12,14 +12,16 @@
>  #include "mem-events.h"
>  #include "debug.h"
>  #include "symbol.h"
> +#include "pmu.h"
> +#include "pmu-hybrid.h"
>  
>  unsigned int perf_mem_events__loads_ldlat = 30;
>  
>  #define E(t, n, s) { .tag = t, .name = n, .sysfs_name = s }
>  
>  static struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX] = {
> -	E("ldlat-loads",	"cpu/mem-loads,ldlat=%u/P",	"cpu/events/mem-loads"),
> -	E("ldlat-stores",	"cpu/mem-stores/P",		"cpu/events/mem-stores"),
> +	E("ldlat-loads",	"%s/mem-loads,ldlat=%u/P",	"%s/events/mem-loads"),
> +	E("ldlat-stores",	"%s/mem-stores/P",		"%s/events/mem-stores"),
>  	E(NULL,			NULL,				NULL),

so this was a generic place, now it's x86 specific, I wonder if we should
move it under arch/x86 to avoid confusion

>  };
>  #undef E
> @@ -100,6 +102,18 @@ int perf_mem_events__parse(const char *str)
>  	return -1;
>  }
>  
> +static bool perf_mem_events__supported(const char *mnt, char *sysfs_name)
> +{
> +	char path[PATH_MAX];
> +	struct stat st;
> +
> +	scnprintf(path, PATH_MAX, "%s/devices/%s", mnt, sysfs_name);
> +	if (!stat(path, &st))
> +		return true;
> +
> +	return false;

could be just 'return !stat(path, &st);' right?

> +}
> +
>  int perf_mem_events__init(void)
>  {
>  	const char *mnt = sysfs__mount();
> @@ -110,9 +124,10 @@ int perf_mem_events__init(void)
>  		return -ENOENT;
>  
>  	for (j = 0; j < PERF_MEM_EVENTS__MAX; j++) {
> -		char path[PATH_MAX];
>  		struct perf_mem_event *e = perf_mem_events__ptr(j);
> -		struct stat st;
> +		struct perf_pmu *pmu;
> +		char sysfs_name[100];
> +		int unsupported = 0;
>  
>  		/*
>  		 * If the event entry isn't valid, skip initialization
> @@ -121,11 +136,23 @@ int perf_mem_events__init(void)
>  		if (!e->tag)
>  			continue;
>  
> -		scnprintf(path, PATH_MAX, "%s/devices/%s",
> -			  mnt, e->sysfs_name);
> +		if (!perf_pmu__has_hybrid()) {
> +			scnprintf(sysfs_name, sizeof(sysfs_name),
> +				  e->sysfs_name, "cpu");
> +			e->supported = perf_mem_events__supported(mnt, sysfs_name);
> +		} else {
> +			perf_pmu__for_each_hybrid_pmu(pmu) {
> +				scnprintf(sysfs_name, sizeof(sysfs_name),
> +					  e->sysfs_name, pmu->name);
> +				if (!perf_mem_events__supported(mnt, sysfs_name))
> +					unsupported++;
> +			}
> +
> +			e->supported = (unsupported == 0) ? true : false;

could you just do in the above loop:
			e->supported |= perf_mem_events__supported(mnt, sysfs_name);

jirka

> +		}
>  
> -		if (!stat(path, &st))
> -			e->supported = found = true;
> +		if (e->supported)
> +			found = true;
>  	}
>  
>  	return found ? 0 : -ENOENT;
> -- 
> 2.17.1
> 


Thread overview: 15+ messages
2021-05-20  7:00 [PATCH v1 0/5] perf: Support perf-mem/perf-c2c for AlderLake Jin Yao
2021-05-20  7:00 ` [PATCH v1 1/5] perf util: Check mem-loads auxiliary event Jin Yao
2021-05-20  7:00 ` [PATCH v1 2/5] perf tools: Support pmu name in perf_mem_events__name Jin Yao
2021-05-24 17:20   ` Jiri Olsa
2021-05-25  5:39     ` Jin, Yao
2021-05-20  7:00 ` [PATCH v1 3/5] perf tools: Check if mem_events is supported for hybrid Jin Yao
2021-05-24 17:19   ` Jiri Olsa [this message]
2021-05-25  6:14     ` Jin, Yao
2021-05-20  7:00 ` [PATCH v1 4/5] perf mem: Support record for hybrid platform Jin Yao
2021-05-24 17:19   ` Jiri Olsa
2021-05-25  7:00     ` Jin, Yao
2021-05-25  7:39       ` Jin, Yao
2021-05-26  1:51         ` Jin, Yao
2021-05-26 11:44           ` Jiri Olsa
2021-05-20  7:00 ` [PATCH v1 5/5] perf c2c: " Jin Yao
