linux-perf-users.vger.kernel.org archive mirror
From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Michael Petlan <mpetlan@redhat.com>,
	alexander.shishkin@linux.intel.com, adrian.hunter@intel.com,
	linux-perf-users@vger.kernel.org
Cc: qzhao@redhat.com, vmolnaro@redhat.com
Subject: Re: Intel Arrowlake and hwcache events
Date: Thu, 14 Nov 2024 09:32:14 -0500	[thread overview]
Message-ID: <e37d7c6b-91c7-4fd7-831b-98f0dca5d46c@linux.intel.com> (raw)
In-Reply-To: <ea7b2c27-7512-f412-32c0-3a247fd930f9@redhat.com>



On 2024-11-14 4:54 a.m., Michael Petlan wrote:
> Hello!
> 
> Qiao Zhao (CC'd) has found out that there are no hwcache events available
> on an Arrowlake system he was testing perf on. 

There are several variants of Arrowlake:
#define INTEL_ARROWLAKE_H		IFM(6, 0xC5)
#define INTEL_ARROWLAKE			IFM(6, 0xC6)
#define INTEL_ARROWLAKE_U		IFM(6, 0xB5)

INTEL_ARROWLAKE has been supported since 6.10.
INTEL_ARROWLAKE_H support was just merged and should be available in
the upcoming 6.13-rc:
https://lore.kernel.org/lkml/20240808140210.1666783-1-dapeng1.mi@linux.intel.com/

The patch to support INTEL_ARROWLAKE_U hasn't been posted yet.

Which system were you testing?
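
If it helps, the model IDs above in decimal (the form /proc/cpuinfo
reports) can be printed like this — a quick sketch:

```shell
# Decimal equivalents of the IFM() model IDs quoted above
# (hex values are from arch/x86/include/asm/intel-family.h)
printf 'ARROWLAKE_H=%d ARROWLAKE=%d ARROWLAKE_U=%d\n' 0xC5 0xC6 0xB5
# On the target box, compare against:
#   grep -m1 '^model[[:space:]]' /proc/cpuinfo
```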

> We have found out that it
> does not work even on 6.12.0-rc6+. Are there still drivers that haven't
> been merged yet?
> 
> Happens that nothing matches in this loop:

HW cache events are usually model-specific events. You cannot see
them unless the kernel has specific support for your model.
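
For context, a legacy hwcache config is a packed value — type |
(op << 8) | (result << 16), per the PERF_TYPE_HW_CACHE encoding in
perf_event.h — which is what the decode in the quoted loop reverses.
A small sketch, using L1-dcache-load-misses as the example:

```shell
# L1-dcache-load-misses: type=0 (L1D), op=0 (READ), result=1 (MISS)
# packed as: type | (op << 8) | (result << 16)
printf 'config=0x%x\n' $(( 0 | (0 << 8) | (1 << 16) ))
```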

You may also get more clues via dmesg | grep PMU.
If you see "generic architected perfmon", the model-specific support
isn't in your kernel yet.
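
A minimal sketch of that check; the exact dmesg wording matched here
is an assumption, so adjust the pattern to what your kernel prints:

```shell
# Classify a kernel-log perfmon line; feed it the real output of:
#   dmesg | grep -i perfmon
check_pmu() {
	case "$1" in
	*'generic architected perfmon'*) echo "model-specific support missing" ;;
	*) echo "model-specific PMU driver present" ;;
	esac
}
check_pmu 'Performance Events: generic architected perfmon v5, ...'
```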

Thanks,
Kan
> 
> int print_hwcache_events(const struct print_callbacks *print_cb, void *print_state)
> [...]
> 	for (int type = 0; type < PERF_COUNT_HW_CACHE_MAX; type++) {
> 		for (int op = 0; op < PERF_COUNT_HW_CACHE_OP_MAX; op++) {
> 			/* skip invalid cache type */
> 			if (!evsel__is_cache_op_valid(type, op))
> 				continue;
>
> 			for (int res = 0; res < PERF_COUNT_HW_CACHE_RESULT_MAX; res++) {
> 				char name[64];
> 				char alias_name[128];
> 				__u64 config;
> 				int ret;
>
> 				__evsel__hw_cache_type_op_res_name(type, op, res,
> 								   name, sizeof(name));
>
> 				ret = parse_events__decode_legacy_cache(name, pmu->type,
> 									&config);
> 				if (ret || !is_event_supported(PERF_TYPE_HW_CACHE, config))
> 					continue;
> 
> Thanks,
> Michael
> 
> 


Thread overview: 6+ messages
2024-11-14  9:54 Intel Arrowlake and hwcache events Michael Petlan
2024-11-14 14:32 ` Liang, Kan [this message]
     [not found]   ` <CAATMXfkt28rjB03xFEcYzvhWsgtCe9Fp6eaFgDMEJq3c8_2hAQ@mail.gmail.com>
2024-11-15 13:36     ` Liang, Kan
2024-11-19  0:50       ` Namhyung Kim
2024-11-19  1:26         ` Liang, Kan
2024-11-20  2:50           ` Qiao Zhao
