From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Qiao Zhao <qzhao@redhat.com>, Michael Petlan <mpetlan@redhat.com>,
	alexander.shishkin@linux.intel.com, adrian.hunter@intel.com,
	linux-perf-users@vger.kernel.org, vmolnaro@redhat.com
Subject: Re: Intel Arrowlake and hwcache events
Date: Mon, 18 Nov 2024 20:26:13 -0500	[thread overview]
Message-ID: <bb5bbfe2-453f-4681-81bf-e3eb59e7a58d@linux.intel.com> (raw)
In-Reply-To: <Zzvg41ngE0oQeWx4@google.com>



On 2024-11-18 7:50 p.m., Namhyung Kim wrote:
> Hello,
> 
> On Fri, Nov 15, 2024 at 08:36:55AM -0500, Liang, Kan wrote:
>>
>>
>> On 2024-11-15 12:43 a.m., Qiao Zhao wrote:
>>> On Thu, Nov 14, 2024 at 10:33 PM Liang, Kan <kan.liang@linux.intel.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 2024-11-14 4:54 a.m., Michael Petlan wrote:
>>>>> Hello!
>>>>>
>>>>> Qiao Zhao (CC'd) has found out that there are no hwcache events available
>>>>> on an Arrowlake system he was testing perf on.
>>>>
>>>> There are several variants for Arrowlake.
>>>> #define INTEL_ARROWLAKE_H               IFM(6, 0xC5)
>>>> #define INTEL_ARROWLAKE                 IFM(6, 0xC6)
>>>> #define INTEL_ARROWLAKE_U               IFM(6, 0xB5)
>>>>
>>>> The INTEL_ARROWLAKE should be supported in 6.10 and later.
>>>> The INTEL_ARROWLAKE_H was just merged and should be available in the
>>>> upcoming 6.13-rc.
>>>>
>>>> https://lore.kernel.org/lkml/20240808140210.1666783-1-dapeng1.mi@linux.intel.com/
>>>>
>>>> The patch to support INTEL_ARROWLAKE_U hasn't been posted yet.
>>>>
>>>> Which system were you testing?
>>>>
>>>
>>> Hi Kan, thank you for explaining this. I checked my testing history, and I
>>> happened to use Arrow Lake-U for testing.
>>
>> Thanks for the confirmation.
>> I will find a machine and post a patch to fix it ASAP.
>>
>> Thanks,
>> Kan
>>
>>> # lscpu
>>> Architecture:             x86_64
>>>   CPU op-mode(s):         32-bit, 64-bit
>>>   Address sizes:          46 bits physical, 48 bits virtual
>>>   Byte Order:             Little Endian
>>> CPU(s):                   14
>>>   On-line CPU(s) list:    0-13
>>> Vendor ID:                GenuineIntel
>>>   BIOS Vendor ID:         Intel(R) Corporation
>>>   Model name:             Genuine Intel(R) 0000
>>>     BIOS Model name:      Genuine Intel(R) 0000
>>>     CPU family:           6
>>>     Model:                197
> 
>   $ python -c 'print(hex(197))'
>   0xc5
> 
> Isn't it ArrowLake-H ?

Good catch. :)

For ARL-H, it should be ready with the upcoming 6.13-rc.
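For anyone else hitting this: the decimal model number from lscpu maps onto the IFM() variants quoted earlier in the thread. A quick sketch of that lookup (the table below is copied from the defines above; everything else is illustrative, not authoritative):

```python
# Map of Arrowlake model numbers to kernel variant names, taken from the
# IFM(6, ...) defines quoted earlier in this thread.
ARROWLAKE_VARIANTS = {
    0xC5: "INTEL_ARROWLAKE_H",
    0xC6: "INTEL_ARROWLAKE",
    0xB5: "INTEL_ARROWLAKE_U",
}

def variant_for(family: int, model: int) -> str:
    """Return the Arrowlake variant name for a (family, model) pair, if known."""
    if family != 6:
        return "not an Arrowlake family"
    return ARROWLAKE_VARIANTS.get(model, "unknown model")

# lscpu above reported "CPU family: 6" and "Model: 197" (0xc5).
print(variant_for(6, 197))  # INTEL_ARROWLAKE_H
```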

Thanks,
Kan
> 
> Thanks,
> Namhyung
> 
> 
>>>     Thread(s) per core:   1
>>>     Core(s) per socket:   14
>>>     Socket(s):            1
>>>     Stepping:             2
>>>
>>>
>>>>
>>>>> We have found out that it
>>>>> does not work even on 6.12.0-rc6+. Are there still drivers that haven't
>>>>> been merged yet?
>>>>>
>>>>> Happens that nothing matches in this loop:
>>>>
>>>> The HW cache events are usually model specific events. You cannot see it
>>>> unless there is specific support.
>>>>
>>>> You may also get more clues via dmesg | grep PMU.
>>>> If you see "generic architected perfmon", it means the specific support
>>>> isn't ready in the kernel.
>>>>
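The check Kan describes can be scripted. In the sketch below, the two sample dmesg lines are hypothetical; only the "generic architected perfmon" marker string comes from the message above:

```shell
# Decide whether model-specific PMU support is present, based on a dmesg line.
# The sample lines fed in below are made up for illustration; the real check
# is simply whether "generic architected perfmon" appears in dmesg output.
check_pmu() {
  if printf '%s\n' "$1" | grep -q 'generic architected perfmon'; then
    echo "no model-specific support (generic architected perfmon)"
  else
    echo "model-specific PMU support present"
  fi
}

check_pmu "Performance Events: PEBS fmt4+-baseline, 32-deep LBR (hypothetical)"
check_pmu "Performance Events: unsupported CPU, generic architected perfmon v5"
```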
>>>
>>> Understood! Thank you, Kan.
>>>
>>> - Qiao
>>>
>>>
>>>>
>>>> Thanks,
>>>> Kan
>>>>>
>>>>> int print_hwcache_events(const struct print_callbacks *print_cb, void *print_state)
>>>>> [...]
>>>>>	for (int type = 0; type < PERF_COUNT_HW_CACHE_MAX; type++) {
>>>>>		for (int op = 0; op < PERF_COUNT_HW_CACHE_OP_MAX; op++) {
>>>>>			/* skip invalid cache type */
>>>>>			if (!evsel__is_cache_op_valid(type, op))
>>>>>				continue;
>>>>>
>>>>>			for (int res = 0; res < PERF_COUNT_HW_CACHE_RESULT_MAX; res++) {
>>>>>				char name[64];
>>>>>				char alias_name[128];
>>>>>				__u64 config;
>>>>>				int ret;
>>>>>
>>>>>				__evsel__hw_cache_type_op_res_name(type, op, res,
>>>>>								   name, sizeof(name));
>>>>>
>>>>>				ret = parse_events__decode_legacy_cache(name, pmu->type,
>>>>>									&config);
>>>>>				if (ret || !is_event_supported(PERF_TYPE_HW_CACHE, config))
>>>>>					continue;
>>>>>
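The loop above is a plain triple enumeration over (type, op, result). A rough Python sketch of how the legacy hw-cache names get composed (the name lists and the skipping of invalid pairs are simplified stand-ins, not perf's exact tables):

```python
# Simplified sketch of perf's hw-cache event name construction, mirroring
# the quoted print_hwcache_events() loop. The lists below approximate the
# real tables; perf also skips invalid (type, op) pairs via
# evsel__is_cache_op_valid(), which this sketch does not model.
TYPES = ["L1-dcache", "L1-icache", "LLC", "dTLB", "iTLB", "branch", "node"]
OPS = ["load", "store", "prefetch"]
RESULTS = ["refs", "misses"]

def hwcache_names():
    """Enumerate every (type, op, result) combination as a dash-joined name."""
    return [f"{t}-{op}-{res}" for t in TYPES for op in OPS for res in RESULTS]

names = hwcache_names()
print(names[0])    # L1-dcache-load-refs
print(len(names))  # 42
```

On a supported model, each generated name is then checked against the PMU (the is_event_supported() call above); on an unsupported model every check fails, which is why the list comes out empty.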
>>>>> Thanks,
>>>>> Michael
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
> 


Thread overview: 6+ messages
2024-11-14  9:54 Intel Arrowlake and hwcache events Michael Petlan
2024-11-14 14:32 ` Liang, Kan
     [not found]   ` <CAATMXfkt28rjB03xFEcYzvhWsgtCe9Fp6eaFgDMEJq3c8_2hAQ@mail.gmail.com>
2024-11-15 13:36     ` Liang, Kan
2024-11-19  0:50       ` Namhyung Kim
2024-11-19  1:26         ` Liang, Kan [this message]
2024-11-20  2:50           ` Qiao Zhao
