linux-arm-kernel.lists.infradead.org archive mirror
From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Leo Yan <leo.yan@linaro.org>
Cc: acme@kernel.org, irogers@google.com, peterz@infradead.org,
	mingo@redhat.com, namhyung@kernel.org, jolsa@kernel.org,
	adrian.hunter@intel.com, john.g.garry@oracle.com,
	will@kernel.org, james.clark@arm.com, mike.leach@linaro.org,
	yuhaixin.yhx@linux.alibaba.com, renyu.zj@linux.alibaba.com,
	tmricht@linux.ibm.com, ravi.bangoria@amd.com,
	atrajeev@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH V3 0/7] Clean up perf mem
Date: Tue, 9 Jan 2024 09:01:46 -0500
Message-ID: <1417fcb1-4d16-481b-b043-acf022de07fd@linux.intel.com>
In-Reply-To: <20240107040740.GA888@debian-dev>



On 2024-01-06 11:08 p.m., Leo Yan wrote:
> On Wed, Dec 13, 2023 at 11:51:47AM -0800, kan.liang@linux.intel.com wrote:
> 
> [...]
> 
>> Introduce the generic functions perf_mem_events__ptr(),
>> perf_mem_events__name(), and is_mem_loads_aux_event() to replace the
>> ARCH-specific ones.
>> Simplify perf_mem_event__supported().
>>
>> Only keep the ARCH-specific perf_mem_events array in the corresponding
>> mem-events.c for each ARCH.
>>
>> There is no functional change.
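
For anyone skimming the thread, the shape of the change is roughly as
below. This is a minimal sketch, not the actual patch: it assumes the
per-PMU mem_events table added in patch 1/7, and the struct layout and
field comments are illustrative only.

  struct perf_mem_event {
          bool            record;
          bool            supported;
          const char      *tag;   /* e.g. "ldlat-loads" */
          const char      *name;  /* event string template */
  };

  /* One generic lookup keyed off the PMU replaces the per-arch
   * copies; each arch only provides its perf_mem_events[] table.
   */
  struct perf_mem_event *perf_mem_events__ptr(struct perf_pmu *pmu, int i)
  {
          if (!pmu || i >= PERF_MEM_EVENTS__MAX)
                  return NULL;

          return &pmu->mem_events[i];
  }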
>>
>> The patch set touches almost all the ARCHs: Intel, AMD, ARM, Power,
>> etc. But I can only test it on two Intel platforms.
>> Please give it a try if you have machines with other ARCHs.
>>
>> Here are the test results:
>> Intel hybrid machine:
>>
>> $perf mem record -e list
>> ldlat-loads  : available
>> ldlat-stores : available
>>
>> $perf mem record -e ldlat-loads -v --ldlat 50
>> calling: record -e cpu_atom/mem-loads,ldlat=50/P -e cpu_core/mem-loads,ldlat=50/P
>>
>> $perf mem record -v
>> calling: record -e cpu_atom/mem-loads,ldlat=30/P -e cpu_atom/mem-stores/P -e cpu_core/mem-loads,ldlat=30/P -e cpu_core/mem-stores/P
>>
>> $perf mem record -t store -v
>> calling: record -e cpu_atom/mem-stores/P -e cpu_core/mem-stores/P
>>
>>
>> Intel SPR:
>> $perf mem record -e list
>> ldlat-loads  : available
>> ldlat-stores : available
>>
>> $perf mem record -e ldlat-loads -v --ldlat 50
>> calling: record -e {cpu/mem-loads-aux/,cpu/mem-loads,ldlat=50/}:P
>>
>> $perf mem record -v
>> calling: record -e {cpu/mem-loads-aux/,cpu/mem-loads,ldlat=30/}:P -e cpu/mem-stores/P
>>
>> $perf mem record -t store -v
>> calling: record -e cpu/mem-stores/P
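
A note for anyone reproducing these runs: the events that actually end
up in the perf.data can be double-checked afterwards with perf evlist.
For example (a sketch; the exact output depends on the machine):

$ perf mem record -t store -- sleep 1
$ perf evlist
cpu/mem-stores/P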
> 
> After applying this series, the tests below pass with Arm SPE:
> 
> # ./perf c2c record -- /home/leoy/false_sharing.exe 2
> # ./perf c2c report
> 
> # ./perf mem record -e list
> # ./perf mem record -e spe-load -v --ldlat 50
> # ./perf mem record -v
> # ./perf mem report
> # ./perf mem record -t store -v
> # ./perf mem report
> 
> Tested-by: Leo Yan <leo.yan@linaro.org>
>
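
Since the source of false_sharing.exe isn't posted here, a hypothetical
stand-in for that workload may help others repeat the c2c test: two
threads updating adjacent fields of one cache line, which is exactly
the contention pattern perf c2c report is meant to expose.

  /* false_sharing.c: illustrative stand-in, not Leo's binary.
   * Two threads bump counters that share a cache line, so the line
   * bounces between cores and shows up as a contended line in
   * "perf c2c report".
   */
  #include <pthread.h>
  #include <stdint.h>

  static struct {
          volatile uint64_t a;    /* adjacent to b: same cache line */
          volatile uint64_t b;
  } shared;

  static void *bump_a(void *unused)
  {
          for (long i = 0; i < 100000000L; i++)
                  shared.a++;
          return NULL;
  }

  static void *bump_b(void *unused)
  {
          for (long i = 0; i < 100000000L; i++)
                  shared.b++;
          return NULL;
  }

  int main(void)
  {
          pthread_t t1, t2;

          pthread_create(&t1, NULL, bump_a, NULL);
          pthread_create(&t2, NULL, bump_b, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }

Build and record, e.g.:

$ gcc -pthread -o false_sharing false_sharing.c
$ perf c2c record -- ./false_sharing
$ perf c2c report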

Thanks Leo.

Kan




Thread overview: 24+ messages
2023-12-13 19:51 [PATCH V3 0/7] Clean up perf mem kan.liang
2023-12-13 19:51 ` [PATCH V3 1/7] perf mem: Add mem_events into the supported perf_pmu kan.liang
2023-12-19  8:48   ` kajoljain
2023-12-13 19:51 ` [PATCH V3 2/7] perf mem: Clean up perf_mem_events__ptr() kan.liang
2024-01-16 13:22   ` kajoljain
2023-12-13 19:51 ` [PATCH V3 3/7] perf mem: Clean up perf_mem_events__name() kan.liang
2024-01-16 13:58   ` kajoljain
2023-12-13 19:51 ` [PATCH V3 4/7] perf mem: Clean up perf_mem_event__supported() kan.liang
2023-12-13 19:51 ` [PATCH V3 5/7] perf mem: Clean up is_mem_loads_aux_event() kan.liang
2023-12-13 19:51 ` [PATCH V3 6/7] perf mem: Clean up perf_mem_events__record_args() kan.liang
2023-12-13 19:51 ` [PATCH V3 7/7] perf mem: Clean up perf_pmus__num_mem_pmus() kan.liang
2023-12-16  3:29 ` [PATCH V3 0/7] Clean up perf mem Leo Yan
2023-12-19  9:26 ` kajoljain
2023-12-19 14:15   ` Liang, Kan
2024-01-02 20:08     ` Liang, Kan
2024-01-05  6:38       ` kajoljain
2024-01-05 14:38         ` Liang, Kan
2024-01-16 14:05           ` kajoljain
2024-01-16 16:37             ` Liang, Kan
2024-01-23  5:30               ` kajoljain
2024-01-23  5:56                 ` Thomas Richter
2024-01-23 14:36                   ` Liang, Kan
2024-01-07  4:08 ` Leo Yan
2024-01-09 14:01   ` Liang, Kan [this message]
