From: Ian Rogers <irogers@google.com>
To: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Kan Liang <kan.liang@linux.intel.com>,
	Zhengjun Xing <zhengjun.xing@linux.intel.com>,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: [PATCH v2 1/5] perf vendor events intel: Update free running alderlake events
Date: Thu,  6 Apr 2023 17:13:18 -0700
Message-ID: <20230407001322.2776268-1-irogers@google.com>

Fix the PMU name, event code and umask of the free running memory
controller events.

These updates were generated by the converter script:
https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py
using the changes from this pull request:
https://github.com/intel/perfmon/pull/66
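
As a usage sketch (illustrative only, not part of the generated update;
event availability and counts depend on the machine and kernel), the
renamed free running events can be requested by their JSON names once
the PMU name, event code and umask resolve:

  # Count MC0 read and write CAS free running events system-wide for 1 second.
  $ perf stat -a -e UNC_MC0_RDCAS_COUNT_FREERUN,UNC_MC0_WRCAS_COUNT_FREERUN sleep 1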

Signed-off-by: Ian Rogers <irogers@google.com>
---
 .../arch/x86/alderlake/uncore-memory.json        | 16 ++++++++++++----
 .../arch/x86/alderlaken/uncore-memory.json       | 16 ++++++++++++----
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json b/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
index 2ccd9cf96957..163d7e7755c4 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
@@ -1,29 +1,37 @@
 [
     {
         "BriefDescription": "Counts every 64B read  request entering the Memory Controller 0 to DRAM (sum of all channels).",
+        "EventCode": "0xff",
         "EventName": "UNC_MC0_RDCAS_COUNT_FREERUN",
         "PerPkg": "1",
         "PublicDescription": "Counts every 64B read request entering the Memory Controller 0 to DRAM (sum of all channels).",
-        "Unit": "iMC"
+        "UMask": "0x20",
+        "Unit": "imc_free_running_0"
     },
     {
         "BriefDescription": "Counts every 64B write request entering the Memory Controller 0 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+        "EventCode": "0xff",
         "EventName": "UNC_MC0_WRCAS_COUNT_FREERUN",
         "PerPkg": "1",
-        "Unit": "iMC"
+        "UMask": "0x30",
+        "Unit": "imc_free_running_0"
     },
     {
         "BriefDescription": "Counts every 64B read request entering the Memory Controller 1 to DRAM (sum of all channels).",
+        "EventCode": "0xff",
         "EventName": "UNC_MC1_RDCAS_COUNT_FREERUN",
         "PerPkg": "1",
         "PublicDescription": "Counts every 64B read entering the Memory Controller 1 to DRAM (sum of all channels).",
-        "Unit": "iMC"
+        "UMask": "0x20",
+        "Unit": "imc_free_running_1"
     },
     {
         "BriefDescription": "Counts every 64B write request entering the Memory Controller 1 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+        "EventCode": "0xff",
         "EventName": "UNC_MC1_WRCAS_COUNT_FREERUN",
         "PerPkg": "1",
-        "Unit": "iMC"
+        "UMask": "0x30",
+        "Unit": "imc_free_running_1"
     },
     {
         "BriefDescription": "ACT command for a read request sent to DRAM",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
index 2ccd9cf96957..163d7e7755c4 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
@@ -1,29 +1,37 @@
 [
     {
         "BriefDescription": "Counts every 64B read  request entering the Memory Controller 0 to DRAM (sum of all channels).",
+        "EventCode": "0xff",
         "EventName": "UNC_MC0_RDCAS_COUNT_FREERUN",
         "PerPkg": "1",
         "PublicDescription": "Counts every 64B read request entering the Memory Controller 0 to DRAM (sum of all channels).",
-        "Unit": "iMC"
+        "UMask": "0x20",
+        "Unit": "imc_free_running_0"
     },
     {
         "BriefDescription": "Counts every 64B write request entering the Memory Controller 0 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+        "EventCode": "0xff",
         "EventName": "UNC_MC0_WRCAS_COUNT_FREERUN",
         "PerPkg": "1",
-        "Unit": "iMC"
+        "UMask": "0x30",
+        "Unit": "imc_free_running_0"
     },
     {
         "BriefDescription": "Counts every 64B read request entering the Memory Controller 1 to DRAM (sum of all channels).",
+        "EventCode": "0xff",
         "EventName": "UNC_MC1_RDCAS_COUNT_FREERUN",
         "PerPkg": "1",
         "PublicDescription": "Counts every 64B read entering the Memory Controller 1 to DRAM (sum of all channels).",
-        "Unit": "iMC"
+        "UMask": "0x20",
+        "Unit": "imc_free_running_1"
     },
     {
         "BriefDescription": "Counts every 64B write request entering the Memory Controller 1 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+        "EventCode": "0xff",
         "EventName": "UNC_MC1_WRCAS_COUNT_FREERUN",
         "PerPkg": "1",
-        "Unit": "iMC"
+        "UMask": "0x30",
+        "Unit": "imc_free_running_1"
     },
     {
         "BriefDescription": "ACT command for a read request sent to DRAM",
-- 
2.40.0.577.gac1e443424-goog


Thread overview: 6+ messages
2023-04-07  0:13 Ian Rogers [this message]
2023-04-07  0:13 ` [PATCH v2 2/5] perf vendor events intel: Update free running icelakex events Ian Rogers
2023-04-07  0:13 ` [PATCH v2 3/5] perf vendor events intel: Correct knightslanding memory topic Ian Rogers
2023-04-07  0:13 ` [PATCH v2 4/5] perf vendor events intel: Update free running snowridgex events Ian Rogers
2023-04-07  0:13 ` [PATCH v2 5/5] perf vendor events intel: Update free running tigerlake events Ian Rogers
2023-04-07  0:56 ` [PATCH v2 1/5] perf vendor events intel: Update free running alderlake events Arnaldo Carvalho de Melo
