* [PATCH v1 0/2] Add procfs based memory and network tool events
From: Ian Rogers @ 2026-01-04 1:17 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
Add events for memory use and network activity based on data readily
available in /proc/pid/statm, /proc/pid/smaps_rollup and
/proc/pid/net/dev. For example, the network usage of chrome processes
on a system may be gathered with:
```
$ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
1.001023475 0 net_rx_bytes
1.001023475 0 net_rx_compressed
1.001023475 42,647,328 net_rx_drop
1.001023475 463,069,152 net_rx_errors
1.001023475 0 net_rx_fifo
1.001023475 0 net_rx_frame
1.001023475 0 net_rx_multicast
1.001023475 423,195,831,744 net_rx_packets
1.001023475 0 net_tx_bytes
1.001023475 0 net_tx_carrier
1.001023475 0 net_tx_colls
1.001023475 0 net_tx_compressed
1.001023475 0 net_tx_drop
1.001023475 0 net_tx_errors
1.001023475 0 net_tx_fifo
1.001023475 0 net_tx_packets
```
As the events are in the tool_pmu they can be used in metrics. Their
json descriptions are exposed in `perf list` and the events can be
seen in the python ilist application.
Note: if a process terminates then reading its counts returns an
error, and this can expose what appear to be latent bugs in the
aggregation and display code.
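The network counters shown above come from /proc/pid/net/dev, which
lists eight receive columns followed by eight transmit columns per
interface. As a rough sketch of the parsing involved (the helper name
and layout here are illustrative only, not taken from the patch's
tool_pmu.c):
```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sum the receive and transmit byte columns of a /proc/<pid>/net/dev
 * style buffer, aggregating over all interfaces. Illustrative helper;
 * the patch's actual parser lives in tools/perf/util/tool_pmu.c.
 */
static void sum_net_dev(const char *buf, unsigned long long *rx_bytes,
			unsigned long long *tx_bytes)
{
	const char *line = buf;
	int lineno = 0;

	*rx_bytes = *tx_bytes = 0;
	while (line && *line) {
		/* The first two lines are column headers. */
		if (lineno++ >= 2) {
			const char *colon = strchr(line, ':');
			unsigned long long rx, tx, skip;

			/*
			 * Eight rx columns then eight tx columns follow
			 * "<iface>:"; bytes are columns 1 and 9.
			 */
			if (colon && sscanf(colon + 1,
					    "%llu %llu %llu %llu %llu %llu %llu %llu %llu",
					    &rx, &skip, &skip, &skip, &skip,
					    &skip, &skip, &skip, &tx) == 9) {
				*rx_bytes += rx;
				*tx_bytes += tx;
			}
		}
		line = strchr(line, '\n');
		line = line ? line + 1 : NULL;
	}
}
```
Summing per interface like this matches the per-line layout of the
file; the per-event counters in the patch pick out individual columns
instead.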
Ian Rogers (2):
perf tool_pmu: Add memory events
perf tool_pmu: Add network events
tools/perf/builtin-stat.c | 10 +-
.../pmu-events/arch/common/common/tool.json | 266 ++++++++-
tools/perf/pmu-events/empty-pmu-events.c | 312 +++++++----
tools/perf/util/tool_pmu.c | 514 +++++++++++++++++-
tools/perf/util/tool_pmu.h | 44 ++
5 files changed, 1026 insertions(+), 120 deletions(-)
--
2.52.0.351.gbe84eed79e-goog
* [PATCH v1 1/2] perf tool_pmu: Add memory events
From: Ian Rogers @ 2026-01-04 1:17 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
Add tool PMU events to report memory usage information exposed by
/proc/pid/smaps_rollup and /proc/pid/statm.
The following events are added:
- memory_anon_huge_pages
- memory_anonymous
- memory_data
- memory_file_pmd_mapped
- memory_ksm
- memory_lazyfree
- memory_locked
- memory_private_clean
- memory_private_dirty
- memory_private_hugetlb
- memory_pss
- memory_pss_anon
- memory_pss_dirty
- memory_pss_file
- memory_pss_shmem
- memory_referenced
- memory_resident
- memory_rss
- memory_shared
- memory_shared_clean
- memory_shared_dirty
- memory_shared_hugetlb
- memory_shmem_pmd_mapped
- memory_size
- memory_swap
- memory_swap_pss
- memory_text
- memory_uss
The events are aggregated across processes in system-wide mode, or
reported per-process when monitoring specific tasks. The tool_pmu
implementation is updated to read from /proc files and tool.json is
updated with the new events and detailed descriptions. The data can
only be gathered for a running process, except for memory_rss and
memory_resident, which use rusage data from wait4 to give a final value.
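smaps_rollup reports `Key: <value> kB` lines summed over all VMAs, so
each event amounts to locating its key and scaling the value to bytes.
A minimal sketch of that lookup, with an invented helper name (the
real parser is in tools/perf/util/tool_pmu.c):
```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Look up one "Key:   <value> kB" field in a /proc/<pid>/smaps_rollup
 * style buffer and return its value scaled to bytes, or -1 if absent.
 * Illustrative helper; the name is not taken from the patch.
 */
static long long smaps_field_bytes(const char *buf, const char *key)
{
	size_t keylen = strlen(key);
	const char *line = buf;

	while (line && *line) {
		/* Require the ':' so "Pss" does not match "Pss_Anon". */
		if (!strncmp(line, key, keylen) && line[keylen] == ':') {
			unsigned long long kb;

			if (sscanf(line + keylen + 1, "%llu", &kb) == 1)
				return (long long)kb * 1024;
		}
		line = strchr(line, '\n');
		line = line ? line + 1 : NULL;
	}
	return -1;
}
```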
Examples of gathering system-wide statm and smaps_rollup based events:
```
$ perf stat -e memory_size,memory_resident,memory_shared,memory_text,memory_data -I 1000
1.001046524 94,942,406,868,992 memory_size
1.001046524 24,885,137,408 memory_resident
1.001046524 15,494,561,792 memory_shared
1.001046524 15,249,453,056 memory_text
1.001046524 74,510,024,704 memory_data
...
$ perf stat -e memory_anon_huge_pages,memory_anonymous,memory_file_pmd_mapped,memory_ksm,memory_lazyfree,memory_locked,memory_private_clean,memory_private_dirty,memory_private_hugetlb,memory_pss,memory_pss_anon,memory_pss_dirty,memory_pss_file,memory_pss_shmem,memory_referenced,memory_rss,memory_shared,memory_shared_clean,memory_shared_dirty,memory_shared_hugetlb,memory_shmem_pmd_mapped,memory_swap,memory_swap_pss,memory_uss -I 1000
1.001107268 981,467,136 memory_anon_huge_pages
1.001107268 9,459,105,792 memory_anonymous
1.001107268 0 memory_file_pmd_mapped
1.001107268 0 memory_ksm
1.001107268 1,269,760 memory_lazyfree
1.001107268 634,880 memory_locked
1.001107268 958,963,712 memory_private_clean
1.001107268 8,400,080,896 memory_private_dirty
1.001107268 0 memory_private_hugetlb
1.001107268 10,711,451,648 memory_pss
1.001107268 8,387,961,856 memory_pss_anon
1.001107268 8,642,944,000 memory_pss_dirty
1.001107268 2,078,282,752 memory_pss_file
1.001107268 245,125,120 memory_pss_shmem
1.001107268 24,356,818,944 memory_referenced
1.001107268 25,256,157,184 memory_rss
1.001107268 15,757,029,376 memory_shared
1.001107268 14,230,978,560 memory_shared_clean
1.001107268 1,670,934,528 memory_shared_dirty
1.001107268 0 memory_shared_hugetlb
1.001107268 0 memory_shmem_pmd_mapped
1.001107268 0 memory_swap
1.001107268 0 memory_swap_pss
1.001107268 9,328,041,984 memory_uss
...
```
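Unlike smaps_rollup, /proc/pid/statm reports page counts rather than
kB, so the values must be scaled by the page size (sysconf(_SC_PAGESIZE))
to get bytes. A rough sketch of that conversion (struct and function
names are illustrative only):
```c
#include <assert.h>
#include <stdio.h>

/* /proc/<pid>/statm values scaled from pages to bytes. Hypothetical
 * struct for illustration. */
struct statm_bytes {
	unsigned long long size, resident, shared, text, data;
};

static int parse_statm(const char *buf, long page_size,
		       struct statm_bytes *out)
{
	unsigned long long size, resident, shared, text, lib, data;

	/* statm holds seven page counts; the "lib" and trailing "dt"
	 * fields are always 0 on modern kernels and are ignored here. */
	if (sscanf(buf, "%llu %llu %llu %llu %llu %llu",
		   &size, &resident, &shared, &text, &lib, &data) != 6)
		return -1;
	out->size = size * page_size;
	out->resident = resident * page_size;
	out->shared = shared * page_size;
	out->text = text * page_size;
	out->data = data * page_size;
	return 0;
}
```
The final memory_rss/memory_resident value for an exited workload
comes from wait4's ru_maxrss instead, which Linux reports in
kilobytes, hence the `* 1024` in the builtin-stat.c hunk below.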
Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/builtin-stat.c | 10 +-
.../pmu-events/arch/common/common/tool.json | 168 ++++++++
tools/perf/pmu-events/empty-pmu-events.c | 280 +++++++------
tools/perf/util/tool_pmu.c | 375 +++++++++++++++++-
tools/perf/util/tool_pmu.h | 28 ++
5 files changed, 742 insertions(+), 119 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index ab40d85fb125..8c6290174afb 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -107,6 +107,7 @@
struct rusage_stats {
struct stats ru_utime_usec_stat;
struct stats ru_stime_usec_stat;
+ struct stats ru_maxrss_stat;
};
static void print_counters(struct timespec *ts, int argc, const char **argv);
@@ -288,7 +289,9 @@ static int read_single_counter(struct evsel *counter, int cpu_map_idx, int threa
*/
if (err && cpu_map_idx == 0 &&
(evsel__tool_event(counter) == TOOL_PMU__EVENT_USER_TIME ||
- evsel__tool_event(counter) == TOOL_PMU__EVENT_SYSTEM_TIME)) {
+ evsel__tool_event(counter) == TOOL_PMU__EVENT_SYSTEM_TIME ||
+ evsel__tool_event(counter) == TOOL_PMU__EVENT_MEMORY_RESIDENT ||
+ evsel__tool_event(counter) == TOOL_PMU__EVENT_MEMORY_RSS)) {
struct perf_counts_values *count =
perf_counts(counter->counts, cpu_map_idx, thread);
struct perf_counts_values *old_count = NULL;
@@ -299,8 +302,10 @@ static int read_single_counter(struct evsel *counter, int cpu_map_idx, int threa
if (evsel__tool_event(counter) == TOOL_PMU__EVENT_USER_TIME)
val = ru_stats.ru_utime_usec_stat.mean;
- else
+ else if (evsel__tool_event(counter) == TOOL_PMU__EVENT_SYSTEM_TIME)
val = ru_stats.ru_stime_usec_stat.mean;
+ else
+ val = ru_stats.ru_maxrss_stat.mean;
count->val = val;
if (old_count) {
@@ -778,6 +783,7 @@ static void update_rusage_stats(const struct rusage *rusage)
(rusage->ru_utime.tv_usec * us_to_ns + rusage->ru_utime.tv_sec * s_to_ns));
update_stats(&ru_stats.ru_stime_usec_stat,
(rusage->ru_stime.tv_usec * us_to_ns + rusage->ru_stime.tv_sec * s_to_ns));
+ update_stats(&ru_stats.ru_maxrss_stat, rusage->ru_maxrss * 1024);
}
static int __run_perf_stat(int argc, const char **argv, int run_idx)
diff --git a/tools/perf/pmu-events/arch/common/common/tool.json b/tools/perf/pmu-events/arch/common/common/tool.json
index 14d0d60a1976..4b3fce655f8a 100644
--- a/tools/perf/pmu-events/arch/common/common/tool.json
+++ b/tools/perf/pmu-events/arch/common/common/tool.json
@@ -82,5 +82,173 @@
"EventName": "target_cpu",
"BriefDescription": "1 if CPUs being analyzed, 0 if threads/processes",
"ConfigCode": "14"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_anon_huge_pages",
+ "BriefDescription": "Memory backed by anonymous huge pages in bytes",
+ "ConfigCode": "15"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_anonymous",
+ "BriefDescription": "Memory not mapped to a file (anonymous) in bytes",
+ "ConfigCode": "16"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_data",
+ "BriefDescription": "Memory dedicated to data and stack in bytes",
+ "ConfigCode": "17"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_file_pmd_mapped",
+ "BriefDescription": "Memory backed by file and mapped with Page Middle Directory (PMD) in bytes",
+ "ConfigCode": "18"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_ksm",
+ "BriefDescription": "Memory shared via Kernel Samepage Merging (KSM) in bytes",
+ "ConfigCode": "19"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_lazyfree",
+ "BriefDescription": "Memory marked as LazyFree in bytes",
+ "ConfigCode": "20"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_locked",
+ "BriefDescription": "Memory locked in RAM in bytes",
+ "ConfigCode": "21"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_private_clean",
+ "BriefDescription": "Private clean memory (not shared, not modified) in bytes",
+ "ConfigCode": "22"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_private_dirty",
+ "BriefDescription": "Private dirty memory (not shared, modified) in bytes",
+ "ConfigCode": "23"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_private_hugetlb",
+ "BriefDescription": "Private memory backed by huge pages in bytes",
+ "ConfigCode": "24"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_pss",
+ "BriefDescription": "Proportional Share Size (PSS) in bytes",
+ "ConfigCode": "25"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_pss_anon",
+ "BriefDescription": "Proportional Share Size (PSS) for anonymous memory in bytes",
+ "ConfigCode": "26"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_pss_dirty",
+ "BriefDescription": "Proportional Share Size (PSS) for dirty memory in bytes",
+ "ConfigCode": "27"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_pss_file",
+ "BriefDescription": "Proportional Share Size (PSS) for file-backed memory in bytes",
+ "ConfigCode": "28"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_pss_shmem",
+ "BriefDescription": "Proportional Share Size (PSS) for shared memory in bytes",
+ "ConfigCode": "29"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_referenced",
+ "BriefDescription": "Memory marked as referenced/accessed in bytes",
+ "ConfigCode": "30"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_resident",
+ "BriefDescription": "Resident Set Size (RSS) in bytes (from /proc/pid/statm). The sum of anonymous, file and shared memory.",
+ "ConfigCode": "31"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_rss",
+ "BriefDescription": "Resident Set Size (RSS) in bytes (from /proc/pid/smaps_rollup). The sum of anonymous, file and shared memory.",
+ "ConfigCode": "32"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_shared",
+ "BriefDescription": "Shared memory (shared with other processes via files/shmem) in bytes",
+ "ConfigCode": "33"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_shared_clean",
+ "BriefDescription": "Shared clean memory (shared with other processes, not modified) in bytes",
+ "ConfigCode": "34"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_shared_dirty",
+ "BriefDescription": "Shared dirty memory (shared with other processes, modified) in bytes",
+ "ConfigCode": "35"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_shared_hugetlb",
+ "BriefDescription": "Shared memory backed by huge pages in bytes",
+ "ConfigCode": "36"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_shmem_pmd_mapped",
+ "BriefDescription": "Shared memory mapped with Page Middle Directory (PMD) in bytes",
+ "ConfigCode": "37"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_size",
+ "BriefDescription": "Virtual memory size in bytes",
+ "ConfigCode": "38"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_swap",
+ "BriefDescription": "Memory swapped out to disk in bytes",
+ "ConfigCode": "39"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_swap_pss",
+ "BriefDescription": "Proportional Share Size (PSS) for swap memory in bytes",
+ "ConfigCode": "40"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_text",
+ "BriefDescription": "Memory dedicated to code (text segment) in bytes",
+ "ConfigCode": "41"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "memory_uss",
+ "BriefDescription": "Unique Set Size (USS) in bytes",
+ "ConfigCode": "42"
}
]
diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
index 76c395cf513c..4b7c534f1801 100644
--- a/tools/perf/pmu-events/empty-pmu-events.c
+++ b/tools/perf/pmu-events/empty-pmu-events.c
@@ -1281,62 +1281,90 @@ static const char *const big_c_string =
/* offset=126106 */ "system_tsc_freq\000tool\000The amount a Time Stamp Counter (TSC) increases per second\000config=0xc\000\00000\000\000\000\000\000"
/* offset=126205 */ "core_wide\000tool\0001 if not SMT, if SMT are events being gathered on all SMT threads 1 otherwise 0\000config=0xd\000\00000\000\000\000\000\000"
/* offset=126319 */ "target_cpu\000tool\0001 if CPUs being analyzed, 0 if threads/processes\000config=0xe\000\00000\000\000\000\000\000"
-/* offset=126403 */ "bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000"
-/* offset=126465 */ "bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000"
-/* offset=126527 */ "l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000"
-/* offset=126625 */ "segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000"
-/* offset=126727 */ "dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000"
-/* offset=126860 */ "eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000"
-/* offset=126978 */ "hisi_sccl,ddrc\000"
-/* offset=126993 */ "uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000"
-/* offset=127063 */ "uncore_cbox\000"
-/* offset=127075 */ "unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000"
-/* offset=127229 */ "event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000"
-/* offset=127283 */ "event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000"
-/* offset=127341 */ "hisi_sccl,l3c\000"
-/* offset=127355 */ "uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000"
-/* offset=127423 */ "uncore_imc_free_running\000"
-/* offset=127447 */ "uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000"
-/* offset=127527 */ "uncore_imc\000"
-/* offset=127538 */ "uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000"
-/* offset=127603 */ "uncore_sys_ddr_pmu\000"
-/* offset=127622 */ "sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000"
-/* offset=127698 */ "uncore_sys_ccn_pmu\000"
-/* offset=127717 */ "sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000"
-/* offset=127794 */ "uncore_sys_cmn_pmu\000"
-/* offset=127813 */ "sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000"
-/* offset=127956 */ "CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011"
-/* offset=128142 */ "cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011"
-/* offset=128375 */ "migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011"
-/* offset=128635 */ "page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011"
-/* offset=128866 */ "insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001"
-/* offset=128979 */ "stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001"
-/* offset=129143 */ "frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001"
-/* offset=129273 */ "backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001"
-/* offset=129399 */ "cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011"
-/* offset=129575 */ "branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011"
-/* offset=129755 */ "branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001"
-/* offset=129859 */ "l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001"
-/* offset=129975 */ "llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001"
-/* offset=130076 */ "l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001"
-/* offset=130191 */ "dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001"
-/* offset=130297 */ "itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001"
-/* offset=130403 */ "l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001"
-/* offset=130551 */ "CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000"
-/* offset=130574 */ "IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000"
-/* offset=130638 */ "Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000"
-/* offset=130805 */ "dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
-/* offset=130870 */ "icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
-/* offset=130938 */ "cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000"
-/* offset=131010 */ "DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000"
-/* offset=131105 */ "DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000"
-/* offset=131240 */ "DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000"
-/* offset=131305 */ "DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000"
-/* offset=131374 */ "DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000"
-/* offset=131445 */ "M1\000\000ipc + M2\000\000\000\000\000\000\000\000000"
-/* offset=131468 */ "M2\000\000ipc + M1\000\000\000\000\000\000\000\000000"
-/* offset=131491 */ "M3\000\0001 / M3\000\000\000\000\000\000\000\000000"
-/* offset=131512 */ "L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000"
+/* offset=126403 */ "memory_anon_huge_pages\000tool\000Memory backed by anonymous huge pages in bytes\000config=0xf\000\00000\000\000\000\000\000"
+/* offset=126497 */ "memory_anonymous\000tool\000Memory not mapped to a file (anonymous) in bytes\000config=0x10\000\00000\000\000\000\000\000"
+/* offset=126588 */ "memory_data\000tool\000Memory dedicated to data and stack in bytes\000config=0x11\000\00000\000\000\000\000\000"
+/* offset=126669 */ "memory_file_pmd_mapped\000tool\000Memory backed by file and mapped with Page Middle Directory (PMD) in bytes\000config=0x12\000\00000\000\000\000\000\000"
+/* offset=126792 */ "memory_ksm\000tool\000Memory shared via Kernel Samepage Merging (KSM) in bytes\000config=0x13\000\00000\000\000\000\000\000"
+/* offset=126885 */ "memory_lazyfree\000tool\000Memory marked as LazyFree in bytes\000config=0x14\000\00000\000\000\000\000\000"
+/* offset=126961 */ "memory_locked\000tool\000Memory locked in RAM in bytes\000config=0x15\000\00000\000\000\000\000\000"
+/* offset=127030 */ "memory_private_clean\000tool\000Private clean memory (not shared, not modified) in bytes\000config=0x16\000\00000\000\000\000\000\000"
+/* offset=127133 */ "memory_private_dirty\000tool\000Private dirty memory (not shared, modified) in bytes\000config=0x17\000\00000\000\000\000\000\000"
+/* offset=127232 */ "memory_private_hugetlb\000tool\000Private memory backed by huge pages in bytes\000config=0x18\000\00000\000\000\000\000\000"
+/* offset=127325 */ "memory_pss\000tool\000Proportional Share Size (PSS) in bytes\000config=0x19\000\00000\000\000\000\000\000"
+/* offset=127400 */ "memory_pss_anon\000tool\000Proportional Share Size (PSS) for anonymous memory in bytes\000config=0x1a\000\00000\000\000\000\000\000"
+/* offset=127501 */ "memory_pss_dirty\000tool\000Proportional Share Size (PSS) for dirty memory in bytes\000config=0x1b\000\00000\000\000\000\000\000"
+/* offset=127599 */ "memory_pss_file\000tool\000Proportional Share Size (PSS) for file-backed memory in bytes\000config=0x1c\000\00000\000\000\000\000\000"
+/* offset=127702 */ "memory_pss_shmem\000tool\000Proportional Share Size (PSS) for shared memory in bytes\000config=0x1d\000\00000\000\000\000\000\000"
+/* offset=127801 */ "memory_referenced\000tool\000Memory marked as referenced/accessed in bytes\000config=0x1e\000\00000\000\000\000\000\000"
+/* offset=127890 */ "memory_resident\000tool\000Resident Set Size (RSS) in bytes (from /proc/pid/statm). The sum of anonymous, file and shared memory\000config=0x1f\000\00000\000\000\000\000\000"
+/* offset=128033 */ "memory_rss\000tool\000Resident Set Size (RSS) in bytes (from /proc/pid/smaps_rollup). The sum of anonymous, file and shared memory\000config=0x20\000\00000\000\000\000\000\000"
+/* offset=128178 */ "memory_shared\000tool\000Shared memory (shared with other processes via files/shmem) in bytes\000config=0x21\000\00000\000\000\000\000\000"
+/* offset=128286 */ "memory_shared_clean\000tool\000Shared clean memory (shared with other processes, not modified) in bytes\000config=0x22\000\00000\000\000\000\000\000"
+/* offset=128404 */ "memory_shared_dirty\000tool\000Shared dirty memory (shared with other processes, modified) in bytes\000config=0x23\000\00000\000\000\000\000\000"
+/* offset=128518 */ "memory_shared_hugetlb\000tool\000Shared memory backed by huge pages in bytes\000config=0x24\000\00000\000\000\000\000\000"
+/* offset=128609 */ "memory_shmem_pmd_mapped\000tool\000Shared memory mapped with Page Middle Directory (PMD) in bytes\000config=0x25\000\00000\000\000\000\000\000"
+/* offset=128721 */ "memory_size\000tool\000Virtual memory size in bytes\000config=0x26\000\00000\000\000\000\000\000"
+/* offset=128787 */ "memory_swap\000tool\000Memory swapped out to disk in bytes\000config=0x27\000\00000\000\000\000\000\000"
+/* offset=128860 */ "memory_swap_pss\000tool\000Proportional Share Size (PSS) for swap memory in bytes\000config=0x28\000\00000\000\000\000\000\000"
+/* offset=128956 */ "memory_text\000tool\000Memory dedicated to code (text segment) in bytes\000config=0x29\000\00000\000\000\000\000\000"
+/* offset=129042 */ "memory_uss\000tool\000Unique Set Size (USS) in bytes\000config=0x2a\000\00000\000\000\000\000\000"
+/* offset=129109 */ "bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000"
+/* offset=129171 */ "bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000"
+/* offset=129233 */ "l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000"
+/* offset=129331 */ "segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000"
+/* offset=129433 */ "dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000"
+/* offset=129566 */ "eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000"
+/* offset=129684 */ "hisi_sccl,ddrc\000"
+/* offset=129699 */ "uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000"
+/* offset=129769 */ "uncore_cbox\000"
+/* offset=129781 */ "unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000"
+/* offset=129935 */ "event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000"
+/* offset=129989 */ "event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000"
+/* offset=130047 */ "hisi_sccl,l3c\000"
+/* offset=130061 */ "uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000"
+/* offset=130129 */ "uncore_imc_free_running\000"
+/* offset=130153 */ "uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000"
+/* offset=130233 */ "uncore_imc\000"
+/* offset=130244 */ "uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000"
+/* offset=130309 */ "uncore_sys_ddr_pmu\000"
+/* offset=130328 */ "sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000"
+/* offset=130404 */ "uncore_sys_ccn_pmu\000"
+/* offset=130423 */ "sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000"
+/* offset=130500 */ "uncore_sys_cmn_pmu\000"
+/* offset=130519 */ "sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000"
+/* offset=130662 */ "CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011"
+/* offset=130848 */ "cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011"
+/* offset=131081 */ "migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011"
+/* offset=131341 */ "page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011"
+/* offset=131572 */ "insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001"
+/* offset=131685 */ "stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001"
+/* offset=131849 */ "frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001"
+/* offset=131979 */ "backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001"
+/* offset=132105 */ "cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011"
+/* offset=132281 */ "branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011"
+/* offset=132461 */ "branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001"
+/* offset=132565 */ "l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001"
+/* offset=132681 */ "llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001"
+/* offset=132782 */ "l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001"
+/* offset=132897 */ "dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001"
+/* offset=133003 */ "itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001"
+/* offset=133109 */ "l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001"
+/* offset=133257 */ "CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000"
+/* offset=133280 */ "IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000"
+/* offset=133344 */ "Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000"
+/* offset=133511 */ "dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
+/* offset=133576 */ "icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
+/* offset=133644 */ "cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000"
+/* offset=133716 */ "DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000"
+/* offset=133811 */ "DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000"
+/* offset=133946 */ "DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000"
+/* offset=134011 */ "DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000"
+/* offset=134080 */ "DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000"
+/* offset=134151 */ "M1\000\000ipc + M2\000\000\000\000\000\000\000\000000"
+/* offset=134174 */ "M2\000\000ipc + M1\000\000\000\000\000\000\000\000000"
+/* offset=134197 */ "M3\000\0001 / M3\000\000\000\000\000\000\000\000000"
+/* offset=134218 */ "L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000"
;
static const struct compact_pmu_event pmu_events__common_default_core[] = {
@@ -2592,6 +2620,34 @@ static const struct compact_pmu_event pmu_events__common_tool[] = {
{ 126205 }, /* core_wide\000tool\0001 if not SMT, if SMT are events being gathered on all SMT threads 1 otherwise 0\000config=0xd\000\00000\000\000\000\000\000 */
{ 125072 }, /* duration_time\000tool\000Wall clock interval time in nanoseconds\000config=1\000\00000\000\000\000\000\000 */
{ 125286 }, /* has_pmem\000tool\0001 if persistent memory installed otherwise 0\000config=4\000\00000\000\000\000\000\000 */
+{ 126403 }, /* memory_anon_huge_pages\000tool\000Memory backed by anonymous huge pages in bytes\000config=0xf\000\00000\000\000\000\000\000 */
+{ 126497 }, /* memory_anonymous\000tool\000Memory not mapped to a file (anonymous) in bytes\000config=0x10\000\00000\000\000\000\000\000 */
+{ 126588 }, /* memory_data\000tool\000Memory dedicated to data and stack in bytes\000config=0x11\000\00000\000\000\000\000\000 */
+{ 126669 }, /* memory_file_pmd_mapped\000tool\000Memory backed by file and mapped with Page Middle Directory (PMD) in bytes\000config=0x12\000\00000\000\000\000\000\000 */
+{ 126792 }, /* memory_ksm\000tool\000Memory shared via Kernel Samepage Merging (KSM) in bytes\000config=0x13\000\00000\000\000\000\000\000 */
+{ 126885 }, /* memory_lazyfree\000tool\000Memory marked as LazyFree in bytes\000config=0x14\000\00000\000\000\000\000\000 */
+{ 126961 }, /* memory_locked\000tool\000Memory locked in RAM in bytes\000config=0x15\000\00000\000\000\000\000\000 */
+{ 127030 }, /* memory_private_clean\000tool\000Private clean memory (not shared, not modified) in bytes\000config=0x16\000\00000\000\000\000\000\000 */
+{ 127133 }, /* memory_private_dirty\000tool\000Private dirty memory (not shared, modified) in bytes\000config=0x17\000\00000\000\000\000\000\000 */
+{ 127232 }, /* memory_private_hugetlb\000tool\000Private memory backed by huge pages in bytes\000config=0x18\000\00000\000\000\000\000\000 */
+{ 127325 }, /* memory_pss\000tool\000Proportional Share Size (PSS) in bytes\000config=0x19\000\00000\000\000\000\000\000 */
+{ 127400 }, /* memory_pss_anon\000tool\000Proportional Share Size (PSS) for anonymous memory in bytes\000config=0x1a\000\00000\000\000\000\000\000 */
+{ 127501 }, /* memory_pss_dirty\000tool\000Proportional Share Size (PSS) for dirty memory in bytes\000config=0x1b\000\00000\000\000\000\000\000 */
+{ 127599 }, /* memory_pss_file\000tool\000Proportional Share Size (PSS) for file-backed memory in bytes\000config=0x1c\000\00000\000\000\000\000\000 */
+{ 127702 }, /* memory_pss_shmem\000tool\000Proportional Share Size (PSS) for shared memory in bytes\000config=0x1d\000\00000\000\000\000\000\000 */
+{ 127801 }, /* memory_referenced\000tool\000Memory marked as referenced/accessed in bytes\000config=0x1e\000\00000\000\000\000\000\000 */
+{ 127890 }, /* memory_resident\000tool\000Resident Set Size (RSS) in bytes (from /proc/pid/statm). The sum of anonymous, file and shared memory\000config=0x1f\000\00000\000\000\000\000\000 */
+{ 128033 }, /* memory_rss\000tool\000Resident Set Size (RSS) in bytes (from /proc/pid/smaps_rollup). The sum of anonymous, file and shared memory\000config=0x20\000\00000\000\000\000\000\000 */
+{ 128178 }, /* memory_shared\000tool\000Shared memory (shared with other processes via files/shmem) in bytes\000config=0x21\000\00000\000\000\000\000\000 */
+{ 128286 }, /* memory_shared_clean\000tool\000Shared clean memory (shared with other processes, not modified) in bytes\000config=0x22\000\00000\000\000\000\000\000 */
+{ 128404 }, /* memory_shared_dirty\000tool\000Shared dirty memory (shared with other processes, modified) in bytes\000config=0x23\000\00000\000\000\000\000\000 */
+{ 128518 }, /* memory_shared_hugetlb\000tool\000Shared memory backed by huge pages in bytes\000config=0x24\000\00000\000\000\000\000\000 */
+{ 128609 }, /* memory_shmem_pmd_mapped\000tool\000Shared memory mapped with Page Middle Directory (PMD) in bytes\000config=0x25\000\00000\000\000\000\000\000 */
+{ 128721 }, /* memory_size\000tool\000Virtual memory size in bytes\000config=0x26\000\00000\000\000\000\000\000 */
+{ 128787 }, /* memory_swap\000tool\000Memory swapped out to disk in bytes\000config=0x27\000\00000\000\000\000\000\000 */
+{ 128860 }, /* memory_swap_pss\000tool\000Proportional Share Size (PSS) for swap memory in bytes\000config=0x28\000\00000\000\000\000\000\000 */
+{ 128956 }, /* memory_text\000tool\000Memory dedicated to code (text segment) in bytes\000config=0x29\000\00000\000\000\000\000\000 */
+{ 129042 }, /* memory_uss\000tool\000Unique Set Size (USS) in bytes\000config=0x2a\000\00000\000\000\000\000\000 */
{ 125362 }, /* num_cores\000tool\000Number of cores. A core consists of 1 or more thread, with each thread being associated with a logical Linux CPU\000config=5\000\00000\000\000\000\000\000 */
{ 125507 }, /* num_cpus\000tool\000Number of logical Linux CPUs. There may be multiple such CPUs on a core\000config=6\000\00000\000\000\000\000\000 */
{ 125610 }, /* num_cpus_online\000tool\000Number of online logical Linux CPUs. There may be multiple such CPUs on a core\000config=7\000\00000\000\000\000\000\000 */
@@ -2625,23 +2681,23 @@ static const struct pmu_table_entry pmu_events__common[] = {
};
static const struct compact_pmu_event pmu_metrics__common_default_core[] = {
-{ 127956 }, /* CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011 */
-{ 129273 }, /* backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001 */
-{ 129575 }, /* branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011 */
-{ 129755 }, /* branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001 */
-{ 128142 }, /* cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011 */
-{ 129399 }, /* cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011 */
-{ 130191 }, /* dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001 */
-{ 129143 }, /* frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001 */
-{ 128866 }, /* insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001 */
-{ 130297 }, /* itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001 */
-{ 130403 }, /* l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001 */
-{ 129859 }, /* l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001 */
-{ 130076 }, /* l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001 */
-{ 129975 }, /* llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001 */
-{ 128375 }, /* migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011 */
-{ 128635 }, /* page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011 */
-{ 128979 }, /* stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001 */
+{ 130662 }, /* CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011 */
+{ 131979 }, /* backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001 */
+{ 132281 }, /* branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011 */
+{ 132461 }, /* branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001 */
+{ 130848 }, /* cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011 */
+{ 132105 }, /* cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011 */
+{ 132897 }, /* dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001 */
+{ 131849 }, /* frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001 */
+{ 131572 }, /* insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001 */
+{ 133003 }, /* itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001 */
+{ 133109 }, /* l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001 */
+{ 132565 }, /* l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001 */
+{ 132782 }, /* l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001 */
+{ 132681 }, /* llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001 */
+{ 131081 }, /* migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011 */
+{ 131341 }, /* page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011 */
+{ 131685 }, /* stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001 */
};
@@ -2654,29 +2710,29 @@ static const struct pmu_table_entry pmu_metrics__common[] = {
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_default_core[] = {
-{ 126403 }, /* bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000 */
-{ 126465 }, /* bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000 */
-{ 126727 }, /* dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000 */
-{ 126860 }, /* eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000 */
-{ 126527 }, /* l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000 */
-{ 126625 }, /* segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000 */
+{ 129109 }, /* bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000 */
+{ 129171 }, /* bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000 */
+{ 129433 }, /* dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000 */
+{ 129566 }, /* eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000 */
+{ 129233 }, /* l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000 */
+{ 129331 }, /* segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_hisi_sccl_ddrc[] = {
-{ 126993 }, /* uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000 */
+{ 129699 }, /* uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_hisi_sccl_l3c[] = {
-{ 127355 }, /* uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000 */
+{ 130061 }, /* uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_cbox[] = {
-{ 127229 }, /* event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000 */
-{ 127283 }, /* event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000 */
-{ 127075 }, /* unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000 */
+{ 129935 }, /* event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000 */
+{ 129989 }, /* event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000 */
+{ 129781 }, /* unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_imc[] = {
-{ 127538 }, /* uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000 */
+{ 130244 }, /* uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_imc_free_running[] = {
-{ 127447 }, /* uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000 */
+{ 130153 }, /* uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000 */
};
@@ -2689,46 +2745,46 @@ static const struct pmu_table_entry pmu_events__test_soc_cpu[] = {
{
.entries = pmu_events__test_soc_cpu_hisi_sccl_ddrc,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_hisi_sccl_ddrc),
- .pmu_name = { 126978 /* hisi_sccl,ddrc\000 */ },
+ .pmu_name = { 129684 /* hisi_sccl,ddrc\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_hisi_sccl_l3c,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_hisi_sccl_l3c),
- .pmu_name = { 127341 /* hisi_sccl,l3c\000 */ },
+ .pmu_name = { 130047 /* hisi_sccl,l3c\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_cbox,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_cbox),
- .pmu_name = { 127063 /* uncore_cbox\000 */ },
+ .pmu_name = { 129769 /* uncore_cbox\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_imc,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_imc),
- .pmu_name = { 127527 /* uncore_imc\000 */ },
+ .pmu_name = { 130233 /* uncore_imc\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_imc_free_running,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_imc_free_running),
- .pmu_name = { 127423 /* uncore_imc_free_running\000 */ },
+ .pmu_name = { 130129 /* uncore_imc_free_running\000 */ },
},
};
static const struct compact_pmu_event pmu_metrics__test_soc_cpu_default_core[] = {
-{ 130551 }, /* CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000 */
-{ 131240 }, /* DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000 */
-{ 131010 }, /* DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000 */
-{ 131105 }, /* DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000 */
-{ 131305 }, /* DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
-{ 131374 }, /* DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
-{ 130638 }, /* Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000 */
-{ 130574 }, /* IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000 */
-{ 131512 }, /* L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000 */
-{ 131445 }, /* M1\000\000ipc + M2\000\000\000\000\000\000\000\000000 */
-{ 131468 }, /* M2\000\000ipc + M1\000\000\000\000\000\000\000\000000 */
-{ 131491 }, /* M3\000\0001 / M3\000\000\000\000\000\000\000\000000 */
-{ 130938 }, /* cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000 */
-{ 130805 }, /* dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
-{ 130870 }, /* icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
+{ 133257 }, /* CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000 */
+{ 133946 }, /* DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000 */
+{ 133716 }, /* DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000 */
+{ 133811 }, /* DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000 */
+{ 134011 }, /* DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
+{ 134080 }, /* DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
+{ 133344 }, /* Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000 */
+{ 133280 }, /* IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000 */
+{ 134218 }, /* L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000 */
+{ 134151 }, /* M1\000\000ipc + M2\000\000\000\000\000\000\000\000000 */
+{ 134174 }, /* M2\000\000ipc + M1\000\000\000\000\000\000\000\000000 */
+{ 134197 }, /* M3\000\0001 / M3\000\000\000\000\000\000\000\000000 */
+{ 133644 }, /* cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000 */
+{ 133511 }, /* dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
+{ 133576 }, /* icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
};
@@ -2741,13 +2797,13 @@ static const struct pmu_table_entry pmu_metrics__test_soc_cpu[] = {
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_ccn_pmu[] = {
-{ 127717 }, /* sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000 */
+{ 130423 }, /* sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_cmn_pmu[] = {
-{ 127813 }, /* sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000 */
+{ 130519 }, /* sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_ddr_pmu[] = {
-{ 127622 }, /* sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000 */
+{ 130328 }, /* sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000 */
};
@@ -2755,17 +2811,17 @@ static const struct pmu_table_entry pmu_events__test_soc_sys[] = {
{
.entries = pmu_events__test_soc_sys_uncore_sys_ccn_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_ccn_pmu),
- .pmu_name = { 127698 /* uncore_sys_ccn_pmu\000 */ },
+ .pmu_name = { 130404 /* uncore_sys_ccn_pmu\000 */ },
},
{
.entries = pmu_events__test_soc_sys_uncore_sys_cmn_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_cmn_pmu),
- .pmu_name = { 127794 /* uncore_sys_cmn_pmu\000 */ },
+ .pmu_name = { 130500 /* uncore_sys_cmn_pmu\000 */ },
},
{
.entries = pmu_events__test_soc_sys_uncore_sys_ddr_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_ddr_pmu),
- .pmu_name = { 127603 /* uncore_sys_ddr_pmu\000 */ },
+ .pmu_name = { 130309 /* uncore_sys_ddr_pmu\000 */ },
},
};
diff --git a/tools/perf/util/tool_pmu.c b/tools/perf/util/tool_pmu.c
index 37c4eae0bef1..2d1f244264dd 100644
--- a/tools/perf/util/tool_pmu.c
+++ b/tools/perf/util/tool_pmu.c
@@ -13,11 +13,14 @@
#include "tsc.h"
#include <api/fs/fs.h>
#include <api/io.h>
+#include <internal/lib.h> // page_size
#include <internal/threadmap.h>
#include <perf/cpumap.h>
#include <perf/threadmap.h>
#include <fcntl.h>
#include <strings.h>
+#include <api/io_dir.h>
+#include <ctype.h>
static const char *const tool_pmu__event_names[TOOL_PMU__EVENT_MAX] = {
NULL,
@@ -35,6 +38,34 @@ static const char *const tool_pmu__event_names[TOOL_PMU__EVENT_MAX] = {
"system_tsc_freq",
"core_wide",
"target_cpu",
+ "memory_anon_huge_pages",
+ "memory_anonymous",
+ "memory_data",
+ "memory_file_pmd_mapped",
+ "memory_ksm",
+ "memory_lazyfree",
+ "memory_locked",
+ "memory_private_clean",
+ "memory_private_dirty",
+ "memory_private_hugetlb",
+ "memory_pss",
+ "memory_pss_anon",
+ "memory_pss_dirty",
+ "memory_pss_file",
+ "memory_pss_shmem",
+ "memory_referenced",
+ "memory_resident",
+ "memory_rss",
+ "memory_shared",
+ "memory_shared_clean",
+ "memory_shared_dirty",
+ "memory_shared_hugetlb",
+ "memory_shmem_pmd_mapped",
+ "memory_size",
+ "memory_swap",
+ "memory_swap_pss",
+ "memory_text",
+ "memory_uss",
};
bool tool_pmu__skip_event(const char *name __maybe_unused)
@@ -220,6 +251,190 @@ static int read_pid_stat_field(int fd, int field, __u64 *val)
return -EINVAL;
}
+static bool tool_pmu__is_memory_event(enum tool_pmu_event ev)
+{
+ return ev >= TOOL_PMU__EVENT_MEMORY_ANON_HUGE_PAGES &&
+ ev <= TOOL_PMU__EVENT_MEMORY_USS;
+}
+
+static bool tool_pmu__is_memory_statm_event(enum tool_pmu_event ev)
+{
+ return ev == TOOL_PMU__EVENT_MEMORY_SIZE ||
+ ev == TOOL_PMU__EVENT_MEMORY_RESIDENT ||
+ ev == TOOL_PMU__EVENT_MEMORY_SHARED ||
+ ev == TOOL_PMU__EVENT_MEMORY_TEXT ||
+ ev == TOOL_PMU__EVENT_MEMORY_DATA;
+}
+
+static const char *tool_pmu__memory_event_to_key(enum tool_pmu_event ev)
+{
+ switch (ev) {
+ case TOOL_PMU__EVENT_MEMORY_ANON_HUGE_PAGES: return "AnonHugePages:";
+ case TOOL_PMU__EVENT_MEMORY_ANONYMOUS: return "Anonymous:";
+ case TOOL_PMU__EVENT_MEMORY_FILE_PMD_MAPPED: return "FilePmdMapped:";
+ case TOOL_PMU__EVENT_MEMORY_KSM: return "KSM:";
+ case TOOL_PMU__EVENT_MEMORY_LAZYFREE: return "LazyFree:";
+ case TOOL_PMU__EVENT_MEMORY_LOCKED: return "Locked:";
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_CLEAN: return "Private_Clean:";
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_DIRTY: return "Private_Dirty:";
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_HUGETLB: return "Private_Hugetlb:";
+ case TOOL_PMU__EVENT_MEMORY_PSS: return "Pss:";
+ case TOOL_PMU__EVENT_MEMORY_PSS_ANON: return "Pss_Anon:";
+ case TOOL_PMU__EVENT_MEMORY_PSS_DIRTY: return "Pss_Dirty:";
+ case TOOL_PMU__EVENT_MEMORY_PSS_FILE: return "Pss_File:";
+ case TOOL_PMU__EVENT_MEMORY_PSS_SHMEM: return "Pss_Shmem:";
+ case TOOL_PMU__EVENT_MEMORY_REFERENCED: return "Referenced:";
+ case TOOL_PMU__EVENT_MEMORY_RSS: return "Rss:";
+ case TOOL_PMU__EVENT_MEMORY_SHARED_CLEAN: return "Shared_Clean:";
+ case TOOL_PMU__EVENT_MEMORY_SHARED_DIRTY: return "Shared_Dirty:";
+ case TOOL_PMU__EVENT_MEMORY_SHARED_HUGETLB: return "Shared_Hugetlb:";
+ case TOOL_PMU__EVENT_MEMORY_SHMEM_PMD_MAPPED: return "ShmemPmdMapped:";
+ case TOOL_PMU__EVENT_MEMORY_SWAP: return "Swap:";
+ case TOOL_PMU__EVENT_MEMORY_SWAP_PSS: return "SwapPss:";
+ case TOOL_PMU__EVENT_MEMORY_DATA:
+ case TOOL_PMU__EVENT_MEMORY_RESIDENT:
+ case TOOL_PMU__EVENT_MEMORY_SHARED:
+ case TOOL_PMU__EVENT_MEMORY_SIZE:
+ case TOOL_PMU__EVENT_MEMORY_TEXT:
+ case TOOL_PMU__EVENT_MEMORY_USS:
+ case TOOL_PMU__EVENT_DURATION_TIME:
+ case TOOL_PMU__EVENT_USER_TIME:
+ case TOOL_PMU__EVENT_SYSTEM_TIME:
+ case TOOL_PMU__EVENT_HAS_PMEM:
+ case TOOL_PMU__EVENT_NUM_CORES:
+ case TOOL_PMU__EVENT_NUM_CPUS:
+ case TOOL_PMU__EVENT_NUM_CPUS_ONLINE:
+ case TOOL_PMU__EVENT_NUM_DIES:
+ case TOOL_PMU__EVENT_NUM_PACKAGES:
+ case TOOL_PMU__EVENT_SLOTS:
+ case TOOL_PMU__EVENT_SMT_ON:
+ case TOOL_PMU__EVENT_SYSTEM_TSC_FREQ:
+ case TOOL_PMU__EVENT_CORE_WIDE:
+ case TOOL_PMU__EVENT_TARGET_CPU:
+ case TOOL_PMU__EVENT_NONE:
+ case TOOL_PMU__EVENT_MAX:
+ default: return NULL;
+ }
+}
+
+static int read_smaps_rollup_field(int fd, const char *key, u64 *val)
+{
+ char buf[4096];
+ struct io io;
+ int ch;
+
+ io__init(&io, fd, buf, sizeof(buf));
+
+ while ((ch = io__get_char(&io)) != -1) {
+ /* Check if line starts with key */
+ if (ch == key[0]) {
+ const char *k = key + 1;
+
+ while (*k && (ch = io__get_char(&io)) == *k)
+ k++;
+
+ if (!*k) {
+ /* Found key, skip whitespace */
+ while ((ch = io__get_char(&io)) == ' ' || ch == '\t')
+ ;
+ /* Read value */
+ if (ch >= '0' && ch <= '9') {
+ *val = ch - '0';
+ while ((ch = io__get_char(&io)) >= '0' && ch <= '9') {
+ *val = *val * 10 + (ch - '0');
+ }
+ /* Convert kB to bytes */
+ *val *= 1024;
+ return 0;
+ }
+ }
+ }
+ /* Skip rest of line */
+ if (ch != '\n')
+ read_until_char(&io, '\n');
+ }
+ return -EINVAL;
+}
+
+static int read_smaps_rollup(int fd, enum tool_pmu_event ev, u64 *val)
+{
+ int ret;
+
+ if (ev == TOOL_PMU__EVENT_MEMORY_USS) {
+ u64 pc, pd;
+
+ lseek(fd, 0, SEEK_SET);
+ ret = read_smaps_rollup_field(fd, "Private_Clean:", &pc);
+ if (ret)
+ return ret;
+ lseek(fd, 0, SEEK_SET);
+ ret = read_smaps_rollup_field(fd, "Private_Dirty:", &pd);
+ if (ret)
+ return ret;
+ *val = pc + pd;
+ return 0;
+ }
+
+ lseek(fd, 0, SEEK_SET);
+ return read_smaps_rollup_field(fd, tool_pmu__memory_event_to_key(ev), val);
+}
+
+static int read_statm(int fd, enum tool_pmu_event ev, u64 *val)
+{
+ char buf[128];
+ struct io io;
+ u64 v;
+
+ io__init(&io, fd, buf, sizeof(buf));
+ lseek(fd, 0, SEEK_SET);
+
+ /* Size */
+ if (io__get_dec(&io, (__u64 *)&v) == -1)
+ return -EINVAL;
+ if (ev == TOOL_PMU__EVENT_MEMORY_SIZE) {
+ *val = v * page_size;
+ return 0;
+ }
+
+ /* Resident */
+	if (io__get_dec(&io, (__u64 *)&v) == -1)
+ return -EINVAL;
+ if (ev == TOOL_PMU__EVENT_MEMORY_RESIDENT) {
+ *val = v * page_size;
+ return 0;
+ }
+
+ /* Shared */
+	if (io__get_dec(&io, (__u64 *)&v) == -1)
+ return -EINVAL;
+ if (ev == TOOL_PMU__EVENT_MEMORY_SHARED) {
+ *val = v * page_size;
+ return 0;
+ }
+
+ /* Text */
+ if (io__get_dec(&io, (__u64 *)&v) == -1)
+ return -EINVAL;
+ if (ev == TOOL_PMU__EVENT_MEMORY_TEXT) {
+ *val = v * page_size;
+ return 0;
+ }
+
+ /* Lib */
+ if (io__get_dec(&io, (__u64 *)&v) == -1) /* Skip */
+ return -EINVAL;
+
+ /* Data */
+ if (io__get_dec(&io, (__u64 *)&v) == -1)
+ return -EINVAL;
+ if (ev == TOOL_PMU__EVENT_MEMORY_DATA) {
+ *val = v * page_size;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
int evsel__tool_pmu_prepare_open(struct evsel *evsel,
struct perf_cpu_map *cpus,
int nthreads)
@@ -267,6 +482,7 @@ int evsel__tool_pmu_open(struct evsel *evsel,
if (ev == TOOL_PMU__EVENT_USER_TIME || ev == TOOL_PMU__EVENT_SYSTEM_TIME) {
bool system = ev == TOOL_PMU__EVENT_SYSTEM_TIME;
__u64 *start_time = NULL;
+ char buf[PATH_MAX];
int fd;
if (evsel->core.attr.sample_period) {
@@ -275,14 +491,14 @@ int evsel__tool_pmu_open(struct evsel *evsel,
goto out_close;
}
if (pid > -1) {
- char buf[64];
-
- snprintf(buf, sizeof(buf), "/proc/%d/stat", pid);
- fd = open(buf, O_RDONLY);
+ snprintf(buf, sizeof(buf), "%s/%d/stat",
+ procfs__mountpoint(), pid);
evsel->pid_stat = true;
} else {
- fd = open("/proc/stat", O_RDONLY);
+ snprintf(buf, sizeof(buf), "%s/stat",
+ procfs__mountpoint());
}
+ fd = open(buf, O_RDONLY);
FD(evsel, idx, thread) = fd;
if (fd < 0) {
err = -errno;
@@ -301,6 +517,30 @@ int evsel__tool_pmu_open(struct evsel *evsel,
}
if (err)
goto out_close;
+ } else if (tool_pmu__is_memory_event(ev)) {
+ int fd = -1;
+
+ if (pid > -1) {
+ char buf[PATH_MAX];
+
+ if (tool_pmu__is_memory_statm_event(ev)) {
+ snprintf(buf, sizeof(buf), "%s/%d/statm",
+ procfs__mountpoint(), pid);
+ } else {
+ snprintf(buf, sizeof(buf), "%s/%d/smaps_rollup",
+ procfs__mountpoint(), pid);
+ }
+ fd = open(buf, O_RDONLY);
+ }
+ /*
+ * For system-wide (pid == -1), we don't open a file here.
+ * We will aggregate in read().
+ */
+ if (pid > -1 && fd < 0) {
+ err = -errno;
+ goto out_close;
+ }
+ FD(evsel, idx, thread) = fd;
}
}
@@ -455,6 +695,34 @@ bool tool_pmu__read_event(enum tool_pmu_event ev,
*result = system_wide || (user_requested_cpu_list != NULL) ? 1 : 0;
return true;
+ case TOOL_PMU__EVENT_MEMORY_SIZE:
+ case TOOL_PMU__EVENT_MEMORY_RSS:
+ case TOOL_PMU__EVENT_MEMORY_PSS:
+ case TOOL_PMU__EVENT_MEMORY_SHARED:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_CLEAN:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_CLEAN:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_USS:
+ case TOOL_PMU__EVENT_MEMORY_SWAP:
+ case TOOL_PMU__EVENT_MEMORY_SWAP_PSS:
+ case TOOL_PMU__EVENT_MEMORY_PSS_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_PSS_ANON:
+ case TOOL_PMU__EVENT_MEMORY_PSS_FILE:
+ case TOOL_PMU__EVENT_MEMORY_PSS_SHMEM:
+ case TOOL_PMU__EVENT_MEMORY_RESIDENT:
+ case TOOL_PMU__EVENT_MEMORY_REFERENCED:
+ case TOOL_PMU__EVENT_MEMORY_ANONYMOUS:
+ case TOOL_PMU__EVENT_MEMORY_KSM:
+ case TOOL_PMU__EVENT_MEMORY_LAZYFREE:
+ case TOOL_PMU__EVENT_MEMORY_ANON_HUGE_PAGES:
+ case TOOL_PMU__EVENT_MEMORY_SHMEM_PMD_MAPPED:
+ case TOOL_PMU__EVENT_MEMORY_FILE_PMD_MAPPED:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_HUGETLB:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_HUGETLB:
+ case TOOL_PMU__EVENT_MEMORY_LOCKED:
+ case TOOL_PMU__EVENT_MEMORY_DATA:
+ case TOOL_PMU__EVENT_MEMORY_TEXT:
case TOOL_PMU__EVENT_NONE:
case TOOL_PMU__EVENT_DURATION_TIME:
case TOOL_PMU__EVENT_USER_TIME:
@@ -487,6 +755,52 @@ static void perf_counts__update(struct perf_counts_values *count,
}
}
+static int tool_pmu__aggregate_memory_event(enum tool_pmu_event ev, u64 *val)
+{
+ struct io_dir iod;
+ struct io_dirent64 *ent;
+ int proc_fd;
+
+ *val = 0;
+ proc_fd = open(procfs__mountpoint(), O_DIRECTORY | O_RDONLY);
+ if (proc_fd < 0)
+ return -errno;
+
+ io_dir__init(&iod, proc_fd);
+
+ while ((ent = io_dir__readdir(&iod)) != NULL) {
+ char buf[PATH_MAX];
+ u64 pid_val;
+ int fd;
+
+ if (!io_dir__is_dir(&iod, ent))
+ continue;
+
+ if (!isdigit(ent->d_name[0]))
+ continue;
+
+ if (tool_pmu__is_memory_statm_event(ev))
+ snprintf(buf, sizeof(buf), "%s/statm", ent->d_name);
+ else
+ snprintf(buf, sizeof(buf), "%s/smaps_rollup", ent->d_name);
+
+ fd = openat(proc_fd, buf, O_RDONLY);
+ if (fd < 0)
+ continue;
+
+ if (tool_pmu__is_memory_statm_event(ev)) {
+ if (!read_statm(fd, ev, &pid_val))
+ *val += pid_val;
+ } else {
+ if (!read_smaps_rollup(fd, ev, &pid_val))
+ *val += pid_val;
+ }
+ close(fd);
+ }
+ close(proc_fd);
+ return 0;
+}
+
int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
{
__u64 *start_time, cur_time, delta_start;
@@ -564,6 +878,57 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
adjust = true;
break;
}
+ case TOOL_PMU__EVENT_MEMORY_SIZE:
+ case TOOL_PMU__EVENT_MEMORY_RSS:
+ case TOOL_PMU__EVENT_MEMORY_PSS:
+ case TOOL_PMU__EVENT_MEMORY_SHARED:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_CLEAN:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_CLEAN:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_USS:
+ case TOOL_PMU__EVENT_MEMORY_SWAP:
+ case TOOL_PMU__EVENT_MEMORY_SWAP_PSS:
+ case TOOL_PMU__EVENT_MEMORY_PSS_DIRTY:
+ case TOOL_PMU__EVENT_MEMORY_PSS_ANON:
+ case TOOL_PMU__EVENT_MEMORY_PSS_FILE:
+ case TOOL_PMU__EVENT_MEMORY_PSS_SHMEM:
+ case TOOL_PMU__EVENT_MEMORY_REFERENCED:
+ case TOOL_PMU__EVENT_MEMORY_RESIDENT:
+ case TOOL_PMU__EVENT_MEMORY_ANONYMOUS:
+ case TOOL_PMU__EVENT_MEMORY_KSM:
+ case TOOL_PMU__EVENT_MEMORY_LAZYFREE:
+ case TOOL_PMU__EVENT_MEMORY_ANON_HUGE_PAGES:
+ case TOOL_PMU__EVENT_MEMORY_SHMEM_PMD_MAPPED:
+ case TOOL_PMU__EVENT_MEMORY_FILE_PMD_MAPPED:
+ case TOOL_PMU__EVENT_MEMORY_SHARED_HUGETLB:
+ case TOOL_PMU__EVENT_MEMORY_PRIVATE_HUGETLB:
+ case TOOL_PMU__EVENT_MEMORY_LOCKED:
+ case TOOL_PMU__EVENT_MEMORY_DATA:
+ case TOOL_PMU__EVENT_MEMORY_TEXT: {
+ int fd = FD(evsel, cpu_map_idx, thread);
+ u64 val = 0;
+
+ if (fd >= 0) {
+ /* Per-process */
+ int ret;
+
+ if (tool_pmu__is_memory_statm_event(ev))
+ ret = read_statm(fd, ev, &val);
+ else
+ ret = read_smaps_rollup(fd, ev, &val);
+
+ if (ret)
+ return ret;
+ } else {
+ /* System-wide aggregation */
+ if (cpu_map_idx == 0 && thread == 0)
+ tool_pmu__aggregate_memory_event(ev, &val);
+ }
+ perf_counts__update(count, old_count, /*raw=*/false, val);
+ return 0;
+ }
case TOOL_PMU__EVENT_NONE:
case TOOL_PMU__EVENT_MAX:
default:
diff --git a/tools/perf/util/tool_pmu.h b/tools/perf/util/tool_pmu.h
index ea343d1983d3..bf6bb196ad75 100644
--- a/tools/perf/util/tool_pmu.h
+++ b/tools/perf/util/tool_pmu.h
@@ -24,6 +24,34 @@ enum tool_pmu_event {
TOOL_PMU__EVENT_SYSTEM_TSC_FREQ,
TOOL_PMU__EVENT_CORE_WIDE,
TOOL_PMU__EVENT_TARGET_CPU,
+ TOOL_PMU__EVENT_MEMORY_ANON_HUGE_PAGES,
+ TOOL_PMU__EVENT_MEMORY_ANONYMOUS,
+ TOOL_PMU__EVENT_MEMORY_DATA,
+ TOOL_PMU__EVENT_MEMORY_FILE_PMD_MAPPED,
+ TOOL_PMU__EVENT_MEMORY_KSM,
+ TOOL_PMU__EVENT_MEMORY_LAZYFREE,
+ TOOL_PMU__EVENT_MEMORY_LOCKED,
+ TOOL_PMU__EVENT_MEMORY_PRIVATE_CLEAN,
+ TOOL_PMU__EVENT_MEMORY_PRIVATE_DIRTY,
+ TOOL_PMU__EVENT_MEMORY_PRIVATE_HUGETLB,
+ TOOL_PMU__EVENT_MEMORY_PSS,
+ TOOL_PMU__EVENT_MEMORY_PSS_ANON,
+ TOOL_PMU__EVENT_MEMORY_PSS_DIRTY,
+ TOOL_PMU__EVENT_MEMORY_PSS_FILE,
+ TOOL_PMU__EVENT_MEMORY_PSS_SHMEM,
+ TOOL_PMU__EVENT_MEMORY_REFERENCED,
+ TOOL_PMU__EVENT_MEMORY_RESIDENT,
+ TOOL_PMU__EVENT_MEMORY_RSS,
+ TOOL_PMU__EVENT_MEMORY_SHARED,
+ TOOL_PMU__EVENT_MEMORY_SHARED_CLEAN,
+ TOOL_PMU__EVENT_MEMORY_SHARED_DIRTY,
+ TOOL_PMU__EVENT_MEMORY_SHARED_HUGETLB,
+ TOOL_PMU__EVENT_MEMORY_SHMEM_PMD_MAPPED,
+ TOOL_PMU__EVENT_MEMORY_SIZE,
+ TOOL_PMU__EVENT_MEMORY_SWAP,
+ TOOL_PMU__EVENT_MEMORY_SWAP_PSS,
+ TOOL_PMU__EVENT_MEMORY_TEXT,
+ TOOL_PMU__EVENT_MEMORY_USS,
TOOL_PMU__EVENT_MAX,
};
--
2.52.0.351.gbe84eed79e-goog
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v1 2/2] perf tool_pmu: Add network events
2026-01-04 1:17 [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
2026-01-04 1:17 ` [PATCH v1 1/2] perf tool_pmu: Add memory events Ian Rogers
@ 2026-01-04 1:17 ` Ian Rogers
2026-01-04 1:21 ` [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
` (2 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Ian Rogers @ 2026-01-04 1:17 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
Add tool PMU events to report network statistics from /proc/net/dev.
The events can be read system-wide (from /proc/net/dev) or per-process
(from /proc/pid/net/dev).
The following events are added for both RX (receive) and TX (transmit):
- bytes
- packets
- errors
- drop
- fifo
- frame (RX only) / colls (TX only)
- compressed
- multicast (RX only) / carrier (TX only)
Update tool.json with the new events and descriptions. Update the
tool_pmu implementation to parse the /proc/net/dev format.
Below are examples of system-wide and per-process gathering respectively:
```
$ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -I 1000
1.001154872 0 net_rx_bytes
1.001154872 0 net_rx_compressed
1.001154872 444,577 net_rx_drop
1.001154872 4,824,888 net_rx_errors
1.001154872 0 net_rx_fifo
1.001154872 0 net_rx_frame
1.001154872 0 net_rx_multicast
1.001154872 4,408,889,452 net_rx_packets
1.001154872 0 net_tx_bytes
1.001154872 0 net_tx_carrier
1.001154872 0 net_tx_colls
1.001154872 0 net_tx_compressed
1.001154872 0 net_tx_drop
1.001154872 0 net_tx_errors
1.001154872 0 net_tx_fifo
1.001154872 0 net_tx_packets
$ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
1.001023475 0 net_rx_bytes
1.001023475 0 net_rx_compressed
1.001023475 42,647,328 net_rx_drop
1.001023475 463,069,152 net_rx_errors
1.001023475 0 net_rx_fifo
1.001023475 0 net_rx_frame
1.001023475 0 net_rx_multicast
1.001023475 423,195,831,744 net_rx_packets
1.001023475 0 net_tx_bytes
1.001023475 0 net_tx_carrier
1.001023475 0 net_tx_colls
1.001023475 0 net_tx_compressed
1.001023475 0 net_tx_drop
1.001023475 0 net_tx_errors
1.001023475 0 net_tx_fifo
1.001023475 0 net_tx_packets
...
```
Signed-off-by: Ian Rogers <irogers@google.com>
---
.../pmu-events/arch/common/common/tool.json | 98 ++++++-
tools/perf/pmu-events/empty-pmu-events.c | 256 ++++++++++--------
tools/perf/util/tool_pmu.c | 155 ++++++++++-
tools/perf/util/tool_pmu.h | 16 ++
4 files changed, 404 insertions(+), 121 deletions(-)
diff --git a/tools/perf/pmu-events/arch/common/common/tool.json b/tools/perf/pmu-events/arch/common/common/tool.json
index 4b3fce655f8a..ebd3c5a6d15d 100644
--- a/tools/perf/pmu-events/arch/common/common/tool.json
+++ b/tools/perf/pmu-events/arch/common/common/tool.json
@@ -250,5 +250,101 @@
"EventName": "memory_uss",
"BriefDescription": "Unique Set Size (USS) in bytes",
"ConfigCode": "42"
- }
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_bytes",
+ "BriefDescription": "Network received bytes",
+ "ConfigCode": "43"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_packets",
+ "BriefDescription": "Network received packets",
+ "ConfigCode": "44"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_errors",
+ "BriefDescription": "Network received errors",
+ "ConfigCode": "45"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_drop",
+ "BriefDescription": "Network received dropped packets",
+ "ConfigCode": "46"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_fifo",
+ "BriefDescription": "Network received fifo overruns",
+ "ConfigCode": "47"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_frame",
+ "BriefDescription": "Network received framing errors",
+ "ConfigCode": "48"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_compressed",
+ "BriefDescription": "Network received compressed packets",
+ "ConfigCode": "49"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_rx_multicast",
+ "BriefDescription": "Network received multicast packets",
+ "ConfigCode": "50"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_bytes",
+ "BriefDescription": "Network transmitted bytes",
+ "ConfigCode": "51"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_packets",
+ "BriefDescription": "Network transmitted packets",
+ "ConfigCode": "52"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_errors",
+ "BriefDescription": "Network transmitted errors",
+ "ConfigCode": "53"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_drop",
+ "BriefDescription": "Network transmitted dropped packets",
+ "ConfigCode": "54"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_fifo",
+ "BriefDescription": "Network transmitted fifo overruns",
+ "ConfigCode": "55"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_colls",
+ "BriefDescription": "Network transmitted collisions",
+ "ConfigCode": "56"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_carrier",
+ "BriefDescription": "Network transmitted carrier losses",
+ "ConfigCode": "57"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "net_tx_compressed",
+ "BriefDescription": "Network transmitted compressed packets",
+ "ConfigCode": "58"
+ }
]
diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
index 4b7c534f1801..0debd0003dbc 100644
--- a/tools/perf/pmu-events/empty-pmu-events.c
+++ b/tools/perf/pmu-events/empty-pmu-events.c
@@ -1309,62 +1309,78 @@ static const char *const big_c_string =
/* offset=128860 */ "memory_swap_pss\000tool\000Proportional Share Size (PSS) for swap memory in bytes\000config=0x28\000\00000\000\000\000\000\000"
/* offset=128956 */ "memory_text\000tool\000Memory dedicated to code (text segment) in bytes\000config=0x29\000\00000\000\000\000\000\000"
/* offset=129042 */ "memory_uss\000tool\000Unique Set Size (USS) in bytes\000config=0x2a\000\00000\000\000\000\000\000"
-/* offset=129109 */ "bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000"
-/* offset=129171 */ "bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000"
-/* offset=129233 */ "l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000"
-/* offset=129331 */ "segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000"
-/* offset=129433 */ "dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000"
-/* offset=129566 */ "eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000"
-/* offset=129684 */ "hisi_sccl,ddrc\000"
-/* offset=129699 */ "uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000"
-/* offset=129769 */ "uncore_cbox\000"
-/* offset=129781 */ "unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000"
-/* offset=129935 */ "event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000"
-/* offset=129989 */ "event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000"
-/* offset=130047 */ "hisi_sccl,l3c\000"
-/* offset=130061 */ "uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000"
-/* offset=130129 */ "uncore_imc_free_running\000"
-/* offset=130153 */ "uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000"
-/* offset=130233 */ "uncore_imc\000"
-/* offset=130244 */ "uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000"
-/* offset=130309 */ "uncore_sys_ddr_pmu\000"
-/* offset=130328 */ "sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000"
-/* offset=130404 */ "uncore_sys_ccn_pmu\000"
-/* offset=130423 */ "sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000"
-/* offset=130500 */ "uncore_sys_cmn_pmu\000"
-/* offset=130519 */ "sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000"
-/* offset=130662 */ "CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011"
-/* offset=130848 */ "cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011"
-/* offset=131081 */ "migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011"
-/* offset=131341 */ "page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011"
-/* offset=131572 */ "insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001"
-/* offset=131685 */ "stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001"
-/* offset=131849 */ "frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001"
-/* offset=131979 */ "backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001"
-/* offset=132105 */ "cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011"
-/* offset=132281 */ "branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011"
-/* offset=132461 */ "branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001"
-/* offset=132565 */ "l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001"
-/* offset=132681 */ "llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001"
-/* offset=132782 */ "l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001"
-/* offset=132897 */ "dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001"
-/* offset=133003 */ "itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001"
-/* offset=133109 */ "l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001"
-/* offset=133257 */ "CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000"
-/* offset=133280 */ "IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000"
-/* offset=133344 */ "Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000"
-/* offset=133511 */ "dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
-/* offset=133576 */ "icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
-/* offset=133644 */ "cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000"
-/* offset=133716 */ "DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000"
-/* offset=133811 */ "DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000"
-/* offset=133946 */ "DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000"
-/* offset=134011 */ "DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000"
-/* offset=134080 */ "DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000"
-/* offset=134151 */ "M1\000\000ipc + M2\000\000\000\000\000\000\000\000000"
-/* offset=134174 */ "M2\000\000ipc + M1\000\000\000\000\000\000\000\000000"
-/* offset=134197 */ "M3\000\0001 / M3\000\000\000\000\000\000\000\000000"
-/* offset=134218 */ "L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000"
+/* offset=129109 */ "net_rx_bytes\000tool\000Network received bytes\000config=0x2b\000\00000\000\000\000\000\000"
+/* offset=129170 */ "net_rx_packets\000tool\000Network received packets\000config=0x2c\000\00000\000\000\000\000\000"
+/* offset=129235 */ "net_rx_errors\000tool\000Network received errors\000config=0x2d\000\00000\000\000\000\000\000"
+/* offset=129298 */ "net_rx_drop\000tool\000Network received dropped packets\000config=0x2e\000\00000\000\000\000\000\000"
+/* offset=129368 */ "net_rx_fifo\000tool\000Network received fifo overruns\000config=0x2f\000\00000\000\000\000\000\000"
+/* offset=129436 */ "net_rx_frame\000tool\000Network received framing errors\000config=0x30\000\00000\000\000\000\000\000"
+/* offset=129506 */ "net_rx_compressed\000tool\000Network received compressed packets\000config=0x31\000\00000\000\000\000\000\000"
+/* offset=129585 */ "net_rx_multicast\000tool\000Network received multicast packets\000config=0x32\000\00000\000\000\000\000\000"
+/* offset=129662 */ "net_tx_bytes\000tool\000Network transmitted bytes\000config=0x33\000\00000\000\000\000\000\000"
+/* offset=129726 */ "net_tx_packets\000tool\000Network transmitted packets\000config=0x34\000\00000\000\000\000\000\000"
+/* offset=129794 */ "net_tx_errors\000tool\000Network transmitted errors\000config=0x35\000\00000\000\000\000\000\000"
+/* offset=129860 */ "net_tx_drop\000tool\000Network transmitted dropped packets\000config=0x36\000\00000\000\000\000\000\000"
+/* offset=129933 */ "net_tx_fifo\000tool\000Network transmitted fifo overruns\000config=0x37\000\00000\000\000\000\000\000"
+/* offset=130004 */ "net_tx_colls\000tool\000Network transmitted collisions\000config=0x38\000\00000\000\000\000\000\000"
+/* offset=130073 */ "net_tx_carrier\000tool\000Network transmitted carrier losses\000config=0x39\000\00000\000\000\000\000\000"
+/* offset=130148 */ "net_tx_compressed\000tool\000Network transmitted compressed packets\000config=0x3a\000\00000\000\000\000\000\000"
+/* offset=130230 */ "bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000"
+/* offset=130292 */ "bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000"
+/* offset=130354 */ "l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000"
+/* offset=130452 */ "segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000"
+/* offset=130554 */ "dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000"
+/* offset=130687 */ "eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000"
+/* offset=130805 */ "hisi_sccl,ddrc\000"
+/* offset=130820 */ "uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000"
+/* offset=130890 */ "uncore_cbox\000"
+/* offset=130902 */ "unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000"
+/* offset=131056 */ "event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000"
+/* offset=131110 */ "event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000"
+/* offset=131168 */ "hisi_sccl,l3c\000"
+/* offset=131182 */ "uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000"
+/* offset=131250 */ "uncore_imc_free_running\000"
+/* offset=131274 */ "uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000"
+/* offset=131354 */ "uncore_imc\000"
+/* offset=131365 */ "uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000"
+/* offset=131430 */ "uncore_sys_ddr_pmu\000"
+/* offset=131449 */ "sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000"
+/* offset=131525 */ "uncore_sys_ccn_pmu\000"
+/* offset=131544 */ "sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000"
+/* offset=131621 */ "uncore_sys_cmn_pmu\000"
+/* offset=131640 */ "sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000"
+/* offset=131783 */ "CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011"
+/* offset=131969 */ "cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011"
+/* offset=132202 */ "migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011"
+/* offset=132462 */ "page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011"
+/* offset=132693 */ "insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001"
+/* offset=132806 */ "stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001"
+/* offset=132970 */ "frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001"
+/* offset=133100 */ "backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001"
+/* offset=133226 */ "cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011"
+/* offset=133402 */ "branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011"
+/* offset=133582 */ "branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001"
+/* offset=133686 */ "l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001"
+/* offset=133802 */ "llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001"
+/* offset=133903 */ "l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001"
+/* offset=134018 */ "dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001"
+/* offset=134124 */ "itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001"
+/* offset=134230 */ "l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001"
+/* offset=134378 */ "CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000"
+/* offset=134401 */ "IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000"
+/* offset=134465 */ "Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000"
+/* offset=134632 */ "dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
+/* offset=134697 */ "icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000"
+/* offset=134765 */ "cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000"
+/* offset=134837 */ "DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000"
+/* offset=134932 */ "DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000"
+/* offset=135067 */ "DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000"
+/* offset=135132 */ "DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000"
+/* offset=135201 */ "DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000"
+/* offset=135272 */ "M1\000\000ipc + M2\000\000\000\000\000\000\000\000000"
+/* offset=135295 */ "M2\000\000ipc + M1\000\000\000\000\000\000\000\000000"
+/* offset=135318 */ "M3\000\0001 / M3\000\000\000\000\000\000\000\000000"
+/* offset=135339 */ "L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000"
;
static const struct compact_pmu_event pmu_events__common_default_core[] = {
@@ -2648,6 +2664,22 @@ static const struct compact_pmu_event pmu_events__common_tool[] = {
{ 128860 }, /* memory_swap_pss\000tool\000Proportional Share Size (PSS) for swap memory in bytes\000config=0x28\000\00000\000\000\000\000\000 */
{ 128956 }, /* memory_text\000tool\000Memory dedicated to code (text segment) in bytes\000config=0x29\000\00000\000\000\000\000\000 */
{ 129042 }, /* memory_uss\000tool\000Unique Set Size (USS) in bytes\000config=0x2a\000\00000\000\000\000\000\000 */
+{ 129109 }, /* net_rx_bytes\000tool\000Network received bytes\000config=0x2b\000\00000\000\000\000\000\000 */
+{ 129506 }, /* net_rx_compressed\000tool\000Network received compressed packets\000config=0x31\000\00000\000\000\000\000\000 */
+{ 129298 }, /* net_rx_drop\000tool\000Network received dropped packets\000config=0x2e\000\00000\000\000\000\000\000 */
+{ 129235 }, /* net_rx_errors\000tool\000Network received errors\000config=0x2d\000\00000\000\000\000\000\000 */
+{ 129368 }, /* net_rx_fifo\000tool\000Network received fifo overruns\000config=0x2f\000\00000\000\000\000\000\000 */
+{ 129436 }, /* net_rx_frame\000tool\000Network received framing errors\000config=0x30\000\00000\000\000\000\000\000 */
+{ 129585 }, /* net_rx_multicast\000tool\000Network received multicast packets\000config=0x32\000\00000\000\000\000\000\000 */
+{ 129170 }, /* net_rx_packets\000tool\000Network received packets\000config=0x2c\000\00000\000\000\000\000\000 */
+{ 129662 }, /* net_tx_bytes\000tool\000Network transmitted bytes\000config=0x33\000\00000\000\000\000\000\000 */
+{ 130073 }, /* net_tx_carrier\000tool\000Network transmitted carrier losses\000config=0x39\000\00000\000\000\000\000\000 */
+{ 130004 }, /* net_tx_colls\000tool\000Network transmitted collisions\000config=0x38\000\00000\000\000\000\000\000 */
+{ 130148 }, /* net_tx_compressed\000tool\000Network transmitted compressed packets\000config=0x3a\000\00000\000\000\000\000\000 */
+{ 129860 }, /* net_tx_drop\000tool\000Network transmitted dropped packets\000config=0x36\000\00000\000\000\000\000\000 */
+{ 129794 }, /* net_tx_errors\000tool\000Network transmitted errors\000config=0x35\000\00000\000\000\000\000\000 */
+{ 129933 }, /* net_tx_fifo\000tool\000Network transmitted fifo overruns\000config=0x37\000\00000\000\000\000\000\000 */
+{ 129726 }, /* net_tx_packets\000tool\000Network transmitted packets\000config=0x34\000\00000\000\000\000\000\000 */
{ 125362 }, /* num_cores\000tool\000Number of cores. A core consists of 1 or more thread, with each thread being associated with a logical Linux CPU\000config=5\000\00000\000\000\000\000\000 */
{ 125507 }, /* num_cpus\000tool\000Number of logical Linux CPUs. There may be multiple such CPUs on a core\000config=6\000\00000\000\000\000\000\000 */
{ 125610 }, /* num_cpus_online\000tool\000Number of online logical Linux CPUs. There may be multiple such CPUs on a core\000config=7\000\00000\000\000\000\000\000 */
@@ -2681,23 +2713,23 @@ static const struct pmu_table_entry pmu_events__common[] = {
};
static const struct compact_pmu_event pmu_metrics__common_default_core[] = {
-{ 130662 }, /* CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011 */
-{ 131979 }, /* backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001 */
-{ 132281 }, /* branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011 */
-{ 132461 }, /* branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001 */
-{ 130848 }, /* cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011 */
-{ 132105 }, /* cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011 */
-{ 132897 }, /* dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001 */
-{ 131849 }, /* frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001 */
-{ 131572 }, /* insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001 */
-{ 133003 }, /* itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001 */
-{ 133109 }, /* l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001 */
-{ 132565 }, /* l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001 */
-{ 132782 }, /* l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001 */
-{ 132681 }, /* llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001 */
-{ 131081 }, /* migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011 */
-{ 131341 }, /* page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011 */
-{ 131685 }, /* stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001 */
+{ 131783 }, /* CPUs_utilized\000Default\000(software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@) / (duration_time * 1e9)\000\000Average CPU utilization\000\0001CPUs\000\000\000\000011 */
+{ 133100 }, /* backend_cycles_idle\000Default\000stalled\\-cycles\\-backend / cpu\\-cycles\000backend_cycles_idle > 0.2\000Backend stalls per cycle\000\000\000\000\000\000001 */
+{ 133402 }, /* branch_frequency\000Default\000branches / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Branches per CPU second\000\0001000M/sec\000\000\000\000011 */
+{ 133582 }, /* branch_miss_rate\000Default\000branch\\-misses / branches\000branch_miss_rate > 0.05\000Branch miss rate\000\000100%\000\000\000\000001 */
+{ 131969 }, /* cs_per_second\000Default\000software@context\\-switches\\,name\\=context\\-switches@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Context switches per CPU second\000\0001cs/sec\000\000\000\000011 */
+{ 133226 }, /* cycles_frequency\000Default\000cpu\\-cycles / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Cycles per CPU second\000\0001GHz\000\000\000\000011 */
+{ 134018 }, /* dtlb_miss_rate\000Default3\000dTLB\\-load\\-misses / dTLB\\-loads\000dtlb_miss_rate > 0.05\000dTLB miss rate\000\000100%\000\000\000\000001 */
+{ 132970 }, /* frontend_cycles_idle\000Default\000stalled\\-cycles\\-frontend / cpu\\-cycles\000frontend_cycles_idle > 0.1\000Frontend stalls per cycle\000\000\000\000\000\000001 */
+{ 132693 }, /* insn_per_cycle\000Default\000instructions / cpu\\-cycles\000insn_per_cycle < 1\000Instructions Per Cycle\000\0001instructions\000\000\000\000001 */
+{ 134124 }, /* itlb_miss_rate\000Default3\000iTLB\\-load\\-misses / iTLB\\-loads\000itlb_miss_rate > 0.05\000iTLB miss rate\000\000100%\000\000\000\000001 */
+{ 134230 }, /* l1_prefetch_miss_rate\000Default4\000L1\\-dcache\\-prefetch\\-misses / L1\\-dcache\\-prefetches\000l1_prefetch_miss_rate > 0.05\000L1 prefetch miss rate\000\000100%\000\000\000\000001 */
+{ 133686 }, /* l1d_miss_rate\000Default2\000L1\\-dcache\\-load\\-misses / L1\\-dcache\\-loads\000l1d_miss_rate > 0.05\000L1D miss rate\000\000100%\000\000\000\000001 */
+{ 133903 }, /* l1i_miss_rate\000Default3\000L1\\-icache\\-load\\-misses / L1\\-icache\\-loads\000l1i_miss_rate > 0.05\000L1I miss rate\000\000100%\000\000\000\000001 */
+{ 133802 }, /* llc_miss_rate\000Default2\000LLC\\-load\\-misses / LLC\\-loads\000llc_miss_rate > 0.05\000LLC miss rate\000\000100%\000\000\000\000001 */
+{ 132202 }, /* migrations_per_second\000Default\000software@cpu\\-migrations\\,name\\=cpu\\-migrations@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Process migrations to a new CPU per CPU second\000\0001migrations/sec\000\000\000\000011 */
+{ 132462 }, /* page_faults_per_second\000Default\000software@page\\-faults\\,name\\=page\\-faults@ * 1e9 / (software@cpu\\-clock\\,name\\=cpu\\-clock@ if #target_cpu else software@task\\-clock\\,name\\=task\\-clock@)\000\000Page faults per CPU second\000\0001faults/sec\000\000\000\000011 */
+{ 132806 }, /* stalled_cycles_per_instruction\000Default\000max(stalled\\-cycles\\-frontend, stalled\\-cycles\\-backend) / instructions\000\000Max front or backend stalls per instruction\000\000\000\000\000\000001 */
};
@@ -2710,29 +2742,29 @@ static const struct pmu_table_entry pmu_metrics__common[] = {
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_default_core[] = {
-{ 129109 }, /* bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000 */
-{ 129171 }, /* bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000 */
-{ 129433 }, /* dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000 */
-{ 129566 }, /* eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000 */
-{ 129233 }, /* l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000 */
-{ 129331 }, /* segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000 */
+{ 130230 }, /* bp_l1_btb_correct\000branch\000L1 BTB Correction\000event=0x8a\000\00000\000\000\000\000\000 */
+{ 130292 }, /* bp_l2_btb_correct\000branch\000L2 BTB Correction\000event=0x8b\000\00000\000\000\000\000\000 */
+{ 130554 }, /* dispatch_blocked.any\000other\000Memory cluster signals to block micro-op dispatch for any reason\000event=9,period=200000,umask=0x20\000\00000\000\000\000\000\000 */
+{ 130687 }, /* eist_trans\000other\000Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions\000event=0x3a,period=200000\000\00000\000\000\000\000\000 */
+{ 130354 }, /* l3_cache_rd\000cache\000L3 cache access, read\000event=0x40\000\00000\000\000\000\000Attributable Level 3 cache access, read\000 */
+{ 130452 }, /* segment_reg_loads.any\000other\000Number of segment register loads\000event=6,period=200000,umask=0x80\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_hisi_sccl_ddrc[] = {
-{ 129699 }, /* uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000 */
+{ 130820 }, /* uncore_hisi_ddrc.flux_wcmd\000uncore\000DDRC write commands\000event=2\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_hisi_sccl_l3c[] = {
-{ 130061 }, /* uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000 */
+{ 131182 }, /* uncore_hisi_l3c.rd_hit_cpipe\000uncore\000Total read hits\000event=7\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_cbox[] = {
-{ 129935 }, /* event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000 */
-{ 129989 }, /* event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000 */
-{ 129781 }, /* unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000 */
+{ 131056 }, /* event-hyphen\000uncore\000UNC_CBO_HYPHEN\000event=0xe0\000\00000\000\000\000\000\000 */
+{ 131110 }, /* event-two-hyph\000uncore\000UNC_CBO_TWO_HYPH\000event=0xc0\000\00000\000\000\000\000\000 */
+{ 130902 }, /* unc_cbo_xsnp_response.miss_eviction\000uncore\000A cross-core snoop resulted from L3 Eviction which misses in some processor core\000event=0x22,umask=0x81\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_imc[] = {
-{ 130244 }, /* uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000 */
+{ 131365 }, /* uncore_imc.cache_hits\000uncore\000Total cache hits\000event=0x34\000\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_cpu_uncore_imc_free_running[] = {
-{ 130153 }, /* uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000 */
+{ 131274 }, /* uncore_imc_free_running.cache_miss\000uncore\000Total cache misses\000event=0x12\000\00000\000\000\000\000\000 */
};
@@ -2745,46 +2777,46 @@ static const struct pmu_table_entry pmu_events__test_soc_cpu[] = {
{
.entries = pmu_events__test_soc_cpu_hisi_sccl_ddrc,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_hisi_sccl_ddrc),
- .pmu_name = { 129684 /* hisi_sccl,ddrc\000 */ },
+ .pmu_name = { 130805 /* hisi_sccl,ddrc\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_hisi_sccl_l3c,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_hisi_sccl_l3c),
- .pmu_name = { 130047 /* hisi_sccl,l3c\000 */ },
+ .pmu_name = { 131168 /* hisi_sccl,l3c\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_cbox,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_cbox),
- .pmu_name = { 129769 /* uncore_cbox\000 */ },
+ .pmu_name = { 130890 /* uncore_cbox\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_imc,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_imc),
- .pmu_name = { 130233 /* uncore_imc\000 */ },
+ .pmu_name = { 131354 /* uncore_imc\000 */ },
},
{
.entries = pmu_events__test_soc_cpu_uncore_imc_free_running,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_cpu_uncore_imc_free_running),
- .pmu_name = { 130129 /* uncore_imc_free_running\000 */ },
+ .pmu_name = { 131250 /* uncore_imc_free_running\000 */ },
},
};
static const struct compact_pmu_event pmu_metrics__test_soc_cpu_default_core[] = {
-{ 133257 }, /* CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000 */
-{ 133946 }, /* DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000 */
-{ 133716 }, /* DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000 */
-{ 133811 }, /* DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000 */
-{ 134011 }, /* DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
-{ 134080 }, /* DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
-{ 133344 }, /* Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000 */
-{ 133280 }, /* IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000 */
-{ 134218 }, /* L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000 */
-{ 134151 }, /* M1\000\000ipc + M2\000\000\000\000\000\000\000\000000 */
-{ 134174 }, /* M2\000\000ipc + M1\000\000\000\000\000\000\000\000000 */
-{ 134197 }, /* M3\000\0001 / M3\000\000\000\000\000\000\000\000000 */
-{ 133644 }, /* cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000 */
-{ 133511 }, /* dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
-{ 133576 }, /* icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
+{ 134378 }, /* CPI\000\0001 / IPC\000\000\000\000\000\000\000\000000 */
+{ 135067 }, /* DCache_L2_All\000\000DCache_L2_All_Hits + DCache_L2_All_Miss\000\000\000\000\000\000\000\000000 */
+{ 134837 }, /* DCache_L2_All_Hits\000\000l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit\000\000\000\000\000\000\000\000000 */
+{ 134932 }, /* DCache_L2_All_Miss\000\000max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss\000\000\000\000\000\000\000\000000 */
+{ 135132 }, /* DCache_L2_Hits\000\000d_ratio(DCache_L2_All_Hits, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
+{ 135201 }, /* DCache_L2_Misses\000\000d_ratio(DCache_L2_All_Miss, DCache_L2_All)\000\000\000\000\000\000\000\000000 */
+{ 134465 }, /* Frontend_Bound_SMT\000\000idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))\000\000\000\000\000\000\000\000000 */
+{ 134401 }, /* IPC\000group1\000inst_retired.any / cpu_clk_unhalted.thread\000\000\000\000\000\000\000\000000 */
+{ 135339 }, /* L1D_Cache_Fill_BW\000\00064 * l1d.replacement / 1e9 / duration_time\000\000\000\000\000\000\000\000000 */
+{ 135272 }, /* M1\000\000ipc + M2\000\000\000\000\000\000\000\000000 */
+{ 135295 }, /* M2\000\000ipc + M1\000\000\000\000\000\000\000\000000 */
+{ 135318 }, /* M3\000\0001 / M3\000\000\000\000\000\000\000\000000 */
+{ 134765 }, /* cache_miss_cycles\000group1\000dcache_miss_cpi + icache_miss_cycles\000\000\000\000\000\000\000\000000 */
+{ 134632 }, /* dcache_miss_cpi\000\000l1d\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
+{ 134697 }, /* icache_miss_cycles\000\000l1i\\-loads\\-misses / inst_retired.any\000\000\000\000\000\000\000\000000 */
};
@@ -2797,13 +2829,13 @@ static const struct pmu_table_entry pmu_metrics__test_soc_cpu[] = {
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_ccn_pmu[] = {
-{ 130423 }, /* sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000 */
+{ 131544 }, /* sys_ccn_pmu.read_cycles\000uncore\000ccn read-cycles event\000config=0x2c\0000x01\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_cmn_pmu[] = {
-{ 130519 }, /* sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000 */
+{ 131640 }, /* sys_cmn_pmu.hnf_cache_miss\000uncore\000Counts total cache misses in first lookup result (high priority)\000eventid=1,type=5\000(434|436|43c|43a).*\00000\000\000\000\000\000 */
};
static const struct compact_pmu_event pmu_events__test_soc_sys_uncore_sys_ddr_pmu[] = {
-{ 130328 }, /* sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000 */
+{ 131449 }, /* sys_ddr_pmu.write_cycles\000uncore\000ddr write-cycles event\000event=0x2b\000v8\00000\000\000\000\000\000 */
};
@@ -2811,17 +2843,17 @@ static const struct pmu_table_entry pmu_events__test_soc_sys[] = {
{
.entries = pmu_events__test_soc_sys_uncore_sys_ccn_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_ccn_pmu),
- .pmu_name = { 130404 /* uncore_sys_ccn_pmu\000 */ },
+ .pmu_name = { 131525 /* uncore_sys_ccn_pmu\000 */ },
},
{
.entries = pmu_events__test_soc_sys_uncore_sys_cmn_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_cmn_pmu),
- .pmu_name = { 130500 /* uncore_sys_cmn_pmu\000 */ },
+ .pmu_name = { 131621 /* uncore_sys_cmn_pmu\000 */ },
},
{
.entries = pmu_events__test_soc_sys_uncore_sys_ddr_pmu,
.num_entries = ARRAY_SIZE(pmu_events__test_soc_sys_uncore_sys_ddr_pmu),
- .pmu_name = { 130309 /* uncore_sys_ddr_pmu\000 */ },
+ .pmu_name = { 131430 /* uncore_sys_ddr_pmu\000 */ },
},
};
diff --git a/tools/perf/util/tool_pmu.c b/tools/perf/util/tool_pmu.c
index 2d1f244264dd..52d8188efc5e 100644
--- a/tools/perf/util/tool_pmu.c
+++ b/tools/perf/util/tool_pmu.c
@@ -66,6 +66,22 @@ static const char *const tool_pmu__event_names[TOOL_PMU__EVENT_MAX] = {
"memory_swap_pss",
"memory_text",
"memory_uss",
+ "net_rx_bytes",
+ "net_rx_packets",
+ "net_rx_errors",
+ "net_rx_drop",
+ "net_rx_fifo",
+ "net_rx_frame",
+ "net_rx_compressed",
+ "net_rx_multicast",
+ "net_tx_bytes",
+ "net_tx_packets",
+ "net_tx_errors",
+ "net_tx_drop",
+ "net_tx_fifo",
+ "net_tx_colls",
+ "net_tx_carrier",
+ "net_tx_compressed",
};
bool tool_pmu__skip_event(const char *name __maybe_unused)
@@ -297,6 +313,22 @@ static const char *tool_pmu__memory_event_to_key(enum tool_pmu_event ev)
case TOOL_PMU__EVENT_MEMORY_SIZE:
case TOOL_PMU__EVENT_MEMORY_TEXT:
case TOOL_PMU__EVENT_MEMORY_USS:
+ case TOOL_PMU__EVENT_NET_RX_BYTES:
+ case TOOL_PMU__EVENT_NET_RX_PACKETS:
+ case TOOL_PMU__EVENT_NET_RX_ERRORS:
+ case TOOL_PMU__EVENT_NET_RX_DROP:
+ case TOOL_PMU__EVENT_NET_RX_FIFO:
+ case TOOL_PMU__EVENT_NET_RX_FRAME:
+ case TOOL_PMU__EVENT_NET_RX_COMPRESSED:
+ case TOOL_PMU__EVENT_NET_RX_MULTICAST:
+ case TOOL_PMU__EVENT_NET_TX_BYTES:
+ case TOOL_PMU__EVENT_NET_TX_PACKETS:
+ case TOOL_PMU__EVENT_NET_TX_ERRORS:
+ case TOOL_PMU__EVENT_NET_TX_DROP:
+ case TOOL_PMU__EVENT_NET_TX_FIFO:
+ case TOOL_PMU__EVENT_NET_TX_COLLS:
+ case TOOL_PMU__EVENT_NET_TX_CARRIER:
+ case TOOL_PMU__EVENT_NET_TX_COMPRESSED:
case TOOL_PMU__EVENT_DURATION_TIME:
case TOOL_PMU__EVENT_USER_TIME:
case TOOL_PMU__EVENT_SYSTEM_TIME:
@@ -435,6 +467,68 @@ static int read_statm(int fd, enum tool_pmu_event ev, u64 *val)
return -EINVAL;
}
+static bool tool_pmu__is_net_event(enum tool_pmu_event ev)
+{
+ return ev >= TOOL_PMU__EVENT_NET_RX_BYTES &&
+ ev <= TOOL_PMU__EVENT_NET_TX_COMPRESSED;
+}
+
+static int read_net_dev(int fd, enum tool_pmu_event ev, u64 *val)
+{
+ struct io io;
+ char buf[4096];
+ int i;
+ int index = ev - TOOL_PMU__EVENT_NET_RX_BYTES;
+
+ io__init(&io, fd, buf, sizeof(buf));
+ lseek(fd, 0, SEEK_SET);
+
+ /*
+ * Drop first two lines of:
+ * Inter-| Receive | Transmit
+ * face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
+ */
+ if (!read_until_char(&io, '\n'))
+ return -EINVAL;
+ if (!read_until_char(&io, '\n'))
+ return -EINVAL;
+
+ *val = 0;
+ while (true) {
+ int ch = io__get_char(&io);
+ __u64 read_val;
+
+ /* First read interface name, such as " lo:" */
+ if (ch == -1)
+ break;
+ while (ch == ' ')
+ ch = io__get_char(&io);
+ if (ch == -1)
+ break;
+ while (ch != ':' && ch != -1 && ch != '\n')
+ ch = io__get_char(&io);
+ if (ch != ':') {
+ if (ch == '\n')
+ continue;
+ if (ch == -1)
+ return 0; /* Assume EOF. */
+ read_until_char(&io, '\n');
+ continue;
+ }
+ /* Ignore columns before the one being read. */
+ for (i = 0; i < index; i++) {
+ if (io__get_dec(&io, &read_val) == -1)
+ return 0; /* Assume EOF. */
+ }
+ /* Read the actual value. */
+ if (io__get_dec(&io, &read_val) != -1)
+ *val += read_val;
+ /* Move to the next line. */
+ read_until_char(&io, '\n');
+ }
+ return 0;
+}
+
int evsel__tool_pmu_prepare_open(struct evsel *evsel,
struct perf_cpu_map *cpus,
int nthreads)
@@ -517,26 +611,36 @@ int evsel__tool_pmu_open(struct evsel *evsel,
}
if (err)
goto out_close;
- } else if (tool_pmu__is_memory_event(ev)) {
+ } else if (tool_pmu__is_memory_event(ev) ||
+ tool_pmu__is_net_event(ev)) {
+ char buf[PATH_MAX];
int fd = -1;
if (pid > -1) {
- char buf[PATH_MAX];
-
if (tool_pmu__is_memory_statm_event(ev)) {
snprintf(buf, sizeof(buf), "%s/%d/statm",
procfs__mountpoint(), pid);
+ } else if (tool_pmu__is_net_event(ev)) {
+ snprintf(buf, sizeof(buf), "%s/%d/net/dev",
+ procfs__mountpoint(), pid);
} else {
snprintf(buf, sizeof(buf), "%s/%d/smaps_rollup",
procfs__mountpoint(), pid);
}
fd = open(buf, O_RDONLY);
}
+ if (pid == -1 && tool_pmu__is_net_event(ev)) {
+ /* Read /proc/net/dev that already aggregates the counts. */
+ snprintf(buf, sizeof(buf), "%s/net/dev",
+ procfs__mountpoint());
+ fd = open(buf, O_RDONLY);
+ }
/*
- * For system-wide (pid == -1), we don't open a file here.
- * We will aggregate in read().
+ * For memory event system-wide (pid == -1), we
+ * don't open a file here. We will aggregate in
+ * read().
*/
- if (pid > -1 && fd < 0) {
+ if ((pid > -1 || tool_pmu__is_net_event(ev)) && fd < 0) {
err = -errno;
goto out_close;
}
@@ -723,6 +827,22 @@ bool tool_pmu__read_event(enum tool_pmu_event ev,
case TOOL_PMU__EVENT_MEMORY_LOCKED:
case TOOL_PMU__EVENT_MEMORY_DATA:
case TOOL_PMU__EVENT_MEMORY_TEXT:
+ case TOOL_PMU__EVENT_NET_RX_BYTES:
+ case TOOL_PMU__EVENT_NET_RX_PACKETS:
+ case TOOL_PMU__EVENT_NET_RX_ERRORS:
+ case TOOL_PMU__EVENT_NET_RX_DROP:
+ case TOOL_PMU__EVENT_NET_RX_FIFO:
+ case TOOL_PMU__EVENT_NET_RX_FRAME:
+ case TOOL_PMU__EVENT_NET_RX_COMPRESSED:
+ case TOOL_PMU__EVENT_NET_RX_MULTICAST:
+ case TOOL_PMU__EVENT_NET_TX_BYTES:
+ case TOOL_PMU__EVENT_NET_TX_PACKETS:
+ case TOOL_PMU__EVENT_NET_TX_ERRORS:
+ case TOOL_PMU__EVENT_NET_TX_DROP:
+ case TOOL_PMU__EVENT_NET_TX_FIFO:
+ case TOOL_PMU__EVENT_NET_TX_COLLS:
+ case TOOL_PMU__EVENT_NET_TX_CARRIER:
+ case TOOL_PMU__EVENT_NET_TX_COMPRESSED:
case TOOL_PMU__EVENT_NONE:
case TOOL_PMU__EVENT_DURATION_TIME:
case TOOL_PMU__EVENT_USER_TIME:
@@ -905,16 +1025,34 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
case TOOL_PMU__EVENT_MEMORY_PRIVATE_HUGETLB:
case TOOL_PMU__EVENT_MEMORY_LOCKED:
case TOOL_PMU__EVENT_MEMORY_DATA:
- case TOOL_PMU__EVENT_MEMORY_TEXT: {
+ case TOOL_PMU__EVENT_MEMORY_TEXT:
+ case TOOL_PMU__EVENT_NET_RX_BYTES:
+ case TOOL_PMU__EVENT_NET_RX_PACKETS:
+ case TOOL_PMU__EVENT_NET_RX_ERRORS:
+ case TOOL_PMU__EVENT_NET_RX_DROP:
+ case TOOL_PMU__EVENT_NET_RX_FIFO:
+ case TOOL_PMU__EVENT_NET_RX_FRAME:
+ case TOOL_PMU__EVENT_NET_RX_COMPRESSED:
+ case TOOL_PMU__EVENT_NET_RX_MULTICAST:
+ case TOOL_PMU__EVENT_NET_TX_BYTES:
+ case TOOL_PMU__EVENT_NET_TX_PACKETS:
+ case TOOL_PMU__EVENT_NET_TX_ERRORS:
+ case TOOL_PMU__EVENT_NET_TX_DROP:
+ case TOOL_PMU__EVENT_NET_TX_FIFO:
+ case TOOL_PMU__EVENT_NET_TX_COLLS:
+ case TOOL_PMU__EVENT_NET_TX_CARRIER:
+ case TOOL_PMU__EVENT_NET_TX_COMPRESSED: {
int fd = FD(evsel, cpu_map_idx, thread);
u64 val = 0;
if (fd >= 0) {
- /* Per-process */
+ /* Per-process or system-wide net. */
int ret;
if (tool_pmu__is_memory_statm_event(ev))
ret = read_statm(fd, ev, &val);
+ else if (tool_pmu__is_net_event(ev))
+ ret = read_net_dev(fd, ev, &val);
else
ret = read_smaps_rollup(fd, ev, &val);
@@ -923,6 +1061,7 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
} else {
/* System-wide aggregation */
if (cpu_map_idx == 0 && thread == 0) {
+ assert(tool_pmu__is_memory_event(ev));
tool_pmu__aggregate_memory_event(ev, &val);
}
}
diff --git a/tools/perf/util/tool_pmu.h b/tools/perf/util/tool_pmu.h
index bf6bb196ad75..be8ebd9aacfb 100644
--- a/tools/perf/util/tool_pmu.h
+++ b/tools/perf/util/tool_pmu.h
@@ -52,6 +52,22 @@ enum tool_pmu_event {
TOOL_PMU__EVENT_MEMORY_SWAP_PSS,
TOOL_PMU__EVENT_MEMORY_TEXT,
TOOL_PMU__EVENT_MEMORY_USS,
+ TOOL_PMU__EVENT_NET_RX_BYTES,
+ TOOL_PMU__EVENT_NET_RX_PACKETS,
+ TOOL_PMU__EVENT_NET_RX_ERRORS,
+ TOOL_PMU__EVENT_NET_RX_DROP,
+ TOOL_PMU__EVENT_NET_RX_FIFO,
+ TOOL_PMU__EVENT_NET_RX_FRAME,
+ TOOL_PMU__EVENT_NET_RX_COMPRESSED,
+ TOOL_PMU__EVENT_NET_RX_MULTICAST,
+ TOOL_PMU__EVENT_NET_TX_BYTES,
+ TOOL_PMU__EVENT_NET_TX_PACKETS,
+ TOOL_PMU__EVENT_NET_TX_ERRORS,
+ TOOL_PMU__EVENT_NET_TX_DROP,
+ TOOL_PMU__EVENT_NET_TX_FIFO,
+ TOOL_PMU__EVENT_NET_TX_COLLS,
+ TOOL_PMU__EVENT_NET_TX_CARRIER,
+ TOOL_PMU__EVENT_NET_TX_COMPRESSED,
TOOL_PMU__EVENT_MAX,
};
--
2.52.0.351.gbe84eed79e-goog
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-04 1:17 [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
2026-01-04 1:17 ` [PATCH v1 1/2] perf tool_pmu: Add memory events Ian Rogers
2026-01-04 1:17 ` [PATCH v1 2/2] perf tool_pmu: Add network events Ian Rogers
@ 2026-01-04 1:21 ` Ian Rogers
2026-01-07 8:08 ` Namhyung Kim
2026-01-12 16:50 ` Andi Kleen
4 siblings, 0 replies; 9+ messages in thread
From: Ian Rogers @ 2026-01-04 1:21 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Ian Rogers, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
On Sat, Jan 3, 2026 at 5:17 PM Ian Rogers <irogers@google.com> wrote:
>
> Add events for memory use and network activity based on data readily
> available in /proc/pid/statm, /proc/pid/smaps_rollup and
> /proc/pid/net/dev. For example, the network usage of chrome processes
> on a system may be gathered with:
> ```
> $ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
> 1.001023475 0 net_rx_bytes
> 1.001023475 0 net_rx_compressed
> 1.001023475 42,647,328 net_rx_drop
> 1.001023475 463,069,152 net_rx_errors
> 1.001023475 0 net_rx_fifo
> 1.001023475 0 net_rx_frame
> 1.001023475 0 net_rx_multicast
> 1.001023475 423,195,831,744 net_rx_packets
> 1.001023475 0 net_tx_bytes
> 1.001023475 0 net_tx_carrier
> 1.001023475 0 net_tx_colls
> 1.001023475 0 net_tx_compressed
> 1.001023475 0 net_tx_drop
> 1.001023475 0 net_tx_errors
> 1.001023475 0 net_tx_fifo
> 1.001023475 0 net_tx_packets
> ```
>
> As the events are in the tool_pmu they can be used in metrics. Through
> their json descriptions they are exposed in `perf list`, and the events
> can be seen in the python ilist application.
>
> Note, if a process terminates then the count reading returns an error
> and this can expose what appear to be latent bugs in the aggregation
> and display code.
I forgot to mention there are also other events that could be added.
For example, from /proc/self/status the signal information could be
useful, as could voluntary vs involuntary context switch counts. What
is here is likely useful enough that a bigger patch series isn't
warranted.
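For reference, a rough sketch of what pulling the context switch counts
out of /proc/self/status could look like (a standalone python sketch for
illustration, not the perf C code; the field names are as documented in
proc(5)):

```python
def read_ctxt_switches(path="/proc/self/status"):
    """Parse voluntary/nonvoluntary context switch counts from a
    /proc/<pid>/status style file. Each line has the form
    'key:<whitespace>value', e.g. 'voluntary_ctxt_switches:\t123'."""
    wanted = ("voluntary_ctxt_switches", "nonvoluntary_ctxt_switches")
    counts = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in wanted:
                # int() tolerates the surrounding whitespace/newline.
                counts[key] = int(value)
    return counts

if __name__ == "__main__":
    print(read_ctxt_switches())
```

A tool_pmu event would do the equivalent parsing in C against the
per-thread fd, much like read_statm() does for /proc/pid/statm.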
Thanks,
Ian
> Ian Rogers (2):
> perf tool_pmu: Add memory events
> perf tool_pmu: Add network events
>
> tools/perf/builtin-stat.c | 10 +-
> .../pmu-events/arch/common/common/tool.json | 266 ++++++++-
> tools/perf/pmu-events/empty-pmu-events.c | 312 +++++++----
> tools/perf/util/tool_pmu.c | 514 +++++++++++++++++-
> tools/perf/util/tool_pmu.h | 44 ++
> 5 files changed, 1026 insertions(+), 120 deletions(-)
>
> --
> 2.52.0.351.gbe84eed79e-goog
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-04 1:17 [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
` (2 preceding siblings ...)
2026-01-04 1:21 ` [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
@ 2026-01-07 8:08 ` Namhyung Kim
2026-01-07 19:03 ` Ian Rogers
2026-01-12 16:50 ` Andi Kleen
4 siblings, 1 reply; 9+ messages in thread
From: Namhyung Kim @ 2026-01-07 8:08 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Jiri Olsa,
Adrian Hunter, James Clark, Thomas Falcon, Thomas Richter,
linux-perf-users, linux-kernel
Hi Ian,
On Sat, Jan 03, 2026 at 05:17:36PM -0800, Ian Rogers wrote:
> Add events for memory use and network activity based on data readily
> available in /proc/pid/statm, /proc/pid/smaps_rollup and
> /proc/pid/net/dev. For example the network usage of chrome processes
> on a system may be gathered with:
> ```
> $ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
> 1.001023475 0 net_rx_bytes
> 1.001023475 0 net_rx_compressed
> 1.001023475 42,647,328 net_rx_drop
> 1.001023475 463,069,152 net_rx_errors
> 1.001023475 0 net_rx_fifo
> 1.001023475 0 net_rx_frame
> 1.001023475 0 net_rx_multicast
> 1.001023475 423,195,831,744 net_rx_packets
> 1.001023475 0 net_tx_bytes
> 1.001023475 0 net_tx_carrier
> 1.001023475 0 net_tx_colls
> 1.001023475 0 net_tx_compressed
> 1.001023475 0 net_tx_drop
> 1.001023475 0 net_tx_errors
> 1.001023475 0 net_tx_fifo
> 1.001023475 0 net_tx_packets
> ```
Interesting.
>
> As the events are in the tool_pmu they can be used in metrics. Thanks
> to the json descriptions they are exposed in `perf list`, and the
> events can be seen in the python ilist application.
>
> Note, if a process terminates then the count reading returns an error
> and this can expose what appear to be latent bugs in the aggregation
> and display code.
How do you handle system-wide mode and sampling (perf record)?
Thanks,
Namhyung
>
> Ian Rogers (2):
> perf tool_pmu: Add memory events
> perf tool_pmu: Add network events
>
> tools/perf/builtin-stat.c | 10 +-
> .../pmu-events/arch/common/common/tool.json | 266 ++++++++-
> tools/perf/pmu-events/empty-pmu-events.c | 312 +++++++----
> tools/perf/util/tool_pmu.c | 514 +++++++++++++++++-
> tools/perf/util/tool_pmu.h | 44 ++
> 5 files changed, 1026 insertions(+), 120 deletions(-)
>
> --
> 2.52.0.351.gbe84eed79e-goog
>
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-07 8:08 ` Namhyung Kim
@ 2026-01-07 19:03 ` Ian Rogers
0 siblings, 0 replies; 9+ messages in thread
From: Ian Rogers @ 2026-01-07 19:03 UTC (permalink / raw)
To: Namhyung Kim
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Jiri Olsa,
Adrian Hunter, James Clark, Thomas Falcon, Thomas Richter,
linux-perf-users, linux-kernel
On Wed, Jan 7, 2026 at 12:08 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hi Ian,
>
> On Sat, Jan 03, 2026 at 05:17:36PM -0800, Ian Rogers wrote:
> > Add events for memory use and network activity based on data readily
> > available in /proc/pid/statm, /proc/pid/smaps_rollup and
> > /proc/pid/net/dev. For example the network usage of chrome processes
> > on a system may be gathered with:
> > ```
> > $ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
> > 1.001023475 0 net_rx_bytes
> > 1.001023475 0 net_rx_compressed
> > 1.001023475 42,647,328 net_rx_drop
> > 1.001023475 463,069,152 net_rx_errors
> > 1.001023475 0 net_rx_fifo
> > 1.001023475 0 net_rx_frame
> > 1.001023475 0 net_rx_multicast
> > 1.001023475 423,195,831,744 net_rx_packets
> > 1.001023475 0 net_tx_bytes
> > 1.001023475 0 net_tx_carrier
> > 1.001023475 0 net_tx_colls
> > 1.001023475 0 net_tx_compressed
> > 1.001023475 0 net_tx_drop
> > 1.001023475 0 net_tx_errors
> > 1.001023475 0 net_tx_fifo
> > 1.001023475 0 net_tx_packets
> > ```
>
> Interesting.
Thanks.
> >
> > As the events are in the tool_pmu they can be used in metrics. Thanks
> > to the json descriptions they are exposed in `perf list`, and the
> > events can be seen in the python ilist application.
> >
> > Note, if a process terminates then the count reading returns an error
> > and this can expose what appear to be latent bugs in the aggregation
> > and display code.
>
> How do you handle system-wide mode and sampling (perf record)?
So tool events don't support `perf record` and fail at opening due to
the invalid PMU type and config. This is the same as if you did `perf
record -e duration_time` with perf today, which looks like:
```
$ perf record -e duration_time -a sleep 1
Error:
Failure to open event 'duration_time' on PMU 'tool' which will be removed.
No fallback found for 'duration_time' for error 0
Error:
Failure to open any events for recording.
```
For system-wide mode the behavior is hopefully intuitive in that the
memory and network counts are for the whole system rather than the
given processes. For the memory events the proc directory is scanned
and all processes' counts are aggregated. For network data
/proc/net/dev is read rather than /proc/pid/net/dev. There is more
detail on this in the individual commit messages.
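The scan-and-aggregate step for the memory events can be sketched
roughly like this (illustrative Python, not the actual C in
tool_pmu.c; column meaning per proc(5)):

```python
import os

def rss_bytes(statm_text, page_size):
    # /proc/<pid>/statm columns, all in pages:
    # size resident shared text lib data dt; RSS is the second column.
    return int(statm_text.split()[1]) * page_size

def system_wide_rss(page_size=4096):
    """Sum RSS over every numeric /proc entry, skipping racing exits."""
    total = 0
    for name in os.listdir("/proc"):
        if not name.isdigit():
            continue
        try:
            with open("/proc/" + name + "/statm") as fp:
                total += rss_bytes(fp.read(), page_size)
        except OSError:  # the process exited between listdir() and open()
            continue
    return total
```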
Thanks,
Ian
> Thanks,
> Namhyung
>
> >
> > Ian Rogers (2):
> > perf tool_pmu: Add memory events
> > perf tool_pmu: Add network events
> >
> > tools/perf/builtin-stat.c | 10 +-
> > .../pmu-events/arch/common/common/tool.json | 266 ++++++++-
> > tools/perf/pmu-events/empty-pmu-events.c | 312 +++++++----
> > tools/perf/util/tool_pmu.c | 514 +++++++++++++++++-
> > tools/perf/util/tool_pmu.h | 44 ++
> > 5 files changed, 1026 insertions(+), 120 deletions(-)
> >
> > --
> > 2.52.0.351.gbe84eed79e-goog
> >
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-04 1:17 [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
` (3 preceding siblings ...)
2026-01-07 8:08 ` Namhyung Kim
@ 2026-01-12 16:50 ` Andi Kleen
2026-01-12 18:08 ` Ian Rogers
4 siblings, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2026-01-12 16:50 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
Ian Rogers <irogers@google.com> writes:
> Add events for memory use and network activity based on data readily
> available in /proc/pid/statm, /proc/pid/smaps_rollup and
> /proc/pid/net/dev. For example the network usage of chrome processes
> on a system may be gathered with:
> ```
> $ perf stat -e
> net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets
> -p $(pidof -d, chrome) -I 1000
But AFAIK that's for the complete network namespace, not just the
process, thus highly misleading in a perf context because the scope
is incompatible.
-Andi
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-12 16:50 ` Andi Kleen
@ 2026-01-12 18:08 ` Ian Rogers
2026-01-15 5:00 ` Namhyung Kim
0 siblings, 1 reply; 9+ messages in thread
From: Ian Rogers @ 2026-01-12 18:08 UTC (permalink / raw)
To: Andi Kleen
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Jiri Olsa, Adrian Hunter, James Clark,
Thomas Falcon, Thomas Richter, linux-perf-users, linux-kernel
On Mon, Jan 12, 2026 at 8:51 AM Andi Kleen <ak@linux.intel.com> wrote:
>
> Ian Rogers <irogers@google.com> writes:
>
> > Add events for memory use and network activity based on data readily
> > available in /proc/pid/statm, /proc/pid/smaps_rollup and
> > /proc/pid/net/dev. For example the network usage of chrome processes
> > on a system may be gathered with:
> > ```
> > $ perf stat -e
> > net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets
> > -p $(pidof -d, chrome) -I 1000
>
> But AFAIK that's for the complete network name space, not just the
> process, thus highly misleading in perf context because the scope
> is incompatible.
Yeah, we can point this out in the event descriptions, or just not
have the events and try to do some per-process BPF type thing. Given
we don't have the BPF thing, it is still tempting to have these
counters as-is for the system-wide case.
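For reference, /proc/<pid>/net/dev and /proc/net/dev share one format,
and the per-pid file reflects the process's whole network namespace,
which is the scope concern above. A sketch of parsing that format
(the helper name is made up):

```python
def parse_net_dev(text):
    """Parse /proc/net/dev-style text into per-interface counters.

    Two header lines, then one line per interface: "iface:" followed
    by 8 rx fields (bytes packets errs drop fifo frame compressed
    multicast) and 8 tx fields (bytes packets errs drop fifo colls
    carrier compressed).
    """
    stats = {}
    for line in text.splitlines()[2:]:
        iface, sep, rest = line.partition(":")
        if not sep:
            continue
        vals = [int(v) for v in rest.split()]
        stats[iface.strip()] = {
            "rx_bytes": vals[0], "rx_packets": vals[1],
            "tx_bytes": vals[8], "tx_packets": vals[9],
        }
    return stats
```

Parsing /proc/<pid>/net/dev for two processes in the same namespace
yields identical counters, which is why a per-process interpretation
is misleading.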
Thanks,
Ian
* Re: [PATCH v1 0/2] Add procfs based memory and network tool events
2026-01-12 18:08 ` Ian Rogers
@ 2026-01-15 5:00 ` Namhyung Kim
0 siblings, 0 replies; 9+ messages in thread
From: Namhyung Kim @ 2026-01-15 5:00 UTC (permalink / raw)
To: Ian Rogers
Cc: Andi Kleen, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Jiri Olsa, Adrian Hunter, James Clark, Thomas Falcon,
Thomas Richter, linux-perf-users, linux-kernel
On Mon, Jan 12, 2026 at 10:08:20AM -0800, Ian Rogers wrote:
> On Mon, Jan 12, 2026 at 8:51 AM Andi Kleen <ak@linux.intel.com> wrote:
> >
> > Ian Rogers <irogers@google.com> writes:
> >
> > > Add events for memory use and network activity based on data readily
> > > available in /proc/pid/statm, /proc/pid/smaps_rollup and
> > > /proc/pid/net/dev. For example the network usage of chrome processes
> > > on a system may be gathered with:
> > > ```
> > > $ perf stat -e
> > > net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets
> > > -p $(pidof -d, chrome) -I 1000
> >
> > But AFAIK that's for the complete network name space, not just the
> > process, thus highly misleading in perf context because the scope
> > is incompatible.
>
> Yeah, we can point this out in the event descriptions or just not have
> the events and try to do some per process BPF type thing. Given we
> don't have the BPF thing it is still tempting to have these counters
> as-is for the system-wide case.
You may want to make it fail to open for per-process mode.
Thanks,
Namhyung
end of thread, other threads:[~2026-01-15 5:00 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-04 1:17 [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
2026-01-04 1:17 ` [PATCH v1 1/2] perf tool_pmu: Add memory events Ian Rogers
2026-01-04 1:17 ` [PATCH v1 2/2] perf tool_pmu: Add network events Ian Rogers
2026-01-04 1:21 ` [PATCH v1 0/2] Add procfs based memory and network tool events Ian Rogers
2026-01-07 8:08 ` Namhyung Kim
2026-01-07 19:03 ` Ian Rogers
2026-01-12 16:50 ` Andi Kleen
2026-01-12 18:08 ` Ian Rogers
2026-01-15 5:00 ` Namhyung Kim