From: Ian Rogers <irogers@google.com>
To: irogers@google.com, acme@kernel.org, adrian.hunter@intel.com,
james.clark@linaro.org, leo.yan@linux.dev, namhyung@kernel.org,
tmricht@linux.ibm.com
Cc: 9erthalion6@gmail.com, adityab1@linux.ibm.com,
alexandre.chartre@oracle.com, alice.mei.rogers@gmail.com,
ankur.a.arora@oracle.com, ashelat@redhat.com,
atrajeev@linux.ibm.com, blakejones@google.com,
changbin.du@huawei.com, chuck.lever@oracle.com,
collin.funk1@gmail.com, coresight@lists.linaro.org,
ctshao@google.com, dapeng1.mi@linux.intel.com,
derek.foreman@collabora.com, dsterba@suse.com,
gautam@linux.ibm.com, howardchu95@gmail.com,
john.g.garry@oracle.com, jolsa@kernel.org,
jonathan.cameron@huawei.com, justinstitt@google.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
mike.leach@arm.com, mingo@redhat.com, morbo@google.com,
nathan@kernel.org, nichen@iscas.ac.cn,
nick.desaulniers+lkml@gmail.com, pan.deng@intel.com,
peterz@infradead.org, ravi.bangoria@amd.com,
ricky.ringler@proton.me, stephen.s.brennan@oracle.com,
sun.jian.kdev@gmail.com, suzuki.poulose@arm.com,
swapnil.sapkal@amd.com, tanze@kylinos.cn, terrelln@fb.com,
thomas.falcon@intel.com, tianyou.li@intel.com, tycho@kernel.org,
wangyang.guo@intel.com, xiaqinxin@huawei.com,
yang.lee@linux.alibaba.com, yuzhuo@google.com,
zhiguo.zhou@intel.com, zli94@ncsu.edu
Subject: [PATCH v4 12/58] perf evlist: Add reference count checking
Date: Thu, 23 Apr 2026 09:33:20 -0700 [thread overview]
Message-ID: <20260423163406.1779809-13-irogers@google.com> (raw)
In-Reply-To: <20260423163406.1779809-1-irogers@google.com>
Now that the evlist is reference counted, add reference count checking
so that gets and puts are paired and easy to debug. Reference count
checking is documented here:
https://perfwiki.github.io/main/reference-count-checking/
This large patch adds accessors to evlist functions and switches
callers to use them. There was some minor renaming: evlist__mmap is now
an accessor for the mmap variable, and the original evlist__mmap is
renamed to evlist__do_mmap.
Signed-off-by: Ian Rogers <irogers@google.com>
---
v2:
1. Fixed Memory Leak in evlist__new: Added free(evlist) in the else
   branch if ADD_RC_CHK fails, preventing a leak of the allocated raw
   structure.
2. Fixed Potential NULL Dereference: Added a NULL check after
   from_list_start(_evlist) in perf_evlist__mmap_cb_get().
3. Fixed Use-After-Free Risk: In evlist__add(), I changed
   entry->evlist = evlist; to entry->evlist = evlist__get(evlist);.
   This ensures that the evsel holds a valid reference (wrapper) to
   the evlist, preventing it from becoming a dangling pointer if the
   original wrapper is freed.
4. Fixed Test Masking Bug: In test__perf_time__parse_for_ranges(), I
   replaced TEST_ASSERT_VAL with a manual check and "return false;" to
   avoid boolean evaluation of -1 inadvertently passing the test.
5. Fixed reference count checker memory leaks from missed puts and due
   to cyclic evsel to evlist references. A leak still exists in
   __perf_evlist__propagate_maps due to empty CPU maps and not
   deleting the removed evsel.
---
tools/perf/arch/arm/util/cs-etm.c | 10 +-
tools/perf/arch/arm64/util/arm-spe.c | 8 +-
tools/perf/arch/arm64/util/hisi-ptt.c | 2 +-
tools/perf/arch/x86/tests/hybrid.c | 20 +-
tools/perf/arch/x86/util/auxtrace.c | 2 +-
tools/perf/arch/x86/util/intel-bts.c | 6 +-
tools/perf/arch/x86/util/intel-pt.c | 9 +-
tools/perf/arch/x86/util/iostat.c | 6 +-
tools/perf/bench/evlist-open-close.c | 11 +-
tools/perf/builtin-annotate.c | 2 +-
tools/perf/builtin-ftrace.c | 6 +-
tools/perf/builtin-inject.c | 4 +-
tools/perf/builtin-kvm.c | 10 +-
tools/perf/builtin-kwork.c | 8 +-
tools/perf/builtin-record.c | 91 ++---
tools/perf/builtin-report.c | 6 +-
tools/perf/builtin-sched.c | 20 +-
tools/perf/builtin-script.c | 13 +-
tools/perf/builtin-stat.c | 71 ++--
tools/perf/builtin-top.c | 52 +--
tools/perf/builtin-trace.c | 22 +-
tools/perf/tests/backward-ring-buffer.c | 8 +-
tools/perf/tests/code-reading.c | 10 +-
tools/perf/tests/event-times.c | 2 +-
tools/perf/tests/event_update.c | 2 +-
tools/perf/tests/expand-cgroup.c | 4 +-
tools/perf/tests/hwmon_pmu.c | 5 +-
tools/perf/tests/keep-tracking.c | 8 +-
tools/perf/tests/mmap-basic.c | 6 +-
tools/perf/tests/openat-syscall-tp-fields.c | 8 +-
tools/perf/tests/parse-events.c | 135 +++----
tools/perf/tests/parse-metric.c | 4 +-
tools/perf/tests/perf-record.c | 20 +-
tools/perf/tests/perf-time-to-tsc.c | 10 +-
tools/perf/tests/pfm.c | 8 +-
tools/perf/tests/pmu-events.c | 5 +-
tools/perf/tests/sample-parsing.c | 38 +-
tools/perf/tests/sw-clock.c | 6 +-
tools/perf/tests/switch-tracking.c | 8 +-
tools/perf/tests/task-exit.c | 6 +-
tools/perf/tests/time-utils-test.c | 14 +-
tools/perf/tests/tool_pmu.c | 5 +-
tools/perf/tests/topology.c | 2 +-
tools/perf/ui/browsers/annotate.c | 2 +-
tools/perf/ui/browsers/hists.c | 22 +-
tools/perf/util/amd-sample-raw.c | 2 +-
tools/perf/util/annotate-data.c | 2 +-
tools/perf/util/annotate.c | 10 +-
tools/perf/util/auxtrace.c | 14 +-
tools/perf/util/block-info.c | 4 +-
tools/perf/util/bpf_counter.c | 2 +-
tools/perf/util/bpf_counter_cgroup.c | 8 +-
tools/perf/util/bpf_ftrace.c | 9 +-
tools/perf/util/bpf_lock_contention.c | 12 +-
tools/perf/util/bpf_off_cpu.c | 14 +-
tools/perf/util/cgroup.c | 20 +-
tools/perf/util/evlist.c | 386 ++++++++++++--------
tools/perf/util/evlist.h | 251 ++++++++++++-
tools/perf/util/evsel.c | 6 +-
tools/perf/util/evsel.h | 4 +-
| 39 +-
| 2 +-
tools/perf/util/intel-tpebs.c | 7 +-
tools/perf/util/metricgroup.c | 6 +-
tools/perf/util/parse-events.c | 6 +-
tools/perf/util/pfm.c | 2 +-
tools/perf/util/python.c | 30 +-
tools/perf/util/record.c | 9 +-
tools/perf/util/sample-raw.c | 4 +-
tools/perf/util/session.c | 56 +--
tools/perf/util/sideband_evlist.c | 24 +-
tools/perf/util/sort.c | 2 +-
tools/perf/util/stat-display.c | 6 +-
tools/perf/util/stat-shadow.c | 4 +-
tools/perf/util/stat.c | 4 +-
tools/perf/util/stream.c | 4 +-
tools/perf/util/synthetic-events.c | 11 +-
tools/perf/util/time-utils.c | 12 +-
tools/perf/util/top.c | 4 +-
79 files changed, 1004 insertions(+), 689 deletions(-)
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index cdf8e3e60606..d2861d66a661 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -201,7 +201,7 @@ static int cs_etm_validate_config(struct perf_pmu *cs_etm_pmu,
{
unsigned int idx;
int err = 0;
- struct perf_cpu_map *event_cpus = evsel->evlist->core.user_requested_cpus;
+ struct perf_cpu_map *event_cpus = evlist__core(evsel->evlist)->user_requested_cpus;
struct perf_cpu_map *intersect_cpus;
struct perf_cpu cpu;
@@ -325,7 +325,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
struct evsel *evsel, *cs_etm_evsel = NULL;
- struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *cpus = evlist__core(evlist)->user_requested_cpus;
bool privileged = perf_event_paranoid_check(-1);
int err = 0;
@@ -551,7 +551,7 @@ cs_etm_info_priv_size(struct auxtrace_record *itr,
{
unsigned int idx;
int etmv3 = 0, etmv4 = 0, ete = 0;
- struct perf_cpu_map *event_cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *event_cpus = evlist__core(evlist)->user_requested_cpus;
struct perf_cpu_map *intersect_cpus;
struct perf_cpu cpu;
struct perf_pmu *cs_etm_pmu = cs_etm_get_pmu(itr);
@@ -790,7 +790,7 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
u32 offset;
u64 nr_cpu, type;
struct perf_cpu_map *cpu_map;
- struct perf_cpu_map *event_cpus = session->evlist->core.user_requested_cpus;
+ struct perf_cpu_map *event_cpus = evlist__core(session->evlist)->user_requested_cpus;
struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
@@ -800,7 +800,7 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
if (priv_size != cs_etm_info_priv_size(itr, session->evlist))
return -EINVAL;
- if (!session->evlist->core.nr_mmaps)
+ if (!evlist__core(session->evlist)->nr_mmaps)
return -EINVAL;
/* If the cpu_map has the "any" CPU all online CPUs are involved */
diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
index f00d72d087fc..abbc67109fc0 100644
--- a/tools/perf/arch/arm64/util/arm-spe.c
+++ b/tools/perf/arch/arm64/util/arm-spe.c
@@ -60,7 +60,7 @@ static bool arm_spe_is_set_freq(struct evsel *evsel)
*/
static struct perf_cpu_map *arm_spe_find_cpus(struct evlist *evlist)
{
- struct perf_cpu_map *event_cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *event_cpus = evlist__core(evlist)->user_requested_cpus;
struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
struct perf_cpu_map *intersect_cpus;
@@ -157,7 +157,7 @@ static int arm_spe_info_fill(struct auxtrace_record *itr,
if (priv_size != arm_spe_info_priv_size(itr, session->evlist))
return -EINVAL;
- if (!session->evlist->core.nr_mmaps)
+ if (!evlist__core(session->evlist)->nr_mmaps)
return -EINVAL;
cpu_map = arm_spe_find_cpus(session->evlist);
@@ -363,7 +363,7 @@ static int arm_spe_setup_tracking_event(struct evlist *evlist,
{
int err;
struct evsel *tracking_evsel;
- struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *cpus = evlist__core(evlist)->user_requested_cpus;
/* Add dummy event to keep tracking */
err = parse_event(evlist, "dummy:u");
@@ -396,7 +396,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
struct arm_spe_recording *sper =
container_of(itr, struct arm_spe_recording, itr);
struct evsel *evsel, *tmp;
- struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *cpus = evlist__core(evlist)->user_requested_cpus;
bool discard = false;
int err;
u64 discard_bit;
diff --git a/tools/perf/arch/arm64/util/hisi-ptt.c b/tools/perf/arch/arm64/util/hisi-ptt.c
index fe457fd58c9e..52257715d2b7 100644
--- a/tools/perf/arch/arm64/util/hisi-ptt.c
+++ b/tools/perf/arch/arm64/util/hisi-ptt.c
@@ -53,7 +53,7 @@ static int hisi_ptt_info_fill(struct auxtrace_record *itr,
if (priv_size != HISI_PTT_AUXTRACE_PRIV_SIZE)
return -EINVAL;
- if (!session->evlist->core.nr_mmaps)
+ if (!evlist__core(session->evlist)->nr_mmaps)
return -EINVAL;
auxtrace_info->type = PERF_AUXTRACE_HISI_PTT;
diff --git a/tools/perf/arch/x86/tests/hybrid.c b/tools/perf/arch/x86/tests/hybrid.c
index dfb0ffc0d030..0477e17b8e53 100644
--- a/tools/perf/arch/x86/tests/hybrid.c
+++ b/tools/perf/arch/x86/tests/hybrid.c
@@ -26,7 +26,7 @@ static int test__hybrid_hw_event_with_pmu(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 1 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 1 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong hybrid type", test_hybrid_type(evsel, PERF_TYPE_RAW));
TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
@@ -38,7 +38,7 @@ static int test__hybrid_hw_group_event(struct evlist *evlist)
struct evsel *evsel, *leader;
evsel = leader = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 2 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 2 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong hybrid type", test_hybrid_type(evsel, PERF_TYPE_RAW));
TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
@@ -57,7 +57,7 @@ static int test__hybrid_sw_hw_group_event(struct evlist *evlist)
struct evsel *evsel, *leader;
evsel = leader = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 2 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 2 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_SOFTWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong leader", evsel__has_leader(evsel, leader));
@@ -74,7 +74,7 @@ static int test__hybrid_hw_sw_group_event(struct evlist *evlist)
struct evsel *evsel, *leader;
evsel = leader = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 2 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 2 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong hybrid type", test_hybrid_type(evsel, PERF_TYPE_RAW));
TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
@@ -91,7 +91,7 @@ static int test__hybrid_group_modifier1(struct evlist *evlist)
struct evsel *evsel, *leader;
evsel = leader = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 2 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 2 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong hybrid type", test_hybrid_type(evsel, PERF_TYPE_RAW));
TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
@@ -113,7 +113,7 @@ static int test__hybrid_raw1(struct evlist *evlist)
{
struct perf_evsel *evsel;
- perf_evlist__for_each_evsel(&evlist->core, evsel) {
+ perf_evlist__for_each_evsel(evlist__core(evlist), evsel) {
struct perf_pmu *pmu = perf_pmus__find_by_type(evsel->attr.type);
TEST_ASSERT_VAL("missing pmu", pmu);
@@ -127,7 +127,7 @@ static int test__hybrid_raw2(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 1 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 1 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_RAW == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong config", test_config(evsel, 0x1a));
return TEST_OK;
@@ -137,7 +137,7 @@ static int test__hybrid_cache_event(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 1 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 1 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HW_CACHE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong config", 0x2 == (evsel->core.attr.config & 0xffffffff));
return TEST_OK;
@@ -148,7 +148,7 @@ static int test__checkevent_pmu(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 1 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 1 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_RAW == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong config", 10 == evsel->core.attr.config);
TEST_ASSERT_VAL("wrong config1", 1 == evsel->core.attr.config1);
@@ -168,7 +168,7 @@ static int test__hybrid_hw_group_event_2(struct evlist *evlist)
struct evsel *evsel, *leader;
evsel = leader = evlist__first(evlist);
- TEST_ASSERT_VAL("wrong number of entries", 2 == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries", 2 == evlist__nr_entries(evlist));
TEST_ASSERT_VAL("wrong type", PERF_TYPE_HARDWARE == evsel->core.attr.type);
TEST_ASSERT_VAL("wrong hybrid type", test_hybrid_type(evsel, PERF_TYPE_RAW));
TEST_ASSERT_VAL("wrong config", test_config(evsel, PERF_COUNT_HW_CPU_CYCLES));
diff --git a/tools/perf/arch/x86/util/auxtrace.c b/tools/perf/arch/x86/util/auxtrace.c
index ecbf61a7eb3a..84fce0b51ccf 100644
--- a/tools/perf/arch/x86/util/auxtrace.c
+++ b/tools/perf/arch/x86/util/auxtrace.c
@@ -55,7 +55,7 @@ struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
int *err)
{
char buffer[64];
- struct perf_cpu cpu = perf_cpu_map__min(evlist->core.all_cpus);
+ struct perf_cpu cpu = perf_cpu_map__min(evlist__core(evlist)->all_cpus);
int ret;
*err = 0;
diff --git a/tools/perf/arch/x86/util/intel-bts.c b/tools/perf/arch/x86/util/intel-bts.c
index 100a23d27998..d44d568a6d21 100644
--- a/tools/perf/arch/x86/util/intel-bts.c
+++ b/tools/perf/arch/x86/util/intel-bts.c
@@ -79,10 +79,10 @@ static int intel_bts_info_fill(struct auxtrace_record *itr,
if (priv_size != INTEL_BTS_AUXTRACE_PRIV_SIZE)
return -EINVAL;
- if (!session->evlist->core.nr_mmaps)
+ if (!evlist__core(session->evlist)->nr_mmaps)
return -EINVAL;
- pc = session->evlist->mmap[0].core.base;
+ pc = evlist__mmap(session->evlist)[0].core.base;
if (pc) {
err = perf_read_tsc_conversion(pc, &tc);
if (err) {
@@ -114,7 +114,7 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
container_of(itr, struct intel_bts_recording, itr);
struct perf_pmu *intel_bts_pmu = btsr->intel_bts_pmu;
struct evsel *evsel, *intel_bts_evsel = NULL;
- const struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ const struct perf_cpu_map *cpus = evlist__core(evlist)->user_requested_cpus;
bool privileged = perf_event_paranoid_check(-1);
if (opts->auxtrace_sample_mode) {
diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
index 0307ff15d9fc..a533114c0048 100644
--- a/tools/perf/arch/x86/util/intel-pt.c
+++ b/tools/perf/arch/x86/util/intel-pt.c
@@ -360,10 +360,10 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
filter = intel_pt_find_filter(session->evlist, ptr->intel_pt_pmu);
filter_str_len = filter ? strlen(filter) : 0;
- if (!session->evlist->core.nr_mmaps)
+ if (!evlist__core(session->evlist)->nr_mmaps)
return -EINVAL;
- pc = session->evlist->mmap[0].core.base;
+ pc = evlist__mmap(session->evlist)[0].core.base;
if (pc) {
err = perf_read_tsc_conversion(pc, &tc);
if (err) {
@@ -376,7 +376,8 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
ui__warning("Intel Processor Trace: TSC not available\n");
}
- per_cpu_mmaps = !perf_cpu_map__is_any_cpu_or_is_empty(session->evlist->core.user_requested_cpus);
+ per_cpu_mmaps = !perf_cpu_map__is_any_cpu_or_is_empty(
+ evlist__core(session->evlist)->user_requested_cpus);
auxtrace_info->type = PERF_AUXTRACE_INTEL_PT;
auxtrace_info->priv[INTEL_PT_PMU_TYPE] = intel_pt_pmu->type;
@@ -621,7 +622,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
struct perf_pmu *intel_pt_pmu = ptr->intel_pt_pmu;
bool have_timing_info, need_immediate = false;
struct evsel *evsel, *intel_pt_evsel = NULL;
- const struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ const struct perf_cpu_map *cpus = evlist__core(evlist)->user_requested_cpus;
bool privileged = perf_event_paranoid_check(-1);
u64 tsc_bit;
int err;
diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c
index e0417552b0cb..a0baa6cdefd8 100644
--- a/tools/perf/arch/x86/util/iostat.c
+++ b/tools/perf/arch/x86/util/iostat.c
@@ -334,7 +334,7 @@ static int iostat_event_group(struct evlist *evl,
int iostat_prepare(struct evlist *evlist, struct perf_stat_config *config)
{
- if (evlist->core.nr_entries > 0) {
+ if (evlist__nr_entries(evlist) > 0) {
pr_warning("The -e and -M options are not supported."
"All chosen events/metrics will be dropped\n");
evlist__put(evlist);
@@ -400,7 +400,7 @@ void iostat_prefix(struct evlist *evlist,
struct perf_stat_config *config,
char *prefix, struct timespec *ts)
{
- struct iio_root_port *rp = evlist->selected->priv;
+ struct iio_root_port *rp = evlist__selected(evlist)->priv;
if (rp) {
/*
@@ -463,7 +463,7 @@ void iostat_print_counters(struct evlist *evlist,
iostat_prefix(evlist, config, prefix, ts);
fprintf(config->output, "%s", prefix);
evlist__for_each_entry(evlist, counter) {
- perf_device = evlist->selected->priv;
+ perf_device = evlist__selected(evlist)->priv;
if (perf_device && perf_device != counter->priv) {
evlist__set_selected(evlist, counter);
iostat_prefix(evlist, config, prefix, ts);
diff --git a/tools/perf/bench/evlist-open-close.c b/tools/perf/bench/evlist-open-close.c
index 304929d1f67f..748ebbe458f4 100644
--- a/tools/perf/bench/evlist-open-close.c
+++ b/tools/perf/bench/evlist-open-close.c
@@ -116,7 +116,7 @@ static int bench__do_evlist_open_close(struct evlist *evlist)
return err;
}
- err = evlist__mmap(evlist, opts.mmap_pages);
+ err = evlist__do_mmap(evlist, opts.mmap_pages);
if (err < 0) {
pr_err("evlist__mmap: %s\n", str_error_r(errno, sbuf, sizeof(sbuf)));
return err;
@@ -124,7 +124,7 @@ static int bench__do_evlist_open_close(struct evlist *evlist)
evlist__enable(evlist);
evlist__disable(evlist);
- evlist__munmap(evlist);
+ evlist__do_munmap(evlist);
evlist__close(evlist);
return 0;
@@ -145,10 +145,11 @@ static int bench_evlist_open_close__run(char *evstr, const char *uid_str)
init_stats(&time_stats);
- printf(" Number of cpus:\t%d\n", perf_cpu_map__nr(evlist->core.user_requested_cpus));
- printf(" Number of threads:\t%d\n", evlist->core.threads->nr);
+ printf(" Number of cpus:\t%d\n",
+ perf_cpu_map__nr(evlist__core(evlist)->user_requested_cpus));
+ printf(" Number of threads:\t%d\n", evlist__core(evlist)->threads->nr);
printf(" Number of events:\t%d (%d fds)\n",
- evlist->core.nr_entries, evlist__count_evsel_fds(evlist));
+ evlist__nr_entries(evlist), evlist__count_evsel_fds(evlist));
printf(" Number of iterations:\t%d\n", iterations);
evlist__put(evlist);
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 5e57b78548f4..3c14fbec7b3d 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -928,7 +928,7 @@ int cmd_annotate(int argc, const char **argv)
*/
if ((use_browser == 1 || annotate.use_stdio2) && annotate.has_br_stack) {
sort__mode = SORT_MODE__BRANCH;
- if (annotate.session->evlist->nr_br_cntr > 0)
+ if (evlist__nr_br_cntr(annotate.session->evlist) > 0)
annotate_opts.show_br_cntr = true;
}
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
index 676239148b87..9e4c5220d43c 100644
--- a/tools/perf/builtin-ftrace.c
+++ b/tools/perf/builtin-ftrace.c
@@ -377,9 +377,9 @@ static int set_tracing_pid(struct perf_ftrace *ftrace)
if (target__has_cpu(&ftrace->target))
return 0;
- for (i = 0; i < perf_thread_map__nr(ftrace->evlist->core.threads); i++) {
+ for (i = 0; i < perf_thread_map__nr(evlist__core(ftrace->evlist)->threads); i++) {
scnprintf(buf, sizeof(buf), "%d",
- perf_thread_map__pid(ftrace->evlist->core.threads, i));
+ perf_thread_map__pid(evlist__core(ftrace->evlist)->threads, i));
if (append_tracing_file("set_ftrace_pid", buf) < 0)
return -1;
}
@@ -413,7 +413,7 @@ static int set_tracing_cpumask(struct perf_cpu_map *cpumap)
static int set_tracing_cpu(struct perf_ftrace *ftrace)
{
- struct perf_cpu_map *cpumap = ftrace->evlist->core.user_requested_cpus;
+ struct perf_cpu_map *cpumap = evlist__core(ftrace->evlist)->user_requested_cpus;
if (!target__has_cpu(&ftrace->target))
return 0;
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 88c0ef4f5ff1..8869268701d5 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -1427,7 +1427,7 @@ static int synthesize_id_index(struct perf_inject *inject, size_t new_cnt)
struct perf_session *session = inject->session;
struct evlist *evlist = session->evlist;
struct machine *machine = &session->machines.host;
- size_t from = evlist->core.nr_entries - new_cnt;
+ size_t from = evlist__nr_entries(evlist) - new_cnt;
return __perf_event__synthesize_id_index(&inject->tool, perf_event__repipe,
evlist, machine, from);
@@ -1962,7 +1962,7 @@ static int host__finished_init(const struct perf_tool *tool, struct perf_session
if (ret)
return ret;
- ret = synthesize_id_index(inject, gs->session->evlist->core.nr_entries);
+ ret = synthesize_id_index(inject, evlist__nr_entries(gs->session->evlist));
if (ret) {
pr_err("Failed to synthesize id_index\n");
return ret;
diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c
index d88855e3c7b4..d14e2a9126ee 100644
--- a/tools/perf/builtin-kvm.c
+++ b/tools/perf/builtin-kvm.c
@@ -1222,7 +1222,7 @@ static s64 perf_kvm__mmap_read_idx(struct perf_kvm_stat *kvm, int idx,
int err;
*mmap_time = ULLONG_MAX;
- md = &evlist->mmap[idx];
+ md = &evlist__mmap(evlist)[idx];
err = perf_mmap__read_init(&md->core);
if (err < 0)
return (err == -EAGAIN) ? 0 : -1;
@@ -1267,7 +1267,7 @@ static int perf_kvm__mmap_read(struct perf_kvm_stat *kvm)
s64 n, ntotal = 0;
u64 flush_time = ULLONG_MAX, mmap_time;
- for (i = 0; i < kvm->evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(kvm->evlist)->nr_mmaps; i++) {
n = perf_kvm__mmap_read_idx(kvm, i, &mmap_time);
if (n < 0)
return -1;
@@ -1450,7 +1450,7 @@ static int kvm_events_live_report(struct perf_kvm_stat *kvm)
evlist__enable(kvm->evlist);
while (!done) {
- struct fdarray *fda = &kvm->evlist->core.pollfd;
+ struct fdarray *fda = &evlist__core(kvm->evlist)->pollfd;
int rc;
rc = perf_kvm__mmap_read(kvm);
@@ -1532,7 +1532,7 @@ static int kvm_live_open_events(struct perf_kvm_stat *kvm)
goto out;
}
- if (evlist__mmap(evlist, kvm->opts.mmap_pages) < 0) {
+ if (evlist__do_mmap(evlist, kvm->opts.mmap_pages) < 0) {
ui__error("Failed to mmap the events: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
evlist__close(evlist);
@@ -1932,7 +1932,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm,
perf_session__set_id_hdr_size(kvm->session);
ordered_events__set_copy_on_queue(&kvm->session->ordered_events, true);
machine__synthesize_threads(&kvm->session->machines.host, &kvm->opts.target,
- kvm->evlist->core.threads, true, false, 1);
+ evlist__core(kvm->evlist)->threads, true, false, 1);
err = kvm_live_open_events(kvm);
if (err)
goto out;
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 9d3a4c779a41..270644c7ec46 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -1776,7 +1776,7 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
}
}
- list_for_each_entry(evsel, &session->evlist->core.entries, core.node) {
+ list_for_each_entry(evsel, &evlist__core(session->evlist)->entries, core.node) {
if (kwork->show_callchain && !evsel__has_callchain(evsel)) {
pr_debug("Samples do not have callchains\n");
kwork->show_callchain = 0;
@@ -1826,9 +1826,9 @@ static int perf_kwork__read_events(struct perf_kwork *kwork)
goto out_delete;
}
- kwork->nr_events = session->evlist->stats.nr_events[0];
- kwork->nr_lost_events = session->evlist->stats.total_lost;
- kwork->nr_lost_chunks = session->evlist->stats.nr_events[PERF_RECORD_LOST];
+ kwork->nr_events = evlist__stats(session->evlist)->nr_events[0];
+ kwork->nr_lost_events = evlist__stats(session->evlist)->total_lost;
+ kwork->nr_lost_chunks = evlist__stats(session->evlist)->nr_events[PERF_RECORD_LOST];
out_delete:
perf_session__delete(session);
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index b4fffa936e01..b09d2b5f31e3 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -501,12 +501,12 @@ static void record__aio_mmap_read_sync(struct record *rec)
{
int i;
struct evlist *evlist = rec->evlist;
- struct mmap *maps = evlist->mmap;
+ struct mmap *maps = evlist__mmap(evlist);
if (!record__aio_enabled(rec))
return;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
struct mmap *map = &maps[i];
if (map->core.base)
@@ -810,8 +810,8 @@ static int record__auxtrace_read_snapshot_all(struct record *rec)
int i;
int rc = 0;
- for (i = 0; i < rec->evlist->core.nr_mmaps; i++) {
- struct mmap *map = &rec->evlist->mmap[i];
+ for (i = 0; i < evlist__core(rec->evlist)->nr_mmaps; i++) {
+ struct mmap *map = &evlist__mmap(rec->evlist)[i];
if (!map->auxtrace_mmap.base)
continue;
@@ -1053,15 +1053,15 @@ static void record__thread_data_close_pipes(struct record_thread *thread_data)
static bool evlist__per_thread(struct evlist *evlist)
{
- return cpu_map__is_dummy(evlist->core.user_requested_cpus);
+ return cpu_map__is_dummy(evlist__core(evlist)->user_requested_cpus);
}
static int record__thread_data_init_maps(struct record_thread *thread_data, struct evlist *evlist)
{
- int m, tm, nr_mmaps = evlist->core.nr_mmaps;
- struct mmap *mmap = evlist->mmap;
- struct mmap *overwrite_mmap = evlist->overwrite_mmap;
- struct perf_cpu_map *cpus = evlist->core.all_cpus;
+ int m, tm, nr_mmaps = evlist__core(evlist)->nr_mmaps;
+ struct mmap *mmap = evlist__mmap(evlist);
+ struct mmap *overwrite_mmap = evlist__overwrite_mmap(evlist);
+ struct perf_cpu_map *cpus = evlist__core(evlist)->all_cpus;
bool per_thread = evlist__per_thread(evlist);
if (per_thread)
@@ -1116,16 +1116,17 @@ static int record__thread_data_init_pollfd(struct record_thread *thread_data, st
overwrite_map = thread_data->overwrite_maps ?
thread_data->overwrite_maps[tm] : NULL;
- for (f = 0; f < evlist->core.pollfd.nr; f++) {
- void *ptr = evlist->core.pollfd.priv[f].ptr;
+ for (f = 0; f < evlist__core(evlist)->pollfd.nr; f++) {
+ void *ptr = evlist__core(evlist)->pollfd.priv[f].ptr;
if ((map && ptr == map) || (overwrite_map && ptr == overwrite_map)) {
pos = fdarray__dup_entry_from(&thread_data->pollfd, f,
- &evlist->core.pollfd);
+ &evlist__core(evlist)->pollfd);
if (pos < 0)
return pos;
pr_debug2("thread_data[%p]: pollfd[%d] <- event_fd=%d\n",
- thread_data, pos, evlist->core.pollfd.entries[f].fd);
+ thread_data, pos,
+ evlist__core(evlist)->pollfd.entries[f].fd);
}
}
}
@@ -1169,7 +1170,7 @@ static int record__update_evlist_pollfd_from_thread(struct record *rec,
struct evlist *evlist,
struct record_thread *thread_data)
{
- struct pollfd *e_entries = evlist->core.pollfd.entries;
+ struct pollfd *e_entries = evlist__core(evlist)->pollfd.entries;
struct pollfd *t_entries = thread_data->pollfd.entries;
int err = 0;
size_t i;
@@ -1193,7 +1194,7 @@ static int record__dup_non_perf_events(struct record *rec,
struct evlist *evlist,
struct record_thread *thread_data)
{
- struct fdarray *fda = &evlist->core.pollfd;
+ struct fdarray *fda = &evlist__core(evlist)->pollfd;
int i, ret;
for (i = 0; i < fda->nr; i++) {
@@ -1320,17 +1321,17 @@ static int record__mmap_evlist(struct record *rec,
return ret;
if (record__threads_enabled(rec)) {
- ret = perf_data__create_dir(&rec->data, evlist->core.nr_mmaps);
+ ret = perf_data__create_dir(&rec->data, evlist__core(evlist)->nr_mmaps);
if (ret) {
errno = -ret;
pr_err("Failed to create data directory: %m\n");
return ret;
}
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- if (evlist->mmap)
- evlist->mmap[i].file = &rec->data.dir.files[i];
- if (evlist->overwrite_mmap)
- evlist->overwrite_mmap[i].file = &rec->data.dir.files[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ if (evlist__mmap(evlist))
+ evlist__mmap(evlist)[i].file = &rec->data.dir.files[i];
+ if (evlist__overwrite_mmap(evlist))
+ evlist__overwrite_mmap(evlist)[i].file = &rec->data.dir.files[i];
}
}
@@ -1479,11 +1480,11 @@ static int record__open(struct record *rec)
static void set_timestamp_boundary(struct record *rec, u64 sample_time)
{
- if (rec->evlist->first_sample_time == 0)
- rec->evlist->first_sample_time = sample_time;
+ if (evlist__first_sample_time(rec->evlist) == 0)
+ evlist__set_first_sample_time(rec->evlist, sample_time);
if (sample_time)
- rec->evlist->last_sample_time = sample_time;
+ evlist__set_last_sample_time(rec->evlist, sample_time);
}
static int process_sample_event(const struct perf_tool *tool,
@@ -1652,7 +1653,7 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
if (!maps)
return 0;
- if (overwrite && evlist->bkw_mmap_state != BKW_MMAP_DATA_PENDING)
+ if (overwrite && evlist__bkw_mmap_state(evlist) != BKW_MMAP_DATA_PENDING)
return 0;
if (record__aio_enabled(rec))
@@ -1807,7 +1808,7 @@ static void record__init_features(struct record *rec)
if (rec->no_buildid)
perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
- if (!have_tracepoints(&rec->evlist->core.entries))
+ if (!have_tracepoints(&evlist__core(rec->evlist)->entries))
perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
if (!rec->opts.branch_stack)
@@ -1873,7 +1874,7 @@ static int record__synthesize_workload(struct record *rec, bool tail)
if (rec->opts.tail_synthesize != tail)
return 0;
- thread_map = thread_map__new_by_tid(rec->evlist->workload.pid);
+ thread_map = thread_map__new_by_tid(evlist__workload_pid(rec->evlist));
if (thread_map == NULL)
return -1;
@@ -2066,10 +2067,10 @@ static void alarm_sig_handler(int sig);
static const struct perf_event_mmap_page *evlist__pick_pc(struct evlist *evlist)
{
if (evlist) {
- if (evlist->mmap && evlist->mmap[0].core.base)
- return evlist->mmap[0].core.base;
- if (evlist->overwrite_mmap && evlist->overwrite_mmap[0].core.base)
- return evlist->overwrite_mmap[0].core.base;
+ if (evlist__mmap(evlist) && evlist__mmap(evlist)[0].core.base)
+ return evlist__mmap(evlist)[0].core.base;
+ if (evlist__overwrite_mmap(evlist) && evlist__overwrite_mmap(evlist)[0].core.base)
+ return evlist__overwrite_mmap(evlist)[0].core.base;
}
return NULL;
}
@@ -2149,7 +2150,7 @@ static int record__synthesize(struct record *rec, bool tail)
if (err)
goto out;
- err = perf_event__synthesize_thread_map2(&rec->tool, rec->evlist->core.threads,
+ err = perf_event__synthesize_thread_map2(&rec->tool, evlist__core(rec->evlist)->threads,
process_synthesized_event,
NULL);
if (err < 0) {
@@ -2157,7 +2158,7 @@ static int record__synthesize(struct record *rec, bool tail)
return err;
}
- err = perf_event__synthesize_cpu_map(&rec->tool, rec->evlist->core.all_cpus,
+ err = perf_event__synthesize_cpu_map(&rec->tool, evlist__core(rec->evlist)->all_cpus,
process_synthesized_event, NULL);
if (err < 0) {
pr_err("Couldn't synthesize cpu map.\n");
@@ -2190,7 +2191,7 @@ static int record__synthesize(struct record *rec, bool tail)
bool needs_mmap = rec->opts.synth & PERF_SYNTH_MMAP;
err = __machine__synthesize_threads(machine, tool, &opts->target,
- rec->evlist->core.threads,
+ evlist__core(rec->evlist)->threads,
f, needs_mmap, opts->record_data_mmap,
rec->opts.nr_threads_synthesize);
}
@@ -2543,7 +2544,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* because we synthesize event name through the pipe
* and need the id for that.
*/
- if (data->is_pipe && rec->evlist->core.nr_entries == 1)
+ if (data->is_pipe && evlist__nr_entries(rec->evlist) == 1)
rec->opts.sample_id = true;
if (rec->timestamp_filename && perf_data__is_pipe(data)) {
@@ -2567,7 +2568,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
/* Debug message used by test scripts */
pr_debug3("perf record done opening and mmapping events\n");
- env->comp_mmap_len = session->evlist->core.mmap_len;
+ env->comp_mmap_len = evlist__core(session->evlist)->mmap_len;
if (rec->opts.kcore) {
err = record__kcore_copy(&session->machines.host, data);
@@ -2668,7 +2669,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* Synthesize COMM event to prevent it.
*/
tgid = perf_event__synthesize_comm(tool, event,
- rec->evlist->workload.pid,
+ evlist__workload_pid(rec->evlist),
process_synthesized_event,
machine);
free(event);
@@ -2688,7 +2689,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* Synthesize NAMESPACES event for the command specified.
*/
perf_event__synthesize_namespaces(tool, event,
- rec->evlist->workload.pid,
+ evlist__workload_pid(rec->evlist),
tgid, process_synthesized_event,
machine);
free(event);
@@ -2705,7 +2706,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
}
- err = event_enable_timer__start(rec->evlist->eet);
+ err = event_enable_timer__start(evlist__event_enable_timer(rec->evlist));
if (err)
goto out_child;
@@ -2767,7 +2768,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* record__mmap_read_all() didn't collect data from
* overwritable ring buffer. Read again.
*/
- if (rec->evlist->bkw_mmap_state == BKW_MMAP_RUNNING)
+ if (evlist__bkw_mmap_state(rec->evlist) == BKW_MMAP_RUNNING)
continue;
trigger_ready(&switch_output_trigger);
@@ -2836,7 +2837,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
}
- err = event_enable_timer__process(rec->evlist->eet);
+ err = event_enable_timer__process(evlist__event_enable_timer(rec->evlist));
if (err < 0)
goto out_child;
if (err) {
@@ -2904,7 +2905,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
int exit_status;
if (!child_finished)
- kill(rec->evlist->workload.pid, SIGTERM);
+ kill(evlist__workload_pid(rec->evlist), SIGTERM);
wait(&exit_status);
@@ -4030,7 +4031,7 @@ static int record__init_thread_default_masks(struct record *rec, struct perf_cpu
static int record__init_thread_masks(struct record *rec)
{
int ret = 0;
- struct perf_cpu_map *cpus = rec->evlist->core.all_cpus;
+ struct perf_cpu_map *cpus = evlist__core(rec->evlist)->all_cpus;
if (!record__threads_enabled(rec))
return record__init_thread_default_masks(rec, cpus);
@@ -4281,14 +4282,14 @@ int cmd_record(int argc, const char **argv)
if (record.opts.overwrite)
record.opts.tail_synthesize = true;
- if (rec->evlist->core.nr_entries == 0) {
+ if (evlist__nr_entries(rec->evlist) == 0) {
struct evlist *def_evlist = evlist__new_default(&rec->opts.target,
callchain_param.enabled);
if (!def_evlist)
goto out;
- evlist__splice_list_tail(rec->evlist, &def_evlist->core.entries);
+ evlist__splice_list_tail(rec->evlist, &evlist__core(def_evlist)->entries);
evlist__put(def_evlist);
}
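The bulk of this patch is mechanical: direct field accesses such as `rec->evlist->core.nr_entries` become calls through accessors like `evlist__core()` and `evlist__nr_entries()`. A minimal sketch of the pattern, with hypothetical, simplified stand-ins for the real types (the actual definitions live in tools/perf/util/evlist.h and libperf):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, trimmed-down stand-ins for the real perf types. */
struct perf_evlist {
	int nr_entries;	/* number of events in the list */
	int nr_mmaps;	/* number of mmap'd ring buffers */
};

struct evlist {
	struct perf_evlist core;
};

/*
 * Accessor in the style the patch introduces: callers reach the
 * embedded libperf evlist through evlist__core() rather than touching
 * evlist->core directly, so a reference-count-checked build can
 * interpose on every access.
 */
static inline struct perf_evlist *evlist__core(struct evlist *evlist)
{
	return &evlist->core;
}

static inline int evlist__nr_entries(struct evlist *evlist)
{
	return evlist__core(evlist)->nr_entries;
}
```

With this shape, the conversions above are one-for-one textual substitutions with no behavior change.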
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 95c0bdba6b11..38b66763b99a 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -561,7 +561,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
if (!quiet) {
fprintf(stdout, "#\n# Total Lost Samples: %" PRIu64 "\n#\n",
- evlist->stats.total_lost_samples);
+ evlist__stats(evlist)->total_lost_samples);
}
evlist__for_each_entry(evlist, pos) {
@@ -1155,7 +1155,7 @@ static int __cmd_report(struct report *rep)
PERF_HPP_REPORT__BLOCK_AVG_CYCLES,
};
- if (session->evlist->nr_br_cntr > 0)
+ if (evlist__nr_br_cntr(session->evlist) > 0)
block_hpps[nr_hpps++] = PERF_HPP_REPORT__BLOCK_BRANCH_COUNTER;
block_hpps[nr_hpps++] = PERF_HPP_REPORT__BLOCK_RANGE;
@@ -1290,7 +1290,7 @@ static int process_attr(const struct perf_tool *tool __maybe_unused,
* on events sample_type.
*/
sample_type = evlist__combined_sample_type(*pevlist);
- session = (*pevlist)->session;
+ session = evlist__session(*pevlist);
callchain_param_setup(sample_type, perf_session__e_machine(session, /*e_flags=*/NULL));
return 0;
}
diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
index d683642ab4e0..d3fa9c70790f 100644
--- a/tools/perf/builtin-sched.c
+++ b/tools/perf/builtin-sched.c
@@ -1951,9 +1951,9 @@ static int perf_sched__read_events(struct perf_sched *sched)
goto out_delete;
}
- sched->nr_events = session->evlist->stats.nr_events[0];
- sched->nr_lost_events = session->evlist->stats.total_lost;
- sched->nr_lost_chunks = session->evlist->stats.nr_events[PERF_RECORD_LOST];
+ sched->nr_events = evlist__stats(session->evlist)->nr_events[0];
+ sched->nr_lost_events = evlist__stats(session->evlist)->total_lost;
+ sched->nr_lost_chunks = evlist__stats(session->evlist)->nr_events[PERF_RECORD_LOST];
}
rc = 0;
@@ -3211,7 +3211,7 @@ static int timehist_check_attr(struct perf_sched *sched,
struct evsel *evsel;
struct evsel_runtime *er;
- list_for_each_entry(evsel, &evlist->core.entries, core.node) {
+ list_for_each_entry(evsel, &evlist__core(evlist)->entries, core.node) {
er = evsel__get_runtime(evsel);
if (er == NULL) {
pr_err("Failed to allocate memory for evsel runtime data\n");
@@ -3382,9 +3382,9 @@ static int perf_sched__timehist(struct perf_sched *sched)
goto out;
}
- sched->nr_events = evlist->stats.nr_events[0];
- sched->nr_lost_events = evlist->stats.total_lost;
- sched->nr_lost_chunks = evlist->stats.nr_events[PERF_RECORD_LOST];
+ sched->nr_events = evlist__stats(evlist)->nr_events[0];
+ sched->nr_lost_events = evlist__stats(evlist)->total_lost;
+ sched->nr_lost_chunks = evlist__stats(evlist)->nr_events[PERF_RECORD_LOST];
if (sched->summary)
timehist_print_summary(sched, session);
@@ -3887,7 +3887,7 @@ static int perf_sched__schedstat_record(struct perf_sched *sched,
if (err < 0)
goto out;
- user_requested_cpus = evlist->core.user_requested_cpus;
+ user_requested_cpus = evlist__core(evlist)->user_requested_cpus;
err = perf_event__synthesize_schedstat(&(sched->tool),
process_synthesized_schedstat_event,
@@ -4509,7 +4509,7 @@ static int perf_sched__schedstat_report(struct perf_sched *sched)
if (err < 0)
goto out;
- user_requested_cpus = session->evlist->core.user_requested_cpus;
+ user_requested_cpus = evlist__core(session->evlist)->user_requested_cpus;
err = perf_session__process_events(session);
@@ -4675,7 +4675,7 @@ static int perf_sched__schedstat_live(struct perf_sched *sched,
if (err < 0)
goto out;
- user_requested_cpus = evlist->core.user_requested_cpus;
+ user_requested_cpus = evlist__core(evlist)->user_requested_cpus;
err = perf_event__synthesize_schedstat(&(sched->tool),
process_synthesized_event_live,
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index 0ead134940d5..3e3692088154 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -2224,9 +2224,10 @@ static int script_find_metrics(const struct pmu_metric *pm,
evlist__for_each_entry(metric_evlist, metric_evsel) {
struct evsel *script_evsel =
map_metric_evsel_to_script_evsel(script_evlist, metric_evsel);
- struct metric_event *metric_me = metricgroup__lookup(&metric_evlist->metric_events,
- metric_evsel,
- /*create=*/false);
+ struct metric_event *metric_me =
+ metricgroup__lookup(evlist__metric_events(metric_evlist),
+ metric_evsel,
+ /*create=*/false);
if (script_evsel->metric_id == NULL) {
script_evsel->metric_id = metric_evsel->metric_id;
@@ -2246,7 +2247,7 @@ static int script_find_metrics(const struct pmu_metric *pm,
if (metric_me) {
struct metric_expr *expr;
struct metric_event *script_me =
- metricgroup__lookup(&script_evlist->metric_events,
+ metricgroup__lookup(evlist__metric_events(script_evlist),
script_evsel,
/*create=*/true);
@@ -2316,7 +2317,7 @@ static void perf_sample__fprint_metric(struct thread *thread,
assert(stat_config.aggr_mode == AGGR_GLOBAL);
stat_config.aggr_get_id = script_aggr_cpu_id_get;
stat_config.aggr_map =
- cpu_aggr_map__new(evsel->evlist->core.user_requested_cpus,
+ cpu_aggr_map__new(evlist__core(evsel->evlist)->user_requested_cpus,
aggr_cpu_id__global, /*data=*/NULL,
/*needs_sort=*/false);
}
@@ -3898,7 +3899,7 @@ static int set_maps(struct perf_script *script)
if (WARN_ONCE(script->allocated, "stats double allocation\n"))
return -EINVAL;
- perf_evlist__set_maps(&evlist->core, script->cpus, script->threads);
+ perf_evlist__set_maps(evlist__core(evlist), script->cpus, script->threads);
if (evlist__alloc_stats(&stat_config, evlist, /*alloc_raw=*/true))
return -ENOMEM;
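Note that `evlist__stats()` is used for writes as well as reads in the hunks that follow (e.g. `evlist__stats(...)->total_lost += ...`), so that accessor must hand back a pointer into the evlist rather than a copy. A hedged sketch with invented, simplified types (the real `struct events_stats` in tools/perf/util/events_stats.h carries many more counters):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, trimmed-down stats struct for illustration only. */
struct events_stats {
	uint64_t total_lost;
	uint64_t total_lost_samples;
};

struct evlist {
	struct events_stats stats;
};

/*
 * Returning a pointer (not a value) lets existing call sites keep
 * mutating counters in place:
 *	evlist__stats(evlist)->total_lost += lost;
 */
static inline struct events_stats *evlist__stats(struct evlist *evlist)
{
	return &evlist->stats;
}
```

This keeps the diff mechanical: only the left-hand side of each expression changes.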
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index bfa3512e1686..fe06d057edf0 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -321,7 +321,7 @@ static int read_single_counter(struct evsel *counter, int cpu_map_idx, int threa
*/
static int read_counter_cpu(struct evsel *counter, int cpu_map_idx)
{
- int nthreads = perf_thread_map__nr(evsel_list->core.threads);
+ int nthreads = perf_thread_map__nr(evlist__core(evsel_list)->threads);
int thread;
if (!counter->supported)
@@ -628,11 +628,12 @@ static int dispatch_events(bool forks, int timeout, int interval, int *times)
time_to_sleep = sleep_time;
while (!done) {
- if (forks)
+ if (forks) {
child_exited = waitpid(child_pid, &status, WNOHANG);
- else
- child_exited = !is_target_alive(&target, evsel_list->core.threads) ? 1 : 0;
-
+ } else {
+ child_exited = !is_target_alive(&target,
+ evlist__core(evsel_list)->threads) ? 1 : 0;
+ }
if (child_exited)
break;
@@ -681,14 +682,15 @@ static enum counter_recovery stat_handle_error(struct evsel *counter, int err)
return COUNTER_RETRY;
}
if (target__has_per_thread(&target) && err != EOPNOTSUPP &&
- evsel_list->core.threads && evsel_list->core.threads->err_thread != -1) {
+ evlist__core(evsel_list)->threads &&
+ evlist__core(evsel_list)->threads->err_thread != -1) {
/*
* For global --per-thread case, skip current
* error thread.
*/
- if (!thread_map__remove(evsel_list->core.threads,
- evsel_list->core.threads->err_thread)) {
- evsel_list->core.threads->err_thread = -1;
+ if (!thread_map__remove(evlist__core(evsel_list)->threads,
+ evlist__core(evsel_list)->threads->err_thread)) {
+ evlist__core(evsel_list)->threads->err_thread = -1;
counter->supported = true;
return COUNTER_RETRY;
}
@@ -787,11 +789,12 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
bool second_pass = false, has_supported_counters;
if (forks) {
- if (evlist__prepare_workload(evsel_list, &target, argv, is_pipe, workload_exec_failed_signal) < 0) {
+ if (evlist__prepare_workload(evsel_list, &target, argv, is_pipe,
+ workload_exec_failed_signal) < 0) {
perror("failed to prepare workload");
return -1;
}
- child_pid = evsel_list->workload.pid;
+ child_pid = evlist__workload_pid(evsel_list);
}
evlist__for_each_entry(evsel_list, counter) {
@@ -1199,7 +1202,7 @@ static int parse_cputype(const struct option *opt,
const struct perf_pmu *pmu;
struct evlist *evlist = *(struct evlist **)opt->value;
- if (!list_empty(&evlist->core.entries)) {
+ if (!list_empty(&evlist__core(evlist)->entries)) {
fprintf(stderr, "Must define cputype before events/metrics\n");
return -1;
}
@@ -1220,7 +1223,7 @@ static int parse_pmu_filter(const struct option *opt,
{
struct evlist *evlist = *(struct evlist **)opt->value;
- if (!list_empty(&evlist->core.entries)) {
+ if (!list_empty(&evlist__core(evlist)->entries)) {
fprintf(stderr, "Must define pmu-filter before events/metrics\n");
return -1;
}
@@ -1586,8 +1589,9 @@ static int perf_stat_init_aggr_mode(void)
if (get_id) {
bool needs_sort = stat_config.aggr_mode != AGGR_NONE;
- stat_config.aggr_map = cpu_aggr_map__new(evsel_list->core.user_requested_cpus,
- get_id, /*data=*/NULL, needs_sort);
+ stat_config.aggr_map = cpu_aggr_map__new(
+ evlist__core(evsel_list)->user_requested_cpus,
+ get_id, /*data=*/NULL, needs_sort);
if (!stat_config.aggr_map) {
pr_err("cannot build %s map\n", aggr_mode__string[stat_config.aggr_mode]);
return -1;
@@ -1596,7 +1600,7 @@ static int perf_stat_init_aggr_mode(void)
}
if (stat_config.aggr_mode == AGGR_THREAD) {
- nr = perf_thread_map__nr(evsel_list->core.threads);
+ nr = perf_thread_map__nr(evlist__core(evsel_list)->threads);
stat_config.aggr_map = cpu_aggr_map__empty_new(nr);
if (stat_config.aggr_map == NULL)
return -ENOMEM;
@@ -1615,7 +1619,7 @@ static int perf_stat_init_aggr_mode(void)
* taking the highest cpu number to be the size of
* the aggregation translate cpumap.
*/
- nr = perf_cpu_map__max(evsel_list->core.all_cpus).cpu + 1;
+ nr = perf_cpu_map__max(evlist__core(evsel_list)->all_cpus).cpu + 1;
stat_config.cpus_aggr_map = cpu_aggr_map__empty_new(nr);
return stat_config.cpus_aggr_map ? 0 : -ENOMEM;
}
@@ -1896,7 +1900,7 @@ static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
bool needs_sort = stat_config.aggr_mode != AGGR_NONE;
if (stat_config.aggr_mode == AGGR_THREAD) {
- int nr = perf_thread_map__nr(evsel_list->core.threads);
+ int nr = perf_thread_map__nr(evlist__core(evsel_list)->threads);
stat_config.aggr_map = cpu_aggr_map__empty_new(nr);
if (stat_config.aggr_map == NULL)
@@ -1914,7 +1918,7 @@ static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
if (!get_id)
return 0;
- stat_config.aggr_map = cpu_aggr_map__new(evsel_list->core.user_requested_cpus,
+ stat_config.aggr_map = cpu_aggr_map__new(evlist__core(evsel_list)->user_requested_cpus,
get_id, env, needs_sort);
if (!stat_config.aggr_map) {
pr_err("cannot build %s map\n", aggr_mode__string[stat_config.aggr_mode]);
@@ -2082,7 +2086,7 @@ static int add_default_events(void)
if (!stat_config.topdown_level)
stat_config.topdown_level = 1;
- if (!evlist->core.nr_entries && !evsel_list->core.nr_entries) {
+ if (!evlist__nr_entries(evlist) && !evlist__nr_entries(evsel_list)) {
/*
* Add Default metrics. To minimize multiplexing, don't request
* threshold computation, but it will be computed if the events
@@ -2121,13 +2125,13 @@ static int add_default_events(void)
evlist__for_each_entry(metric_evlist, evsel)
evsel->default_metricgroup = true;
- evlist__splice_list_tail(evlist, &metric_evlist->core.entries);
+ evlist__splice_list_tail(evlist, &evlist__core(metric_evlist)->entries);
metricgroup__copy_metric_events(evlist, /*cgrp=*/NULL,
- &evlist->metric_events,
- &metric_evlist->metric_events);
+ evlist__metric_events(evlist),
+ evlist__metric_events(metric_evlist));
evlist__put(metric_evlist);
}
- list_sort(/*priv=*/NULL, &evlist->core.entries, default_evlist_evsel_cmp);
+ list_sort(/*priv=*/NULL, &evlist__core(evlist)->entries, default_evlist_evsel_cmp);
}
out:
@@ -2142,10 +2146,10 @@ static int add_default_events(void)
}
}
parse_events_error__exit(&err);
- evlist__splice_list_tail(evsel_list, &evlist->core.entries);
+ evlist__splice_list_tail(evsel_list, &evlist__core(evlist)->entries);
metricgroup__copy_metric_events(evsel_list, /*cgrp=*/NULL,
- &evsel_list->metric_events,
- &evlist->metric_events);
+ evlist__metric_events(evsel_list),
+ evlist__metric_events(evlist));
evlist__put(evlist);
return ret;
}
@@ -2266,7 +2270,7 @@ static int set_maps(struct perf_stat *st)
if (WARN_ONCE(st->maps_allocated, "stats double allocation\n"))
return -EINVAL;
- perf_evlist__set_maps(&evsel_list->core, st->cpus, st->threads);
+ perf_evlist__set_maps(evlist__core(evsel_list), st->cpus, st->threads);
if (evlist__alloc_stats(&stat_config, evsel_list, /*alloc_raw=*/true))
return -ENOMEM;
@@ -2418,7 +2422,7 @@ static void setup_system_wide(int forks)
}
}
- if (evsel_list->core.nr_entries)
+ if (evlist__nr_entries(evsel_list))
target.system_wide = true;
}
}
@@ -2645,7 +2649,7 @@ int cmd_stat(int argc, const char **argv)
stat_config.csv_sep = DEFAULT_SEPARATOR;
if (affinity_set)
- evsel_list->no_affinity = !affinity;
+ evlist__set_no_affinity(evsel_list, !affinity);
if (argc && strlen(argv[0]) > 2 && strstarts("record", argv[0])) {
argc = __cmd_record(stat_options, &opt_mode, argc, argv);
@@ -2876,9 +2880,10 @@ int cmd_stat(int argc, const char **argv)
}
#ifdef HAVE_BPF_SKEL
if (target.use_bpf && nr_cgroups &&
- (evsel_list->core.nr_entries / nr_cgroups) > BPERF_CGROUP__MAX_EVENTS) {
+ (evlist__nr_entries(evsel_list) / nr_cgroups) > BPERF_CGROUP__MAX_EVENTS) {
pr_warning("Disabling BPF counters due to more events (%d) than the max (%d)\n",
- evsel_list->core.nr_entries / nr_cgroups, BPERF_CGROUP__MAX_EVENTS);
+ evlist__nr_entries(evsel_list) / nr_cgroups,
+ BPERF_CGROUP__MAX_EVENTS);
target.use_bpf = false;
}
#endif // HAVE_BPF_SKEL
@@ -2916,7 +2921,7 @@ int cmd_stat(int argc, const char **argv)
* so we could print it out on output.
*/
if (stat_config.aggr_mode == AGGR_THREAD) {
- thread_map__read_comms(evsel_list->core.threads);
+ thread_map__read_comms(evlist__core(evsel_list)->threads);
}
if (stat_config.aggr_mode == AGGR_NODE)
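Plain scalar members such as `no_affinity`, `workload.pid` and the sample-time bounds get getter/setter pairs rather than a pointer accessor, which keeps call sites explicit about reads versus writes. A sketch under the same caveat that these types are simplified stand-ins for the real `struct evlist`:

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/types.h>

/* Hypothetical, simplified fields; the real members sit in
 * struct evlist in tools/perf/util/evlist.h. */
struct evlist {
	bool no_affinity;
	pid_t workload_pid;
};

/* Getter: read-only call sites stay obviously read-only. */
static inline pid_t evlist__workload_pid(const struct evlist *evlist)
{
	return evlist->workload_pid;
}

/* Setter: the only way a caller mutates the field. */
static inline void evlist__set_no_affinity(struct evlist *evlist, bool no_affinity)
{
	evlist->no_affinity = no_affinity;
}
```

The `set_timestamp_boundary()` hunk earlier in the patch follows the same get/set split for `first_sample_time` and `last_sample_time`.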
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index c509cfef8285..fe8a73dd2000 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -141,7 +141,7 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
notes = symbol__annotation(sym);
annotation__lock(notes);
- if (!symbol__hists(sym, top->evlist->core.nr_entries)) {
+ if (!symbol__hists(sym, evlist__nr_entries(top->evlist))) {
annotation__unlock(notes);
pr_err("Not enough memory for annotating '%s' symbol!\n",
sym->name);
@@ -267,7 +267,7 @@ static void perf_top__show_details(struct perf_top *top)
more = hist_entry__annotate_printf(he, top->sym_evsel);
- if (top->evlist->enabled) {
+ if (evlist__enabled(top->evlist)) {
if (top->zero)
symbol__annotate_zero_histogram(symbol, top->sym_evsel);
else
@@ -293,7 +293,7 @@ static void perf_top__resort_hists(struct perf_top *t)
*/
hists__unlink(hists);
- if (evlist->enabled) {
+ if (evlist__enabled(evlist)) {
if (t->zero) {
hists__delete_entries(hists);
} else {
@@ -334,13 +334,13 @@ static void perf_top__print_sym_table(struct perf_top *top)
printf("%-*.*s\n", win_width, win_width, graph_dotted_line);
if (!top->record_opts.overwrite &&
- (top->evlist->stats.nr_lost_warned !=
- top->evlist->stats.nr_events[PERF_RECORD_LOST])) {
- top->evlist->stats.nr_lost_warned =
- top->evlist->stats.nr_events[PERF_RECORD_LOST];
+ (evlist__stats(top->evlist)->nr_lost_warned !=
+ evlist__stats(top->evlist)->nr_events[PERF_RECORD_LOST])) {
+ evlist__stats(top->evlist)->nr_lost_warned =
+ evlist__stats(top->evlist)->nr_events[PERF_RECORD_LOST];
color_fprintf(stdout, PERF_COLOR_RED,
"WARNING: LOST %d chunks, Check IO/CPU overload",
- top->evlist->stats.nr_lost_warned);
+ evlist__stats(top->evlist)->nr_lost_warned);
++printed;
}
@@ -447,7 +447,7 @@ static void perf_top__print_mapped_keys(struct perf_top *top)
fprintf(stdout, "\t[d] display refresh delay. \t(%d)\n", top->delay_secs);
fprintf(stdout, "\t[e] display entries (lines). \t(%d)\n", top->print_entries);
- if (top->evlist->core.nr_entries > 1)
+ if (evlist__nr_entries(top->evlist) > 1)
fprintf(stdout, "\t[E] active event counter. \t(%s)\n", evsel__name(top->sym_evsel));
fprintf(stdout, "\t[f] profile display filter (count). \t(%d)\n", top->count_filter);
@@ -482,7 +482,7 @@ static int perf_top__key_mapped(struct perf_top *top, int c)
case 'S':
return 1;
case 'E':
- return top->evlist->core.nr_entries > 1 ? 1 : 0;
+ return evlist__nr_entries(top->evlist) > 1 ? 1 : 0;
default:
break;
}
@@ -528,7 +528,7 @@ static bool perf_top__handle_keypress(struct perf_top *top, int c)
}
break;
case 'E':
- if (top->evlist->core.nr_entries > 1) {
+ if (evlist__nr_entries(top->evlist) > 1) {
/* Select 0 as the default event: */
int counter = 0;
@@ -539,7 +539,7 @@ static bool perf_top__handle_keypress(struct perf_top *top, int c)
prompt_integer(&counter, "Enter details event counter");
- if (counter >= top->evlist->core.nr_entries) {
+ if (counter >= evlist__nr_entries(top->evlist)) {
top->sym_evsel = evlist__first(top->evlist);
fprintf(stderr, "Sorry, no such event, using %s.\n", evsel__name(top->sym_evsel));
sleep(1);
@@ -598,8 +598,8 @@ static void perf_top__sort_new_samples(void *arg)
{
struct perf_top *t = arg;
- if (t->evlist->selected != NULL)
- t->sym_evsel = t->evlist->selected;
+ if (evlist__selected(t->evlist) != NULL)
+ t->sym_evsel = evlist__selected(t->evlist);
perf_top__resort_hists(t);
@@ -768,7 +768,7 @@ static void perf_event__process_sample(const struct perf_tool *tool,
if (!machine) {
pr_err("%u unprocessable samples recorded.\r",
- top->session->evlist->stats.nr_unprocessable_samples++);
+ evlist__stats(top->session->evlist)->nr_unprocessable_samples++);
return;
}
@@ -861,7 +861,7 @@ perf_top__process_lost(struct perf_top *top, union perf_event *event,
{
top->lost += event->lost.lost;
top->lost_total += event->lost.lost;
- evsel->evlist->stats.total_lost += event->lost.lost;
+ evlist__stats(evsel->evlist)->total_lost += event->lost.lost;
}
static void
@@ -871,7 +871,7 @@ perf_top__process_lost_samples(struct perf_top *top,
{
top->lost += event->lost_samples.lost;
top->lost_total += event->lost_samples.lost;
- evsel->evlist->stats.total_lost_samples += event->lost_samples.lost;
+ evlist__stats(evsel->evlist)->total_lost_samples += event->lost_samples.lost;
}
static u64 last_timestamp;
@@ -883,7 +883,7 @@ static void perf_top__mmap_read_idx(struct perf_top *top, int idx)
struct mmap *md;
union perf_event *event;
- md = opts->overwrite ? &evlist->overwrite_mmap[idx] : &evlist->mmap[idx];
+ md = opts->overwrite ? &evlist__overwrite_mmap(evlist)[idx] : &evlist__mmap(evlist)[idx];
if (perf_mmap__read_init(&md->core) < 0)
return;
@@ -920,7 +920,7 @@ static void perf_top__mmap_read(struct perf_top *top)
if (overwrite)
evlist__toggle_bkw_mmap(evlist, BKW_MMAP_DATA_PENDING);
- for (i = 0; i < top->evlist->core.nr_mmaps; i++)
+ for (i = 0; i < evlist__core(top->evlist)->nr_mmaps; i++)
perf_top__mmap_read_idx(top, i);
if (overwrite) {
@@ -1065,7 +1065,7 @@ static int perf_top__start_counters(struct perf_top *top)
goto out_err;
}
- if (evlist__mmap(evlist, opts->mmap_pages) < 0) {
+ if (evlist__do_mmap(evlist, opts->mmap_pages) < 0) {
ui__error("Failed to mmap with %d (%s)\n",
errno, str_error_r(errno, msg, sizeof(msg)));
goto out_err;
@@ -1218,10 +1218,10 @@ static int deliver_event(struct ordered_events *qe,
} else if (event->header.type == PERF_RECORD_LOST_SAMPLES) {
perf_top__process_lost_samples(top, event, evsel);
} else if (event->header.type < PERF_RECORD_MAX) {
- events_stats__inc(&session->evlist->stats, event->header.type);
+ events_stats__inc(evlist__stats(session->evlist), event->header.type);
machine__process_event(machine, event, &sample);
} else
- ++session->evlist->stats.nr_unknown_events;
+ ++evlist__stats(session->evlist)->nr_unknown_events;
ret = 0;
next_event:
@@ -1296,7 +1296,7 @@ static int __cmd_top(struct perf_top *top)
pr_debug("Couldn't synthesize cgroup events.\n");
machine__synthesize_threads(&top->session->machines.host, &opts->target,
- top->evlist->core.threads, true, false,
+ evlist__core(top->evlist)->threads, true, false,
top->nr_threads_synthesize);
perf_set_multithreaded();
@@ -1714,13 +1714,13 @@ int cmd_top(int argc, const char **argv)
if (target__none(target))
target->system_wide = true;
- if (!top.evlist->core.nr_entries) {
+ if (!evlist__nr_entries(top.evlist)) {
struct evlist *def_evlist = evlist__new_default(target, callchain_param.enabled);
if (!def_evlist)
goto out_put_evlist;
- evlist__splice_list_tail(top.evlist, &def_evlist->core.entries);
+ evlist__splice_list_tail(top.evlist, &evlist__core(def_evlist)->entries);
evlist__put(def_evlist);
}
@@ -1797,7 +1797,7 @@ int cmd_top(int argc, const char **argv)
top.session = NULL;
goto out_put_evlist;
}
- top.evlist->session = top.session;
+ evlist__set_session(top.evlist, top.session);
if (setup_sorting(top.evlist, perf_session__env(top.session)) < 0) {
if (sort_order)
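As the commit message notes, the one non-mechanical rename in the series is that `evlist__mmap()` becomes an accessor for the mmap array, while the function that actually performs the mapping becomes `evlist__do_mmap()` (hence the `evlist__do_mmap`/`evlist__do_munmap` call sites above and in the tests below). A sketch of the resulting shape, with simplified, hypothetical types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct mmap and struct evlist from
 * tools/perf/util/. */
struct mmap {
	void *base;
};

struct evlist {
	struct mmap *mmap;	/* array, one entry per ring buffer */
	int nr_mmaps;
};

/* After the rename, evlist__mmap() is a plain accessor... */
static inline struct mmap *evlist__mmap(struct evlist *evlist)
{
	return evlist->mmap;
}

/* ...while the action of mapping the ring buffers moves to
 * evlist__do_mmap() (body elided; the real one allocates and maps
 * the per-CPU buffers). */
int evlist__do_mmap(struct evlist *evlist, unsigned int pages);
```

Call sites that indexed the array, such as `&evlist->mmap[i]`, become `&evlist__mmap(evlist)[i]` with identical semantics.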
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 6ea935c13538..edd3eb408dd4 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2008,7 +2008,7 @@ static int trace__symbols_init(struct trace *trace, int argc, const char **argv,
goto out;
err = __machine__synthesize_threads(trace->host, &trace->tool, &trace->opts.target,
- evlist->core.threads, trace__tool_process,
+ evlist__core(evlist)->threads, trace__tool_process,
/*needs_mmap=*/callchain_param.enabled &&
!trace->summary_only,
/*mmap_data=*/false,
@@ -4165,7 +4165,7 @@ static int trace__set_filter_pids(struct trace *trace)
err = augmented_syscalls__set_filter_pids(trace->filter_pids.nr,
trace->filter_pids.entries);
}
- } else if (perf_thread_map__pid(trace->evlist->core.threads, 0) == -1) {
+ } else if (perf_thread_map__pid(evlist__core(trace->evlist)->threads, 0) == -1) {
err = trace__set_filter_loop_pids(trace);
}
@@ -4479,7 +4479,7 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
fprintf(trace->output, "Couldn't run the workload!\n");
goto out_put_evlist;
}
- workload_pid = evlist->workload.pid;
+ workload_pid = evlist__workload_pid(evlist);
}
err = evlist__open(evlist);
@@ -4531,7 +4531,7 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
goto out_error_apply_filters;
if (!trace->summary_only || !trace->summary_bpf) {
- err = evlist__mmap(evlist, trace->opts.mmap_pages);
+ err = evlist__do_mmap(evlist, trace->opts.mmap_pages);
if (err < 0)
goto out_error_mmap;
}
@@ -4550,8 +4550,8 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
if (trace->summary_bpf)
trace_start_bpf_summary();
- trace->multiple_threads = perf_thread_map__pid(evlist->core.threads, 0) == -1 ||
- perf_thread_map__nr(evlist->core.threads) > 1 ||
+ trace->multiple_threads = perf_thread_map__pid(evlist__core(evlist)->threads, 0) == -1 ||
+ perf_thread_map__nr(evlist__core(evlist)->threads) > 1 ||
evlist__first(evlist)->core.attr.inherit;
/*
@@ -4568,11 +4568,11 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
again:
before = trace->nr_events;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
union perf_event *event;
struct mmap *md;
- md = &evlist->mmap[i];
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
@@ -5272,7 +5272,7 @@ static int trace__parse_cgroups(const struct option *opt, const char *str, int u
{
struct trace *trace = opt->value;
- if (!list_empty(&trace->evlist->core.entries)) {
+ if (!list_empty(&evlist__core(trace->evlist)->entries)) {
struct option o = {
.value = &trace->evlist,
};
@@ -5545,7 +5545,7 @@ int cmd_trace(int argc, const char **argv)
* .perfconfig trace.add_events, and filter those out.
*/
if (!trace.trace_syscalls && !trace.trace_pgfaults &&
- trace.evlist->core.nr_entries == 0 /* Was --events used? */) {
+ evlist__nr_entries(trace.evlist) == 0 /* Was --events used? */) {
trace.trace_syscalls = true;
}
/*
@@ -5628,7 +5628,7 @@ int cmd_trace(int argc, const char **argv)
symbol_conf.use_callchain = true;
}
- if (trace.evlist->core.nr_entries > 0) {
+ if (evlist__nr_entries(trace.evlist) > 0) {
bool use_btf = false;
evlist__set_default_evsel_handler(trace.evlist, trace__event_handler);
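The accessors exist so that a reference-count-checking build can route every access through a check that the caller still holds a live reference, per the scheme documented at https://perfwiki.github.io/main/reference-count-checking/. A loose illustrative sketch of the idea — the names and layout here are invented, not the actual macros from tools/lib/perf/include/internal/rc_check.h:

```c
#include <assert.h>
#include <stdlib.h>

/* Invented types for illustration: the object is reached through a
 * checked indirection so use-after-put and unbalanced get/put pairs
 * trip an assertion instead of silently corrupting state. */
struct evlist_data {
	int refcount;
	int nr_entries;
};

struct evlist {		/* checked handle */
	struct evlist_data *data;
};

static inline struct evlist_data *evlist__check(struct evlist *evlist)
{
	assert(evlist->data != NULL);		/* catch use-after-put */
	assert(evlist->data->refcount > 0);
	return evlist->data;
}

static inline int evlist__nr_entries(struct evlist *evlist)
{
	return evlist__check(evlist)->nr_entries;
}

static inline void evlist__get(struct evlist *evlist)
{
	evlist__check(evlist)->refcount++;
}

static inline void evlist__put(struct evlist *evlist)
{
	if (--evlist__check(evlist)->refcount == 0) {
		free(evlist->data);
		evlist->data = NULL;
	}
}
```

Because every call site in the diff already goes through an accessor, enabling checking is a build-time switch with no further churn in the builtins or tests.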
diff --git a/tools/perf/tests/backward-ring-buffer.c b/tools/perf/tests/backward-ring-buffer.c
index 2b49b002d749..2735cc26d7ee 100644
--- a/tools/perf/tests/backward-ring-buffer.c
+++ b/tools/perf/tests/backward-ring-buffer.c
@@ -34,8 +34,8 @@ static int count_samples(struct evlist *evlist, int *sample_count,
{
int i;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- struct mmap *map = &evlist->overwrite_mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ struct mmap *map = &evlist__overwrite_mmap(evlist)[i];
union perf_event *event;
perf_mmap__read_init(&map->core);
@@ -65,7 +65,7 @@ static int do_test(struct evlist *evlist, int mmap_pages,
int err;
char sbuf[STRERR_BUFSIZE];
- err = evlist__mmap(evlist, mmap_pages);
+ err = evlist__do_mmap(evlist, mmap_pages);
if (err < 0) {
pr_debug("evlist__mmap: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -77,7 +77,7 @@ static int do_test(struct evlist *evlist, int mmap_pages,
evlist__disable(evlist);
err = count_samples(evlist, sample_count, comm_count);
- evlist__munmap(evlist);
+ evlist__do_munmap(evlist);
return err;
}
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index fc65a17f67f7..28c068a35ada 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -589,8 +589,8 @@ static int process_events(struct machine *machine, struct evlist *evlist,
struct mmap *md;
int i, ret;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- md = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
@@ -778,7 +778,7 @@ static int do_test_code_reading(bool try_kcore)
goto out_put;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
str = events[evidx];
pr_debug("Parsing event '%s'\n", str);
@@ -806,7 +806,7 @@ static int do_test_code_reading(bool try_kcore)
pr_debug("perf_evlist__open() failed!\n%s\n", errbuf);
}
- perf_evlist__set_maps(&evlist->core, NULL, NULL);
+ perf_evlist__set_maps(evlist__core(evlist), NULL, NULL);
evlist__put(evlist);
evlist = NULL;
continue;
@@ -817,7 +817,7 @@ static int do_test_code_reading(bool try_kcore)
if (events[evidx] == NULL)
goto out_put;
- ret = evlist__mmap(evlist, UINT_MAX);
+ ret = evlist__do_mmap(evlist, UINT_MAX);
if (ret < 0) {
pr_debug("evlist__mmap failed\n");
goto out_put;
diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
index 94ab54ecd3f9..56dd37ca760e 100644
--- a/tools/perf/tests/event-times.c
+++ b/tools/perf/tests/event-times.c
@@ -50,7 +50,7 @@ static int attach__enable_on_exec(struct evlist *evlist)
static int detach__enable_on_exec(struct evlist *evlist)
{
- waitpid(evlist->workload.pid, NULL, 0);
+ waitpid(evlist__workload_pid(evlist), NULL, 0);
return 0;
}
diff --git a/tools/perf/tests/event_update.c b/tools/perf/tests/event_update.c
index 73141b122d2f..220cc0347747 100644
--- a/tools/perf/tests/event_update.c
+++ b/tools/perf/tests/event_update.c
@@ -92,7 +92,7 @@ static int test__event_update(struct test_suite *test __maybe_unused, int subtes
TEST_ASSERT_VAL("failed to allocate ids",
!perf_evsel__alloc_id(&evsel->core, 1, 1));
- perf_evlist__id_add(&evlist->core, &evsel->core, 0, 0, 123);
+ perf_evlist__id_add(evlist__core(evlist), &evsel->core, 0, 0, 123);
free((char *)evsel->unit);
evsel->unit = strdup("KRAVA");
diff --git a/tools/perf/tests/expand-cgroup.c b/tools/perf/tests/expand-cgroup.c
index a7a445f12693..549fbd473ab7 100644
--- a/tools/perf/tests/expand-cgroup.c
+++ b/tools/perf/tests/expand-cgroup.c
@@ -28,7 +28,7 @@ static int test_expand_events(struct evlist *evlist)
TEST_ASSERT_VAL("evlist is empty", !evlist__empty(evlist));
- nr_events = evlist->core.nr_entries;
+ nr_events = evlist__nr_entries(evlist);
ev_name = calloc(nr_events, sizeof(*ev_name));
if (ev_name == NULL) {
pr_debug("memory allocation failure\n");
@@ -54,7 +54,7 @@ static int test_expand_events(struct evlist *evlist)
}
ret = TEST_FAIL;
- if (evlist->core.nr_entries != nr_events * nr_cgrps) {
+ if (evlist__nr_entries(evlist) != nr_events * nr_cgrps) {
pr_debug("event count doesn't match\n");
goto out;
}
diff --git a/tools/perf/tests/hwmon_pmu.c b/tools/perf/tests/hwmon_pmu.c
index 1b60c3a900f1..9dfc890841bf 100644
--- a/tools/perf/tests/hwmon_pmu.c
+++ b/tools/perf/tests/hwmon_pmu.c
@@ -183,9 +183,10 @@ static int do_test(size_t i, bool with_pmu, bool with_alias)
}
ret = TEST_OK;
- if (with_pmu ? (evlist->core.nr_entries != 1) : (evlist->core.nr_entries < 1)) {
+ if (with_pmu ? (evlist__nr_entries(evlist) != 1)
+ : (evlist__nr_entries(evlist) < 1)) {
pr_debug("FAILED %s:%d Unexpected number of events for '%s' of %d\n",
- __FILE__, __LINE__, str, evlist->core.nr_entries);
+ __FILE__, __LINE__, str, evlist__nr_entries(evlist));
ret = TEST_FAIL;
goto out;
}
diff --git a/tools/perf/tests/keep-tracking.c b/tools/perf/tests/keep-tracking.c
index 51cfd6522867..b760041bed30 100644
--- a/tools/perf/tests/keep-tracking.c
+++ b/tools/perf/tests/keep-tracking.c
@@ -37,8 +37,8 @@ static int find_comm(struct evlist *evlist, const char *comm)
int i, found;
found = 0;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- md = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
while ((event = perf_mmap__read_event(&md->core)) != NULL) {
@@ -87,7 +87,7 @@ static int test__keep_tracking(struct test_suite *test __maybe_unused, int subte
evlist = evlist__new();
CHECK_NOT_NULL__(evlist);
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
CHECK__(parse_event(evlist, "dummy:u"));
CHECK__(parse_event(evlist, "cpu-cycles:u"));
@@ -106,7 +106,7 @@ static int test__keep_tracking(struct test_suite *test __maybe_unused, int subte
goto out_err;
}
- CHECK__(evlist__mmap(evlist, UINT_MAX));
+ CHECK__(evlist__do_mmap(evlist, UINT_MAX));
/*
* First, test that a 'comm' event can be found when the event is
diff --git a/tools/perf/tests/mmap-basic.c b/tools/perf/tests/mmap-basic.c
index e6501791c505..e2e65f344c72 100644
--- a/tools/perf/tests/mmap-basic.c
+++ b/tools/perf/tests/mmap-basic.c
@@ -81,7 +81,7 @@ static int test__basic_mmap(struct test_suite *test __maybe_unused, int subtest
goto out_free_cpus;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
for (i = 0; i < nsyscalls; ++i) {
char name[64];
@@ -113,7 +113,7 @@ static int test__basic_mmap(struct test_suite *test __maybe_unused, int subtest
expected_nr_events[i] = 1 + rand() % 127;
}
- if (evlist__mmap(evlist, 128) < 0) {
+ if (evlist__do_mmap(evlist, 128) < 0) {
pr_debug("failed to mmap events: %d (%s)\n", errno,
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_put_evlist;
@@ -124,7 +124,7 @@ static int test__basic_mmap(struct test_suite *test __maybe_unused, int subtest
syscalls[i]();
}
- md = &evlist->mmap[0];
+ md = &evlist__mmap(evlist)[0];
if (perf_mmap__read_init(&md->core) < 0)
goto out_init;
diff --git a/tools/perf/tests/openat-syscall-tp-fields.c b/tools/perf/tests/openat-syscall-tp-fields.c
index 3ff595c7a86a..7f5eaa492bab 100644
--- a/tools/perf/tests/openat-syscall-tp-fields.c
+++ b/tools/perf/tests/openat-syscall-tp-fields.c
@@ -64,7 +64,7 @@ static int test__syscall_openat_tp_fields(struct test_suite *test __maybe_unused
evsel__config(evsel, &opts, NULL);
- perf_thread_map__set_pid(evlist->core.threads, 0, getpid());
+ perf_thread_map__set_pid(evlist__core(evlist)->threads, 0, getpid());
err = evlist__open(evlist);
if (err < 0) {
@@ -73,7 +73,7 @@ static int test__syscall_openat_tp_fields(struct test_suite *test __maybe_unused
goto out_put_evlist;
}
- err = evlist__mmap(evlist, UINT_MAX);
+ err = evlist__do_mmap(evlist, UINT_MAX);
if (err < 0) {
pr_debug("evlist__mmap: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -90,11 +90,11 @@ static int test__syscall_openat_tp_fields(struct test_suite *test __maybe_unused
while (1) {
int before = nr_events;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
union perf_event *event;
struct mmap *md;
- md = &evlist->mmap[i];
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
index 19dc7b7475d2..0ad0273da923 100644
--- a/tools/perf/tests/parse-events.c
+++ b/tools/perf/tests/parse-events.c
@@ -109,7 +109,7 @@ static int test__checkevent_tracepoint(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVLIST("wrong number of groups", 0 == evlist__nr_groups(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_TRACEPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong sample_type",
@@ -122,7 +122,7 @@ static int test__checkevent_tracepoint_multi(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", evlist->core.nr_entries > 1, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", evlist__nr_entries(evlist) > 1, evlist);
TEST_ASSERT_EVLIST("wrong number of groups", 0 == evlist__nr_groups(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -144,7 +144,7 @@ static int test__checkevent_raw(struct evlist *evlist)
struct evsel *evsel;
bool raw_type_match = false;
- TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
struct perf_pmu *pmu __maybe_unused = NULL;
@@ -182,7 +182,7 @@ static int test__checkevent_numeric(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", 1 == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 1 == evsel->core.attr.config, evsel);
return TEST_OK;
@@ -193,7 +193,7 @@ static int test__checkevent_symbolic_name(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("unexpected event",
@@ -207,7 +207,7 @@ static int test__checkevent_symbolic_name_config(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("unexpected event",
@@ -228,7 +228,7 @@ static int test__checkevent_symbolic_alias(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type/config", evsel__match(evsel, SOFTWARE, SW_PAGE_FAULTS),
evsel);
return TEST_OK;
@@ -238,7 +238,7 @@ static int test__checkevent_genhw(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 0 != evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_HW_CACHE == evsel->core.attr.type, evsel);
@@ -251,7 +251,7 @@ static int test__checkevent_breakpoint(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type",
@@ -265,7 +265,7 @@ static int test__checkevent_breakpoint_x(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type", HW_BREAKPOINT_X == evsel->core.attr.bp_type, evsel);
@@ -278,7 +278,7 @@ static int test__checkevent_breakpoint_r(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type", HW_BREAKPOINT_R == evsel->core.attr.bp_type, evsel);
@@ -290,7 +290,7 @@ static int test__checkevent_breakpoint_w(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type", HW_BREAKPOINT_W == evsel->core.attr.bp_type, evsel);
@@ -302,7 +302,7 @@ static int test__checkevent_breakpoint_rw(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type",
@@ -316,7 +316,7 @@ static int test__checkevent_tracepoint_modifier(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong exclude_user", evsel->core.attr.exclude_user, evsel);
TEST_ASSERT_EVSEL("wrong exclude_kernel", !evsel->core.attr.exclude_kernel, evsel);
TEST_ASSERT_EVSEL("wrong exclude_hv", evsel->core.attr.exclude_hv, evsel);
@@ -330,7 +330,7 @@ test__checkevent_tracepoint_multi_modifier(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", evlist->core.nr_entries > 1, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", evlist__nr_entries(evlist) > 1, evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("wrong exclude_user", !evsel->core.attr.exclude_user, evsel);
@@ -346,7 +346,7 @@ static int test__checkevent_raw_modifier(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("wrong exclude_user", evsel->core.attr.exclude_user, evsel);
@@ -361,7 +361,7 @@ static int test__checkevent_numeric_modifier(struct evlist *evlist)
{
struct evsel *evsel;
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("wrong exclude_user", evsel->core.attr.exclude_user, evsel);
@@ -377,7 +377,7 @@ static int test__checkevent_symbolic_name_modifier(struct evlist *evlist)
struct evsel *evsel;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -394,7 +394,7 @@ static int test__checkevent_exclude_host_modifier(struct evlist *evlist)
struct evsel *evsel;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -409,7 +409,7 @@ static int test__checkevent_exclude_guest_modifier(struct evlist *evlist)
struct evsel *evsel;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -423,7 +423,8 @@ static int test__checkevent_symbolic_alias_modifier(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries",
+ 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong exclude_user", !evsel->core.attr.exclude_user, evsel);
TEST_ASSERT_EVSEL("wrong exclude_kernel", evsel->core.attr.exclude_kernel, evsel);
TEST_ASSERT_EVSEL("wrong exclude_hv", evsel->core.attr.exclude_hv, evsel);
@@ -437,7 +438,7 @@ static int test__checkevent_genhw_modifier(struct evlist *evlist)
struct evsel *evsel;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -454,7 +455,7 @@ static int test__checkevent_exclude_idle_modifier(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong exclude idle", evsel->core.attr.exclude_idle, evsel);
@@ -473,7 +474,7 @@ static int test__checkevent_exclude_idle_modifier_1(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong exclude idle", evsel->core.attr.exclude_idle, evsel);
@@ -622,7 +623,7 @@ static int test__checkevent_breakpoint_2_events(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVSEL("wrong number of entries", 2 == evlist->core.nr_entries, evsel);
+ TEST_ASSERT_EVSEL("wrong number of entries", 2 == evlist__nr_entries(evlist), evsel);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong name", evsel__name_is(evsel, "breakpoint1"), evsel);
@@ -641,7 +642,7 @@ static int test__checkevent_pmu(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
struct perf_pmu *core_pmu = perf_pmus__find_core_pmu();
- TEST_ASSERT_EVSEL("wrong number of entries", 1 == evlist->core.nr_entries, evsel);
+ TEST_ASSERT_EVSEL("wrong number of entries", 1 == evlist__nr_entries(evlist), evsel);
TEST_ASSERT_EVSEL("wrong type", core_pmu->type == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", test_hw_config(evsel, 10), evsel);
TEST_ASSERT_EVSEL("wrong config1", 1 == evsel->core.attr.config1, evsel);
@@ -661,7 +662,7 @@ static int test__checkevent_list(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVSEL("wrong number of entries", 3 <= evlist->core.nr_entries, evsel);
+ TEST_ASSERT_EVSEL("wrong number of entries", 3 <= evlist__nr_entries(evlist), evsel);
/* r1 */
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_TRACEPOINT != evsel->core.attr.type, evsel);
@@ -707,14 +708,15 @@ static int test__checkevent_pmu_name(struct evlist *evlist)
char buf[256];
/* default_core/config=1,name=krava/u */
- TEST_ASSERT_EVLIST("wrong number of entries", 2 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries",
+ 2 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", core_pmu->type == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 1 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong name", evsel__name_is(evsel, "krava"), evsel);
/* default_core/config=2/u" */
evsel = evsel__next(evsel);
- TEST_ASSERT_EVSEL("wrong number of entries", 2 == evlist->core.nr_entries, evsel);
+ TEST_ASSERT_EVSEL("wrong number of entries", 2 == evlist__nr_entries(evlist), evsel);
TEST_ASSERT_EVSEL("wrong type", core_pmu->type == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 2 == evsel->core.attr.config, evsel);
snprintf(buf, sizeof(buf), "%s/config=2/u", core_pmu->name);
@@ -729,7 +731,8 @@ static int test__checkevent_pmu_partial_time_callgraph(struct evlist *evlist)
struct perf_pmu *core_pmu = perf_pmus__find_core_pmu();
/* default_core/config=1,call-graph=fp,time,period=100000/ */
- TEST_ASSERT_EVLIST("wrong number of entries", 2 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries",
+ 2 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", core_pmu->type == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 1 == evsel->core.attr.config, evsel);
/*
@@ -760,7 +763,7 @@ static int test__checkevent_pmu_events(struct evlist *evlist)
struct evsel *evsel;
struct perf_pmu *core_pmu = perf_pmus__find_core_pmu();
- TEST_ASSERT_EVLIST("wrong number of entries", 1 <= evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 <= evlist__nr_entries(evlist), evlist);
evlist__for_each_entry(evlist, evsel) {
TEST_ASSERT_EVSEL("wrong type",
@@ -787,8 +790,9 @@ static int test__checkevent_pmu_events_mix(struct evlist *evlist)
* The wild card event will be opened at least once, but it may be
* opened on each core PMU.
*/
- TEST_ASSERT_EVLIST("wrong number of entries", evlist->core.nr_entries >= 2, evlist);
- for (int i = 0; i < evlist->core.nr_entries - 1; i++) {
+ TEST_ASSERT_EVLIST("wrong number of entries",
+ evlist__nr_entries(evlist) >= 2, evlist);
+ for (int i = 0; i < evlist__nr_entries(evlist) - 1; i++) {
evsel = (i == 0 ? evlist__first(evlist) : evsel__next(evsel));
/* pmu-event:u */
TEST_ASSERT_EVSEL("wrong exclude_user", !evsel->core.attr.exclude_user, evsel);
@@ -905,7 +909,7 @@ static int test__group1(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (num_core_entries(evlist) * 2),
+ evlist__nr_entries(evlist) == (num_core_entries(evlist) * 2),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == num_core_entries(evlist),
@@ -950,7 +954,7 @@ static int test__group2(struct evlist *evlist)
struct evsel *evsel, *leader = NULL;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist) + 1),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist) + 1),
evlist);
/*
* TODO: Currently the software event won't be grouped with the hardware
@@ -1018,7 +1022,7 @@ static int test__group3(struct evlist *evlist __maybe_unused)
struct evsel *evsel, *group1_leader = NULL, *group2_leader = NULL;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (3 * perf_pmus__num_core_pmus() + 2),
+ evlist__nr_entries(evlist) == (3 * perf_pmus__num_core_pmus() + 2),
evlist);
/*
* Currently the software event won't be grouped with the hardware event
@@ -1144,7 +1148,7 @@ static int test__group4(struct evlist *evlist __maybe_unused)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (num_core_entries(evlist) * 2),
+ evlist__nr_entries(evlist) == (num_core_entries(evlist) * 2),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
num_core_entries(evlist) == evlist__nr_groups(evlist),
@@ -1191,7 +1195,7 @@ static int test__group5(struct evlist *evlist __maybe_unused)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (5 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (5 * num_core_entries(evlist)),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == (2 * num_core_entries(evlist)),
@@ -1284,7 +1288,7 @@ static int test__group_gh1(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist)),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == num_core_entries(evlist),
@@ -1329,7 +1333,7 @@ static int test__group_gh2(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist)),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == num_core_entries(evlist),
@@ -1374,7 +1378,7 @@ static int test__group_gh3(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist)),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == num_core_entries(evlist),
@@ -1419,7 +1423,7 @@ static int test__group_gh4(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist)),
evlist);
TEST_ASSERT_EVLIST("wrong number of groups",
evlist__nr_groups(evlist) == num_core_entries(evlist),
@@ -1464,7 +1468,7 @@ static int test__leader_sample1(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (3 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (3 * num_core_entries(evlist)),
evlist);
for (int i = 0; i < num_core_entries(evlist); i++) {
@@ -1520,7 +1524,7 @@ static int test__leader_sample2(struct evlist *evlist __maybe_unused)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (2 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (2 * num_core_entries(evlist)),
evlist);
for (int i = 0; i < num_core_entries(evlist); i++) {
@@ -1562,7 +1566,7 @@ static int test__checkevent_pinned_modifier(struct evlist *evlist)
struct evsel *evsel = NULL;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
for (int i = 0; i < num_core_entries(evlist); i++) {
@@ -1581,7 +1585,7 @@ static int test__pinned_group(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == (3 * num_core_entries(evlist)),
+ evlist__nr_entries(evlist) == (3 * num_core_entries(evlist)),
evlist);
for (int i = 0; i < num_core_entries(evlist); i++) {
@@ -1618,7 +1622,7 @@ static int test__checkevent_exclusive_modifier(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong exclude_user", !evsel->core.attr.exclude_user, evsel);
TEST_ASSERT_EVSEL("wrong exclude_kernel", evsel->core.attr.exclude_kernel, evsel);
@@ -1634,7 +1638,7 @@ static int test__exclusive_group(struct evlist *evlist)
struct evsel *evsel = NULL, *leader;
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == 3 * num_core_entries(evlist),
+ evlist__nr_entries(evlist) == 3 * num_core_entries(evlist),
evlist);
for (int i = 0; i < num_core_entries(evlist); i++) {
@@ -1669,7 +1673,7 @@ static int test__checkevent_breakpoint_len(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type",
@@ -1684,7 +1688,7 @@ static int test__checkevent_breakpoint_len_w(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_BREAKPOINT == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0 == evsel->core.attr.config, evsel);
TEST_ASSERT_EVSEL("wrong bp_type", HW_BREAKPOINT_W == evsel->core.attr.bp_type, evsel);
@@ -1698,7 +1702,7 @@ test__checkevent_breakpoint_len_rw_modifier(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong exclude_user", !evsel->core.attr.exclude_user, evsel);
TEST_ASSERT_EVSEL("wrong exclude_kernel", evsel->core.attr.exclude_kernel, evsel);
TEST_ASSERT_EVSEL("wrong exclude_hv", evsel->core.attr.exclude_hv, evsel);
@@ -1712,7 +1716,7 @@ static int test__checkevent_precise_max_modifier(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == 1 + num_core_entries(evlist),
+ evlist__nr_entries(evlist) == 1 + num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong type/config", evsel__match(evsel, SOFTWARE, SW_TASK_CLOCK), evsel);
return TEST_OK;
@@ -1723,7 +1727,7 @@ static int test__checkevent_config_symbol(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong name setting", evsel__name_is(evsel, "insn"), evsel);
return TEST_OK;
@@ -1733,7 +1737,7 @@ static int test__checkevent_config_raw(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong name setting", evsel__name_is(evsel, "rawpmu"), evsel);
return TEST_OK;
}
@@ -1742,7 +1746,7 @@ static int test__checkevent_config_num(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong name setting", evsel__name_is(evsel, "numpmu"), evsel);
return TEST_OK;
}
@@ -1752,7 +1756,7 @@ static int test__checkevent_config_cache(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong name setting", evsel__name_is(evsel, "cachepmu"), evsel);
return test__checkevent_genhw(evlist);
@@ -1777,7 +1781,7 @@ static int test__intel_pt(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong name setting", evsel__name_is(evsel, "intel_pt//u"), evsel);
return TEST_OK;
}
@@ -1798,7 +1802,8 @@ static int test__ratio_to_prev(struct evlist *evlist)
{
struct evsel *evsel, *leader;
- TEST_ASSERT_VAL("wrong number of entries", 2 * perf_pmus__num_core_pmus() == evlist->core.nr_entries);
+ TEST_ASSERT_VAL("wrong number of entries",
+ 2 * perf_pmus__num_core_pmus() == evlist__nr_entries(evlist));
evlist__for_each_entry(evlist, evsel) {
if (evsel != evsel__leader(evsel) ||
@@ -1842,7 +1847,7 @@ static int test__checkevent_complex_name(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("wrong complex name parsing",
evsel__name_is(evsel,
@@ -1855,7 +1860,7 @@ static int test__checkevent_raw_pmu(struct evlist *evlist)
{
struct evsel *evsel = evlist__first(evlist);
- TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist->core.nr_entries, evlist);
+ TEST_ASSERT_EVLIST("wrong number of entries", 1 == evlist__nr_entries(evlist), evlist);
TEST_ASSERT_EVSEL("wrong type", PERF_TYPE_SOFTWARE == evsel->core.attr.type, evsel);
TEST_ASSERT_EVSEL("wrong config", 0x1a == evsel->core.attr.config, evsel);
return TEST_OK;
@@ -1866,7 +1871,7 @@ static int test__sym_event_slash(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("unexpected event", evsel__match(evsel, HARDWARE, HW_CPU_CYCLES), evsel);
TEST_ASSERT_EVSEL("wrong exclude_kernel", evsel->core.attr.exclude_kernel, evsel);
@@ -1878,7 +1883,7 @@ static int test__sym_event_dc(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("unexpected event", evsel__match(evsel, HARDWARE, HW_CPU_CYCLES), evsel);
TEST_ASSERT_EVSEL("wrong exclude_user", evsel->core.attr.exclude_user, evsel);
@@ -1890,7 +1895,7 @@ static int test__term_equal_term(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("unexpected event", evsel__match(evsel, HARDWARE, HW_CPU_CYCLES), evsel);
TEST_ASSERT_EVSEL("wrong name setting", strcmp(evsel->name, "name") == 0, evsel);
@@ -1902,7 +1907,7 @@ static int test__term_equal_legacy(struct evlist *evlist)
struct evsel *evsel = evlist__first(evlist);
TEST_ASSERT_EVLIST("wrong number of entries",
- evlist->core.nr_entries == num_core_entries(evlist),
+ evlist__nr_entries(evlist) == num_core_entries(evlist),
evlist);
TEST_ASSERT_EVSEL("unexpected event", evsel__match(evsel, HARDWARE, HW_CPU_CYCLES), evsel);
TEST_ASSERT_EVSEL("wrong name setting", strcmp(evsel->name, "l1d") == 0, evsel);
@@ -1958,7 +1963,7 @@ static int count_tracepoints(void)
static int test__all_tracepoints(struct evlist *evlist)
{
TEST_ASSERT_VAL("wrong events count",
- count_tracepoints() == evlist->core.nr_entries);
+ count_tracepoints() == evlist__nr_entries(evlist));
return test__checkevent_tracepoint_multi(evlist);
}
diff --git a/tools/perf/tests/parse-metric.c b/tools/perf/tests/parse-metric.c
index 3f0ec839c056..8f9211eaf341 100644
--- a/tools/perf/tests/parse-metric.c
+++ b/tools/perf/tests/parse-metric.c
@@ -53,7 +53,7 @@ static double compute_single(struct evlist *evlist, const char *name)
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
- me = metricgroup__lookup(&evlist->metric_events, evsel, false);
+ me = metricgroup__lookup(evlist__metric_events(evlist), evsel, false);
if (me != NULL) {
list_for_each_entry (mexp, &me->head, nd) {
if (strcmp(mexp->metric_name, name))
@@ -88,7 +88,7 @@ static int __compute_metric(const char *name, struct value *vals,
return -ENOMEM;
}
- perf_evlist__set_maps(&evlist->core, cpus, NULL);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, NULL);
/* Parse the metric into metric_events list. */
pme_test = find_core_metrics_table("testarch", "testcpu");
diff --git a/tools/perf/tests/perf-record.c b/tools/perf/tests/perf-record.c
index f95752b2ed1c..0bd418e1cdc6 100644
--- a/tools/perf/tests/perf-record.c
+++ b/tools/perf/tests/perf-record.c
@@ -129,7 +129,7 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
evsel__set_sample_bit(evsel, TIME);
evlist__config(evlist, &opts, NULL);
- err = sched__get_first_possible_cpu(evlist->workload.pid, cpu_mask);
+ err = sched__get_first_possible_cpu(evlist__workload_pid(evlist), cpu_mask);
if (err < 0) {
pr_debug("sched__get_first_possible_cpu: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -142,7 +142,7 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
/*
* So that we can check perf_sample.cpu on all the samples.
*/
- if (sched_setaffinity(evlist->workload.pid, cpu_mask_size, cpu_mask) < 0) {
+ if (sched_setaffinity(evlist__workload_pid(evlist), cpu_mask_size, cpu_mask) < 0) {
pr_debug("sched_setaffinity: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
evlist__cancel_workload(evlist);
@@ -166,7 +166,7 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
* fds in the same CPU to be injected in the same mmap ring buffer
* (using ioctl(PERF_EVENT_IOC_SET_OUTPUT)).
*/
- err = evlist__mmap(evlist, opts.mmap_pages);
+ err = evlist__do_mmap(evlist, opts.mmap_pages);
if (err < 0) {
pr_debug("evlist__mmap: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -188,11 +188,11 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
while (1) {
int before = total_events;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
union perf_event *event;
struct mmap *md;
- md = &evlist->mmap[i];
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
@@ -231,15 +231,15 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
++errs;
}
- if ((pid_t)sample.pid != evlist->workload.pid) {
+ if ((pid_t)sample.pid != evlist__workload_pid(evlist)) {
pr_debug("%s with unexpected pid, expected %d, got %d\n",
- name, evlist->workload.pid, sample.pid);
+ name, evlist__workload_pid(evlist), sample.pid);
++errs;
}
- if ((pid_t)sample.tid != evlist->workload.pid) {
+ if ((pid_t)sample.tid != evlist__workload_pid(evlist)) {
pr_debug("%s with unexpected tid, expected %d, got %d\n",
- name, evlist->workload.pid, sample.tid);
+ name, evlist__workload_pid(evlist), sample.tid);
++errs;
}
@@ -248,7 +248,7 @@ static int test__PERF_RECORD(struct test_suite *test __maybe_unused, int subtest
type == PERF_RECORD_MMAP2 ||
type == PERF_RECORD_FORK ||
type == PERF_RECORD_EXIT) &&
- (pid_t)event->comm.pid != evlist->workload.pid) {
+ (pid_t)event->comm.pid != evlist__workload_pid(evlist)) {
pr_debug("%s with unexpected pid/tid\n", name);
++errs;
}
diff --git a/tools/perf/tests/perf-time-to-tsc.c b/tools/perf/tests/perf-time-to-tsc.c
index d3538fa20af3..f8f71fdd32b1 100644
--- a/tools/perf/tests/perf-time-to-tsc.c
+++ b/tools/perf/tests/perf-time-to-tsc.c
@@ -99,7 +99,7 @@ static int test__perf_time_to_tsc(struct test_suite *test __maybe_unused, int su
evlist = evlist__new();
CHECK_NOT_NULL__(evlist);
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
CHECK__(parse_event(evlist, "cpu-cycles:u"));
@@ -121,9 +121,9 @@ static int test__perf_time_to_tsc(struct test_suite *test __maybe_unused, int su
goto out_err;
}
- CHECK__(evlist__mmap(evlist, UINT_MAX));
+ CHECK__(evlist__do_mmap(evlist, UINT_MAX));
- pc = evlist->mmap[0].core.base;
+ pc = evlist__mmap(evlist)[0].core.base;
ret = perf_read_tsc_conversion(pc, &tc);
if (ret) {
if (ret == -EOPNOTSUPP) {
@@ -145,8 +145,8 @@ static int test__perf_time_to_tsc(struct test_suite *test __maybe_unused, int su
evlist__disable(evlist);
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- md = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
diff --git a/tools/perf/tests/pfm.c b/tools/perf/tests/pfm.c
index 8d19b1bfecbc..f7bf55be5e6e 100644
--- a/tools/perf/tests/pfm.c
+++ b/tools/perf/tests/pfm.c
@@ -69,12 +69,12 @@ static int test__pfm_events(struct test_suite *test __maybe_unused,
if (evlist == NULL)
return -ENOMEM;
- opt.value = evlist;
+ opt.value = &evlist;
parse_libpfm_events_option(&opt,
table[i].events,
0);
TEST_ASSERT_EQUAL(table[i].events,
- count_pfm_events(&evlist->core),
+ count_pfm_events(evlist__core(evlist)),
table[i].nr_events);
TEST_ASSERT_EQUAL(table[i].events,
evlist__nr_groups(evlist),
@@ -154,12 +154,12 @@ static int test__pfm_group(struct test_suite *test __maybe_unused,
if (evlist == NULL)
return -ENOMEM;
- opt.value = evlist;
+ opt.value = &evlist;
parse_libpfm_events_option(&opt,
table[i].events,
0);
TEST_ASSERT_EQUAL(table[i].events,
- count_pfm_events(&evlist->core),
+ count_pfm_events(evlist__core(evlist)),
table[i].nr_events);
TEST_ASSERT_EQUAL(table[i].events,
evlist__nr_groups(evlist),
diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
index 236bbbad5773..a66976ee093f 100644
--- a/tools/perf/tests/pmu-events.c
+++ b/tools/perf/tests/pmu-events.c
@@ -848,7 +848,7 @@ static int test__parsing_callback(const struct pmu_metric *pm,
return -ENOMEM;
}
- perf_evlist__set_maps(&evlist->core, cpus, NULL);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, NULL);
err = metricgroup__parse_groups_test(evlist, table, pm->metric_name);
if (err) {
@@ -875,7 +875,8 @@ static int test__parsing_callback(const struct pmu_metric *pm,
k++;
}
evlist__for_each_entry(evlist, evsel) {
- struct metric_event *me = metricgroup__lookup(&evlist->metric_events, evsel, false);
+ struct metric_event *me = metricgroup__lookup(evlist__metric_events(evlist),
+ evsel, false);
if (me != NULL) {
struct metric_expr *mexp;
diff --git a/tools/perf/tests/sample-parsing.c b/tools/perf/tests/sample-parsing.c
index 55f0b73ca20e..6db717e562d5 100644
--- a/tools/perf/tests/sample-parsing.c
+++ b/tools/perf/tests/sample-parsing.c
@@ -205,15 +205,11 @@ static bool samples_same(struct perf_sample *s1,
static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
{
- struct evsel evsel = {
- .needs_swap = false,
- .core = {
.attr = {
- .sample_type = sample_type,
- .read_format = read_format,
- },
- },
+ struct perf_event_attr attr = {
+ .sample_type = sample_type,
+ .read_format = read_format,
};
+ struct evsel *evsel;
union perf_event *event;
union {
struct ip_callchain callchain;
@@ -287,16 +283,17 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
size_t i, sz, bufsz;
int err, ret = -1;
+ evsel = evsel__new(&attr);
+ if (evsel == NULL)
+ return -1;
perf_sample__init(&sample_out, /*all=*/false);
perf_sample__init(&sample_out_endian, /*all=*/false);
if (sample_type & PERF_SAMPLE_REGS_USER)
- evsel.core.attr.sample_regs_user = sample_regs;
+ evsel->core.attr.sample_regs_user = sample_regs;
if (sample_type & PERF_SAMPLE_REGS_INTR)
- evsel.core.attr.sample_regs_intr = sample_regs;
+ evsel->core.attr.sample_regs_intr = sample_regs;
if (sample_type & PERF_SAMPLE_BRANCH_STACK)
- evsel.core.attr.branch_sample_type |= PERF_SAMPLE_BRANCH_HW_INDEX;
+ evsel->core.attr.branch_sample_type |= PERF_SAMPLE_BRANCH_HW_INDEX;
for (i = 0; i < sizeof(regs); i++)
*(i + (u8 *)regs) = i & 0xfe;
@@ -311,7 +308,7 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
}
sz = perf_event__sample_event_size(&sample, sample_type, read_format,
- evsel.core.attr.branch_sample_type);
+ evsel->core.attr.branch_sample_type);
bufsz = sz + 4096; /* Add a bit for overrun checking */
event = malloc(bufsz);
if (!event) {
@@ -325,7 +322,7 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
event->header.size = sz;
err = perf_event__synthesize_sample(event, sample_type, read_format,
- evsel.core.attr.branch_sample_type, &sample);
+ evsel->core.attr.branch_sample_type, &sample);
if (err) {
pr_debug("%s failed for sample_type %#"PRIx64", error %d\n",
"perf_event__synthesize_sample", sample_type, err);
@@ -343,32 +340,32 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
goto out_free;
}
- evsel.sample_size = __evsel__sample_size(sample_type);
+ evsel->sample_size = __evsel__sample_size(sample_type);
- err = evsel__parse_sample(&evsel, event, &sample_out);
+ err = evsel__parse_sample(evsel, event, &sample_out);
if (err) {
pr_debug("%s failed for sample_type %#"PRIx64", error %d\n",
"evsel__parse_sample", sample_type, err);
goto out_free;
}
- if (!samples_same(&sample, &sample_out, sample_type, read_format, evsel.needs_swap)) {
+ if (!samples_same(&sample, &sample_out, sample_type, read_format, evsel->needs_swap)) {
pr_debug("parsing failed for sample_type %#"PRIx64"\n",
sample_type);
goto out_free;
}
if (sample_type == PERF_SAMPLE_BRANCH_STACK) {
- evsel.needs_swap = true;
- evsel.sample_size = __evsel__sample_size(sample_type);
- err = evsel__parse_sample(&evsel, event, &sample_out_endian);
+ evsel->needs_swap = true;
+ evsel->sample_size = __evsel__sample_size(sample_type);
+ err = evsel__parse_sample(evsel, event, &sample_out_endian);
if (err) {
pr_debug("%s failed for sample_type %#"PRIx64", error %d\n",
"evsel__parse_sample", sample_type, err);
goto out_free;
}
- if (!samples_same(&sample, &sample_out_endian, sample_type, read_format, evsel.needs_swap)) {
+ if (!samples_same(&sample, &sample_out_endian, sample_type, read_format, evsel->needs_swap)) {
pr_debug("parsing failed for sample_type %#"PRIx64"\n",
sample_type);
goto out_free;
@@ -380,6 +377,7 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format)
free(event);
perf_sample__exit(&sample_out_endian);
perf_sample__exit(&sample_out);
+ evsel__put(evsel);
if (ret && read_format)
pr_debug("read_format %#"PRIx64"\n", read_format);
return ret;
diff --git a/tools/perf/tests/sw-clock.c b/tools/perf/tests/sw-clock.c
index bb6b62cf51d1..d18185881635 100644
--- a/tools/perf/tests/sw-clock.c
+++ b/tools/perf/tests/sw-clock.c
@@ -71,7 +71,7 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
goto out_put_evlist;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
if (evlist__open(evlist)) {
const char *knob = "/proc/sys/kernel/perf_event_max_sample_rate";
@@ -83,7 +83,7 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
goto out_put_evlist;
}
- err = evlist__mmap(evlist, 128);
+ err = evlist__do_mmap(evlist, 128);
if (err < 0) {
pr_debug("failed to mmap event: %d (%s)\n", errno,
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -98,7 +98,7 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
evlist__disable(evlist);
- md = &evlist->mmap[0];
+ md = &evlist__mmap(evlist)[0];
if (perf_mmap__read_init(&md->core) < 0)
goto out_init;
diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
index 306151c83af8..2b1694be8a06 100644
--- a/tools/perf/tests/switch-tracking.c
+++ b/tools/perf/tests/switch-tracking.c
@@ -279,8 +279,8 @@ static int process_events(struct evlist *evlist,
struct mmap *md;
int i, ret;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- md = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ md = &evlist__mmap(evlist)[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
@@ -371,7 +371,7 @@ static int test__switch_tracking(struct test_suite *test __maybe_unused, int sub
goto out_err;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
/* First event */
err = parse_event(evlist, "cpu-clock:u");
@@ -468,7 +468,7 @@ static int test__switch_tracking(struct test_suite *test __maybe_unused, int sub
goto out;
}
- err = evlist__mmap(evlist, UINT_MAX);
+ err = evlist__do_mmap(evlist, UINT_MAX);
if (err) {
pr_debug("evlist__mmap failed!\n");
goto out_err;
diff --git a/tools/perf/tests/task-exit.c b/tools/perf/tests/task-exit.c
index a46650b10689..95393edbfe36 100644
--- a/tools/perf/tests/task-exit.c
+++ b/tools/perf/tests/task-exit.c
@@ -77,7 +77,7 @@ static int test__task_exit(struct test_suite *test __maybe_unused, int subtest _
goto out_put_evlist;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
err = evlist__prepare_workload(evlist, &target, argv, false, workload_exec_failed_signal);
if (err < 0) {
@@ -104,7 +104,7 @@ static int test__task_exit(struct test_suite *test __maybe_unused, int subtest _
goto out_put_evlist;
}
- if (evlist__mmap(evlist, 128) < 0) {
+ if (evlist__do_mmap(evlist, 128) < 0) {
pr_debug("failed to mmap events: %d (%s)\n", errno,
str_error_r(errno, sbuf, sizeof(sbuf)));
err = -1;
@@ -114,7 +114,7 @@ static int test__task_exit(struct test_suite *test __maybe_unused, int subtest _
evlist__start_workload(evlist);
retry:
- md = &evlist->mmap[0];
+ md = &evlist__mmap(evlist)[0];
if (perf_mmap__read_init(&md->core) < 0)
goto out_init;
diff --git a/tools/perf/tests/time-utils-test.c b/tools/perf/tests/time-utils-test.c
index 38df10373c1e..90a9a4b4f178 100644
--- a/tools/perf/tests/time-utils-test.c
+++ b/tools/perf/tests/time-utils-test.c
@@ -69,16 +69,19 @@ struct test_data {
static bool test__perf_time__parse_for_ranges(struct test_data *d)
{
- struct evlist evlist = {
- .first_sample_time = d->first,
- .last_sample_time = d->last,
- };
- struct perf_session session = { .evlist = &evlist };
+ struct evlist *evlist = evlist__new();
+ struct perf_session session = { .evlist = evlist };
struct perf_time_interval *ptime = NULL;
int range_size, range_num;
bool pass = false;
int i, err;
+ if (!evlist) {
+ pr_debug("Missing evlist\n");
+ return false;
+ }
+ evlist__set_first_sample_time(evlist, d->first);
+ evlist__set_last_sample_time(evlist, d->last);
pr_debug("\nperf_time__parse_for_ranges(\"%s\")\n", d->str);
if (strchr(d->str, '%'))
@@ -127,6 +130,7 @@ static bool test__perf_time__parse_for_ranges(struct test_data *d)
pass = true;
out:
+ evlist__put(evlist);
free(ptime);
return pass;
}
diff --git a/tools/perf/tests/tool_pmu.c b/tools/perf/tests/tool_pmu.c
index e78ff9dcea97..c6c5ebf0e935 100644
--- a/tools/perf/tests/tool_pmu.c
+++ b/tools/perf/tests/tool_pmu.c
@@ -40,9 +40,10 @@ static int do_test(enum tool_pmu_event ev, bool with_pmu)
}
ret = TEST_OK;
- if (with_pmu ? (evlist->core.nr_entries != 1) : (evlist->core.nr_entries < 1)) {
+ if (with_pmu ? (evlist__nr_entries(evlist) != 1)
+ : (evlist__nr_entries(evlist) < 1)) {
pr_debug("FAILED %s:%d Unexpected number of events for '%s' of %d\n",
- __FILE__, __LINE__, str, evlist->core.nr_entries);
+ __FILE__, __LINE__, str, evlist__nr_entries(evlist));
ret = TEST_FAIL;
goto out;
}
diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
index 4ecf5d750313..b3ca73b2d8fc 100644
--- a/tools/perf/tests/topology.c
+++ b/tools/perf/tests/topology.c
@@ -45,7 +45,7 @@ static int session_write_header(char *path)
session->evlist = evlist__new_default(&target, /*sample_callchains=*/false);
TEST_ASSERT_VAL("can't get evlist", session->evlist);
- session->evlist->session = session;
+ evlist__set_session(session->evlist, session);
perf_header__set_feat(&session->header, HEADER_CPU_TOPOLOGY);
perf_header__set_feat(&session->header, HEADER_NRCPUS);
diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index ea17e6d29a7e..99f143a52b5f 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -594,7 +594,7 @@ static bool annotate_browser__callq(struct annotate_browser *browser,
notes = symbol__annotation(dl->ops.target.sym);
annotation__lock(notes);
- if (!symbol__hists(dl->ops.target.sym, evsel->evlist->core.nr_entries)) {
+ if (!symbol__hists(dl->ops.target.sym, evlist__nr_entries(evsel->evlist))) {
annotation__unlock(notes);
ui__warning("Not enough memory for annotating '%s' symbol!\n",
dl->ops.target.sym->name);
diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index cfa6386e6e1d..da7cc195b9f4 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -688,10 +688,10 @@ static int hist_browser__handle_hotkey(struct hist_browser *browser, bool warn_l
ui_browser__update_nr_entries(&browser->b, nr_entries);
if (warn_lost_event &&
- (evsel->evlist->stats.nr_lost_warned !=
- evsel->evlist->stats.nr_events[PERF_RECORD_LOST])) {
- evsel->evlist->stats.nr_lost_warned =
- evsel->evlist->stats.nr_events[PERF_RECORD_LOST];
+ (evlist__stats(evsel->evlist)->nr_lost_warned !=
+ evlist__stats(evsel->evlist)->nr_events[PERF_RECORD_LOST])) {
+ evlist__stats(evsel->evlist)->nr_lost_warned =
+ evlist__stats(evsel->evlist)->nr_events[PERF_RECORD_LOST];
ui_browser__warn_lost_events(&browser->b);
}
@@ -3321,7 +3321,7 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
* No need to refresh, resort/decay histogram
* entries if we are not collecting samples:
*/
- if (top->evlist->enabled) {
+ if (evlist__enabled(top->evlist)) {
helpline = "Press 'f' to disable the events or 'h' to see other hotkeys";
hbt->refresh = delay_secs;
} else {
@@ -3493,7 +3493,7 @@ static void perf_evsel_menu__write(struct ui_browser *browser,
unit, unit == ' ' ? "" : " ", ev_name);
ui_browser__printf(browser, "%s", bf);
- nr_events = evsel->evlist->stats.nr_events[PERF_RECORD_LOST];
+ nr_events = evlist__stats(evsel->evlist)->nr_events[PERF_RECORD_LOST];
if (nr_events != 0) {
menu->lost_events = true;
if (!current_entry)
@@ -3559,13 +3559,13 @@ static int perf_evsel_menu__run(struct evsel_menu *menu,
ui_browser__show_title(&menu->b, title);
switch (key) {
case K_TAB:
- if (pos->core.node.next == &evlist->core.entries)
+ if (pos->core.node.next == &evlist__core(evlist)->entries)
pos = evlist__first(evlist);
else
pos = evsel__next(pos);
goto browse_hists;
case K_UNTAB:
- if (pos->core.node.prev == &evlist->core.entries)
+ if (pos->core.node.prev == &evlist__core(evlist)->entries)
pos = evlist__last(evlist);
else
pos = evsel__prev(pos);
@@ -3618,7 +3618,7 @@ static int __evlist__tui_browse_hists(struct evlist *evlist, int nr_entries, con
struct evsel *pos;
struct evsel_menu menu = {
.b = {
- .entries = &evlist->core.entries,
+ .entries = &evlist__core(evlist)->entries,
.refresh = ui_browser__list_head_refresh,
.seek = ui_browser__list_head_seek,
.write = perf_evsel_menu__write,
@@ -3646,7 +3646,7 @@ static int __evlist__tui_browse_hists(struct evlist *evlist, int nr_entries, con
static bool evlist__single_entry(struct evlist *evlist)
{
- int nr_entries = evlist->core.nr_entries;
+ int nr_entries = evlist__nr_entries(evlist);
if (nr_entries == 1)
return true;
@@ -3664,7 +3664,7 @@ static bool evlist__single_entry(struct evlist *evlist)
int evlist__tui_browse_hists(struct evlist *evlist, const char *help, struct hist_browser_timer *hbt,
float min_pcnt, struct perf_env *env, bool warn_lost_event)
{
- int nr_entries = evlist->core.nr_entries;
+ int nr_entries = evlist__nr_entries(evlist);
if (evlist__single_entry(evlist)) {
single_entry: {
diff --git a/tools/perf/util/amd-sample-raw.c b/tools/perf/util/amd-sample-raw.c
index b084dee76b1a..c64584b0f794 100644
--- a/tools/perf/util/amd-sample-raw.c
+++ b/tools/perf/util/amd-sample-raw.c
@@ -354,7 +354,7 @@ static void parse_cpuid(struct perf_env *env)
*/
bool evlist__has_amd_ibs(struct evlist *evlist)
{
- struct perf_env *env = perf_session__env(evlist->session);
+ struct perf_env *env = perf_session__env(evlist__session(evlist));
int ret, nr_pmu_mappings = perf_env__nr_pmu_mappings(env);
const char *pmu_mapping = perf_env__pmu_mappings(env);
char name[sizeof("ibs_fetch")];
diff --git a/tools/perf/util/annotate-data.c b/tools/perf/util/annotate-data.c
index 1eff0a27237d..e8949dce37a9 100644
--- a/tools/perf/util/annotate-data.c
+++ b/tools/perf/util/annotate-data.c
@@ -1822,7 +1822,7 @@ int annotated_data_type__update_samples(struct annotated_data_type *adt,
return 0;
if (adt->histograms == NULL) {
- int nr = evsel->evlist->core.nr_entries;
+ int nr = evlist__nr_entries(evsel->evlist);
if (alloc_data_type_histograms(adt, nr) < 0)
return -1;
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index e745f3034a0e..02c1b8deda6b 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -326,7 +326,7 @@ static int symbol__inc_addr_samples(struct map_symbol *ms,
if (sym == NULL)
return 0;
- src = symbol__hists(sym, evsel->evlist->core.nr_entries);
+ src = symbol__hists(sym, evlist__nr_entries(evsel->evlist));
return src ? __symbol__inc_addr_samples(ms, src, evsel, addr, sample) : 0;
}
@@ -337,7 +337,7 @@ static int symbol__account_br_cntr(struct annotated_branch *branch,
{
unsigned int br_cntr_nr = evsel__leader(evsel)->br_cntr_nr;
unsigned int base = evsel__leader(evsel)->br_cntr_idx;
- unsigned int off = offset * evsel->evlist->nr_br_cntr;
+ unsigned int off = offset * evlist__nr_br_cntr(evsel->evlist);
u64 *branch_br_cntr = branch->br_cntr;
unsigned int i, mask, width;
@@ -367,7 +367,7 @@ static int symbol__account_cycles(u64 addr, u64 start, struct symbol *sym,
if (sym == NULL)
return 0;
- branch = symbol__find_branch_hist(sym, evsel->evlist->nr_br_cntr);
+ branch = symbol__find_branch_hist(sym, evlist__nr_br_cntr(evsel->evlist));
if (!branch)
return -ENOMEM;
if (addr < sym->start || addr >= sym->end)
@@ -509,7 +509,7 @@ static void annotation__count_and_fill(struct annotation *notes, u64 start, u64
static int annotation__compute_ipc(struct annotation *notes, size_t size,
struct evsel *evsel)
{
- unsigned int br_cntr_nr = evsel->evlist->nr_br_cntr;
+ unsigned int br_cntr_nr = evlist__nr_br_cntr(evsel->evlist);
int err = 0;
s64 offset;
@@ -1813,7 +1813,7 @@ int annotation_br_cntr_abbr_list(char **str, struct evsel *evsel, bool header)
struct evsel *pos;
struct strbuf sb;
- if (evsel->evlist->nr_br_cntr <= 0)
+ if (evlist__nr_br_cntr(evsel->evlist) <= 0)
return -ENOTSUP;
strbuf_init(&sb, /*hint=*/ 0);
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index a224687ffbc1..4d9dfbde7f78 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -191,7 +191,7 @@ void auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp,
struct evlist *evlist,
struct evsel *evsel, int idx)
{
- bool per_cpu = !perf_cpu_map__has_any_cpu(evlist->core.user_requested_cpus);
+ bool per_cpu = !perf_cpu_map__has_any_cpu(evlist__core(evlist)->user_requested_cpus);
mp->mmap_needed = evsel->needs_auxtrace_mmap;
@@ -201,11 +201,11 @@ void auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp,
mp->idx = idx;
if (per_cpu) {
- mp->cpu = perf_cpu_map__cpu(evlist->core.all_cpus, idx);
- mp->tid = perf_thread_map__pid(evlist->core.threads, 0);
+ mp->cpu = perf_cpu_map__cpu(evlist__core(evlist)->all_cpus, idx);
+ mp->tid = perf_thread_map__pid(evlist__core(evlist)->threads, 0);
} else {
mp->cpu.cpu = -1;
- mp->tid = perf_thread_map__pid(evlist->core.threads, idx);
+ mp->tid = perf_thread_map__pid(evlist__core(evlist)->threads, idx);
}
}
@@ -667,10 +667,10 @@ int auxtrace_parse_snapshot_options(struct auxtrace_record *itr,
static int evlist__enable_event_idx(struct evlist *evlist, struct evsel *evsel, int idx)
{
- bool per_cpu_mmaps = !perf_cpu_map__has_any_cpu(evlist->core.user_requested_cpus);
+ bool per_cpu_mmaps = !perf_cpu_map__has_any_cpu(evlist__core(evlist)->user_requested_cpus);
if (per_cpu_mmaps) {
- struct perf_cpu evlist_cpu = perf_cpu_map__cpu(evlist->core.all_cpus, idx);
+ struct perf_cpu evlist_cpu = perf_cpu_map__cpu(evlist__core(evlist)->all_cpus, idx);
int cpu_map_idx = perf_cpu_map__idx(evsel->core.cpus, evlist_cpu);
if (cpu_map_idx == -1)
@@ -1806,7 +1806,7 @@ void perf_session__auxtrace_error_inc(struct perf_session *session,
struct perf_record_auxtrace_error *e = &event->auxtrace_error;
if (e->type < PERF_AUXTRACE_ERROR_MAX)
- session->evlist->stats.nr_auxtrace_errors[e->type] += 1;
+ evlist__stats(session->evlist)->nr_auxtrace_errors[e->type] += 1;
}
void events_stats__auxtrace_error_warn(const struct events_stats *stats)
diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
index 8d3a9a661f26..1135e54f4c7f 100644
--- a/tools/perf/util/block-info.c
+++ b/tools/perf/util/block-info.c
@@ -472,7 +472,7 @@ struct block_report *block_info__create_report(struct evlist *evlist,
int *nr_reps)
{
struct block_report *block_reports;
- int nr_hists = evlist->core.nr_entries, i = 0;
+ int nr_hists = evlist__nr_entries(evlist), i = 0;
struct evsel *pos;
block_reports = calloc(nr_hists, sizeof(struct block_report));
@@ -483,7 +483,7 @@ struct block_report *block_info__create_report(struct evlist *evlist,
struct hists *hists = evsel__hists(pos);
process_block_report(hists, &block_reports[i], total_cycles,
- block_hpps, nr_hpps, evlist->nr_br_cntr);
+ block_hpps, nr_hpps, evlist__nr_br_cntr(evlist));
i++;
}
diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 34b6b0da18b7..9362e45e17ce 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -443,7 +443,7 @@ static int bperf_check_target(struct evsel *evsel,
} else if (target->tid) {
*filter_type = BPERF_FILTER_PID;
*filter_entry_cnt = perf_thread_map__nr(evsel->core.threads);
- } else if (target->pid || evsel->evlist->workload.pid != -1) {
+ } else if (target->pid || evlist__workload_pid(evsel->evlist) != -1) {
*filter_type = BPERF_FILTER_TGID;
*filter_entry_cnt = perf_thread_map__nr(evsel->core.threads);
} else {
diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
index 339df94ef438..27bb1a41ae4f 100644
--- a/tools/perf/util/bpf_counter_cgroup.c
+++ b/tools/perf/util/bpf_counter_cgroup.c
@@ -111,7 +111,7 @@ static int bperf_load_program(struct evlist *evlist)
pr_err("Failed to open cgroup skeleton\n");
return -1;
}
- setup_rodata(skel, evlist->core.nr_entries);
+ setup_rodata(skel, evlist__nr_entries(evlist));
err = bperf_cgroup_bpf__load(skel);
if (err) {
@@ -122,12 +122,12 @@ static int bperf_load_program(struct evlist *evlist)
err = -1;
cgrp_switch = evsel__new(&cgrp_switch_attr);
- if (evsel__open_per_cpu(cgrp_switch, evlist->core.all_cpus, -1) < 0) {
+ if (evsel__open_per_cpu(cgrp_switch, evlist__core(evlist)->all_cpus, -1) < 0) {
pr_err("Failed to open cgroup switches event\n");
goto out;
}
- perf_cpu_map__for_each_cpu(cpu, i, evlist->core.all_cpus) {
+ perf_cpu_map__for_each_cpu(cpu, i, evlist__core(evlist)->all_cpus) {
link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,
FD(cgrp_switch, i));
if (IS_ERR(link)) {
@@ -238,7 +238,7 @@ static int bperf_cgrp__sync_counters(struct evlist *evlist)
unsigned int idx;
int prog_fd = bpf_program__fd(skel->progs.trigger_read);
- perf_cpu_map__for_each_cpu(cpu, idx, evlist->core.all_cpus)
+ perf_cpu_map__for_each_cpu(cpu, idx, evlist__core(evlist)->all_cpus)
bperf_trigger_reading(prog_fd, cpu.cpu);
return 0;
diff --git a/tools/perf/util/bpf_ftrace.c b/tools/perf/util/bpf_ftrace.c
index c456d24efa30..abeafd406e8e 100644
--- a/tools/perf/util/bpf_ftrace.c
+++ b/tools/perf/util/bpf_ftrace.c
@@ -59,13 +59,13 @@ int perf_ftrace__latency_prepare_bpf(struct perf_ftrace *ftrace)
/* don't need to set cpu filter for system-wide mode */
if (ftrace->target.cpu_list) {
- ncpus = perf_cpu_map__nr(ftrace->evlist->core.user_requested_cpus);
+ ncpus = perf_cpu_map__nr(evlist__core(ftrace->evlist)->user_requested_cpus);
bpf_map__set_max_entries(skel->maps.cpu_filter, ncpus);
skel->rodata->has_cpu = 1;
}
if (target__has_task(&ftrace->target) || target__none(&ftrace->target)) {
- ntasks = perf_thread_map__nr(ftrace->evlist->core.threads);
+ ntasks = perf_thread_map__nr(evlist__core(ftrace->evlist)->threads);
bpf_map__set_max_entries(skel->maps.task_filter, ntasks);
skel->rodata->has_task = 1;
}
@@ -87,7 +87,8 @@ int perf_ftrace__latency_prepare_bpf(struct perf_ftrace *ftrace)
fd = bpf_map__fd(skel->maps.cpu_filter);
for (i = 0; i < ncpus; i++) {
- cpu = perf_cpu_map__cpu(ftrace->evlist->core.user_requested_cpus, i).cpu;
+ cpu = perf_cpu_map__cpu(
+ evlist__core(ftrace->evlist)->user_requested_cpus, i).cpu;
bpf_map_update_elem(fd, &cpu, &val, BPF_ANY);
}
}
@@ -99,7 +100,7 @@ int perf_ftrace__latency_prepare_bpf(struct perf_ftrace *ftrace)
fd = bpf_map__fd(skel->maps.task_filter);
for (i = 0; i < ntasks; i++) {
- pid = perf_thread_map__pid(ftrace->evlist->core.threads, i);
+ pid = perf_thread_map__pid(evlist__core(ftrace->evlist)->threads, i);
bpf_map_update_elem(fd, &pid, &val, BPF_ANY);
}
}
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index cbd7435579fe..85727d154d9c 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -222,11 +222,11 @@ int lock_contention_prepare(struct lock_contention *con)
if (target__has_cpu(target)) {
skel->rodata->has_cpu = 1;
- ncpus = perf_cpu_map__nr(evlist->core.user_requested_cpus);
+ ncpus = perf_cpu_map__nr(evlist__core(evlist)->user_requested_cpus);
}
if (target__has_task(target)) {
skel->rodata->has_task = 1;
- ntasks = perf_thread_map__nr(evlist->core.threads);
+ ntasks = perf_thread_map__nr(evlist__core(evlist)->threads);
}
if (con->filters->nr_types) {
skel->rodata->has_type = 1;
@@ -327,7 +327,7 @@ int lock_contention_prepare(struct lock_contention *con)
fd = bpf_map__fd(skel->maps.cpu_filter);
for (i = 0; i < ncpus; i++) {
- cpu = perf_cpu_map__cpu(evlist->core.user_requested_cpus, i).cpu;
+ cpu = perf_cpu_map__cpu(evlist__core(evlist)->user_requested_cpus, i).cpu;
bpf_map_update_elem(fd, &cpu, &val, BPF_ANY);
}
}
@@ -339,13 +339,13 @@ int lock_contention_prepare(struct lock_contention *con)
fd = bpf_map__fd(skel->maps.task_filter);
for (i = 0; i < ntasks; i++) {
- pid = perf_thread_map__pid(evlist->core.threads, i);
+ pid = perf_thread_map__pid(evlist__core(evlist)->threads, i);
bpf_map_update_elem(fd, &pid, &val, BPF_ANY);
}
}
- if (target__none(target) && evlist->workload.pid > 0) {
- u32 pid = evlist->workload.pid;
+ if (target__none(target) && evlist__workload_pid(evlist) > 0) {
+ u32 pid = evlist__workload_pid(evlist);
u8 val = 1;
fd = bpf_map__fd(skel->maps.task_filter);
diff --git a/tools/perf/util/bpf_off_cpu.c b/tools/perf/util/bpf_off_cpu.c
index 48cb930cdd2e..c4639f6a5776 100644
--- a/tools/perf/util/bpf_off_cpu.c
+++ b/tools/perf/util/bpf_off_cpu.c
@@ -73,13 +73,13 @@ static void off_cpu_start(void *arg)
/* update task filter for the given workload */
if (skel->rodata->has_task && skel->rodata->uses_tgid &&
- perf_thread_map__pid(evlist->core.threads, 0) != -1) {
+ perf_thread_map__pid(evlist__core(evlist)->threads, 0) != -1) {
int fd;
u32 pid;
u8 val = 1;
fd = bpf_map__fd(skel->maps.task_filter);
- pid = perf_thread_map__pid(evlist->core.threads, 0);
+ pid = perf_thread_map__pid(evlist__core(evlist)->threads, 0);
bpf_map_update_elem(fd, &pid, &val, BPF_ANY);
}
@@ -168,7 +168,7 @@ int off_cpu_prepare(struct evlist *evlist, struct target *target,
/* don't need to set cpu filter for system-wide mode */
if (target->cpu_list) {
- ncpus = perf_cpu_map__nr(evlist->core.user_requested_cpus);
+ ncpus = perf_cpu_map__nr(evlist__core(evlist)->user_requested_cpus);
bpf_map__set_max_entries(skel->maps.cpu_filter, ncpus);
skel->rodata->has_cpu = 1;
}
@@ -199,7 +199,7 @@ int off_cpu_prepare(struct evlist *evlist, struct target *target,
skel->rodata->has_task = 1;
skel->rodata->uses_tgid = 1;
} else if (target__has_task(target)) {
- ntasks = perf_thread_map__nr(evlist->core.threads);
+ ntasks = perf_thread_map__nr(evlist__core(evlist)->threads);
bpf_map__set_max_entries(skel->maps.task_filter, ntasks);
skel->rodata->has_task = 1;
} else if (target__none(target)) {
@@ -209,7 +209,7 @@ int off_cpu_prepare(struct evlist *evlist, struct target *target,
}
if (evlist__first(evlist)->cgrp) {
- ncgrps = evlist->core.nr_entries - 1; /* excluding a dummy */
+ ncgrps = evlist__nr_entries(evlist) - 1; /* excluding a dummy */
bpf_map__set_max_entries(skel->maps.cgroup_filter, ncgrps);
if (!cgroup_is_v2("perf_event"))
@@ -240,7 +240,7 @@ int off_cpu_prepare(struct evlist *evlist, struct target *target,
fd = bpf_map__fd(skel->maps.cpu_filter);
for (i = 0; i < ncpus; i++) {
- cpu = perf_cpu_map__cpu(evlist->core.user_requested_cpus, i).cpu;
+ cpu = perf_cpu_map__cpu(evlist__core(evlist)->user_requested_cpus, i).cpu;
bpf_map_update_elem(fd, &cpu, &val, BPF_ANY);
}
}
@@ -269,7 +269,7 @@ int off_cpu_prepare(struct evlist *evlist, struct target *target,
fd = bpf_map__fd(skel->maps.task_filter);
for (i = 0; i < ntasks; i++) {
- pid = perf_thread_map__pid(evlist->core.threads, i);
+ pid = perf_thread_map__pid(evlist__core(evlist)->threads, i);
bpf_map_update_elem(fd, &pid, &val, BPF_ANY);
}
}
diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
index 914744724467..c7be16a7915e 100644
--- a/tools/perf/util/cgroup.c
+++ b/tools/perf/util/cgroup.c
@@ -367,7 +367,7 @@ int parse_cgroups(const struct option *opt, const char *str,
char *s;
int ret, i;
- if (list_empty(&evlist->core.entries)) {
+ if (list_empty(&evlist__core(evlist)->entries)) {
fprintf(stderr, "must define events before cgroups\n");
return -1;
}
@@ -423,7 +423,7 @@ int evlist__expand_cgroup(struct evlist *evlist, const char *str, bool open_cgro
int ret = -1;
int prefix_len;
- if (evlist->core.nr_entries == 0) {
+ if (evlist__nr_entries(evlist) == 0) {
fprintf(stderr, "must define events before cgroups\n");
return -EINVAL;
}
@@ -436,11 +436,11 @@ int evlist__expand_cgroup(struct evlist *evlist, const char *str, bool open_cgro
}
/* save original events and init evlist */
- evlist__splice_list_tail(orig_list, &evlist->core.entries);
- evlist->core.nr_entries = 0;
+ evlist__splice_list_tail(orig_list, &evlist__core(evlist)->entries);
+ evlist__core(evlist)->nr_entries = 0;
- orig_metric_events = evlist->metric_events;
- metricgroup__rblist_init(&evlist->metric_events);
+ orig_metric_events = *evlist__metric_events(evlist);
+ metricgroup__rblist_init(evlist__metric_events(evlist));
if (has_pattern_string(str))
prefix_len = match_cgroups(str);
@@ -503,15 +503,15 @@ int evlist__expand_cgroup(struct evlist *evlist, const char *str, bool open_cgro
nr_cgroups++;
if (metricgroup__copy_metric_events(tmp_list, cgrp,
- &evlist->metric_events,
+ evlist__metric_events(evlist),
&orig_metric_events) < 0)
goto out_err;
- evlist__splice_list_tail(evlist, &tmp_list->core.entries);
- tmp_list->core.nr_entries = 0;
+ evlist__splice_list_tail(evlist, &evlist__core(tmp_list)->entries);
+ evlist__core(tmp_list)->nr_entries = 0;
}
- if (list_empty(&evlist->core.entries)) {
+ if (list_empty(&evlist__core(evlist)->entries)) {
fprintf(stderr, "no cgroup matched: %s\n", str);
goto out_err;
}
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index a362f338f104..29588af735e5 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -31,6 +31,7 @@
#include <api/fs/fs.h>
#include <internal/lib.h> // page_size
+#include <internal/rc_check.h>
#include <internal/xyarray.h>
#include <perf/cpumap.h>
#include <perf/evlist.h>
@@ -75,30 +76,31 @@ int sigqueue(pid_t pid, int sig, const union sigval value);
#define FD(e, x, y) (*(int *)xyarray__entry(e->core.fd, x, y))
#define SID(e, x, y) xyarray__entry(e->core.sample_id, x, y)
-static void evlist__init(struct evlist *evlist, struct perf_cpu_map *cpus,
- struct perf_thread_map *threads)
-{
- perf_evlist__init(&evlist->core);
- perf_evlist__set_maps(&evlist->core, cpus, threads);
- evlist->workload.pid = -1;
- evlist->bkw_mmap_state = BKW_MMAP_NOTREADY;
- evlist->ctl_fd.fd = -1;
- evlist->ctl_fd.ack = -1;
- evlist->ctl_fd.pos = -1;
- evlist->nr_br_cntr = -1;
- metricgroup__rblist_init(&evlist->metric_events);
- INIT_LIST_HEAD(&evlist->deferred_samples);
- refcount_set(&evlist->refcnt, 1);
-}
+static void event_enable_timer__exit(struct event_enable_timer **ep);
struct evlist *evlist__new(void)
{
- struct evlist *evlist = zalloc(sizeof(*evlist));
-
- if (evlist != NULL)
- evlist__init(evlist, NULL, NULL);
-
- return evlist;
+ struct evlist *result;
+ RC_STRUCT(evlist) *evlist;
+
+ evlist = zalloc(sizeof(*evlist));
+ if (ADD_RC_CHK(result, evlist)) {
+ perf_evlist__init(evlist__core(result));
+ perf_evlist__set_maps(evlist__core(result), /*cpus=*/NULL, /*threads=*/NULL);
+ evlist__set_workload_pid(result, -1);
+ evlist__set_bkw_mmap_state(result, BKW_MMAP_NOTREADY);
+ evlist__set_ctl_fd_fd(result, -1);
+ evlist__set_ctl_fd_ack(result, -1);
+ evlist__set_ctl_fd_pos(result, -1);
+ evlist__set_nr_br_cntr(result, -1);
+ metricgroup__rblist_init(evlist__metric_events(result));
+ INIT_LIST_HEAD(&evlist->deferred_samples);
+ refcount_set(evlist__refcnt(result), 1);
+ } else {
+ free(evlist);
+ result = NULL;
+ }
+ return result;
}
struct evlist *evlist__new_default(const struct target *target, bool sample_callchains)
@@ -106,7 +108,6 @@ struct evlist *evlist__new_default(const struct target *target, bool sample_call
struct evlist *evlist = evlist__new();
bool can_profile_kernel;
struct perf_pmu *pmu = NULL;
- struct evsel *evsel;
char buf[256];
int err;
@@ -133,7 +134,9 @@ struct evlist *evlist__new_default(const struct target *target, bool sample_call
}
/* If there is only 1 event a sample identifier isn't necessary. */
- if (evlist->core.nr_entries > 1) {
+ if (evlist__nr_entries(evlist) > 1) {
+ struct evsel *evsel;
+
evlist__for_each_entry(evlist, evsel)
evsel__set_sample_id(evsel, /*can_sample_identifier=*/false);
}
@@ -158,8 +161,12 @@ struct evlist *evlist__new_dummy(void)
struct evlist *evlist__get(struct evlist *evlist)
{
- refcount_inc(&evlist->refcnt);
- return evlist;
+ struct evlist *result;
+
+ if (RC_CHK_GET(result, evlist))
+ refcount_inc(evlist__refcnt(evlist));
+
+ return result;
}
/**
@@ -173,8 +180,8 @@ void evlist__set_id_pos(struct evlist *evlist)
{
struct evsel *first = evlist__first(evlist);
- evlist->id_pos = first->id_pos;
- evlist->is_pos = first->is_pos;
+ RC_CHK_ACCESS(evlist)->id_pos = first->id_pos;
+ RC_CHK_ACCESS(evlist)->is_pos = first->is_pos;
}
static void evlist__update_id_pos(struct evlist *evlist)
@@ -193,52 +200,76 @@ static void evlist__purge(struct evlist *evlist)
evlist__for_each_entry_safe(evlist, n, pos) {
list_del_init(&pos->core.node);
+ if (pos->evlist) {
+ /* Minimal evlist__put: a full put would re-enter this teardown. */
+ refcount_dec_and_test(evlist__refcnt(pos->evlist));
+ RC_CHK_PUT(pos->evlist);
+ }
pos->evlist = NULL;
evsel__put(pos);
}
- evlist->core.nr_entries = 0;
+ evlist__core(evlist)->nr_entries = 0;
}
static void evlist__exit(struct evlist *evlist)
{
- metricgroup__rblist_exit(&evlist->metric_events);
- event_enable_timer__exit(&evlist->eet);
- zfree(&evlist->mmap);
- zfree(&evlist->overwrite_mmap);
- perf_evlist__exit(&evlist->core);
+ metricgroup__rblist_exit(evlist__metric_events(evlist));
+ event_enable_timer__exit(&RC_CHK_ACCESS(evlist)->eet);
+ free(evlist__mmap(evlist));
+ free(evlist__overwrite_mmap(evlist));
+ perf_evlist__exit(evlist__core(evlist));
}
void evlist__put(struct evlist *evlist)
{
+ struct evsel *evsel;
+ unsigned int count;
+
if (evlist == NULL)
return;
- if (!refcount_dec_and_test(&evlist->refcnt))
- return;
+ if (refcount_dec_and_test(evlist__refcnt(evlist)))
+ goto out_delete;
+ count = refcount_read(evlist__refcnt(evlist));
+ evlist__for_each_entry(evlist, evsel) {
+ if (RC_CHK_EQUAL(evsel->evlist, evlist) && count)
+ count--;
+ }
+ if (count != 0) {
+ /*
+ * References remain beyond the back references from
+ * evsels, so this is not the last put.
+ */
+ RC_CHK_PUT(evlist);
+ return;
+ }
+out_delete:
evlist__free_stats(evlist);
- evlist__munmap(evlist);
+ evlist__do_munmap(evlist);
evlist__close(evlist);
evlist__purge(evlist);
evlist__exit(evlist);
- free(evlist);
+ RC_CHK_FREE(evlist);
}
void evlist__add(struct evlist *evlist, struct evsel *entry)
{
- perf_evlist__add(&evlist->core, &entry->core);
- entry->evlist = evlist;
+ perf_evlist__add(evlist__core(evlist), &entry->core);
+ evlist__put(entry->evlist);
+ entry->evlist = evlist__get(evlist);
entry->tracking = !entry->core.idx;
- if (evlist->core.nr_entries == 1)
+ if (evlist__nr_entries(evlist) == 1)
evlist__set_id_pos(evlist);
}
void evlist__remove(struct evlist *evlist, struct evsel *evsel)
{
+ perf_evlist__remove(evlist__core(evlist), &evsel->core);
+ evlist__put(evsel->evlist);
evsel->evlist = NULL;
- perf_evlist__remove(&evlist->core, &evsel->core);
}
void evlist__splice_list_tail(struct evlist *evlist, struct list_head *list)
@@ -287,7 +318,7 @@ int __evlist__set_tracepoints_handlers(struct evlist *evlist,
static void evlist__set_leader(struct evlist *evlist)
{
- perf_evlist__set_leader(&evlist->core);
+ perf_evlist__set_leader(evlist__core(evlist));
}
static struct evsel *evlist__dummy_event(struct evlist *evlist)
@@ -301,7 +332,7 @@ static struct evsel *evlist__dummy_event(struct evlist *evlist)
.sample_period = 1,
};
- return evsel__new_idx(&attr, evlist->core.nr_entries);
+ return evsel__new_idx(&attr, evlist__nr_entries(evlist));
}
int evlist__add_dummy(struct evlist *evlist)
@@ -390,8 +421,8 @@ static bool evlist__use_affinity(struct evlist *evlist)
struct perf_cpu_map *used_cpus = NULL;
bool ret = false;
- if (evlist->no_affinity || !evlist->core.user_requested_cpus ||
- cpu_map__is_dummy(evlist->core.user_requested_cpus))
+ if (evlist__no_affinity(evlist) || !evlist__core(evlist)->user_requested_cpus ||
+ cpu_map__is_dummy(evlist__core(evlist)->user_requested_cpus))
return false;
evlist__for_each_entry(evlist, pos) {
@@ -446,7 +477,7 @@ void evlist_cpu_iterator__init(struct evlist_cpu_iterator *itr, struct evlist *e
.evsel = NULL,
.cpu_map_idx = 0,
.evlist_cpu_map_idx = 0,
- .evlist_cpu_map_nr = perf_cpu_map__nr(evlist->core.all_cpus),
+ .evlist_cpu_map_nr = perf_cpu_map__nr(evlist__core(evlist)->all_cpus),
.cpu = (struct perf_cpu){ .cpu = -1},
.affinity = NULL,
};
@@ -462,7 +493,7 @@ void evlist_cpu_iterator__init(struct evlist_cpu_iterator *itr, struct evlist *e
itr->affinity = &itr->saved_affinity;
}
itr->evsel = evlist__first(evlist);
- itr->cpu = perf_cpu_map__cpu(evlist->core.all_cpus, 0);
+ itr->cpu = perf_cpu_map__cpu(evlist__core(evlist)->all_cpus, 0);
if (itr->affinity)
affinity__set(itr->affinity, itr->cpu.cpu);
itr->cpu_map_idx = perf_cpu_map__idx(itr->evsel->core.cpus, itr->cpu);
@@ -497,7 +528,7 @@ void evlist_cpu_iterator__next(struct evlist_cpu_iterator *evlist_cpu_itr)
if (evlist_cpu_itr->evlist_cpu_map_idx < evlist_cpu_itr->evlist_cpu_map_nr) {
evlist_cpu_itr->evsel = evlist__first(evlist_cpu_itr->container);
evlist_cpu_itr->cpu =
- perf_cpu_map__cpu(evlist_cpu_itr->container->core.all_cpus,
+ perf_cpu_map__cpu(evlist__core(evlist_cpu_itr->container)->all_cpus,
evlist_cpu_itr->evlist_cpu_map_idx);
if (evlist_cpu_itr->affinity)
affinity__set(evlist_cpu_itr->affinity, evlist_cpu_itr->cpu.cpu);
@@ -524,7 +555,7 @@ static int evsel__strcmp(struct evsel *pos, char *evsel_name)
return !evsel__name_is(pos, evsel_name);
}
-static int evlist__is_enabled(struct evlist *evlist)
+static bool evlist__is_enabled(struct evlist *evlist)
{
struct evsel *pos;
@@ -578,10 +609,7 @@ static void __evlist__disable(struct evlist *evlist, char *evsel_name, bool excl
* If we disabled only single event, we need to check
* the enabled state of the evlist manually.
*/
- if (evsel_name)
- evlist->enabled = evlist__is_enabled(evlist);
- else
- evlist->enabled = false;
+ evlist__set_enabled(evlist, evsel_name ? evlist__is_enabled(evlist) : false);
}
void evlist__disable(struct evlist *evlist)
@@ -629,7 +657,7 @@ static void __evlist__enable(struct evlist *evlist, char *evsel_name, bool excl_
* so the toggle can work properly and toggle to
* 'disabled' state.
*/
- evlist->enabled = true;
+ evlist__set_enabled(evlist, true);
}
void evlist__enable(struct evlist *evlist)
@@ -649,23 +677,24 @@ void evlist__enable_evsel(struct evlist *evlist, char *evsel_name)
void evlist__toggle_enable(struct evlist *evlist)
{
- (evlist->enabled ? evlist__disable : evlist__enable)(evlist);
+ (evlist__enabled(evlist) ? evlist__disable : evlist__enable)(evlist);
}
int evlist__add_pollfd(struct evlist *evlist, int fd)
{
- return perf_evlist__add_pollfd(&evlist->core, fd, NULL, POLLIN, fdarray_flag__default);
+ return perf_evlist__add_pollfd(evlist__core(evlist), fd, NULL, POLLIN,
+ fdarray_flag__default);
}
int evlist__filter_pollfd(struct evlist *evlist, short revents_and_mask)
{
- return perf_evlist__filter_pollfd(&evlist->core, revents_and_mask);
+ return perf_evlist__filter_pollfd(evlist__core(evlist), revents_and_mask);
}
#ifdef HAVE_EVENTFD_SUPPORT
int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd)
{
- return perf_evlist__add_pollfd(&evlist->core, fd, NULL, POLLIN,
+ return perf_evlist__add_pollfd(evlist__core(evlist), fd, NULL, POLLIN,
fdarray_flag__nonfilterable |
fdarray_flag__non_perf_event);
}
@@ -673,7 +702,7 @@ int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd)
int evlist__poll(struct evlist *evlist, int timeout)
{
- return perf_evlist__poll(&evlist->core, timeout);
+ return perf_evlist__poll(evlist__core(evlist), timeout);
}
struct perf_sample_id *evlist__id2sid(struct evlist *evlist, u64 id)
@@ -683,7 +712,7 @@ struct perf_sample_id *evlist__id2sid(struct evlist *evlist, u64 id)
int hash;
hash = hash_64(id, PERF_EVLIST__HLIST_BITS);
- head = &evlist->core.heads[hash];
+ head = &evlist__core(evlist)->heads[hash];
hlist_for_each_entry(sid, head, node)
if (sid->id == id)
@@ -696,7 +725,7 @@ struct evsel *evlist__id2evsel(struct evlist *evlist, u64 id)
{
struct perf_sample_id *sid;
- if (evlist->core.nr_entries == 1 || !id)
+ if (evlist__nr_entries(evlist) == 1 || !id)
return evlist__first(evlist);
sid = evlist__id2sid(evlist, id);
@@ -731,13 +760,13 @@ static int evlist__event2id(struct evlist *evlist, union perf_event *event, u64
n = (event->header.size - sizeof(event->header)) >> 3;
if (event->header.type == PERF_RECORD_SAMPLE) {
- if (evlist->id_pos >= n)
+ if (evlist__id_pos(evlist) >= n)
return -1;
- *id = array[evlist->id_pos];
+ *id = array[evlist__id_pos(evlist)];
} else {
- if (evlist->is_pos > n)
+ if (evlist__is_pos(evlist) > n)
return -1;
- n -= evlist->is_pos;
+ n -= evlist__is_pos(evlist);
*id = array[n];
}
return 0;
@@ -751,7 +780,7 @@ struct evsel *evlist__event2evsel(struct evlist *evlist, union perf_event *event
int hash;
u64 id;
- if (evlist->core.nr_entries == 1)
+ if (evlist__nr_entries(evlist) == 1)
return first;
if (!first->core.attr.sample_id_all &&
@@ -766,7 +795,7 @@ struct evsel *evlist__event2evsel(struct evlist *evlist, union perf_event *event
return first;
hash = hash_64(id, PERF_EVLIST__HLIST_BITS);
- head = &evlist->core.heads[hash];
+ head = &evlist__core(evlist)->heads[hash];
hlist_for_each_entry(sid, head, node) {
if (sid->id == id)
@@ -779,11 +808,11 @@ static int evlist__set_paused(struct evlist *evlist, bool value)
{
int i;
- if (!evlist->overwrite_mmap)
+ if (!evlist__overwrite_mmap(evlist))
return 0;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- int fd = evlist->overwrite_mmap[i].core.fd;
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ int fd = evlist__overwrite_mmap(evlist)[i].core.fd;
int err;
if (fd < 0)
@@ -809,20 +838,20 @@ static void evlist__munmap_nofree(struct evlist *evlist)
{
int i;
- if (evlist->mmap)
- for (i = 0; i < evlist->core.nr_mmaps; i++)
- perf_mmap__munmap(&evlist->mmap[i].core);
+ if (evlist__mmap(evlist))
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++)
+ perf_mmap__munmap(&evlist__mmap(evlist)[i].core);
- if (evlist->overwrite_mmap)
- for (i = 0; i < evlist->core.nr_mmaps; i++)
- perf_mmap__munmap(&evlist->overwrite_mmap[i].core);
+ if (evlist__overwrite_mmap(evlist))
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++)
+ perf_mmap__munmap(&evlist__overwrite_mmap(evlist)[i].core);
}
-void evlist__munmap(struct evlist *evlist)
+void evlist__do_munmap(struct evlist *evlist)
{
evlist__munmap_nofree(evlist);
- zfree(&evlist->mmap);
- zfree(&evlist->overwrite_mmap);
+ zfree(&RC_CHK_ACCESS(evlist)->mmap);
+ zfree(&RC_CHK_ACCESS(evlist)->overwrite_mmap);
}
static void perf_mmap__unmap_cb(struct perf_mmap *map)
@@ -836,12 +865,12 @@ static struct mmap *evlist__alloc_mmap(struct evlist *evlist,
bool overwrite)
{
int i;
- struct mmap *map = calloc(evlist->core.nr_mmaps, sizeof(struct mmap));
+ struct mmap *map = calloc(evlist__core(evlist)->nr_mmaps, sizeof(struct mmap));
if (!map)
return NULL;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
struct perf_mmap *prev = i ? &map[i - 1].core : NULL;
/*
@@ -859,41 +888,73 @@ static struct mmap *evlist__alloc_mmap(struct evlist *evlist,
return map;
}
+static struct evlist *from_list_start(struct perf_evlist *core)
+{
+#ifdef REFCNT_CHECKING
+ RC_STRUCT(evlist) *core_evlist = container_of(core, RC_STRUCT(evlist), core);
+ struct evlist *evlist;
+
+ if (ADD_RC_CHK(evlist, core_evlist))
+ refcount_inc(evlist__refcnt(evlist));
+
+ return evlist;
+#else
+ return container_of(core, struct evlist, core);
+#endif
+}
+
+static void from_list_end(struct evlist *evlist __maybe_unused)
+{
+#ifdef REFCNT_CHECKING
+ evlist__put(evlist);
+#endif
+}
+
static void
perf_evlist__mmap_cb_idx(struct perf_evlist *_evlist,
struct perf_evsel *_evsel,
struct perf_mmap_param *_mp,
int idx)
{
- struct evlist *evlist = container_of(_evlist, struct evlist, core);
+ struct evlist *evlist = from_list_start(_evlist);
struct mmap_params *mp = container_of(_mp, struct mmap_params, core);
struct evsel *evsel = container_of(_evsel, struct evsel, core);
+ if (!evlist)
+ return;
+
auxtrace_mmap_params__set_idx(&mp->auxtrace_mp, evlist, evsel, idx);
+
+ from_list_end(evlist);
}
static struct perf_mmap*
perf_evlist__mmap_cb_get(struct perf_evlist *_evlist, bool overwrite, int idx)
{
- struct evlist *evlist = container_of(_evlist, struct evlist, core);
+ struct evlist *evlist = from_list_start(_evlist);
struct mmap *maps;
- maps = overwrite ? evlist->overwrite_mmap : evlist->mmap;
+ if (!evlist)
+ return NULL;
+
+ maps = overwrite ? evlist__overwrite_mmap(evlist) : evlist__mmap(evlist);
if (!maps) {
maps = evlist__alloc_mmap(evlist, overwrite);
- if (!maps)
+ if (!maps) {
+ from_list_end(evlist);
return NULL;
+ }
if (overwrite) {
- evlist->overwrite_mmap = maps;
- if (evlist->bkw_mmap_state == BKW_MMAP_NOTREADY)
+ RC_CHK_ACCESS(evlist)->overwrite_mmap = maps;
+ if (evlist__bkw_mmap_state(evlist) == BKW_MMAP_NOTREADY)
evlist__toggle_bkw_mmap(evlist, BKW_MMAP_RUNNING);
} else {
- evlist->mmap = maps;
+ RC_CHK_ACCESS(evlist)->mmap = maps;
}
}
-
+ from_list_end(evlist);
return &maps[idx].core;
}
@@ -1050,16 +1111,16 @@ int evlist__mmap_ex(struct evlist *evlist, unsigned int pages,
.mmap = perf_evlist__mmap_cb_mmap,
};
- evlist->core.mmap_len = evlist__mmap_size(pages);
- pr_debug("mmap size %zuB\n", evlist->core.mmap_len);
+ evlist__core(evlist)->mmap_len = evlist__mmap_size(pages);
+ pr_debug("mmap size %zuB\n", evlist__core(evlist)->mmap_len);
- auxtrace_mmap_params__init(&mp.auxtrace_mp, evlist->core.mmap_len,
+ auxtrace_mmap_params__init(&mp.auxtrace_mp, evlist__core(evlist)->mmap_len,
auxtrace_pages, auxtrace_overwrite);
- return perf_evlist__mmap_ops(&evlist->core, &ops, &mp.core);
+ return perf_evlist__mmap_ops(evlist__core(evlist), &ops, &mp.core);
}
-int evlist__mmap(struct evlist *evlist, unsigned int pages)
+int evlist__do_mmap(struct evlist *evlist, unsigned int pages)
{
return evlist__mmap_ex(evlist, pages, 0, false, 0, PERF_AFFINITY_SYS, 1, 0);
}
@@ -1101,9 +1162,9 @@ int evlist__create_maps(struct evlist *evlist, struct target *target)
if (!cpus)
goto out_delete_threads;
- evlist->core.has_user_cpus = !!target->cpu_list;
+ evlist__core(evlist)->has_user_cpus = !!target->cpu_list;
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
/* as evlist now has references, put count here */
perf_cpu_map__put(cpus);
@@ -1243,15 +1304,15 @@ bool evlist__valid_sample_type(struct evlist *evlist)
{
struct evsel *pos;
- if (evlist->core.nr_entries == 1)
+ if (evlist__nr_entries(evlist) == 1)
return true;
- if (evlist->id_pos < 0 || evlist->is_pos < 0)
+ if (evlist__id_pos(evlist) < 0 || evlist__is_pos(evlist) < 0)
return false;
evlist__for_each_entry(evlist, pos) {
- if (pos->id_pos != evlist->id_pos ||
- pos->is_pos != evlist->is_pos)
+ if (pos->id_pos != evlist__id_pos(evlist) ||
+ pos->is_pos != evlist__is_pos(evlist))
return false;
}
@@ -1262,18 +1323,18 @@ u64 __evlist__combined_sample_type(struct evlist *evlist)
{
struct evsel *evsel;
- if (evlist->combined_sample_type)
- return evlist->combined_sample_type;
+ if (RC_CHK_ACCESS(evlist)->combined_sample_type)
+ return RC_CHK_ACCESS(evlist)->combined_sample_type;
evlist__for_each_entry(evlist, evsel)
- evlist->combined_sample_type |= evsel->core.attr.sample_type;
+ RC_CHK_ACCESS(evlist)->combined_sample_type |= evsel->core.attr.sample_type;
- return evlist->combined_sample_type;
+ return RC_CHK_ACCESS(evlist)->combined_sample_type;
}
u64 evlist__combined_sample_type(struct evlist *evlist)
{
- evlist->combined_sample_type = 0;
+ RC_CHK_ACCESS(evlist)->combined_sample_type = 0;
return __evlist__combined_sample_type(evlist);
}
@@ -1350,7 +1411,7 @@ void evlist__update_br_cntr(struct evlist *evlist)
evlist__new_abbr_name(evsel->abbr_name);
}
}
- evlist->nr_br_cntr = i;
+ evlist__set_nr_br_cntr(evlist, i);
}
bool evlist__valid_read_format(struct evlist *evlist)
@@ -1400,11 +1461,6 @@ bool evlist__sample_id_all(struct evlist *evlist)
return first->core.attr.sample_id_all;
}
-void evlist__set_selected(struct evlist *evlist, struct evsel *evsel)
-{
- evlist->selected = evsel;
-}
-
void evlist__close(struct evlist *evlist)
{
struct evsel *evsel;
@@ -1421,7 +1477,7 @@ void evlist__close(struct evlist *evlist)
perf_evsel__free_fd(&evsel->core);
perf_evsel__free_id(&evsel->core);
}
- perf_evlist__reset_id_hash(&evlist->core);
+ perf_evlist__reset_id_hash(evlist__core(evlist));
}
static int evlist__create_syswide_maps(struct evlist *evlist)
@@ -1448,7 +1504,7 @@ static int evlist__create_syswide_maps(struct evlist *evlist)
return -ENOMEM;
}
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
perf_thread_map__put(threads);
perf_cpu_map__put(cpus);
return 0;
@@ -1463,7 +1519,8 @@ int evlist__open(struct evlist *evlist)
* Default: one fd per CPU, all threads, aka systemwide
* as sys_perf_event_open(cpu = -1, thread = -1) is EINVAL
*/
- if (evlist->core.threads == NULL && evlist->core.user_requested_cpus == NULL) {
+ if (evlist__core(evlist)->threads == NULL &&
+ evlist__core(evlist)->user_requested_cpus == NULL) {
err = evlist__create_syswide_maps(evlist);
if (err < 0)
goto out_err;
@@ -1490,7 +1547,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
int child_ready_pipe[2], go_pipe[2];
char bf;
- evlist->workload.cork_fd = -1;
+ evlist__set_workload_cork_fd(evlist, -1);
if (pipe(child_ready_pipe) < 0) {
perror("failed to create 'ready' pipe");
@@ -1502,13 +1559,13 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
goto out_close_ready_pipe;
}
- evlist->workload.pid = fork();
- if (evlist->workload.pid < 0) {
+ evlist__set_workload_pid(evlist, fork());
+ if (evlist__workload_pid(evlist) < 0) {
perror("failed to fork");
goto out_close_pipes;
}
- if (!evlist->workload.pid) {
+ if (!evlist__workload_pid(evlist)) {
int ret;
if (pipe_output)
@@ -1574,12 +1631,13 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
}
if (target__none(target)) {
- if (evlist->core.threads == NULL) {
+ if (evlist__core(evlist)->threads == NULL) {
fprintf(stderr, "FATAL: evlist->threads need to be set at this point (%s:%d).\n",
__func__, __LINE__);
goto out_close_pipes;
}
- perf_thread_map__set_pid(evlist->core.threads, 0, evlist->workload.pid);
+ perf_thread_map__set_pid(evlist__core(evlist)->threads, 0,
+ evlist__workload_pid(evlist));
}
close(child_ready_pipe[1]);
@@ -1593,7 +1651,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
}
fcntl(go_pipe[1], F_SETFD, FD_CLOEXEC);
- evlist->workload.cork_fd = go_pipe[1];
+ evlist__set_workload_cork_fd(evlist, go_pipe[1]);
close(child_ready_pipe[0]);
return 0;
@@ -1608,18 +1666,18 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
int evlist__start_workload(struct evlist *evlist)
{
- if (evlist->workload.cork_fd >= 0) {
+ if (evlist__workload_cork_fd(evlist) >= 0) {
char bf = 0;
int ret;
/*
* Remove the cork, let it rip!
*/
- ret = write(evlist->workload.cork_fd, &bf, 1);
+ ret = write(evlist__workload_cork_fd(evlist), &bf, 1);
if (ret < 0)
perror("unable to write to pipe");
- close(evlist->workload.cork_fd);
- evlist->workload.cork_fd = -1;
+ close(evlist__workload_cork_fd(evlist));
+ evlist__set_workload_cork_fd(evlist, -1);
return ret;
}
@@ -1630,10 +1688,10 @@ void evlist__cancel_workload(struct evlist *evlist)
{
int status;
- if (evlist->workload.cork_fd >= 0) {
- close(evlist->workload.cork_fd);
- evlist->workload.cork_fd = -1;
- waitpid(evlist->workload.pid, &status, WNOHANG);
+ if (evlist__workload_cork_fd(evlist) >= 0) {
+ close(evlist__workload_cork_fd(evlist));
+ evlist__set_workload_cork_fd(evlist, -1);
+ waitpid(evlist__workload_pid(evlist), &status, WNOHANG);
}
}
@@ -1727,7 +1785,8 @@ int evlist__strerror_open(struct evlist *evlist, int err, char *buf, size_t size
int evlist__strerror_mmap(struct evlist *evlist, int err, char *buf, size_t size)
{
- int pages_attempted = evlist->core.mmap_len / 1024, pages_max_per_user, printed = 0;
+ int pages_attempted = evlist__core(evlist)->mmap_len / 1024;
+ int pages_max_per_user, printed = 0;
switch (err) {
case EPERM:
@@ -1770,7 +1829,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel)
list_move_tail(&evsel->core.node, &move);
}
- list_splice(&move, &evlist->core.entries);
+ list_splice(&move, &evlist__core(evlist)->entries);
}
struct evsel *evlist__get_tracking_event(struct evlist *evlist)
@@ -1812,7 +1871,7 @@ struct evsel *evlist__findnew_tracking_event(struct evlist *evlist, bool system_
evlist__set_tracking_event(evlist, evsel);
} else if (system_wide) {
- perf_evlist__go_system_wide(&evlist->core, &evsel->core);
+ perf_evlist__go_system_wide(evlist__core(evlist), &evsel->core);
}
return evsel;
@@ -1834,14 +1893,14 @@ struct evsel *evlist__find_evsel_by_str(struct evlist *evlist, const char *str)
void evlist__toggle_bkw_mmap(struct evlist *evlist, enum bkw_mmap_state state)
{
- enum bkw_mmap_state old_state = evlist->bkw_mmap_state;
+ enum bkw_mmap_state old_state = evlist__bkw_mmap_state(evlist);
enum action {
NONE,
PAUSE,
RESUME,
} action = NONE;
- if (!evlist->overwrite_mmap)
+ if (!evlist__overwrite_mmap(evlist))
return;
switch (old_state) {
@@ -1871,7 +1930,7 @@ void evlist__toggle_bkw_mmap(struct evlist *evlist, enum bkw_mmap_state state)
WARN_ONCE(1, "Shouldn't get there\n");
}
- evlist->bkw_mmap_state = state;
+ evlist__set_bkw_mmap_state(evlist, state);
switch (action) {
case PAUSE:
@@ -2049,40 +2108,41 @@ int evlist__initialize_ctlfd(struct evlist *evlist, int fd, int ack)
return 0;
}
- evlist->ctl_fd.pos = perf_evlist__add_pollfd(&evlist->core, fd, NULL, POLLIN,
- fdarray_flag__nonfilterable |
- fdarray_flag__non_perf_event);
- if (evlist->ctl_fd.pos < 0) {
- evlist->ctl_fd.pos = -1;
+ evlist__set_ctl_fd_pos(evlist,
+ perf_evlist__add_pollfd(evlist__core(evlist), fd, NULL, POLLIN,
+ fdarray_flag__nonfilterable |
+ fdarray_flag__non_perf_event));
+ if (evlist__ctl_fd_pos(evlist) < 0) {
+ evlist__set_ctl_fd_pos(evlist, -1);
pr_err("Failed to add ctl fd entry: %m\n");
return -1;
}
- evlist->ctl_fd.fd = fd;
- evlist->ctl_fd.ack = ack;
+ evlist__set_ctl_fd_fd(evlist, fd);
+ evlist__set_ctl_fd_ack(evlist, ack);
return 0;
}
bool evlist__ctlfd_initialized(struct evlist *evlist)
{
- return evlist->ctl_fd.pos >= 0;
+ return evlist__ctl_fd_pos(evlist) >= 0;
}
int evlist__finalize_ctlfd(struct evlist *evlist)
{
- struct pollfd *entries = evlist->core.pollfd.entries;
+ struct pollfd *entries = evlist__core(evlist)->pollfd.entries;
if (!evlist__ctlfd_initialized(evlist))
return 0;
- entries[evlist->ctl_fd.pos].fd = -1;
- entries[evlist->ctl_fd.pos].events = 0;
- entries[evlist->ctl_fd.pos].revents = 0;
+ entries[evlist__ctl_fd_pos(evlist)].fd = -1;
+ entries[evlist__ctl_fd_pos(evlist)].events = 0;
+ entries[evlist__ctl_fd_pos(evlist)].revents = 0;
- evlist->ctl_fd.pos = -1;
- evlist->ctl_fd.ack = -1;
- evlist->ctl_fd.fd = -1;
+ evlist__set_ctl_fd_pos(evlist, -1);
+ evlist__set_ctl_fd_ack(evlist, -1);
+ evlist__set_ctl_fd_fd(evlist, -1);
return 0;
}
@@ -2099,7 +2159,7 @@ static int evlist__ctlfd_recv(struct evlist *evlist, enum evlist_ctl_cmd *cmd,
data_size--;
do {
- err = read(evlist->ctl_fd.fd, &c, 1);
+ err = read(evlist__ctl_fd_fd(evlist), &c, 1);
if (err > 0) {
if (c == '\n' || c == '\0')
break;
@@ -2113,7 +2173,8 @@ static int evlist__ctlfd_recv(struct evlist *evlist, enum evlist_ctl_cmd *cmd,
if (errno == EAGAIN || errno == EWOULDBLOCK)
err = 0;
else
- pr_err("Failed to read from ctlfd %d: %m\n", evlist->ctl_fd.fd);
+ pr_err("Failed to read from ctlfd %d: %m\n",
+ evlist__ctl_fd_fd(evlist));
}
break;
} while (1);
@@ -2151,13 +2212,13 @@ int evlist__ctlfd_ack(struct evlist *evlist)
{
int err;
- if (evlist->ctl_fd.ack == -1)
+ if (evlist__ctl_fd_ack(evlist) == -1)
return 0;
- err = write(evlist->ctl_fd.ack, EVLIST_CTL_CMD_ACK_TAG,
+ err = write(evlist__ctl_fd_ack(evlist), EVLIST_CTL_CMD_ACK_TAG,
sizeof(EVLIST_CTL_CMD_ACK_TAG));
if (err == -1)
- pr_err("failed to write to ctl_ack_fd %d: %m\n", evlist->ctl_fd.ack);
+ pr_err("failed to write to ctl_ack_fd %d: %m\n", evlist__ctl_fd_ack(evlist));
return err;
}
@@ -2258,8 +2319,8 @@ int evlist__ctlfd_process(struct evlist *evlist, enum evlist_ctl_cmd *cmd)
{
int err = 0;
char cmd_data[EVLIST_CTL_CMD_MAX_LEN];
- int ctlfd_pos = evlist->ctl_fd.pos;
- struct pollfd *entries = evlist->core.pollfd.entries;
+ int ctlfd_pos = evlist__ctl_fd_pos(evlist);
+ struct pollfd *entries = evlist__core(evlist)->pollfd.entries;
if (!evlist__ctlfd_initialized(evlist) || !entries[ctlfd_pos].revents)
return 0;
@@ -2430,14 +2491,15 @@ int evlist__parse_event_enable_time(struct evlist *evlist, struct record_opts *o
goto free_eet_times;
}
- eet->pollfd_pos = perf_evlist__add_pollfd(&evlist->core, eet->timerfd, NULL, POLLIN, flags);
+ eet->pollfd_pos = perf_evlist__add_pollfd(evlist__core(evlist), eet->timerfd,
+ NULL, POLLIN, flags);
if (eet->pollfd_pos < 0) {
err = eet->pollfd_pos;
goto close_timerfd;
}
eet->evlist = evlist;
- evlist->eet = eet;
+ RC_CHK_ACCESS(evlist)->eet = eet;
opts->target.initial_delay = eet->times[0].start;
return 0;
@@ -2487,7 +2549,7 @@ int event_enable_timer__process(struct event_enable_timer *eet)
if (!eet)
return 0;
- entries = eet->evlist->core.pollfd.entries;
+ entries = evlist__core(eet->evlist)->pollfd.entries;
revents = entries[eet->pollfd_pos].revents;
entries[eet->pollfd_pos].revents = 0;
@@ -2523,7 +2585,7 @@ int event_enable_timer__process(struct event_enable_timer *eet)
return 0;
}
-void event_enable_timer__exit(struct event_enable_timer **ep)
+static void event_enable_timer__exit(struct event_enable_timer **ep)
{
if (!ep || !*ep)
return;
@@ -2627,7 +2689,7 @@ void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_lis
}
/* Should uniquify be disabled for the evlist? */
-static bool evlist__disable_uniquify(const struct evlist *evlist)
+static bool evlist__disable_uniquify(struct evlist *evlist)
{
struct evsel *counter;
struct perf_pmu *last_pmu = NULL;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index a9820a6aad5b..838e263b76f3 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -14,6 +14,7 @@
#include <api/fd/array.h>
#include <internal/evlist.h>
#include <internal/evsel.h>
+#include <internal/rc_check.h>
#include <perf/evlist.h>
#include "affinity.h"
@@ -59,7 +60,7 @@ enum bkw_mmap_state {
struct event_enable_timer;
-struct evlist {
+DECLARE_RC_STRUCT(evlist) {
struct perf_evlist core;
refcount_t refcnt;
bool enabled;
@@ -86,7 +87,7 @@ struct evlist {
struct {
pthread_t th;
volatile int done;
- } thread;
+ } sb_thread;
struct {
int fd; /* control file descriptor */
int ack; /* ack file descriptor for control commands */
@@ -107,6 +108,227 @@ struct evsel_str_handler {
void *handler;
};
+static inline struct perf_evlist *evlist__core(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->core;
+}
+
+static inline const struct perf_evlist *evlist__const_core(const struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->core;
+}
+
+static inline int evlist__nr_entries(const struct evlist *evlist)
+{
+ return evlist__const_core(evlist)->nr_entries;
+}
+
+static inline bool evlist__enabled(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->enabled;
+}
+
+static inline void evlist__set_enabled(struct evlist *evlist, bool enabled)
+{
+ RC_CHK_ACCESS(evlist)->enabled = enabled;
+}
+
+static inline bool evlist__no_affinity(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->no_affinity;
+}
+
+static inline void evlist__set_no_affinity(struct evlist *evlist, bool no_affinity)
+{
+ RC_CHK_ACCESS(evlist)->no_affinity = no_affinity;
+}
+
+static inline int evlist__sb_thread_done(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->sb_thread.done;
+}
+
+static inline void evlist__set_sb_thread_done(struct evlist *evlist, int done)
+{
+ RC_CHK_ACCESS(evlist)->sb_thread.done = done;
+}
+
+static inline pthread_t *evlist__sb_thread_th(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->sb_thread.th;
+}
+
+static inline int evlist__id_pos(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->id_pos;
+}
+
+static inline int evlist__is_pos(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->is_pos;
+}
+
+static inline struct event_enable_timer *evlist__event_enable_timer(struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->eet;
+}
+
+static inline enum bkw_mmap_state evlist__bkw_mmap_state(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->bkw_mmap_state;
+}
+
+static inline void evlist__set_bkw_mmap_state(struct evlist *evlist, enum bkw_mmap_state state)
+{
+ RC_CHK_ACCESS(evlist)->bkw_mmap_state = state;
+}
+
+static inline struct mmap *evlist__mmap(struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->mmap;
+}
+
+static inline struct mmap *evlist__overwrite_mmap(struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->overwrite_mmap;
+}
+
+static inline struct events_stats *evlist__stats(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->stats;
+}
+
+static inline u64 evlist__first_sample_time(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->first_sample_time;
+}
+
+static inline void evlist__set_first_sample_time(struct evlist *evlist, u64 first)
+{
+ RC_CHK_ACCESS(evlist)->first_sample_time = first;
+}
+
+static inline u64 evlist__last_sample_time(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->last_sample_time;
+}
+
+static inline void evlist__set_last_sample_time(struct evlist *evlist, u64 last)
+{
+ RC_CHK_ACCESS(evlist)->last_sample_time = last;
+}
+
+static inline int evlist__nr_br_cntr(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->nr_br_cntr;
+}
+
+static inline void evlist__set_nr_br_cntr(struct evlist *evlist, int nr)
+{
+ RC_CHK_ACCESS(evlist)->nr_br_cntr = nr;
+}
+
+static inline struct perf_session *evlist__session(struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->session;
+}
+
+static inline void evlist__set_session(struct evlist *evlist, struct perf_session *session)
+{
+ RC_CHK_ACCESS(evlist)->session = session;
+}
+
+static inline void (*evlist__trace_event_sample_raw(struct evlist *evlist))
+ (struct evlist *evlist,
+ union perf_event *event,
+ struct perf_sample *sample)
+{
+ return RC_CHK_ACCESS(evlist)->trace_event_sample_raw;
+}
+
+static inline void evlist__set_trace_event_sample_raw(struct evlist *evlist,
+ void (*fun)(struct evlist *evlist,
+ union perf_event *event,
+ struct perf_sample *sample))
+{
+ RC_CHK_ACCESS(evlist)->trace_event_sample_raw = fun;
+}
+
+static inline pid_t evlist__workload_pid(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->workload.pid;
+}
+
+static inline void evlist__set_workload_pid(struct evlist *evlist, pid_t pid)
+{
+ RC_CHK_ACCESS(evlist)->workload.pid = pid;
+}
+
+static inline int evlist__workload_cork_fd(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->workload.cork_fd;
+}
+
+static inline void evlist__set_workload_cork_fd(struct evlist *evlist, int cork_fd)
+{
+ RC_CHK_ACCESS(evlist)->workload.cork_fd = cork_fd;
+}
+
+static inline int evlist__ctl_fd_fd(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->ctl_fd.fd;
+}
+
+static inline void evlist__set_ctl_fd_fd(struct evlist *evlist, int fd)
+{
+ RC_CHK_ACCESS(evlist)->ctl_fd.fd = fd;
+}
+
+static inline int evlist__ctl_fd_ack(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->ctl_fd.ack;
+}
+
+static inline void evlist__set_ctl_fd_ack(struct evlist *evlist, int ack)
+{
+ RC_CHK_ACCESS(evlist)->ctl_fd.ack = ack;
+}
+
+static inline int evlist__ctl_fd_pos(const struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->ctl_fd.pos;
+}
+
+static inline void evlist__set_ctl_fd_pos(struct evlist *evlist, int pos)
+{
+ RC_CHK_ACCESS(evlist)->ctl_fd.pos = pos;
+}
+
+static inline refcount_t *evlist__refcnt(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->refcnt;
+}
+
+static inline struct rblist *evlist__metric_events(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->metric_events;
+}
+
+static inline struct list_head *evlist__deferred_samples(struct evlist *evlist)
+{
+ return &RC_CHK_ACCESS(evlist)->deferred_samples;
+}
+
+static inline struct evsel *evlist__selected(struct evlist *evlist)
+{
+ return RC_CHK_ACCESS(evlist)->selected;
+}
+
+static inline void evlist__set_selected(struct evlist *evlist, struct evsel *evsel)
+{
+ RC_CHK_ACCESS(evlist)->selected = evsel;
+}
+
struct evlist *evlist__new(void);
struct evlist *evlist__new_default(const struct target *target, bool sample_callchains);
struct evlist *evlist__new_dummy(void);
@@ -200,8 +422,8 @@ int evlist__mmap_ex(struct evlist *evlist, unsigned int pages,
unsigned int auxtrace_pages,
bool auxtrace_overwrite, int nr_cblocks,
int affinity, int flush, int comp_level);
-int evlist__mmap(struct evlist *evlist, unsigned int pages);
-void evlist__munmap(struct evlist *evlist);
+int evlist__do_mmap(struct evlist *evlist, unsigned int pages);
+void evlist__do_munmap(struct evlist *evlist);
size_t evlist__mmap_size(unsigned long pages);
@@ -213,8 +435,6 @@ void evlist__enable_evsel(struct evlist *evlist, char *evsel_name);
void evlist__disable_non_dummy(struct evlist *evlist);
void evlist__enable_non_dummy(struct evlist *evlist);
-void evlist__set_selected(struct evlist *evlist, struct evsel *evsel);
-
int evlist__create_maps(struct evlist *evlist, struct target *target);
int evlist__apply_filters(struct evlist *evlist, struct evsel **err_evsel,
struct target *target);
@@ -237,26 +457,26 @@ void evlist__splice_list_tail(struct evlist *evlist, struct list_head *list);
static inline bool evlist__empty(struct evlist *evlist)
{
- return list_empty(&evlist->core.entries);
+ return list_empty(&evlist__core(evlist)->entries);
}
static inline struct evsel *evlist__first(struct evlist *evlist)
{
- struct perf_evsel *evsel = perf_evlist__first(&evlist->core);
+ struct perf_evsel *evsel = perf_evlist__first(evlist__core(evlist));
return container_of(evsel, struct evsel, core);
}
static inline struct evsel *evlist__last(struct evlist *evlist)
{
- struct perf_evsel *evsel = perf_evlist__last(&evlist->core);
+ struct perf_evsel *evsel = perf_evlist__last(evlist__core(evlist));
return container_of(evsel, struct evsel, core);
}
static inline int evlist__nr_groups(struct evlist *evlist)
{
- return perf_evlist__nr_groups(&evlist->core);
+ return perf_evlist__nr_groups(evlist__core(evlist));
}
int evlist__strerror_open(struct evlist *evlist, int err, char *buf, size_t size);
@@ -279,7 +499,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel);
* @evsel: struct evsel iterator
*/
#define evlist__for_each_entry(evlist, evsel) \
- __evlist__for_each_entry(&(evlist)->core.entries, evsel)
+ __evlist__for_each_entry(&evlist__core(evlist)->entries, evsel)
/**
* __evlist__for_each_entry_continue - continue iteration thru all the evsels
@@ -295,7 +515,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel);
* @evsel: struct evsel iterator
*/
#define evlist__for_each_entry_continue(evlist, evsel) \
- __evlist__for_each_entry_continue(&(evlist)->core.entries, evsel)
+ __evlist__for_each_entry_continue(&evlist__core(evlist)->entries, evsel)
/**
* __evlist__for_each_entry_from - continue iteration from @evsel (included)
@@ -311,7 +531,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel);
* @evsel: struct evsel iterator
*/
#define evlist__for_each_entry_from(evlist, evsel) \
- __evlist__for_each_entry_from(&(evlist)->core.entries, evsel)
+ __evlist__for_each_entry_from(&evlist__core(evlist)->entries, evsel)
/**
* __evlist__for_each_entry_reverse - iterate thru all the evsels in reverse order
@@ -327,7 +547,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel);
* @evsel: struct evsel iterator
*/
#define evlist__for_each_entry_reverse(evlist, evsel) \
- __evlist__for_each_entry_reverse(&(evlist)->core.entries, evsel)
+ __evlist__for_each_entry_reverse(&evlist__core(evlist)->entries, evsel)
/**
* __evlist__for_each_entry_safe - safely iterate thru all the evsels
@@ -345,7 +565,7 @@ void evlist__to_front(struct evlist *evlist, struct evsel *move_evsel);
* @tmp: struct evsel temp iterator
*/
#define evlist__for_each_entry_safe(evlist, tmp, evsel) \
- __evlist__for_each_entry_safe(&(evlist)->core.entries, tmp, evsel)
+ __evlist__for_each_entry_safe(&evlist__core(evlist)->entries, tmp, evsel)
/** Iterator state for evlist__for_each_cpu */
struct evlist_cpu_iterator {
@@ -451,7 +671,6 @@ int evlist__ctlfd_ack(struct evlist *evlist);
int evlist__parse_event_enable_time(struct evlist *evlist, struct record_opts *opts,
const char *str, int unset);
int event_enable_timer__start(struct event_enable_timer *eet);
-void event_enable_timer__exit(struct event_enable_timer **ep);
int event_enable_timer__process(struct event_enable_timer *eet);
struct evsel *evlist__find_evsel(struct evlist *evlist, int idx);
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index a54aae079c22..3015b9b4b4da 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -3178,7 +3178,7 @@ static inline bool evsel__has_branch_counters(const struct evsel *evsel)
if (!leader || !evsel->evlist)
return false;
- if (evsel->evlist->nr_br_cntr < 0)
+ if (evlist__nr_br_cntr(evsel->evlist) < 0)
evlist__update_br_cntr(evsel->evlist);
if (leader->br_cntr_nr > 0)
@@ -4162,7 +4162,7 @@ int evsel__open_strerror(struct evsel *evsel, struct target *target,
struct perf_session *evsel__session(struct evsel *evsel)
{
- return evsel && evsel->evlist ? evsel->evlist->session : NULL;
+ return evsel && evsel->evlist ? evlist__session(evsel->evlist) : NULL;
}
struct perf_env *evsel__env(struct evsel *evsel)
@@ -4187,7 +4187,7 @@ static int store_evsel_ids(struct evsel *evsel, struct evlist *evlist)
thread++) {
int fd = FD(evsel, cpu_map_idx, thread);
- if (perf_evlist__id_add_fd(&evlist->core, &evsel->core,
+ if (perf_evlist__id_add_fd(evlist__core(evlist), &evsel->core,
cpu_map_idx, thread, fd) < 0)
return -1;
}
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 35b1bbca9036..acebd483b9e4 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -501,7 +501,7 @@ for ((_evsel) = list_entry((_leader)->core.node.next, struct evsel, core.node);
(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
#define for_each_group_member(_evsel, _leader) \
- for_each_group_member_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ for_each_group_member_head(_evsel, _leader, &evlist__core((_leader)->evlist)->entries)
/* Iterates group WITH the leader. */
#define for_each_group_evsel_head(_evsel, _leader, _head) \
@@ -511,7 +511,7 @@ for ((_evsel) = _leader; \
(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
#define for_each_group_evsel(_evsel, _leader) \
- for_each_group_evsel_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ for_each_group_evsel_head(_evsel, _leader, &evlist__core((_leader)->evlist)->entries)
static inline bool evsel__has_branch_callstack(const struct evsel *evsel)
{
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index f9887d2fc8ed..2469e2741bc4 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -323,7 +323,7 @@ static int write_tracing_data(struct feat_fd *ff,
return -1;
#ifdef HAVE_LIBTRACEEVENT
- return read_tracing_data(ff->fd, &evlist->core.entries);
+ return read_tracing_data(ff->fd, &evlist__core(evlist)->entries);
#else
pr_err("ERROR: Trying to write tracing data without libtraceevent support.\n");
return -1;
@@ -397,7 +397,7 @@ static int write_e_machine(struct feat_fd *ff,
{
/* e_machine expanded from 16 to 32-bits for alignment. */
uint32_t e_flags;
- uint32_t e_machine = perf_session__e_machine(evlist->session, &e_flags);
+ uint32_t e_machine = perf_session__e_machine(evlist__session(evlist), &e_flags);
int ret;
ret = do_write(ff, &e_machine, sizeof(e_machine));
@@ -533,7 +533,7 @@ static int write_event_desc(struct feat_fd *ff,
u32 nre, nri, sz;
int ret;
- nre = evlist->core.nr_entries;
+ nre = evlist__nr_entries(evlist);
/*
* write number of events
@@ -915,7 +915,7 @@ int __weak get_cpuid(char *buffer __maybe_unused, size_t sz __maybe_unused,
static int write_cpuid(struct feat_fd *ff, struct evlist *evlist)
{
- struct perf_cpu cpu = perf_cpu_map__min(evlist->core.all_cpus);
+ struct perf_cpu cpu = perf_cpu_map__min(evlist__core(evlist)->all_cpus);
char buffer[64];
int ret;
@@ -1348,14 +1348,14 @@ static int write_sample_time(struct feat_fd *ff,
struct evlist *evlist)
{
int ret;
+ u64 data = evlist__first_sample_time(evlist);
- ret = do_write(ff, &evlist->first_sample_time,
- sizeof(evlist->first_sample_time));
+ ret = do_write(ff, &data, sizeof(data));
if (ret < 0)
return ret;
- return do_write(ff, &evlist->last_sample_time,
- sizeof(evlist->last_sample_time));
+ data = evlist__last_sample_time(evlist);
+ return do_write(ff, &data, sizeof(data));
}
@@ -2425,16 +2425,16 @@ static void print_sample_time(struct feat_fd *ff, FILE *fp)
session = container_of(ff->ph, struct perf_session, header);
- timestamp__scnprintf_usec(session->evlist->first_sample_time,
+ timestamp__scnprintf_usec(evlist__first_sample_time(session->evlist),
time_buf, sizeof(time_buf));
fprintf(fp, "# time of first sample : %s\n", time_buf);
- timestamp__scnprintf_usec(session->evlist->last_sample_time,
+ timestamp__scnprintf_usec(evlist__last_sample_time(session->evlist),
time_buf, sizeof(time_buf));
fprintf(fp, "# time of last sample : %s\n", time_buf);
- d = (double)(session->evlist->last_sample_time -
- session->evlist->first_sample_time) / NSEC_PER_MSEC;
+ d = (double)(evlist__last_sample_time(session->evlist) -
+ evlist__first_sample_time(session->evlist)) / NSEC_PER_MSEC;
fprintf(fp, "# sample duration : %10.3f ms\n", d);
}
@@ -3326,8 +3326,8 @@ static int process_sample_time(struct feat_fd *ff, void *data __maybe_unused)
if (ret)
return -1;
- session->evlist->first_sample_time = first_sample_time;
- session->evlist->last_sample_time = last_sample_time;
+ evlist__set_first_sample_time(session->evlist, first_sample_time);
+ evlist__set_last_sample_time(session->evlist, last_sample_time);
return 0;
}
@@ -4396,7 +4396,7 @@ int perf_session__write_header(struct perf_session *session,
/*write_attrs_after_data=*/false);
}
-size_t perf_session__data_offset(const struct evlist *evlist)
+size_t perf_session__data_offset(struct evlist *evlist)
{
struct evsel *evsel;
size_t data_offset;
@@ -4405,7 +4405,7 @@ size_t perf_session__data_offset(const struct evlist *evlist)
evlist__for_each_entry(evlist, evsel) {
data_offset += evsel->core.ids * sizeof(u64);
}
- data_offset += evlist->core.nr_entries * sizeof(struct perf_file_attr);
+ data_offset += evlist__nr_entries(evlist) * sizeof(struct perf_file_attr);
return data_offset;
}
@@ -4849,7 +4849,7 @@ int perf_session__read_header(struct perf_session *session)
if (session->evlist == NULL)
return -ENOMEM;
- session->evlist->session = session;
+ evlist__set_session(session->evlist, session);
session->machines.host.env = &header->env;
/*
@@ -4933,7 +4933,8 @@ int perf_session__read_header(struct perf_session *session)
if (perf_header__getbuffer64(header, fd, &f_id, sizeof(f_id)))
goto out_errno;
- perf_evlist__id_add(&session->evlist->core, &evsel->core, 0, j, f_id);
+ perf_evlist__id_add(evlist__core(session->evlist),
+ &evsel->core, 0, j, f_id);
}
lseek(fd, tmp, SEEK_SET);
@@ -5126,7 +5127,7 @@ int perf_event__process_attr(const struct perf_tool *tool __maybe_unused,
ids = perf_record_header_attr_id(event);
for (i = 0; i < n_ids; i++) {
- perf_evlist__id_add(&evlist->core, &evsel->core, 0, i, ids[i]);
+ perf_evlist__id_add(evlist__core(evlist), &evsel->core, 0, i, ids[i]);
}
return 0;
diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h
index 86b1a72026d3..5e03f884b7cc 100644
--- a/tools/perf/util/header.h
+++ b/tools/perf/util/header.h
@@ -158,7 +158,7 @@ int perf_session__inject_header(struct perf_session *session,
struct feat_copier *fc,
bool write_attrs_after_data);
-size_t perf_session__data_offset(const struct evlist *evlist);
+size_t perf_session__data_offset(struct evlist *evlist);
void perf_header__set_feat(struct perf_header *header, int feat);
void perf_header__clear_feat(struct perf_header *header, int feat);
diff --git a/tools/perf/util/intel-tpebs.c b/tools/perf/util/intel-tpebs.c
index 8b615dc94e9e..4c1096ba9dcd 100644
--- a/tools/perf/util/intel-tpebs.c
+++ b/tools/perf/util/intel-tpebs.c
@@ -95,8 +95,9 @@ static int evsel__tpebs_start_perf_record(struct evsel *evsel)
record_argv[i++] = "-o";
record_argv[i++] = PERF_DATA;
- if (!perf_cpu_map__is_any_cpu_or_is_empty(evsel->evlist->core.user_requested_cpus)) {
- cpu_map__snprint(evsel->evlist->core.user_requested_cpus, cpumap_buf,
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(
+ evlist__core(evsel->evlist)->user_requested_cpus)) {
+ cpu_map__snprint(evlist__core(evsel->evlist)->user_requested_cpus, cpumap_buf,
sizeof(cpumap_buf));
record_argv[i++] = "-C";
record_argv[i++] = cpumap_buf;
@@ -172,7 +173,7 @@ static bool should_ignore_sample(const struct perf_sample *sample, const struct
if (t->evsel->evlist == NULL)
return true;
- workload_pid = t->evsel->evlist->workload.pid;
+ workload_pid = evlist__workload_pid(t->evsel->evlist);
if (workload_pid < 0 || workload_pid == sample_pid)
return false;
diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index 191ec2d8a250..26306d5fc72e 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -1494,7 +1494,7 @@ static int parse_groups(struct evlist *perf_evlist,
goto out;
}
- me = metricgroup__lookup(&perf_evlist->metric_events,
+ me = metricgroup__lookup(evlist__metric_events(perf_evlist),
pick_display_evsel(&metric_list, metric_events),
/*create=*/true);
@@ -1545,13 +1545,13 @@ static int parse_groups(struct evlist *perf_evlist,
if (combined_evlist) {
- evlist__splice_list_tail(perf_evlist, &combined_evlist->core.entries);
+ evlist__splice_list_tail(perf_evlist, &evlist__core(combined_evlist)->entries);
evlist__put(combined_evlist);
}
list_for_each_entry(m, &metric_list, nd) {
if (m->evlist)
- evlist__splice_list_tail(perf_evlist, &m->evlist->core.entries);
+ evlist__splice_list_tail(perf_evlist, &evlist__core(m->evlist)->entries);
}
out:
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index f0809be63ad8..3682053b23cb 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -2267,7 +2267,7 @@ int __parse_events(struct evlist *evlist, const char *str, const char *pmu_filte
{
struct parse_events_state parse_state = {
.list = LIST_HEAD_INIT(parse_state.list),
- .idx = evlist->core.nr_entries,
+ .idx = evlist__nr_entries(evlist),
.error = err,
.stoken = PE_START_EVENTS,
.fake_pmu = fake_pmu,
@@ -2541,7 +2541,7 @@ foreach_evsel_in_last_glob(struct evlist *evlist,
*
* So no need to WARN here, let *func do this.
*/
- if (evlist->core.nr_entries > 0)
+ if (evlist__nr_entries(evlist) > 0)
last = evlist__last(evlist);
do {
@@ -2551,7 +2551,7 @@ foreach_evsel_in_last_glob(struct evlist *evlist,
if (!last)
return 0;
- if (last->core.node.prev == &evlist->core.entries)
+ if (last->core.node.prev == &evlist__core(evlist)->entries)
return 0;
last = list_entry(last->core.node.prev, struct evsel, core.node);
} while (!last->cmdline_group_boundary);
diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
index 5f53c2f68a96..f80d6b0df47a 100644
--- a/tools/perf/util/pfm.c
+++ b/tools/perf/util/pfm.c
@@ -85,7 +85,7 @@ int parse_libpfm_events_option(const struct option *opt, const char *str,
}
pmu = perf_pmus__find_by_type((unsigned int)attr.type);
- evsel = parse_events__add_event(evlist->core.nr_entries,
+ evsel = parse_events__add_event(evlist__nr_entries(evlist),
&attr, q, /*metric_id=*/NULL,
pmu);
if (evsel == NULL)
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index 8585ae992e6b..0162d8a625de 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1455,7 +1455,7 @@ static int pyrf_evlist__init(struct pyrf_evlist *pevlist,
}
threads = ((struct pyrf_thread_map *)pthreads)->threads;
cpus = ((struct pyrf_cpu_map *)pcpus)->cpus;
- perf_evlist__set_maps(&pevlist->evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(pevlist->evlist), cpus, threads);
return 0;
}
@@ -1471,7 +1471,7 @@ static PyObject *pyrf_evlist__all_cpus(struct pyrf_evlist *pevlist)
struct pyrf_cpu_map *pcpu_map = PyObject_New(struct pyrf_cpu_map, &pyrf_cpu_map__type);
if (pcpu_map)
- pcpu_map->cpus = perf_cpu_map__get(pevlist->evlist->core.all_cpus);
+ pcpu_map->cpus = perf_cpu_map__get(evlist__core(pevlist->evlist)->all_cpus);
return (PyObject *)pcpu_map;
}
@@ -1484,7 +1484,7 @@ static PyObject *pyrf_evlist__metrics(struct pyrf_evlist *pevlist)
if (!list)
return NULL;
- for (node = rb_first_cached(&pevlist->evlist->metric_events.entries); node;
+ for (node = rb_first_cached(&evlist__metric_events(pevlist->evlist)->entries); node;
node = rb_next(node)) {
struct metric_event *me = container_of(node, struct metric_event, nd);
struct list_head *pos;
@@ -1590,7 +1590,7 @@ static PyObject *pyrf_evlist__compute_metric(struct pyrf_evlist *pevlist,
if (!PyArg_ParseTuple(args, "sii", &metric, &cpu, &thread))
return NULL;
- for (node = rb_first_cached(&pevlist->evlist->metric_events.entries);
+ for (node = rb_first_cached(&evlist__metric_events(pevlist->evlist)->entries);
mexp == NULL && node;
node = rb_next(node)) {
struct metric_event *me = container_of(node, struct metric_event, nd);
@@ -1659,7 +1659,7 @@ static PyObject *pyrf_evlist__mmap(struct pyrf_evlist *pevlist,
&pages, &overwrite))
return NULL;
- if (evlist__mmap(evlist, pages) < 0) {
+ if (evlist__do_mmap(evlist, pages) < 0) {
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
}
@@ -1695,9 +1695,9 @@ static PyObject *pyrf_evlist__get_pollfd(struct pyrf_evlist *pevlist,
PyObject *list = PyList_New(0);
int i;
- for (i = 0; i < evlist->core.pollfd.nr; ++i) {
+ for (i = 0; i < evlist__core(evlist)->pollfd.nr; ++i) {
PyObject *file;
- file = PyFile_FromFd(evlist->core.pollfd.entries[i].fd, "perf", "r", -1,
+ file = PyFile_FromFd(evlist__core(evlist)->pollfd.entries[i].fd, "perf", "r", -1,
NULL, NULL, NULL, 0);
if (file == NULL)
goto free_list;
@@ -1729,18 +1729,18 @@ static PyObject *pyrf_evlist__add(struct pyrf_evlist *pevlist,
Py_INCREF(pevsel);
evsel = ((struct pyrf_evsel *)pevsel)->evsel;
- evsel->core.idx = evlist->core.nr_entries;
+ evsel->core.idx = evlist__nr_entries(evlist);
evlist__add(evlist, evsel__get(evsel));
- return Py_BuildValue("i", evlist->core.nr_entries);
+ return Py_BuildValue("i", evlist__nr_entries(evlist));
}
static struct mmap *get_md(struct evlist *evlist, int cpu)
{
int i;
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- struct mmap *md = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ struct mmap *md = &evlist__mmap(evlist)[i];
if (md->core.cpu.cpu == cpu)
return md;
@@ -1955,7 +1955,7 @@ static Py_ssize_t pyrf_evlist__length(PyObject *obj)
{
struct pyrf_evlist *pevlist = (void *)obj;
- return pevlist->evlist->core.nr_entries;
+ return evlist__nr_entries(pevlist->evlist);
}
static PyObject *pyrf_evsel__from_evsel(struct evsel *evsel)
@@ -1974,7 +1974,7 @@ static PyObject *pyrf_evlist__item(PyObject *obj, Py_ssize_t i)
struct pyrf_evlist *pevlist = (void *)obj;
struct evsel *pos;
- if (i >= pevlist->evlist->core.nr_entries) {
+ if (i >= evlist__nr_entries(pevlist->evlist)) {
PyErr_SetString(PyExc_IndexError, "Index out of range");
return NULL;
}
@@ -2169,7 +2169,7 @@ static PyObject *pyrf__parse_events(PyObject *self, PyObject *args)
cpus = pcpus ? ((struct pyrf_cpu_map *)pcpus)->cpus : NULL;
parse_events_error__init(&err);
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
if (parse_events(evlist, input, &err)) {
parse_events_error__print(&err, input);
PyErr_SetFromErrno(PyExc_OSError);
@@ -2202,7 +2202,7 @@ static PyObject *pyrf__parse_metrics(PyObject *self, PyObject *args)
threads = pthreads ? ((struct pyrf_thread_map *)pthreads)->threads : NULL;
cpus = pcpus ? ((struct pyrf_cpu_map *)pcpus)->cpus : NULL;
- perf_evlist__set_maps(&evlist->core, cpus, threads);
+ perf_evlist__set_maps(evlist__core(evlist), cpus, threads);
ret = metricgroup__parse_groups(evlist, pmu ?: "all", input,
/*metric_no_group=*/ false,
/*metric_no_merge=*/ false,
diff --git a/tools/perf/util/record.c b/tools/perf/util/record.c
index 8a5fc7d5e43c..38e8aee3106b 100644
--- a/tools/perf/util/record.c
+++ b/tools/perf/util/record.c
@@ -99,7 +99,7 @@ void evlist__config(struct evlist *evlist, struct record_opts *opts, struct call
bool use_comm_exec;
bool sample_id = opts->sample_id;
- if (perf_cpu_map__cpu(evlist->core.user_requested_cpus, 0).cpu < 0)
+ if (perf_cpu_map__cpu(evlist__core(evlist)->user_requested_cpus, 0).cpu < 0)
opts->no_inherit = true;
use_comm_exec = perf_can_comm_exec();
@@ -122,7 +122,7 @@ void evlist__config(struct evlist *evlist, struct record_opts *opts, struct call
*/
use_sample_identifier = perf_can_sample_identifier();
sample_id = true;
- } else if (evlist->core.nr_entries > 1) {
+ } else if (evlist__nr_entries(evlist) > 1) {
struct evsel *first = evlist__first(evlist);
evlist__for_each_entry(evlist, evsel) {
@@ -237,7 +237,8 @@ bool evlist__can_select_event(struct evlist *evlist, const char *str)
evsel = evlist__last(temp_evlist);
- if (!evlist || perf_cpu_map__is_any_cpu_or_is_empty(evlist->core.user_requested_cpus)) {
+ if (!evlist ||
+ perf_cpu_map__is_any_cpu_or_is_empty(evlist__core(evlist)->user_requested_cpus)) {
struct perf_cpu_map *cpus = perf_cpu_map__new_online_cpus();
if (cpus)
@@ -245,7 +246,7 @@ bool evlist__can_select_event(struct evlist *evlist, const char *str)
perf_cpu_map__put(cpus);
} else {
- cpu = perf_cpu_map__cpu(evlist->core.user_requested_cpus, 0);
+ cpu = perf_cpu_map__cpu(evlist__core(evlist)->user_requested_cpus, 0);
}
while (1) {
diff --git a/tools/perf/util/sample-raw.c b/tools/perf/util/sample-raw.c
index bcf442574d6e..ec33b864431c 100644
--- a/tools/perf/util/sample-raw.c
+++ b/tools/perf/util/sample-raw.c
@@ -18,10 +18,10 @@ void evlist__init_trace_event_sample_raw(struct evlist *evlist, struct perf_env
const char *cpuid = perf_env__cpuid(env);
if (arch_pf && !strcmp("s390", arch_pf))
- evlist->trace_event_sample_raw = evlist__s390_sample_raw;
+ evlist__set_trace_event_sample_raw(evlist, evlist__s390_sample_raw);
else if (arch_pf && !strcmp("x86", arch_pf) &&
cpuid && strstarts(cpuid, "AuthenticAMD") &&
evlist__has_amd_ibs(evlist)) {
- evlist->trace_event_sample_raw = evlist__amd_sample_raw;
+ evlist__set_trace_event_sample_raw(evlist, evlist__amd_sample_raw);
}
}
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index deb5b9dfe44c..f9fafbb80a9d 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -204,7 +204,7 @@ struct perf_session *__perf_session__new(struct perf_data *data,
session->machines.host.env = host_env;
}
if (session->evlist)
- session->evlist->session = session;
+ evlist__set_session(session->evlist, session);
session->machines.host.single_address_space =
perf_env__single_address_space(session->machines.host.env);
@@ -1099,8 +1099,8 @@ static void dump_event(struct evlist *evlist, union perf_event *event,
file_offset, file_path, event->header.size, event->header.type);
trace_event(event);
- if (event->header.type == PERF_RECORD_SAMPLE && evlist->trace_event_sample_raw)
- evlist->trace_event_sample_raw(evlist, event, sample);
+ if (event->header.type == PERF_RECORD_SAMPLE && evlist__trace_event_sample_raw(evlist))
+ evlist__trace_event_sample_raw(evlist)(evlist, event, sample);
if (sample)
evlist__print_tstamp(evlist, event, sample);
@@ -1279,7 +1279,7 @@ static int deliver_sample_value(struct evlist *evlist,
}
if (!storage || sid->evsel == NULL) {
- ++evlist->stats.nr_unknown_id;
+ ++evlist__stats(evlist)->nr_unknown_id;
return 0;
}
@@ -1371,6 +1371,8 @@ static int evlist__deliver_deferred_callchain(struct evlist *evlist,
struct evsel *saved_evsel = sample->evsel;
sample->evsel = evlist__id2evsel(evlist, sample->id);
+ if (sample->evsel)
+ sample->evsel = evsel__get(sample->evsel);
ret = tool->callchain_deferred(tool, event, sample,
sample->evsel, machine);
evsel__put(sample->evsel);
@@ -1378,7 +1380,7 @@ static int evlist__deliver_deferred_callchain(struct evlist *evlist,
return ret;
}
- list_for_each_entry_safe(de, tmp, &evlist->deferred_samples, list) {
+ list_for_each_entry_safe(de, tmp, evlist__deferred_samples(evlist), list) {
struct perf_sample orig_sample;
perf_sample__init(&orig_sample, /*all=*/false);
@@ -1400,6 +1402,8 @@ static int evlist__deliver_deferred_callchain(struct evlist *evlist,
orig_sample.deferred_callchain = false;
orig_sample.evsel = evlist__id2evsel(evlist, orig_sample.id);
+ if (orig_sample.evsel)
+ orig_sample.evsel = evsel__get(orig_sample.evsel);
ret = evlist__deliver_sample(evlist, tool, de->event,
&orig_sample, orig_sample.evsel, machine);
@@ -1426,7 +1430,7 @@ static int session__flush_deferred_samples(struct perf_session *session,
struct deferred_event *de, *tmp;
int ret = 0;
- list_for_each_entry_safe(de, tmp, &evlist->deferred_samples, list) {
+ list_for_each_entry_safe(de, tmp, evlist__deferred_samples(evlist), list) {
struct perf_sample sample;
perf_sample__init(&sample, /*all=*/false);
@@ -1438,6 +1442,8 @@ static int session__flush_deferred_samples(struct perf_session *session,
}
sample.evsel = evlist__id2evsel(evlist, sample.id);
+ if (sample.evsel)
+ sample.evsel = evsel__get(sample.evsel);
ret = evlist__deliver_sample(evlist, tool, de->event,
&sample, sample.evsel, machine);
@@ -1464,22 +1470,24 @@ static int machines__deliver_event(struct machines *machines,
dump_event(evlist, event, file_offset, sample, file_path);
- if (!sample->evsel)
+ if (!sample->evsel) {
sample->evsel = evlist__id2evsel(evlist, sample->id);
- else
+ if (sample->evsel)
+ sample->evsel = evsel__get(sample->evsel);
+ } else {
assert(sample->evsel == evlist__id2evsel(evlist, sample->id));
-
+ }
evsel = sample->evsel;
machine = machines__find_for_cpumode(machines, event, sample);
switch (event->header.type) {
case PERF_RECORD_SAMPLE:
if (evsel == NULL) {
- ++evlist->stats.nr_unknown_id;
+ ++evlist__stats(evlist)->nr_unknown_id;
return 0;
}
if (machine == NULL) {
- ++evlist->stats.nr_unprocessable_samples;
+ ++evlist__stats(evlist)->nr_unprocessable_samples;
dump_sample(machine, evsel, event, sample);
return 0;
}
@@ -1497,7 +1505,7 @@ static int machines__deliver_event(struct machines *machines,
return -ENOMEM;
}
memcpy(de->event, event, sz);
- list_add_tail(&de->list, &evlist->deferred_samples);
+ list_add_tail(&de->list, evlist__deferred_samples(evlist));
return 0;
}
return evlist__deliver_sample(evlist, tool, event, sample, evsel, machine);
@@ -1505,7 +1513,7 @@ static int machines__deliver_event(struct machines *machines,
return tool->mmap(tool, event, sample, machine);
case PERF_RECORD_MMAP2:
if (event->header.misc & PERF_RECORD_MISC_PROC_MAP_PARSE_TIMEOUT)
- ++evlist->stats.nr_proc_map_timeout;
+ ++evlist__stats(evlist)->nr_proc_map_timeout;
return tool->mmap2(tool, event, sample, machine);
case PERF_RECORD_COMM:
return tool->comm(tool, event, sample, machine);
@@ -1519,13 +1527,13 @@ static int machines__deliver_event(struct machines *machines,
return tool->exit(tool, event, sample, machine);
case PERF_RECORD_LOST:
if (tool->lost == perf_event__process_lost)
- evlist->stats.total_lost += event->lost.lost;
+ evlist__stats(evlist)->total_lost += event->lost.lost;
return tool->lost(tool, event, sample, machine);
case PERF_RECORD_LOST_SAMPLES:
if (event->header.misc & PERF_RECORD_MISC_LOST_SAMPLES_BPF)
- evlist->stats.total_dropped_samples += event->lost_samples.lost;
+ evlist__stats(evlist)->total_dropped_samples += event->lost_samples.lost;
else if (tool->lost_samples == perf_event__process_lost_samples)
- evlist->stats.total_lost_samples += event->lost_samples.lost;
+ evlist__stats(evlist)->total_lost_samples += event->lost_samples.lost;
return tool->lost_samples(tool, event, sample, machine);
case PERF_RECORD_READ:
dump_read(evsel, event);
@@ -1537,11 +1545,11 @@ static int machines__deliver_event(struct machines *machines,
case PERF_RECORD_AUX:
if (tool->aux == perf_event__process_aux) {
if (event->aux.flags & PERF_AUX_FLAG_TRUNCATED)
- evlist->stats.total_aux_lost += 1;
+ evlist__stats(evlist)->total_aux_lost += 1;
if (event->aux.flags & PERF_AUX_FLAG_PARTIAL)
- evlist->stats.total_aux_partial += 1;
+ evlist__stats(evlist)->total_aux_partial += 1;
if (event->aux.flags & PERF_AUX_FLAG_COLLISION)
- evlist->stats.total_aux_collision += 1;
+ evlist__stats(evlist)->total_aux_collision += 1;
}
return tool->aux(tool, event, sample, machine);
case PERF_RECORD_ITRACE_START:
@@ -1562,7 +1570,7 @@ static int machines__deliver_event(struct machines *machines,
return evlist__deliver_deferred_callchain(evlist, tool, event,
sample, machine);
default:
- ++evlist->stats.nr_unknown_events;
+ ++evlist__stats(evlist)->nr_unknown_events;
return -1;
}
}
@@ -1728,7 +1736,7 @@ int perf_session__deliver_synth_event(struct perf_session *session,
struct evlist *evlist = session->evlist;
const struct perf_tool *tool = session->tool;
- events_stats__inc(&evlist->stats, event->header.type);
+ events_stats__inc(evlist__stats(evlist), event->header.type);
if (event->header.type >= PERF_RECORD_USER_TYPE_START)
return perf_session__process_user_event(session, event, 0, NULL);
@@ -1876,7 +1884,7 @@ static s64 perf_session__process_event(struct perf_session *session,
return event->header.size;
}
- events_stats__inc(&evlist->stats, event->header.type);
+ events_stats__inc(evlist__stats(evlist), event->header.type);
if (event->header.type >= PERF_RECORD_USER_TYPE_START)
return perf_session__process_user_event(session, event, file_offset, file_path);
@@ -1937,7 +1945,7 @@ perf_session__warn_order(const struct perf_session *session)
static void perf_session__warn_about_errors(const struct perf_session *session)
{
- const struct events_stats *stats = &session->evlist->stats;
+ const struct events_stats *stats = evlist__stats(session->evlist);
if (session->tool->lost == perf_event__process_lost &&
stats->nr_events[PERF_RECORD_LOST] != 0) {
@@ -2751,7 +2759,7 @@ size_t perf_session__fprintf_nr_events(struct perf_session *session, FILE *fp)
ret = fprintf(fp, "\nAggregated stats:%s\n", msg);
- ret += events_stats__fprintf(&session->evlist->stats, fp);
+ ret += events_stats__fprintf(evlist__stats(session->evlist), fp);
return ret;
}
diff --git a/tools/perf/util/sideband_evlist.c b/tools/perf/util/sideband_evlist.c
index b84a5463e039..c07dacf3c54c 100644
--- a/tools/perf/util/sideband_evlist.c
+++ b/tools/perf/util/sideband_evlist.c
@@ -22,7 +22,7 @@ int evlist__add_sb_event(struct evlist *evlist, struct perf_event_attr *attr,
attr->sample_id_all = 1;
}
- evsel = evsel__new_idx(attr, evlist->core.nr_entries);
+ evsel = evsel__new_idx(attr, evlist__nr_entries(evlist));
if (!evsel)
return -1;
@@ -49,14 +49,14 @@ static void *perf_evlist__poll_thread(void *arg)
while (!done) {
bool got_data = false;
- if (evlist->thread.done)
+ if (evlist__sb_thread_done(evlist))
draining = true;
if (!draining)
evlist__poll(evlist, 1000);
- for (i = 0; i < evlist->core.nr_mmaps; i++) {
- struct mmap *map = &evlist->mmap[i];
+ for (i = 0; i < evlist__core(evlist)->nr_mmaps; i++) {
+ struct mmap *map = &evlist__mmap(evlist)[i];
union perf_event *event;
if (perf_mmap__read_init(&map->core))
@@ -104,7 +104,7 @@ int evlist__start_sb_thread(struct evlist *evlist, struct target *target)
if (evlist__create_maps(evlist, target))
goto out_put_evlist;
- if (evlist->core.nr_entries > 1) {
+ if (evlist__nr_entries(evlist) > 1) {
bool can_sample_identifier = perf_can_sample_identifier();
evlist__for_each_entry(evlist, counter)
@@ -114,12 +114,12 @@ int evlist__start_sb_thread(struct evlist *evlist, struct target *target)
}
evlist__for_each_entry(evlist, counter) {
- if (evsel__open(counter, evlist->core.user_requested_cpus,
- evlist->core.threads) < 0)
+ if (evsel__open(counter, evlist__core(evlist)->user_requested_cpus,
+ evlist__core(evlist)->threads) < 0)
goto out_put_evlist;
}
- if (evlist__mmap(evlist, UINT_MAX))
+ if (evlist__do_mmap(evlist, UINT_MAX))
goto out_put_evlist;
evlist__for_each_entry(evlist, counter) {
@@ -127,8 +127,8 @@ int evlist__start_sb_thread(struct evlist *evlist, struct target *target)
goto out_put_evlist;
}
- evlist->thread.done = 0;
- if (pthread_create(&evlist->thread.th, NULL, perf_evlist__poll_thread, evlist))
+ evlist__set_sb_thread_done(evlist, 0);
+ if (pthread_create(evlist__sb_thread_th(evlist), NULL, perf_evlist__poll_thread, evlist))
goto out_put_evlist;
return 0;
@@ -143,7 +143,7 @@ void evlist__stop_sb_thread(struct evlist *evlist)
{
if (!evlist)
return;
- evlist->thread.done = 1;
- pthread_join(evlist->thread.th, NULL);
+ evlist__set_sb_thread_done(evlist, 1);
+ pthread_join(*evlist__sb_thread_th(evlist), NULL);
evlist__put(evlist);
}
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 0020089cb13c..f93154f8adfd 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -3481,7 +3481,7 @@ static struct evsel *find_evsel(struct evlist *evlist, char *event_name)
if (event_name[0] == '%') {
int nr = strtol(event_name+1, NULL, 0);
- if (nr > evlist->core.nr_entries)
+ if (nr > evlist__nr_entries(evlist))
return NULL;
evsel = evlist__first(evlist);
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 993f4c4b8f44..c8f49b56815d 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -669,7 +669,7 @@ static void print_metric_header(struct perf_stat_config *config,
/* In case of iostat, print metric header for first root port only */
if (config->iostat_run &&
- os->evsel->priv != os->evsel->evlist->selected->priv)
+ os->evsel->priv != evlist__selected(os->evsel->evlist)->priv)
return;
if (os->evsel->cgrp != os->cgrp)
@@ -1128,7 +1128,7 @@ static void print_no_aggr_metric(struct perf_stat_config *config,
unsigned int all_idx;
struct perf_cpu cpu;
- perf_cpu_map__for_each_cpu(cpu, all_idx, evlist->core.user_requested_cpus) {
+ perf_cpu_map__for_each_cpu(cpu, all_idx, evlist__core(evlist)->user_requested_cpus) {
struct evsel *counter;
bool first = true;
@@ -1545,7 +1545,7 @@ void evlist__print_counters(struct evlist *evlist, struct perf_stat_config *conf
evlist__uniquify_evsel_names(evlist, config);
if (config->iostat_run)
- evlist->selected = evlist__first(evlist);
+ evlist__set_selected(evlist, evlist__first(evlist));
if (config->interval)
prepare_timestamp(config, &os, ts);
diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 48524450326d..482cb70681ab 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -283,7 +283,7 @@ void *perf_stat__print_shadow_stats_metricgroup(struct perf_stat_config *config,
void *ctxp = out->ctx;
bool header_printed = false;
const char *name = NULL;
- struct rblist *metric_events = &evsel->evlist->metric_events;
+ struct rblist *metric_events = evlist__metric_events(evsel->evlist);
me = metricgroup__lookup(metric_events, evsel, false);
if (me == NULL)
@@ -351,5 +351,5 @@ bool perf_stat__skip_metric_event(struct evsel *evsel)
if (!evsel->default_metricgroup)
return false;
- return !metricgroup__lookup(&evsel->evlist->metric_events, evsel, false);
+ return !metricgroup__lookup(evlist__metric_events(evsel->evlist), evsel, false);
}
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 66eb9a66a4f7..25f31a174368 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -547,8 +547,8 @@ static void evsel__merge_aliases(struct evsel *evsel)
struct evlist *evlist = evsel->evlist;
struct evsel *alias;
- alias = list_prepare_entry(evsel, &(evlist->core.entries), core.node);
- list_for_each_entry_continue(alias, &evlist->core.entries, core.node) {
+ alias = list_prepare_entry(evsel, &(evlist__core(evlist)->entries), core.node);
+ list_for_each_entry_continue(alias, &evlist__core(evlist)->entries, core.node) {
if (alias->first_wildcard_match == evsel) {
/* Merge the same events on different PMUs. */
evsel__merge_aggr_counters(evsel, alias);
diff --git a/tools/perf/util/stream.c b/tools/perf/util/stream.c
index 3de4a6130853..7bccd2378344 100644
--- a/tools/perf/util/stream.c
+++ b/tools/perf/util/stream.c
@@ -131,7 +131,7 @@ static int evlist__init_callchain_streams(struct evlist *evlist,
struct evsel *pos;
int i = 0;
- BUG_ON(els->nr_evsel < evlist->core.nr_entries);
+ BUG_ON(els->nr_evsel < evlist__nr_entries(evlist));
evlist__for_each_entry(evlist, pos) {
struct hists *hists = evsel__hists(pos);
@@ -148,7 +148,7 @@ static int evlist__init_callchain_streams(struct evlist *evlist,
struct evlist_streams *evlist__create_streams(struct evlist *evlist,
int nr_streams_max)
{
- int nr_evsel = evlist->core.nr_entries, ret = -1;
+ int nr_evsel = evlist__nr_entries(evlist), ret = -1;
struct evlist_streams *els = evlist_streams__new(nr_evsel,
nr_streams_max);
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index 2461f25a4d7d..a6a1a83ccbec 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -2230,7 +2230,7 @@ int perf_event__synthesize_tracing_data(const struct perf_tool *tool, int fd, st
* - write the tracing data from the temp file
* to the pipe
*/
- tdata = tracing_data_get(&evlist->core.entries, fd, true);
+ tdata = tracing_data_get(&evlist__core(evlist)->entries, fd, true);
if (!tdata)
return -1;
@@ -2378,13 +2378,16 @@ int perf_event__synthesize_stat_events(struct perf_stat_config *config, const st
}
err = perf_event__synthesize_extra_attr(tool, evlist, process, attrs);
- err = perf_event__synthesize_thread_map2(tool, evlist->core.threads, process, NULL);
+ err = perf_event__synthesize_thread_map2(tool, evlist__core(evlist)->threads,
+ process, /*machine=*/NULL);
if (err < 0) {
pr_err("Couldn't synthesize thread map.\n");
return err;
}
- err = perf_event__synthesize_cpu_map(tool, evlist->core.user_requested_cpus, process, NULL);
+ err = perf_event__synthesize_cpu_map(tool,
+ evlist__core(evlist)->user_requested_cpus,
+ process, /*machine=*/NULL);
if (err < 0) {
pr_err("Couldn't synthesize thread map.\n");
return err;
@@ -2492,7 +2495,7 @@ int perf_event__synthesize_for_pipe(const struct perf_tool *tool,
ret += err;
#ifdef HAVE_LIBTRACEEVENT
- if (have_tracepoints(&evlist->core.entries)) {
+ if (have_tracepoints(&evlist__core(evlist)->entries)) {
int fd = perf_data__fd(data);
/*
diff --git a/tools/perf/util/time-utils.c b/tools/perf/util/time-utils.c
index d43c4577d7eb..5558a5a0fea4 100644
--- a/tools/perf/util/time-utils.c
+++ b/tools/perf/util/time-utils.c
@@ -473,8 +473,8 @@ int perf_time__parse_for_ranges_reltime(const char *time_str,
return -ENOMEM;
if (has_percent || reltime) {
- if (session->evlist->first_sample_time == 0 &&
- session->evlist->last_sample_time == 0) {
+ if (evlist__first_sample_time(session->evlist) == 0 &&
+ evlist__last_sample_time(session->evlist) == 0) {
pr_err("HINT: no first/last sample time found in perf data.\n"
"Please use latest perf binary to execute 'perf record'\n"
"(if '--buildid-all' is enabled, please set '--timestamp-boundary').\n");
@@ -486,8 +486,8 @@ int perf_time__parse_for_ranges_reltime(const char *time_str,
num = perf_time__percent_parse_str(
ptime_range, size,
time_str,
- session->evlist->first_sample_time,
- session->evlist->last_sample_time);
+ evlist__first_sample_time(session->evlist),
+ evlist__last_sample_time(session->evlist));
} else {
num = perf_time__parse_strs(ptime_range, time_str, size);
}
@@ -499,8 +499,8 @@ int perf_time__parse_for_ranges_reltime(const char *time_str,
int i;
for (i = 0; i < num; i++) {
- ptime_range[i].start += session->evlist->first_sample_time;
- ptime_range[i].end += session->evlist->first_sample_time;
+ ptime_range[i].start += evlist__first_sample_time(session->evlist);
+ ptime_range[i].end += evlist__first_sample_time(session->evlist);
}
}
diff --git a/tools/perf/util/top.c b/tools/perf/util/top.c
index b06e10a116bb..851a26be6931 100644
--- a/tools/perf/util/top.c
+++ b/tools/perf/util/top.c
@@ -71,7 +71,7 @@ size_t perf_top__header_snprintf(struct perf_top *top, char *bf, size_t size)
esamples_percent);
}
- if (top->evlist->core.nr_entries == 1) {
+ if (evlist__nr_entries(top->evlist) == 1) {
struct evsel *first = evlist__first(top->evlist);
ret += SNPRINTF(bf + ret, size - ret, "%" PRIu64 "%s ",
(uint64_t)first->core.attr.sample_period,
@@ -94,7 +94,7 @@ size_t perf_top__header_snprintf(struct perf_top *top, char *bf, size_t size)
else
ret += SNPRINTF(bf + ret, size - ret, " (all");
- nr_cpus = perf_cpu_map__nr(top->evlist->core.user_requested_cpus);
+ nr_cpus = perf_cpu_map__nr(evlist__core(top->evlist)->user_requested_cpus);
if (target->cpu_list)
ret += SNPRINTF(bf + ret, size - ret, ", CPU%s: %s)",
nr_cpus > 1 ? "s" : "",
--
2.54.0.rc2.533.g4f5dca5207-goog
2026-04-19 23:58 ` [PATCH v1 39/58] perf intel-pt-events: Port intel-pt-events/libxed " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 40/58] perf net_dropmonitor: Port net_dropmonitor " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 41/58] perf netdev-times: Port netdev-times " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 42/58] perf powerpc-hcalls: Port powerpc-hcalls " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 43/58] perf sched-migration: Port sched-migration/SchedGui " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 44/58] perf sctop: Port sctop " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 45/58] perf stackcollapse: Port stackcollapse " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 46/58] perf task-analyzer: Port task-analyzer " Ian Rogers
2026-04-19 23:58 ` [PATCH v1 47/58] perf failed-syscalls: Port failed-syscalls " Ian Rogers
2026-04-19 23:59 ` [PATCH v1 48/58] perf rw-by-file: Port rw-by-file " Ian Rogers
2026-04-19 23:59 ` [PATCH v1 49/58] perf rw-by-pid: Port rw-by-pid " Ian Rogers
2026-04-19 23:59 ` [PATCH v1 50/58] perf rwtop: Port rwtop " Ian Rogers
2026-04-19 23:59 ` [PATCH v1 51/58] perf wakeup-latency: Port wakeup-latency " Ian Rogers
2026-04-19 23:59 ` [PATCH v1 52/58] perf test: Migrate Intel PT virtual LBR test to use Python API Ian Rogers
2026-04-19 23:59 ` [PATCH v1 53/58] perf: Remove libperl support, legacy Perl scripts and tests Ian Rogers
2026-04-19 23:59 ` [PATCH v1 54/58] perf: Remove libpython support and legacy Python scripts Ian Rogers
2026-04-19 23:59 ` [PATCH v1 55/58] perf Makefile: Update Python script installation path Ian Rogers
2026-04-19 23:59 ` [PATCH v1 56/58] perf script: Refactor to support standalone scripts and remove legacy features Ian Rogers
2026-04-19 23:59 ` [PATCH v1 57/58] perf Documentation: Update for standalone Python scripts and remove obsolete data Ian Rogers
2026-04-19 23:59 ` [PATCH v1 58/58] perf python: Improve perf script -l descriptions Ian Rogers