From: Jiri Olsa <olsajiri@gmail.com>
To: Tzvetomir Stoyanov <tz.stoyanov@gmail.com>,
	Ian Rogers <irogers@google.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
	linux-perf-users@vger.kernel.org
Subject: Re: libperf: CPU map question
Date: Mon, 14 Feb 2022 22:22:41 +0100
Message-ID: <YgrIId0p9KoE9zPI@krava>
In-Reply-To: <CAPpZLN6+BPu=Y_iCXBV_Uqg_mcY2LEBtg+_7r+oJqWkG8pNSGA@mail.gmail.com>

On Mon, Feb 14, 2022 at 06:54:38PM +0200, Tzvetomir Stoyanov wrote:
> Hello,
> I'm trying to use libperf as an interface to perf from an application,
> but I'm facing some difficulties. I can't work out how the
> perf_cpu_map_ APIs are meant to be used. My use case is:
> I want to iterate over the CPUs in the map. I'm using the
> perf_evsel__cpus() API to get the map and the
> perf_cpu_map__for_each_cpu() macro to iterate over it. However, with
> that approach I get a struct perf_cpu for each CPU in the map. That
> structure seems to be private and I cannot access the actual .cpu id.
> Am I missing something?
> I will be glad for any help and clarification.

yeah, it's a bug ;-) there was a recent change that wrapped the cpu id
in struct perf_cpu to distinguish it from cpu map indexes:

  6d18804b963b perf cpumap: Give CPUs their own type
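
for context, since that commit the iterator macro in perf/cpumap.h
hands back the wrapper instead of a plain int; it expands to roughly
this (paraphrased, not verbatim):

  #define perf_cpu_map__for_each_cpu(cpu, idx, cpus)              \
          for ((idx) = 0, (cpu) = perf_cpu_map__cpu(cpus, idx);   \
               (idx) < perf_cpu_map__nr(cpus);                    \
               (idx)++, (cpu) = perf_cpu_map__cpu(cpus, idx))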

I'm not sure there's a workaround for that.. but I think the change
below should fix it.

Ian, would this be ok? we need to add a test for this

jirka


---
diff --git a/tools/lib/perf/include/internal/cpumap.h b/tools/lib/perf/include/internal/cpumap.h
index 581f9ffb4237..1973a18c096b 100644
--- a/tools/lib/perf/include/internal/cpumap.h
+++ b/tools/lib/perf/include/internal/cpumap.h
@@ -3,11 +3,7 @@
 #define __LIBPERF_INTERNAL_CPUMAP_H
 
 #include <linux/refcount.h>
-
-/** A wrapper around a CPU to avoid confusion with the perf_cpu_map's map's indices. */
-struct perf_cpu {
-	int cpu;
-};
+#include <perf/cpumap.h>
 
 /**
  * A sized, reference counted, sorted array of integers representing CPU
diff --git a/tools/lib/perf/include/perf/cpumap.h b/tools/lib/perf/include/perf/cpumap.h
index 15b8faafd615..4a2edbdb5e2b 100644
--- a/tools/lib/perf/include/perf/cpumap.h
+++ b/tools/lib/perf/include/perf/cpumap.h
@@ -7,6 +7,11 @@
 #include <stdio.h>
 #include <stdbool.h>
 
+/** A wrapper around a CPU to avoid confusion with the perf_cpu_map's map's indices. */
+struct perf_cpu {
+	int cpu;
+};
+
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__dummy_new(void);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__default_new(void);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
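
With struct perf_cpu moved to the public header, callers can reach the
cpu id again. A minimal sketch of the use case from the report
(print_cpus() is just an illustration and assumes an already-created
evsel; error handling omitted):

  #include <stdio.h>
  #include <perf/cpumap.h>
  #include <perf/evsel.h>

  /* iterate the evsel's cpu map and print each cpu id */
  static void print_cpus(struct perf_evsel *evsel)
  {
          struct perf_cpu_map *cpus = perf_evsel__cpus(evsel);
          struct perf_cpu cpu;
          int idx;

          perf_cpu_map__for_each_cpu(cpu, idx, cpus)
                  printf("map idx %d: cpu %d\n", idx, cpu.cpu);
  }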

Thread overview: 5+ messages
2022-02-14 16:54 libperf: CPU map question Tzvetomir Stoyanov
2022-02-14 19:51 ` Arnaldo Carvalho de Melo
2022-02-14 21:22 ` Jiri Olsa [this message]
2022-02-14 22:18   ` Ian Rogers
2022-02-15  4:46     ` Tzvetomir Stoyanov
