From: Peter Zijlstra <peterz@infradead.org>
To: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>,
Adrian Hunter <adrian.hunter@intel.com>,
linux-kernel@vger.kernel.org, Andi Kleen <ak@linux.intel.com>,
mathieu.desnoyers@efficios.com
Subject: Re: perf: is it possible to userspace rdpmc but only on a certain core type
Date: Tue, 21 Jan 2025 13:52:30 +0100
Message-ID: <20250121125230.GD7145@noisy.programming.kicks-ass.net>
In-Reply-To: <31ad367a-721f-46e7-8e5f-e96b250a8a32@linux.intel.com>
On Mon, Jan 20, 2025 at 11:44:37AM -0500, Liang, Kan wrote:
>
>
> On 2025-01-17 5:04 p.m., Vince Weaver wrote:
> > Hello
> >
> > so we've been working on PAPI support for Intel Top-Down events, which
> > let's say does "exciting" things involving the rdpmc instruction.
> >
> > One issue we are having is that on a hybrid machine (Raptor Lake in this
> > case with performance/efficiency cores) there is no top-down support
> > for the E-cores, and it will gpf/segfault if you try to rdpmc the top-down
> > events.
> >
> > Obviously PAPI would like to avoid this, and somehow only run the rdpmc
> > from userspace if scheduled on a P-core.
> >
> > Is there any way to atomically do this? Somehow detect what core we are
> > on and atomically execute a userspace instruction before a core-reschedule
> > can happen?
> >
> > Or barring that, is there any other way to handle this that won't crash
> > and doesn't require users to bind to a core every time they want to
> > run PAPI?
>
> Can PAPI rely on the event_idx(), similar to what Andi's pmu-tools
> do? For a stopped event, the index is always 0.
That's not race-free: the task can get migrated to an E-core the moment
after you've done the load and before the rdpmc instruction.
I suppose you can wrap the whole thing in RSEQ though; it's a bit of a
pain, but RSEQ can be configured to abort on migration.
The very latest libc (2.35+) should have rseq registered by default;
older versions will have to register it themselves -- there is example
code in tools/testing/selftests/rseq and also
https://git.kernel.org/pub/scm/libs/librseq/librseq.git
Thread overview: 8+ messages
2025-01-17 22:04 perf: is it possible to userspace rdpmc but only on a certain core type Vince Weaver
2025-01-20 16:44 ` Liang, Kan
2025-01-21 12:52 ` Peter Zijlstra [this message]
2025-01-21 14:30 ` Mathieu Desnoyers
2025-01-22 21:51 ` Vince Weaver
2025-01-23 18:14 ` Andi Kleen
2025-01-23 19:45 ` Vince Weaver
2025-01-24 5:18 ` Andi Kleen