From: Rob Herring <robh@kernel.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <linux@armlinux.org.uk>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Catalin Marinas <catalin.marinas@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
kvmarm@lists.linux.dev
Subject: Re: [PATCH 3/9] perf: arm_pmu: Remove event index to counter remapping
Date: Mon, 10 Jun 2024 10:42:55 -0600
Message-ID: <CAL_JsqK5TT1usMUY1Eaxy6qyGoWLj5R8XRNG-L6h-1S3WQfkRg@mail.gmail.com>
In-Reply-To: <ZmbZG-eaqE4NPcE3@J2N7QTR9R3>
On Mon, Jun 10, 2024 at 4:44 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Fri, Jun 07, 2024 at 02:31:28PM -0600, Rob Herring (Arm) wrote:
> > Xscale and Armv6 PMUs defined the cycle counter at index 0 with event
> > counters starting at 1, giving a 1:1 event index to counter numbering. On
> > Armv7 and later, the cycle counter moved to 31 and event counters start
> > at 0. The drivers for Armv7 and PMUv3 kept the old event index numbering
> > and introduced an event index to counter conversion. The conversion uses
> > masking to turn an event index into a counter number. This operation
> > relies on having at most 32 counters so that the cycle counter's index 0
> > can be transformed to counter number 31.
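(For reference, the conversion being removed is roughly the following; a
simplified sketch of the old scheme rather than the exact driver code:

	/* Old event index numbering: 0 = cycles, 1..N = event counters */
	#define ARMV8_IDX_CYCLE_COUNTER		0
	#define ARMV8_IDX_COUNTER0		1
	/* Architectural counter number: 0..30 programmable, 31 = cycles */
	#define ARMV8_IDX_TO_COUNTER(x)		(((x) - ARMV8_IDX_COUNTER0) & 0x1f)

so index 0 wraps around to counter 31 via the mask, which only works while
there are at most 32 counters.)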
[...]
> > @@ -783,7 +767,7 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> > struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> >
> > /* Clear any unused counters to avoid leaking their contents */
> > - for_each_clear_bit(i, cpuc->used_mask, cpu_pmu->num_events) {
> > + for_each_clear_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS) {
> > if (i == ARMV8_IDX_CYCLE_COUNTER)
> > write_pmccntr(0);
> > else
>
> IIUC this will now hit all unimplemented counters; e.g. for N counters the body
> will run for counters N..31, and the else case has:
>
> armv8pmu_write_evcntr(i, 0);
>
> ... where the resulting write to PMEVCNTR<n>_EL0 for unimplemented
> counters is CONSTRAINED UNPREDICTABLE and might be UNDEFINED.
>
> We can fix that with for_each_andnot_bit(), e.g.
Good catch. Fixed.
>
> for_each_andnot_bit(i, cpu_pmu->cntr_mask, cpuc->used_mask,
> ARMPMU_MAX_HWEVENTS) {
> if (i == ARMV8_IDX_CYCLE_COUNTER)
> write_pmccntr(0);
> else
> armv8pmu_write_evcntr(i, 0);
> }
>
> [...]
>
> > @@ -905,7 +889,7 @@ static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
> > {
> > int idx;
> >
> > - for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; idx++) {
> > + for_each_set_bit(idx, cpu_pmu->cntr_mask, 31) {
> > if (!test_and_set_bit(idx, cpuc->used_mask))
> > return idx;
> > }
> > @@ -921,7 +905,9 @@ static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
> > * Chaining requires two consecutive event counters, where
> > * the lower idx must be even.
> > */
> > - for (idx = ARMV8_IDX_COUNTER0 + 1; idx < cpu_pmu->num_events; idx += 2) {
> > + for_each_set_bit(idx, cpu_pmu->cntr_mask, 31) {
> > + if (!(idx & 0x1))
> > + continue;
> > if (!test_and_set_bit(idx, cpuc->used_mask)) {
> > /* Check if the preceding even counter is available */
> > if (!test_and_set_bit(idx - 1, cpuc->used_mask))
>
> It would be nice to replace those instances of '31' with something
> indicating that this was only covering the generic/programmable
> counters, but I wasn't able to come up with a nice mnemonic for that.
> The best I could think of was:
>
> #define ARMV8_MAX_NR_GENERIC_COUNTERS 31
>
> Maybe it makes sense to define that along with ARMV8_IDX_CYCLE_COUNTER.
I've got nothing better. :) I think there are a few other spots that can use this.
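E.g. something like this for the counter allocation loops (just a sketch,
assuming the define lands next to ARMV8_IDX_CYCLE_COUNTER):

	/* Counters 0..30 are the programmable event counters; 31 is cycles */
	#define ARMV8_MAX_NR_GENERIC_COUNTERS	31

	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_MAX_NR_GENERIC_COUNTERS) {
		if (!test_and_set_bit(idx, cpuc->used_mask))
			return idx;
	}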
[...]
> > /* Read the nb of CNTx counters supported from PMNC */
> > - *nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK;
> > + nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK;
> > + bitmap_set(cpu_pmu->cntr_mask, 0, nb_cnt);
> >
> > /* Add the CPU cycles counter */
> > - *nb_cnt += 1;
> > + bitmap_set(cpu_pmu->cntr_mask, ARMV7_IDX_CYCLE_COUNTER, 1);
>
> This can be:
>
> set_bit(ARMV7_IDX_CYCLE_COUNTER, cpu_pmu->cntr_mask);
>
> ... and likewise for the PMUv3 version.
Indeed. The documentation in bitmap.h isn't clear that bitmaps larger than
one unsigned long work, given it describes set_bit() there as just
"*addr |= bit". I guess I don't use bitops enough...
Rob