From: Oliver Upton <oliver.upton@linux.dev>
To: kvmarm@lists.linux.dev
Cc: Marc Zyngier <maz@kernel.org>, Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Mingwei Zhang <mizhang@google.com>,
Colton Lewis <coltonlewis@google.com>,
Raghavendra Rao Ananta <rananta@google.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org,
Oliver Upton <oliver.upton@linux.dev>
Subject: [RFC PATCH 05/14] KVM: arm64: Always allow fixed cycle counter
Date: Tue, 3 Dec 2024 11:32:11 -0800 [thread overview]
Message-ID: <20241203193220.1070811-6-oliver.upton@linux.dev> (raw)
In-Reply-To: <20241203193220.1070811-1-oliver.upton@linux.dev>

The fixed CPU cycle counter is mandatory for PMUv3, so it makes little
sense to allow userspace to filter it. Apply the PMU event filter only
to *programmed* event counters.
While at it, use the generic CPU_CYCLES perf event to back the cycle
counter, potentially allowing non-PMUv3 drivers to map the event onto
the underlying implementation.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/pmu-emul.c | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 809d65b912e8..3e7091e1a2e4 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -707,26 +707,27 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
evtreg = kvm_pmc_read_evtreg(pmc);
kvm_pmu_stop_counter(pmc);
- if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
+ if (pmc->idx == ARMV8_PMU_CYCLE_IDX) {
eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
- else
+ } else {
eventsel = evtreg & kvm_pmu_event_mask(vcpu->kvm);
- /*
- * Neither SW increment nor chained events need to be backed
- * by a perf event.
- */
- if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR ||
- eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
- return;
+ /*
+ * If we have a filter in place and that the event isn't
+ * allowed, do not install a perf event either.
+ */
+ if (vcpu->kvm->arch.pmu_filter &&
+ !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+ return;
- /*
- * If we have a filter in place and that the event isn't allowed, do
- * not install a perf event either.
- */
- if (vcpu->kvm->arch.pmu_filter &&
- !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
- return;
+ /*
+ * Neither SW increment nor chained events need to be backed
+ * by a perf event.
+ */
+ if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR ||
+ eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
+ return;
+ }
memset(&attr, 0, sizeof(struct perf_event_attr));
attr.type = arm_pmu->pmu.type;
@@ -877,6 +878,8 @@ static u64 compute_pmceid0(struct arm_pmu *pmu)
/* always support CHAIN */
val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
+ /* always support CPU_CYCLES */
+ val |= BIT(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
return val;
}
--
2.39.5