From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: kvmarm@lists.linux.dev, Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Mingwei Zhang <mizhang@google.com>,
Colton Lewis <coltonlewis@google.com>,
Raghavendra Rao Ananta <rananta@google.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Janne Grunau <j@jannau.net>
Subject: Re: [PATCH v2 07/14] KVM: arm64: Use a cpucap to determine if system supports FEAT_PMUv3
Date: Wed, 19 Feb 2025 17:44:59 +0000
Message-ID: <864j0psuas.wl-maz@kernel.org>
In-Reply-To: <20250203183111.191519-8-oliver.upton@linux.dev>
On Mon, 03 Feb 2025 18:31:04 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> KVM is about to learn some new tricks to virtualize PMUv3 on IMPDEF
> hardware. As part of that, we now need to differentiate host support
> from guest support for PMUv3.
>
> Add a cpucap to determine if an architectural PMUv3 is present to guard
> host usage of PMUv3 controls.
>
> Tested-by: Janne Grunau <j@jannau.net>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
> arch/arm64/include/asm/cpufeature.h | 5 +++++
> arch/arm64/kernel/cpufeature.c | 19 +++++++++++++++++++
> arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++--
> arch/arm64/kvm/pmu.c | 10 +++++-----
> arch/arm64/tools/cpucaps | 1 +
> include/kvm/arm_pmu.h | 2 +-
> 6 files changed, 33 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index e0e4478f5fb5..0eff048848b8 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -866,6 +866,11 @@ static __always_inline bool system_supports_mpam_hcr(void)
> return alternative_has_cap_unlikely(ARM64_MPAM_HCR);
> }
>
> +static inline bool system_supports_pmuv3(void)
> +{
> + return cpus_have_final_cap(ARM64_HAS_PMUV3);
> +}
> +
> int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
> bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 4eb7c6698ae4..6886d2875fac 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1898,6 +1898,19 @@ static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
> }
> #endif
>
> +static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> + u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> + unsigned int pmuver;
> +
> + pmuver = cpuid_feature_extract_unsigned_field(dfr0,
> + ID_AA64DFR0_EL1_PMUVer_SHIFT);
> + if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
> + return false;
> +
> + return pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP;
Given that PMUVer is a signed field, how about using
cpuid_feature_extract_signed_field() and doing a signed comparison instead?
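
Something along the lines of the below, totally untested sketch, relying
on IMP_DEF (0xf) extracting as -1 and a missing PMU as 0, so a single
signed compare covers both cases:

static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
{
	u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
	int pmuver;

	/*
	 * PMUVer is a signed field: IMP_DEF (0xf) extracts as -1 and
	 * "not implemented" as 0, so anything >= IMP is an
	 * architectural PMU.
	 */
	pmuver = cpuid_feature_extract_signed_field(dfr0,
						    ID_AA64DFR0_EL1_PMUVer_SHIFT);
	return pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP;
}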
> +}
> +
> #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> #define KPTI_NG_TEMP_VA (-(1UL << PMD_SHIFT))
>
> @@ -2999,6 +3012,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, GCS, IMP)
> },
> #endif
> + {
> + .desc = "PMUv3",
> + .capability = ARM64_HAS_PMUV3,
> + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> + .matches = has_pmuv3,
> + },
This cap is probed unconditionally (without any configuration
dependency)...
> {},
> };
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index f838a45665f2..0edc7882bedb 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -244,7 +244,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
> * EL1 instead of being trapped to EL2.
> */
> - if (kvm_arm_support_pmu_v3()) {
> + if (system_supports_pmuv3()) {
... but kvm_arm_support_pmu_v3() is conditional on
CONFIG_HW_PERF_EVENTS. Doesn't this create some sort of new code path
that we didn't expect?
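
To spell it out (quoting from memory, roughly what include/kvm/arm_pmu.h
looks like before this series):

#ifdef CONFIG_HW_PERF_EVENTS
static __always_inline bool kvm_arm_support_pmu_v3(void)
{
	/* Set only once a host PMU driver has registered with KVM */
	return static_branch_likely(&kvm_arm_pmu_available);
}
#else
static inline bool kvm_arm_support_pmu_v3(void)
{
	return false;
}
#endif

so with CONFIG_HW_PERF_EVENTS=n the old check was compile-time false,
while system_supports_pmuv3() can now evaluate to true on such a kernel.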
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 23+ messages
2025-02-03 18:30 [PATCH v2 00/14] KVM: arm64: Support FEAT_PMUv3 on Apple hardware Oliver Upton
2025-02-03 18:30 ` [PATCH v2 01/14] drivers/perf: apple_m1: Refactor event select/filter configuration Oliver Upton
2025-02-19 16:22 ` Marc Zyngier
2025-02-03 18:30 ` [PATCH v2 02/14] drivers/perf: apple_m1: Support host/guest event filtering Oliver Upton
2025-02-03 18:31 ` [PATCH v2 03/14] drivers/perf: apple_m1: Provide helper for mapping PMUv3 events Oliver Upton
2025-02-19 16:37 ` Marc Zyngier
2025-02-03 18:31 ` [PATCH v2 04/14] KVM: arm64: Compute PMCEID from arm_pmu's event bitmaps Oliver Upton
2025-02-03 18:31 ` [PATCH v2 05/14] KVM: arm64: Always support SW_INCR PMU event Oliver Upton
2025-02-03 18:31 ` [PATCH v2 06/14] KVM: arm64: Remap PMUv3 events onto hardware Oliver Upton
2025-02-19 16:45 ` Marc Zyngier
2025-02-19 19:25 ` Oliver Upton
2025-02-03 18:31 ` [PATCH v2 07/14] KVM: arm64: Use a cpucap to determine if system supports FEAT_PMUv3 Oliver Upton
2025-02-19 17:44 ` Marc Zyngier [this message]
2025-02-19 19:22 ` Oliver Upton
2025-02-19 19:35 ` Marc Zyngier
2025-02-03 18:31 ` [PATCH v2 08/14] KVM: arm64: Drop kvm_arm_pmu_available static key Oliver Upton
2025-02-03 18:31 ` [PATCH v2 09/14] KVM: arm64: Use guard() to cleanup usage of arm_pmus_lock Oliver Upton
2025-02-03 18:31 ` [PATCH v2 10/14] KVM: arm64: Move PMUVer filtering into KVM code Oliver Upton
2025-02-19 18:17 ` Marc Zyngier
2025-02-03 18:31 ` [PATCH v2 11/14] KVM: arm64: Compute synthetic sysreg ESR for Apple PMUv3 traps Oliver Upton
2025-02-03 18:31 ` [PATCH v2 12/14] KVM: arm64: Advertise PMUv3 if IMPDEF traps are present Oliver Upton
2025-02-03 18:31 ` [PATCH v2 13/14] KVM: arm64: Provide 1 event counter on IMPDEF hardware Oliver Upton
2025-02-03 18:31 ` [PATCH v2 14/14] arm64: Enable IMP DEF PMUv3 traps on Apple M* Oliver Upton