From: Oliver Upton <oliver.upton@linux.dev>
To: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Marc Zyngier <maz@kernel.org>, Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Kees Cook <kees@kernel.org>,
"Gustavo A. R. Silva" <gustavoars@kernel.org>,
linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org,
devel@daynix.com
Subject: Re: [PATCH RFC] KVM: arm64: PMU: Use multiple host PMUs
Date: Wed, 19 Mar 2025 00:34:19 -0700
Message-ID: <Z9pze3J2_zrTk_yC@linux.dev>
In-Reply-To: <20250319-hybrid-v1-1-4d1ada10e705@daynix.com>
Hi Akihiko,
On Wed, Mar 19, 2025 at 03:33:46PM +0900, Akihiko Odaki wrote:
> Problem
> -------
>
> arch/arm64/kvm/pmu-emul.c used to have a comment that said the following:
> > The observant among you will notice that the supported_cpus
> > mask does not get updated for the default PMU even though it
> > is quite possible the selected instance supports only a
> > subset of cores in the system. This is intentional, and
> > upholds the preexisting behavior on heterogeneous systems
> > where vCPUs can be scheduled on any core but the guest
> > counters could stop working.
>
> Although the reference manual says the counters may not increment
> continuously, Windows is not robust enough to handle a stopped
> PMCCNTR_EL0 and crashes with a division-by-zero error; it also crashes
> when the PMU is not present at all.
>
> To avoid this problem, userspace should either pin the vCPU threads to
> pCPUs covered by one host PMU when initializing the vCPUs, or specify
> the host PMU to use with KVM_ARM_VCPU_PMU_V3_SET_PMU after
> initialization. However, QEMU/libvirt can pin vCPU threads only after
> the vCPUs are initialized. Pinning also limits the pCPUs the guest can
> use, even for VMMs that do support it properly.
>
> Solution
> --------
>
> Ideally, Windows would fix the division-by-zero error and QEMU/libvirt
> would support pinning better, but neither is going to happen anytime
> soon.
>
> To allow running Windows on QEMU/libvirt or on systems with
> heterogeneous cores, combine all of the host PMUs needed to cover the
> cores the vCPUs can run on and keep PMCCNTR_EL0 working.
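For reference, the KVM_ARM_VCPU_PMU_V3_SET_PMU knob mentioned above is a
vCPU device attribute. A minimal, untested sketch of how a VMM would use
it (assuming vcpu_fd is a vCPU fd that has been initialized but not yet
run, and pmu_type is the identifier read from
/sys/bus/event_source/devices/<pmu>/type):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int set_vcpu_pmu(int vcpu_fd, int pmu_type)
	{
		/* Bind the vCPU's emulated PMU to one specific host PMU. */
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr  = KVM_ARM_VCPU_PMU_V3_SET_PMU,
			.addr  = (__u64)(unsigned long)&pmu_type,
		};

		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}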
I'm extremely uneasy about making this a generalized solution. PMUs are
deeply tied to the microarchitecture of a particular implementation, and
that isn't something we can abstract away from the guest in KVM.
For example, you could have an event ID that counts on only a subset of
cores, or better yet an event that counts something completely different
depending on where a vCPU lands.
I do appreciate the issue that you're trying to solve.
The good news though is that the fixed PMU cycle counter is the only
thing guaranteed to be present in any PMUv3 implementation. Since
that's the only counter Windows actually needs, perhaps we could
special-case this in KVM.
I have the following (completely untested) patch; do you want to give it
a try? There are still going to be observable differences between PMUs
(e.g. CPU frequency), but at least it should get things booting.
Thanks,
Oliver
---
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a1bc10d7116a..913a7bab50b5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -724,14 +724,21 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
 		return;
 
 	memset(&attr, 0, sizeof(struct perf_event_attr));
-	attr.type = arm_pmu->pmu.type;
+
+	if (pmc->idx == ARMV8_PMU_CYCLE_IDX) {
+		attr.type = PERF_TYPE_HARDWARE;
+		attr.config = PERF_COUNT_HW_CPU_CYCLES;
+	} else {
+		attr.type = arm_pmu->pmu.type;
+		attr.config = eventsel;
+	}
+
 	attr.size = sizeof(attr);
 	attr.pinned = 1;
 	attr.disabled = !kvm_pmu_counter_is_enabled(pmc);
 	attr.exclude_user = !kvm_pmc_counts_at_el0(pmc);
 	attr.exclude_hv = 1; /* Don't count EL2 events */
 	attr.exclude_host = 1; /* Don't count host events */
-	attr.config = eventsel;
 
 	/*
 	 * Filter events at EL1 (i.e. vEL2) when in a hyp context based on the
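For anyone not fluent in perf attrs: PERF_TYPE_HARDWARE with
PERF_COUNT_HW_CPU_CYCLES requests the generic cycle-counting event rather
than a raw event on the VM's chosen arm_pmu instance. A rough, untested
userspace analogue of the same attr setup, purely for illustration (not
part of the patch):

	#include <linux/perf_event.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		struct perf_event_attr attr;
		long long cycles = 0;
		int fd;

		/* Same type/config pair the patch uses for the cycle counter. */
		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		attr.disabled = 1;

		/* pid = 0, cpu = -1: a per-thread event, not bound to one CPU. */
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
		for (volatile unsigned long i = 0; i < 10000000; i++)
			;
		ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

		if (read(fd, &cycles, sizeof(cycles)) == sizeof(cycles))
			printf("cycles: %lld\n", cycles);

		close(fd);
		return 0;
	}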