From: James Clark <james.clark@linaro.org>
To: Colton Lewis <coltonlewis@google.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Russell King <linux@armlinux.org.uk>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Mingwei Zhang <mizhang@google.com>,
Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Mark Rutland <mark.rutland@arm.com>,
Shuah Khan <shuah@kernel.org>,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
linux-perf-users@vger.kernel.org,
linux-kselftest@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH v7 13/20] KVM: arm64: Apply dynamic guest counter reservations
Date: Mon, 11 May 2026 15:47:54 +0100
Message-ID: <e2a7679d-e61a-43ac-a1d7-72f7e815c400@linaro.org>
In-Reply-To: <20260504211813.1804997-14-coltonlewis@google.com>
On 04/05/2026 10:18 pm, Colton Lewis wrote:
> Apply dynamic guest counter reservations by checking if the requested
> guest mask collides with any events the host has scheduled and calling
> perf_pmu_resched_update() with a hook that updates the mask of
> available counters in between schedule out and schedule in.
>
> Signed-off-by: Colton Lewis <coltonlewis@google.com>
> ---
> arch/arm64/kvm/pmu-direct.c | 69 ++++++++++++++++++++++++++++++++++++
> include/linux/perf/arm_pmu.h | 1 +
> 2 files changed, 70 insertions(+)
>
> diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
> index 2252d3b905db9..14cc419dbafad 100644
> --- a/arch/arm64/kvm/pmu-direct.c
> +++ b/arch/arm64/kvm/pmu-direct.c
> @@ -100,6 +100,73 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
> return *host_data_ptr(nr_event_counters);
> }
>
> +/* Callback to update counter mask between perf scheduling */
> +static void kvm_pmu_update_mask(struct pmu *pmu, void *data)
> +{
> + struct arm_pmu *arm_pmu = to_arm_pmu(pmu);
> + unsigned long *new_mask = data;
> +
> + bitmap_copy(arm_pmu->cntr_mask, new_mask, ARMPMU_MAX_HWEVENTS);
> +}
> +
> +/**
> + * kvm_pmu_set_guest_counters() - Handle dynamic counter reservations
> + * @cpu_pmu: struct arm_pmu to potentially modify
> + * @guest_mask: new guest mask for the pmu
> + *
> + * Check if guest counters will interfere with current host events and
> + * call into perf_pmu_resched_update if a reschedule is required.
> + */
> +static void kvm_pmu_set_guest_counters(struct arm_pmu *cpu_pmu, u64 guest_mask)
> +{
> + struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> + DECLARE_BITMAP(guest_bitmap, ARMPMU_MAX_HWEVENTS);
> + DECLARE_BITMAP(new_mask, ARMPMU_MAX_HWEVENTS);
> + bool need_resched = false;
> +
> + bitmap_from_arr64(guest_bitmap, &guest_mask, ARMPMU_MAX_HWEVENTS);
> + bitmap_copy(new_mask, cpu_pmu->hw_cntr_mask, ARMPMU_MAX_HWEVENTS);
> +
> + if (guest_mask) {
> + /* Subtract guest counters from available host mask */
> + bitmap_andnot(new_mask, new_mask, guest_bitmap, ARMPMU_MAX_HWEVENTS);
> +
> + /* Did we collide with an active host event? */
> + if (bitmap_intersects(cpuc->used_mask, guest_bitmap, ARMPMU_MAX_HWEVENTS)) {
> + int idx;
> +
> + need_resched = true;
> + cpuc->host_squeezed = true;
> +
> + /* Look for pinned events that are about to be preempted */
> + for_each_set_bit(idx, guest_bitmap, ARMPMU_MAX_HWEVENTS) {
> + if (test_bit(idx, cpuc->used_mask) && cpuc->events[idx] &&
> + cpuc->events[idx]->attr.pinned) {
> + pr_warn_ratelimited("perf: Pinned host event squeezed out by KVM guest PMU partition\n");
Hi Colton,

I get "perf: Pinned host event squeezed out by KVM guest PMU partition"
even with arm_pmuv3.reserved_host_counters=3, for example. I would have
expected any non-zero value to stop the warning.

I think armv8pmu_get_single_idx() needs to be changed to allocate from
the high end of the counters first, so that host events fill the
host-reserved half before spilling into counters a guest can claim. A
more complicated option would be to check whether there are any
non-pinned events in the host-reserved half when a new pinned event is
opened, then swap the new pinned event with an existing non-pinned one
so that pinned events always end up in the host half. But it's probably
not worth doing that.
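
For the first option, something like this is what I had in mind.
Untested, and it assumes the current find-first loop in
armv8pmu_get_single_idx() survives the counter bitmask generalization
earlier in the series, but walking the mask top down with
find_last_bit() should be enough:

static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
				   struct arm_pmu *cpu_pmu)
{
	unsigned long size = ARMV8_PMU_MAX_GENERAL_COUNTERS;
	unsigned long idx;

	/*
	 * Walk cntr_mask from the top down so host events prefer the
	 * high (host-reserved) counters and leave the low ones free
	 * for a partitioned guest.
	 */
	while ((idx = find_last_bit(cpu_pmu->cntr_mask, size)) < size) {
		if (!test_and_set_bit(idx, cpuc->used_mask))
			return idx;
		size = idx;
	}

	return -EAGAIN;
}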
James
> + break;
> + }
> + }
> + }
> + } else {
> + /*
> + * Restoring to hw_cntr_mask.
> + * Only resched if we previously squeezed an event.
> + */
> + if (cpuc->host_squeezed) {
> + need_resched = true;
> + cpuc->host_squeezed = false;
> + }
> + }
> +
> + if (need_resched) {
> + /* Collision: run full perf reschedule */
> + perf_pmu_resched_update(&cpu_pmu->pmu, kvm_pmu_update_mask, new_mask);
> + } else {
> + /* Host was never using guest counters anyway */
> + bitmap_copy(cpu_pmu->cntr_mask, new_mask, ARMPMU_MAX_HWEVENTS);
> + }
> +}
> +
> /**
> * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
> * @pmu: Pointer to arm_pmu struct
> @@ -218,6 +285,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
>
> pmu = vcpu->kvm->arch.arm_pmu;
> guest_counters = kvm_pmu_guest_counter_mask(pmu);
> + kvm_pmu_set_guest_counters(pmu, guest_counters);
> kvm_pmu_apply_event_filter(vcpu);
>
> for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
> @@ -319,5 +387,6 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
> val = read_sysreg(pmintenset_el1);
> __vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
>
> + kvm_pmu_set_guest_counters(pmu, 0);
> preempt_enable();
> }
> diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> index f7b000bb3eca8..63f88fec5e80f 100644
> --- a/include/linux/perf/arm_pmu.h
> +++ b/include/linux/perf/arm_pmu.h
> @@ -75,6 +75,7 @@ struct pmu_hw_events {
>
> /* Active events requesting branch records */
> unsigned int branch_users;
> + bool host_squeezed;
> };
>
> enum armpmu_attr_groups {