Linux Perf Users
From: James Clark <james.clark@linaro.org>
To: Colton Lewis <coltonlewis@google.com>
Cc: alexandru.elisei@arm.com, pbonzini@redhat.com, corbet@lwn.net,
	linux@armlinux.org.uk, catalin.marinas@arm.com, will@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, mizhang@google.com,
	joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com,
	mark.rutland@arm.com, shuah@kernel.org,
	gankulkarni@os.amperecomputing.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org,
	linux-kselftest@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH v7 13/20] KVM: arm64: Apply dynamic guest counter reservations
Date: Fri, 15 May 2026 09:28:40 +0100	[thread overview]
Message-ID: <c7a68965-88e5-4808-9a75-d58c4986a3b6@linaro.org> (raw)
In-Reply-To: <gsntlddlbylw.fsf@coltonlewis-kvm.c.googlers.com>



On 14/05/2026 8:05 pm, Colton Lewis wrote:
> James Clark <james.clark@linaro.org> writes:
> 
>> On 13/05/2026 5:45 pm, Colton Lewis wrote:
>>> James Clark <james.clark@linaro.org> writes:
> 
>>>> On 04/05/2026 10:18 pm, Colton Lewis wrote:
>>>>> Apply dynamic guest counter reservations by checking if the requested
>>>>> guest mask collides with any events the host has scheduled and calling
>>>>> perf_pmu_resched_update() with a hook that updates the mask of
>>>>> available counters in between schedule out and schedule in.
> 
>>>>> Signed-off-by: Colton Lewis <coltonlewis@google.com>
>>>>> ---
>>>>>    arch/arm64/kvm/pmu-direct.c  | 69 ++++++++++++++++++++++++++++++++++++
>>>>>    include/linux/perf/arm_pmu.h |  1 +
>>>>>    2 files changed, 70 insertions(+)
> 
>>>>> diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
>>>>> index 2252d3b905db9..14cc419dbafad 100644
>>>>> --- a/arch/arm64/kvm/pmu-direct.c
>>>>> +++ b/arch/arm64/kvm/pmu-direct.c
>>>>> @@ -100,6 +100,73 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
>>>>>        return *host_data_ptr(nr_event_counters);
>>>>>    }
> 
>>>>> +/* Callback to update counter mask between perf scheduling */
>>>>> +static void kvm_pmu_update_mask(struct pmu *pmu, void *data)
>>>>> +{
>>>>> +    struct arm_pmu *arm_pmu = to_arm_pmu(pmu);
>>>>> +    unsigned long *new_mask = data;
>>>>> +
>>>>> +    bitmap_copy(arm_pmu->cntr_mask, new_mask, ARMPMU_MAX_HWEVENTS);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * kvm_pmu_set_guest_counters() - Handle dynamic counter reservations
>>>>> + * @cpu_pmu: struct arm_pmu to potentially modify
>>>>> + * @guest_mask: new guest mask for the pmu
>>>>> + *
>>>>> + * Check if guest counters will interfere with current host events and
>>>>> + * call into perf_pmu_resched_update if a reschedule is required.
>>>>> + */
>>>>> +static void kvm_pmu_set_guest_counters(struct arm_pmu *cpu_pmu, u64 guest_mask)
>>>>> +{
>>>>> +    struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
>>>>> +    DECLARE_BITMAP(guest_bitmap, ARMPMU_MAX_HWEVENTS);
>>>>> +    DECLARE_BITMAP(new_mask, ARMPMU_MAX_HWEVENTS);
>>>>> +    bool need_resched = false;
>>>>> +
>>>>> +    bitmap_from_arr64(guest_bitmap, &guest_mask, ARMPMU_MAX_HWEVENTS);
>>>>> +    bitmap_copy(new_mask, cpu_pmu->hw_cntr_mask, ARMPMU_MAX_HWEVENTS);
>>>>> +
>>>>> +    if (guest_mask) {
>>>>> +        /* Subtract guest counters from available host mask */
>>>>> +        bitmap_andnot(new_mask, new_mask, guest_bitmap, ARMPMU_MAX_HWEVENTS);
>>>>> +
>>>>> +        /* Did we collide with an active host event? */
>>>>> +        if (bitmap_intersects(cpuc->used_mask, guest_bitmap, ARMPMU_MAX_HWEVENTS)) {
>>>>> +            int idx;
>>>>> +
>>>>> +            need_resched = true;
>>>>> +            cpuc->host_squeezed = true;
>>>>> +
>>>>> +            /* Look for pinned events that are about to be preempted */
>>>>> +            for_each_set_bit(idx, guest_bitmap, ARMPMU_MAX_HWEVENTS) {
>>>>> +                if (test_bit(idx, cpuc->used_mask) && cpuc->events[idx] &&
>>>>> +                    cpuc->events[idx]->attr.pinned) {
>>>>> +                    pr_warn_ratelimited("perf: Pinned host event squeezed out by KVM guest PMU partition\n");
> 
>>>> Hi Colton,
> 
>>>> I get "perf: Pinned host event squeezed out by KVM guest PMU partition"
>>>> even with, for example, arm_pmuv3.reserved_host_counters=3. I would
>>>> have expected any non-zero value to stop the warning.
> 
>>>> I think armv8pmu_get_single_idx() needs to be changed to allocate
>>>> from the high-end host counters first. A more complicated option
>>>> would be to check whether there are any non-pinned counters in the
>>>> host-reserved half when a new pinned counter is opened, then swap
>>>> the places of the new pinned and existing non-pinned counters, so
>>>> pinned events always prefer the host half. But it's probably not
>>>> worth doing that.
> 
>>>> James
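For what it's worth, allocating from the top could be as simple as taking the
highest free bit instead of the lowest. A standalone sketch of that idea, in
plain C with a uint64_t standing in for the kernel bitmap API (the function
name and layout here are illustrative, not the real armv8pmu_get_single_idx()):

```c
#include <stdint.h>

/*
 * Sketch: pick the highest implemented-and-unused counter instead of
 * the lowest, so host events naturally occupy the host-reserved upper
 * counters first. Plain-C stand-in for the kernel bitmap API.
 */
static int get_single_idx_from_top(uint64_t used_mask, uint64_t cntr_mask)
{
	uint64_t free = cntr_mask & ~used_mask;	/* implemented and unused */

	if (!free)
		return -1;	/* the kernel code would return -EAGAIN */

	/* Index of the highest set bit: 63 minus leading zeros */
	return 63 - __builtin_clzll(free);
}
```

With cntr_mask = 0x3f (six counters) this hands out 5, 4, 3, ... downward, so
pinned host events fill the upper half before a guest partition claims the low
counters.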
> 
> 
>>> I agree it makes the most sense to allocate from the top, but I'm happy
>>> the basic idea works.
> 
> 
>> Another thing I forgot to mention is that even with the ratelimited
>> warning, this spams the logs any time the host and guest are both
>> using the PMU, and I'm not sure how useful that is.
> 
> I'm sure it does. I'll delete it.
> 

A warn_once might save someone a few hours of debugging, but we probably 
don't need more than that.
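
For reference, the behaviour I have in mind is the one-shot guard that
pr_warn_once() expands to; a userspace sketch of the pattern (the function
name is made up for illustration):

```c
#include <stdbool.h>
#include <stdio.h>

/* One-shot warning, the pattern behind the kernel's pr_warn_once():
 * the first call prints, every later call is silent. */
static int warn_squeezed_once(void)
{
	static bool warned;

	if (warned)
		return 0;	/* already reported once */
	warned = true;
	fprintf(stderr,
		"perf: Pinned host event squeezed out by KVM guest PMU partition\n");
	return 1;	/* emitted this time */
}
```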

>>>>> +                    break;
>>>>> +                }
>>>>> +            }
>>>>> +        }
>>>>> +    } else {
>>>>> +        /*
>>>>> +         * Restoring to hw_cntr_mask.
>>>>> +         * Only resched if we previously squeezed an event.
>>>>> +         */
>>>>> +        if (cpuc->host_squeezed) {
>>>>> +            need_resched = true;
>>>>> +            cpuc->host_squeezed = false;
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>> +    if (need_resched) {
>>>>> +        /* Collision: run full perf reschedule */
>>>>> +        perf_pmu_resched_update(&cpu_pmu->pmu, kvm_pmu_update_mask, new_mask);
>>>>> +    } else {
>>>>> +        /* Host was never using guest counters anyway */
>>>>> +        bitmap_copy(cpu_pmu->cntr_mask, new_mask, ARMPMU_MAX_HWEVENTS);
>>>>> +    }
>>>>> +}
>>>>> +
>>>>>    /**
>>>>>     * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
>>>>>     * @pmu: Pointer to arm_pmu struct
>>>>> @@ -218,6 +285,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
> 
>>>>>        pmu = vcpu->kvm->arch.arm_pmu;
>>>>>        guest_counters = kvm_pmu_guest_counter_mask(pmu);
>>>>> +    kvm_pmu_set_guest_counters(pmu, guest_counters);
>>>>>        kvm_pmu_apply_event_filter(vcpu);
> 
>>>>>        for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
>>>>> @@ -319,5 +387,6 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
>>>>>        val = read_sysreg(pmintenset_el1);
>>>>>        __vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
> 
>>>>> +    kvm_pmu_set_guest_counters(pmu, 0);
>>>>>        preempt_enable();
>>>>>    }
>>>>> diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
>>>>> index f7b000bb3eca8..63f88fec5e80f 100644
>>>>> --- a/include/linux/perf/arm_pmu.h
>>>>> +++ b/include/linux/perf/arm_pmu.h
>>>>> @@ -75,6 +75,7 @@ struct pmu_hw_events {
> 
>>>>>        /* Active events requesting branch records */
>>>>>        unsigned int        branch_users;
>>>>> +    bool            host_squeezed;
>>>>>    };
> 
>>>>>    enum armpmu_attr_groups {
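
Stepping back, the mask handling in kvm_pmu_set_guest_counters() above reduces
to two bitwise operations. A standalone model, with uint64_t in place of the
kernel bitmap API (the struct and values in the usage note are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Plain-C model of the two bitmap operations in the patch:
 * bitmap_andnot() removes guest counters from the host's available
 * mask, and bitmap_intersects() detects a collision with counters the
 * host already has events scheduled on. */
struct mask_result {
	uint64_t new_host_mask;	/* counters the host may still use */
	bool need_resched;	/* guest claimed a counter in active use */
};

static struct mask_result apply_guest_mask(uint64_t hw_cntr_mask,
					   uint64_t used_mask,
					   uint64_t guest_mask)
{
	struct mask_result r;

	r.new_host_mask = hw_cntr_mask & ~guest_mask;	 /* bitmap_andnot() */
	r.need_resched = (used_mask & guest_mask) != 0;	 /* bitmap_intersects() */
	return r;
}
```

For example, with six counters (hw_cntr_mask = 0x3f), a guest taking the low
two (guest_mask = 0x03) while the host has an event on counter 1
(used_mask = 0x02) leaves new_host_mask = 0x3c and requires a reschedule.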


