From: Colton Lewis <coltonlewis@google.com>
Date: Mon, 4 May 2026 21:18:04 +0000
Subject: [PATCH v7 11/20] KVM: arm64: Enforce PMU event filter at vcpu_load()
Message-ID: <20260504211813.1804997-12-coltonlewis@google.com>
In-Reply-To: <20260504211813.1804997-1-coltonlewis@google.com>
References: <20260504211813.1804997-1-coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
	Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
	Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Mark Rutland, Shuah Khan, Ganapatrao Kulkarni, James Clark,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis <coltonlewis@google.com>

The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must be
rechecked on every load, since it might have changed since the last
time the guest wrote a value. If the event is filtered, exclude
counting at all exception levels before writing the hardware.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/pmu-direct.c | 54 +++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index 360d022d918d5..2252d3b905db9 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -138,6 +138,59 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return 0;
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Enforce the PMU event filter
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts when its event is filtered. Accomplish this by
+ * excluding counting at all exception levels for filtered events.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	unsigned long guest_counters;
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+			  ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	bool guest_include_el2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+
+	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
+		if (i == ARMV8_PMU_CYCLE_IDX) {
+			val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+			evsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+		} else {
+			val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+			evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+		}
+
+		guest_include_el2 = (val & ARMV8_PMU_INCLUDE_EL2);
+		val &= ~evtyper_clr;
+
+		if (unlikely(is_hyp_ctxt(vcpu)) && guest_include_el2)
+			val &= ~ARMV8_PMU_EXCLUDE_EL1;
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		if (i == ARMV8_PMU_CYCLE_IDX) {
+			write_sysreg(val, pmccfiltr_el0);
+		} else {
+			write_sysreg(i, pmselr_el0);
+			write_sysreg(val, pmxevtyper_el0);
+		}
+	}
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -165,6 +218,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 
 	pmu = vcpu->kvm->arch.arm_pmu;
 	guest_counters = kvm_pmu_guest_counter_mask(pmu);
+	kvm_pmu_apply_event_filter(vcpu);
 
 	for_each_set_bit(i, &guest_counters, ARMPMU_MAX_HWEVENTS) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
-- 
2.54.0.545.g6539524ca2-goog
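
For context, the filter being enforced here is the one userspace
installs through the existing KVM_ARM_VCPU_PMU_V3_FILTER vcpu
attribute. Below is a minimal sketch of that userspace side; it is not
part of this patch, it is arm64-only, error handling is omitted, and
the vcpu_fd parameter and deny_event() helper are illustrative names.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Ask KVM to stop counting a single PMU event on this vcpu. */
static int deny_event(int vcpu_fd, uint16_t event)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = event,	/* first event number in the range */
		.nevents    = 1,	/* just this one event */
		.action     = KVM_PMU_EVENT_DENY,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr  = (uint64_t)(uintptr_t)&filter,
	};

	/*
	 * Once installed, the filtered event must never count; that is
	 * the guarantee kvm_pmu_apply_event_filter() upholds on load.
	 */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}

Per the KVM documentation, the action of the first filter installed
also determines the default policy for events not covered by any range.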