From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 4 May 2026 21:18:01 +0000
In-Reply-To: <20260504211813.1804997-1-coltonlewis@google.com>
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org
Mime-Version: 1.0
References:
 <20260504211813.1804997-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.54.0.545.g6539524ca2-goog
Message-ID: <20260504211813.1804997-9-coltonlewis@google.com>
Subject: [PATCH v7 08/20] KVM: arm64: Add Partitioned PMU register trap handlers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
 Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Mark Rutland, Shuah Khan, Ganapatrao Kulkarni, James Clark,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="UTF-8"

We may want a partitioned PMU but not have FEAT_FGT available to
untrap the specific registers that would normally be untrapped. Add
handlers for those trapped register accesses that do the right thing
when the PMU is partitioned.

For registers that should not be written directly to hardware because
they require special handling (PMEVTYPER and PMOVS), write to the
virtual register instead. A later patch will ensure these are handled
correctly at vcpu_load time.
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/sys_regs.c | 236 +++++++++++++++++++++++++++++++-------
 1 file changed, 197 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0a8e8ee69cd00..cc3d1804ab200 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -985,9 +985,25 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
+static void pmu_write_pmuserenr(struct kvm_vcpu *vcpu, u64 val)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		write_sysreg(val, pmuserenr_el0);
+	else
+		__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+}
+
+static u64 pmu_read_pmuserenr(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmuserenr_el0);
+	else
+		return __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+}
+
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	u64 reg = pmu_read_pmuserenr(vcpu);
 	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
 
 	if (!enabled)
@@ -1016,6 +1032,29 @@ static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
 	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
 }
 
+static void pmu_write_pmcr(struct kvm_vcpu *vcpu, u64 val)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		write_sysreg(val, pmcr_el0);
+		return;
+	}
+
+	kvm_pmu_handle_pmcr(vcpu, val);
+}
+
+static u64 pmu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		return u64_replace_bits(
+			read_sysreg(pmcr_el0),
+			vcpu->kvm->arch.nr_pmu_counters,
+			ARMV8_PMU_PMCR_N);
+	}
+
+	return kvm_vcpu_read_pmcr(vcpu);
+
+}
+
 static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			const struct sys_reg_desc *r)
 {
@@ -1026,18 +1065,17 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (p->is_write) {
 		/*
-		 * Only update writeable bits of PMCR (continuing into
-		 * kvm_pmu_handle_pmcr() as well)
+		 * Only update writeable bits of PMCR
 		 */
-		val = kvm_vcpu_read_pmcr(vcpu);
+		val = pmu_read_pmcr(vcpu);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
-		kvm_pmu_handle_pmcr(vcpu, val);
+		pmu_write_pmcr(vcpu, val);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = kvm_vcpu_read_pmcr(vcpu)
+		val = pmu_read_pmcr(vcpu)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
 
@@ -1045,6 +1083,24 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static void pmu_write_pmselr(struct kvm_vcpu *vcpu, u64 val)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		write_sysreg(val, pmselr_el0);
+		return;
+	}
+
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+}
+
+static u64 pmu_read_pmselr(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmselr_el0);
+
+	return __vcpu_sys_reg(vcpu, PMSELR_EL0);
+}
+
 static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			  const struct sys_reg_desc *r)
 {
@@ -1052,10 +1108,10 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		return false;
 
 	if (p->is_write)
-		__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, p->regval);
+		pmu_write_pmselr(vcpu, p->regval);
 	else
 		/* return PMSELR.SEL field */
-		p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+		p->regval = pmu_read_pmselr(vcpu)
 			    & PMSELR_EL0_SEL_MASK;
 
 	return true;
@@ -1128,6 +1184,44 @@ static int set_pmu_evcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	return 0;
 }
 
+static void pmu_write_evcntr(struct kvm_vcpu *vcpu, u64 val, u64 idx)
+{
+	u64 pmselr;
+
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		kvm_pmu_set_counter_value(vcpu, idx, val);
+		return;
+	}
+
+	if (idx == ARMV8_PMU_CYCLE_IDX) {
+		write_sysreg(val, pmccntr_el0);
+		return;
+	}
+
+	pmselr = read_sysreg(pmselr_el0);
+	write_sysreg(idx, pmselr_el0);
+	write_sysreg(val, pmxevcntr_el0);
+	write_sysreg(pmselr, pmselr_el0);
+}
+
+static u64 pmu_read_evcntr(struct kvm_vcpu *vcpu, u64 idx)
+{
+	u64 pmselr;
+	u64 val;
+
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		return kvm_pmu_get_counter_value(vcpu, idx);
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		return read_sysreg(pmccntr_el0);
+
+	pmselr = read_sysreg(pmselr_el0);
+	write_sysreg(idx, pmselr_el0);
+	val = read_sysreg(pmxevcntr_el0);
+	write_sysreg(pmselr, pmselr_el0);
+	return val;
+}
+
 static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
			      struct sys_reg_params *p,
			      const struct sys_reg_desc *r)
@@ -1141,7 +1235,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 			return false;
 
 		idx = SYS_FIELD_GET(PMSELR_EL0, SEL,
-				    __vcpu_sys_reg(vcpu, PMSELR_EL0));
+				    pmu_read_pmselr(vcpu));
 	} else if (r->Op2 == 0) {
 		/* PMCCNTR_EL0 */
 		if (pmu_access_cycle_counter_el0_disabled(vcpu))
@@ -1173,14 +1267,34 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 		if (pmu_access_el0_disabled(vcpu))
 			return false;
 
-		kvm_pmu_set_counter_value(vcpu, idx, p->regval);
+		pmu_write_evcntr(vcpu, p->regval, idx);
 	} else {
-		p->regval = kvm_pmu_get_counter_value(vcpu, idx);
+		p->regval = pmu_read_evcntr(vcpu, idx);
 	}
 
 	return true;
 }
 
+
+static void pmu_write_evtyper(struct kvm_vcpu *vcpu, u64 val, u64 idx)
+{
+	u64 mask;
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		mask = kvm_pmu_evtyper_mask(vcpu->kvm);
+		__vcpu_assign_sys_reg(vcpu, PMEVTYPER0_EL0 + idx, val & mask);
+		return;
+	}
+
+	kvm_pmu_set_counter_event_type(vcpu, val, idx);
+	kvm_vcpu_pmu_restore_guest(vcpu);
+}
+
+static u64 pmu_read_evtyper(struct kvm_vcpu *vcpu, u64 idx)
+{
+	return __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx);
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			       const struct sys_reg_desc *r)
 {
@@ -1191,7 +1305,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, pmu_read_pmselr(vcpu));
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
 		reg = PMEVTYPER0_EL0 + idx;
@@ -1207,12 +1321,10 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!pmu_counter_idx_valid(vcpu, idx))
 		return false;
 
-	if (p->is_write) {
-		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
-		kvm_vcpu_pmu_restore_guest(vcpu);
-	} else {
-		p->regval = __vcpu_sys_reg(vcpu, reg);
-	}
+	if (p->is_write)
+		pmu_write_evtyper(vcpu, p->regval, idx);
+	else
+		p->regval = pmu_read_evtyper(vcpu, idx);
 
 	return true;
 }
@@ -1235,6 +1347,35 @@ static int get_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 *v
 	return 0;
 }
 
+static void pmu_write_pmcnten(struct kvm_vcpu *vcpu, u64 val, bool set)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (set)
+			write_sysreg(val, pmcntenset_el0);
+		else
+			write_sysreg(val, pmcntenclr_el0);
+
+		return;
+	}
+
+	if (set)
+		/* accessing PMCNTENSET_EL0 */
+		__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
+	else
+		/* accessing PMCNTENCLR_EL0 */
+		__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
+
+	kvm_pmu_reprogram_counter_mask(vcpu, val);
+}
+
+static u64 pmu_read_pmcnten(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmcntenset_el0);
+
+	return __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+}
+
 static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			   const struct sys_reg_desc *r)
 {
@@ -1246,40 +1387,58 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	mask = kvm_pmu_accessible_counter_mask(vcpu);
 	if (p->is_write) {
 		val = p->regval & mask;
-		if (r->Op2 & 0x1)
-			/* accessing PMCNTENSET_EL0 */
-			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
-		else
-			/* accessing PMCNTENCLR_EL0 */
-			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
-
-		kvm_pmu_reprogram_counter_mask(vcpu, val);
+		pmu_write_pmcnten(vcpu, val, r->Op2 & 0x1);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+		p->regval = pmu_read_pmcnten(vcpu);
 	}
 
 	return true;
 }
 
+static void pmu_write_pminten(struct kvm_vcpu *vcpu, u64 val, bool set)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (set)
+			write_sysreg(val, pmintenset_el1);
+		else
+			write_sysreg(val, pmintenclr_el1);
+
+		return;
+	}
+
+	if (set)
+		/* accessing PMINTENSET_EL1 */
+		__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
+	else
+		/* accessing PMINTENCLR_EL1 */
+		__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
+
+	kvm_pmu_reprogram_counter_mask(vcpu, val);
+}
+
+static u64 pmu_read_pminten(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		return read_sysreg(pmintenset_el1);
+
+	return __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+}
+
 static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			   const struct sys_reg_desc *r)
 {
-	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+	u64 val, mask;
 
 	if (check_pmu_access_disabled(vcpu, 0))
 		return false;
 
+	mask = kvm_pmu_accessible_counter_mask(vcpu);
 	if (p->is_write) {
-		u64 val = p->regval & mask;
+		val = p->regval & mask;
 
-		if (r->Op2 & 0x1)
-			/* accessing PMINTENSET_EL1 */
-			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
-		else
-			/* accessing PMINTENCLR_EL1 */
-			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
+		pmu_write_pminten(vcpu, val, r->Op2 & 0x1);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+		p->regval = pmu_read_pminten(vcpu);
 	}
 
 	return true;
@@ -1330,10 +1489,9 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		if (!vcpu_mode_priv(vcpu))
 			return undef_access(vcpu, p, r);
 
-		__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0,
-				      (p->regval & ARMV8_PMU_USERENR_MASK));
+		pmu_write_pmuserenr(vcpu, p->regval & ARMV8_PMU_USERENR_MASK);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+		p->regval = pmu_read_pmuserenr(vcpu)
 			    & ARMV8_PMU_USERENR_MASK;
 	}
 
-- 
2.54.0.545.g6539524ca2-goog