From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis <coltonlewis@google.com>
Subject: [PATCH v3 17/22] KVM: arm64: Context swap Partitioned PMU guest registers
Date: Thu, 26 Jun 2025 20:04:53 +0000
Message-ID: <20250626200459.1153955-18-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>

Save and restore newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned:

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If the PMU is not partitioned or MDCR_EL2.TPM is set, all PMU
registers are trapped, so return immediately.
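For reference, here is a minimal standalone sketch of the
guest-counter mask used to update the bitmask registers above. It
assumes the partitioning scheme described in this series, where event
counters below HPMN are guest-reserved and the rest are
host-reserved; the real mask comes from kvm_pmu_guest_counter_mask(),
introduced earlier in the series, which may differ in detail:

#include <stdint.h>

/*
 * Illustrative sketch only, not the in-kernel helper: assume event
 * counters [0, hpmn) are guest-reserved, so the guest mask is the
 * low hpmn bits. Treating the cycle counter bit (31) as guest-owned
 * is an assumption made for this example.
 */
static uint64_t example_guest_counter_mask(uint8_t hpmn)
{
	uint64_t evcntrs = (hpmn >= 64) ? ~0ULL : ((1ULL << hpmn) - 1);

	return evcntrs | (1ULL << 31);	/* event counters + PMCCNTR bit */
}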
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h |   4 ++
 arch/arm64/kvm/arm.c             |   2 +
 arch/arm64/kvm/pmu-part.c        | 101 +++++++++++++++++++++++++++++++
 3 files changed, 107 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 35674879aae0..4f0741bf6779 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -98,6 +98,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -169,6 +171,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e452aba1a3b2..7c007ee44ecb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -616,6 +616,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -658,6 +659,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index f954d2d29314..5eb53c6409e7 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 #include
 
 /**
@@ -194,3 +195,103 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return hpmn;
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	write_pmcr(val);
+
+	/*
+	 * Loading these registers is tricky because:
+	 * 1. Only the bits for guest counters (indicated by mask) apply
+	 * 2. Setting and clearing are different registers
+	 */
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Put all untrapped PMU registers from the PCPU into the VCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_pmccntr();
+	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
+
+	val = read_pmuserenr();
+	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+
+	val = read_pmselr();
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_pmcr();
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	val = read_pmcntenset();
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_pmintenset();
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+}
-- 
2.50.0.727.gbf7dc18ff4-goog
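
The set/clear sequence in kvm_pmu_load() can be checked in isolation.
Below is a small standalone model of a PMCNTENSET/PMCNTENCLR-style
register pair (writing 1 to a bit sets or clears it, writing 0 leaves
it alone); the mask, values, and HPMN split are made up for
illustration. It shows that writing (val & mask) to the set register
and (~val & mask) to the clear register brings the guest-owned bits in
line with val without disturbing host-owned bits:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of a set/clear register pair such as PMCNTENSET_EL0 and
 * PMCNTENCLR_EL0. */
static uint64_t pmcnten;

static void model_pmcntenset(uint64_t v) { pmcnten |= v; }
static void model_pmcntenclr(uint64_t v) { pmcnten &= ~v; }

int main(void)
{
	uint64_t mask      = 0x0f; /* hypothetical: guest owns counters 0-3 */
	uint64_t host_bits = 0x30; /* host counters 4 and 5 are enabled */
	uint64_t val       = 0x05; /* guest's PMCNTENSET_EL0 shadow value */

	pmcnten = host_bits | 0x02; /* stale guest bit 1 left over */

	/* The sequence used by kvm_pmu_load(): */
	model_pmcntenset(val & mask);
	model_pmcntenclr(~val & mask);

	/* Guest bits now equal val; host bits are untouched. */
	assert(pmcnten == (host_bits | val));
	printf("PMCNTEN = %#jx\n", (uintmax_t)pmcnten);
	return 0;
}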