Date: Tue, 11 Apr 2023 10:33:50 +0100
From: Mark Rutland
To: Reiji Watanabe
Cc: Marc Zyngier, Oliver Upton, Will Deacon, Catalin Marinas,
	kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
	Shaoqin Huang, Rob Herring
Subject: Re: [PATCH v2 2/2] KVM: arm64: PMU: Don't overwrite PMUSERENR with vcpu loaded
References: <20230408034759.2369068-1-reijiw@google.com>
	<20230408034759.2369068-3-reijiw@google.com>
In-Reply-To: <20230408034759.2369068-3-reijiw@google.com>

On Fri, Apr 07, 2023 at 08:47:59PM -0700, Reiji Watanabe wrote:
> Currently, with VHE, KVM sets ER, CR, SW and EN bits of
> PMUSERENR_EL0 to 1 on vcpu_load(), and saves and restores
> the register value for the host on vcpu_load() and vcpu_put().
> If the value of those bits are cleared on a pCPU with a vCPU
> loaded (armv8pmu_start() would do that when PMU counters are
> programmed for the guest), PMU access from the guest EL0 might
> be trapped to the guest EL1 directly regardless of the current
> PMUSERENR_EL0 value of the vCPU.
>
> Fix this by not letting armv8pmu_start() overwrite PMUSERENR on
> the pCPU on which a vCPU is loaded, and instead updating the
> saved shadow register value for the host, so that the value can
> be restored on vcpu_put() later.

I'm happy with the hook in the PMU code, but I think there's still a
race between an IPI and vcpu_{load,put}() where we can lose an update
to PMUSERENR_EL0. I tried to point that out in my final question in:

  https://lore.kernel.org/all/ZCwzV7ACl21VbLru@FVFF77S0Q05N.cambridge.arm.com/

... but it looks like that wasn't all that clear.

Consider vcpu_load():

	void vcpu_load(struct kvm_vcpu *vcpu)
	{
		int cpu = get_cpu();

		__this_cpu_write(kvm_running_vcpu, vcpu);
		preempt_notifier_register(&vcpu->preempt_notifier);
		kvm_arch_vcpu_load(vcpu, cpu);
		put_cpu();
	}

AFAICT that's called with IRQs enabled, and the {get,put}_cpu() calls
will only disable migration/preemption. After the write to
kvm_running_vcpu, the code in kvm_set_pmuserenr() will see that there
is a running vcpu, and write to the host context without updating the
real PMUSERENR_EL0 register.

If we take an IPI and call kvm_set_pmuserenr() after the write to
kvm_running_vcpu but before vcpu_load() completes, the call to
kvm_set_pmuserenr() could update the host context (without updating
the real PMUSERENR_EL0 value) before __activate_traps_common() saves
the host value with:

	ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);

... which would discard the write made by kvm_set_pmuserenr().

Something similar can happen in vcpu_put(), where an IPI after
__deactivate_traps_common() but before kvm_running_vcpu is cleared
would result in kvm_set_pmuserenr() writing to the host context, but
this value would never be written into HW.

Unless I'm missing something (e.g. if interrupts are actually masked
during those windows), I don't think this is a complete fix as-is.

I'm not sure if there is a smart fix for that.
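One (completely untested) idea, assuming the PMUSERENR_EL0 save/restore
in __{activate,deactivate}_traps_common() runs with IRQs masked: only
divert updates into the host context while the host value actually
lives there, keyed off a flag which itself only changes with IRQs
masked. A rough sketch, with a made-up per-cpu 'pmuserenr_on_cpu' flag
(that flag and its placement here are hypothetical, not in the tree):

	/* hyp switch code; assumed to run with IRQs masked */
	static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
	{
		...
		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);
		/* From here on, host updates go into the shadow copy */
		__this_cpu_write(pmuserenr_on_cpu, true);
		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
		...
	}

	static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
	{
		...
		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
		/* Stop diverting before the host value goes back into HW */
		__this_cpu_write(pmuserenr_on_cpu, false);
		write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0);
		...
	}

	bool kvm_set_pmuserenr(u64 val)
	{
		struct kvm_cpu_context *hctxt;

		if (!kvm_arm_support_pmu_v3() || !has_vhe())
			return false;

		/*
		 * The caller has IRQs masked (per the
		 * lockdep_assert_irqs_disabled() in update_pmuserenr()),
		 * and the flag only changes with IRQs masked on this
		 * CPU, so this cannot race with the save/restore above.
		 */
		if (!__this_cpu_read(pmuserenr_on_cpu))
			return false;

		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = val;
		return true;
	}

With that, an update taken via IPI in either of the windows above hits
the real register (and is subsequently saved/restored as usual) rather
than being silently dropped, and the kvm_get_running_vcpu() check
becomes unnecessary since the flag is strictly narrower. Again,
completely untested.

Thanks,
Mark.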
> Suggested-by: Mark Rutland
> Suggested-by: Marc Zyngier
> Fixes: 83a7a4d643d3 ("arm64: perf: Enable PMU counter userspace access for perf event")
> Signed-off-by: Reiji Watanabe
> ---
>  arch/arm64/include/asm/kvm_host.h |  5 +++++
>  arch/arm64/kernel/perf_event.c    | 21 ++++++++++++++++++---
>  arch/arm64/kvm/pmu.c              | 20 ++++++++++++++++++++
>  3 files changed, 43 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index bcd774d74f34..22db2f885c17 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1028,9 +1028,14 @@ void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu);
>  #ifdef CONFIG_KVM
>  void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr);
>  void kvm_clr_pmu_events(u32 clr);
> +bool kvm_set_pmuserenr(u64 val);
>  #else
>  static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
>  static inline void kvm_clr_pmu_events(u32 clr) {}
> +static inline bool kvm_set_pmuserenr(u64 val)
> +{
> +	return false;
> +}
>  #endif
>
>  void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index dde06c0f97f3..0fffe4c56c28 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -741,9 +741,25 @@ static inline u32 armv8pmu_getreset_flags(void)
>  	return value;
>  }
>
> +static void update_pmuserenr(u64 val)
> +{
> +	lockdep_assert_irqs_disabled();
> +
> +	/*
> +	 * The current pmuserenr value might be the value for the guest.
> +	 * If that's the case, have KVM keep tracking of the register value
> +	 * for the host EL0 so that KVM can restore it before returning to
> +	 * the host EL0. Otherwise, update the register now.
> +	 */
> +	if (kvm_set_pmuserenr(val))
> +		return;
> +
> +	write_sysreg(val, pmuserenr_el0);
> +}
> +
>  static void armv8pmu_disable_user_access(void)
>  {
> -	write_sysreg(0, pmuserenr_el0);
> +	update_pmuserenr(0);
>  }
>
>  static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> @@ -759,8 +775,7 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
>  		armv8pmu_write_evcntr(i, 0);
>  	}
>
> -	write_sysreg(0, pmuserenr_el0);
> -	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> +	update_pmuserenr(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR);
>  }
>
>  static void armv8pmu_enable_event(struct perf_event *event)
> diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
> index 7887133d15f0..40bb2cb13317 100644
> --- a/arch/arm64/kvm/pmu.c
> +++ b/arch/arm64/kvm/pmu.c
> @@ -209,3 +209,23 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
>  	kvm_vcpu_pmu_enable_el0(events_host);
>  	kvm_vcpu_pmu_disable_el0(events_guest);
>  }
> +
> +/*
> + * With VHE, keep track of the PMUSERENR_EL0 value for the host EL0 on
> + * the pCPU where vCPU is loaded, since PMUSERENR_EL0 is switched to
> + * the value for the guest on vcpu_load(). The value for the host EL0
> + * will be restored on vcpu_put(), before returning to the EL0.
> + *
> + * Return true if KVM takes care of the register. Otherwise return false.
> + */
> +bool kvm_set_pmuserenr(u64 val)
> +{
> +	struct kvm_cpu_context *hctxt;
> +
> +	if (!kvm_arm_support_pmu_v3() || !has_vhe() || !kvm_get_running_vcpu())
> +		return false;
> +
> +	hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> +	ctxt_sys_reg(hctxt, PMUSERENR_EL0) = val;
> +	return true;
> +}
> --
> 2.40.0.577.gac1e443424-goog
>