Date: Sun, 28 May 2023 11:52:59 +0100
Message-ID: <87r0r0ohh0.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Jing Zhang <jingzhangos@google.com>
Cc: KVM <kvm@vger.kernel.org>, KVMARM <kvmarm@lists.linux.dev>,
	ARMLinux <linux-arm-kernel@lists.infradead.org>,
	Oliver Upton <oupton@google.com>, Will Deacon <will@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>, James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>, Fuad Tabba <tabba@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	Raghavendra Rao Ananta <rananta@google.com>
Subject: Re: [PATCH v10 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
In-Reply-To: <20230522221835.957419-4-jingzhangos@google.com>
References: <20230522221835.957419-1-jingzhangos@google.com>
	<20230522221835.957419-4-jingzhangos@google.com>
On Mon, 22 May 2023 23:18:33 +0100,
Jing Zhang <jingzhangos@google.com> wrote:
> 
> With per guest ID registers, PMUver settings from userspace
> can be stored in its corresponding ID register.
> 
> No functional change intended.
> 
> Signed-off-by: Jing Zhang <jingzhangos@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h |  12 ++--
>  arch/arm64/kvm/arm.c              |   6 --
>  arch/arm64/kvm/sys_regs.c         | 100 ++++++++++++++++++++++++------
>  include/kvm/arm_pmu.h             |   5 +-
>  4 files changed, 92 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 8a2fde6c04c4..7b0f43373dbe 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -246,6 +246,13 @@ struct kvm_arch {
>  #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		7
>  	/* SMCCC filter initialized for the VM */
>  #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED		8
> +	/*
> +	 * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF
> +	 * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from
> +	 * userspace for VCPUs without PMU.
> +	 */
> +#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU		9
> +
>  	unsigned long flags;
> 
>  	/*
> @@ -257,11 +264,6 @@ struct kvm_arch {
> 
>  	cpumask_var_t supported_cpus;
> 
> -	struct {
> -		u8 imp:4;
> -		u8 unimp:4;
> -	} dfr0_pmuver;
> -
>  	/* Hypercall features firmware registers' descriptor */
>  	struct kvm_smccc_features smccc_feat;
>  	struct maple_tree smccc_filter;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 5114521ace60..ca18c09ccf82 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -148,12 +148,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	kvm_arm_init_hypercalls(kvm);
>  	kvm_arm_init_id_regs(kvm);
> 
> -	/*
> -	 * Initialise the default PMUver before there is a chance to
> -	 * create an actual PMU.
> -	 */
> -	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
> -
>  	return 0;
> 
>  err_free_cpumask:
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 9fb1c2f8f5a5..84d9e4baa4f8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1178,9 +1178,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
>  static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
>  {
>  	if (kvm_vcpu_has_pmu(vcpu))
> -		return vcpu->kvm->arch.dfr0_pmuver.imp;
> +		return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
> +				 IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
> +	else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags))
> +		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
> 
> -	return vcpu->kvm->arch.dfr0_pmuver.unimp;
> +	return 0;
>  }
> 
>  static u8 perfmon_to_pmuver(u8 perfmon)
> @@ -1403,8 +1406,12 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>  			       const struct sys_reg_desc *rd,
>  			       u64 val)
>  {
> +	struct kvm_arch *arch = &vcpu->kvm->arch;
> +	u64 old_val = read_id_reg(vcpu, rd);
>  	u8 pmuver, host_pmuver;
> +	u64 new_val = val;
>  	bool valid_pmu;
> +	int ret = 0;
> 
>  	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
> 
> @@ -1424,26 +1431,51 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>  	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
>  		return -EINVAL;
> 
> +	mutex_lock(&arch->config_lock);
>  	/* We can only differ with PMUver, and anything else is an error */
> -	val ^= read_id_reg(vcpu, rd);
> +	val ^= old_val;
>  	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
> -	if (val)
> -		return -EINVAL;
> +	if (val) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> 
> -	if (valid_pmu)
> -		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
> -	else
> -		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
> +	/* Only allow userspace to change the idregs before VM running */
> +	if (kvm_vm_has_ran_once(vcpu->kvm)) {
> +		if (new_val != old_val)
> +			ret = -EBUSY;
> +	} else {
> +		if (valid_pmu) {
> +			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
> +			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
> +			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
> +			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
> +
> +			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
> +			val &= ~ID_DFR0_EL1_PerfMon_MASK;
> +			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
> +			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
> +		} else {
> +			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
> +				   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
> +		}
> +	}
> 
> -	return 0;
> +out:
> +	mutex_unlock(&arch->config_lock);
> +	return ret;
>  }
> 
>  static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
>  			   const struct sys_reg_desc *rd,
>  			   u64 val)
>  {
> +	struct kvm_arch *arch = &vcpu->kvm->arch;
> +	u64 old_val = read_id_reg(vcpu, rd);
>  	u8 perfmon, host_perfmon;
> +	u64 new_val = val;
>  	bool valid_pmu;
> +	int ret = 0;
> 
>  	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
> 
> @@ -1464,18 +1496,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
>  	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
>  		return -EINVAL;
> 
> +	mutex_lock(&arch->config_lock);
>  	/* We can only differ with PerfMon, and anything else is an error */
> -	val ^= read_id_reg(vcpu, rd);
> +	val ^= old_val;
>  	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
> -	if (val)
> -		return -EINVAL;
> +	if (val) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> 
> -	if (valid_pmu)
> -		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
> -	else
> -		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
> +	/* Only allow userspace to change the idregs before VM running */
> +	if (kvm_vm_has_ran_once(vcpu->kvm)) {
> +		if (new_val != old_val)
> +			ret = -EBUSY;
> +	} else {
> +		if (valid_pmu) {
> +			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
> +			val &= ~ID_DFR0_EL1_PerfMon_MASK;
> +			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
> +			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
> +
> +			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
> +			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
> +			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
> +			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
> +		} else {
> +			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
> +				   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
> +		}
> +	}

This is the exact same code as for aa64dfr0. Make it a helper, please.
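Something along these lines, maybe. A completely untested sketch, and
set_dfr0_pmuver() is only a strawman name, not something that exists
in this patch:

static int set_dfr0_pmuver(struct kvm_vcpu *vcpu, u64 old_val, u64 new_val,
			   u8 pmuver, bool valid_pmu)
{
	struct kvm_arch *arch = &vcpu->kvm->arch;
	u64 val;
	int ret = 0;

	mutex_lock(&arch->config_lock);

	/* Only allow userspace to change the idregs before VM running */
	if (kvm_vm_has_ran_once(vcpu->kvm)) {
		if (new_val != old_val)
			ret = -EBUSY;
		goto out;
	}

	if (valid_pmu) {
		/* Keep the 64bit and 32bit views of the PMU version in sync */
		val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
		val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
		val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
		IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;

		val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
		val &= ~ID_DFR0_EL1_PerfMon_MASK;
		val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK,
				  pmuver_to_perfmon(pmuver));
		IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
	} else {
		/* No PMU: only record that an IMP_DEF PMU was requested */
		assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU,
			   &vcpu->kvm->arch.flags,
			   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
	}

out:
	mutex_unlock(&arch->config_lock);
	return ret;
}

set_id_dfr0_el1() would then pass perfmon_to_pmuver(perfmon) as the
pmuver argument, and each setter keeps its own "we can only differ
with PMUVer/PerfMon" sanity check before calling it.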
> 
> -	return 0;
> +out:
> +	mutex_unlock(&arch->config_lock);
> +	return ret;
>  }
> 
>  /*
> @@ -3422,6 +3475,17 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
>  	}
> 
>  	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
> +	/*
> +	 * Initialise the default PMUver before there is a chance to
> +	 * create an actual PMU.
> +	 */
> +	val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
> +
> +	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
> +	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
> +			  kvm_arm_pmu_get_pmuver_limit());
> +
> +	IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
>  }
> 
>  int __init kvm_sys_reg_table_init(void)
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 1a6a695ca67a..8d70dbdc1e0a 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
>  /*
>   * Evaluates as true when emulating PMUv3p5, and false otherwise.
>   */
> -#define kvm_pmu_is_3p5(vcpu)						\
> -	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
> +#define kvm_pmu_is_3p5(vcpu)						\
> +	(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),		\
> +		   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5)

This is getting unreadable. How about something like:

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8d70dbdc1e0a..ecb55d87fa36 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,9 +92,13 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 /*
  * Evaluates as true when emulating PMUv3p5, and false otherwise.
  */
-#define kvm_pmu_is_3p5(vcpu)						\
-	(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),		\
-		   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+#define kvm_pmu_is_3p5(vcpu)	({					\
+	u64 val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);		\
+	u8 v;								\
+									\
+	v = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); \
+	v >= ID_AA64DFR0_EL1_PMUVer_V3P5;				\
+})
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.