Date: Wed, 3 Apr 2019 22:01:45 +0200
From: Andrew Jones
To: Dave Martin
Subject: Re: [PATCH v7 13/27] KVM: arm64/sve: Context switch the SVE registers
Message-ID: <20190403200145.c2oep2hbugl7db5t@kamzik.brq.redhat.com>
References: <1553864452-15080-1-git-send-email-Dave.Martin@arm.com>
 <1553864452-15080-14-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1553864452-15080-14-git-send-email-Dave.Martin@arm.com>
Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
 Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org

On Fri, Mar 29, 2019 at 01:00:38PM +0000, Dave Martin wrote:
> In order to give each vcpu its own view of the SVE registers, this
> patch adds context storage via a new sve_state pointer in struct
> vcpu_arch.  An additional member sve_max_vl is also added for each
> vcpu, to determine the maximum vector length visible to the guest
> and thus the value to be configured in ZCR_EL2.LEN while the vcpu
> is active.  This also determines the layout and size of the storage
> in sve_state, which is read and written by the same backend
> functions that are used for context-switching the SVE state for
> host tasks.
>
> On SVE-enabled vcpus, SVE access traps are now handled by switching
> in the vcpu's SVE context and disabling the trap before returning
> to the guest.  On other vcpus, the trap is not handled and an exit
> back to the host occurs, where the handle_sve() fallback path
> reflects an undefined instruction exception back to the guest,
> consistently with the behaviour of non-SVE-capable hardware (as was
> done unconditionally prior to this patch).
>
> No SVE handling is added on non-VHE-only paths, since VHE is an
> architectural and Kconfig prerequisite of SVE.
>
> Signed-off-by: Dave Martin
> Reviewed-by: Julien Thierry
> Tested-by: zhang.lei
>
> ---
>
> Changes since v5:
>
> * [Julien Thierry, Julien Grall] Commit message typo fixes
>
> * [Mark Rutland] Rename trap_class to hsr_ec, for consistency with
>   existing code.
>
> * [Mark Rutland] Simplify condition for refusing to handle an
>   FPSIMD/SVE trap, using multiple if () statements for clarity.  The
>   previous condition was a bit tortuous, and now that the static_key
>   checks have been hoisted out, it makes little difference to the
>   compiler how we express the condition here.
> ---
>  arch/arm64/include/asm/kvm_host.h |  6 ++++
>  arch/arm64/kvm/fpsimd.c           |  5 +--
>  arch/arm64/kvm/hyp/switch.c       | 75 +++++++++++++++++++++++++++++----------
>  3 files changed, 66 insertions(+), 20 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 22cf484..4fabfd2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -228,6 +228,8 @@ struct vcpu_reset_state {
>  
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
> +	void *sve_state;
> +	unsigned int sve_max_vl;
>  
>  	/* HYP configuration */
>  	u64 hcr_el2;
> @@ -323,6 +325,10 @@ struct kvm_vcpu_arch {
>  	bool sysregs_loaded_on_cpu;
>  };
>  
> +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
> +#define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \
> +			      sve_ffr_offset((vcpu)->arch.sve_max_vl)))

Maybe an inline function instead?
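Something like the sketch below, perhaps. It is untested and self-contained
only for illustration: the struct definitions and sve_ffr_offset() here are
stand-ins for the real ones in kvm_host.h and the SVE core code, and the
offset computation is a placeholder, not the kernel's actual layout. The
point is just that an inline function gives type-checked arguments and
avoids the double cast in the macro:

```c
#include <stddef.h>

/*
 * Stand-in definitions so this sketch compiles on its own; the real
 * struct kvm_vcpu comes from <asm/kvm_host.h>, and sve_ffr_offset()
 * is the kernel's own helper.
 */
struct kvm_vcpu_arch {
	void *sve_state;
	unsigned int sve_max_vl;
};

struct kvm_vcpu {
	struct kvm_vcpu_arch arch;
};

static size_t sve_ffr_offset(unsigned int vl)
{
	return (size_t)vl * 35;	/* placeholder, not the real layout */
}

/* Same pointer arithmetic as the macro, with type-checked arguments */
static inline void *vcpu_sve_pffr(struct kvm_vcpu *vcpu)
{
	return (char *)vcpu->arch.sve_state +
		sve_ffr_offset(vcpu->arch.sve_max_vl);
}
```

The generated code should be identical either way; the only question is
whether the header this lands in can see the types it needs.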
> +
>  /* vcpu_arch flags field values: */
>  #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
>  #define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 7053bf4..6e3c9c8 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -87,10 +87,11 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
>  
>  	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
>  		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs,
> -					 NULL, SVE_VL_MIN);
> +					 vcpu->arch.sve_state,
> +					 vcpu->arch.sve_max_vl);
>  
>  		clear_thread_flag(TIF_FOREIGN_FPSTATE);
> -		clear_thread_flag(TIF_SVE);
> +		update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
>  	}
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 9d46066..5444b9c 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -100,7 +100,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
>  	val = read_sysreg(cpacr_el1);
>  	val |= CPACR_EL1_TTA;
>  	val &= ~CPACR_EL1_ZEN;
> -	if (!update_fp_enabled(vcpu)) {
> +	if (update_fp_enabled(vcpu)) {
> +		if (vcpu_has_sve(vcpu))
> +			val |= CPACR_EL1_ZEN;
> +	} else {
>  		val &= ~CPACR_EL1_FPEN;
>  		__activate_traps_fpsimd32(vcpu);
>  	}
> @@ -317,16 +320,48 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> -static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
> +/* Check for an FPSIMD/SVE trap and handle as appropriate */
> +static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
>  {
> -	struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state;
> +	bool vhe, sve_guest, sve_host;
> +	u8 hsr_ec;
>  
> -	if (has_vhe())
> -		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
> -			     cpacr_el1);
> -	else
> +	if (!system_supports_fpsimd())
> +		return false;
> +
> +	if (system_supports_sve()) {
> +		sve_guest = vcpu_has_sve(vcpu);
> +		sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE;
> +		vhe = true;
> +	} else {
> +		sve_guest = false;
> +		sve_host = false;
> +		vhe = has_vhe();
> +	}
> +
> +	hsr_ec = kvm_vcpu_trap_get_class(vcpu);
> +	if (hsr_ec != ESR_ELx_EC_FP_ASIMD &&
> +	    hsr_ec != ESR_ELx_EC_SVE)
> +		return false;
> +
> +	/* Don't handle SVE traps for non-SVE vcpus here: */
> +	if (!sve_guest)
> +		if (hsr_ec != ESR_ELx_EC_FP_ASIMD)
> +			return false;
> +
> +	/* Valid trap. Switch the context: */
> +
> +	if (vhe) {
> +		u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN;
> +
> +		if (sve_guest)
> +			reg |= CPACR_EL1_ZEN;
> +
> +		write_sysreg(reg, cpacr_el1);
> +	} else {
>  		write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP,
>  			     cptr_el2);
> +	}
>  
>  	isb();
>  
> @@ -335,24 +370,28 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
>  		 * In the SVE case, VHE is assumed: it is enforced by
>  		 * Kconfig and kvm_arch_init().
>  		 */
> -		if (system_supports_sve() &&
> -		    (vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE)) {
> +		if (sve_host) {
>  			struct thread_struct *thread = container_of(
> -				host_fpsimd,
> +				vcpu->arch.host_fpsimd_state,
>  				struct thread_struct, uw.fpsimd_state);
>  
> -			sve_save_state(sve_pffr(thread), &host_fpsimd->fpsr);
> +			sve_save_state(sve_pffr(thread),
> +				       &vcpu->arch.host_fpsimd_state->fpsr);
>  		} else {
> -			__fpsimd_save_state(host_fpsimd);
> +			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
>  		}
>  
>  		vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
>  	}
>  
> -	__fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
> -
> -	if (vcpu_has_sve(vcpu))
> +	if (sve_guest) {
> +		sve_load_state(vcpu_sve_pffr(vcpu),
> +			       &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr,
> +			       sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1);
>  		write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12);
> +	} else {
> +		__fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
> +	}
>  
>  	/* Skip restoring fpexc32 for AArch64 guests */
>  	if (!(read_sysreg(hcr_el2) & HCR_RW))
> @@ -388,10 +427,10 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	 * and restore the guest context lazily.
>  	 * If FP/SIMD is not implemented, handle the trap and inject an
>  	 * undefined instruction exception to the guest.
> +	 * Similarly for trapped SVE accesses.
>  	 */
> -	if (system_supports_fpsimd() &&
> -	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_FP_ASIMD)
> -		return __hyp_switch_fpsimd(vcpu);
> +	if (__hyp_handle_fpsimd(vcpu))
> +		return true;
>  
>  	if (!__populate_fault_info(vcpu))
>  		return true;

Reviewed-by: Andrew Jones

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel