From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v4 5/9] target/arm/kvm64: Add kvm_arch_get/put_sve
From: Auger Eric
To: Andrew Jones, qemu-devel@nongnu.org, qemu-arm@nongnu.org
Cc: peter.maydell@linaro.org, richard.henderson@linaro.org, armbru@redhat.com,
    imammedo@redhat.com, alex.bennee@linaro.org, Dave.Martin@arm.com
Date: Wed, 25 Sep 2019 15:58:12 +0200
References: <20190924113105.19076-1-drjones@redhat.com>
    <20190924113105.19076-6-drjones@redhat.com>
In-Reply-To: <20190924113105.19076-6-drjones@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

Hi Drew,

On 9/24/19 1:31 PM, Andrew Jones wrote:
> These are the SVE equivalents to kvm_arch_get/put_fpsimd. Note, the
> swabbing is different than it is for fpsimd because the vector format
> is a little-endian stream of words.
>
> Signed-off-by: Andrew Jones
> Reviewed-by: Richard Henderson
Reviewed-by: Eric Auger

Eric

> ---
>  target/arm/kvm64.c | 137 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 133 insertions(+), 4 deletions(-)
>
> diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
> index 28f6db57d5ee..ea454c613919 100644
> --- a/target/arm/kvm64.c
> +++ b/target/arm/kvm64.c
> @@ -671,11 +671,12 @@ int kvm_arch_destroy_vcpu(CPUState *cs)
>  bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
>  {
>      /* Return true if the regidx is a register we should synchronize
> -     * via the cpreg_tuples array (ie is not a core reg we sync by
> -     * hand in kvm_arch_get/put_registers())
> +     * via the cpreg_tuples array (ie is not a core or sve reg that
> +     * we sync by hand in kvm_arch_get/put_registers())
>       */
>      switch (regidx & KVM_REG_ARM_COPROC_MASK) {
>      case KVM_REG_ARM_CORE:
> +    case KVM_REG_ARM64_SVE:
>          return false;
>      default:
>          return true;
> @@ -761,6 +762,78 @@ static int kvm_arch_put_fpsimd(CPUState *cs)
>      return 0;
>  }
>
> +/*
> + * SVE registers are encoded in KVM's memory in an endianness-invariant format.
> + * The byte at offset i from the start of the in-memory representation contains
> + * the bits [(7 + 8 * i) : (8 * i)] of the register value. As this means the
> + * lowest offsets are stored in the lowest memory addresses, then that nearly
> + * matches QEMU's representation, which is to use an array of host-endian
> + * uint64_t's, where the lower offsets are at the lower indices. To complete
> + * the translation we just need to byte swap the uint64_t's on big-endian hosts.
> + */
> +static uint64_t *sve_bswap64(uint64_t *dst, uint64_t *src, int nr)
> +{
> +#ifdef HOST_WORDS_BIGENDIAN
> +    int i;
> +
> +    for (i = 0; i < nr; ++i) {
> +        dst[i] = bswap64(src[i]);
> +    }
> +
> +    return dst;
> +#else
> +    return src;
> +#endif
> +}
> +
> +/*
> + * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
> + * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
> + * code the slice index to zero for now as it's unlikely we'll need more than
> + * one slice for quite some time.
> + */
> +static int kvm_arch_put_sve(CPUState *cs)
> +{
> +    ARMCPU *cpu = ARM_CPU(cs);
> +    CPUARMState *env = &cpu->env;
> +    uint64_t tmp[ARM_MAX_VQ * 2];
> +    uint64_t *r;
> +    struct kvm_one_reg reg;
> +    int n, ret;
> +
> +    for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
> +        r = sve_bswap64(tmp, &env->vfp.zregs[n].d[0], cpu->sve_max_vq * 2);
> +        reg.addr = (uintptr_t)r;
> +        reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
> +        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
> +        if (ret) {
> +            return ret;
> +        }
> +    }
> +
> +    for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
> +        r = sve_bswap64(tmp, r = &env->vfp.pregs[n].p[0],
> +                        DIV_ROUND_UP(cpu->sve_max_vq, 8));
> +        reg.addr = (uintptr_t)r;
> +        reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
> +        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
> +        if (ret) {
> +            return ret;
> +        }
> +    }
> +
> +    r = sve_bswap64(tmp, &env->vfp.pregs[FFR_PRED_NUM].p[0],
> +                    DIV_ROUND_UP(cpu->sve_max_vq, 8));
> +    reg.addr = (uintptr_t)r;
> +    reg.id = KVM_REG_ARM64_SVE_FFR(0);
> +    ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    return 0;
> +}
> +
>  int kvm_arch_put_registers(CPUState *cs, int level)
>  {
>      struct kvm_one_reg reg;
> @@ -855,7 +928,11 @@ int kvm_arch_put_registers(CPUState *cs, int level)
>          }
>      }
>
> -    ret = kvm_arch_put_fpsimd(cs);
> +    if (cpu_isar_feature(aa64_sve, cpu)) {
> +        ret = kvm_arch_put_sve(cs);
> +    } else {
> +        ret = kvm_arch_put_fpsimd(cs);
> +    }
>      if (ret) {
>          return ret;
>      }
> @@ -918,6 +995,54 @@ static int kvm_arch_get_fpsimd(CPUState *cs)
>      return 0;
>  }
>
> +/*
> + * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
> + * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
> + * code the slice index to zero for now as it's unlikely we'll need more than
> + * one slice for quite some time.
> + */
> +static int kvm_arch_get_sve(CPUState *cs)
> +{
> +    ARMCPU *cpu = ARM_CPU(cs);
> +    CPUARMState *env = &cpu->env;
> +    struct kvm_one_reg reg;
> +    uint64_t *r;
> +    int n, ret;
> +
> +    for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
> +        r = &env->vfp.zregs[n].d[0];
> +        reg.addr = (uintptr_t)r;
> +        reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
> +        ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
> +        if (ret) {
> +            return ret;
> +        }
> +        sve_bswap64(r, r, cpu->sve_max_vq * 2);
> +    }
> +
> +    for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
> +        r = &env->vfp.pregs[n].p[0];
> +        reg.addr = (uintptr_t)r;
> +        reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
> +        ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
> +        if (ret) {
> +            return ret;
> +        }
> +        sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq, 8));
> +    }
> +
> +    r = &env->vfp.pregs[FFR_PRED_NUM].p[0];
> +    reg.addr = (uintptr_t)r;
> +    reg.id = KVM_REG_ARM64_SVE_FFR(0);
> +    ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
> +    if (ret) {
> +        return ret;
> +    }
> +    sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq, 8));
> +
> +    return 0;
> +}
> +
>  int kvm_arch_get_registers(CPUState *cs)
>  {
>      struct kvm_one_reg reg;
> @@ -1012,7 +1137,11 @@ int kvm_arch_get_registers(CPUState *cs)
>          env->spsr = env->banked_spsr[i];
>      }
>
> -    ret = kvm_arch_get_fpsimd(cs);
> +    if (cpu_isar_feature(aa64_sve, cpu)) {
> +        ret = kvm_arch_get_sve(cs);
> +    } else {
> +        ret = kvm_arch_get_fpsimd(cs);
> +    }
>      if (ret) {
>          return ret;
>      }
>
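For anyone following the endianness comment above without a big-endian box
handy, here is a minimal, self-contained sketch of the same byte-swap idea.
It is not QEMU code: it assumes a GCC/Clang-style compiler and substitutes
__BYTE_ORDER__ and __builtin_bswap64() for QEMU's HOST_WORDS_BIGENDIAN and
bswap64(), and it always copies instead of returning src unchanged on
little-endian hosts, so it can be built and run on its own.

/* Standalone illustration of the sve_bswap64() idea quoted above. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Copy nr 64-bit words from src to dst, byte swapping them on
 * big-endian hosts. (The QEMU helper avoids the copy on little-endian
 * hosts by returning src; this demo always copies to stay simple.)
 */
static void demo_bswap64(uint64_t *dst, const uint64_t *src, int nr)
{
    for (int i = 0; i < nr; ++i) {
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        dst[i] = __builtin_bswap64(src[i]);
#else
        dst[i] = src[i];
#endif
    }
}

int main(void)
{
    /*
     * KVM's in-memory representation: byte i holds bits
     * [(7 + 8 * i) : (8 * i)] of the register value, i.e. a
     * little-endian byte stream. These 16 bytes encode the 128-bit
     * value 0x0f0e0d0c0b0a0908 0706050403020100.
     */
    uint8_t stream[16];
    uint64_t raw[2], host[2];
    int i;

    for (i = 0; i < 16; ++i) {
        stream[i] = (uint8_t)i;
    }
    memcpy(raw, stream, sizeof(raw));

    demo_bswap64(host, raw, 2);

    /* Should print "0706050403020100 0f0e0d0c0b0a0908" on any host. */
    printf("%016" PRIx64 " %016" PRIx64 "\n", host[0], host[1]);
    return 0;
}

Built as, say, demo.c (hypothetical name) and run on both a little- and a
big-endian host, it should print the same two words, which is exactly the
invariant the get/put paths above rely on.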