From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 4 Apr 2019 22:31:09 +0200
From: Andrew Jones
To: Dave Martin
Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
 Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v7 23/27] KVM: arm64/sve: Add pseudo-register for the guest's vector lengths
Message-ID: <20190404203109.lrt5czzlqpritecd@kamzik.brq.redhat.com>
References: <1553864452-15080-1-git-send-email-Dave.Martin@arm.com>
 <1553864452-15080-24-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1553864452-15080-24-git-send-email-Dave.Martin@arm.com>
User-Agent: NeoMutt/20180716

On Fri, Mar 29, 2019 at 01:00:48PM +0000, Dave Martin wrote:
> This patch adds a new pseudo-register KVM_REG_ARM64_SVE_VLS to
> allow userspace to set and query the set of vector lengths visible
> to the guest.
>
> In the future, multiple register slices per SVE register may be
> visible through the ioctl interface. Once the set of slices has
> been determined we would not be able to allow the vector length set
> to be changed any more, in order to avoid userspace seeing
> inconsistent sets of registers. For this reason, this patch adds
> support for explicit finalization of the SVE configuration via the
> KVM_ARM_VCPU_FINALIZE ioctl.
>
> Finalization is the proper place to allocate the SVE register state
> storage in vcpu->arch.sve_state, so this patch adds that as
> appropriate. The data is freed via kvm_arch_vcpu_uninit(), which
> was previously a no-op on arm64.
>
> To simplify the logic for determining what vector lengths can be
> supported, some code is added to KVM init to work this out, in the
> kvm_arm_init_arch_resources() hook.
>
> The KVM_REG_ARM64_SVE_VLS pseudo-register is not exposed yet.
> Subsequent patches will allow SVE to be turned on for guest vcpus,
> making it visible.
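
For context, once the rest of the series makes KVM_ARM_VCPU_SVE selectable,
the intended userspace sequence is: init the vcpu with the SVE feature,
optionally narrow the vector-length set via the pseudo-register, then
finalize. A minimal sketch of that flow (error handling elided;
configure_sve() is my name for it, not something from the series):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int configure_sve(int vcpu_fd)
	{
		uint64_t vqs[8];		/* 512-bit payload: one bit per VQ */
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64_SVE_VLS,
			.addr = (uint64_t)vqs,
		};
		int feature = KVM_ARM_VCPU_SVE;

		/* Read the host-supported set of vector lengths... */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
			return -1;

		/* ...optionally clear bits here to restrict the guest... */

		/* ...and write the set back; only allowed before finalization. */
		if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
			return -1;

		/* Lock the configuration; KVM then allocates sve_state. */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
	}
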
> Signed-off-by: Dave Martin
> Reviewed-by: Julien Thierry
> Tested-by: zhang.lei
>
> ---
>
> Changes since v5:
>
> * [Julien Thierry] Delete overzealous BUILD_BUG_ON() checks.
>   It also turns out that these could cause kernel build failures in
>   some configurations, even though the checked condition is
>   compile-time constant.
>
>   Because of the way the affected functions are called, the checks
>   are superfluous, so the simplest option is simply to get rid of
>   them.
>
> * [Julien Thierry] Free vcpu->arch.sve_state (if any) in
>   kvm_arch_vcpu_uninit() (which is currently a no-op).
>
>   This was accidentally lost during a previous rebase.
>
> * Add kvm_arm_init_arch_resources() hook, and use it to probe SVE
>   configuration for KVM, to avoid duplicating the logic elsewhere.
>   We only need to do this once.
>
> * Move sve_state buffer allocation to kvm_arm_vcpu_finalize().
>
>   As well as making the code more straightforward, this avoids the
>   need to allocate memory in kvm_reset_vcpu(), the meat of which is
>   non-preemptible since commit 358b28f09f0a ("arm/arm64: KVM: Allow a
>   VCPU to fully reset itself").
>
>   The refactoring means that if this has not been done by the time
>   we hit KVM_RUN, then this allocation will happen on the
>   kvm_arm_first_run_init() path, where preemption remains enabled.
>
> * Add a couple of comments in {get,set}_sve_reg() to make the handling
>   of the KVM_REG_ARM64_SVE_VLS special case a little clearer.
>
> * Move mis-split rework, which avoided put_user() being the correct size
>   only by accident in KVM_GET_REG_LIST, to "KVM: arm64: Enumerate SVE
>   register indices for KVM_GET_REG_LIST".
>
> * Fix wrong WARN_ON() check sense when checking whether the
>   implementation may need more SVE register slices than KVM can
>   support.
>
> * Fix erroneous setting of vcpu->arch.sve_max_vl based on stale loop
>   control variable vq.
>
> * Move definition of KVM_ARM_VCPU_SVE from "KVM: arm64/sve: Allow
>   userspace to enable SVE for vcpus".
>
> * Migrate to explicit finalization of the SVE configuration, using
>   KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
> ---
>  arch/arm64/include/asm/kvm_host.h |  15 +++--
>  arch/arm64/include/uapi/asm/kvm.h |   5 ++
>  arch/arm64/kvm/guest.c            | 114 +++++++++++++++++++++++++++++++++++++-
>  arch/arm64/kvm/reset.c            |  89 +++++++++++++++++++++++++++++
>  4 files changed, 215 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 98658f7..5475cc4 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -23,7 +23,6 @@
>  #define __ARM64_KVM_HOST_H__
>
>  #include
> -#include
>  #include
>  #include
>  #include
> @@ -50,6 +49,7 @@
>
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>
> +/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */
>  #define KVM_VCPU_MAX_FEATURES 4
>
>  #define KVM_REQ_SLEEP \
> @@ -59,10 +59,12 @@
>
>  DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
>
> -static inline int kvm_arm_init_arch_resources(void) { return 0; }
> +extern unsigned int kvm_sve_max_vl;
> +int kvm_arm_init_arch_resources(void);
>
>  int __attribute_const__ kvm_target_cpu(void);
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu);
>  int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
>  void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
>
> @@ -353,6 +355,7 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> +#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> @@ -525,7 +528,6 @@ static inline bool kvm_arch_requires_vhe(void)
>
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> -static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>
> @@ -626,7 +628,10 @@ void kvm_arch_free_vm(struct kvm *kvm);
>
>  int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
>
> -#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
> -#define kvm_arm_vcpu_is_finalized(vcpu) true
> +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what);
> +bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> +
> +#define kvm_arm_vcpu_sve_finalized(vcpu) \
> +	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
>
>  #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index ced760c..6963b7e 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -102,6 +102,7 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
>
>  struct kvm_vcpu_init {
>  	__u32 target;
> @@ -243,6 +244,10 @@ struct kvm_vcpu_events {
>  					 ((n) << 5) | (i))
>  #define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
>
> +/* Vector lengths pseudo-register: */
> +#define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
> +					 KVM_REG_SIZE_U512 | 0xffff)
> +
>  /* Device Control API: ARM VGIC */
>  #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
>  #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1
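
A note on the sizing here: KVM_REG_SIZE_U512 gives the pseudo-register a
512-bit payload, which is exactly one bit per possible vector length
(VQ 1..512, i.e. 16 to 8192 bytes). A hypothetical compile-time check, not
part of the patch (header choices are approximate), that documents the
relationship:

	#include <linux/kernel.h>
	#include <linux/build_bug.h>
	#include <asm/fpsimd.h>

	static inline void sve_vls_size_check(void)
	{
		/* The kernel-side vq bitmap must fill KVM_REG_SIZE_U512 exactly. */
		BUILD_BUG_ON(DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64) *
			     sizeof(u64) != 512 / 8);
	}
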
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 2aa80a5..086ab05 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -206,6 +206,73 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	return err;
>  }
>
> +#define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
> +#define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
> +
> +static bool vq_present(
> +	const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)],
> +	unsigned int vq)
> +{
> +	return (*vqs)[vq_word(vq)] & vq_mask(vq);
> +}
> +
> +static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> +{
> +	unsigned int max_vq, vq;
> +	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
> +
> +	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
> +		return -EINVAL;
> +
> +	memset(vqs, 0, sizeof(vqs));
> +
> +	max_vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> +	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
> +		if (sve_vq_available(vq))
> +			vqs[vq_word(vq)] |= vq_mask(vq);
> +
> +	if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> +{
> +	unsigned int max_vq, vq;
> +	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
> +
> +	if (kvm_arm_vcpu_sve_finalized(vcpu))
> +		return -EPERM; /* too late! */
> +
> +	if (WARN_ON(vcpu->arch.sve_state))
> +		return -EINVAL;
> +
> +	if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs)))
> +		return -EFAULT;
> +
> +	max_vq = 0;
> +	for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; ++vq)
> +		if (vq_present(&vqs, vq))
> +			max_vq = vq;
> +
> +	if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
> +		return -EINVAL;
> +
> +	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
> +		if (vq_present(&vqs, vq) != sve_vq_available(vq))
> +			return -EINVAL;
> +
> +	/* Can't run with no vector lengths at all: */
> +	if (max_vq < SVE_VQ_MIN)
> +		return -EINVAL;
> +
> +	/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */
> +	vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq);
> +
> +	return 0;
> +}
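
The vq_word()/vq_mask() pair above fixes the ABI layout of the bitmap: bit
(vq - SVE_VQ_MIN) of the array, packed into 64-bit words. For illustration,
a userspace-side decoder matching that layout (my own helper, not from the
series); vl is a vector length in bytes and must be a multiple of 16:

	#include <stdbool.h>
	#include <stdint.h>

	#define VQ_MIN 1	/* SVE_VQ_MIN: lengths counted in 128-bit quadwords */

	/* True if vector length vl is in the set read via KVM_REG_ARM64_SVE_VLS. */
	static bool vl_present(const uint64_t vqs[8], unsigned int vl)
	{
		unsigned int vq = vl / 16;	/* bytes -> quadwords */

		return (vqs[(vq - VQ_MIN) / 64] >> ((vq - VQ_MIN) % 64)) & 1;
	}
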
> +
>  #define SVE_REG_SLICE_SHIFT	0
>  #define SVE_REG_SLICE_BITS	5
>  #define SVE_REG_ID_SHIFT	(SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS)
> @@ -296,7 +363,19 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	struct sve_state_reg_region region;
>  	char __user *uptr = (char __user *)reg->addr;
>
> -	if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg))
> +	if (!vcpu_has_sve(vcpu))
> +		return -ENOENT;
> +
> +	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
> +	if (reg->id == KVM_REG_ARM64_SVE_VLS)
> +		return get_sve_vls(vcpu, reg);
> +
> +	/* Otherwise, reg is an architectural SVE register... */
> +
> +	if (!kvm_arm_vcpu_sve_finalized(vcpu))
> +		return -EPERM;
> +
> +	if (sve_reg_to_region(&region, vcpu, reg))
>  		return -ENOENT;
>
>  	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
> @@ -312,7 +391,19 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	struct sve_state_reg_region region;
>  	const char __user *uptr = (const char __user *)reg->addr;
>
> -	if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg))
> +	if (!vcpu_has_sve(vcpu))
> +		return -ENOENT;
> +
> +	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
> +	if (reg->id == KVM_REG_ARM64_SVE_VLS)
> +		return set_sve_vls(vcpu, reg);
> +
> +	/* Otherwise, reg is an architectural SVE register... */
> +
> +	if (!kvm_arm_vcpu_sve_finalized(vcpu))
> +		return -EPERM;
> +
> +	if (sve_reg_to_region(&region, vcpu, reg))
>  		return -ENOENT;
>
>  	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
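
The upshot of the -EPERM checks above, seen from userspace: the
architectural SVE registers stay inaccessible until KVM_ARM_VCPU_FINALIZE,
while KVM_REG_ARM64_SVE_VLS remains accessible throughout. A hedged sketch
(the buffer size assumes the 2048-bit Z-register slice this series uses;
vcpu_fd setup elided):

	/* Read slice 0 of Z0; expect EPERM before finalization. */
	uint8_t z0[256];	/* KVM_REG_SIZE_U2048 payload */
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_ZREG(0, 0),
		.addr = (uint64_t)z0,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		perror("Z0 not readable yet");
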
> @@ -426,7 +517,11 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
>  	if (!vcpu_has_sve(vcpu))
>  		return 0;
>
> -	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */);
> +	/* Policed by KVM_GET_REG_LIST: */
> +	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
> +
> +	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
> +		+ 1; /* KVM_REG_ARM64_SVE_VLS */
>  }
>
>  static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
> @@ -441,6 +536,19 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
>  	if (!vcpu_has_sve(vcpu))
>  		return 0;
>
> +	/* Policed by KVM_GET_REG_LIST: */
> +	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
> +
> +	/*
> +	 * Enumerate this first, so that userspace can save/restore in
> +	 * the order reported by KVM_GET_REG_LIST:
> +	 */
> +	reg = KVM_REG_ARM64_SVE_VLS;
> +	if (put_user(reg, uindices++))
> +		return -EFAULT;
> +
> +	++num_regs;
> +
>  	for (i = 0; i < slices; i++) {
>  		for (n = 0; n < SVE_NUM_ZREGS; n++) {
>  			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f16a5f8..e7f9c06 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -23,11 +23,14 @@
>  #include
>  #include
>  #include
> +#include
> +#include
>
>  #include
>
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -99,6 +102,92 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	return r;
>  }
>
> +unsigned int kvm_sve_max_vl;
> +
> +int kvm_arm_init_arch_resources(void)
> +{
> +	if (system_supports_sve()) {
> +		kvm_sve_max_vl = sve_max_virtualisable_vl;
> +
> +		/*
> +		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
> +		 * to be extended with multiple register slice support in
> +		 * order to support vector lengths greater than
> +		 * SVE_VL_ARCH_MAX:
> +		 */
> +		if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX))
> +			kvm_sve_max_vl = SVE_VL_ARCH_MAX;
> +
> +		/*
> +		 * Don't even try to make use of vector lengths that
> +		 * aren't available on all CPUs, for now:
> +		 */
> +		if (kvm_sve_max_vl < sve_max_vl)
> +			pr_warn("KVM: SVE vector length for guests limited to %u bytes\n",
> +				kvm_sve_max_vl);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Finalize vcpu's maximum SVE vector length, allocating
> + * vcpu->arch.sve_state as necessary.
> + */
> +static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
> +{
> +	void *buf;
> +	unsigned int vl;
> +
> +	vl = vcpu->arch.sve_max_vl;
> +
> +	/*
> +	 * Responsibility for these properties is shared between
> +	 * kvm_arm_init_arch_resources(), kvm_vcpu_enable_sve() and
> +	 * set_sve_vls(). Double-check here just to be sure:
> +	 */
> +	if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl ||
> +		    vl > SVE_VL_ARCH_MAX))
> +		return -EIO;
> +
> +	buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL);
> +	if (!buf)
> +		return -ENOMEM;
> +
> +	vcpu->arch.sve_state = buf;
> +	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
> +	return 0;
> +}
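
For reference, kvm_arm_vcpu_finalize() below is reached from the generic
vcpu ioctl handler added earlier in this series; as I read that patch, the
dispatch is roughly the following (reproduced from memory, so treat it as a
sketch rather than the exact code):

	case KVM_ARM_VCPU_FINALIZE: {
		int what;

		if (!kvm_vcpu_initialized(vcpu))
			return -ENOEXEC;

		if (get_user(what, (const int __user *)argp))
			return -EFAULT;

		return kvm_arm_vcpu_finalize(vcpu, what);
	}
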
> +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
> +{
> +	switch (what) {
> +	case KVM_ARM_VCPU_SVE:
> +		if (!vcpu_has_sve(vcpu))
> +			return -EINVAL;
> +
> +		if (kvm_arm_vcpu_sve_finalized(vcpu))
> +			return -EPERM;
> +
> +		return kvm_vcpu_finalize_sve(vcpu);

In the next patch I see that we already have KVM_ARM64_GUEST_HAS_SVE set
in vcpu->arch.flags at this point. So if this kvm_vcpu_finalize_sve() call
fails, shouldn't we unset KVM_ARM64_GUEST_HAS_SVE and anything else used
to indicate the vcpu has SVE?

Thanks,
drew

> +	}
> +
> +	return -EINVAL;
> +}
> +
> +bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu))
> +		return false;
> +
> +	return true;
> +}
> +
> +void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
> +{
> +	kfree(vcpu->arch.sve_state);
> +}
> +
>  /**
>   * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>   * @vcpu: The VCPU pointer
> --
> 2.1.4
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel