Date: Mon, 13 May 2024 17:19:39 +0100
From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Joey Gouly <joey.gouly@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>
Subject: Re: [PATCH 01/16] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
Message-ID: <86wmnxn0pw.wl-maz@kernel.org>
References: <20240409175448.3507472-1-maz@kernel.org> <20240409175448.3507472-2-maz@kernel.org>
"Harue") X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: oliver.upton@linux.dev, kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, joey.gouly@arm.com, alexandru.elisei@arm.com, christoffer.dall@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240513_091946_054148_AF4D82C3 X-CRM114-Status: GOOD ( 54.01 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On Tue, 07 May 2024 07:17:13 +0100, Oliver Upton wrote: > > Hey Marc, > > On Tue, Apr 09, 2024 at 06:54:33PM +0100, Marc Zyngier wrote: > > +static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu) > > +{ > > + return !(mmu->tlb_vttbr & 1); > > +} > > More readable if you use VTTBR_CNP_BIT here. Yes, well spotted. [...] > > +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type) > > +{ > > + int cpu, err; > > + struct kvm_pgtable *pgt; > > + > > + /* > > + * If we already have our page tables in place, and that the > > + * MMU context is the canonical one, we have a bug somewhere, > > + * as this is only supposed to ever happen once per VM. > > + * > > + * Otherwise, we're building nested page tables, and that's > > + * probably because userspace called KVM_ARM_VCPU_INIT more > > + * than once on the same vcpu. Since that's actually legal, > > + * don't kick a fuss and leave gracefully. > > + */ > > if (mmu->pgt != NULL) { > > + if (&kvm->arch.mmu != mmu) > > A helper might be a good idea, I see this repeated several times: > > static inline bool kvm_is_nested_s2_mmu(struct kvm_s2_mmu *mmu) > { > return &arch->mmu != mmu; > } Yeah, I can probably fit something like this in a number of spots. Just need to be careful as mmu is not initialised at all in some contexts. > > > + return 0; > > + > > kvm_err("kvm_arch already initialized?\n"); > > return -EINVAL; > > } > > > > + /* > > + * We only initialise the IPA range on the canonical MMU, so > > + * the type is meaningless in all other situations. > > + */ > > + if (&kvm->arch.mmu != mmu) > > + type = kvm_get_pa_bits(kvm); > > I'm not sure I follow this comment, because kvm_init_ipa_range() still > gets called on nested MMUs. Is this suggesting that the configured IPA > limit of the shadow MMUs doesn't matter as they can only ever map things > in the canonical IPA space? Yes, that's exactly what I meant. Just because we limit the IPA space to some number of bits doesn't mean we can limit the guest's own S2 to the same thing, because they mean different things: - the canonical IPA space (aka type) is a contract between KVM and userspace on which ranges the MMIO exits are valid - the nested IPA space is whatever the virtual HW exposes as PARange, and the only constraint is that the *output* of the nested IPA space must be contained in the canonical IPA space Does this make sense? Happy to rework the comment to clarify this. 
> 
> > +        err = kvm_init_ipa_range(mmu, type);
> > +        if (err)
> > +                return err;
> > +
> >          pgt = kzalloc(sizeof(*pgt), GFP_KERNEL_ACCOUNT);
> >          if (!pgt)
> >                  return -ENOMEM;
> > @@ -925,6 +960,10 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
> > 
> >          mmu->pgt = pgt;
> >          mmu->pgd_phys = __pa(pgt->pgd);
> > +
> > +        if (&kvm->arch.mmu != mmu)
> > +                kvm_init_nested_s2_mmu(mmu);
> > +
> >          return 0;
> > 
> >  out_destroy_pgtable:
> > @@ -976,7 +1015,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
> > 
> >                  if (!(vma->vm_flags & VM_PFNMAP)) {
> >                          gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
> > -                        unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> > +                        kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> >                  }
> >                  hva = vm_end;
> >          } while (hva < reg_end);
> > @@ -2054,11 +2093,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
> >  {
> >  }
> > 
> > -void kvm_arch_flush_shadow_all(struct kvm *kvm)
> > -{
> > -        kvm_uninit_stage2_mmu(kvm);
> > -}
> > -
> >  void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> >                                     struct kvm_memory_slot *slot)
> >  {
> > @@ -2066,7 +2100,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> >          phys_addr_t size = slot->npages << PAGE_SHIFT;
> > 
> >          write_lock(&kvm->mmu_lock);
> > -        unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> > +        kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> >          write_unlock(&kvm->mmu_lock);
> >  }
> > 
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index ced30c90521a..1f4f80a8c011 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -7,7 +7,9 @@
> >  #include 
> >  #include 
> > 
> > +#include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> > 
> > @@ -16,6 +18,209 @@
> >  /* Protection against the sysreg repainting madness... */
> >  #define NV_FTR(r, f)                ID_AA64##r##_EL1_##f
> > 
> > +void kvm_init_nested(struct kvm *kvm)
> > +{
> > +        kvm->arch.nested_mmus = NULL;
> > +        kvm->arch.nested_mmus_size = 0;
> > +}
> > +
> > +int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
> > +{
> > +        struct kvm *kvm = vcpu->kvm;
> > +        struct kvm_s2_mmu *tmp;
> > +        int num_mmus;
> > +        int ret = -ENOMEM;
> > +
> > +        if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->kvm->arch.vcpu_features))
> > +                return 0;
> > +
> > +        if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
> > +                return -EINVAL;
> 
> nitpick: maybe guard the call to kvm_vcpu_init_nested() with
> vcpu_has_nv() and collapse these into
> 
>   if (!vcpu_has_nv(vcpu))
>           return -EINVAL;

Indeed, this is definitely old cruft we can get rid of. We don't even
need to error out, as there is a single call site.

> 
> > +        /*
> > +         * Let's treat memory allocation failures as benign: If we fail to
> > +         * allocate anything, return an error and keep the allocated array
> > +         * alive. Userspace may try to recover by intializing the vcpu
> > +         * again, and there is no reason to affect the whole VM for this.
> > +         */
> 
> This code feels a bit tricky, and I'm not sure much will be done to
> recover the VM in practice should this allocation / ioctl fail.

I think this is a question of consistency. We don't break the VM when
VCPU_INIT fails in any other case.

But yeah, I agree that the whole fixup code is tricky.

> Is it possible to do this late in kvm_arch_vcpu_run_pid_change() and
> only have the first vCPU to reach the call do the initialization for the
> whole VM? We could then dispose of the reallocation / fixup scheme
> below.

We could, but then the error becomes pretty non-recoverable.

Another thing is that I really should move this over to be vmalloc'd
rather than kmalloc'd -- there is no benefit in having this physically
contiguous.
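To make that last point concrete, a rough, untested sketch of what the
allocation could become, assuming the four-argument kvrealloc() (old
size, new size, flags) and folding in the early -ENOMEM return
suggested below:

  num_mmus = atomic_read(&kvm->online_vcpus) * 2;
  tmp = kvrealloc(kvm->arch.nested_mmus,
                  kvm->arch.nested_mmus_size * sizeof(*kvm->arch.nested_mmus),
                  num_mmus * sizeof(*kvm->arch.nested_mmus),
                  GFP_KERNEL_ACCOUNT | __GFP_ZERO);
  if (!tmp)
          return -ENOMEM;

  /* The pgt->mmu back-pointer fixup is still needed if the array moved */

On failure kvrealloc() leaves the previous array untouched, so the
"keep the allocated array alive" behaviour is preserved.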
> 
> If we keep this code...
> 
> > +        num_mmus = atomic_read(&kvm->online_vcpus) * 2;
> > +        tmp = krealloc(kvm->arch.nested_mmus,
> > +                       num_mmus * sizeof(*kvm->arch.nested_mmus),
> > +                       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
> 
> Just do an early 'return -ENOMEM' here to cut a level of indendation for
> the rest that follows.
> 
> > +        if (tmp) {
> > +                /*
> > +                 * If we went through a realocation, adjust the MMU
> > +                 * back-pointers in the previously initialised
> > +                 * pg_table structures.
> 
> nitpick: pgtable or kvm_pgtable structures
> 
> > +                 */
> > +                if (kvm->arch.nested_mmus != tmp) {
> > +                        int i;
> > +
> > +                        for (i = 0; i < num_mmus - 2; i++)
> > +                                tmp[i].pgt->mmu = &tmp[i];
> > +                }
> > +
> > +                if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1], 0) ||
> > +                    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
> > +                        kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> > +                        kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
> > +                } else {
> > +                        kvm->arch.nested_mmus_size = num_mmus;
> > +                        ret = 0;
> > +                }
> > +
> > +                kvm->arch.nested_mmus = tmp;
> > +        }
> > +
> > +        return ret;
> > +}
> > +
> > +struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
> > +{
> > +        bool nested_stage2_enabled;
> > +        u64 vttbr, vtcr, hcr;
> > +        struct kvm *kvm;
> > +        int i;
> > +
> > +        kvm = vcpu->kvm;
> 
> nit: just do this when declaring the local.
> 
> > +        lockdep_assert_held_write(&kvm->mmu_lock);
> > +
> > +        vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
> > +        vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
> > +        hcr = vcpu_read_sys_reg(vcpu, HCR_EL2);
> > +
> > +        nested_stage2_enabled = hcr & HCR_VM;
> > +
> > +        /* Don't consider the CnP bit for the vttbr match */
> > +        vttbr = vttbr & ~VTTBR_CNP_BIT;
> 
> nit: &=
> 
> > +        /*
> > +         * Two possibilities when looking up a S2 MMU context:
> > +         *
> > +         * - either S2 is enabled in the guest, and we need a context that is
> > +         *   S2-enabled and matches the full VTTBR (VMID+BADDR) and VTCR,
> > +         *   which makes it safe from a TLB conflict perspective (a broken
> > +         *   guest won't be able to generate them),
> > +         *
> > +         * - or S2 is disabled, and we need a context that is S2-disabled
> > +         *   and matches the VMID only, as all TLBs are tagged by VMID even
> > +         *   if S2 translation is disabled.
> > +         */
> 
> Looks like some spaces snuck in and got the indendation weird.

Ack on all the above.

Thanks for having looked into it!

        M.

-- 
Without deviation from the norm, progress is not possible.