Date: Mon, 21 Mar 2022 17:29:14 +0000
From: Alexandru Elisei
To: Reiji Watanabe
Cc: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 will@kernel.org, mark.rutland@arm.com
Subject: Re: [RFC PATCH v5 08/38] KVM: arm64: Unlock memslots after stage 2 tables are freed
References: <20211117153842.302159-1-alexandru.elisei@arm.com>
 <20211117153842.302159-9-alexandru.elisei@arm.com>
 <0ce1011b-6e77-f00c-6d19-5b5394e0d0c2@google.com>
In-Reply-To: <0ce1011b-6e77-f00c-6d19-5b5394e0d0c2@google.com>

Hi,

On Thu, Mar 17, 2022 at 10:19:56PM -0700, Reiji Watanabe wrote:
> Hi Alex,
>
> On 11/17/21 7:38 AM, Alexandru Elisei wrote:
> > Unpin the backing pages mapped at stage 2 after the stage 2 translation
> > tables are destroyed.
> >
> > Signed-off-by: Alexandru Elisei
> > ---
> >  arch/arm64/kvm/mmu.c | 23 ++++++++++++++++++-----
> >  1 file changed, 18 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index cd6f1bc7842d..072e2aba371f 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1627,11 +1627,19 @@ int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
> >  	return ret;
> >  }
> >
> > -static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > +static void __unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> >  {
> >  	bool writable = memslot->arch.flags & KVM_MEMSLOT_LOCK_WRITE;
> >  	unsigned long npages = memslot->npages;
> >
> > +	unpin_memslot_pages(memslot, writable);
> > +	account_locked_vm(current->mm, npages, false);
> > +
> > +	memslot->arch.flags &= ~KVM_MEMSLOT_LOCK_MASK;
> > +}
> > +
> > +static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > +{
> >  	/*
> >  	 * MMU maintenance operations aren't performed on an unlocked memslot.
> >  	 * Unmap it from stage 2 so the abort handler performs the necessary
> > @@ -1640,10 +1648,7 @@ static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> >  	if (kvm_mmu_has_pending_ops(kvm))
> >  		kvm_arch_flush_shadow_memslot(kvm, memslot);
> >
> > -	unpin_memslot_pages(memslot, writable);
> > -	account_locked_vm(current->mm, npages, false);
> > -
> > -	memslot->arch.flags &= ~KVM_MEMSLOT_LOCK_MASK;
> > +	__unlock_memslot(kvm, memslot);
> >  }
> >
> >  int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags)
> > @@ -1951,7 +1956,15 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
> >  void kvm_arch_flush_shadow_all(struct kvm *kvm)
> >  {
> > +	struct kvm_memory_slot *memslot;
> > +
> >  	kvm_free_stage2_pgd(&kvm->arch.mmu);
> > +
> > +	kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
> > +		if (!memslot_is_locked(memslot))
> > +			continue;
> > +		__unlock_memslot(kvm, memslot);
> > +	}
> >  }
>
> Perhaps it might be useful to manage the number of locked memslots ?
> (can be used in the fix for kvm_mmu_unlock_memslot in the patch-7 as well)

I don't think keeping a count is very useful, because we usually want to
find all the locked memslots, and there's no guarantee that a locked
memslot will be near the start of the list, so at best the count saves
iterating over the trailing memslots. In the case above, this is done when
the VM is being destroyed, which is not particularly performance sensitive,
and a few extra linked list accesses won't make much of a difference. In
patch #7, KVM iterates through the memslots and calls
kvm_arch_flush_shadow_memslot(), which is several orders of magnitude
slower than iterating over a few extra memslots. Also, I don't think
userspace locking and then unlocking a memslot before running any VCPUs is
something that will happen very often.
Thanks,
Alex

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel