From mboxrd@z Thu Jan 1 00:00:00 1970
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v3 09/21] KVM: arm64: Convert unmap_stage2_range() to generic page-table API
Date: Tue, 25 Aug 2020 10:39:41 +0100
Message-Id: <20200825093953.26493-10-will@kernel.org>
In-Reply-To: <20200825093953.26493-1-will@kernel.org>
References: <20200825093953.26493-1-will@kernel.org>
Cc: kernel-team@android.com, Gavin Shan, Suzuki Poulose, Marc Zyngier,
	Quentin Perret, James Morse, Catalin Marinas, Will Deacon,
	linux-arm-kernel@lists.infradead.org

Convert unmap_stage2_range() to use kvm_pgtable_stage2_unmap() instead
of walking the page-table directly.

Cc: Marc Zyngier
Cc: Quentin Perret
Signed-off-by: Will Deacon
---
(An illustrative sketch of the chunk-and-resched idiom used here is
appended after the patch.)

 arch/arm64/kvm/mmu.c | 57 +++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 704b471a48ce..751ce2462765 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -39,6 +39,33 @@ static bool is_iomap(unsigned long flags)
 	return flags & KVM_S2PTE_FLAG_IS_IOMAP;
 }
 
+/*
+ * Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
+ * we may see kernel panics with CONFIG_DETECT_HUNG_TASK,
+ * CONFIG_LOCKUP_DETECTOR, CONFIG_LOCKDEP. Additionally, holding the lock too
+ * long will also starve other vCPUs. We have to also make sure that the page
+ * tables are not freed while we released the lock.
+ */
+#define stage2_apply_range(kvm, addr, end, fn, resched)			\
+({									\
+	int ret;							\
+	struct kvm *__kvm = (kvm);					\
+	bool __resched = (resched);					\
+	u64 next, __addr = (addr), __end = (end);			\
+	do {								\
+		struct kvm_pgtable *pgt = __kvm->arch.mmu.pgt;		\
+		if (!pgt)						\
+			break;						\
+		next = stage2_pgd_addr_end(__kvm, __addr, __end);	\
+		ret = fn(pgt, __addr, next - __addr);			\
+		if (ret)						\
+			break;						\
+		if (__resched && next != __end)				\
+			cond_resched_lock(&__kvm->mmu_lock);		\
+	} while (__addr = next, __addr != __end);			\
+	ret;								\
+})
+
 static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 {
 	return memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY);
@@ -220,8 +247,8 @@ static inline void kvm_pgd_populate(pgd_t *pgdp, p4d_t *p4dp)
  * end up writing old data to disk.
  *
  * This is why right after unmapping a page/section and invalidating
- * the corresponding TLBs, we call kvm_flush_dcache_p*() to make sure
- * the IO subsystem will never hit in the cache.
+ * the corresponding TLBs, we flush to make sure the IO subsystem will
+ * never hit in the cache.
  *
  * This is all avoided on systems that have ARM64_HAS_STAGE2_FWB, as
  * we then fully enforce cacheability of RAM, no matter what the guest
@@ -344,32 +371,12 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 			   bool may_block)
 {
 	struct kvm *kvm = mmu->kvm;
-	pgd_t *pgd;
-	phys_addr_t addr = start, end = start + size;
-	phys_addr_t next;
+	phys_addr_t end = start + size;
 
 	assert_spin_locked(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-
-	pgd = mmu->pgd + stage2_pgd_index(kvm, addr);
-	do {
-		/*
-		 * Make sure the page table is still active, as another thread
-		 * could have possibly freed the page table, while we released
-		 * the lock.
-		 */
-		if (!READ_ONCE(mmu->pgd))
-			break;
-		next = stage2_pgd_addr_end(kvm, addr, end);
-		if (!stage2_pgd_none(kvm, *pgd))
-			unmap_stage2_p4ds(mmu, pgd, addr, next);
-		/*
-		 * If the range is too large, release the kvm->mmu_lock
-		 * to prevent starvation and lockup detector warnings.
-		 */
-		if (may_block && next != end)
-			cond_resched_lock(&kvm->mmu_lock);
-	} while (pgd++, addr = next, addr != end);
+	WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
+				   may_block));
 }
 
 static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
-- 
2.28.0.297.g1956fa8f8d-goog
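
[ Illustrative note, not part of the patch itself: the second hunk replaces
  an open-coded PGD walk with stage2_apply_range(), which splits a large IPA
  range into PGD-sized chunks, applies a callback to each chunk, and drops
  kvm->mmu_lock between chunks when rescheduling is allowed. The standalone
  C sketch below shows the same chunk-and-resched idiom outside the kernel.
  All names in it (CHUNK_SIZE, apply_range, unmap_chunk) are invented for
  the example, and a pthread mutex stands in for kvm->mmu_lock. ]

/*
 * Standalone sketch of the range-splitting pattern expressed by
 * stage2_apply_range(): walk [addr, end) one chunk at a time, apply a
 * callback to each chunk, and periodically release the lock between
 * chunks so that other threads are not starved.
 */
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE	(UINT64_C(1) << 30)	/* stand-in for a PGD-sized block */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Example callback: pretend to unmap [addr, addr + size). */
static int unmap_chunk(uint64_t addr, uint64_t size)
{
	printf("unmap [%#" PRIx64 ", %#" PRIx64 ")\n", addr, addr + size);
	return 0;
}

/*
 * Apply 'fn' to [addr, end) chunk by chunk with 'lock' held. If 'resched'
 * is non-zero, drop and re-take the lock between chunks, mirroring the
 * cond_resched_lock() call in the kernel macro.
 */
static int apply_range(uint64_t addr, uint64_t end,
		       int (*fn)(uint64_t, uint64_t), int resched)
{
	int ret = 0;

	pthread_mutex_lock(&lock);
	while (addr < end) {
		/* Next chunk-aligned boundary, clamped to the end of the range. */
		uint64_t next = (addr + CHUNK_SIZE) & ~(CHUNK_SIZE - 1);

		if (next > end)
			next = end;

		ret = fn(addr, next - addr);
		if (ret)
			break;

		if (resched && next != end) {
			pthread_mutex_unlock(&lock);
			/* Another thread may take the lock here. */
			pthread_mutex_lock(&lock);
		}

		addr = next;
	}
	pthread_mutex_unlock(&lock);

	return ret;
}

int main(void)
{
	/* "Unmap" a 3 GiB range starting at 1 GiB, in 1 GiB chunks. */
	return apply_range(UINT64_C(1) << 30, UINT64_C(4) << 30, unmap_chunk, 1);
}

[ The 'next != end' test matters: on the final chunk there is nothing to be
  gained from dropping the lock only to re-take it and return, which is why
  both the kernel macro and this sketch skip the resched step for the last
  chunk. ]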