Date: Tue, 5 Jul 2022 23:45:40 +0300
From: Mike Rapoport
To: Catalin Marinas
Cc: Will Deacon, "guanghui.fgh", Ard Biesheuvel, baolin.wang@linux.alibaba.com,
    akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com,
    james.morse@arm.com, quic_qiancai@quicinc.com, christophe.leroy@csgroup.eu,
    jonathan@marek.ca, mark.rutland@arm.com, thunder.leizhen@huawei.com,
    anshuman.khandual@arm.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, geert+renesas@glider.be, linux-mm@kvack.org,
    yaohongbo@linux.alibaba.com, alikernel-developer@linux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation

On Tue, Jul 05, 2022 at 06:05:01PM +0100, Catalin Marinas wrote:
> On Tue, Jul 05, 2022 at 06:57:53PM +0300, Mike Rapoport wrote:
> > On Tue, Jul 05, 2022 at 04:34:09PM +0100, Catalin Marinas wrote:
> > > On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
> > > > +void __init remap_crashkernel(void)
> > > > +{
> > > > +#ifdef CONFIG_KEXEC_CORE
> > > > +	phys_addr_t start, end, size;
> > > > +	phys_addr_t aligned_start, aligned_end;
> > > > +
> > > > +	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> > > > +		return;
> > > > +
> > > > +	if (!crashk_res.end)
> > > > +		return;
> > > > +
> > > > +	start = crashk_res.start & PAGE_MASK;
> > > > +	end = PAGE_ALIGN(crashk_res.end);
> > > > +
> > > > +	aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
> > > > +	aligned_end = ALIGN(end, PUD_SIZE);
> > > > +
> > > > +	/* Clear PUDs containing crash kernel memory */
> > > > +	unmap_hotplug_range(__phys_to_virt(aligned_start),
> > > > +			    __phys_to_virt(aligned_end), false, NULL);
> > >
> > > What I don't understand is what happens if there's valid kernel data
> > > between aligned_start and crashk_res.start (or the other end of the
> > > range).
> >
> > Data shouldn't go anywhere :)
> >
> > There is
> >
> > +	/* map area from PUD start to start of crash kernel with large pages */
> > +	size = start - aligned_start;
> > +	__create_pgd_mapping(swapper_pg_dir, aligned_start,
> > +			     __phys_to_virt(aligned_start),
> > +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
> >
> > and
> >
> > +	/* map area from end of crash kernel to PUD end with large pages */
> > +	size = aligned_end - end;
> > +	__create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
> > +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
> >
> > after the unmap, so after we tear down a part of the linear map we
> > immediately recreate it, just with a different page size.
> >
> > This all happens before SMP, so there is no concurrency at that point.
>
> That brief period of unmap worries me. The kernel text, data and stack
> are all in the vmalloc space, but any other (memblock) allocation made up
> to this point may be in the unmapped range before and after the
> crashkernel reservation. Interrupts are off, so I think the only
> allocation and potential access that may go into this range is the page
> table itself. But it looks fragile to me.

I agree there is a chance that an allocation will come from the unmapped
range. We can make sure this won't happen, though: we can cap the memblock
allocations with memblock_set_current_limit(aligned_start), or keep the
range busy with memblock_reserve(aligned_start, aligned_end - aligned_start),
until the mappings are restored.
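Completely untested, but I mean something along these lines, as a sketch on
top of the patch above. It assumes we are still inside remap_crashkernel()
(so aligned_start and aligned_end are in scope) and that there is enough
free memory below aligned_start for the page-table allocations:

	phys_addr_t old_limit = memblock_get_current_limit();

	/*
	 * Keep early allocations away from [aligned_start, aligned_end)
	 * while the linear map covering it is torn down. Capping at
	 * aligned_start also rules out everything above the range, which
	 * is stricter than needed but trivially safe this early in boot.
	 */
	memblock_set_current_limit(aligned_start);

	/* Clear PUDs containing crash kernel memory */
	unmap_hotplug_range(__phys_to_virt(aligned_start),
			    __phys_to_virt(aligned_end), false, NULL);

	/* ... the __create_pgd_mapping() calls recreating the mappings ... */

	/* the linear map is complete again, lift the cap */
	memblock_set_current_limit(old_limit);

The memblock_reserve() variant would work as well, but then we'd have to
take care to release only the parts of the range outside crashk_res
afterwards, so the current_limit cap looks simpler to me.

--
Sincerely yours,
Mike.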