Date: Tue, 5 Jul 2022 18:57:53 +0300
From: Mike Rapoport
To: Catalin Marinas
Cc: Will Deacon, "guanghui.fgh", Ard Biesheuvel, baolin.wang@linux.alibaba.com,
	akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com,
	james.morse@arm.com, quic_qiancai@quicinc.com, christophe.leroy@csgroup.eu,
	jonathan@marek.ca, mark.rutland@arm.com, thunder.leizhen@huawei.com,
	anshuman.khandual@arm.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, geert+renesas@glider.be, linux-mm@kvack.org,
	yaohongbo@linux.alibaba.com, alikernel-developer@linux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
References: <2ae1cae0-ee26-aa59-7ed9-231d67194dce@linux.alibaba.com>
	<20220704142313.GE31684@willie-the-truck>
	<6977c692-78ca-5a67-773e-0389c85f2650@linux.alibaba.com>
	<20220704163815.GA32177@willie-the-truck>
	<20220705095231.GB552@willie-the-truck>
	<5d044fdd-a61a-d60f-d294-89e17de37712@linux.alibaba.com>
	<20220705121115.GB1012@willie-the-truck>

On Tue, Jul 05, 2022 at 04:34:09PM +0100, Catalin Marinas wrote:
> On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
> > +void __init remap_crashkernel(void)
> > +{
> > +#ifdef CONFIG_KEXEC_CORE
> > +        phys_addr_t start, end, size;
> > +        phys_addr_t aligned_start, aligned_end;
> > +
> > +        if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> > +                return;
> > +
> > +        if (!crashk_res.end)
> > +                return;
> > +
> > +        start = crashk_res.start & PAGE_MASK;
> > +        end = PAGE_ALIGN(crashk_res.end);
> > +
> > +        aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
> > +        aligned_end = ALIGN(end, PUD_SIZE);
> > +
> > +        /* Clear PUDs containing crash kernel memory */
> > +        unmap_hotplug_range(__phys_to_virt(aligned_start),
> > +                            __phys_to_virt(aligned_end), false, NULL);
>
> What I don't understand is what happens if there's valid kernel data
> between aligned_start and crashk_res.start (or the other end of the
> range).

Data shouldn't go anywhere :)
There is

+        /* map area from PUD start to start of crash kernel with large pages */
+        size = start - aligned_start;
+        __create_pgd_mapping(swapper_pg_dir, aligned_start,
+                             __phys_to_virt(aligned_start),
+                             size, PAGE_KERNEL, early_pgtable_alloc, 0);

and

+        /* map area from end of crash kernel to PUD end with large pages */
+        size = aligned_end - end;
+        __create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
+                             size, PAGE_KERNEL, early_pgtable_alloc, 0);

after the unmap, so after we tear down part of the linear map we
immediately recreate it, just with a different page size.

This all happens before SMP, so there is no concurrency at that point.

> --
> Catalin

-- 
Sincerely yours,
Mike.
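
Putting the hunks quoted in this mail together, the complete remap sequence
would look roughly like the sketch below (it would sit next to the helpers it
uses in arch/arm64/mm/mmu.c). Step 3, which remaps the crash kernel range
itself with base pages via NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS, is assumed
from the surrounding discussion and is not shown in the mail above; the unmap
and the two large-page mappings are taken from the quoted hunks.

/*
 * Sketch of the overall flow discussed above, not the exact patch.
 * Step 3 is an assumption: the crash kernel range is remapped with
 * base pages so its permissions can later be changed per page.
 */
void __init remap_crashkernel(void)
{
#ifdef CONFIG_KEXEC_CORE
        phys_addr_t start, end, size;
        phys_addr_t aligned_start, aligned_end;

        /* Nothing to do if the linear map already uses base pages */
        if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
                return;

        if (!crashk_res.end)
                return;

        start = crashk_res.start & PAGE_MASK;
        end = PAGE_ALIGN(crashk_res.end);

        aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
        aligned_end = ALIGN(end, PUD_SIZE);

        /* 1. Tear down the PUD-aligned window covering the crash kernel */
        unmap_hotplug_range(__phys_to_virt(aligned_start),
                            __phys_to_virt(aligned_end), false, NULL);

        /* 2. Re-map [aligned_start, start) with large pages */
        size = start - aligned_start;
        __create_pgd_mapping(swapper_pg_dir, aligned_start,
                             __phys_to_virt(aligned_start),
                             size, PAGE_KERNEL, early_pgtable_alloc, 0);

        /* 3. (Assumed) re-map the crash kernel itself with base pages */
        size = end - start;
        __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
                             size, PAGE_KERNEL, early_pgtable_alloc,
                             NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);

        /* 4. Re-map [end, aligned_end) with large pages */
        size = aligned_end - end;
        __create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
                             size, PAGE_KERNEL, early_pgtable_alloc, 0);
#endif
}

With this layout the crash kernel pages can later have their protections
changed one page at a time, while the rest of the linear map keeps block
mappings and avoids the access performance degradation the patch is
addressing.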