Date: Wed, 6 Jul 2022 08:43:36 +0100
From: Catalin Marinas
To: "guanghui.fgh"
Cc: Mike Rapoport, Will Deacon, Ard Biesheuvel,
 baolin.wang@linux.alibaba.com, akpm@linux-foundation.org, david@redhat.com,
 jianyong.wu@arm.com, james.morse@arm.com, quic_qiancai@quicinc.com,
 christophe.leroy@csgroup.eu, jonathan@marek.ca, mark.rutland@arm.com,
 thunder.leizhen@huawei.com, anshuman.khandual@arm.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 geert+renesas@glider.be, linux-mm@kvack.org, yaohongbo@linux.alibaba.com,
 alikernel-developer@linux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
References: <20220705095231.GB552@willie-the-truck>
 <5d044fdd-a61a-d60f-d294-89e17de37712@linux.alibaba.com>
 <20220705121115.GB1012@willie-the-truck>
 <7bf7c5ea-16eb-b02f-8ef5-bb94c157236d@linux.alibaba.com>
In-Reply-To: <7bf7c5ea-16eb-b02f-8ef5-bb94c157236d@linux.alibaba.com>

On Wed, Jul 06, 2022 at 10:49:43AM +0800, guanghui.fgh wrote:
> On 2022/7/6 4:45, Mike Rapoport wrote:
> > On Tue, Jul 05, 2022 at 06:05:01PM +0100, Catalin Marinas wrote:
> > > On Tue, Jul 05, 2022 at 06:57:53PM +0300, Mike Rapoport wrote:
> > > > On Tue, Jul 05, 2022 at 04:34:09PM +0100, Catalin Marinas wrote:
> > > > > On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
> > > > > > +void __init remap_crashkernel(void)
> > > > > > +{
> > > > > > +#ifdef CONFIG_KEXEC_CORE
> > > > > > +	phys_addr_t start, end, size;
> > > > > > +	phys_addr_t aligned_start, aligned_end;
> > > > > > +
> > > > > > +	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> > > > > > +		return;
> > > > > > +
> > > > > > +	if (!crashk_res.end)
> > > > > > +		return;
> > > > > > +
> > > > > > +	start = crashk_res.start & PAGE_MASK;
> > > > > > +	end = PAGE_ALIGN(crashk_res.end);
> > > > > > +
> > > > > > +	aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
> > > > > > +	aligned_end = ALIGN(end, PUD_SIZE);
> > > > > > +
> > > > > > +	/* Clear PUDs containing crash kernel memory */
> > > > > > +	unmap_hotplug_range(__phys_to_virt(aligned_start),
> > > > > > +			    __phys_to_virt(aligned_end), false, NULL);
> > > > >
> > > > > What I don't understand is what happens if there's valid kernel data
> > > > > between aligned_start and crashk_res.start (or the other end of the
> > > > > range).
> > > >
> > > > Data shouldn't go anywhere :)
> > > >
> > > > There is
> > > >
> > > > +	/* map area from PUD start to start of crash kernel with large pages */
> > > > +	size = start - aligned_start;
> > > > +	__create_pgd_mapping(swapper_pg_dir, aligned_start,
> > > > +			     __phys_to_virt(aligned_start),
> > > > +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
> > > >
> > > > and
> > > >
> > > > +	/* map area from end of crash kernel to PUD end with large pages */
> > > > +	size = aligned_end - end;
> > > > +	__create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
> > > > +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
> > > >
> > > > after the unmap, so after we tear down a part of a linear map we
> > > > immediately recreate it, just with a different page size.
> > > >
> > > > This all happens before SMP, so there is no concurrency at that point.
> > >
> > > That brief period of unmap worries me. The kernel text, data and stack
> > > are all in the vmalloc space but any other (memblock) allocation to this
> > > point may be in the unmapped range before and after the crashkernel
> > > reservation.
> > > The interrupts are off, so I think the only allocation and
> > > potential access that may go in this range is the page table itself.
> > > But it looks fragile to me.
> >
> > I agree there are chances there will be an allocation from the unmapped
> > range.
> >
> > We can make sure this won't happen, though. We can cap the memblock
> > allocations with memblock_set_current_limit(aligned_end) or
> > memblock_reserve(aligned_start, aligned_end) until the mappings are
> > restored.
>
> I think there is no need to worry about vmalloc mem.

That's not what I'm worried about. It's about memblock allocations that
are accessed through the linear map.

-- 
Catalin
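
To see the window the thread is arguing about, here is a small userspace
sketch of the alignment arithmetic the quoted patch performs. Everything
in it is illustrative: the macros are stand-ins for the kernel's own
(assuming arm64 with 4K pages, where PUD_SIZE is 1GiB), and the crash
kernel placement is made up.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel macros used in the patch; values assume
 * arm64 with 4K pages, where PUD_SIZE is 1GiB. */
#define PAGE_SIZE		(1ULL << 12)
#define PAGE_MASK		(~(PAGE_SIZE - 1))
#define PUD_SIZE		(1ULL << 30)
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define PAGE_ALIGN(x)		ALIGN(x, PAGE_SIZE)

int main(void)
{
	/* Hypothetical 512MiB crash kernel reservation at 768MiB;
	 * resource ends are inclusive, as with crashk_res. */
	uint64_t crashk_start = 0x30000000ULL;
	uint64_t crashk_end   = 0x4fffffffULL;

	uint64_t start = crashk_start & PAGE_MASK;
	uint64_t end   = PAGE_ALIGN(crashk_end);

	uint64_t aligned_start = ALIGN_DOWN(crashk_start, PUD_SIZE);
	uint64_t aligned_end   = ALIGN(end, PUD_SIZE);

	/* The patch unmaps the whole PUD-aligned window, then immediately
	 * recreates the mappings on either side of the reservation. */
	printf("unmapped window: [%#llx, %#llx)\n",
	       (unsigned long long)aligned_start,
	       (unsigned long long)aligned_end);
	printf("remapped below:  [%#llx, %#llx)\n",
	       (unsigned long long)aligned_start,
	       (unsigned long long)start);
	printf("remapped above:  [%#llx, %#llx)\n",
	       (unsigned long long)end,
	       (unsigned long long)aligned_end);
	return 0;
}

With these numbers the transiently unmapped window spans [0, 2GiB) for a
512MiB reservation, which is exactly the gap behind Catalin's concern:
anything memblock handed out in that range is briefly inaccessible
through the linear map.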
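Mike's capping suggestion could look roughly like the sketch below. None
of it is from the posted series: the function name is made up,
memblock_set_current_limit() and memblock_get_current_limit() are
existing memblock interfaces, and the choice to cap at aligned_start
(so nothing can be allocated from inside the window) rather than
aligned_end is an assumption of the sketch.

#include <linux/init.h>
#include <linux/memblock.h>

/*
 * Sketch only (not from the posted patch): keep early memblock
 * allocations, in particular the page tables handed out by
 * early_pgtable_alloc(), away from the PUD-aligned window while it is
 * transiently unmapped. Interrupts are off and SMP is not up yet, so
 * only allocations made by the remapping itself can land there.
 */
static void __init remap_crashkernel_guarded(phys_addr_t aligned_start,
					     phys_addr_t aligned_end)
{
	phys_addr_t old_limit = memblock_get_current_limit();

	/*
	 * Cap allocations below the window. This assumes usable memory
	 * exists below aligned_start.
	 */
	memblock_set_current_limit(aligned_start);

	/* ... unmap_hotplug_range() and the two __create_pgd_mapping()
	 * calls from the patch above go here ... */

	memblock_set_current_limit(old_limit);
}

The alternative Mike mentions, memblock_reserve() over the window
(which takes a base and a size), would also keep allocations out, but
the temporary reservation would have to be released carefully so that
the crash kernel reservation itself, and anything else already reserved
inside the window, survives.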