Subject: Re: [PATCH 2/2] arm64: mm: fix kdump broken with ZONE_DMA reintroduced
From: chenzhou
To: Catalin Marinas, Nicolas Saenz Julienne
Cc: song.bao.hua@hisilicon.com, xiexiuqi@huawei.com, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, huawei.libin@huawei.com, akpm@linux-foundation.org, will@kernel.org, ardb@kernel.org
Date: Fri, 8 Jan 2021 09:09:46 +0800
In-Reply-To: <20210107142541.GA26159@gaia>
References: <20201226033557.116251-1-chenzhou10@huawei.com> <20201226033557.116251-3-chenzhou10@huawei.com> <653d43ed326e6a3974660c0ca2ad8a847a4ff986.camel@suse.de> <20210107142541.GA26159@gaia>
On 2021/1/7 22:25, Catalin Marinas wrote:
> On Sat, Dec 26, 2020 at 11:34:58AM +0100, Nicolas Saenz Julienne wrote:
>> On Sat, 2020-12-26 at 11:35 +0800, Chen Zhou wrote:
>>> If the memory reserved for the crash dump kernel falls in ZONE_DMA32,
>>> devices in the crash dump kernel that need ZONE_DMA will fail to
>>> allocate memory.
>>>
>>> Fix this by reserving low memory in ZONE_DMA if CONFIG_ZONE_DMA is
>>> enabled; otherwise, reserve it in ZONE_DMA32.
>>>
>>> Fixes: bff3b04460a8 ("arm64: mm: reserve CMA and crashkernel in ZONE_DMA32")
>>
>> I'm not so sure this counts as a fix; if someone backports it, it'll
>> probably break things, since it depends on the series that dynamically
>> sizes the DMA zones.
>>
>>> Signed-off-by: Chen Zhou
>>> ---
>>
>> Why not do the same with CMA? You'll probably have to move the
>> dma_contiguous_reserve() call into bootmem_init() so as to make sure that
>> arm64_dma_phys_limit is populated.
>
> Do we need the arm64_dma32_phys_limit at all? I can see the
> (arm64_dma_phys_limit ? : arm64_dma32_phys_limit) pattern in several
> places but I think we can just live with arm64_dma_phys_limit.

Yes, arm64_dma_phys_limit is enough.

> Also, I don't think we need any early ARCH_LOW_ADDRESS_LIMIT. It's only
> used by memblock_alloc_low(), and that's called from swiotlb_init()
> after arm64_dma_phys_limit was initialised.
>
> What about something like below (on top of your ARCH_LOW_ADDRESS_LIMIT
> fix, but I can revert that)? I haven't tested it in all configurations
> yet.
>
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 69ad25fbeae4..ca2cd75d3286 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -94,8 +94,7 @@
>  #endif /* CONFIG_ARM64_FORCE_52BIT */
>
>  extern phys_addr_t arm64_dma_phys_limit;
> -extern phys_addr_t arm64_dma32_phys_limit;
> -#define ARCH_LOW_ADDRESS_LIMIT	((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
> +#define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)
>
>  struct debug_info {
>  #ifdef CONFIG_HAVE_HW_BREAKPOINT
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 7deddf56f7c3..596a94bf5ed6 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -59,7 +59,6 @@ EXPORT_SYMBOL(memstart_addr);
>   * bit addressable memory area.
>   */
>  phys_addr_t arm64_dma_phys_limit __ro_after_init;
> -phys_addr_t arm64_dma32_phys_limit __ro_after_init;
>
>  #ifdef CONFIG_KEXEC_CORE
>  /*
> @@ -84,7 +83,7 @@ static void __init reserve_crashkernel(void)
>
>  	if (crash_base == 0) {
>  		/* Current arm64 boot protocol requires 2MB alignment */
> -		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
> +		crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
>  						    crash_size, SZ_2M);
>  		if (crash_base == 0) {
>  			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
> @@ -196,6 +195,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>  	unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
>  	unsigned int __maybe_unused acpi_zone_dma_bits;
>  	unsigned int __maybe_unused dt_zone_dma_bits;
> +	phys_addr_t dma32_phys_limit = max_zone_phys(32);
>
>  #ifdef CONFIG_ZONE_DMA
>  	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
> @@ -205,8 +205,12 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>  	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
>  #endif
>  #ifdef CONFIG_ZONE_DMA32
> -	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
> +	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
> +	if (!arm64_dma_phys_limit)
> +		arm64_dma_phys_limit = dma32_phys_limit;
>  #endif
> +	if (!arm64_dma_phys_limit)
> +		arm64_dma_phys_limit = PHYS_MASK + 1;
>  	max_zone_pfns[ZONE_NORMAL] = max;
>
>  	free_area_init(max_zone_pfns);
> @@ -394,16 +398,9 @@ void __init arm64_memblock_init(void)
>
>  	early_init_fdt_scan_reserved_mem();
>
> -	if (IS_ENABLED(CONFIG_ZONE_DMA32))
> -		arm64_dma32_phys_limit = max_zone_phys(32);
> -	else
> -		arm64_dma32_phys_limit = PHYS_MASK + 1;
> -
>  	reserve_elfcorehdr();
>
>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> -
> -	dma_contiguous_reserve(arm64_dma32_phys_limit);
>  }
>
>  void __init bootmem_init(void)
> @@ -438,6 +435,11 @@ void __init bootmem_init(void)
>  	sparse_init();
>  	zone_sizes_init(min, max);
>
> +	/*
> +	 * Reserve the CMA area after arm64_dma_phys_limit was initialised.
> +	 */
> +	dma_contiguous_reserve(arm64_dma_phys_limit);
> +
>  	/*
>  	 * request_standard_resources() depends on crashkernel's memory being
>  	 * reserved, so do it here.
> @@ -455,7 +457,7 @@
>  void __init mem_init(void)
>  {
>  	if (swiotlb_force == SWIOTLB_FORCE ||
> -	    max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
> +	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
>  		swiotlb_init(1);
>  	else
>  		swiotlb_force = SWIOTLB_NO_FORCE;
> .

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel