Date: Thu, 7 Jan 2021 14:25:42 +0000
From: Catalin Marinas
To: Nicolas Saenz Julienne
Subject: Re: [PATCH 2/2] arm64: mm: fix kdump broken with ZONE_DMA reintroduced
Message-ID: <20210107142541.GA26159@gaia>
References: <20201226033557.116251-1-chenzhou10@huawei.com>
 <20201226033557.116251-3-chenzhou10@huawei.com>
 <653d43ed326e6a3974660c0ca2ad8a847a4ff986.camel@suse.de>
In-Reply-To: <653d43ed326e6a3974660c0ca2ad8a847a4ff986.camel@suse.de>
Cc: song.bao.hua@hisilicon.com, xiexiuqi@huawei.com, Chen Zhou,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 huawei.libin@huawei.com, akpm@linux-foundation.org, will@kernel.org,
 ardb@kernel.org, rppt@kernel.org

On Sat, Dec 26, 2020 at 11:34:58AM +0100, Nicolas Saenz Julienne wrote:
> On Sat, 2020-12-26 at 11:35 +0800, Chen Zhou wrote:
> > If the memory reserved for the crash dump kernel falls in ZONE_DMA32,
> > allocations in the crash dump kernel for devices that need ZONE_DMA
> > will fail.
> >
> > Fix this by reserving the low memory in ZONE_DMA if CONFIG_ZONE_DMA is
> > enabled; otherwise, reserve it in ZONE_DMA32.
> >
> > Fixes: bff3b04460a8 ("arm64: mm: reserve CMA and crashkernel in ZONE_DMA32")
>
> I'm not so sure this counts as a fix; if someone backports it, it will
> probably break things, as it depends on the series that dynamically
> sizes the DMA zones.
>
> > Signed-off-by: Chen Zhou
> > ---
>
> Why not do the same with CMA? You'll probably have to move the
> dma_contiguous_reserve() call into bootmem_init() so as to make sure
> that arm64_dma_phys_limit is populated.

Do we need arm64_dma32_phys_limit at all? I can see the
(arm64_dma_phys_limit ? : arm64_dma32_phys_limit) pattern in several
places, but I think we can just live with arm64_dma_phys_limit.

Also, I don't think we need an early ARCH_LOW_ADDRESS_LIMIT. It's only
used by memblock_alloc_low(), and that's called from swiotlb_init()
after arm64_dma_phys_limit has been initialised.

What about something like below (on top of your ARCH_LOW_ADDRESS_LIMIT
fix, but I can revert that)? I haven't tested it in all configurations
yet.

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 69ad25fbeae4..ca2cd75d3286 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -94,8 +94,7 @@
 #endif /* CONFIG_ARM64_FORCE_52BIT */
 
 extern phys_addr_t arm64_dma_phys_limit;
-extern phys_addr_t arm64_dma32_phys_limit;
-#define ARCH_LOW_ADDRESS_LIMIT ((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
+#define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1)
 
 struct debug_info {
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7deddf56f7c3..596a94bf5ed6 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -59,7 +59,6 @@ EXPORT_SYMBOL(memstart_addr);
  * bit addressable memory area.
  */
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
-phys_addr_t arm64_dma32_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
 /*
@@ -84,7 +83,7 @@ static void __init reserve_crashkernel(void)
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
+		crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
 						    crash_size, SZ_2M);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -196,6 +195,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 	unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
 	unsigned int __maybe_unused acpi_zone_dma_bits;
 	unsigned int __maybe_unused dt_zone_dma_bits;
+	phys_addr_t dma32_phys_limit = max_zone_phys(32);
 
 #ifdef CONFIG_ZONE_DMA
 	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
@@ -205,8 +205,12 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
+	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
+	if (!arm64_dma_phys_limit)
+		arm64_dma_phys_limit = dma32_phys_limit;
 #endif
+	if (!arm64_dma_phys_limit)
+		arm64_dma_phys_limit = PHYS_MASK + 1;
 	max_zone_pfns[ZONE_NORMAL] = max;
 
 	free_area_init(max_zone_pfns);
@@ -394,16 +398,9 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	if (IS_ENABLED(CONFIG_ZONE_DMA32))
-		arm64_dma32_phys_limit = max_zone_phys(32);
-	else
-		arm64_dma32_phys_limit = PHYS_MASK + 1;
-
 	reserve_elfcorehdr();
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
-
-	dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
 void __init bootmem_init(void)
@@ -438,6 +435,11 @@ void __init bootmem_init(void)
 	sparse_init();
 	zone_sizes_init(min, max);
 
+	/*
+	 * Reserve the CMA area after arm64_dma_phys_limit was initialised.
+	 */
+	dma_contiguous_reserve(arm64_dma_phys_limit);
+
 	/*
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
@@ -455,7 +457,7 @@ void __init bootmem_init(void)
 void __init mem_init(void)
 {
 	if (swiotlb_force == SWIOTLB_FORCE ||
-	    max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
+	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
 		swiotlb_init(1);
 	else
 		swiotlb_force = SWIOTLB_NO_FORCE;
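
For readers skimming the thread: (a ? : b) above is the GNU C conditional
with an omitted middle operand, equivalent to a ? a : b. A minimal
user-space sketch of the fallback order the zone_sizes_init() hunk
implements follows: prefer the ZONE_DMA limit, fall back to the
ZONE_DMA32 limit, then to the full physical range, so the single
arm64_dma_phys_limit variable is always non-zero and the ? : short-hand
can go away. The helper name pick_dma_limit() and the 48-bit PHYS_MASK
value are assumptions for illustration only; this is not kernel code.

#include <stdint.h>
#include <stdio.h>

/* Assumed 48-bit physical address space, for the example only. */
#define PHYS_MASK ((1ULL << 48) - 1)

/*
 * Mirror of the fallback chain added to zone_sizes_init(): a zero
 * argument stands for "this zone is not configured".
 */
static uint64_t pick_dma_limit(uint64_t zone_dma_limit, uint64_t zone_dma32_limit)
{
	uint64_t limit = 0;

	if (zone_dma_limit)			/* CONFIG_ZONE_DMA case */
		limit = zone_dma_limit;
	if (!limit && zone_dma32_limit)		/* CONFIG_ZONE_DMA32 fallback */
		limit = zone_dma32_limit;
	if (!limit)				/* neither zone configured */
		limit = PHYS_MASK + 1;

	return limit;
}

int main(void)
{
	/* e.g. Raspberry Pi 4: 30-bit ZONE_DMA plus 32-bit ZONE_DMA32 */
	printf("%#llx\n", (unsigned long long)pick_dma_limit(1ULL << 30, 1ULL << 32));
	/* no ZONE_DMA: the limit collapses to the ZONE_DMA32 limit */
	printf("%#llx\n", (unsigned long long)pick_dma_limit(0, 1ULL << 32));
	/* neither DMA zone: the whole physical address range */
	printf("%#llx\n", (unsigned long long)pick_dma_limit(0, 0));
	return 0;
}

With the limit guaranteed non-zero, ARCH_LOW_ADDRESS_LIMIT, the swiotlb
check in mem_init() and the crashkernel/CMA reservations can all use
arm64_dma_phys_limit directly, as in the patch above.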