From: Baoquan He
To: Zhen Lei
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
    "H. Peter Anvin", linux-kernel@vger.kernel.org, Dave Young,
    Vivek Goyal, Eric Biederman, kexec@lists.infradead.org,
    Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org,
    Rob Herring, Frank Rowand, devicetree@vger.kernel.org,
    Jonathan Corbet, linux-doc@vger.kernel.org, Randy Dunlap,
    Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly, Dave Kleikamp
Subject: Re: [PATCH v20 3/5] arm64: kdump: reimplement crashkernel=X
Date: Mon, 14 Feb 2022 11:52:43 +0800
References: <20220124084708.683-1-thunder.leizhen@huawei.com>
 <20220124084708.683-4-thunder.leizhen@huawei.com>
In-Reply-To: <20220124084708.683-4-thunder.leizhen@huawei.com>

On 01/24/22 at 04:47pm, Zhen Lei wrote:
......
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 6c653a2c7cff052..a5d43feac0d7d96 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -71,6 +71,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
>  #define CRASH_ADDR_LOW_MAX		arm64_dma_phys_limit
>  #define CRASH_ADDR_HIGH_MAX		MEMBLOCK_ALLOC_ACCESSIBLE
>  
> +static int __init reserve_crashkernel_low(unsigned long long low_size)
> +{
> +	unsigned long long low_base;
> +
> +	/* passed with crashkernel=0,low ? */
> +	if (!low_size)
> +		return 0;
> +
> +	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
> +	if (!low_base) {
> +		pr_err("cannot allocate crashkernel low memory (size:0x%llx).\n", low_size);
> +		return -ENOMEM;
> +	}
> +
> +	pr_info("crashkernel low memory reserved: 0x%llx - 0x%llx (%lld MB)\n",
> +		low_base, low_base + low_size, low_size >> 20);
> +
> +	crashk_low_res.start = low_base;
> +	crashk_low_res.end = low_base + low_size - 1;
> +	insert_resource(&iomem_resource, &crashk_low_res);
> +
> +	return 0;
> +}
> +
>  /*
>   * reserve_crashkernel() - reserves memory for crash kernel

Another concern of mine is the crashkernel=,low handling. In this patch, the
code related to low memory is a bit obscure. I wonder whether we should make
it explicit, with slightly redundant but very clear code flow. I say this
because, while the code is certainly clear to you and to the reviewers now,
it may be harder for later readers, or anyone else interested, to understand.
The cases that need to be handled are:

1) crashkernel=X,high
2) crashkernel=X,high crashkernel=Y,low
3) crashkernel=X,high crashkernel=0,low
4) crashkernel=X,high crashkernel='messy code',low
5) crashkernel=X	// falls back to high memory; low memory is required then

It could be that I am overthinking it. I made changes to your patch as a
tuning, not sure whether it is OK with you; a rough sketch of how I expect
the above cases to be handled follows the diff. Otherwise, this patchset
works very well for all of the above test cases, and it is ripe to be merged
for wider testing.

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index a5d43feac0d7..671862c56d7d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -94,7 +94,8 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
 	return 0;
 }
 
-
+/* Words explaining why it's 256M */
+#define DEFAULT_CRASH_KERNEL_LOW_SIZE	SZ_256M
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -105,10 +106,10 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
-	unsigned long long crash_low_size = SZ_256M;
+	unsigned long long crash_low_size;
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
 	int ret;
-	bool fixed_base;
+	bool fixed_base, high;
 	char *cmdline = boot_command_line;
 
 	/* crashkernel=X[@offset] */
@@ -126,7 +127,10 @@ static void __init reserve_crashkernel(void)
 		ret = parse_crashkernel_low(cmdline, 0, &low_size, &crash_base);
 		if (!ret)
 			crash_low_size = low_size;
+		else
+			crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
 
+		high = true;
 		crash_max = CRASH_ADDR_HIGH_MAX;
 	}
 
@@ -134,7 +138,7 @@ static void __init reserve_crashkernel(void)
 	crash_size = PAGE_ALIGN(crash_size);
 
 	/* User specifies base address explicitly. */
-	if (crash_base)
+	if (fixed_base)
 		crash_max = crash_base + crash_size;
 
 retry:
@@ -156,7 +160,10 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
-	if (crash_base >= SZ_4G && reserve_crashkernel_low(crash_low_size)) {
+	if (crash_base >= SZ_4G && !high)
+		crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+
+	if (reserve_crashkernel_low(crash_low_size)) {
 		memblock_phys_free(crash_base, crash_size);
 		return;
 	}
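Just to spell out how I expect the five cases to behave with that tuning,
here is a rough, untested sketch. It is not a literal patch: the helper name
decide_crash_low_size() is made up purely for illustration, while
parse_crashkernel_low() and DEFAULT_CRASH_KERNEL_LOW_SIZE are the existing
interface and the macro added in the diff above.

/*
 * How much low memory should accompany a high crashkernel reservation:
 *
 *   crashkernel=Y,low              -> Y bytes                 (case 2)
 *   crashkernel=0,low              -> 0, skip low memory      (case 3)
 *   no ",low" or unparsable ",low" -> the default size        (cases 1 and 4)
 */
static unsigned long long __init decide_crash_low_size(char *cmdline)
{
        unsigned long long low_size, base;

        /* a successful parse also covers the explicit 0 case */
        if (!parse_crashkernel_low(cmdline, 0, &low_size, &base))
                return low_size;

        return DEFAULT_CRASH_KERNEL_LOW_SIZE;
}

Case 5 (plain crashkernel=X) does not go through the ",low" parsing at all;
there the default low size only matters when the reservation ends up above
4G, which is what the "crash_base >= SZ_4G && !high" check in the tuning is
meant to express.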
>   *
> @@ -81,29 +105,62 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
>  static void __init reserve_crashkernel(void)
>  {
>  	unsigned long long crash_base, crash_size;
> +	unsigned long long crash_low_size = SZ_256M;
>  	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
>  	int ret;
> +	bool fixed_base;
> +	char *cmdline = boot_command_line;
>  
> -	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
> +	/* crashkernel=X[@offset] */
> +	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
>  				&crash_size, &crash_base);
> -	/* no crashkernel= or invalid value specified */
> -	if (ret || !crash_size)
> -		return;
> +	if (ret || !crash_size) {
> +		unsigned long long low_size;
>  
> +		/* crashkernel=X,high */
> +		ret = parse_crashkernel_high(cmdline, 0, &crash_size, &crash_base);
> +		if (ret || !crash_size)
> +			return;
> +
> +		/* crashkernel=X,low */
> +		ret = parse_crashkernel_low(cmdline, 0, &low_size, &crash_base);
> +		if (!ret)
> +			crash_low_size = low_size;
> +
> +		crash_max = CRASH_ADDR_HIGH_MAX;
> +	}
> +
> +	fixed_base = !!crash_base;
>  	crash_size = PAGE_ALIGN(crash_size);
>  
>  	/* User specifies base address explicitly. */
>  	if (crash_base)
>  		crash_max = crash_base + crash_size;
>  
> +retry:
>  	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
>  					       crash_base, crash_max);
>  	if (!crash_base) {
> +		/*
> +		 * Attempt to fully allocate low memory failed, fall back
> +		 * to high memory, the minimum required low memory will be
> +		 * reserved later.
> +		 */
> +		if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) {
> +			crash_max = CRASH_ADDR_HIGH_MAX;
> +			goto retry;
> +		}
> +
>  		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>  			crash_size);
>  		return;
>  	}
>  
> +	if (crash_base >= SZ_4G && reserve_crashkernel_low(crash_low_size)) {
> +		memblock_phys_free(crash_base, crash_size);
> +		return;
> +	}
> +
>  	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
>  		crash_base, crash_base + crash_size, crash_size >> 20);
>  
> @@ -112,6 +169,9 @@ static void __init reserve_crashkernel(void)
>  	 * map. Inform kmemleak so that it won't try to access it.
>  	 */
>  	kmemleak_ignore_phys(crash_base);
> +	if (crashk_low_res.end)
> +		kmemleak_ignore_phys(crashk_low_res.start);
> +
>  	crashk_res.start = crash_base;
>  	crashk_res.end = crash_base + crash_size - 1;
>  	insert_resource(&iomem_resource, &crashk_res);
> -- 
> 2.25.1
> 
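For anyone who wants to retest this later, command lines along the following
lines should exercise the five cases listed above. The sizes are arbitrary
examples picked for illustration, and "xyz" simply stands in for an
unparsable ",low" value:

	crashkernel=512M,high
	crashkernel=512M,high crashkernel=128M,low
	crashkernel=512M,high crashkernel=0,low
	crashkernel=512M,high crashkernel=xyz,low
	crashkernel=512M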