From: Zhen Lei <thunder.leizhen@huawei.com>
To: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
<x86@kernel.org>, "H . Peter Anvin" <hpa@zytor.com>,
Eric Biederman <ebiederm@xmission.com>,
Rob Herring <robh+dt@kernel.org>,
Frank Rowand <frowand.list@gmail.com>,
<devicetree@vger.kernel.org>, Dave Young <dyoung@redhat.com>,
Baoquan He <bhe@redhat.com>, Vivek Goyal <vgoyal@redhat.com>,
<kexec@lists.infradead.org>, <linux-kernel@vger.kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
<linux-arm-kernel@lists.infradead.org>,
Jonathan Corbet <corbet@lwn.net>, <linux-doc@vger.kernel.org>
Cc: Zhen Lei <thunder.leizhen@huawei.com>,
Randy Dunlap <rdunlap@infradead.org>,
Feng Zhou <zhoufeng.zf@bytedance.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Chen Zhou <dingguo.cz@antgroup.com>,
"John Donnelly" <John.p.donnelly@oracle.com>,
Dave Kleikamp <dave.kleikamp@oracle.com>
Subject: [PATCH 4/5] arm64: kdump: Decide when to reserve crash memory in reserve_crashkernel()
Date: Mon, 13 Jun 2022 16:09:31 +0800
Message-ID: <20220613080932.663-5-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>
After kexec finishes loading the crash kernel's data, the crash memory must
be made inaccessible so that the current kernel cannot corrupt it. On some
platforms, however, the extent of the DMA zones is not known until the DTB
or ACPI tables have been parsed; by then the linear mapping has already been
created, and all of it is forced down to page-level mappings.
To improve system performance (i.e. reduce the TLB miss rate) when
crashkernel=X,high is used, the reservation of crash memory is split into
two phases: crash high memory is reserved before paging_init() is called,
and crash low memory after it. Only the crash high memory needs page-level
mapping.
Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones") caused reserve_crashkernel() to be
called from two places, before or after paging_init(), selected by whether
CONFIG_ZONE_DMA/DMA32 is enabled. Move that selection into
reserve_crashkernel() itself, in preparation for the optimization described
above.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
arch/arm64/mm/init.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8539598f9e58b4d..fb24efbc46f5ef4 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -90,6 +90,9 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
#endif
+#define DMA_PHYS_LIMIT_UNKNOWN 0
+#define DMA_PHYS_LIMIT_KNOWN 1
+
/* Current arm64 boot protocol requires 2MB alignment */
#define CRASH_ALIGN SZ_2M
@@ -131,18 +134,23 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
* line parameter. The memory reserved is used by dump capture kernel when
* primary kernel is crashing.
*/
-static void __init reserve_crashkernel(void)
+static void __init reserve_crashkernel(int dma_state)
{
unsigned long long crash_base, crash_size;
unsigned long long crash_low_size = 0;
unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
char *cmdline = boot_command_line;
+ int dma_enabled = IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
int ret;
bool fixed_base;
if (!IS_ENABLED(CONFIG_KEXEC_CORE))
return;
+ if ((!dma_enabled && (dma_state != DMA_PHYS_LIMIT_UNKNOWN)) ||
+ (dma_enabled && (dma_state != DMA_PHYS_LIMIT_KNOWN)))
+ return;
+
/* crashkernel=X[@offset] */
ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
&crash_size, &crash_base);
@@ -413,8 +421,7 @@ void __init arm64_memblock_init(void)
early_init_fdt_scan_reserved_mem();
- if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
- reserve_crashkernel();
+ reserve_crashkernel(DMA_PHYS_LIMIT_UNKNOWN);
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
}
@@ -462,8 +469,7 @@ void __init bootmem_init(void)
* request_standard_resources() depends on crashkernel's memory being
* reserved, so do it here.
*/
- if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
- reserve_crashkernel();
+ reserve_crashkernel(DMA_PHYS_LIMIT_KNOWN);
memblock_dump_all();
}
--
2.25.1