Date: Tue, 26 Apr 2022 15:26:28 +0100
From: Catalin Marinas
To: Zhen Lei
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
    "H. Peter Anvin", linux-kernel@vger.kernel.org, Dave Young, Baoquan He,
    Vivek Goyal, Eric Biederman, kexec@lists.infradead.org, Will Deacon,
    linux-arm-kernel@lists.infradead.org, Rob Herring, Frank Rowand,
    devicetree@vger.kernel.org, Jonathan Corbet, linux-doc@vger.kernel.org,
    Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly,
    Dave Kleikamp
Subject: Re: [PATCH v22 4/9] arm64: kdump: Don't force page-level mappings for memory above 4G
References: <20220414115720.1887-1-thunder.leizhen@huawei.com>
 <20220414115720.1887-5-thunder.leizhen@huawei.com>
In-Reply-To: <20220414115720.1887-5-thunder.leizhen@huawei.com>

On Thu, Apr 14, 2022 at 07:57:15PM +0800, Zhen Lei wrote:
> @@ -540,13 +540,31 @@ static void __init map_mem(pgd_t *pgdp)
>  	for_each_mem_range(i, &start, &end) {
>  		if (start >= end)
>  			break;
> +
> +#ifdef CONFIG_KEXEC_CORE
> +		if (eflags && (end >= SZ_4G)) {
> +			/*
> +			 * The memory block cross the 4G boundary.
> +			 * Forcibly use page-level mappings for memory under 4G.
> +			 */
> +			if (start < SZ_4G) {
> +				__map_memblock(pgdp, start, SZ_4G - 1,
> +					       pgprot_tagged(PAGE_KERNEL), flags | eflags);
> +				start = SZ_4G;
> +			}
> +
> +			/* Page-level mappings is not mandatory for memory above 4G */
> +			eflags = 0;
> +		}
> +#endif

That's a bit tricky if a SoC has all RAM above 4G. IIRC AMD Seattle had
this layout. See max_zone_phys() for how we deal with this, basically
extending ZONE_DMA to the whole range if RAM starts above 4GB. In that
case, the crashkernel reservation would fall in the range above 4GB.

BTW, we changed the max_zone_phys() logic with commit 791ab8b2e3db
("arm64: Ignore any DMA offsets in the max_zone_phys() calculation").

-- 
Catalin
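
As a companion to the point above, here is a small user-space sketch of
the zone-limit decision Catalin refers to: when DRAM starts above 4GB,
ZONE_DMA is extended to cover the whole range, so a crashkernel
reservation constrained to ZONE_DMA is not guaranteed to sit below 4GB
(the AMD Seattle-like case). The function below is only a model of the
behaviour as I read commit 791ab8b2e3db, not a copy of
arch/arm64/mm/init.c, and the parameter names are illustrative.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/*
 * Approximate model of arm64's max_zone_phys() decision (not kernel
 * code): pick the physical address limit of a DMA zone given the zone's
 * bit width and the DRAM range that memblock would report.
 */
static uint64_t max_zone_phys(unsigned int zone_bits,
			      uint64_t dram_start, uint64_t dram_end)
{
	uint64_t limit = (zone_bits >= 64) ? UINT64_MAX : (1ULL << zone_bits);

	if (dram_start >= (1ULL << 32))
		limit = UINT64_MAX;	/* all RAM above 4G: no zone limit */
	else if (dram_start >= limit)
		limit = 1ULL << 32;	/* RAM starts above 2^zone_bits: cap at 4G */

	return (limit < dram_end) ? limit : dram_end;
}

int main(void)
{
	/* RAM from 2G to 6G: the 32-bit zone is capped at 4G as usual. */
	printf("ZONE_DMA limit: %#" PRIx64 "\n",
	       max_zone_phys(32, 0x80000000ULL, 0x180000000ULL));

	/* AMD Seattle-like layout, all RAM above 4G: the zone covers it all. */
	printf("ZONE_DMA limit: %#" PRIx64 "\n",
	       max_zone_phys(32, 0x8000000000ULL, 0x8080000000ULL));
	return 0;
}

With the second layout the returned limit equals the end of DRAM, which
is why the quoted hunk's assumption that the crashkernel memory needing
page-level mappings always lies below 4GB does not hold on every SoC.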