Date: Thu, 1 Feb 2024 13:46:53 -0600
From: Rob Herring
To: Oreoluwa Babatunde
Cc: catalin.marinas@arm.com, will@kernel.org, frowand.list@gmail.com,
 vgupta@kernel.org, arnd@arndb.de, olof@lixom.net, soc@kernel.org,
 guoren@kernel.org, monstr@monstr.eu, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, dinguyen@kernel.org, chenhuacai@kernel.org,
 tsbogend@alpha.franken.de, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com, mpe@ellerman.id.au,
 ysato@users.sourceforge.jp, dalias@libc.org, glaubitz@physik.fu-berlin.de,
 richard@nod.at, anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
 chris@zankel.net, jcmvbkbc@gmail.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
 linux-arm-msm@vger.kernel.org, kernel@quicinc.com
Subject: Re: [PATCH 00/46] Dynamic allocation of reserved_mem array.
Message-ID: <20240201194653.GA1328565-robh@kernel.org>
References: <20240126235425.12233-1-quic_obabatun@quicinc.com>
 <20240131000710.GA2581425-robh@kernel.org>
 <51dc64bb-3101-4b4a-a54f-c0df6c0b264c@quicinc.com>
In-Reply-To: <51dc64bb-3101-4b4a-a54f-c0df6c0b264c@quicinc.com>

On Thu, Feb 01, 2024 at 09:08:06AM -0800, Oreoluwa Babatunde wrote:
> 
> On 1/30/2024 4:07 PM, Rob Herring wrote:
> > On Fri, Jan 26, 2024 at 03:53:39PM -0800, Oreoluwa Babatunde wrote:
> >> The reserved_mem array is used to store data for the different
> >> reserved memory regions defined in the DT of a device. The array
> >> stores information such as region name, node, start address, and size
> >> of the reserved memory regions.
> >>
> >> The array is currently statically allocated with a size of
> >> MAX_RESERVED_REGIONS(64). This means that any system that specifies a
> >> number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
> >> will not have enough space to store the information for all the regions.
> >>
> >> Therefore, this series extends the use of the static array for
> >> reserved_mem, and introduces a dynamically allocated array using
> >> memblock_alloc() based on the number of reserved memory regions
> >> specified in the DT.
> >>
> >> Some architectures such as arm64 require the page tables to be set up
> >> before memblock-allocated memory is writable. Therefore, the dynamic
> >> allocation of the reserved_mem array will need to be done after the
> >> page tables have been set up on these architectures. In most cases that
> >> will be after paging_init().
> >>
> >> Reserved memory regions can be divided into 2 groups:
> >> i) Statically-placed reserved memory regions,
> >>    i.e. regions defined in the DT using the "reg" property.
> >> ii) Dynamically-placed reserved memory regions,
> >>    i.e. regions specified in the DT using the "alloc-ranges"
> >>    and "size" properties.
> >>
> >> It is possible to call memblock_reserve() and memblock_mark_nomap() on
> >> the statically-placed reserved memory regions and defer saving them
> >> to the reserved_mem array until memory is allocated for it using
> >> memblock, which will be after the page tables have been set up.
> >> For the dynamically-placed reserved memory regions, it is not possible
> >> to wait to store their information because the starting address is
> >> allocated only at run time, and hence they need to be stored somewhere
> >> as soon as they are allocated.
> >> Waiting until after the page tables have been set up to allocate memory
> >> for the dynamically-placed regions is also not an option, because the
> >> allocations would come from memory that has already been added to the
> >> page tables, which is not good for memory that is supposed to be
> >> reserved and/or marked as nomap.
> >>
> >> Therefore, this series splits up the processing of the reserved memory
> >> regions into two stages, of which the first stage is carried out by
> >> early_init_fdt_scan_reserved_mem() and the second is carried out by
> >> fdt_init_reserved_mem().
> >>
> >> early_init_fdt_scan_reserved_mem(), which is called before the page
> >> tables are set up, is used to:
> >> 1. Call memblock_reserve() and memblock_mark_nomap() on all the
> >>    statically-placed reserved memory regions as needed.
> >> 2. Allocate memory from memblock for the dynamically-placed reserved
> >>    memory regions and store them in the static array for reserved_mem.
> >>    memblock_reserve() and memblock_mark_nomap() are also called as
> >>    needed on all the memory allocated for the dynamically-placed
> >>    regions.
> >> 3. Count the total number of reserved memory regions found in the DT.
> >>
> >> fdt_init_reserved_mem(), which should be called after the page tables
> >> have been set up, is used to carry out the following:
> >> 1. Allocate memory for the reserved_mem array based on the number of
> >>    reserved memory regions counted as mentioned above.
> >> 2. Copy all the information for the dynamically-placed reserved memory
> >>    regions from the static array into the newly allocated memory for
> >>    the reserved_mem array.
> >> 3. Add the information for the statically-placed reserved memory into
> >>    the reserved_mem array.
> >> 4. Run the region-specific init functions for each of the reserved
> >>    memory regions saved in the reserved_mem array.
> >
> > I don't see the need for fdt_init_reserved_mem() to be explicitly called
> > by arch code. I said this already, but that can be done at the same time
> > as unflattening the DT. The same conditions are needed for both: we need
> > to be able to allocate memory from memblock.
> >
> > To put it another way, if fdt_init_reserved_mem() can be called "early",
> > then unflattening could be moved earlier as well. Though I don't think
> > we should optimize that. I'd rather see all arches call the DT functions
> > at the same stages.
>
> Hi Rob,
>
> The reason we moved fdt_init_reserved_mem() back into the arch-specific
> code was because we realized that there was no apparently obvious way to
> call early_init_fdt_scan_reserved_mem() and fdt_init_reserved_mem() in the
> correct order that will work for all archs if we placed
> fdt_init_reserved_mem() inside the unflatten_devicetree() function.
>
> early_init_fdt_scan_reserved_mem() needs to be
> called first before fdt_init_reserved_mem(). But on some archs,
> unflatten_devicetree() is called before early_init_fdt_scan_reserved_mem(),
> which means that if we have fdt_init_reserved_mem() inside the
> unflatten_devicetree() function, it will be called before
> early_init_fdt_scan_reserved_mem().
>
> This is connected to your other comments on Patch 7 & Patch 14.
> I agree, unflatten_devicetree() should NOT be getting called before we
> reserve memory for the reserved memory regions, because that could cause
> memory to be allocated from regions that should be reserved.
>
> Hence, resolving this issue should allow us to call fdt_init_reserved_mem()
> from the unflatten_devicetree() function without it changing the order that
> we are trying to have.

There's one issue I've found, which is that unflatten_device_tree() isn't
called for the ACPI case on arm64. Turns out we need /reserved-memory
handled in that case too. However, I think we're going to change to
calling unflatten_device_tree() unconditionally for another reason[1].

[1] https://lore.kernel.org/all/efe6a7886c3491cc9c225a903efa2b1e.sboyd@kernel.org/

>
> I will work on implementing this and send another revision.

I think we should go with a simpler route that just copies an
initial array in initdata to a properly sized, allocated array, like the
patch below. Of course it will need some arch fixes and a follow-on
patch to increase the initial array size.

8<--------------------------------------------------------------------
From: Rob Herring
Date: Wed, 31 Jan 2024 16:26:23 -0600
Subject: [PATCH] of: reserved-mem: Re-allocate reserved_mem array to
 actual size

In preparation to increase the static reserved_mem array size yet
again, copy the initial array to an allocated array sized based on the
actual size needed. Now increasing the size of the static reserved_mem
array only eats up the initdata space.
For platforms with a reasonable number of reserved regions, we have a
net gain in free memory.

In order to do memblock allocations, fdt_init_reserved_mem() is moved
a bit later, into unflatten_device_tree(). On some arches this is
effectively a nop.

Signed-off-by: Rob Herring
---
RFC as this is compile tested only. This is an alternative to this
series[1].

[1] https://lore.kernel.org/all/20240126235425.12233-1-quic_obabatun@quicinc.com/
---
 drivers/of/fdt.c             |  4 ++--
 drivers/of/of_reserved_mem.c | 18 +++++++++++++-----
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index bf502ba8da95..14360f5191ae 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -645,8 +645,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
 			break;
 		memblock_reserve(base, size);
 	}
-
-	fdt_init_reserved_mem();
 }
 
 /**
@@ -1328,6 +1326,8 @@ bool __init early_init_dt_scan(void *params)
  */
 void __init unflatten_device_tree(void)
 {
+	fdt_init_reserved_mem();
+
 	__unflatten_device_tree(initial_boot_params, NULL, &of_root,
 				early_init_dt_alloc_memory_arch, false);
 
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 7ec94cfcbddb..ae323d6b25ad 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -27,7 +27,8 @@
 #include "of_private.h"
 
 #define MAX_RESERVED_REGIONS	64
-static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
+static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS] __initdata;
+static struct reserved_mem *reserved_mem_p;
 static int reserved_mem_count;
 
 static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
@@ -354,6 +355,13 @@ void __init fdt_init_reserved_mem(void)
 			}
 		}
 	}
+
+	reserved_mem_p = memblock_alloc(sizeof(struct reserved_mem) * reserved_mem_count,
+					sizeof(struct reserved_mem));
+	if (WARN(!reserved_mem_p, "of: reserved-memory allocation failed, continuing with __initdata array!\n"))
+		reserved_mem_p = reserved_mem;
+	else
+		memcpy(reserved_mem_p, reserved_mem, sizeof(struct reserved_mem) * reserved_mem_count);
 }
 
 static inline struct reserved_mem *__find_rmem(struct device_node *node)
@@ -364,8 +372,8 @@ static inline struct reserved_mem *__find_rmem(struct device_node *node)
 		return NULL;
 
 	for (i = 0; i < reserved_mem_count; i++)
-		if (reserved_mem[i].phandle == node->phandle)
-			return &reserved_mem[i];
+		if (reserved_mem_p[i].phandle == node->phandle)
+			return &reserved_mem_p[i];
 	return NULL;
 }
 
@@ -507,8 +515,8 @@ struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
 
 	name = kbasename(np->full_name);
 	for (i = 0; i < reserved_mem_count; i++)
-		if (!strcmp(reserved_mem[i].name, name))
-			return &reserved_mem[i];
+		if (!strcmp(reserved_mem_p[i].name, name))
+			return &reserved_mem_p[i];
 
 	return NULL;
 }
-- 
2.43.0
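For readers following along, the two kinds of regions the cover letter
distinguishes look roughly like this in a devicetree source (node names,
addresses, and sizes here are invented purely for illustration):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Statically placed: fixed base address given via "reg" */
	framebuffer@9f000000 {
		reg = <0x0 0x9f000000 0x0 0x01000000>;
		no-map;
	};

	/* Dynamically placed: only a size and an allowed window;
	 * the kernel picks the base address at boot, which is why
	 * this region must be recorded as soon as it is allocated. */
	linux,cma {
		compatible = "shared-dma-pool";
		size = <0x0 0x04000000>;
		alloc-ranges = <0x0 0x80000000 0x0 0x20000000>;
		reusable;
	};
};
```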