Date: Tue, 30 Jan 2024 18:07:10 -0600
From: Rob Herring
To: Oreoluwa Babatunde
Cc: catalin.marinas@arm.com, will@kernel.org, frowand.list@gmail.com,
 vgupta@kernel.org, arnd@arndb.de, olof@lixom.net, soc@kernel.org,
 guoren@kernel.org, monstr@monstr.eu, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, dinguyen@kernel.org, chenhuacai@kernel.org,
 tsbogend@alpha.franken.de, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
 mpe@ellerman.id.au, ysato@users.sourceforge.jp, dalias@libc.org,
 glaubitz@physik.fu-berlin.de, richard@nod.at,
 anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
 chris@zankel.net, jcmvbkbc@gmail.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 kernel@quicinc.com
Subject: Re: [PATCH 00/46] Dynamic allocation of reserved_mem array.
Message-ID: <20240131000710.GA2581425-robh@kernel.org>
In-Reply-To: <20240126235425.12233-1-quic_obabatun@quicinc.com>

On Fri, Jan 26, 2024 at 03:53:39PM -0800, Oreoluwa Babatunde wrote:
> The reserved_mem array is used to store data for the different
> reserved memory regions defined in the DT of a device. The array
> stores information such as region name, node, start address, and
> size of the reserved memory regions.
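(For reference: each entry is a struct reserved_mem. Below is a
trimmed sketch keeping only the fields mentioned above; the real
definition in include/linux/of_reserved_mem.h also carries an ops
pointer and a priv field.)

struct reserved_mem {
	const char	*name;		/* region name from the DT node */
	unsigned long	fdt_node;	/* node offset within the FDT   */
	phys_addr_t	base;		/* start address of the region  */
	phys_addr_t	size;		/* size of the region           */
};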
> The array is currently statically allocated with a size of
> MAX_RESERVED_REGIONS (64). This means that any system that specifies
> a number of reserved memory regions greater than
> MAX_RESERVED_REGIONS will not have enough space to store the
> information for all of them.
>
> Therefore, this series extends the use of the static array for
> reserved_mem and introduces a dynamically allocated array, sized via
> memblock_alloc() based on the number of reserved memory regions
> specified in the DT.
>
> Some architectures, such as arm64, require the page tables to be set
> up before memblock-allocated memory is writable. On these
> architectures, the dynamic allocation of the reserved_mem array
> therefore needs to happen after the page tables have been set up; in
> most cases that will be after paging_init().
>
> Reserved memory regions can be divided into two groups:
> i) Statically-placed reserved memory regions,
>    i.e. regions defined in the DT using the "reg" property.
> ii) Dynamically-placed reserved memory regions,
>    i.e. regions specified in the DT using the "alloc-ranges" and
>    "size" properties.
>
> For the statically-placed regions, it is possible to call
> memblock_reserve() and memblock_mark_nomap() early and defer saving
> them to the reserved_mem array until memory has been allocated for
> the array with memblock, which happens after the page tables have
> been set up. For the dynamically-placed regions, storing their
> information cannot wait: their start addresses are only known at run
> time, once allocated, so they need to be stored somewhere as soon as
> they are allocated.
> Waiting until after the page tables have been set up to allocate
> memory for the dynamically-placed regions is also not an option,
> because those allocations would come from memory that has already
> been added to the page tables, which is not good for memory that is
> supposed to be reserved and/or marked as nomap.
>
> Therefore, this series splits the processing of the reserved memory
> regions into two stages: the first is carried out by
> early_init_fdt_scan_reserved_mem() and the second by
> fdt_init_reserved_mem().
>
> early_init_fdt_scan_reserved_mem(), which is called before the page
> tables are set up, is used to:
> 1. Call memblock_reserve() and memblock_mark_nomap() as needed on
>    all the statically-placed reserved memory regions.
> 2. Allocate memory from memblock for the dynamically-placed reserved
>    memory regions and store their information in the static
>    reserved_mem array. memblock_reserve() and memblock_mark_nomap()
>    are also called as needed on the memory allocated for these
>    regions.
> 3. Count the total number of reserved memory regions found in the
>    DT.
>
> fdt_init_reserved_mem(), which should be called after the page
> tables have been set up, is used to:
> 1. Allocate memory for the reserved_mem array based on the number of
>    reserved memory regions counted as mentioned above.
> 2. Copy all the information for the dynamically-placed reserved
>    memory regions from the static array into the newly allocated
>    reserved_mem array.
> 3. Add the information for the statically-placed reserved memory
>    regions into the reserved_mem array.
> 4. Run the region-specific init functions for each of the reserved
>    memory regions saved in the reserved_mem array.
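In code terms, the two stages look roughly like this. This is a
simplified sketch based on the description above; the variable names
and placeholder comments are illustrative, not the literal patch
contents:

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/of_reserved_mem.h>
#include <linux/string.h>

#define MAX_RESERVED_REGIONS	64

static struct reserved_mem rmem_static[MAX_RESERVED_REGIONS];
static struct reserved_mem *reserved_mem = rmem_static;
static int rmem_count;		/* dynamically-placed entries saved   */
static int total_count;		/* all regions counted in the DT scan */

/* Stage 1: called before the page tables are set up. */
void __init early_init_fdt_scan_reserved_mem(void)
{
	/*
	 * Statically-placed regions ("reg"): memblock_reserve() and
	 * memblock_mark_nomap() now; saving them into reserved_mem[]
	 * waits until stage 2.
	 *
	 * Dynamically-placed regions ("alloc-ranges"/"size"): the
	 * base address only exists once allocated, so allocate now
	 * and save the result:  rmem_static[rmem_count++] = rmem;
	 *
	 * total_count ends up holding the number of regions in the
	 * DT.
	 */
}

/* Stage 2: called once memblock-allocated memory is writable. */
void __init fdt_init_reserved_mem(void)
{
	struct reserved_mem *new;

	new = memblock_alloc(total_count * sizeof(*new), sizeof(*new));
	if (!new)
		return;	/* keep using the static array */

	/*
	 * Copy over the dynamically-placed entries saved in stage 1,
	 * then append the statically-placed regions and run the
	 * per-region init callbacks.
	 */
	memcpy(new, rmem_static, rmem_count * sizeof(*new));
	reserved_mem = new;
}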
I don't see the need for fdt_init_reserved_mem() to be explicitly
called by arch code. I said this already, but that can be done at the
same time as unflattening the DT. The same conditions are needed for
both: we need to be able to allocate memory from memblock.

To put it another way, if fdt_init_reserved_mem() can be called
"early", then unflattening could be moved earlier as well. Though I
don't think we should optimize that. I'd rather see all arches call
the DT functions at the same stages.

> Once the above steps have been completed and the init process is
> done running, the original statically allocated reserved_mem array
> of size MAX_RESERVED_REGIONS (64) is automatically freed back to the
> buddy allocator because it is no longer needed. This is done by
> marking the array as an "__initdata" object in Patch 0018.
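Concretely, that reclaim is the ordinary __initdata mechanism: the
annotation places the bootstrap array in .init.data, which the kernel
frees back to the page allocator in free_initmem() at the end of boot
(array name illustrative, as in the sketch above):

static struct reserved_mem rmem_static[MAX_RESERVED_REGIONS] __initdata;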
> Note:
>
> - Per architecture, this series is effectively only 10 patches. The
>   code for each architecture is split up into separate patches to
>   allow each architecture to be tested independently of changes from
>   other architectures. Should this series be accepted, this should
>   also allow each architecture's change to be picked up
>   independently.

Only if patches 1 and 2 are accepted in one cycle and the arch ones
in the next cycle. No need for that though, I can take the whole
thing (when it's ready).

> Patch 0001: Splits up the processing of the reserved memory regions
> between early_init_fdt_scan_reserved_mem() and
> fdt_init_reserved_mem().
>
> Patch 0002: Introduces a copy of early_init_fdt_scan_reserved_mem(),
> used to separate it from fdt_init_reserved_mem() so that the two
> functions can be called independently of each other.
>
> Patch 0003 - Patch 0016: Duplicated change for each architecture to
> call early_init_fdt_scan_reserved_mem() and fdt_init_reserved_mem()
> at the appropriate locations. Here fdt_init_reserved_mem() is called
> either before or after the page tables have been set up, depending
> on the architecture's requirements.
>
> Patch 0017: Deletes the early_init_fdt_scan_reserved_mem() function,
> since all architectures are now using the copy introduced in
> Patch 0002.
>
> Patch 0018: Dynamically allocates memory for the reserved_mem array
> based on the total number of reserved memory regions specified in
> the DT.
>
> Patch 0019 - Patch 0029: Duplicated change for each architecture to
> move the fdt_init_reserved_mem() call below the
> unflatten_device_tree() call, so that the unflattened devicetree
> APIs can be used to process the reserved memory regions.
>
> Patch 0030: Makes code changes to start using the unflattened
> devicetree APIs to access the reserved memory regions defined in
> the DT.
>
> Patch 0031: Renames fdt_* functions to dt_* to reflect that the
> flattened devicetree (FDT) APIs have been replaced with the
> unflattened devicetree APIs.
>
> Patch 0032 - Patch 0045: Duplicated change for each architecture to
> switch from fdt_init_reserved_mem() to dt_init_reserved_mem(), which
> is the same function except that the latter uses the unflattened
> devicetree APIs.
>
> Patch 0046: Deletes the fdt_init_reserved_mem() function, as all
> architectures have switched to dt_init_reserved_mem(), introduced in
> Patch 0031.
>
> - A limitation of this approach is that there is still a limit of 64
>   dynamically-placed reserved memory regions. But from my current
>   analysis, these regions are generally fewer in number than the
>   statically-placed ones.
>
> - I have looked through all architectures and placed the call to
>   memblock_alloc() for the reserved_mem array at points where I
>   believe memblock-allocated memory is available to be written to. I
>   currently only have access to an arm64 device, which is where I am
>   testing the functionality of this series. Hence, I will need help
>   from architecture maintainers to test this series on other
>   architectures to ensure that the code functions properly there.
>
> Previous patch revisions:
>
> 1. [RFC V1 Patchset]:
>    https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@quicinc.com/
>
> 2. [RFC V2 Patchset]:
>    https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@quicinc.com/
>    - Extend changes to all other relevant architectures.
>    - Add code to use the unflattened devicetree APIs to process the
>      reserved memory regions.

Dropping RFC does not make this v1. RFC is a state of the patches,
not a version.

Rob