From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stephen Warren
Date: Tue, 23 Feb 2016 13:33:37 -0700
Subject: [U-Boot] [PATCH 2/9] arm64: Make full va map code more dynamic
In-Reply-To:
References: <1456106232-233210-1-git-send-email-agraf@suse.de>
 <1456106232-233210-3-git-send-email-agraf@suse.de>
 <56CC9514.4050404@wwwdotorg.org>
 <56CC9982.9040607@wwwdotorg.org>
Message-ID: <56CCC221.70107@wwwdotorg.org>
List-Id:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: u-boot@lists.denx.de

On 02/23/2016 01:00 PM, Simon Glass wrote:
> Hi Stephen,
>
> On 23 February 2016 at 10:40, Stephen Warren wrote:
>> On 02/23/2016 10:30 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 23 February 2016 at 10:21, Stephen Warren wrote:
>>>>
>>>> On 02/23/2016 06:17 AM, Simon Glass wrote:
>>>>>
>>>>> Hi Alex,
>>>>>
>>>>> On 21 February 2016 at 18:57, Alexander Graf wrote:
>>>>>>
>>>>>> The idea to generate our page tables from an array of memory ranges
>>>>>> is very sound. However, instead of hard-coding the code to create up
>>>>>> to 2 levels of 64k granule page tables, we really should just create
>>>>>> normal 4k page tables that allow us to set caching attributes at the
>>>>>> 2M or 4k level later on.
>>>>>>
>>>>>> So this patch moves the full_va mapping code to 4k page size and
>>>>>> makes it fully flexible to dynamically create as many levels as
>>>>>> necessary for a map (including dynamic 1G/2M pages). It also adds
>>>>>> support to dynamically split a large map into smaller ones when
>>>>>> some code wants to set dcache attributes.
>>>>>>
>>>>>> With all this in place, there is very little reason to create your
>>>>>> own page tables in board-specific files.
>>>>
>>>>>> static struct mm_region mem_map[] = CONFIG_SYS_MEM_MAP;
>>>>>
>>>>> I am not keen on the idea of using a big #define table on these
>>>>> boards. Is there not a device-tree binding for this that we can use?
>>>>> It is just a data table, and we are moving to Kconfig and eventually
>>>>> want to drop the config files.
>>>>
>>>> I would strongly object to making the MMU setup depend on device tree
>>>> parsing. This is low-level system code that should be handled purely
>>>> by simple standalone C code.
>>>
>>> Because...?
>>
>> There is literally zero benefit from putting the exact same content
>> into DT, and hence having to run significantly more code to parse DT
>> and get back exactly the same hard-coded table.
>
> We do this so that board-specific variations can be described in one
> place. In the board-specific case, there are benefits.

I'd like to see an explicit enumeration of the benefits; I'm not aware
of any (either benefits, or such an enumeration).

Board-specific data can just as easily (actually, more easily, given the
lack of any need for parsing code) be stored in C data structures as in
DT. Or put another way, the simple fact that some data is board-specific
does not in-and-of-itself mean there's a benefit to putting it into DT.
To move something into DT, we should be able to enumerate some other
benefit, such as:

- Speeds up boot time.
- Allows code to be simpler.
- Simplifies editing the data.

(Note that I don't believe any of those example benefits actually hold
here; in fact, I'd expect the opposite in each case.)

>> DT is not a goal in-and-of-itself. In some cases there are benefits
>> to placing configuration data outside a binary, and in those cases DT
>> is an acceptable mechanism to do that. However, any benefit from doing
>> so derives from arguments for separating the data out of the code, not
>> because "use DT" is itself a benefit.
>
> That's fine as far as it goes.
>
> The config file is not an acceptable means of providing per-board or
> per-arch configuration. If it is arch-specific and/or SoC-specific,
> but NOT board-specific, then we can have it in a C table in a source
> file (not the config header) that is built into the binary. If it is
> board-specific, it must use the device tree.
>
> What category are we talking about here? Unfortunately it's not
> entirely clear from the patches, and I lack the knowledge/background
> to figure it out.

I expect this data is SoC-specific. At least for Tegra in the current
codebase, that's certainly true. I believe it's true for the other SoCs
in the current codebase too, and I don't expect this to change going
forward, at the very least for Tegra.
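To illustrate the kind of SoC-level C table I have in mind, here is a
rough sketch. The field names, attribute values, and addresses below are
hypothetical placeholders for illustration, not the exact struct or map
from this patch series:

```c
/* Sketch only: mm_region fields, attribute encodings, and the address
 * ranges below are made-up placeholders, not U-Boot's actual definitions. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One contiguous region of the SoC address map and its cache attributes. */
struct mm_region {
	uint64_t base;   /* start address of the region */
	uint64_t size;   /* region size in bytes */
	uint64_t attrs;  /* memory attributes (MAIR index etc.), SoC-specific */
};

#define ATTRS_DEVICE 0x1ULL  /* placeholder: device (uncached) memory */
#define ATTRS_NORMAL 0x2ULL  /* placeholder: normal cacheable memory */

/* Hypothetical SoC map: MMIO in the low 2 GiB, DRAM in the next 2 GiB. */
static const struct mm_region mem_map[] = {
	{ .base = 0x00000000ULL, .size = 0x80000000ULL, .attrs = ATTRS_DEVICE },
	{ .base = 0x80000000ULL, .size = 0x80000000ULL, .attrs = ATTRS_NORMAL },
};

/* Look up the attributes covering addr; returns 0 if addr is unmapped.
 * Plain table walk, no parsing step needed at boot. */
static uint64_t map_attrs(uint64_t addr)
{
	for (size_t i = 0; i < sizeof(mem_map) / sizeof(mem_map[0]); i++) {
		if (addr >= mem_map[i].base &&
		    addr - mem_map[i].base < mem_map[i].size)
			return mem_map[i].attrs;
	}
	return 0;
}
```

The point of the sketch: the page-table generator can consume such a
table directly, with no DT parsing code in the boot path, and a new SoC
only has to provide its own array in its SoC source file.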