* [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
@ 2024-08-09 18:48 Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed Oreoluwa Babatunde
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Oreoluwa Babatunde @ 2024-08-09 18:48 UTC (permalink / raw)
To: robh, saravanak
Cc: klarasmodin, aisheng.dong, hch, m.szyprowski, robin.murphy,
devicetree, linux-kernel, iommu, will, catalin.marinas, kernel,
Oreoluwa Babatunde
The reserved_mem array is used to store data for the different
reserved memory regions defined in the DT of a device. The array
stores information such as region name, node reference, start-address,
and size of the different reserved memory regions.
The array is currently statically allocated with a size of
MAX_RESERVED_REGIONS (64). This means that any system that specifies
more than 64 reserved memory regions will not have enough space to
store the information for all of them.
This can be fixed by making the reserved_mem array a dynamically sized
array which is allocated using memblock_alloc() based on the exact
number of reserved memory regions defined in the DT.
On architectures such as arm64, memblock-allocated memory is not
writable until after the page tables have been set up.
This is an issue because the current implementation initializes the
reserved memory regions and stores their information in the array before
the page tables are set up. Hence, dynamically allocating the
reserved_mem array and attempting to write information to it at this
point will fail.
Therefore, the allocation of the reserved_mem array needs to be done
after the page tables have been set up, which means that the reserved
memory regions also need to wait until after the page tables have been
set up before they can be stored in the array.
When processing the reserved memory regions defined in the DT, these
regions are marked as reserved by calling memblock_reserve(base, size).
Where: base = base address of the reserved region.
size = the size of the reserved memory region.
If a region is defined with the "no-map" property,
memblock_mark_nomap(base, size) is also called.
The "no-map" property is used to indicate to the operating system that a
mapping of the specified region must NOT be created. This also means
that no access (including speculative accesses) is allowed on this
region of memory except when it is coming from the device driver that
this region of memory is being reserved for.[1]
Therefore, it is important to call memblock_reserve() and
memblock_mark_nomap() on all the reserved memory regions before the
system sets up the page tables so that the system does not unknowingly
include any of the no-map reserved memory regions in the memory map.
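The ordering requirement can be modeled in plain userspace C: regions are recorded (with their no-map flag) first, and only afterwards does the "page table setup" consult that record. This is an illustrative sketch with made-up names, not the memblock implementation:

```c
#include <assert.h>
#include <stdbool.h>

struct region {
	unsigned long long base, size;
	bool nomap;
};

#define MAX_REGIONS 8
static struct region reserved[MAX_REGIONS];
static int nr_reserved;

/* Models memblock_reserve() plus an optional memblock_mark_nomap(). */
static void reserve(unsigned long long base, unsigned long long size,
		    bool nomap)
{
	if (nr_reserved < MAX_REGIONS)
		reserved[nr_reserved++] = (struct region){ base, size, nomap };
}

/*
 * Models page table setup: an address may only be mapped if no region
 * recorded *beforehand* marked it no-map. Regions reserved too late
 * would silently end up in the memory map.
 */
static bool may_map(unsigned long long addr)
{
	for (int i = 0; i < nr_reserved; i++)
		if (reserved[i].nomap &&
		    addr >= reserved[i].base &&
		    addr < reserved[i].base + reserved[i].size)
			return false;
	return true;
}
```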
There are two ways to define how/where a reserved memory region is
placed in memory:
i) Statically-placed reserved memory regions
i.e. regions defined with a set start address and size using the
"reg" property in the DT.
ii) Dynamically-placed reserved memory regions,
i.e. regions defined by specifying a range of addresses where they can
be placed in memory using the "alloc-ranges" and "size" properties
in the DT.
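As a quick illustration of the two forms, a reserved-memory node might look like this (an illustrative fragment following the reserved-memory binding; the labels, addresses, and sizes are made up):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* i) statically placed: fixed base address via "reg" */
	fb_mem: framebuffer@9d000000 {
		reg = <0x0 0x9d000000 0x0 0x01000000>;
		no-map;
	};

	/* ii) dynamically placed: only a size and an allowed window */
	pool: dma-pool {
		compatible = "shared-dma-pool";
		size = <0x0 0x04000000>;
		alignment = <0x0 0x2000>;
		alloc-ranges = <0x0 0x40000000 0x0 0x40000000>;
		reusable;
	};
};
```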
The dynamically-placed reserved memory regions get assigned a start
address only at runtime, and this needs to happen before the page
tables are set up so that memblock_reserve() and memblock_mark_nomap()
can be called on the allocated region as explained above.
Since the dynamically allocated reserved_mem array only becomes
available after the page tables have been set up, the information for
the dynamically-placed reserved memory regions needs to be stored
somewhere temporarily until the reserved_mem array is available.
Therefore, this series makes use of a temporary static array to store
the information of the dynamically-placed reserved memory regions until
the reserved_mem array is allocated.
Once the reserved_mem array is available, the information is copied over
from the temporary array into the reserved_mem array, and the memory for
the temporary array is freed back to the system.
The information for the statically-placed reserved memory regions does
not need to be stored in a temporary array because their starting
address is already stored in the devicetree.
Once the reserved_mem array is allocated, the information for the
statically-placed reserved memory regions is added to the array.
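The handoff described above can be modeled in plain userspace C (a sketch with illustrative names; calloc() stands in for memblock_alloc() here, and the kernel's actual functions differ):

```c
#include <stdlib.h>
#include <string.h>

#define MAX_RESERVED_REGIONS 64

struct reserved_mem {
	const char *name;
	unsigned long long base, size;
};

/* Temporary static storage, writable before the allocator is usable. */
static struct reserved_mem temp[MAX_RESERVED_REGIONS];
static int temp_count;

/* Early: record a dynamically-placed region once its base is chosen. */
static void save_node_early(const char *name, unsigned long long base,
			    unsigned long long size)
{
	if (temp_count == MAX_RESERVED_REGIONS)
		return; /* temp array full: region info is dropped */
	temp[temp_count++] = (struct reserved_mem){ name, base, size };
}

/*
 * Late: allocate an array sized for *all* regions and copy the early
 * entries over; statically-placed regions are appended afterwards.
 */
static struct reserved_mem *alloc_final_array(int total)
{
	struct reserved_mem *arr = calloc(total, sizeof(*arr));

	if (arr)
		memcpy(arr, temp, temp_count * sizeof(*arr));
	return arr;
}
```

In the series itself the temporary array is additionally marked __initdata so its memory is returned to the system once boot completes.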
Note:
Because a temporary static array is used to store the information of
the dynamically-placed reserved memory regions, there is still a limit
of 64 for this particular kind of reserved memory region.
From my observation, these regions are typically few in number, hence I
do not expect this to be an issue for now.
Patch Versions:
v7:
- Make changes to initialize the reserved memory regions earlier, in
response to an issue reported in v6:
https://lore.kernel.org/all/20240610213403.GA1697364@thelio-3990X/
- For the reserved regions to be set up properly,
fdt_init_reserved_mem_node() needs to be called on each of the regions
before the page tables are set up. Since the function requires a
reference to the devicetree node of each region, we are not able to
use the unflattened devicetree APIs because they are not available
until after the page tables have been set up.
Hence, revert the use of the unflattened devicetree APIs as a result of
this limitation, which was discovered in v6:
https://lore.kernel.org/all/986361f4-f000-4129-8214-39f2fb4a90da@gmail.com/
https://lore.kernel.org/all/DU0PR04MB9299C3EC247E1FE2C373440F80DE2@DU0PR04MB9299.eurprd04.prod.outlook.com/
v6:
https://lore.kernel.org/all/20240528223650.619532-1-quic_obabatun@quicinc.com/
- Rebased patchset on top of v6.10-rc1.
- Addressed comments received in v5 such as:
1. Switched to using relevant typed functions such as
of_property_read_u32(), of_property_present(), etc.
2. Switched to using of_address_to_resource() to read the "reg"
property of nodes.
3. Renamed functions using "of_*" naming scheme instead of "dt_*".
v5:
https://lore.kernel.org/all/20240328211543.191876-1-quic_obabatun@quicinc.com/
- Rebased changes on top of v6.9-rc1.
- Addressed minor code comments from v4.
v4:
https://lore.kernel.org/all/20240308191204.819487-2-quic_obabatun@quicinc.com/
- Move fdt_init_reserved_mem() back into the unflatten_device_tree()
function.
- Fix warnings found by Kernel test robot:
https://lore.kernel.org/all/202401281219.iIhqs1Si-lkp@intel.com/
https://lore.kernel.org/all/202401281304.tsu89Kcm-lkp@intel.com/
https://lore.kernel.org/all/202401291128.e7tdNh5x-lkp@intel.com/
v3:
https://lore.kernel.org/all/20240126235425.12233-1-quic_obabatun@quicinc.com/
- Make use of __initdata to delete the temporary static array after
dynamically allocating memory for reserved_mem array using memblock.
- Move call to fdt_init_reserved_mem() out of the
unflatten_device_tree() function and into architecture specific setup
code.
- Breaking up the changes for the individual architectures into separate
patches.
v2:
https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@quicinc.com/
- Extend changes to all other relevant architectures by moving
fdt_init_reserved_mem() into the unflatten_device_tree() function.
- Add code to use unflatten devicetree APIs to process the reserved
memory regions.
v1:
https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@quicinc.com/
References:
[1]
https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reserved-memory/reserved-memory.yaml#L79
Oreoluwa Babatunde (2):
of: reserved_mem: Restructure how the reserved memory regions are
processed
of: reserved_mem: Add code to dynamically allocate reserved_mem array
drivers/of/fdt.c | 5 +-
drivers/of/of_private.h | 3 +-
drivers/of/of_reserved_mem.c | 231 +++++++++++++++++++++++++++--------
3 files changed, 188 insertions(+), 51 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed
2024-08-09 18:48 [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array Oreoluwa Babatunde
@ 2024-08-09 18:48 ` Oreoluwa Babatunde
2024-08-16 21:07 ` Rob Herring (Arm)
2024-08-19 22:04 ` Rob Herring
2024-08-09 18:48 ` [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array Oreoluwa Babatunde
` (2 subsequent siblings)
3 siblings, 2 replies; 12+ messages in thread
From: Oreoluwa Babatunde @ 2024-08-09 18:48 UTC (permalink / raw)
To: robh, saravanak
Cc: klarasmodin, aisheng.dong, hch, m.szyprowski, robin.murphy,
devicetree, linux-kernel, iommu, will, catalin.marinas, kernel,
Oreoluwa Babatunde
Reserved memory regions defined in the devicetree can be broken up into
two groups:
i) Statically-placed reserved memory regions
i.e. regions defined with a static start address and size using the
"reg" property.
ii) Dynamically-placed reserved memory regions.
i.e. regions defined by specifying an address range where they can be
placed in memory using the "alloc_ranges" and "size" properties.
These regions are processed and set aside at boot time.
This is done in two stages as seen below:
Stage 1:
At this stage, fdt_scan_reserved_mem() scans through the child nodes of
the reserved_memory node using the flattened devicetree and does the
following:
1) If the node represents a statically-placed reserved memory region,
i.e. if it is defined using the "reg" property:
- Call memblock_reserve() or memblock_mark_nomap() as needed.
- Add the information for that region into the reserved_mem array
using fdt_reserved_mem_save_node().
i.e. fdt_reserved_mem_save_node(node, name, base, size).
2) If the node represents a dynamically-placed reserved memory region,
i.e. if it is defined using "alloc-ranges" and "size" properties:
- Add the information for that region to the reserved_mem array with
the starting address and size set to 0.
i.e. fdt_reserved_mem_save_node(node, name, 0, 0).
Note: This region is saved to the array with a starting address of 0
because a starting address is not yet allocated for it.
Stage 2:
After iterating through all the reserved memory nodes and storing their
relevant information in the reserved_mem array, fdt_init_reserved_mem()
is called and does the following:
1) For statically-placed reserved memory regions:
- Call the region specific init function using
__reserved_mem_init_node().
2) For dynamically-placed reserved memory regions:
- Call __reserved_mem_alloc_size() which is used to allocate memory
for each of these regions, and mark them as nomap if they have the
nomap property specified in the DT.
- Call the region specific init function.
The current size of the reserved_mem array is 64, as defined by
MAX_RESERVED_REGIONS. This means that at most 64 reserved memory
regions can be specified on a system.
As systems continue to grow more complex, the number of reserved
memory regions they need is also growing and is starting to hit this
64-region limit, hence the need to make the reserved_mem array
dynamically sized (i.e. dynamically allocating memory for the
reserved_mem array using memblock_alloc_*).
On architectures such as arm64, memory allocated using memblock is
writable only after the page tables have been set up. This means that
if the reserved_mem array is going to be dynamically allocated, it
needs to happen after the page tables have been set up, not before.
Since the reserved memory regions are currently processed and added to
the array before the page tables are set up, the order in which some of
the processing is done needs to change to allow the reserved_mem array
to be dynamically sized.
It is possible to process the statically-placed reserved memory regions
without storing them in the reserved_mem array until after the page
tables have been set up, because all the information stored in the
array is readily available in the devicetree and can be referenced at
any time.
Dynamically-placed reserved memory regions, on the other hand, are
assigned a start address only at runtime, and hence need a place to be
stored once they are allocated since there is no other reference to
their start address.
Hence this patch changes the processing order of the reserved memory
regions in the following ways:
Step 1:
fdt_scan_reserved_mem() scans through the child nodes of
the reserved_memory node using the flattened devicetree and does the
following:
1) If the node represents a statically-placed reserved memory region,
i.e. if it is defined using the "reg" property:
- Call memblock_reserve() or memblock_mark_nomap() as needed.
- Call the region specific initialization function for the region
using fdt_init_reserved_mem_node().
2) If the node represents a dynamically-placed reserved memory region,
i.e. if it is defined using "alloc-ranges" and "size" properties:
- Call __reserved_mem_alloc_size() which will:
i) Allocate memory for the reserved region and call
memblock_mark_nomap() as needed.
ii) Call the region specific initialization function using
fdt_init_reserved_mem_node().
iii) Save the region information in the reserved_mem array using
fdt_reserved_mem_save_node().
Step 2:
1) This stage of the reserved memory processing is now only used to add
the statically-placed reserved memory regions into the reserved_mem
array using fdt_scan_reserved_mem_reg_nodes().
2) This step is also moved to after the page tables have been set
up. Moving this allows us to replace the reserved_mem array with a
dynamically sized array before storing the rest of these regions.
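The ordering constraint in Step 1 (reserve every "reg" region before allocating any "alloc-ranges" region, so an allocation can never land inside a statically-placed region) can be sketched in plain userspace C. This is a model with illustrative names and counters, not the kernel code:

```c
#include <stdbool.h>

#define MAX_RESERVED_REGIONS 64

struct node {
	const char *name;
	bool has_reg; /* true: statically placed via "reg" */
};

static int dynamic_nodes[MAX_RESERVED_REGIONS];
static int dynamic_cnt;
static int reserved_calls, alloc_calls;

/* Models reserving (and initializing) a statically-placed region. */
static void reserve_static(const struct node *n)
{
	(void)n;
	reserved_calls++;
}

static void scan(const struct node *nodes, int cnt)
{
	/* Pass 1: reserve all "reg" regions, queue the dynamic ones. */
	for (int i = 0; i < cnt; i++) {
		if (nodes[i].has_reg)
			reserve_static(&nodes[i]);
		else if (dynamic_cnt < MAX_RESERVED_REGIONS)
			dynamic_nodes[dynamic_cnt++] = i;
	}
	/*
	 * Pass 2: only now allocate the dynamically-placed regions, so
	 * every statically-placed region is already off limits.
	 */
	for (int i = 0; i < dynamic_cnt; i++)
		alloc_calls++;
}
```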
Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
---
drivers/of/fdt.c | 5 +-
drivers/of/of_private.h | 3 +-
drivers/of/of_reserved_mem.c | 172 +++++++++++++++++++++++++----------
3 files changed, 131 insertions(+), 49 deletions(-)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 68103ad230ee..d4b7aaa70e31 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -511,8 +511,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
break;
memblock_reserve(base, size);
}
-
- fdt_init_reserved_mem();
}
/**
@@ -1239,6 +1237,9 @@ void __init unflatten_device_tree(void)
of_alias_scan(early_init_dt_alloc_memory_arch);
unittest_unflatten_overlay_base();
+
+ /* Save the statically-placed regions in the reserved_mem array */
+ fdt_scan_reserved_mem_reg_nodes();
}
/**
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 04aa2a91f851..29525c0b9939 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -9,6 +9,7 @@
*/
#define FDT_ALIGN_SIZE 8
+#define MAX_RESERVED_REGIONS 64
/**
* struct alias_prop - Alias property in 'aliases' node
@@ -180,7 +181,7 @@ static inline struct device_node *__of_get_dma_parent(const struct device_node *
#endif
int fdt_scan_reserved_mem(void);
-void fdt_init_reserved_mem(void);
+void __init fdt_scan_reserved_mem_reg_nodes(void);
bool of_fdt_device_is_available(const void *blob, unsigned long node);
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 46e1c3fbc769..b52690e554f0 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -27,7 +27,6 @@
#include "of_private.h"
-#define MAX_RESERVED_REGIONS 64
static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
static int reserved_mem_count;
@@ -96,6 +95,8 @@ static int __init early_init_dt_reserve_memory(phys_addr_t base,
return memblock_reserve(base, size);
}
+static void __init fdt_init_reserved_mem_node(unsigned long node, const char *uname,
+ phys_addr_t base, phys_addr_t size);
/*
* __reserved_mem_reserve_reg() - reserve all memory described in 'reg' property
*/
@@ -135,7 +136,8 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
len -= t_len;
if (first) {
- fdt_reserved_mem_save_node(node, uname, base, size);
+ /* Call region specific initialization function */
+ fdt_init_reserved_mem_node(node, uname, base, size);
first = 0;
}
}
@@ -165,12 +167,74 @@ static int __init __reserved_mem_check_root(unsigned long node)
return 0;
}
+static void __init __rmem_check_for_overlap(void);
+
+/**
+ * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined
+ * reserved memory regions.
+ *
+ * This function is used to scan through the DT and store the
+ * information for the reserved memory regions that are defined using
+ * the "reg" property. The region node number, name, base address, and
+ * size are all stored in the reserved_mem array by calling the
+ * fdt_reserved_mem_save_node() function.
+ */
+void __init fdt_scan_reserved_mem_reg_nodes(void)
+{
+ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
+ const void *fdt = initial_boot_params;
+ phys_addr_t base, size;
+ const __be32 *prop;
+ int node, child;
+ int len;
+
+ node = fdt_path_offset(fdt, "/reserved-memory");
+ if (node < 0) {
+ pr_info("Reserved memory: No reserved-memory node in the DT\n");
+ return;
+ }
+
+ if (__reserved_mem_check_root(node)) {
+ pr_err("Reserved memory: unsupported node format, ignoring\n");
+ return;
+ }
+
+ fdt_for_each_subnode(child, fdt, node) {
+ const char *uname;
+
+ prop = of_get_flat_dt_prop(child, "reg", &len);
+ if (!prop)
+ continue;
+ if (!of_fdt_device_is_available(fdt, child))
+ continue;
+
+ uname = fdt_get_name(fdt, child, NULL);
+ if (len && len % t_len != 0) {
+ pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n",
+ uname);
+ continue;
+ }
+ base = dt_mem_next_cell(dt_root_addr_cells, &prop);
+ size = dt_mem_next_cell(dt_root_size_cells, &prop);
+
+ if (size)
+ fdt_reserved_mem_save_node(child, uname, base, size);
+ }
+
+ /* check for overlapping reserved regions */
+ __rmem_check_for_overlap();
+}
+
+static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname);
+
/*
* fdt_scan_reserved_mem() - scan a single FDT node for reserved memory
*/
int __init fdt_scan_reserved_mem(void)
{
int node, child;
+ int dynamic_nodes_cnt = 0;
+ int dynamic_nodes[MAX_RESERVED_REGIONS];
const void *fdt = initial_boot_params;
node = fdt_path_offset(fdt, "/reserved-memory");
@@ -192,8 +256,24 @@ int __init fdt_scan_reserved_mem(void)
uname = fdt_get_name(fdt, child, NULL);
err = __reserved_mem_reserve_reg(child, uname);
- if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL))
- fdt_reserved_mem_save_node(child, uname, 0, 0);
+ /*
+ * Save the nodes for the dynamically-placed regions
+ * into an array which will be used for allocation right
+ * after all the statically-placed regions are reserved
+ * or marked as no-map. This is done to avoid dynamically
+ * allocating from one of the statically-placed regions.
+ */
+ if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) {
+ dynamic_nodes[dynamic_nodes_cnt] = child;
+ dynamic_nodes_cnt++;
+ }
+ }
+ for (int i = 0; i < dynamic_nodes_cnt; i++) {
+ const char *uname;
+
+ child = dynamic_nodes[i];
+ uname = fdt_get_name(fdt, child, NULL);
+ __reserved_mem_alloc_size(child, uname);
}
return 0;
}
@@ -253,8 +333,7 @@ static int __init __reserved_mem_alloc_in_range(phys_addr_t size,
* __reserved_mem_alloc_size() - allocate reserved memory described by
* 'size', 'alignment' and 'alloc-ranges' properties.
*/
-static int __init __reserved_mem_alloc_size(unsigned long node,
- const char *uname, phys_addr_t *res_base, phys_addr_t *res_size)
+static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname)
{
int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
phys_addr_t start = 0, end = 0;
@@ -334,9 +413,11 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
return -ENOMEM;
}
- *res_base = base;
- *res_size = size;
+ /* Call region specific initialization function */
+ fdt_init_reserved_mem_node(node, uname, base, size);
+ /* Save region in the reserved_mem array */
+ fdt_reserved_mem_save_node(node, uname, base, size);
return 0;
}
@@ -425,48 +506,47 @@ static void __init __rmem_check_for_overlap(void)
}
/**
- * fdt_init_reserved_mem() - allocate and init all saved reserved memory regions
+ * fdt_init_reserved_mem_node() - Initialize a reserved memory region
+ * @node: fdt node for the region to be initialized.
+ * @uname: name of the region to be initialized.
+ * @base: base address of the region to be initialized.
+ * @size: size of the region to be initialized.
+ *
+ * This function is used to call the region specific initialization
+ * function on a reserved memory region described by the node, name,
+ * base address and size being passed in as arguments.
*/
-void __init fdt_init_reserved_mem(void)
+static void __init fdt_init_reserved_mem_node(unsigned long node, const char *uname,
+ phys_addr_t base, phys_addr_t size)
{
- int i;
-
- /* check for overlapping reserved regions */
- __rmem_check_for_overlap();
-
- for (i = 0; i < reserved_mem_count; i++) {
- struct reserved_mem *rmem = &reserved_mem[i];
- unsigned long node = rmem->fdt_node;
- int err = 0;
- bool nomap;
+ int err = 0;
+ bool nomap;
+ struct reserved_mem rmem = {
+ .fdt_node = node,
+ .name = uname,
+ .base = base,
+ .size = size
+ };
- nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
- if (rmem->size == 0)
- err = __reserved_mem_alloc_size(node, rmem->name,
- &rmem->base, &rmem->size);
- if (err == 0) {
- err = __reserved_mem_init_node(rmem);
- if (err != 0 && err != -ENOENT) {
- pr_info("node %s compatible matching fail\n",
- rmem->name);
- if (nomap)
- memblock_clear_nomap(rmem->base, rmem->size);
- else
- memblock_phys_free(rmem->base,
- rmem->size);
- } else {
- phys_addr_t end = rmem->base + rmem->size - 1;
- bool reusable =
- (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
-
- pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
- &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
- nomap ? "nomap" : "map",
- reusable ? "reusable" : "non-reusable",
- rmem->name ? rmem->name : "unknown");
- }
- }
+ err = __reserved_mem_init_node(&rmem);
+ if (err != 0 && err != -ENOENT) {
+ pr_info("node %s compatible matching fail\n", rmem.name);
+ if (nomap)
+ memblock_clear_nomap(rmem.base, rmem.size);
+ else
+ memblock_phys_free(rmem.base, rmem.size);
+ } else {
+ phys_addr_t end = rmem.base + rmem.size - 1;
+ bool reusable =
+ (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
+
+ pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
+ &rmem.base, &end, (unsigned long)(rmem.size / SZ_1K),
+ nomap ? "nomap" : "map",
+ reusable ? "reusable" : "non-reusable",
+ rmem.name ? rmem.name : "unknown");
}
}
--
2.34.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array
2024-08-09 18:48 [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed Oreoluwa Babatunde
@ 2024-08-09 18:48 ` Oreoluwa Babatunde
2024-08-16 21:08 ` Rob Herring (Arm)
2024-08-09 20:09 ` [PATCH v7 0/2] Dynamic Allocation of the " Klara Modin
2024-08-19 17:23 ` Andy Shevchenko
3 siblings, 1 reply; 12+ messages in thread
From: Oreoluwa Babatunde @ 2024-08-09 18:48 UTC (permalink / raw)
To: robh, saravanak
Cc: klarasmodin, aisheng.dong, hch, m.szyprowski, robin.murphy,
devicetree, linux-kernel, iommu, will, catalin.marinas, kernel,
Oreoluwa Babatunde
The reserved_mem array is statically allocated with a size of
MAX_RESERVED_REGIONS (64). Therefore, if the number of reserved memory
regions exceeds this size, there will not be enough space to store
all the data.
Hence, supplement the static array with a dynamically allocated array
sized according to the number of reserved memory regions specified in
the DT.
On architectures such as arm64, memblock-allocated memory is not
writable until after the page tables have been set up. Hence, the
dynamic allocation of the reserved_mem array can only be done after
the page tables have been set up.
As a result, a temporary static array is still needed in the initial
stages to store the information of the dynamically-placed reserved
memory regions, because their start addresses are selected only at
run-time and are not stored anywhere else.
It is not possible to wait until the reserved_mem array is allocated,
because the allocation happens after the page tables are set up, and
the reserved memory regions need to be initialized before then.
After the reserved_mem array is allocated, all entries from the static
array are copied over to the new array, and the rest of the information
for the statically-placed reserved memory regions is read in from the
DT and stored in the new array as well.
Once the init process is completed, the temporary static array is
released back to the system because it is no longer needed. This is
achieved by marking it as __initdata.
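The exact-size allocation with an overflow check, as done by alloc_reserved_mem_array() in the patch, can be modeled in userspace C. The sketch below reimplements the kernel's array_size() saturation behavior and substitutes malloc() for memblock_alloc(); the function names are otherwise illustrative:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for the kernel's array_size(): saturates to SIZE_MAX. */
static size_t array_size(size_t n, size_t elem)
{
	if (elem && n > SIZE_MAX / elem)
		return SIZE_MAX;
	return n * elem;
}

struct reserved_mem {
	unsigned long long base, size;
};

/*
 * Allocate an exact-count array, copy over the `used` temporary entries,
 * and zero the tail, mirroring the memblock_alloc()/memcpy()/memset()
 * sequence in alloc_reserved_mem_array().
 */
static struct reserved_mem *grow(const struct reserved_mem *tmp, int used,
				 int total)
{
	size_t bytes = array_size((size_t)total, sizeof(struct reserved_mem));
	struct reserved_mem *arr;

	if (bytes == SIZE_MAX)
		return NULL; /* count * element size overflowed */
	arr = malloc(bytes);
	if (!arr)
		return NULL;
	memcpy(arr, tmp, (size_t)used * sizeof(*arr));
	memset(arr + used, 0, bytes - (size_t)used * sizeof(*arr));
	return arr;
}
```

On either failure path the kernel version simply keeps using the static 64-entry array, so boot still succeeds with the old limit.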
Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
---
drivers/of/of_reserved_mem.c | 68 +++++++++++++++++++++++++++++++++---
1 file changed, 64 insertions(+), 4 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index b52690e554f0..d90972cb5949 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -27,7 +27,9 @@
#include "of_private.h"
-static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
+static struct reserved_mem reserved_mem_array[MAX_RESERVED_REGIONS] __initdata;
+static struct reserved_mem *reserved_mem __refdata = reserved_mem_array;
+static int total_reserved_mem_cnt = MAX_RESERVED_REGIONS;
static int reserved_mem_count;
static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
@@ -55,6 +57,50 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
return err;
}
+/*
+ * alloc_reserved_mem_array() - allocate memory for the reserved_mem
+ * array using memblock
+ *
+ * This function is used to allocate memory for the reserved_mem
+ * array according to the total number of reserved memory regions
+ * defined in the DT.
+ * After the new array is allocated, the information stored in
+ * the initial static array is copied over to this new array and
+ * the new array is used from this point on.
+ */
+static void __init alloc_reserved_mem_array(void)
+{
+ struct reserved_mem *new_array;
+ size_t alloc_size, copy_size, memset_size;
+
+ alloc_size = array_size(total_reserved_mem_cnt, sizeof(*new_array));
+ if (alloc_size == SIZE_MAX) {
+ pr_err("Failed to allocate memory for reserved_mem array with err: %d", -EOVERFLOW);
+ return;
+ }
+
+ new_array = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!new_array) {
+ pr_err("Failed to allocate memory for reserved_mem array with err: %d", -ENOMEM);
+ return;
+ }
+
+ copy_size = array_size(reserved_mem_count, sizeof(*new_array));
+ if (copy_size == SIZE_MAX) {
+ memblock_free(new_array, alloc_size);
+ total_reserved_mem_cnt = MAX_RESERVED_REGIONS;
+ pr_err("Failed to allocate memory for reserved_mem array with err: %d", -EOVERFLOW);
+ return;
+ }
+
+ memset_size = alloc_size - copy_size;
+
+ memcpy(new_array, reserved_mem, copy_size);
+ memset(new_array + reserved_mem_count, 0, memset_size);
+
+ reserved_mem = new_array;
+}
+
/*
* fdt_reserved_mem_save_node() - save fdt node for second pass initialization
*/
@@ -63,7 +109,7 @@ static void __init fdt_reserved_mem_save_node(unsigned long node, const char *un
{
struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
- if (reserved_mem_count == ARRAY_SIZE(reserved_mem)) {
+ if (reserved_mem_count == total_reserved_mem_cnt) {
pr_err("not enough space for all defined regions.\n");
return;
}
@@ -199,6 +245,13 @@ void __init fdt_scan_reserved_mem_reg_nodes(void)
return;
}
+ /*
+ * Allocate the exact size needed for the reserved_mem array and
+ * copy all the contents from the previous array if allocation
+ * is successful.
+ */
+ alloc_reserved_mem_array();
+
fdt_for_each_subnode(child, fdt, node) {
const char *uname;
@@ -233,7 +286,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
int __init fdt_scan_reserved_mem(void)
{
int node, child;
- int dynamic_nodes_cnt = 0;
+ int dynamic_nodes_cnt = 0, count = 0;
int dynamic_nodes[MAX_RESERVED_REGIONS];
const void *fdt = initial_boot_params;
@@ -256,6 +309,9 @@ int __init fdt_scan_reserved_mem(void)
uname = fdt_get_name(fdt, child, NULL);
err = __reserved_mem_reserve_reg(child, uname);
+ if (!err)
+ count++;
+
/*
* Save the nodes for the dynamically-placed regions
* into an array which will be used for allocation right
@@ -270,11 +326,15 @@ int __init fdt_scan_reserved_mem(void)
}
for (int i = 0; i < dynamic_nodes_cnt; i++) {
const char *uname;
+ int err;
child = dynamic_nodes[i];
uname = fdt_get_name(fdt, child, NULL);
- __reserved_mem_alloc_size(child, uname);
+ err = __reserved_mem_alloc_size(child, uname);
+ if (!err)
+ count++;
}
+ total_reserved_mem_cnt = count;
return 0;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
2024-08-09 18:48 [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array Oreoluwa Babatunde
@ 2024-08-09 20:09 ` Klara Modin
2024-08-19 17:23 ` Andy Shevchenko
3 siblings, 0 replies; 12+ messages in thread
From: Klara Modin @ 2024-08-09 20:09 UTC (permalink / raw)
To: Oreoluwa Babatunde, robh, saravanak
Cc: aisheng.dong, hch, m.szyprowski, robin.murphy, devicetree,
linux-kernel, iommu, will, catalin.marinas, kernel
Hi,
On 2024-08-09 20:48, Oreoluwa Babatunde wrote:
> The reserved_mem array is used to store data for the different
> reserved memory regions defined in the DT of a device. The array
> stores information such as region name, node reference, start-address,
> and size of the different reserved memory regions.
>
> The array is currently statically allocated with a size of
> MAX_RESERVED_REGIONS(64). This means that any system that specifies a
> number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
> will not have enough space to store the information for all the regions.
>
> This can be fixed by making the reserved_mem array a dynamically sized
> array which is allocated using memblock_alloc() based on the exact
> number of reserved memory regions defined in the DT.
>
> On architectures such as arm64, memblock allocated memory is not
> writable until after the page tables have been setup.
> This is an issue because the current implementation initializes the
> reserved memory regions and stores their information in the array before
> the page tables are setup. Hence, dynamically allocating the
> reserved_mem array and attempting to write information to it at this
> point will fail.
>
> Therefore, the allocation of the reserved_mem array will need to be done
> after the page tables have been setup, which means that the reserved
> memory regions will also need to wait until after the page tables have
> been setup to be stored in the array.
>
> When processing the reserved memory regions defined in the DT, these
> regions are marked as reserved by calling memblock_reserve(base, size).
> Where: base = base address of the reserved region.
> size = the size of the reserved memory region.
>
> Depending on if that region is defined using the "no-map" property,
> memblock_mark_nomap(base, size) is also called.
>
> The "no-map" property is used to indicate to the operating system that a
> mapping of the specified region must NOT be created. This also means
> that no access (including speculative accesses) is allowed on this
> region of memory except when it is coming from the device driver that
> this region of memory is being reserved for.[1]
>
> Therefore, it is important to call memblock_reserve() and
> memblock_mark_nomap() on all the reserved memory regions before the
> system sets up the page tables so that the system does not unknowingly
> include any of the no-map reserved memory regions in the memory map.
>
> There are two ways to define how/where a reserved memory region is
> placed in memory:
> i) Statically-placed reserved memory regions
> i.e. regions defined with a set start address and size using the
> "reg" property in the DT.
> ii) Dynamically-placed reserved memory regions.
> i.e. regions defined by specifying a range of addresses where they can
> be placed in memory using the "alloc-ranges" and "size" properties
> in the DT.
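For reference, a hypothetical reserved-memory node showing both forms (node names, addresses, and sizes here are invented for illustration; the property names follow the reserved-memory binding):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* i) statically placed: fixed base and size via "reg" */
	fw_region: firmware@9e000000 {
		reg = <0x0 0x9e000000 0x0 0x100000>;
		no-map;
	};

	/* ii) dynamically placed: the OS picks a base inside
	 * "alloc-ranges" at boot; only "size" is fixed */
	linux,cma {
		compatible = "shared-dma-pool";
		reusable;
		size = <0x0 0x4000000>;
		alloc-ranges = <0x0 0x80000000 0x0 0x20000000>;
	};
};
```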
>
> The dynamically-placed reserved memory regions get assigned a start
> address only at runtime. And this needs to be done before the page
> tables are setup so that memblock_reserve() and memblock_mark_nomap()
> can be called on the allocated region as explained above.
> Since the dynamically allocated reserved_mem array can only be
> available after the page tables have been setup, the information for
> the dynamically-placed reserved memory regions needs to be stored
> somewhere temporarily until the reserved_mem array is available.
>
> Therefore, this series makes use of a temporary static array to store
> the information of the dynamically-placed reserved memory regions until
> the reserved_mem array is allocated.
> Once the reserved_mem array is available, the information is copied over
> from the temporary array into the reserved_mem array, and the memory for
> the temporary array is freed back to the system.
>
> The information for the statically-placed reserved memory regions does
> not need to be stored in a temporary array because their starting
> address is already stored in the devicetree.
> Once the reserved_mem array is allocated, the information for the
> statically-placed reserved memory regions is added to the array.
>
> Note:
> Because of the use of a temporary array to store the information of the
> dynamically-placed reserved memory regions, there is still a limit of
> 64 for this particular kind of reserved memory region.
> From my observation, these regions are typically few in number, so I
> do not expect this to be an issue for now.
>
> Patch Versions:
> v7:
> - Make changes to initialize the reserved memory regions earlier in
> response to issue reported in v6:
> https://lore.kernel.org/all/20240610213403.GA1697364@thelio-3990X/
>
> - For the reserved regions to be setup properly,
> fdt_init_reserved_mem_node() needs to be called on each of the regions
> before the page tables are setup. Since the function requires a
> reference to the devicetree node of each region, we are not able to
> use the unflattened devicetree APIs since they are not available until
> after the page tables have been setup.
> Hence, revert the use of the unflattened devicetree APIs as a result
> of this limitation, which was discovered in v6:
> https://lore.kernel.org/all/986361f4-f000-4129-8214-39f2fb4a90da@gmail.com/
> https://lore.kernel.org/all/DU0PR04MB9299C3EC247E1FE2C373440F80DE2@DU0PR04MB9299.eurprd04.prod.outlook.com/
>
> v6:
> https://lore.kernel.org/all/20240528223650.619532-1-quic_obabatun@quicinc.com/
> - Rebased patchset on top of v6.10-rc1.
> - Addressed comments received in v5 such as:
> 1. Switched to using relevant typed functions such as
> of_property_read_u32(), of_property_present(), etc.
> 2. Switched to using of_address_to_resource() to read the "reg"
> property of nodes.
> 3. Renamed functions using "of_*" naming scheme instead of "dt_*".
>
> v5:
> https://lore.kernel.org/all/20240328211543.191876-1-quic_obabatun@quicinc.com/
> - Rebased changes on top of v6.9-rc1.
> - Addressed minor code comments from v4.
>
> v4:
> https://lore.kernel.org/all/20240308191204.819487-2-quic_obabatun@quicinc.com/
> - Move fdt_init_reserved_mem() back into the unflatten_device_tree()
> function.
> - Fix warnings found by Kernel test robot:
> https://lore.kernel.org/all/202401281219.iIhqs1Si-lkp@intel.com/
> https://lore.kernel.org/all/202401281304.tsu89Kcm-lkp@intel.com/
> https://lore.kernel.org/all/202401291128.e7tdNh5x-lkp@intel.com/
>
> v3:
> https://lore.kernel.org/all/20240126235425.12233-1-quic_obabatun@quicinc.com/
> - Make use of __initdata to delete the temporary static array after
> dynamically allocating memory for reserved_mem array using memblock.
> - Move call to fdt_init_reserved_mem() out of the
> unflatten_device_tree() function and into architecture specific setup
> code.
> - Breaking up the changes for the individual architectures into separate
> patches.
>
> v2:
> https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@quicinc.com/
> - Extend changes to all other relevant architectures by moving
> fdt_init_reserved_mem() into the unflatten_device_tree() function.
> - Add code to use unflatten devicetree APIs to process the reserved
> memory regions.
>
> v1:
> https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@quicinc.com/
>
> References:
> [1]
> https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reserved-memory/reserved-memory.yaml#L79
>
> Oreoluwa Babatunde (2):
> of: reserved_mem: Restructure how the reserved memory regions are
> processed
> of: reserved_mem: Add code to dynamically allocate reserved_mem array
>
> drivers/of/fdt.c | 5 +-
> drivers/of/of_private.h | 3 +-
> drivers/of/of_reserved_mem.c | 231 +++++++++++++++++++++++++++--------
> 3 files changed, 188 insertions(+), 51 deletions(-)
>
I did not see anything suspicious on my relevant machines with this
patch series (Raspberry Pi 1 and 3, Edgerouter 6P).
Regards,
Tested-by: Klara Modin <klarasmodin@gmail.com>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed Oreoluwa Babatunde
@ 2024-08-16 21:07 ` Rob Herring (Arm)
2024-08-19 22:04 ` Rob Herring
1 sibling, 0 replies; 12+ messages in thread
From: Rob Herring (Arm) @ 2024-08-16 21:07 UTC (permalink / raw)
To: Oreoluwa Babatunde
Cc: will, m.szyprowski, saravanak, klarasmodin, robin.murphy, iommu,
aisheng.dong, kernel, devicetree, catalin.marinas, hch,
linux-kernel
On Fri, 09 Aug 2024 11:48:13 -0700, Oreoluwa Babatunde wrote:
> Reserved memory regions defined in the devicetree can be broken up into
> two groups:
> i) Statically-placed reserved memory regions
> i.e. regions defined with a static start address and size using the
> "reg" property.
> ii) Dynamically-placed reserved memory regions.
> i.e. regions defined by specifying an address range where they can be
> placed in memory using the "alloc-ranges" and "size" properties.
>
> These regions are processed and set aside at boot time.
> This is done in two stages as seen below:
>
> Stage 1:
> At this stage, fdt_scan_reserved_mem() scans through the child nodes of
> the reserved_memory node using the flattened devicetree and does the
> following:
>
> 1) If the node represents a statically-placed reserved memory region,
> i.e. if it is defined using the "reg" property:
> - Call memblock_reserve() or memblock_mark_nomap() as needed.
> - Add the information for that region into the reserved_mem array
> using fdt_reserved_mem_save_node().
> i.e. fdt_reserved_mem_save_node(node, name, base, size).
>
> 2) If the node represents a dynamically-placed reserved memory region,
> i.e. if it is defined using "alloc-ranges" and "size" properties:
> - Add the information for that region to the reserved_mem array with
> the starting address and size set to 0.
> i.e. fdt_reserved_mem_save_node(node, name, 0, 0).
> Note: This region is saved to the array with a starting address of 0
> because a starting address is not yet allocated for it.
>
> Stage 2:
> After iterating through all the reserved memory nodes and storing their
> relevant information in the reserved_mem array, fdt_init_reserved_mem() is
> called and does the following:
>
> 1) For statically-placed reserved memory regions:
> - Call the region specific init function using
> __reserved_mem_init_node().
> 2) For dynamically-placed reserved memory regions:
> - Call __reserved_mem_alloc_size() which is used to allocate memory
> for each of these regions, and mark them as nomap if they have the
> nomap property specified in the DT.
> - Call the region specific init function.
>
> The current size of the reserved_mem array is 64, as defined by
> MAX_RESERVED_REGIONS. This means that there is a limit of 64 on
> how many reserved memory regions can be specified on a system.
> As systems continue to grow more complex, the number of
> reserved memory regions needed is also growing and is starting to hit
> this 64-count limit, hence the need to make the reserved_mem array
> dynamically sized (i.e. dynamically allocating memory for the
> reserved_mem array using memblock_alloc_*).
>
> On architectures such as arm64, memory allocated using memblock is
> writable only after the page tables have been setup. This means that if
> the reserved_mem array is going to be dynamically allocated, it needs to
> happen after the page tables have been setup, not before.
>
> Since the reserved memory regions are currently being processed and
> added to the array before the page tables are setup, there is a need to
> change the order in which some of the processing is done to allow for
> the reserved_mem array to be dynamically sized.
>
> It is possible to process the statically-placed reserved memory regions
> without needing to store them in the reserved_mem array until after the
> page tables have been setup because all the information stored in the
> array is readily available in the devicetree and can be referenced at
> any time.
> Dynamically-placed reserved memory regions on the other hand get
> assigned a start address only at runtime, and hence need a place to be
> stored once they are allocated since there is no other reference to the
> start address for these regions.
>
> Hence this patch changes the processing order of the reserved memory
> regions in the following ways:
>
> Step 1:
> fdt_scan_reserved_mem() scans through the child nodes of
> the reserved_memory node using the flattened devicetree and does the
> following:
>
> 1) If the node represents a statically-placed reserved memory region,
> i.e. if it is defined using the "reg" property:
> - Call memblock_reserve() or memblock_mark_nomap() as needed.
> - Call the region specific initialization function for the region
> using fdt_init_reserved_mem_node().
>
> 2) If the node represents a dynamically-placed reserved memory region,
> i.e. if it is defined using "alloc-ranges" and "size" properties:
> - Call __reserved_mem_alloc_size() which will:
> i) Allocate memory for the reserved region and call
> memblock_mark_nomap() as needed.
> ii) Call the region specific initialization function using
> fdt_init_reserved_mem_node().
> iii) Save the region information in the reserved_mem array using
> fdt_reserved_mem_save_node().
>
> Step 2:
> 1) This stage of the reserved memory processing is now only used to add
> the statically-placed reserved memory regions into the reserved_mem
> array using fdt_scan_reserved_mem_reg_nodes().
>
> 2) This step is also moved to be after the page tables have been
> setup. Moving this will allow us to replace the reserved_mem
> array with a dynamically sized array before storing the rest of
> these regions.
>
> Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
> ---
> drivers/of/fdt.c | 5 +-
> drivers/of/of_private.h | 3 +-
> drivers/of/of_reserved_mem.c | 172 +++++++++++++++++++++++++----------
> 3 files changed, 131 insertions(+), 49 deletions(-)
>
Applied, thanks!
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array
2024-08-09 18:48 ` [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array Oreoluwa Babatunde
@ 2024-08-16 21:08 ` Rob Herring (Arm)
0 siblings, 0 replies; 12+ messages in thread
From: Rob Herring (Arm) @ 2024-08-16 21:08 UTC (permalink / raw)
To: Oreoluwa Babatunde
Cc: kernel, klarasmodin, saravanak, robin.murphy, m.szyprowski,
aisheng.dong, hch, devicetree, will, iommu, catalin.marinas,
linux-kernel
On Fri, 09 Aug 2024 11:48:14 -0700, Oreoluwa Babatunde wrote:
> The reserved_mem array is statically allocated with a size of
> MAX_RESERVED_REGIONS(64). Therefore, if the number of reserved_mem
> regions exceeds this size, there will not be enough space to store
> all the data.
>
> Hence, extend the use of the static array by introducing a
> dynamically allocated array based on the number of reserved memory
> regions specified in the DT.
>
> On architectures such as arm64, memblock allocated memory is not
> writable until after the page tables have been setup. Hence, the
> dynamic allocation of the reserved_mem array will need to be done only
> after the page tables have been setup.
>
> As a result, a temporary static array is still needed in the initial
> stages to store the information of the dynamically-placed reserved
> memory regions because the start address is selected only at run-time
> and is not stored anywhere else.
> It is not possible to wait until the reserved_mem array is allocated
> because this is done after the page tables are setup and the reserved
> memory regions need to be initialized before then.
>
> After the reserved_mem array is allocated, all entries from the static
> array are copied over to the new array, and the rest of the information
> for the statically-placed reserved memory regions is read in from the
> DT and stored in the new array as well.
>
> Once the init process is completed, the temporary static array is
> released back to the system because it is no longer needed. This is
> achieved by marking it as __initdata.
>
> Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
> ---
> drivers/of/of_reserved_mem.c | 68 +++++++++++++++++++++++++++++++++---
> 1 file changed, 64 insertions(+), 4 deletions(-)
>
Applied, thanks!
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
2024-08-09 18:48 [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array Oreoluwa Babatunde
` (2 preceding siblings ...)
2024-08-09 20:09 ` [PATCH v7 0/2] Dynamic Allocation of the " Klara Modin
@ 2024-08-19 17:23 ` Andy Shevchenko
2024-08-19 21:55 ` Rob Herring
2024-08-30 16:35 ` Oreoluwa Babatunde
3 siblings, 2 replies; 12+ messages in thread
From: Andy Shevchenko @ 2024-08-19 17:23 UTC (permalink / raw)
To: Oreoluwa Babatunde
Cc: robh, saravanak, klarasmodin, aisheng.dong, hch, m.szyprowski,
robin.murphy, devicetree, linux-kernel, iommu, will,
catalin.marinas, kernel
On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:
> The reserved_mem array is used to store data for the different
> reserved memory regions defined in the DT of a device. The array
> stores information such as region name, node reference, start-address,
> and size of the different reserved memory regions.
>
> The array is currently statically allocated with a size of
> MAX_RESERVED_REGIONS(64). This means that any system that specifies a
> number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
> will not have enough space to store the information for all the regions.
>
> This can be fixed by making the reserved_mem array a dynamically sized
> array which is allocated using memblock_alloc() based on the exact
> number of reserved memory regions defined in the DT.
>
> On architectures such as arm64, memblock allocated memory is not
> writable until after the page tables have been setup.
> This is an issue because the current implementation initializes the
> reserved memory regions and stores their information in the array before
> the page tables are setup. Hence, dynamically allocating the
> reserved_mem array and attempting to write information to it at this
> point will fail.
>
> Therefore, the allocation of the reserved_mem array will need to be done
> after the page tables have been setup, which means that the reserved
> memory regions will also need to wait until after the page tables have
> been setup to be stored in the array.
>
> When processing the reserved memory regions defined in the DT, these
> regions are marked as reserved by calling memblock_reserve(base, size).
> Where: base = base address of the reserved region.
> size = the size of the reserved memory region.
>
> Depending on if that region is defined using the "no-map" property,
> memblock_mark_nomap(base, size) is also called.
>
> The "no-map" property is used to indicate to the operating system that a
> mapping of the specified region must NOT be created. This also means
> that no access (including speculative accesses) is allowed on this
> region of memory except when it is coming from the device driver that
> this region of memory is being reserved for.[1]
>
> Therefore, it is important to call memblock_reserve() and
> memblock_mark_nomap() on all the reserved memory regions before the
> system sets up the page tables so that the system does not unknowingly
> include any of the no-map reserved memory regions in the memory map.
>
> There are two ways to define how/where a reserved memory region is
> placed in memory:
> i) Statically-placed reserved memory regions
> i.e. regions defined with a set start address and size using the
> "reg" property in the DT.
> ii) Dynamically-placed reserved memory regions.
> i.e. regions defined by specifying a range of addresses where they can
> be placed in memory using the "alloc-ranges" and "size" properties
> in the DT.
>
> The dynamically-placed reserved memory regions get assigned a start
> address only at runtime. And this needs to be done before the page
> tables are setup so that memblock_reserve() and memblock_mark_nomap()
> can be called on the allocated region as explained above.
> Since the dynamically allocated reserved_mem array can only be
> available after the page tables have been setup, the information for
> the dynamically-placed reserved memory regions needs to be stored
> somewhere temporarily until the reserved_mem array is available.
>
> Therefore, this series makes use of a temporary static array to store
> the information of the dynamically-placed reserved memory regions until
> the reserved_mem array is allocated.
> Once the reserved_mem array is available, the information is copied over
> from the temporary array into the reserved_mem array, and the memory for
> the temporary array is freed back to the system.
>
> The information for the statically-placed reserved memory regions does
> not need to be stored in a temporary array because their starting
> address is already stored in the devicetree.
> Once the reserved_mem array is allocated, the information for the
> statically-placed reserved memory regions is added to the array.
>
> Note:
> Because of the use of a temporary array to store the information of the
> dynamically-placed reserved memory regions, there is still a limit of
> 64 for this particular kind of reserved memory region.
> From my observation, these regions are typically few in number, so I
> do not expect this to be an issue for now.
This series (in particular the first patch) broke boot on Intel Meteor
Lake-P. Taking Linux next of 20240819 with these being reverted makes
things work again.
Taking into account the bisectability issue (that's how I noticed the
problem in the first place), I think it would be nice to have no such
patches at all in the respective subsystem tree. On my side, I can help
with testing whatever solution the next version provides.
git bisect start
# status: waiting for both good and bad commits
# good: [47ac09b91befbb6a235ab620c32af719f8208399] Linux 6.11-rc4
git bisect good 47ac09b91befbb6a235ab620c32af719f8208399
# status: waiting for bad commit, 1 good commit known
# bad: [469f1bad3c1c6e268059f78c0eec7e9552b3894c] Add linux-next specific files for 20240819
git bisect bad 469f1bad3c1c6e268059f78c0eec7e9552b3894c
# good: [3f6ea50f8205eb79e4a321559c292eecb059bfaa] Merge branch 'spi-nor/next' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux.git
git bisect good 3f6ea50f8205eb79e4a321559c292eecb059bfaa
# good: [95ff8c994d58104a68eb12988d7bc24597876831] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git
git bisect good 95ff8c994d58104a68eb12988d7bc24597876831
# bad: [9434b7c52128e9959dce1111b8e1078ffc91468d] Merge branch 'usb-next' of git://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial.git
git bisect bad 9434b7c52128e9959dce1111b8e1078ffc91468d
# bad: [791ba08d6d977046e8c4a7f01dabd8770d1eb94d] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
git bisect bad 791ba08d6d977046e8c4a7f01dabd8770d1eb94d
# good: [2b3eb431609a479193044bba064090141a504b9a] Merge branch into tip/master: 'timers/core'
git bisect good 2b3eb431609a479193044bba064090141a504b9a
# good: [81b6ef7427cb4b90c913488c665414ba21bbe46d] Merge branch into tip/master: 'x86/timers'
git bisect good 81b6ef7427cb4b90c913488c665414ba21bbe46d
# bad: [f5d0a26ecd6875f02c6cf4fedf245812015b4cef] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git
git bisect bad f5d0a26ecd6875f02c6cf4fedf245812015b4cef
# good: [5c80b13d27252446973a5ce14a5331b336556f28] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git
git bisect good 5c80b13d27252446973a5ce14a5331b336556f28
# good: [84252c1d2c6efed706037e00f25455378fdda97c] dt-bindings: timer: nxp,lpc3220-timer: Convert to dtschema
git bisect good 84252c1d2c6efed706037e00f25455378fdda97c
# good: [ca35f2837927d73441cfb51174b824ae82a15f93] dt-bindings: soc: fsl: cpm_qe: convert network.txt to yaml
git bisect good ca35f2837927d73441cfb51174b824ae82a15f93
# bad: [a27afc7a6266f02703a6bd492e1f57e8d1ee069b] of: reserved_mem: Add code to dynamically allocate reserved_mem array
git bisect bad a27afc7a6266f02703a6bd492e1f57e8d1ee069b
# bad: [4be66e32070d1e8da72934dbe4dff44a49bd2e5f] of: reserved_mem: Restructure how the reserved memory regions are processed
git bisect bad 4be66e32070d1e8da72934dbe4dff44a49bd2e5f
# good: [d2a97be34548fc5643b4e9536ac8789d839f7374] scripts/dtc: Update to upstream version v1.7.0-95-gbcd02b523429
git bisect good d2a97be34548fc5643b4e9536ac8789d839f7374
# first bad commit: [4be66e32070d1e8da72934dbe4dff44a49bd2e5f] of: reserved_mem: Restructure how the reserved memory regions are processed
--
With Best Regards,
Andy Shevchenko
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
2024-08-19 17:23 ` Andy Shevchenko
@ 2024-08-19 21:55 ` Rob Herring
2024-08-19 22:33 ` Andy Shevchenko
2024-08-30 16:35 ` Oreoluwa Babatunde
1 sibling, 1 reply; 12+ messages in thread
From: Rob Herring @ 2024-08-19 21:55 UTC (permalink / raw)
To: Andy Shevchenko
Cc: Oreoluwa Babatunde, saravanak, klarasmodin, aisheng.dong, hch,
m.szyprowski, robin.murphy, devicetree, linux-kernel, iommu, will,
catalin.marinas, kernel
On Mon, Aug 19, 2024 at 12:23 PM Andy Shevchenko
<andy@black.fi.intel.com> wrote:
>
> On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:
> > The reserved_mem array is used to store data for the different
> > reserved memory regions defined in the DT of a device. The array
> > stores information such as region name, node reference, start-address,
> > and size of the different reserved memory regions.
> >
> > The array is currently statically allocated with a size of
> > MAX_RESERVED_REGIONS(64). This means that any system that specifies a
> > number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
> > will not have enough space to store the information for all the regions.
> >
> > This can be fixed by making the reserved_mem array a dynamically sized
> > array which is allocated using memblock_alloc() based on the exact
> > number of reserved memory regions defined in the DT.
> >
> > On architectures such as arm64, memblock allocated memory is not
> > writable until after the page tables have been setup.
> > This is an issue because the current implementation initializes the
> > reserved memory regions and stores their information in the array before
> > the page tables are setup. Hence, dynamically allocating the
> > reserved_mem array and attempting to write information to it at this
> > point will fail.
> >
> > Therefore, the allocation of the reserved_mem array will need to be done
> > after the page tables have been setup, which means that the reserved
> > memory regions will also need to wait until after the page tables have
> > been setup to be stored in the array.
> >
> > When processing the reserved memory regions defined in the DT, these
> > regions are marked as reserved by calling memblock_reserve(base, size).
> > Where: base = base address of the reserved region.
> > size = the size of the reserved memory region.
> >
> > Depending on if that region is defined using the "no-map" property,
> > memblock_mark_nomap(base, size) is also called.
> >
> > The "no-map" property is used to indicate to the operating system that a
> > mapping of the specified region must NOT be created. This also means
> > that no access (including speculative accesses) is allowed on this
> > region of memory except when it is coming from the device driver that
> > this region of memory is being reserved for.[1]
> >
> > Therefore, it is important to call memblock_reserve() and
> > memblock_mark_nomap() on all the reserved memory regions before the
> > system sets up the page tables so that the system does not unknowingly
> > include any of the no-map reserved memory regions in the memory map.
> >
> > There are two ways to define how/where a reserved memory region is
> > placed in memory:
> > i) Statically-placed reserved memory regions
> > i.e. regions defined with a set start address and size using the
> > "reg" property in the DT.
> > ii) Dynamically-placed reserved memory regions.
> > i.e. regions defined by specifying a range of addresses where they can
> > be placed in memory using the "alloc-ranges" and "size" properties
> > in the DT.
> >
> > The dynamically-placed reserved memory regions get assigned a start
> > address only at runtime. And this needs to be done before the page
> > tables are setup so that memblock_reserve() and memblock_mark_nomap()
> > can be called on the allocated region as explained above.
> > Since the dynamically allocated reserved_mem array can only be
> > available after the page tables have been setup, the information for
> > the dynamically-placed reserved memory regions needs to be stored
> > somewhere temporarily until the reserved_mem array is available.
> >
> > Therefore, this series makes use of a temporary static array to store
> > the information of the dynamically-placed reserved memory regions until
> > the reserved_mem array is allocated.
> > Once the reserved_mem array is available, the information is copied over
> > from the temporary array into the reserved_mem array, and the memory for
> > the temporary array is freed back to the system.
> >
> > The information for the statically-placed reserved memory regions does
> > not need to be stored in a temporary array because their starting
> > address is already stored in the devicetree.
> > Once the reserved_mem array is allocated, the information for the
> > statically-placed reserved memory regions is added to the array.
> >
> > Note:
> > Because of the use of a temporary array to store the information of the
> > dynamically-placed reserved memory regions, there is still a limit of
> > 64 for this particular kind of reserved memory region.
> > From my observation, these regions are typically few in number, so I
> > do not expect this to be an issue for now.
>
>
> This series (in particular the first patch) broke boot on Intel Meteor
> Lake-P. Taking Linux next of 20240819 with these being reverted makes
> things work again.
Looks like this provides some detail:
https://lore.kernel.org/all/202408192157.8d8fe8a9-oliver.sang@intel.com/
I've dropped the patches for now.
> Taking into account bisectability issue (that's how I noticed the issue
> in the first place) I think it would be nice to have no such patches at
> all in the respective subsystem tree. On my side I may help with testing
> whatever solution or next version provides.
I don't follow what you are asking for? That the patches should be
bisectable? Well, yes, of course, but I don't verify that typically.
Patch 1 builds fine for me, so I'm not sure what issue you see.
Rob
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restructure how the reserved memory regions are processed Oreoluwa Babatunde
2024-08-16 21:07 ` Rob Herring (Arm)
@ 2024-08-19 22:04 ` Rob Herring
2024-08-21 23:41 ` Oreoluwa Babatunde
1 sibling, 1 reply; 12+ messages in thread
From: Rob Herring @ 2024-08-19 22:04 UTC (permalink / raw)
To: Oreoluwa Babatunde
Cc: saravanak, klarasmodin, aisheng.dong, hch, m.szyprowski,
robin.murphy, devicetree, linux-kernel, iommu, will,
catalin.marinas, kernel
On Fri, Aug 9, 2024 at 1:48 PM Oreoluwa Babatunde
<quic_obabatun@quicinc.com> wrote:
>
> Reserved memory regions defined in the devicetree can be broken up into
> two groups:
> i) Statically-placed reserved memory regions
> i.e. regions defined with a static start address and size using the
> "reg" property.
> ii) Dynamically-placed reserved memory regions.
> i.e. regions defined by specifying an address range where they can be
> placed in memory using the "alloc-ranges" and "size" properties.
>
> These regions are processed and set aside at boot time.
> This is done in two stages as seen below:
>
> Stage 1:
> At this stage, fdt_scan_reserved_mem() scans through the child nodes of
> the reserved_memory node using the flattened devicetree and does the
> following:
>
> 1) If the node represents a statically-placed reserved memory region,
> i.e. if it is defined using the "reg" property:
> - Call memblock_reserve() or memblock_mark_nomap() as needed.
> - Add the information for that region into the reserved_mem array
> using fdt_reserved_mem_save_node().
> i.e. fdt_reserved_mem_save_node(node, name, base, size).
>
> 2) If the node represents a dynamically-placed reserved memory region,
> i.e. if it is defined using "alloc-ranges" and "size" properties:
> - Add the information for that region to the reserved_mem array with
> the starting address and size set to 0.
> i.e. fdt_reserved_mem_save_node(node, name, 0, 0).
> Note: This region is saved to the array with a starting address of 0
> because a starting address is not yet allocated for it.
>
> Stage 2:
> After iterating through all the reserved memory nodes and storing their
> relevant information in the reserved_mem array, fdt_init_reserved_mem() is
> called and does the following:
>
> 1) For statically-placed reserved memory regions:
> - Call the region specific init function using
> __reserved_mem_init_node().
> 2) For dynamically-placed reserved memory regions:
> - Call __reserved_mem_alloc_size() which is used to allocate memory
> for each of these regions, and mark them as nomap if they have the
> nomap property specified in the DT.
> - Call the region specific init function.
>
> The current size of the reserved_mem array is 64, as defined by
> MAX_RESERVED_REGIONS. This means that there is a limit of 64 on
> how many reserved memory regions can be specified on a system.
> As systems continue to grow more complex, the number of
> reserved memory regions needed is also growing and is starting to hit
> this 64-count limit, hence the need to make the reserved_mem array
> dynamically sized (i.e. dynamically allocating memory for the
> reserved_mem array using memblock_alloc_*).
>
> On architectures such as arm64, memory allocated using memblock is
> writable only after the page tables have been setup. This means that if
> the reserved_mem array is going to be dynamically allocated, it needs to
> happen after the page tables have been setup, not before.
>
> Since the reserved memory regions are currently being processed and
> added to the array before the page tables are setup, there is a need to
> change the order in which some of the processing is done to allow for
> the reserved_mem array to be dynamically sized.
>
> It is possible to process the statically-placed reserved memory regions
> without needing to store them in the reserved_mem array until after the
> page tables have been setup because all the information stored in the
> array is readily available in the devicetree and can be referenced at
> any time.
> Dynamically-placed reserved memory regions on the other hand get
> assigned a start address only at runtime, and hence need a place to be
> stored once they are allocated since there is no other reference to the
> start address for these regions.
>
> Hence this patch changes the processing order of the reserved memory
> regions in the following ways:
>
> Step 1:
> fdt_scan_reserved_mem() scans through the child nodes of
> the reserved_memory node using the flattened devicetree and does the
> following:
>
> 1) If the node represents a statically-placed reserved memory region,
> i.e. if it is defined using the "reg" property:
> - Call memblock_reserve() or memblock_mark_nomap() as needed.
> - Call the region specific initialization function for the region
> using fdt_init_reserved_mem_node().
>
> 2) If the node represents a dynamically-placed reserved memory region,
> i.e. if it is defined using "alloc-ranges" and "size" properties:
> - Call __reserved_mem_alloc_size() which will:
> i) Allocate memory for the reserved region and call
> memblock_mark_nomap() as needed.
> ii) Call the region specific initialization function using
> fdt_init_reserved_mem_node().
> iii) Save the region information in the reserved_mem array using
> fdt_reserved_mem_save_node().
>
> Step 2:
> 1) This stage of the reserved memory processing is now only used to add
> the statically-placed reserved memory regions into the reserved_mem
> array using fdt_scan_reserved_mem_reg_nodes().
>
> 2) This step is also moved to be after the page tables have been
> setup. Moving this will allow us to replace the reserved_mem
> array with a dynamically sized array before storing the rest of
> these regions.
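The reordered two-step flow can be sketched the same way (again a hypothetical simulation; function names map to the real ones only via the comments, and the stand-in address is arbitrary):

```c
#include <assert.h>

struct region { const char *name; unsigned long base, size; int is_static; };

static struct region saved[8]; /* stands in for the reserved_mem array */
static int saved_count;

/* Step 1: before page-table setup. Static regions are reserved and
 * initialized but NOT saved; dynamic regions are allocated, initialized,
 * and saved immediately, since their address exists nowhere else. */
static void step1_scan(struct region *nodes, int n)
{
    for (int i = 0; i < n; i++) {
        if (nodes[i].is_static) {
            /* memblock_reserve()/memblock_mark_nomap() +
             * fdt_init_reserved_mem_node() */
        } else {
            nodes[i].base = 0x80000000UL; /* ~__reserved_mem_alloc_size() */
            nodes[i].size = 0x100000UL;
            saved[saved_count++] = nodes[i]; /* fdt_reserved_mem_save_node() */
        }
    }
}

/* Step 2: after page-table setup. Only the static regions still need
 * saving; their base/size can be re-read from the DT at any time. */
static void step2_save_statics(struct region *nodes, int n)
{
    for (int i = 0; i < n; i++)
        if (nodes[i].is_static)
            saved[saved_count++] = nodes[i]; /* fdt_scan_reserved_mem_reg_nodes() */
}
```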
>
> Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
> ---
> drivers/of/fdt.c | 5 +-
> drivers/of/of_private.h | 3 +-
> drivers/of/of_reserved_mem.c | 172 +++++++++++++++++++++++++----------
> 3 files changed, 131 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
> index 68103ad230ee..d4b7aaa70e31 100644
> --- a/drivers/of/fdt.c
> +++ b/drivers/of/fdt.c
> @@ -511,8 +511,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
> break;
> memblock_reserve(base, size);
> }
> -
> - fdt_init_reserved_mem();
> }
>
> /**
> @@ -1239,6 +1237,9 @@ void __init unflatten_device_tree(void)
> of_alias_scan(early_init_dt_alloc_memory_arch);
>
> unittest_unflatten_overlay_base();
> +
> + /* Save the statically-placed regions in the reserved_mem array */
> + fdt_scan_reserved_mem_reg_nodes();
I'm still not understanding why the unflattened API doesn't work
here? It was just used in of_alias_scan() above here.
The problem reported is this function uses initial_boot_params, but
that's NULL for x86.
Rob
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
2024-08-19 21:55 ` Rob Herring
@ 2024-08-19 22:33 ` Andy Shevchenko
0 siblings, 0 replies; 12+ messages in thread
From: Andy Shevchenko @ 2024-08-19 22:33 UTC (permalink / raw)
To: Rob Herring
Cc: Andy Shevchenko, Oreoluwa Babatunde, saravanak, klarasmodin,
aisheng.dong, hch, m.szyprowski, robin.murphy, devicetree,
linux-kernel, iommu, will, catalin.marinas, kernel
Mon, Aug 19, 2024 at 04:55:49PM -0500, Rob Herring wrote:
> On Mon, Aug 19, 2024 at 12:23 PM Andy Shevchenko
> <andy@black.fi.intel.com> wrote:
...
> > This series (in particular the first patch) broke boot on Intel Meteor
> > Lake-P. Taking Linux next of 20240819 with these being reverted makes
> > things work again.
>
> Looks like this provides some detail:
> https://lore.kernel.org/all/202408192157.8d8fe8a9-oliver.sang@intel.com/
>
> I've dropped the patches for now.
Thank you, that's what I have asked for!
> > Taking into account bisectability issue (that's how I noticed the issue
> > in the first place) I think it would be nice to have no such patches at
> > all in the respective subsystem tree. On my side I may help with testing
> > whatever solution or next version provides.
>
> I don't follow what you are asking for? That the patches should be
> bisectable? Well, yes, of course, but I don't verify that typically.
> Patch 1 builds fine for me, so I'm not sure what issue you see.
There are two types of bisectability:
1) compile-time;
2) run-time.
People often forget about #2, and that's exactly what I'm complaining about.
While bisecting another thing, I stumbled over this issue.
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH v7 1/2] of: reserved_mem: Restruture how the reserved memory regions are processed
2024-08-19 22:04 ` Rob Herring
@ 2024-08-21 23:41 ` Oreoluwa Babatunde
0 siblings, 0 replies; 12+ messages in thread
From: Oreoluwa Babatunde @ 2024-08-21 23:41 UTC (permalink / raw)
To: Rob Herring
Cc: saravanak, klarasmodin, aisheng.dong, hch, m.szyprowski,
robin.murphy, devicetree, linux-kernel, iommu, will,
catalin.marinas, kernel
On 8/19/2024 3:04 PM, Rob Herring wrote:
> On Fri, Aug 9, 2024 at 1:48 PM Oreoluwa Babatunde
> <quic_obabatun@quicinc.com> wrote:
>> Reserved memory regions defined in the devicetree can be broken up into
>> two groups:
>> i) Statically-placed reserved memory regions
>> i.e. regions defined with a static start address and size using the
>> "reg" property.
>> ii) Dynamically-placed reserved memory regions.
>> i.e. regions defined by specifying an address range where they can be
>> placed in memory using the "alloc-ranges" and "size" properties.
>>
>> [...]
>>
>> Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com>
>> ---
>> drivers/of/fdt.c | 5 +-
>> drivers/of/of_private.h | 3 +-
>> drivers/of/of_reserved_mem.c | 172 +++++++++++++++++++++++++----------
>> 3 files changed, 131 insertions(+), 49 deletions(-)
>>
>> diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
>> index 68103ad230ee..d4b7aaa70e31 100644
>> --- a/drivers/of/fdt.c
>> +++ b/drivers/of/fdt.c
>> @@ -511,8 +511,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
>> break;
>> memblock_reserve(base, size);
>> }
>> -
>> - fdt_init_reserved_mem();
>> }
>>
>> /**
>> @@ -1239,6 +1237,9 @@ void __init unflatten_device_tree(void)
>> of_alias_scan(early_init_dt_alloc_memory_arch);
>>
>> unittest_unflatten_overlay_base();
>> +
>> + /* Save the statically-placed regions in the reserved_mem array */
>> + fdt_scan_reserved_mem_reg_nodes();
Hi Rob,
> I'm still not understanding why the unflattened API doesn't work
> here? It was just used in of_alias_scan() above here.
The main reason the unflattened devicetree APIs do not work here is
that a reference to the fdt_node needs to be stored for the reserved
regions, and it can only be obtained by using the fdt APIs.
The fdt_node is needed by rmem_dma_setup(), rmem_cma_setup(), and
rmem_swiotlb_setup(). All of which are used to configure the reserved
memory regions during early bootup.
In my previous versions, I replaced fdt_node with device_node in struct
reserved_mem in order to leverage the unflatten_devicetree APIs, and the
above functions were being called after the page tables were setup.
As we found out later, those functions need to be called before the page
tables are setup in order for the reserved regions to be configured
correctly[1]. But since the unflatten_devicetree APIs are not available
before the page tables are setup, I had to switch back to using the
fdt_node which can only be gotten by using the fdt APIs.
[1] https://lore.kernel.org/all/002b6176-41b3-4888-abb1-978399d108b8@arm.com/
The only way I see that we can avoid using the fdt APIs here is if
we just don't store an fdt_node reference for the reserved regions
in reserved_mem. But I'm not sure if we want to do that.
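For reference, the fdt_node handle in question lives directly in struct reserved_mem. The sketch below mirrors the definition from include/linux/of_reserved_mem.h as of this series (check your tree; phys_addr_t is replaced with a stand-in so it compiles outside the kernel):

```c
#include <stddef.h>

typedef unsigned long phys_addr_t; /* stand-in for the kernel type */
struct reserved_mem_ops;

/* Mirrors struct reserved_mem from include/linux/of_reserved_mem.h;
 * fdt_node is an offset into the flattened DT, which is why the fdt
 * APIs are needed to populate it. */
struct reserved_mem {
    const char                    *name;
    unsigned long                 fdt_node;
    const struct reserved_mem_ops *ops;
    phys_addr_t                   base;
    phys_addr_t                   size;
    void                          *priv;
};
```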
> The problem reported is this function uses initial_boot_params, but
> that's NULL for x86.
ack
Thank you,
Oreoluwa.
* Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
2024-08-19 17:23 ` Andy Shevchenko
2024-08-19 21:55 ` Rob Herring
@ 2024-08-30 16:35 ` Oreoluwa Babatunde
1 sibling, 0 replies; 12+ messages in thread
From: Oreoluwa Babatunde @ 2024-08-30 16:35 UTC (permalink / raw)
To: Andy Shevchenko
Cc: robh, saravanak, klarasmodin, aisheng.dong, hch, m.szyprowski,
robin.murphy, devicetree, linux-kernel, iommu, will,
catalin.marinas, kernel
On 8/19/2024 10:23 AM, Andy Shevchenko wrote:
> On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:
>> The reserved_mem array is used to store data for the different
>> reserved memory regions defined in the DT of a device. The array
>> stores information such as region name, node reference, start-address,
>> and size of the different reserved memory regions.
>>
>> The array is currently statically allocated with a size of
>> MAX_RESERVED_REGIONS(64). This means that any system that specifies a
>> number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
>> will not have enough space to store the information for all the regions.
>>
>> This can be fixed by making the reserved_mem array a dynamically sized
>> array which is allocated using memblock_alloc() based on the exact
>> number of reserved memory regions defined in the DT.
>>
>> On architectures such as arm64, memblock allocated memory is not
>> writable until after the page tables have been setup.
>> This is an issue because the current implementation initializes the
>> reserved memory regions and stores their information in the array before
>> the page tables are setup. Hence, dynamically allocating the
>> reserved_mem array and attempting to write information to it at this
>> point will fail.
>>
>> Therefore, the allocation of the reserved_mem array will need to be done
>> after the page tables have been setup, which means that the reserved
>> memory regions will also need to wait until after the page tables have
>> been setup to be stored in the array.
>>
>> When processing the reserved memory regions defined in the DT, these
>> regions are marked as reserved by calling memblock_reserve(base, size).
>> Where: base = base address of the reserved region.
>> size = the size of the reserved memory region.
>>
>> Depending on if that region is defined using the "no-map" property,
>> memblock_mark_nomap(base, size) is also called.
>>
>> The "no-map" property is used to indicate to the operating system that a
>> mapping of the specified region must NOT be created. This also means
>> that no access (including speculative accesses) is allowed on this
>> region of memory except when it is coming from the device driver that
>> this region of memory is being reserved for.[1]
>>
>> Therefore, it is important to call memblock_reserve() and
>> memblock_mark_nomap() on all the reserved memory regions before the
>> system sets up the page tables so that the system does not unknowingly
>> include any of the no-map reserved memory regions in the memory map.
>>
>> There are two ways to define how/where a reserved memory region is
>> placed in memory:
>> i) Statically-placed reserved memory regions
>> i.e. regions defined with a set start address and size using the
>> "reg" property in the DT.
>> ii) Dynamically-placed reserved memory regions.
>> i.e. regions defined by specifying a range of addresses where they can
>> be placed in memory using the "alloc-ranges" and "size" properties
>> in the DT.
>>
>> The dynamically-placed reserved memory regions get assigned a start
>> address only at runtime. And this needs to be done before the page
>> tables are setup so that memblock_reserve() and memblock_mark_nomap()
>> can be called on the allocated region as explained above.
>> Since the dynamically allocated reserved_mem array can only be
>> available after the page tables have been setup, the information for
>> the dynamically-placed reserved memory regions needs to be stored
>> somewhere temporarily until the reserved_mem array is available.
>>
>> Therefore, this series makes use of a temporary static array to store
>> the information of the dynamically-placed reserved memory regions until
>> the reserved_mem array is allocated.
>> Once the reserved_mem array is available, the information is copied over
>> from the temporary array into the reserved_mem array, and the memory for
>> the temporary array is freed back to the system.
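The temporary-array handoff described above could be sketched like this (malloc/memcpy stand in for the memblock calls; the function names are illustrative, not the series' exact helpers):

```c
#include <stdlib.h>
#include <string.h>

struct region { unsigned long base, size; };

#define MAX_DYN_REGIONS 64 /* the temporary array keeps the old 64 limit */
static struct region dyn_tmp[MAX_DYN_REGIONS];
static int dyn_count;

static struct region *reserved_mem; /* exact-sized, allocated late */
static int reserved_mem_count;

/* Early boot: remember a dynamically-placed region in the static
 * temporary array (memblock memory is not yet writable on arm64). */
static void save_dynamic_region(unsigned long base, unsigned long size)
{
    dyn_tmp[dyn_count].base = base;
    dyn_tmp[dyn_count].size = size;
    dyn_count++;
}

/* After page-table setup: allocate the exact-sized array, copy the
 * temporary entries over; the real series then frees the temporary
 * array's memory back to the system. */
static void init_reserved_mem_array(int total_regions)
{
    reserved_mem = malloc(total_regions * sizeof(*reserved_mem)); /* ~memblock_alloc() */
    memcpy(reserved_mem, dyn_tmp, dyn_count * sizeof(*dyn_tmp));
    reserved_mem_count = dyn_count;
}
```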
>>
>> The information for the statically-placed reserved memory regions does
>> not need to be stored in a temporary array because their starting
>> address is already stored in the devicetree.
>> Once the reserved_mem array is allocated, the information for the
>> statically-placed reserved memory regions is added to the array.
>>
>> Note:
>> Because of the use of a temporary array to store the information of the
>> dynamically-placed reserved memory regions, there still exists a
>> limitation of 64 for this particular kind of reserved memory regions.
>> From my observation, these regions are typically small in number and
>> hence I expect this to not be an issue for now.
>
> This series (in particular the first patch) broke boot on Intel Meteor
> Lake-P. Taking Linux next of 20240819 with these being reverted makes
> things work again.
>
> Taking into account bisectability issue (that's how I noticed the issue
> in the first place) I think it would be nice to have no such patches at
> all in the respective subsystem tree. On my side I may help with testing
> whatever solution or next version provides.
Hi Andy,
I have re-uploaded another version of my patches.
Please can you help test it on your platform to confirm
that the issue is no longer present?
https://lore.kernel.org/all/20240830162857.2821502-1-quic_obabatun@quicinc.com/
Thank you!
Oreoluwa
end of thread, other threads:[~2024-08-30 16:35 UTC | newest]
Thread overview: 12+ messages
2024-08-09 18:48 [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 1/2] of: reserved_mem: Restruture how the reserved memory regions are processed Oreoluwa Babatunde
2024-08-16 21:07 ` Rob Herring (Arm)
2024-08-19 22:04 ` Rob Herring
2024-08-21 23:41 ` Oreoluwa Babatunde
2024-08-09 18:48 ` [PATCH v7 2/2] of: reserved_mem: Add code to dynamically allocate reserved_mem array Oreoluwa Babatunde
2024-08-16 21:08 ` Rob Herring (Arm)
2024-08-09 20:09 ` [PATCH v7 0/2] Dynamic Allocation of the " Klara Modin
2024-08-19 17:23 ` Andy Shevchenko
2024-08-19 21:55 ` Rob Herring
2024-08-19 22:33 ` Andy Shevchenko
2024-08-30 16:35 ` Oreoluwa Babatunde