* [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows.
@ 2024-05-29 17:12 Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid() Jonathan Cameron
` (7 more replies)
0 siblings, 8 replies; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
RFC because:
- I'm relying heavily on comments people made back when Dan proposed
generic memblock based tracking of NUMA nodes, namely that his
approach was fine for arm64 (but not for other architectures).
- Final patch: I'm hoping someone will explain why the hot remove path
removes reserved memblocks. That currently breaks this approach as we
can't re-add memory at an address where we previously removed some.
- I'm not particularly confident in this area so this might be
stupidly broken in a way I've not considered.
On x86, CXL Fixed Memory Windows, as described by the CFMWS structures in
the ACPI CEDT, either result in a new NUMA node being assigned or extend an
existing NUMA node. Unlike ACPI memory hotplug (where the PXM is included
in the signalling and whatever SRAT told us is largely ignored), CXL NUMA
node assignment is based on static data available early in boot. Note
that whilst these windows define a range of physical addresses, until we
program the various address decoders and hotplug the relevant devices there
is no memory there.
The wrinkle is that the firmware may well have configured some CXL
memory already and be presenting it as normal system memory (in the
appropriate firmware tables etc).
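For reference, this is roughly what the existing code in
drivers/acpi/numa/srat.c does per CFMWS entry (a heavily simplified sketch
from memory, error handling dropped; see acpi_parse_cfmws() for the real
thing):

/* Simplified sketch of the existing CFMWS -> NUMA node handling, not the real code. */
static int __init cfmws_to_node(struct acpi_cedt_cfmws *cfmws, int *fake_pxm)
{
        u64 start = cfmws->base_hpa;
        u64 end = cfmws->base_hpa + cfmws->window_size;
        int node;

        /* If the SRAT already described (part of) this window, extend that node. */
        if (!numa_fill_memblks(start, end))
                return 0;

        /* Otherwise assign a new node via a fake proximity domain. */
        node = acpi_map_pxm_to_node(*fake_pxm);
        if (node == NUMA_NO_NODE)
                return -EINVAL;

        if (numa_add_memblk(node, start, end) < 0)
                return -EINVAL;

        node_set(node, numa_nodes_parsed);
        (*fake_pxm)++;
        return 0;
}

numa_fill_memblks() is where the x86 specificity mentioned below comes in;
patch 6 of this series adds an arm64 implementation backed by
memblock.reserved.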
Unfortunately, despite using some nice general-sounding functions, the
existing solution is somewhat x86 specific. This series is a first attempt
to support NUMA nodes for CXL memory on arm64. Note I tried or considered
a few different approaches:
- A new MEMBLOCK flag to indicate that the memblock was just for
NUMA mappings. That turned out to be fiddly as a lot of places
needed modifying.
- Adding completely separate handling of CFMWS entries.
That means handling them completely differently to SRAT entries.
- Reparsing the CEDT table at hotplug time and figuring out which
node to use based on the number of normal NUMA nodes plus the index
of the CEDT CFMWS entry. This solution looked likely to be messy
and potentially fragile.
So, not seeing a way forwards, I asked on the monthly CXL open
source sync call...
Dan Williams pointed out that a similar discussion was had a few years
ago, but a memblock approach was rejected because only arm64 uses
memblock as the single source of NUMA node information for
memory. Given I'm looking at arm64, that sounded perfect.
[PATCH v2 00/22] device-dax: Support sub-dividing soft-reserved ranges
https://lore.kernel.org/linux-mm/159457116473.754248.7879464730875147365.stgit@dwillia2-desk3.amr.corp.intel.com/
This series leverages two of Dan's patches with minor tweaks.
Very kind of Dan to write nice patches for arm64 support so
I've kept the original authorship as my changes were mainly code
movement.
The remainder of the series deals with the differences between CFMWS
address ranges and soft reserved ones - primarily that there is not necessarily
anything in the EFI memory map or similar so we need to add an entry.
The solution is a little ugly and this isn't an area of the kernel I know
at all well, so I'd love to hear suggestions of a better way to do this!
As I don't have an arm64 system with a mixture of firmware-configured
CXL memory and additional memory hotplugged by the OS, those
code paths were tested with a dirty hack that creates overlapping memblocks.
Dan Williams (2):
arm64: numa: Introduce a memory_add_physaddr_to_nid()
arm64: memblock: Introduce a generic phys_addr_to_target_node()
Jonathan Cameron (6):
mm: memblock: Add a means to add to memblock.reserved
arch_numa: Avoid onlining empty NUMA nodes
arch_numa: Make numa_add_memblk() set nid for memblock.reserved
regions
arm64: mm: numa_fill_memblks() to add a memblock.reserved region if
match.
acpi: srat: cxl: Skip zero length CXL fixed memory windows.
HACK: mm: memory_hotplug: Drop memblock_phys_free() call in
try_remove_memory()
arch/arm64/include/asm/sparsemem.h | 8 ++++
arch/arm64/mm/init.c | 77 ++++++++++++++++++++++++++++++
drivers/acpi/numa/srat.c | 5 ++
drivers/base/arch_numa.c | 12 +++++
include/linux/memblock.h | 10 ++++
include/linux/mm.h | 14 ++++++
mm/memblock.c | 33 +++++++++++--
mm/memory_hotplug.c | 2 +-
mm/mm_init.c | 29 ++++++++++-
9 files changed, 185 insertions(+), 5 deletions(-)
--
2.39.2
* [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid()
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:50 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node() Jonathan Cameron
` (6 subsequent siblings)
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
From: Dan Williams <dan.j.williams@intel.com>
Based heavily on Dan Williams' earlier attempt to introduce this
infrastructure for all architectures, so I've kept his authorship. [1]
arm64 stores its NUMA data in memblock. Add a memblock-generic
way to interrogate that data for memory_add_physaddr_to_nid().
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Jia He <justin.he@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Link: https://lore.kernel.org/r/159457120334.754248.12908401960465408733.stgit@dwillia2-desk3.amr.corp.intel.com [1]
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
arch/arm64/include/asm/sparsemem.h | 4 ++++
arch/arm64/mm/init.c | 29 +++++++++++++++++++++++++++++
2 files changed, 33 insertions(+)
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8a8acc220371..8dd1b6a718fa 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -26,4 +26,8 @@
#define SECTION_SIZE_BITS 27
#endif /* CONFIG_ARM64_64K_PAGES */
+#ifndef __ASSEMBLY__
+extern int memory_add_physaddr_to_nid(u64 addr);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* __ASSEMBLY__ */
#endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9b5ab6818f7f..f310cbd349ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -48,6 +48,35 @@
#include <asm/alternative.h>
#include <asm/xen/swiotlb-xen.h>
+#ifdef CONFIG_NUMA
+
+static int __memory_add_physaddr_to_nid(u64 addr)
+{
+ unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
+ int nid;
+
+ for_each_online_node(nid) {
+ get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+ if (pfn >= start_pfn && pfn <= end_pfn)
+ return nid;
+ }
+ return NUMA_NO_NODE;
+}
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+ int nid = __memory_add_physaddr_to_nid(start);
+
+ /* Default to node0 as not all callers are prepared for this to fail */
+ if (nid == NUMA_NO_NODE)
+ return 0;
+
+ return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA */
+
/*
* We need to be able to catch inadvertent references to memstart_addr
* that occur (potentially in generic code) before arm64_memblock_init()
--
2.39.2
* [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node()
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid() Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:52 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved Jonathan Cameron
` (5 subsequent siblings)
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
From: Dan Williams <dan.j.williams@intel.com>
Similar to how the generic memory_add_physaddr_to_nid() interrogates
memblock data for NUMA information, introduce
get_reserved_pfn_range_for_nid() to enable the same operation for
reserved memory ranges. Example memory ranges that are reserved, but
still have associated NUMA info, are persistent memory or Soft Reserved
(EFI_MEMORY_SP) memory.
This is Dan's patch but with the implementation of
phys_to_target_node() made arm64 specific.
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Jia He <justin.he@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Link: https://lore.kernel.org/r/159457120893.754248.7783260004248722175.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
arch/arm64/include/asm/sparsemem.h | 4 ++++
arch/arm64/mm/init.c | 22 ++++++++++++++++++++++
include/linux/memblock.h | 8 ++++++++
include/linux/mm.h | 14 ++++++++++++++
mm/memblock.c | 22 +++++++++++++++++++---
mm/mm_init.c | 29 ++++++++++++++++++++++++++++-
6 files changed, 95 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8dd1b6a718fa..5b483ad6d501 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -27,7 +27,11 @@
#endif /* CONFIG_ARM64_64K_PAGES */
#ifndef __ASSEMBLY__
+
extern int memory_add_physaddr_to_nid(u64 addr);
#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+
#endif /* __ASSEMBLY__ */
#endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f310cbd349ba..6a2f21b1bb58 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -75,6 +75,28 @@ int memory_add_physaddr_to_nid(u64 start)
}
EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+int phys_to_target_node(phys_addr_t start)
+{
+ unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(start);
+ int nid = __memory_add_physaddr_to_nid(start);
+
+ if (nid != NUMA_NO_NODE)
+ return nid;
+
+ /*
+ * Search reserved memory ranges since the memory address does
+ * not appear to be online
+ */
+ for_each_node_state(nid, N_POSSIBLE) {
+ get_reserved_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+ if (pfn >= start_pfn && pfn <= end_pfn)
+ return nid;
+ }
+
+ return NUMA_NO_NODE;
+}
+EXPORT_SYMBOL(phys_to_target_node);
+
#endif /* CONFIG_NUMA */
/*
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e2082240586d..c7d518a54359 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -281,6 +281,10 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
unsigned long *out_end_pfn, int *out_nid);
+void __next_reserved_pfn_range(int *idx, int nid,
+ unsigned long *out_start_pfn,
+ unsigned long *out_end_pfn, int *out_nid);
+
/**
* for_each_mem_pfn_range - early memory pfn range iterator
* @i: an integer used as loop variable
@@ -295,6 +299,10 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
+#define for_each_reserved_pfn_range(i, nid, p_start, p_end, p_nid) \
+ for (i = -1, __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid); \
+ i >= 0; __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid))
+
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
unsigned long *out_spfn,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..0c829b2d44fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3245,9 +3245,23 @@ void free_area_init(unsigned long *max_zone_pfn);
unsigned long node_map_pfn_alignment(void);
extern unsigned long absent_pages_in_range(unsigned long start_pfn,
unsigned long end_pfn);
+
+/*
+ * Allow archs to opt-in to keeping get_pfn_range_for_nid() available
+ * after boot.
+ */
+#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
+#define __init_or_memblock
+#else
+#define __init_or_memblock __init
+#endif
+
extern void get_pfn_range_for_nid(unsigned int nid,
unsigned long *start_pfn, unsigned long *end_pfn);
+extern void get_reserved_pfn_range_for_nid(unsigned int nid,
+ unsigned long *start_pfn, unsigned long *end_pfn);
+
#ifndef CONFIG_NUMA
static inline int early_pfn_to_nid(unsigned long pfn)
{
diff --git a/mm/memblock.c b/mm/memblock.c
index d09136e040d3..5498d5ea70b4 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1289,11 +1289,11 @@ void __init_memblock __next_mem_range_rev(u64 *idx, int nid,
/*
* Common iterator interface used to define for_each_mem_pfn_range().
*/
-void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+static void __init_memblock __next_memblock_pfn_range(int *idx, int nid,
unsigned long *out_start_pfn,
- unsigned long *out_end_pfn, int *out_nid)
+ unsigned long *out_end_pfn, int *out_nid,
+ struct memblock_type *type)
{
- struct memblock_type *type = &memblock.memory;
struct memblock_region *r;
int r_nid;
@@ -1319,6 +1319,22 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
*out_nid = r_nid;
}
+void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+ unsigned long *out_start_pfn,
+ unsigned long *out_end_pfn, int *out_nid)
+{
+ __next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+ &memblock.memory);
+}
+
+void __init_memblock __next_reserved_pfn_range(int *idx, int nid,
+ unsigned long *out_start_pfn,
+ unsigned long *out_end_pfn, int *out_nid)
+{
+ __next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+ &memblock.reserved);
+}
+
/**
* memblock_set_node - set node ID on memblock regions
* @base: base of area to set node ID for
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..1f6e29e60673 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1644,7 +1644,7 @@ static inline void alloc_node_mem_map(struct pglist_data *pgdat) { }
* provided by memblock_set_node(). If called for a node
* with no available memory, the start and end PFNs will be 0.
*/
-void __init get_pfn_range_for_nid(unsigned int nid,
+void __init_or_memblock get_pfn_range_for_nid(unsigned int nid,
unsigned long *start_pfn, unsigned long *end_pfn)
{
unsigned long this_start_pfn, this_end_pfn;
@@ -1662,6 +1662,33 @@ void __init get_pfn_range_for_nid(unsigned int nid,
*start_pfn = 0;
}
+/**
+ * get_reserved_pfn_range_for_nid - Return the start and end page frames for a node
+ * @nid: The nid to return the range for. If MAX_NUMNODES, the min and max PFN are returned.
+ * @start_pfn: Passed by reference. On return, it will have the node start_pfn.
+ * @end_pfn: Passed by reference. On return, it will have the node end_pfn.
+ *
+ * Mostly identical to get_pfn_range_for_nid() except it operates on
+ * reserved ranges rather than online memory.
+ */
+void __init_or_memblock get_reserved_pfn_range_for_nid(unsigned int nid,
+ unsigned long *start_pfn, unsigned long *end_pfn)
+{
+ unsigned long this_start_pfn, this_end_pfn;
+ int i;
+
+ *start_pfn = -1UL;
+ *end_pfn = 0;
+
+ for_each_reserved_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
+ *start_pfn = min(*start_pfn, this_start_pfn);
+ *end_pfn = max(*end_pfn, this_end_pfn);
+ }
+
+ if (*start_pfn == -1UL)
+ *start_pfn = 0;
+}
+
static void __init free_area_init_node(int nid)
{
pg_data_t *pgdat = NODE_DATA(nid);
--
2.39.2
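As an aside (not part of the patch): a minimal sketch of how a hotplug
consumer might end up using the two helpers from patches 1 and 2 together.
The function below is hypothetical, loosely modelled on what dax/kmem style
drivers do.

#include <linux/memory_hotplug.h>
#include <linux/numa.h>

/* Hypothetical consumer: pick a node for CXL memory appearing at @start. */
static int example_add_cxl_memory(u64 start, u64 size)
{
        /* May match a reserved (not yet populated) range, see patch 2. */
        int nid = phys_to_target_node(start);

        if (nid == NUMA_NO_NODE)
                nid = memory_add_physaddr_to_nid(start); /* falls back to node 0 */

        return add_memory(nid, start, size, MHP_NONE);
}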
* [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid() Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node() Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:53 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes Jonathan Cameron
` (4 subsequent siblings)
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
For CXL CFMWS regions, we need to add memblocks that may not be
in the system memory map so that their nid can be queried later.
Add a function to make this easy to do.
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
include/linux/memblock.h | 2 ++
mm/memblock.c | 11 +++++++++++
2 files changed, 13 insertions(+)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index c7d518a54359..9ac1ed8c3293 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -113,6 +113,8 @@ static inline void memblock_discard(void) {}
void memblock_allow_resize(void);
int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
enum memblock_flags flags);
+int memblock_add_reserved_node(phys_addr_t base, phys_addr_t size, int nid,
+ enum memblock_flags flags);
int memblock_add(phys_addr_t base, phys_addr_t size);
int memblock_remove(phys_addr_t base, phys_addr_t size);
int memblock_phys_free(phys_addr_t base, phys_addr_t size);
diff --git a/mm/memblock.c b/mm/memblock.c
index 5498d5ea70b4..8d02f75ec186 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -714,6 +714,17 @@ int __init_memblock memblock_add_node(phys_addr_t base, phys_addr_t size,
return memblock_add_range(&memblock.memory, base, size, nid, flags);
}
+int __init_memblock memblock_add_reserved_node(phys_addr_t base, phys_addr_t size,
+ int nid, enum memblock_flags flags)
+{
+ phys_addr_t end = base + size - 1;
+
+ memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
+ &base, &end, nid, flags, (void *)_RET_IP_);
+
+ return memblock_add_range(&memblock.reserved, base, size, nid, flags);
+}
+
/**
* memblock_add - add new memblock region
* @base: base address of the new region
--
2.39.2
* [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
` (2 preceding siblings ...)
2024-05-29 17:12 ` [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:53 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions Jonathan Cameron
` (3 subsequent siblings)
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
ACPI can declare NUMA nodes for memory that will come along later.
CXL Fixed Memory Windows may also be assigned NUMA nodes that
are initially empty. Currently the generic arch_numa handling will
online these empty nodes. This is inconsistent both with x86 and
with itself, as if we add memory and then remove it again the node
goes away.
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
drivers/base/arch_numa.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 5b59d133b6af..0630efb696ab 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -363,6 +363,11 @@ static int __init numa_register_nodes(void)
unsigned long start_pfn, end_pfn;
get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+ if (start_pfn >= end_pfn &&
+ !node_state(nid, N_CPU) &&
+ !node_state(nid, N_GENERIC_INITIATOR))
+ continue;
+
setup_node_data(nid, start_pfn, end_pfn);
node_set_online(nid);
}
--
2.39.2
* [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
` (3 preceding siblings ...)
2024-05-29 17:12 ` [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:54 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match Jonathan Cameron
` (2 subsequent siblings)
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Setting the reserved region entries to the appropriate Node ID means
that they can be used to establish the node to which we should add
hotplugged CXL memory within a CXL fixed memory window.
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
drivers/base/arch_numa.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 0630efb696ab..568dbabeb636 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -208,6 +208,13 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
start, (end - 1), nid);
return ret;
}
+ /* Also set reserved nodes nid */
+ ret = memblock_set_node(start, (end - start), &memblock.reserved, nid);
+ if (ret < 0) {
+ pr_err("memblock [0x%llx - 0x%llx] failed to add on node %d\n",
+ start, (end - 1), nid);
+ return ret;
+ }
node_set(nid, numa_nodes_parsed);
return ret;
--
2.39.2
* [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match.
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
` (4 preceding siblings ...)
2024-05-29 17:12 ` [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:54 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory() Jonathan Cameron
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
CXL memory hotplug relies on additional NUMA nodes being created
for any CXL Fixed Memory Window if there is no suitable one created
by system firmware. To detect whether system firmware has created one,
look for any normal memblock with a NUMA node (nid) set that overlaps
the Fixed Memory Window.
If one is found, add a region with the same nid to memblock.reserved
so we can match it later when CXL memory is hotplugged.
If not, add a region anyway because a suitable NUMA node will be
set later, so for now use NUMA_NO_NODE.
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
arch/arm64/mm/init.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6a2f21b1bb58..27941f22db1c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -50,6 +50,32 @@
#ifdef CONFIG_NUMA
+/*
+ * Scan existing memblocks and if this region overlaps with a region with
+ * a nid set, add a reserved memblock.
+ */
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+ struct memblock_region *region;
+
+ for_each_mem_region(region) {
+ int nid = memblock_get_region_node(region);
+
+ if (nid == NUMA_NO_NODE)
+ continue;
+ if (!(end < region->base || start >= region->base + region->size)) {
+ memblock_add_reserved_node(start, end - start, nid,
+ MEMBLOCK_RSRV_NOINIT);
+ return 0;
+ }
+ }
+
+ memblock_add_reserved_node(start, end - start, NUMA_NO_NODE,
+ MEMBLOCK_RSRV_NOINIT);
+
+ return NUMA_NO_MEMBLK;
+}
+
static int __memory_add_physaddr_to_nid(u64 addr)
{
unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
--
2.39.2
* [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows.
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
` (5 preceding siblings ...)
2024-05-29 17:12 ` [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-08-01 7:55 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory() Jonathan Cameron
7 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
One reported platform uses this nonsensical entry to represent
a disabled CFMWS. The cxl_acpi driver already correctly errors
out on seeing this, but that leaves an additional confusing node
in /sys/devices/system/node/possible plus wastes some space.
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
drivers/acpi/numa/srat.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index e3f26e71637a..28c963d5c51f 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -329,6 +329,11 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
int node;
cfmws = (struct acpi_cedt_cfmws *)header;
+
+ /* At least one firmware reports disabled entries with size 0 */
+ if (cfmws->window_size == 0)
+ return 0;
+
start = cfmws->base_hpa;
end = cfmws->base_hpa + cfmws->window_size;
--
2.39.2
* [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
` (6 preceding siblings ...)
2024-05-29 17:12 ` [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows Jonathan Cameron
@ 2024-05-29 17:12 ` Jonathan Cameron
2024-05-30 10:07 ` Oscar Salvador
2024-05-31 7:49 ` David Hildenbrand
7 siblings, 2 replies; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-29 17:12 UTC (permalink / raw)
To: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla
Cc: Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
I'm not sure what this is balancing, but if it is necessary then the reserved
memblock approach can't be used to stash NUMA node assignments, as after the
first add / remove cycle the entry is dropped and so is not available if
memory is re-added at the same HPA.
This patch is here to hopefully spur comments on what this call is there for!
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
mm/memory_hotplug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..3d8dd4749dfc 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
}
if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
- memblock_phys_free(start, size);
+ // memblock_phys_free(start, size);
memblock_remove(start, size);
}
--
2.39.2
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-29 17:12 ` [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory() Jonathan Cameron
@ 2024-05-30 10:07 ` Oscar Salvador
2024-05-30 12:14 ` Jonathan Cameron
2024-05-31 7:49 ` David Hildenbrand
1 sibling, 1 reply; 30+ messages in thread
From: Oscar Salvador @ 2024-05-30 10:07 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla,
Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Lorenzo Pieralisi, James Morse
On Wed, May 29, 2024 at 06:12:36PM +0100, Jonathan Cameron wrote:
> I'm not sure what this is balancing, but it if is necessary then the reserved
> memblock approach can't be used to stash NUMA node assignments as after the
> first add / remove cycle the entry is dropped so not available if memory is
> re-added at the same HPA.
It is balancing memory that was previously allocated via
memblock_phys_alloc{_range,_try_nid}().
memblock_phys_alloc_try_nid() is what does the heavy lifting, and it also calls
kmemleak_alloc_phys() to make kmemleak aware of that memory.
A quick idea that came to me is:
I think that it should be possible to 1) create a new memblock flag
(check 'enum memblock_flags') and 2) flag the range you want
(check memblock_setclr_flag()) with a function like
memblock_reserved_mark_{yourflag}().
Then, in memblock_phys_free() (or further down the path) we could check for
that flag and refuse to proceed if it is set.
Would that work?
I am not sure, but you might need to
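A rough sketch of what I mean (MEMBLOCK_RSRV_KEEP and the helper name are
made up, untested, and it would sit in mm/memblock.c):

/* Hypothetical: skip freeing reserved regions flagged as carrying NUMA info. */
static bool memblock_reserved_overlaps_kept(phys_addr_t base, phys_addr_t size)
{
        struct memblock_region *r;

        for_each_reserved_mem_region(r) {
                if (!(r->flags & MEMBLOCK_RSRV_KEEP))   /* made-up flag */
                        continue;
                /* Overlap test against the range being freed. */
                if (base < r->base + r->size && base + size > r->base)
                        return true;
        }
        return false;
}

memblock_phys_free() (or its try_remove_memory() caller) would then bail out
early when this returns true.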
--
Oscar Salvador
SUSE Labs
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-30 10:07 ` Oscar Salvador
@ 2024-05-30 12:14 ` Jonathan Cameron
0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-30 12:14 UTC (permalink / raw)
To: Oscar Salvador
Cc: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla,
Andrew Morton, David Hildenbrand, Will Deacon, Jia He,
Mike Rapoport, linuxarm, catalin.marinas, Anshuman.Khandual,
Yuquan Wang, Lorenzo Pieralisi, James Morse
On Thu, 30 May 2024 12:07:37 +0200
Oscar Salvador <osalvador@suse.com> wrote:
> On Wed, May 29, 2024 at 06:12:36PM +0100, Jonathan Cameron wrote:
> > I'm not sure what this is balancing, but it if is necessary then the reserved
> > memblock approach can't be used to stash NUMA node assignments as after the
> > first add / remove cycle the entry is dropped so not available if memory is
> > re-added at the same HPA.
>
> It is balancing previously allocated memory which was allocated via
> memblock_phys_alloc{_range,try_nid}.
> memblock_phys_alloc_try_nid() is who does the heavy-lifting, and also calls
> kmemleak_alloc_phys() to make kmemleak aware of that memory.
Thanks Oscar,
I'm struggling to find the call path that does that for the particular range being
released.
There are some calls to allocate node data, but those are for small amounts of data,
whereas I think this call is removing the full range of the hot-removed memory.
Is the intent just to get rid of the node-data-related reserved memblock if
it happens to be in the memory removed? If so it feels like the call should
really be targeting just that.
>
> A quick idea that came to me is:
> I think that it should be possible 1) create a new memory_block flag
> (check 'enum memblock_flags') and 2) flag the range you want with this
> range (check memblock_setclr_flag()) with a function like
> memblock_reserved_mark_{yourflag}.
>
> Then, in memblock_phys_free() (or down the path) we could check for that flag,
> and refuse to proceed if it is set.
I had tried a flag in the main memblock (rather than the reserved memblock)
a while back and that was horribly invasive, so I got a bit scared of touching
those flags. One used only in the reserved memblock might work
to bypass the removal of the reserved memblocks but carry out everything
else that call is intended to do. I'm still sketchy on what I'm bypassing
though!
Thanks,
Jonathan
>
> Would that work?
> I am not sure, but you might need to
>
>
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-29 17:12 ` [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory() Jonathan Cameron
2024-05-30 10:07 ` Oscar Salvador
@ 2024-05-31 7:49 ` David Hildenbrand
2024-05-31 9:48 ` Jonathan Cameron
2024-06-03 7:57 ` Mike Rapoport
1 sibling, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2024-05-31 7:49 UTC (permalink / raw)
To: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla
Cc: Andrew Morton, Will Deacon, Jia He, Mike Rapoport, linuxarm,
catalin.marinas, Anshuman.Khandual, Yuquan Wang, Oscar Salvador,
Lorenzo Pieralisi, James Morse
On 29.05.24 19:12, Jonathan Cameron wrote:
> I'm not sure what this is balancing, but it if is necessary then the reserved
> memblock approach can't be used to stash NUMA node assignments as after the
> first add / remove cycle the entry is dropped so not available if memory is
> re-added at the same HPA.
>
> This patch is here to hopefully spur comments on what this is there for!
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> mm/memory_hotplug.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 431b1f6753c0..3d8dd4749dfc 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> }
>
> if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> - memblock_phys_free(start, size);
> + // memblock_phys_free(start, size);
> memblock_remove(start, size);
> }
memblock_phys_free() works on memblock.reserved, memblock_remove() works
on memblock.memory.
If you take a look at the doc at the top of memblock.c:
memblock.memory: physical memory available to the system
memblock.reserved: regions that were allocated [during boot]
memblock.memory is supposed to be a superset of memblock.reserved. Your
"hack" here indicates that you somehow would be relying on the opposite
being true, which suggests that you are doing the wrong thing.
memblock_remove() indeed balances against memblock_add_node() for
hotplugged memory [add_memory_resource()]. There seems to be a case where we
would succeed in hotunplugging memory that was part of "memblock.reserved".
But how could that happen? I think the following way:
Once the buddy is up and running, memory allocated during early boot is
not freed back to memblock, but usually we simply go via something like
free_reserved_page(), not memblock_free() [because the buddy took over].
So one could end up unplugging memory that still resides in the
memblock.reserved set.
So with memblock_phys_free(), we are enforcing the invariant that
memblock.memory is a superset of memblock.reserved.
Likely, arm64 should store that node assignment elsewhere from where it
can be queried. Or it should be using something like
CONFIG_HAVE_MEMBLOCK_PHYS_MAP for these static windows.
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-31 7:49 ` David Hildenbrand
@ 2024-05-31 9:48 ` Jonathan Cameron
2024-05-31 9:55 ` David Hildenbrand
2024-06-03 7:57 ` Mike Rapoport
1 sibling, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2024-05-31 9:48 UTC (permalink / raw)
To: David Hildenbrand
Cc: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla,
Andrew Morton, Will Deacon, Jia He, Mike Rapoport, linuxarm,
catalin.marinas, Anshuman.Khandual, Yuquan Wang, Oscar Salvador,
Lorenzo Pieralisi, James Morse
On Fri, 31 May 2024 09:49:32 +0200
David Hildenbrand <david@redhat.com> wrote:
> On 29.05.24 19:12, Jonathan Cameron wrote:
> > I'm not sure what this is balancing, but it if is necessary then the reserved
> > memblock approach can't be used to stash NUMA node assignments as after the
> > first add / remove cycle the entry is dropped so not available if memory is
> > re-added at the same HPA.
> >
> > This patch is here to hopefully spur comments on what this is there for!
> >
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > ---
> > mm/memory_hotplug.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 431b1f6753c0..3d8dd4749dfc 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> > }
> >
> > if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > - memblock_phys_free(start, size);
> > + // memblock_phys_free(start, size);
> > memblock_remove(start, size);
> > }
>
> memblock_phys_free() works on memblock.reserved, memblock_remove() works
> on memblock.memory.
>
> If you take a look at the doc at the top of memblock.c:
>
> memblock.memory: physical memory available to the system
> memblock.reserved: regions that were allocated [during boot]
>
>
> memblock.memory is supposed to be a superset of memblock.reserved. Your
> "hack" here indicates that you somehow would be relying on the opposite
> being true, which indicates that you are doing the wrong thing.
>
>
> memblock_remove() indeed balances against memblock_add_node() for
> hotplugged memory [add_memory_resource()]. There seem to a case where we
> would succeed in hotunplugging memory that was part of "memblock.reserved".
>
> But how could that happen? I think the following way:
>
> Once the buddy is up and running, memory allocated during early boot is
> not freed back to memblock, but usually we simply go via something like
> free_reserved_page(), not memblock_free() [because the buddy took over].
> So one could end up unplugging memory that still resides in
> memblock.reserved set.
>
> So with memblock_phys_free(), we are enforcing the invariant that
> memblock.memory is a superset of memblock.reserved.
>
> Likely, arm64 should store that node assignment elsewhere from where it
> can be queried. Or it should be using something like
> CONFIG_HAVE_MEMBLOCK_PHYS_MAP for these static windows.
>
Hi David,
Thanks for the explanation and pointers. I'd rather avoid inventing a parallel
infrastructure for this (like x86 has for other reasons, but which is also used
for this purpose).
From a quick look, CONFIG_HAVE_MEMBLOCK_PHYS_MAP is documented in a fashion that
makes me think it's not directly appropriate (this isn't actual physical memory
available during boot), but the general approach of just adding another memblock
collection seems like it will work.
The hardest problem might be naming it. physmem_possible perhaps?
Filling that with anything found in the SRAT or CEDT should work for ACPI, but
I'm not sure whether we can fill it when neither of those is present. Maybe we
just don't bother, as for today's use case the CEDT needs to be present.
Maybe physmem_known_possible is the way to go. I'll give this approach a spin
and see how it goes.
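Very roughly what I have in mind (entirely hypothetical, not even compile
tested, and it would live in mm/memblock.c):

/*
 * Hypothetical sketch: a third collection recording address windows that may
 * gain memory after boot (e.g. CXL fixed memory windows), so their nid
 * survives add/remove cycles. Nothing like this exists today.
 */
#define INIT_PHYSMEM_POSSIBLE_REGIONS   8

static struct memblock_region physmem_possible_init_regions[INIT_PHYSMEM_POSSIBLE_REGIONS];

struct memblock_type physmem_known_possible = {
        .regions        = physmem_possible_init_regions,
        .max            = INIT_PHYSMEM_POSSIBLE_REGIONS,
        .name           = "physmem_known_possible",
};

int __init_memblock memblock_add_known_possible(phys_addr_t base, phys_addr_t size,
                                                int nid)
{
        return memblock_add_range(&physmem_known_possible, base, size, nid,
                                  MEMBLOCK_NONE);
}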
Jonathan
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-31 9:48 ` Jonathan Cameron
@ 2024-05-31 9:55 ` David Hildenbrand
2024-06-06 15:44 ` Mike Rapoport
0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2024-05-31 9:55 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Dan Williams, linux-cxl, linux-arm-kernel, Sudeep Holla,
Andrew Morton, Will Deacon, Jia He, Mike Rapoport, linuxarm,
catalin.marinas, Anshuman.Khandual, Yuquan Wang, Oscar Salvador,
Lorenzo Pieralisi, James Morse
On 31.05.24 11:48, Jonathan Cameron wrote:
> On Fri, 31 May 2024 09:49:32 +0200
> David Hildenbrand <david@redhat.com> wrote:
>
>> On 29.05.24 19:12, Jonathan Cameron wrote:
>>> I'm not sure what this is balancing, but it if is necessary then the reserved
>>> memblock approach can't be used to stash NUMA node assignments as after the
>>> first add / remove cycle the entry is dropped so not available if memory is
>>> re-added at the same HPA.
>>>
>>> This patch is here to hopefully spur comments on what this is there for!
>>>
>>> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>> ---
>>> mm/memory_hotplug.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 431b1f6753c0..3d8dd4749dfc 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
>>> }
>>>
>>> if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
>>> - memblock_phys_free(start, size);
>>> + // memblock_phys_free(start, size);
>>> memblock_remove(start, size);
>>> }
>>
>> memblock_phys_free() works on memblock.reserved, memblock_remove() works
>> on memblock.memory.
>>
>> If you take a look at the doc at the top of memblock.c:
>>
>> memblock.memory: physical memory available to the system
>> memblock.reserved: regions that were allocated [during boot]
>>
>>
>> memblock.memory is supposed to be a superset of memblock.reserved. Your
>> "hack" here indicates that you somehow would be relying on the opposite
>> being true, which indicates that you are doing the wrong thing.
>>
>>
>> memblock_remove() indeed balances against memblock_add_node() for
>> hotplugged memory [add_memory_resource()]. There seem to a case where we
>> would succeed in hotunplugging memory that was part of "memblock.reserved".
>>
>> But how could that happen? I think the following way:
>>
>> Once the buddy is up and running, memory allocated during early boot is
>> not freed back to memblock, but usually we simply go via something like
>> free_reserved_page(), not memblock_free() [because the buddy took over].
>> So one could end up unplugging memory that still resides in
>> memblock.reserved set.
>>
>> So with memblock_phys_free(), we are enforcing the invariant that
>> memblock.memory is a superset of memblock.reserved.
>>
>> Likely, arm64 should store that node assignment elsewhere from where it
>> can be queried. Or it should be using something like
>> CONFIG_HAVE_MEMBLOCK_PHYS_MAP for these static windows.
>>
>
> Hi David,
>
> Thanks for the explanation and pointers. I'd rather avoid inventing a parallel
> infrastructure for this (like x86 has for other reasons, but which is also used
> for this purpose).
Yes, although memblock feels a bit wrong, because it is targeted at
managing actual present memory, not properties about memory that could
become present later.
> From a quick look CONFIG_HAVE_MEMBLOCK_PHYS_MAP is documented in a fashion that
> makes me think it's not directly appropriate (this isn't actual physical memory
> available during boot) but the general approach of just adding another memblock
> collection seems like it will work.
Yes. As an alternative, modify the description of
CONFIG_HAVE_MEMBLOCK_PHYS_MAP.
>
> Hardest problem might be naming it. physmem_possible perhaps?
> Fill that with anything found in SRAT or CEDT should work for ACPI, but I'm not
> sure on whether we can fill it when neither of those is present. Maybe we just
> don't bother as for today's usecase CEDT needs to be present.
You likely only want something for these special windows, with these
special semantics. That makes it hard.
"physmem_possible" is a bit too generic for my taste, promising
semantics that might not hold true (we don't want all hotpluggable areas
to show up there).
>
> Maybe physmem_known_possible is the way to go. I'll give this approach a spin
> and see how it goes.
That goes in a better direction, or really "physmem_fixed_windows", because
"physmem_known_possible" also has a wrong smell to it.
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-31 7:49 ` David Hildenbrand
2024-05-31 9:48 ` Jonathan Cameron
@ 2024-06-03 7:57 ` Mike Rapoport
2024-06-03 9:14 ` David Hildenbrand
1 sibling, 1 reply; 30+ messages in thread
From: Mike Rapoport @ 2024-06-03 7:57 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
> On 29.05.24 19:12, Jonathan Cameron wrote:
> > I'm not sure what this is balancing, but it if is necessary then the reserved
> > memblock approach can't be used to stash NUMA node assignments as after the
> > first add / remove cycle the entry is dropped so not available if memory is
> > re-added at the same HPA.
> >
> > This patch is here to hopefully spur comments on what this is there for!
> >
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > ---
> > mm/memory_hotplug.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 431b1f6753c0..3d8dd4749dfc 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> > }
> > if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > - memblock_phys_free(start, size);
> > + // memblock_phys_free(start, size);
> > memblock_remove(start, size);
> > }
>
> memblock_phys_free() works on memblock.reserved, memblock_remove() works on
> memblock.memory.
>
> If you take a look at the doc at the top of memblock.c:
>
> memblock.memory: physical memory available to the system
> memblock.reserved: regions that were allocated [during boot]
>
>
> memblock.memory is supposed to be a superset of memblock.reserved. Your
No it's not.
memblock.reserved is more of "if there is memory, don't touch it".
Some regions in memblock.reserved are boot time allocations and they are indeed a
subset of memblock.memory, but some are reservations done by firmware (e.g.
reserved memory in DT) that just might not have corresponding regions in
memblock.memory. This can happen, for example, when the same firmware runs on
devices with different memory configurations, but still wants to preserve
some physical addresses.
> "hack" here indicates that you somehow would be relying on the opposite
> being true, which indicates that you are doing the wrong thing.
I'm not sure about that, I still have to digest the patches :)
> memblock_remove() indeed balances against memblock_add_node() for hotplugged
> memory [add_memory_resource()]. There seem to a case where we would succeed
> in hotunplugging memory that was part of "memblock.reserved".
>
> But how could that happen? I think the following way:
>
> Once the buddy is up and running, memory allocated during early boot is not
> freed back to memblock, but usually we simply go via something like
> free_reserved_page(), not memblock_free() [because the buddy took over]. So
> one could end up unplugging memory that still resides in memblock.reserved
> set.
>
> So with memblock_phys_free(), we are enforcing the invariant that
> memblock.memory is a superset of memblock.reserved.
>
> Likely, arm64 should store that node assignment elsewhere from where it can
> be queried. Or it should be using something like
> CONFIG_HAVE_MEMBLOCK_PHYS_MAP for these static windows.
>
> --
> Cheers,
>
> David / dhildenb
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-03 7:57 ` Mike Rapoport
@ 2024-06-03 9:14 ` David Hildenbrand
2024-06-03 10:43 ` Mike Rapoport
0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2024-06-03 9:14 UTC (permalink / raw)
To: Mike Rapoport
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On 03.06.24 09:57, Mike Rapoport wrote:
> On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
>> On 29.05.24 19:12, Jonathan Cameron wrote:
>>> I'm not sure what this is balancing, but it if is necessary then the reserved
>>> memblock approach can't be used to stash NUMA node assignments as after the
>>> first add / remove cycle the entry is dropped so not available if memory is
>>> re-added at the same HPA.
>>>
>>> This patch is here to hopefully spur comments on what this is there for!
>>>
>>> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>> ---
>>> mm/memory_hotplug.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 431b1f6753c0..3d8dd4749dfc 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
>>> }
>>> if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
>>> - memblock_phys_free(start, size);
>>> + // memblock_phys_free(start, size);
>>> memblock_remove(start, size);
>>> }
>>
>> memblock_phys_free() works on memblock.reserved, memblock_remove() works on
>> memblock.memory.
>>
>> If you take a look at the doc at the top of memblock.c:
>>
>> memblock.memory: physical memory available to the system
>> memblock.reserved: regions that were allocated [during boot]
>>
>>
>> memblock.memory is supposed to be a superset of memblock.reserved. Your
>
> No it's not.
> memblock.reserved is more of "if there is memory, don't touch it".
Then we should certainly clarify that in the comments! :P
But for the memory hotunplug case, that's most likely why that code was
added. And it only deals with ordinary system RAM, not the weird
reservations you describe below.
> Some regions in memblock.reserved are boot time allocations and they are indeed a
> subset of memblock.memory, but some are reservations done by firmware (e.g.
> reserved memory in DT) that just might not have a corresponding regions in
> memblock.memory. It can happen for example, when the same firmware runs on
> devices with different memory configuration, but still wants to preserve
> some physical addresses.
Could this happen with a good old BIOS as well? Just curious.
>
>> "hack" here indicates that you somehow would be relying on the opposite
>> being true, which indicates that you are doing the wrong thing.
>
> I'm not sure about that, I still have to digest the patches :)
In any case, using "reserved" to store persistent data across
plug/unplug sounds wrong; but maybe I'm wrong :)
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-03 9:14 ` David Hildenbrand
@ 2024-06-03 10:43 ` Mike Rapoport
2024-06-03 20:53 ` David Hildenbrand
0 siblings, 1 reply; 30+ messages in thread
From: Mike Rapoport @ 2024-06-03 10:43 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
> On 03.06.24 09:57, Mike Rapoport wrote:
> > On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
> > > On 29.05.24 19:12, Jonathan Cameron wrote:
> > > > I'm not sure what this is balancing, but it if is necessary then the reserved
> > > > memblock approach can't be used to stash NUMA node assignments as after the
> > > > first add / remove cycle the entry is dropped so not available if memory is
> > > > re-added at the same HPA.
> > > >
> > > > This patch is here to hopefully spur comments on what this is there for!
> > > >
> > > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > > ---
> > > > mm/memory_hotplug.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > > > index 431b1f6753c0..3d8dd4749dfc 100644
> > > > --- a/mm/memory_hotplug.c
> > > > +++ b/mm/memory_hotplug.c
> > > > @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> > > > }
> > > > if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > > > - memblock_phys_free(start, size);
> > > > + // memblock_phys_free(start, size);
> > > > memblock_remove(start, size);
> > > > }
> > >
> > > memblock_phys_free() works on memblock.reserved, memblock_remove() works on
> > > memblock.memory.
> > >
> > > If you take a look at the doc at the top of memblock.c:
> > >
> > > memblock.memory: physical memory available to the system
> > > memblock.reserved: regions that were allocated [during boot]
> > >
> > >
> > > memblock.memory is supposed to be a superset of memblock.reserved. Your
> >
> > No it's not.
> > memblock.reserved is more of "if there is memory, don't touch it".
>
> Then we should certainly clarify that in the comments! :P
You are welcome to send a patch :-P
> But for the memory hotunplug case, that's most likely why that code was
> added. And it only deals with ordinary system RAM, not weird reservations
> you describe below.
The commit that added memblock_free() in the first place (f9126ab9241f
("memory-hotplug: fix wrong edge when hot add a new node")) does not really
describe why that was required :(
But at a quick glance it looks completely spurious.
> > Some regions in memblock.reserved are boot time allocations and they are indeed a
> > subset of memblock.memory, but some are reservations done by firmware (e.g.
> > reserved memory in DT) that just might not have corresponding regions in
> > memblock.memory. It can happen, for example, when the same firmware runs on
> > devices with different memory configurations, but still wants to preserve
> > some physical addresses.
>
> Could this happen with a good old BIOS as well? Just curious.
Yes. E.g. for E820_TYPE_SOFT_RESERVED.
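Roughly, and simplified from memory rather than quoted verbatim, the x86 side
does something like this in e820__memblock_setup():

	for (i = 0; i < e820_table->nr_entries; i++) {
		struct e820_entry *entry = &e820_table->entries[i];

		/* soft-reserved goes into memblock.reserved ... */
		if (entry->type == E820_TYPE_SOFT_RESERVED)
			memblock_reserve(entry->addr, entry->size);

		if (entry->type != E820_TYPE_RAM)
			continue;

		/* ... but only plain RAM is added to memblock.memory */
		memblock_add(entry->addr, entry->size);
	}

so a soft-reserved range ends up in memblock.reserved without ever being added
to memblock.memory.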
> > > "hack" here indicates that you somehow would be relying on the opposite
> > > being true, which indicates that you are doing the wrong thing.
> > I'm not sure about that, I still have to digest the patches :)
>
> In any case, using "reserved" to store persistent data across plug/unplug
> sounds wrong; but maybe I'm wrong :)
>
> --
> Cheers,
>
> David / dhildenb
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-03 10:43 ` Mike Rapoport
@ 2024-06-03 20:53 ` David Hildenbrand
2024-06-04 9:35 ` Mike Rapoport
0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2024-06-03 20:53 UTC (permalink / raw)
To: Mike Rapoport
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On 03.06.24 12:43, Mike Rapoport wrote:
> On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
>> On 03.06.24 09:57, Mike Rapoport wrote:
>>> On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
>>>> On 29.05.24 19:12, Jonathan Cameron wrote:
>>>>> I'm not sure what this is balancing, but if it is necessary then the reserved
>>>>> memblock approach can't be used to stash NUMA node assignments as after the
>>>>> first add / remove cycle the entry is dropped, so it is not available if memory is
>>>>> re-added at the same HPA.
>>>>>
>>>>> This patch is here to hopefully spur comments on what this is there for!
>>>>>
>>>>> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>>>> ---
>>>>> mm/memory_hotplug.c | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>> index 431b1f6753c0..3d8dd4749dfc 100644
>>>>> --- a/mm/memory_hotplug.c
>>>>> +++ b/mm/memory_hotplug.c
>>>>> @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
>>>>> }
>>>>> if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
>>>>> - memblock_phys_free(start, size);
>>>>> + // memblock_phys_free(start, size);
>>>>> memblock_remove(start, size);
>>>>> }
>>>>
>>>> memblock_phys_free() works on memblock.reserved, memblock_remove() works on
>>>> memblock.memory.
>>>>
>>>> If you take a look at the doc at the top of memblock.c:
>>>>
>>>> memblock.memory: physical memory available to the system
>>>> memblock.reserved: regions that were allocated [during boot]
>>>>
>>>>
>>>> memblock.memory is supposed to be a superset of memblock.reserved. Your
>>>
>>> No it's not.
>>> memblock.reserved is more of "if there is memory, don't touch it".
>>
>> Then we should certainly clarify that in the comments! :P
>
> You are welcome to send a patch :-P
I'll try once I understand what has changed since you documented that
in 2018 -- or whether we missed that detail back then already.
>
>> But for the memory hotunplug case, that's most likely why that code was
>> added. And it only deals with ordinary system RAM, not weird reservations
>> you describe below.
>
> The commit that added memblock_free() in the first place (f9126ab9241f
> ("memory-hotplug: fix wrong edge when hot add a new node")) does not really
> describe why that was required :(
>
> But at a quick glance it looks completely spurious.
There are more details [1] but I also did not figure out why the
memblock_free() was really required to resolve that issue.
[1] https://marc.info/?l=linux-kernel&m=142961156129456&w=2
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-03 20:53 ` David Hildenbrand
@ 2024-06-04 9:35 ` Mike Rapoport
2024-06-04 9:39 ` David Hildenbrand
0 siblings, 1 reply; 30+ messages in thread
From: Mike Rapoport @ 2024-06-04 9:35 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On Mon, Jun 03, 2024 at 10:53:03PM +0200, David Hildenbrand wrote:
> On 03.06.24 12:43, Mike Rapoport wrote:
> > On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
> > > On 03.06.24 09:57, Mike Rapoport wrote:
> > > > On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
> > > > > On 29.05.24 19:12, Jonathan Cameron wrote:
> > > > > > I'm not sure what this is balancing, but if it is necessary then the reserved
> > > > > > memblock approach can't be used to stash NUMA node assignments as after the
> > > > > > first add / remove cycle the entry is dropped, so it is not available if memory is
> > > > > > re-added at the same HPA.
> > > > > >
> > > > > > This patch is here to hopefully spur comments on what this is there for!
> > > > > >
> > > > > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > > > > ---
> > > > > > mm/memory_hotplug.c | 2 +-
> > > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > > > > > index 431b1f6753c0..3d8dd4749dfc 100644
> > > > > > --- a/mm/memory_hotplug.c
> > > > > > +++ b/mm/memory_hotplug.c
> > > > > > @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> > > > > > }
> > > > > > if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > > > > > - memblock_phys_free(start, size);
> > > > > > + // memblock_phys_free(start, size);
> > > > > > memblock_remove(start, size);
> > > > > > }
> > > > >
> > > > > memblock_phys_free() works on memblock.reserved, memblock_remove() works on
> > > > > memblock.memory.
> > > > >
> > > > > If you take a look at the doc at the top of memblock.c:
> > > > >
> > > > > memblock.memory: physical memory available to the system
> > > > > memblock.reserved: regions that were allocated [during boot]
> > > > >
> > > > >
> > > > > memblock.memory is supposed to be a superset of memblock.reserved. Your
> > > >
> > > > No it's not.
> > > > memblock.reserved is more of "if there is memory, don't touch it".
> > >
> > > Then we should certainly clarify that in the comments! :P
> >
> > You are welcome to send a patch :-P
>
> I'll try once I understand what has changed since you documented that in
> 2018 -- or whether we missed that detail back then already.
> > > But for the memory hotunplug case, that's most likely why that code was
> > > added. And it only deals with ordinary system RAM, not weird reservations
> > > you describe below.
> >
> > The commit that added memblock_free() in the first place (f9126ab9241f
> > ("memory-hotplug: fix wrong edge when hot add a new node")) does not really
> > describe why that was required :(
> >
> > But at a quick glance it looks completely spurious.
>
> There are more details [1] but I also did not figure out why the
> memblock_free() was really required to resolve that issue.
>
> [1] https://marc.info/?l=linux-kernel&m=142961156129456&w=2
The tinkering with memblock there and in f9126ab9241f seems bogus in the
context of memory hotplug on x86.
I believe that dropping that memblock_phys_free() is the right thing to do
regardless of this series. There's no corresponding memblock_alloc() and it
was added as part of a fix for hotunplug on x86 that anyway had memblock
discarded at that point.
> --
> Cheers,
>
> David / dhildenb
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-04 9:35 ` Mike Rapoport
@ 2024-06-04 9:39 ` David Hildenbrand
2024-06-05 8:00 ` Mike Rapoport
0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2024-06-04 9:39 UTC (permalink / raw)
To: Mike Rapoport
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On 04.06.24 11:35, Mike Rapoport wrote:
> On Mon, Jun 03, 2024 at 10:53:03PM +0200, David Hildenbrand wrote:
>> On 03.06.24 12:43, Mike Rapoport wrote:
>>> On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
>>>> On 03.06.24 09:57, Mike Rapoport wrote:
>>>>> On Fri, May 31, 2024 at 09:49:32AM +0200, David Hildenbrand wrote:
>>>>>> On 29.05.24 19:12, Jonathan Cameron wrote:
>>>>>>> I'm not sure what this is balancing, but if it is necessary then the reserved
>>>>>>> memblock approach can't be used to stash NUMA node assignments as after the
>>>>>>> first add / remove cycle the entry is dropped, so it is not available if memory is
>>>>>>> re-added at the same HPA.
>>>>>>>
>>>>>>> This patch is here to hopefully spur comments on what this is there for!
>>>>>>>
>>>>>>> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>>>>>> ---
>>>>>>> mm/memory_hotplug.c | 2 +-
>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>>>> index 431b1f6753c0..3d8dd4749dfc 100644
>>>>>>> --- a/mm/memory_hotplug.c
>>>>>>> +++ b/mm/memory_hotplug.c
>>>>>>> @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
>>>>>>> }
>>>>>>> if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
>>>>>>> - memblock_phys_free(start, size);
>>>>>>> + // memblock_phys_free(start, size);
>>>>>>> memblock_remove(start, size);
>>>>>>> }
>>>>>>
>>>>>> memblock_phys_free() works on memblock.reserved, memblock_remove() works on
>>>>>> memblock.memory.
>>>>>>
>>>>>> If you take a look at the doc at the top of memblock.c:
>>>>>>
>>>>>> memblock.memory: physical memory available to the system
>>>>>> memblock.reserved: regions that were allocated [during boot]
>>>>>>
>>>>>>
>>>>>> memblock.memory is supposed to be a superset of memblock.reserved. Your
>>>>>
>>>>> No it's not.
>>>>> memblock.reserved is more of "if there is memory, don't touch it".
>>>>
>>>> Then we should certainly clarify that in the comments! :P
>>>
>>> You are welcome to send a patch :-P
>>
>> I'll try once I understand what has changed since you documented that in
>> 2018 -- or whether we missed that detail back then already.
>
>>>> But for the memory hotunplug case, that's most likely why that code was
>>>> added. And it only deals with ordinary system RAM, not weird reservations
>>>> you describe below.
>>>
>>> The commit that added memblock_free() in the first place (f9126ab9241f
>>> ("memory-hotplug: fix wrong edge when hot add a new node")) does not really
>>> describe why that was required :(
>>>
>>> But at a quick glance it looks completely spurious.
>>
>> There are more details [1] but I also did not figure out why the
>> memblock_free() was really required to resolve that issue.
>>
>> [1] https://marc.info/?l=linux-kernel&m=142961156129456&w=2
>
> The tinkering with memblock there and in f9126ab9241f seems bogus in the
> context of memory hotplug on x86.
>
> I believe that dropping that memblock_phys_free() is the right thing to do
> regardless of this series. There's no corresponding memblock_alloc() and it
> was added as part of a fix for hotunplug on x86 that anyway had memblock
> discarded at that point.
So when we re-add that memory, we might still have ranges marked as "reserved".
It does sound weird, but you're the boss :)
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-04 9:39 ` David Hildenbrand
@ 2024-06-05 8:00 ` Mike Rapoport
2024-06-05 8:23 ` David Hildenbrand
0 siblings, 1 reply; 30+ messages in thread
From: Mike Rapoport @ 2024-06-05 8:00 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On Tue, Jun 04, 2024 at 11:39:27AM +0200, David Hildenbrand wrote:
> On 04.06.24 11:35, Mike Rapoport wrote:
> > On Mon, Jun 03, 2024 at 10:53:03PM +0200, David Hildenbrand wrote:
> > > On 03.06.24 12:43, Mike Rapoport wrote:
> > > > On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
> > >
> > > > The commit that added memblock_free() in the first place (f9126ab9241f
> > > > ("memory-hotplug: fix wrong edge when hot add a new node")) does not really
> > > > describe why that was required :(
> > > >
> > > > But at a quick glance it looks completely spurious.
> > >
> > > There are more details [1] but I also did not figure out why the
> > > memblock_free() was really required to resolve that issue.
> > >
> > > [1] https://marc.info/?l=linux-kernel&m=142961156129456&w=2
> > The tinkering with memblock there and in f9126ab9241f seems bogus in the
> > context of memory hotplug on x86.
> >
> > I believe that dropping that memblock_phys_free() is the right thing to do
> > regardless of this series. There's no corresponding memblock_alloc() and it
> > was added as part of a fix for hotunplug on x86 that anyway had memblock
> > discarded at that point.
>
> So when we re-add that memory, we might still have ranges marked as "reserved".
I don't see how anything can become reserved on the hotplug path unless
hotplug is possible before mm_core_init().
There are no memblock_reserve() calls in memory_hotplug.c, no memblock
allocations are possible after mm is initialized, and even if memblock_add()
needs to allocate memory, that will be done via slab.
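For reference, that is the memblock_double_array() path; heavily trimmed from
mm/memblock.c, once slab is up the region arrays grow via kmalloc():

	if (use_slab) {
		new_array = kmalloc(new_size, GFP_KERNEL);
		addr = new_array ? __pa(new_array) : 0;
	} else {
		/* only possible early, before the page allocator is up */
		addr = memblock_find_in_range(new_area_start + new_area_size,
					      memblock.current_limit,
					      new_alloc_size, PAGE_SIZE);
		new_array = addr ? __va(addr) : NULL;
	}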
> It does sound weird, but you're the boss :)
Nah, it's mm/memory_hotplug.c, so you are :)
But I can send a patch anyway :)
> --
> Cheers,
>
> David / dhildenb
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-06-05 8:00 ` Mike Rapoport
@ 2024-06-05 8:23 ` David Hildenbrand
0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2024-06-05 8:23 UTC (permalink / raw)
To: Mike Rapoport
Cc: Jonathan Cameron, Dan Williams, linux-cxl, linux-arm-kernel,
Sudeep Holla, Andrew Morton, Will Deacon, Jia He, Mike Rapoport,
linuxarm, catalin.marinas, Anshuman.Khandual, Yuquan Wang,
Oscar Salvador, Lorenzo Pieralisi, James Morse
On 05.06.24 10:00, Mike Rapoport wrote:
> On Tue, Jun 04, 2024 at 11:39:27AM +0200, David Hildenbrand wrote:
>> On 04.06.24 11:35, Mike Rapoport wrote:
>>> On Mon, Jun 03, 2024 at 10:53:03PM +0200, David Hildenbrand wrote:
>>>> On 03.06.24 12:43, Mike Rapoport wrote:
>>>>> On Mon, Jun 03, 2024 at 11:14:00AM +0200, David Hildenbrand wrote:
>>>>
>>>>> The commit that added memblock_free() in the first place (f9126ab9241f
>>>>> ("memory-hotplug: fix wrong edge when hot add a new node")) does not really
>>>>> describe why that was required :(
>>>>>
>>>>> But at a quick glance it looks completely spurious.
>>>>
>>>> There are more details [1] but I also did not figure out why the
>>>> memblock_free() was really required to resolve that issue.
>>>>
>>>> [1] https://marc.info/?l=linux-kernel&m=142961156129456&w=2
>>> The tinkering with memblock there and in f9126ab9241f seems bogus in the
>>> context of memory hotplug on x86.
>>>
>>> I believe that dropping that memblock_phys_free() is the right thing to do
>>> regardless of this series. There's no corresponding memblock_alloc() and it
>>> was added as part of a fix for hotunplug on x86 that anyway had memblock
>>> discarded at that point.
>>
>> So when we re-add that memory, we might still have ranges marked as "reserved".
>
> I don't see how anything can become reserved on the hotplug path unless
> hotplug is possible before mm_core_init().
> There are no memblock_reserve() calls in memory_hotplug.c, no memblock
> allocations are possible after mm is initialized, and even if memblock_add()
> needs to allocate memory, that will be done via slab.
I had the following in mind:
(1) DIMM is part of boot mem and some boot allocation ended up on it
(2) That boot allocation got freed after the buddy is already up
(memblock.reserved not updated)
(3) We succeed in offlining the memory and unplugging the DIMM
Now we have some "reserved" leftover from memory that is no longer
physically around.
(4) We re-plug a DIMM at that position, possibly with a different NUMA
assignment.
On bare metal, this is unlikely to happen. With current QEMU, it won't
happen because (hotunpluggable) DIMMs are usually not part of bootmem;
that is, e820 and friends only indicate it as a "hotpluggable but not
present memory" range. It could be possible after kexec (for example, we
add that memory to the e820 of the new kernel), but that's rather a
corner case.
(3) is already unlikely to happen, so removing that memblock_phys_free()
probably won't change anything in practice.
>
>> It does sound weird, but you're the boss :)
>
> Nah, it's mm/memory_hotplug.c, so you are :)
>
Well yes :) but it's your decision whether we want to use
memblock.reserved memory to store this persistent NUMA node assignment
for the fixed memory windows. Essentially another user of
memblock.reserved we should then document ;)
Removing the memblock_phys_free() call sounds like a requirement for that
use case (although it might make sense independently).
I'm not sure whether memblock.reserved is the right data structure for
this purpose, but if you think it is, Jonathan would have a green light on
the general approach in this RFC.
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
2024-05-31 9:55 ` David Hildenbrand
@ 2024-06-06 15:44 ` Mike Rapoport
0 siblings, 0 replies; 30+ messages in thread
From: Mike Rapoport @ 2024-06-06 15:44 UTC (permalink / raw)
To: David Hildenbrand, Jonathan Cameron, Dan Williams
Cc: linux-cxl, linux-arm-kernel, Sudeep Holla, Andrew Morton,
Will Deacon, Jia He, Mike Rapoport, linuxarm, catalin.marinas,
Anshuman.Khandual, Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi,
James Morse
On Fri, May 31, 2024 at 11:55:08AM +0200, David Hildenbrand wrote:
> On 31.05.24 11:48, Jonathan Cameron wrote:
> > On Fri, 31 May 2024 09:49:32 +0200
> > David Hildenbrand <david@redhat.com> wrote:
> >
> > > On 29.05.24 19:12, Jonathan Cameron wrote:
> > > > I'm not sure what this is balancing, but if it is necessary then the reserved
> > > > memblock approach can't be used to stash NUMA node assignments as after the
> > > > first add / remove cycle the entry is dropped, so it is not available if memory is
> > > > re-added at the same HPA.
> > > >
> > > > This patch is here to hopefully spur comments on what this is there for!
> > > >
> > > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > > ---
> > > > mm/memory_hotplug.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > > > index 431b1f6753c0..3d8dd4749dfc 100644
> > > > --- a/mm/memory_hotplug.c
> > > > +++ b/mm/memory_hotplug.c
> > > > @@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
> > > > }
> > > > if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > > > - memblock_phys_free(start, size);
> > > > + // memblock_phys_free(start, size);
> > > > memblock_remove(start, size);
> > > > }
> > >
> > > memblock_phys_free() works on memblock.reserved, memblock_remove() works
> > > on memblock.memory.
> > >
> > > If you take a look at the doc at the top of memblock.c:
> > >
> > > memblock.memory: physical memory available to the system
> > > memblock.reserved: regions that were allocated [during boot]
> > >
> > >
> > > memblock.memory is supposed to be a superset of memblock.reserved. Your
> > > "hack" here indicates that you somehow would be relying on the opposite
> > > being true, which indicates that you are doing the wrong thing.
> > >
> > >
> > > memblock_remove() indeed balances against memblock_add_node() for
> > > hotplugged memory [add_memory_resource()]. There seem to a case where we
> > > would succeed in hotunplugging memory that was part of "memblock.reserved".
> > >
> > > But how could that happen? I think the following way:
> > >
> > > Once the buddy is up and running, memory allocated during early boot is
> > > not freed back to memblock, but usually we simply go via something like
> > > free_reserved_page(), not memblock_free() [because the buddy took over].
> > > So one could end up unplugging memory that still resides in
> > > memblock.reserved set.
> > >
> > > So with memblock_phys_free(), we are enforcing the invariant that
> > > memblock.memory is a superset of memblock.reserved.
> > >
> > > Likely, arm64 should store that node assignment elsewhere from where it
> > > can be queried. Or it should be using something like
> > > CONFIG_HAVE_MEMBLOCK_PHYS_MAP for these static windows.
> > >
> >
> > Hi David,
> >
> > Thanks for the explanation and pointers. I'd rather avoid inventing a parallel
> > infrastructure for this (like x86 has for other reasons, but which is also used
> > for this purpose).
>
> Yes, although memblock feels a bit wrong, because it is targeted at managing
> actual present memory, not properties about memory that could become present
> later.
Right now memblock.reserved can have ranges that are not actually present,
but I still have doubts about using it to contain memory ranges that could
be hotplugged, and I'm leaning against it.
In general memblock as an infrastructure for dealing with memory ranges and
their properties seems to be a good fit, so either memblock.reserved or a
new memblock_type can be used to implement phys_to_target_node().
memblock.reserved may be less suitable than a new data structure because,
unlike x86::numa_reserved_meminfo, it mostly manages already present memory,
while x86::numa_reserved_meminfo contains regions that do not overlap with
RAM.
Now before we continue to bikeshed about the name for the new data
structure, how about this crazy idea: merge the parts of arch/x86/numa.c
dealing with 'struct numa_meminfo' into arch_numa.c and then remove some of
the redundancy, e.g. by implementing memory_add_physaddr_to_nid() with
memblock.memory instead of numa_meminfo.
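Something along these lines, just as a sketch of the idea (assuming memblock
retains nid information for memblock.memory after boot, i.e.
CONFIG_ARCH_KEEP_MEMBLOCK):

	int memory_add_physaddr_to_nid(u64 addr)
	{
		struct memblock_region *r;

		/* walk memblock.memory and return the owning node, if any */
		for_each_mem_region(r) {
			if (addr >= r->base && addr < r->base + r->size)
				return memblock_get_region_node(r);
		}

		/* not all callers cope with NUMA_NO_NODE */
		return 0;
	}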
It's an involved project indeed, but it has the advantage of shared
infrastructure for NUMA initialization of ACPI systems.
AFAIU, x86 has a parallel infrastructure because they were first, but I
don't see a fundamental reason why it has to remain that way.
There also were several attempts to implement fakenuma on arm64 or riscv
(sorry, can't find lore links right now), and with numa_meminfo in
arch_numa.c we get it almost for free.
> > From a quick look, CONFIG_HAVE_MEMBLOCK_PHYS_MAP is documented in a fashion that
> > makes me think it's not directly appropriate (this isn't actual physical memory
> > available during boot), but the general approach of just adding another memblock
> > collection seems like it will work.
>
> Yes. As an alternative, modify the description of
> CONFIG_HAVE_MEMBLOCK_PHYS_MAP.
I wouldn't want to touch physmem, except maybe moving it entirely to
arch/s390 and leaving it there.
> > The hardest problem might be naming it. physmem_possible perhaps?
> > Filling that with anything found in SRAT or CEDT should work for ACPI, but I'm not
> > sure whether we can fill it when neither of those is present. Maybe we just
> > don't bother, as for today's use case CEDT needs to be present.
>
> You likely only want something for these special windows, with these special
> semantics. That makes it hard.
>
> "physmem_possible" is a bit too generic for my taste, promising semantics
> that might not hold true (we don't want all hotpluggable areas to show up
> there).
>
> >
> > Maybe physmem_known_possible is the way to go. I'll give this approach a spin
> > and see how it goes.
>
> That goes in a better direction, or really "physmem_fixed_windows", because
> "physmem_known_possible" also has a wrong smell to it.
physmem_potential? ;-)
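Whatever it ends up being called, the mechanics could presumably mirror how
s390's physmem is wired up: a separate memblock_type, populated from SRAT/CEDT,
that never feeds the allocator. Very roughly, as a sketch only (the names are
invented, this is not existing code):

	/* mm/memblock.c -- alongside memblock.memory / memblock.reserved */
	static struct memblock_region memblock_potential_init_regions[INIT_MEMBLOCK_REGIONS];

	struct memblock_type physmem_potential = {
		.regions	= memblock_potential_init_regions,
		.cnt		= 1,	/* empty dummy entry */
		.max		= INIT_MEMBLOCK_REGIONS,
		.name		= "physmem_potential",
	};

	int __init_memblock memblock_add_potential_node(phys_addr_t base,
							phys_addr_t size, int nid)
	{
		/* record a possible-but-not-present range with its nid */
		return memblock_add_range(&physmem_potential, base, size, nid,
					  MEMBLOCK_NONE);
	}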
> --
> Cheers,
>
> David / dhildenb
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid()
2024-05-29 17:12 ` [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid() Jonathan Cameron
@ 2024-08-01 7:50 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:50 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:29PM +0100, Jonathan Cameron wrote:
> From: Dan Williams <dan.j.williams@intel.com>
>
> Based heavily on Dan Williams' earlier attempt to introduce this
> infrastructure for all architectures, so I've kept his authorship. [1]
>
> arm64 stores its NUMA data in memblock. Add a memblock-generic
> way to interrogate that data for memory_add_physaddr_to_nid().
>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Jia He <justin.he@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Link: https://lore.kernel.org/r/159457120334.754248.12908401960465408733.stgit@dwillia2-desk3.amr.corp.intel.com [1]
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> arch/arm64/include/asm/sparsemem.h | 4 ++++
> arch/arm64/mm/init.c | 29 +++++++++++++++++++++++++++++
> 2 files changed, 33 insertions(+)
>
> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> index 8a8acc220371..8dd1b6a718fa 100644
> --- a/arch/arm64/include/asm/sparsemem.h
> +++ b/arch/arm64/include/asm/sparsemem.h
> @@ -26,4 +26,8 @@
> #define SECTION_SIZE_BITS 27
> #endif /* CONFIG_ARM64_64K_PAGES */
>
> +#ifndef __ASSEMBLY__
> +extern int memory_add_physaddr_to_nid(u64 addr);
> +#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
> +#endif /* __ASSEMBLY__ */
> #endif
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 9b5ab6818f7f..f310cbd349ba 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -48,6 +48,35 @@
> #include <asm/alternative.h>
> #include <asm/xen/swiotlb-xen.h>
>
> +#ifdef CONFIG_NUMA
> +
> +static int __memory_add_physaddr_to_nid(u64 addr)
> +{
> + unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
> + int nid;
> +
> + for_each_online_node(nid) {
> + get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> + if (pfn >= start_pfn && pfn <= end_pfn)
> + return nid;
> + }
> + return NUMA_NO_NODE;
> +}
> +
> +int memory_add_physaddr_to_nid(u64 start)
> +{
> + int nid = __memory_add_physaddr_to_nid(start);
> +
> + /* Default to node0 as not all callers are prepared for this to fail */
> + if (nid == NUMA_NO_NODE)
> + return 0;
> +
> + return nid;
> +}
> +EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
> +
> +#endif /* CONFIG_NUMA */
> +
> /*
> * We need to be able to catch inadvertent references to memstart_addr
> * that occur (potentially in generic code) before arm64_memblock_init()
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node()
2024-05-29 17:12 ` [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node() Jonathan Cameron
@ 2024-08-01 7:52 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:52 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:30PM +0100, Jonathan Cameron wrote:
> From: Dan Williams <dan.j.williams@intel.com>
>
> Similar to how generic memory_add_physaddr_to_nid() interrogates
> memblock data for NUMA information, introduce
> get_reserved_pfn_range_for_nid() to enable the same operation for
> reserved memory ranges. Example memory ranges that are reserved, but
> still have associated numa-info are persistent memory or Soft Reserved
> (EFI_MEMORY_SP) memory.
>
> This is Dan's patch but with the implementation of
> phys_to_target_node() made arm64 specific.
>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Jia He <justin.he@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Link: https://lore.kernel.org/r/159457120893.754248.7783260004248722175.stgit@dwillia2-desk3.amr.corp.intel.com
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> arch/arm64/include/asm/sparsemem.h | 4 ++++
> arch/arm64/mm/init.c | 22 ++++++++++++++++++++++
> include/linux/memblock.h | 8 ++++++++
> include/linux/mm.h | 14 ++++++++++++++
> mm/memblock.c | 22 +++++++++++++++++++---
> mm/mm_init.c | 29 ++++++++++++++++++++++++++++-
> 6 files changed, 95 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> index 8dd1b6a718fa..5b483ad6d501 100644
> --- a/arch/arm64/include/asm/sparsemem.h
> +++ b/arch/arm64/include/asm/sparsemem.h
> @@ -27,7 +27,11 @@
> #endif /* CONFIG_ARM64_64K_PAGES */
>
> #ifndef __ASSEMBLY__
> +
> extern int memory_add_physaddr_to_nid(u64 addr);
> #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
> +extern int phys_to_target_node(phys_addr_t start);
> +#define phys_to_target_node phys_to_target_node
> +
> #endif /* __ASSEMBLY__ */
> #endif
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index f310cbd349ba..6a2f21b1bb58 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -75,6 +75,28 @@ int memory_add_physaddr_to_nid(u64 start)
> }
> EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
>
> +int phys_to_target_node(phys_addr_t start)
> +{
> + unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(start);
> + int nid = __memory_add_physaddr_to_nid(start);
> +
> + if (nid != NUMA_NO_NODE)
> + return nid;
> +
> + /*
> + * Search reserved memory ranges since the memory address does
> + * not appear to be online
> + */
> + for_each_node_state(nid, N_POSSIBLE) {
> + get_reserved_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> + if (pfn >= start_pfn && pfn <= end_pfn)
> + return nid;
> + }
> +
> + return NUMA_NO_NODE;
> +}
> +EXPORT_SYMBOL(phys_to_target_node);
> +
> #endif /* CONFIG_NUMA */
>
> /*
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index e2082240586d..c7d518a54359 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -281,6 +281,10 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
> void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> unsigned long *out_end_pfn, int *out_nid);
>
> +void __next_reserved_pfn_range(int *idx, int nid,
> + unsigned long *out_start_pfn,
> + unsigned long *out_end_pfn, int *out_nid);
> +
> /**
> * for_each_mem_pfn_range - early memory pfn range iterator
> * @i: an integer used as loop variable
> @@ -295,6 +299,10 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
> i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
>
> +#define for_each_reserved_pfn_range(i, nid, p_start, p_end, p_nid) \
> + for (i = -1, __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid); \
> + i >= 0; __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid))
> +
> #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
> unsigned long *out_spfn,
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 9849dfda44d4..0c829b2d44fa 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3245,9 +3245,23 @@ void free_area_init(unsigned long *max_zone_pfn);
> unsigned long node_map_pfn_alignment(void);
> extern unsigned long absent_pages_in_range(unsigned long start_pfn,
> unsigned long end_pfn);
> +
> +/*
> + * Allow archs to opt-in to keeping get_pfn_range_for_nid() available
> + * after boot.
> + */
> +#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
> +#define __init_or_memblock
> +#else
> +#define __init_or_memblock __init
> +#endif
> +
> extern void get_pfn_range_for_nid(unsigned int nid,
> unsigned long *start_pfn, unsigned long *end_pfn);
>
> +extern void get_reserved_pfn_range_for_nid(unsigned int nid,
> + unsigned long *start_pfn, unsigned long *end_pfn);
> +
> #ifndef CONFIG_NUMA
> static inline int early_pfn_to_nid(unsigned long pfn)
> {
> diff --git a/mm/memblock.c b/mm/memblock.c
> index d09136e040d3..5498d5ea70b4 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1289,11 +1289,11 @@ void __init_memblock __next_mem_range_rev(u64 *idx, int nid,
> /*
> * Common iterator interface used to define for_each_mem_pfn_range().
> */
> -void __init_memblock __next_mem_pfn_range(int *idx, int nid,
> +static void __init_memblock __next_memblock_pfn_range(int *idx, int nid,
> unsigned long *out_start_pfn,
> - unsigned long *out_end_pfn, int *out_nid)
> + unsigned long *out_end_pfn, int *out_nid,
> + struct memblock_type *type)
> {
> - struct memblock_type *type = &memblock.memory;
> struct memblock_region *r;
> int r_nid;
>
> @@ -1319,6 +1319,22 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
> *out_nid = r_nid;
> }
>
> +void __init_memblock __next_mem_pfn_range(int *idx, int nid,
> + unsigned long *out_start_pfn,
> + unsigned long *out_end_pfn, int *out_nid)
> +{
> + __next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
> + &memblock.memory);
> +}
> +
> +void __init_memblock __next_reserved_pfn_range(int *idx, int nid,
> + unsigned long *out_start_pfn,
> + unsigned long *out_end_pfn, int *out_nid)
> +{
> + __next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
> + &memblock.reserved);
> +}
> +
> /**
> * memblock_set_node - set node ID on memblock regions
> * @base: base of area to set node ID for
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index f72b852bd5b8..1f6e29e60673 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1644,7 +1644,7 @@ static inline void alloc_node_mem_map(struct pglist_data *pgdat) { }
> * provided by memblock_set_node(). If called for a node
> * with no available memory, the start and end PFNs will be 0.
> */
> -void __init get_pfn_range_for_nid(unsigned int nid,
> +void __init_or_memblock get_pfn_range_for_nid(unsigned int nid,
> unsigned long *start_pfn, unsigned long *end_pfn)
> {
> unsigned long this_start_pfn, this_end_pfn;
> @@ -1662,6 +1662,33 @@ void __init get_pfn_range_for_nid(unsigned int nid,
> *start_pfn = 0;
> }
>
> +/**
> + * get_reserved_pfn_range_for_nid - Return the start and end page frames for a node
> + * @nid: The nid to return the range for. If MAX_NUMNODES, the min and max PFN are returned.
> + * @start_pfn: Passed by reference. On return, it will have the node start_pfn.
> + * @end_pfn: Passed by reference. On return, it will have the node end_pfn.
> + *
> + * Mostly identical to get_pfn_range_for_nid() except it operates on
> + * reserved ranges rather than online memory.
> + */
> +void __init_or_memblock get_reserved_pfn_range_for_nid(unsigned int nid,
> + unsigned long *start_pfn, unsigned long *end_pfn)
> +{
> + unsigned long this_start_pfn, this_end_pfn;
> + int i;
> +
> + *start_pfn = -1UL;
> + *end_pfn = 0;
> +
> + for_each_reserved_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
> + *start_pfn = min(*start_pfn, this_start_pfn);
> + *end_pfn = max(*end_pfn, this_end_pfn);
> + }
> +
> + if (*start_pfn == -1UL)
> + *start_pfn = 0;
> +}
> +
> static void __init free_area_init_node(int nid)
> {
> pg_data_t *pgdat = NODE_DATA(nid);
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved
2024-05-29 17:12 ` [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved Jonathan Cameron
@ 2024-08-01 7:53 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:53 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:31PM +0100, Jonathan Cameron wrote:
> For CXL CFMWS regions, we need to add memblocks that may not be
> in the system memory map so that their nid can be queried later.
> Add a function to make this easy to do.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> include/linux/memblock.h | 2 ++
> mm/memblock.c | 11 +++++++++++
> 2 files changed, 13 insertions(+)
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index c7d518a54359..9ac1ed8c3293 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -113,6 +113,8 @@ static inline void memblock_discard(void) {}
> void memblock_allow_resize(void);
> int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
> enum memblock_flags flags);
> +int memblock_add_reserved_node(phys_addr_t base, phys_addr_t size, int nid,
> + enum memblock_flags flags);
> int memblock_add(phys_addr_t base, phys_addr_t size);
> int memblock_remove(phys_addr_t base, phys_addr_t size);
> int memblock_phys_free(phys_addr_t base, phys_addr_t size);
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 5498d5ea70b4..8d02f75ec186 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -714,6 +714,17 @@ int __init_memblock memblock_add_node(phys_addr_t base, phys_addr_t size,
> return memblock_add_range(&memblock.memory, base, size, nid, flags);
> }
>
> +int __init_memblock memblock_add_reserved_node(phys_addr_t base, phys_addr_t size,
> + int nid, enum memblock_flags flags)
> +{
> + phys_addr_t end = base + size - 1;
> +
> + memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
> + &base, &end, nid, flags, (void *)_RET_IP_);
> +
> + return memblock_add_range(&memblock.reserved, base, size, nid, flags);
> +}
> +
> /**
> * memblock_add - add new memblock region
> * @base: base address of the new region
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes
2024-05-29 17:12 ` [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes Jonathan Cameron
@ 2024-08-01 7:53 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:53 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:32PM +0100, Jonathan Cameron wrote:
> ACPI can declare NUMA nodes for memory that will come along later.
> CXL Fixed Memory Windows may also be assigned NUMA nodes that
> are initially empty. Currently the generic arch_numa handling will
> online these empty nodes. This is inconsistent both with x86 and
> with itself, as if we add memory and then remove it again the node
> goes away.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/base/arch_numa.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
> index 5b59d133b6af..0630efb696ab 100644
> --- a/drivers/base/arch_numa.c
> +++ b/drivers/base/arch_numa.c
> @@ -363,6 +363,11 @@ static int __init numa_register_nodes(void)
> unsigned long start_pfn, end_pfn;
>
> get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> + if (start_pfn >= end_pfn &&
> + !node_state(nid, N_CPU) &&
> + !node_state(nid, N_GENERIC_INITIATOR))
> + continue;
> +
> setup_node_data(nid, start_pfn, end_pfn);
> node_set_online(nid);
> }
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions
2024-05-29 17:12 ` [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions Jonathan Cameron
@ 2024-08-01 7:54 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:54 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:33PM +0100, Jonathan Cameron wrote:
> Setting the reserved region entries to the appropriate Node ID means
> that they can be used to establish the node to which we should add
> hotplugged CXL memory within a CXL fixed memory window.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/base/arch_numa.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
> index 0630efb696ab..568dbabeb636 100644
> --- a/drivers/base/arch_numa.c
> +++ b/drivers/base/arch_numa.c
> @@ -208,6 +208,13 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
> start, (end - 1), nid);
> return ret;
> }
> + /* Also set reserved nodes nid */
> + ret = memblock_set_node(start, (end - start), &memblock.reserved, nid);
> + if (ret < 0) {
> + pr_err("memblock [0x%llx - 0x%llx] failed to add on node %d\n",
> + start, (end - 1), nid);
> + return ret;
> + }
>
> node_set(nid, numa_nodes_parsed);
> return ret;
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match.
2024-05-29 17:12 ` [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match Jonathan Cameron
@ 2024-08-01 7:54 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:54 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:34PM +0100, Jonathan Cameron wrote:
> CXL memory hotplug relies on additional NUMA nodes being created
> for any CXL Fixed Memory Window if there is no suitable one created
> by system firmware. To detect whether system firmware has created one,
> look for any normal memblock with a NUMA node (nid) set that overlaps
> with the Fixed Memory Window.
>
> If one is found, add a region with the same nid to memblock.reserved
> so we can match it later when CXL memory is hotplugged.
> If not, add a region anyway because a suitable NUMA node will be
> set later. So for now use NUMA_NO_NODE.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> arch/arm64/mm/init.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 6a2f21b1bb58..27941f22db1c 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -50,6 +50,32 @@
>
> #ifdef CONFIG_NUMA
>
> +/*
> + * Scan existing memblocks and if this region overlaps with a region with
> + * a nid set, add a reserved memblock.
> + */
> +int __init numa_fill_memblks(u64 start, u64 end)
> +{
> + struct memblock_region *region;
> +
> + for_each_mem_region(region) {
> + int nid = memblock_get_region_node(region);
> +
> + if (nid == NUMA_NO_NODE)
> + continue;
> + if (!(end < region->base || start >= region->base + region->size)) {
> + memblock_add_reserved_node(start, end - start, nid,
> + MEMBLOCK_RSRV_NOINIT);
> + return 0;
> + }
> + }
> +
> + memblock_add_reserved_node(start, end - start, NUMA_NO_NODE,
> + MEMBLOCK_RSRV_NOINIT);
> +
> + return NUMA_NO_MEMBLK;
> +}
> +
> static int __memory_add_physaddr_to_nid(u64 addr)
> {
> unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
* Re: [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows.
2024-05-29 17:12 ` [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows Jonathan Cameron
@ 2024-08-01 7:55 ` Yuquan Wang
0 siblings, 0 replies; 30+ messages in thread
From: Yuquan Wang @ 2024-08-01 7:55 UTC (permalink / raw)
To: Jonathan Cameron; +Cc: dan.j.williams, linux-cxl, linux-arm-kernel, chenbaozi
On Wed, May 29, 2024 at 06:12:35PM +0100, Jonathan Cameron wrote:
> One reported platform uses this nonsensical entry to represent
> a disabled CFMWS. The cxl_acpi driver already correctly errors
> out on seeing this, but that leaves an additional confusing node
> in /sys/devices/system/node/possible and wastes some space.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/acpi/numa/srat.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
> index e3f26e71637a..28c963d5c51f 100644
> --- a/drivers/acpi/numa/srat.c
> +++ b/drivers/acpi/numa/srat.c
> @@ -329,6 +329,11 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
> int node;
>
> cfmws = (struct acpi_cedt_cfmws *)header;
> +
> + /* At least one firmware reports disabled entries with size 0 */
> + if (cfmws->window_size == 0)
> + return 0;
> +
> start = cfmws->base_hpa;
> end = cfmws->base_hpa + cfmws->window_size;
>
> --
> 2.39.2
>
Tested-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Thread overview: 30+ messages
2024-05-29 17:12 [RFC PATCH 0/8] arm64/memblock: Handling of CXL Fixed Memory Windows Jonathan Cameron
2024-05-29 17:12 ` [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid() Jonathan Cameron
2024-08-01 7:50 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node() Jonathan Cameron
2024-08-01 7:52 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved Jonathan Cameron
2024-08-01 7:53 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes Jonathan Cameron
2024-08-01 7:53 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions Jonathan Cameron
2024-08-01 7:54 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match Jonathan Cameron
2024-08-01 7:54 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows Jonathan Cameron
2024-08-01 7:55 ` Yuquan Wang
2024-05-29 17:12 ` [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory() Jonathan Cameron
2024-05-30 10:07 ` Oscar Salvador
2024-05-30 12:14 ` Jonathan Cameron
2024-05-31 7:49 ` David Hildenbrand
2024-05-31 9:48 ` Jonathan Cameron
2024-05-31 9:55 ` David Hildenbrand
2024-06-06 15:44 ` Mike Rapoport
2024-06-03 7:57 ` Mike Rapoport
2024-06-03 9:14 ` David Hildenbrand
2024-06-03 10:43 ` Mike Rapoport
2024-06-03 20:53 ` David Hildenbrand
2024-06-04 9:35 ` Mike Rapoport
2024-06-04 9:39 ` David Hildenbrand
2024-06-05 8:00 ` Mike Rapoport
2024-06-05 8:23 ` David Hildenbrand