* [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL
@ 2025-08-22 3:41 Smita Koralahalli
2025-08-22 3:41 ` [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready Smita Koralahalli
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:41 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
This series aims to address long-standing conflicts between dax_hmem and
CXL when handling Soft Reserved memory ranges.
I have considered adding support for DAX_CXL_MODE_REGISTER, but I do not
yet have a solid approach. Since this came up in discussion yesterday,
I am sending out the current work and would appreciate input on how best
to handle the DAX_CXL_MODE_REGISTER case.
Reworked from Dan's patch:
https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/patch/?id=ab70c6227ee6165a562c215d9dcb4a1c55620d5d
Previous work:
https://lore.kernel.org/all/20250715180407.47426-1-Smita.KoralahalliChannabasappa@amd.com/
Smita Koralahalli (6):
dax/hmem, e820, resource: Defer Soft Reserved registration until hmem
is ready
dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved
ranges
dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem
dax/hmem: Defer Soft Reserved overlap handling until CXL region
assembly completes
dax/hmem: Reintroduce Soft Reserved ranges back into the iomem tree
cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM
deps
arch/x86/kernel/e820.c | 2 +-
drivers/cxl/core/region.c | 4 +-
drivers/dax/Kconfig | 3 +
drivers/dax/hmem/device.c | 4 +-
drivers/dax/hmem/hmem.c | 137 +++++++++++++++++++++++++++++++++++---
include/linux/ioport.h | 24 +++++++
kernel/resource.c | 73 +++++++++++++++++---
7 files changed, 222 insertions(+), 25 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
@ 2025-08-22 3:41 ` Smita Koralahalli
2025-09-01 2:59 ` Zhijian Li (Fujitsu)
2025-08-22 3:41 ` [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges Smita Koralahalli
` (5 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:41 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Insert Soft Reserved memory into a dedicated soft_reserve_resource tree
instead of the iomem_resource tree at boot.
Publishing Soft Reserved ranges into iomem too early conflicts with CXL
hotplug and leads to region assembly failures, especially when Soft
Reserved overlaps CXL regions.
Re-inserting these ranges into iomem will be handled in follow-up patches,
once CXL window publication ordering has stabilized and dax_hmem is ready
to consume them.
This avoids trimming or deleting resources later and provides a cleaner
handoff between EFI-defined memory and CXL resource management.
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
arch/x86/kernel/e820.c | 2 +-
drivers/dax/hmem/device.c | 4 +--
drivers/dax/hmem/hmem.c | 8 +++++
include/linux/ioport.h | 24 +++++++++++++
kernel/resource.c | 73 +++++++++++++++++++++++++++++++++------
5 files changed, 97 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index c3acbd26408b..aef1ff2cabda 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1153,7 +1153,7 @@ void __init e820__reserve_resources_late(void)
res = e820_res;
for (i = 0; i < e820_table->nr_entries; i++) {
if (!res->parent && res->end)
- insert_resource_expand_to_fit(&iomem_resource, res);
+ insert_resource_late(res);
res++;
}
diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c
index f9e1a76a04a9..22732b729017 100644
--- a/drivers/dax/hmem/device.c
+++ b/drivers/dax/hmem/device.c
@@ -83,8 +83,8 @@ static __init int hmem_register_one(struct resource *res, void *data)
static __init int hmem_init(void)
{
- walk_iomem_res_desc(IORES_DESC_SOFT_RESERVED,
- IORESOURCE_MEM, 0, -1, NULL, hmem_register_one);
+ walk_soft_reserve_res_desc(IORES_DESC_SOFT_RESERVED, IORESOURCE_MEM, 0,
+ -1, NULL, hmem_register_one);
return 0;
}
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index c18451a37e4f..d5b8f06d531e 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -73,10 +73,18 @@ static int hmem_register_device(struct device *host, int target_nid,
return 0;
}
+#ifdef CONFIG_EFI_SOFT_RESERVE
+ rc = region_intersects_soft_reserve(res->start, resource_size(res),
+ IORESOURCE_MEM,
+ IORES_DESC_SOFT_RESERVED);
+ if (rc != REGION_INTERSECTS)
+ return 0;
+#else
rc = region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
IORES_DESC_SOFT_RESERVED);
if (rc != REGION_INTERSECTS)
return 0;
+#endif
id = memregion_alloc(GFP_KERNEL);
if (id < 0) {
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index e8b2d6aa4013..889bc4982777 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -232,6 +232,9 @@ struct resource_constraint {
/* PC/ISA/whatever - the normal PC address spaces: IO and memory */
extern struct resource ioport_resource;
extern struct resource iomem_resource;
+#ifdef CONFIG_EFI_SOFT_RESERVE
+extern struct resource soft_reserve_resource;
+#endif
extern struct resource *request_resource_conflict(struct resource *root, struct resource *new);
extern int request_resource(struct resource *root, struct resource *new);
@@ -255,6 +258,22 @@ int adjust_resource(struct resource *res, resource_size_t start,
resource_size_t size);
resource_size_t resource_alignment(struct resource *res);
+
+#ifdef CONFIG_EFI_SOFT_RESERVE
+static inline void insert_resource_late(struct resource *new)
+{
+ if (new->desc == IORES_DESC_SOFT_RESERVED)
+ insert_resource_expand_to_fit(&soft_reserve_resource, new);
+ else
+ insert_resource_expand_to_fit(&iomem_resource, new);
+}
+#else
+static inline void insert_resource_late(struct resource *new)
+{
+ insert_resource_expand_to_fit(&iomem_resource, new);
+}
+#endif
+
/**
* resource_set_size - Calculate resource end address from size and start
* @res: Resource descriptor
@@ -409,6 +428,11 @@ walk_system_ram_res_rev(u64 start, u64 end, void *arg,
extern int
walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start, u64 end,
void *arg, int (*func)(struct resource *, void *));
+int walk_soft_reserve_res_desc(unsigned long desc, unsigned long flags,
+ u64 start, u64 end, void *arg,
+ int (*func)(struct resource *, void *));
+int region_intersects_soft_reserve(resource_size_t start, size_t size,
+ unsigned long flags, unsigned long desc);
struct resource *devm_request_free_mem_region(struct device *dev,
struct resource *base, unsigned long size);
diff --git a/kernel/resource.c b/kernel/resource.c
index f9bb5481501a..8479a99441e2 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -321,13 +321,14 @@ static bool is_type_match(struct resource *p, unsigned long flags, unsigned long
}
/**
- * find_next_iomem_res - Finds the lowest iomem resource that covers part of
- * [@start..@end].
+ * find_next_res - Finds the lowest resource that covers part of
+ * [@start..@end].
*
* If a resource is found, returns 0 and @*res is overwritten with the part
* of the resource that's within [@start..@end]; if none is found, returns
* -ENODEV. Returns -EINVAL for invalid parameters.
*
+ * @parent: resource tree root to search
* @start: start address of the resource searched for
* @end: end address of same resource
* @flags: flags which the resource must have
@@ -337,9 +338,9 @@ static bool is_type_match(struct resource *p, unsigned long flags, unsigned long
* The caller must specify @start, @end, @flags, and @desc
* (which may be IORES_DESC_NONE).
*/
-static int find_next_iomem_res(resource_size_t start, resource_size_t end,
- unsigned long flags, unsigned long desc,
- struct resource *res)
+static int find_next_res(struct resource *parent, resource_size_t start,
+ resource_size_t end, unsigned long flags,
+ unsigned long desc, struct resource *res)
{
struct resource *p;
@@ -351,7 +352,7 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
read_lock(&resource_lock);
- for_each_resource(&iomem_resource, p, false) {
+ for_each_resource(parent, p, false) {
/* If we passed the resource we are looking for, stop */
if (p->start > end) {
p = NULL;
@@ -382,16 +383,23 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
return p ? 0 : -ENODEV;
}
-static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
- unsigned long flags, unsigned long desc,
- void *arg,
- int (*func)(struct resource *, void *))
+static int find_next_iomem_res(resource_size_t start, resource_size_t end,
+ unsigned long flags, unsigned long desc,
+ struct resource *res)
+{
+ return find_next_res(&iomem_resource, start, end, flags, desc, res);
+}
+
+static int walk_res_desc(struct resource *parent, resource_size_t start,
+ resource_size_t end, unsigned long flags,
+ unsigned long desc, void *arg,
+ int (*func)(struct resource *, void *))
{
struct resource res;
int ret = -EINVAL;
while (start < end &&
- !find_next_iomem_res(start, end, flags, desc, &res)) {
+ !find_next_res(parent, start, end, flags, desc, &res)) {
ret = (*func)(&res, arg);
if (ret)
break;
@@ -402,6 +410,15 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
return ret;
}
+static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
+ unsigned long flags, unsigned long desc,
+ void *arg,
+ int (*func)(struct resource *, void *))
+{
+ return walk_res_desc(&iomem_resource, start, end, flags, desc, arg, func);
+}
+
+
/**
* walk_iomem_res_desc - Walks through iomem resources and calls func()
* with matching resource ranges.
@@ -426,6 +443,26 @@ int walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start,
}
EXPORT_SYMBOL_GPL(walk_iomem_res_desc);
+#ifdef CONFIG_EFI_SOFT_RESERVE
+struct resource soft_reserve_resource = {
+ .name = "Soft Reserved",
+ .start = 0,
+ .end = -1,
+ .desc = IORES_DESC_SOFT_RESERVED,
+ .flags = IORESOURCE_MEM,
+};
+EXPORT_SYMBOL_GPL(soft_reserve_resource);
+
+int walk_soft_reserve_res_desc(unsigned long desc, unsigned long flags,
+ u64 start, u64 end, void *arg,
+ int (*func)(struct resource *, void *))
+{
+ return walk_res_desc(&soft_reserve_resource, start, end, flags, desc,
+ arg, func);
+}
+EXPORT_SYMBOL_GPL(walk_soft_reserve_res_desc);
+#endif
+
/*
* This function calls the @func callback against all memory ranges of type
* System RAM which are marked as IORESOURCE_SYSTEM_RAM and IORESOUCE_BUSY.
@@ -648,6 +685,20 @@ int region_intersects(resource_size_t start, size_t size, unsigned long flags,
}
EXPORT_SYMBOL_GPL(region_intersects);
+int region_intersects_soft_reserve(resource_size_t start, size_t size,
+ unsigned long flags, unsigned long desc)
+{
+ int ret;
+
+ read_lock(&resource_lock);
+ ret = __region_intersects(&soft_reserve_resource, start, size, flags,
+ desc);
+ read_unlock(&resource_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(region_intersects_soft_reserve);
+
void __weak arch_remove_reservations(struct resource *avail)
{
}
--
2.17.1
* [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
2025-08-22 3:41 ` [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready Smita Koralahalli
@ 2025-08-22 3:41 ` Smita Koralahalli
2025-09-01 3:08 ` Zhijian Li (Fujitsu)
2025-08-22 3:41 ` [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem Smita Koralahalli
` (4 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:41 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Ensure that cxl_acpi has published CXL Window resources before dax_hmem
walks Soft Reserved ranges.
Replace MODULE_SOFTDEP("pre: cxl_acpi") with an explicit, synchronous
request_module("cxl_acpi"). MODULE_SOFTDEP() only guarantees eventual
loading; it does not ensure that the dependency has finished its init
before the current module runs. This can cause dax_hmem to start before
cxl_acpi has populated the resource tree, breaking detection of overlaps
between Soft Reserved and CXL Windows.
Also, request cxl_pci before dax_hmem walks Soft Reserved ranges. Unlike
cxl_acpi, cxl_pci attach is asynchronous and creates dependent devices
that trigger further module loads. Asynchronous probe flushing
(wait_for_device_probe()) is added later in the series in a deferred
context before dax_hmem makes ownership decisions for Soft Reserved
ranges.
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/dax/hmem/hmem.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index d5b8f06d531e..9277e5ea0019 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -146,6 +146,16 @@ static __init int dax_hmem_init(void)
{
int rc;
+ /*
+ * Ensure that cxl_acpi and cxl_pci have a chance to kick off
+ * CXL topology discovery at least once before scanning the
+ * iomem resource tree for IORES_DESC_CXL resources.
+ */
+ if (IS_ENABLED(CONFIG_DEV_DAX_CXL)) {
+ request_module("cxl_acpi");
+ request_module("cxl_pci");
+ }
+
rc = platform_driver_register(&dax_hmem_platform_driver);
if (rc)
return rc;
@@ -166,13 +176,6 @@ static __exit void dax_hmem_exit(void)
module_init(dax_hmem_init);
module_exit(dax_hmem_exit);
-/* Allow for CXL to define its own dax regions */
-#if IS_ENABLED(CONFIG_CXL_REGION)
-#if IS_MODULE(CONFIG_CXL_ACPI)
-MODULE_SOFTDEP("pre: cxl_acpi");
-#endif
-#endif
-
MODULE_ALIAS("platform:hmem*");
MODULE_ALIAS("platform:hmem_platform*");
MODULE_DESCRIPTION("HMEM DAX: direct access to 'specific purpose' memory");
--
2.17.1
* [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
2025-08-22 3:41 ` [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready Smita Koralahalli
2025-08-22 3:41 ` [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges Smita Koralahalli
@ 2025-08-22 3:41 ` Smita Koralahalli
2025-09-01 3:28 ` Zhijian Li (Fujitsu)
2025-08-22 3:42 ` [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes Smita Koralahalli
` (3 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:41 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Update Kconfig and runtime checks to better coordinate dax_cxl and dax_hmem
registration.
Add explicit Kconfig ordering so that CXL_ACPI and CXL_PCI must be
initialized before DEV_DAX_HMEM. This prevents dax_hmem from consuming
Soft Reserved ranges before CXL drivers have had a chance to claim them.
Replace IS_ENABLED(CONFIG_CXL_REGION) with IS_ENABLED(CONFIG_DEV_DAX_CXL)
so the code more precisely reflects when CXL-specific DAX coordination is
expected.
This ensures that ownership of Soft Reserved ranges is consistently
handed off to the CXL stack when DEV_DAX_CXL is configured.
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/dax/Kconfig | 2 ++
drivers/dax/hmem/hmem.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index d656e4c0eb84..3683bb3f2311 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -48,6 +48,8 @@ config DEV_DAX_CXL
tristate "CXL DAX: direct access to CXL RAM regions"
depends on CXL_BUS && CXL_REGION && DEV_DAX
default CXL_REGION && DEV_DAX
+ depends on CXL_ACPI >= DEV_DAX_HMEM
+ depends on CXL_PCI >= DEV_DAX_HMEM
help
CXL RAM regions are either mapped by platform-firmware
and published in the initial system-memory map as "System RAM", mapped
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index 9277e5ea0019..7ada820cb177 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -66,7 +66,7 @@ static int hmem_register_device(struct device *host, int target_nid,
long id;
int rc;
- if (IS_ENABLED(CONFIG_CXL_REGION) &&
+ if (IS_ENABLED(CONFIG_DEV_DAX_CXL) &&
region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
IORES_DESC_CXL) != REGION_DISJOINT) {
dev_dbg(host, "deferring range to CXL: %pr\n", res);
--
2.17.1
* [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
` (2 preceding siblings ...)
2025-08-22 3:41 ` [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem Smita Koralahalli
@ 2025-08-22 3:42 ` Smita Koralahalli
2025-09-01 4:01 ` Zhijian Li (Fujitsu)
2025-08-22 3:42 ` [PATCH 5/6] dax/hmem: Reintroduce Soft Reserved ranges back into the iomem tree Smita Koralahalli
` (2 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:42 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Previously, dax_hmem deferred to CXL only when an immediate resource
intersection with a CXL window was detected. This left a gap: if cxl_acpi
or cxl_pci probing, or CXL region assembly, had not yet completed, hmem
could prematurely claim ranges.
Fix this by introducing a dax_cxl_mode state machine and a deferred
work mechanism.
The new workqueue delays consideration of Soft Reserved overlaps until
the CXL subsystem has had a chance to complete its discovery and region
assembly. This avoids premature iomem claims, eliminates race conditions
with async cxl_pci probe, and provides a cleaner handoff between hmem and
CXL resource management.
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/dax/hmem/hmem.c | 72 +++++++++++++++++++++++++++++++++++++++--
1 file changed, 70 insertions(+), 2 deletions(-)
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index 7ada820cb177..90978518e5f4 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -58,9 +58,45 @@ static void release_hmem(void *pdev)
platform_device_unregister(pdev);
}
+static enum dax_cxl_mode {
+ DAX_CXL_MODE_DEFER,
+ DAX_CXL_MODE_REGISTER,
+ DAX_CXL_MODE_DROP,
+} dax_cxl_mode;
+
+static int handle_deferred_cxl(struct device *host, int target_nid,
+ const struct resource *res)
+{
+ if (region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
+ IORES_DESC_CXL) != REGION_DISJOINT) {
+ if (dax_cxl_mode == DAX_CXL_MODE_DROP)
+ dev_dbg(host, "dropping CXL range: %pr\n", res);
+ }
+ return 0;
+}
+
+struct dax_defer_work {
+ struct platform_device *pdev;
+ struct work_struct work;
+};
+
+static void process_defer_work(struct work_struct *_work)
+{
+ struct dax_defer_work *work = container_of(_work, typeof(*work), work);
+ struct platform_device *pdev = work->pdev;
+
+ /* relies on cxl_acpi and cxl_pci having had a chance to load */
+ wait_for_device_probe();
+
+ dax_cxl_mode = DAX_CXL_MODE_DROP;
+
+ walk_hmem_resources(&pdev->dev, handle_deferred_cxl);
+}
+
static int hmem_register_device(struct device *host, int target_nid,
const struct resource *res)
{
+ struct dax_defer_work *work = dev_get_drvdata(host);
struct platform_device *pdev;
struct memregion_info info;
long id;
@@ -69,8 +105,18 @@ static int hmem_register_device(struct device *host, int target_nid,
if (IS_ENABLED(CONFIG_DEV_DAX_CXL) &&
region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
IORES_DESC_CXL) != REGION_DISJOINT) {
- dev_dbg(host, "deferring range to CXL: %pr\n", res);
- return 0;
+ switch (dax_cxl_mode) {
+ case DAX_CXL_MODE_DEFER:
+ dev_dbg(host, "deferring range to CXL: %pr\n", res);
+ schedule_work(&work->work);
+ return 0;
+ case DAX_CXL_MODE_REGISTER:
+ dev_dbg(host, "registering CXL range: %pr\n", res);
+ break;
+ case DAX_CXL_MODE_DROP:
+ dev_dbg(host, "dropping CXL range: %pr\n", res);
+ return 0;
+ }
}
#ifdef CONFIG_EFI_SOFT_RESERVE
@@ -130,8 +176,30 @@ static int hmem_register_device(struct device *host, int target_nid,
return rc;
}
+static void kill_defer_work(void *_work)
+{
+ struct dax_defer_work *work = container_of(_work, typeof(*work), work);
+
+ cancel_work_sync(&work->work);
+ kfree(work);
+}
+
static int dax_hmem_platform_probe(struct platform_device *pdev)
{
+ struct dax_defer_work *work = kzalloc(sizeof(*work), GFP_KERNEL);
+ int rc;
+
+ if (!work)
+ return -ENOMEM;
+
+ work->pdev = pdev;
+ INIT_WORK(&work->work, process_defer_work);
+
+ rc = devm_add_action_or_reset(&pdev->dev, kill_defer_work, work);
+ if (rc)
+ return rc;
+
+ platform_set_drvdata(pdev, work);
return walk_hmem_resources(&pdev->dev, hmem_register_device);
}
--
2.17.1
* [PATCH 5/6] dax/hmem: Reintroduce Soft Reserved ranges back into the iomem tree
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
` (3 preceding siblings ...)
2025-08-22 3:42 ` [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes Smita Koralahalli
@ 2025-08-22 3:42 ` Smita Koralahalli
2025-08-22 3:42 ` [RFC PATCH 6/6] cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM deps Smita Koralahalli
2025-08-26 23:21 ` [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Alison Schofield
6 siblings, 0 replies; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:42 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Reworked from a patch by Alison Schofield <alison.schofield@intel.com>
Reintroduce Soft Reserved ranges into the iomem_resource tree for
dax_hmem to consume.
This restores visibility in /proc/iomem for ranges actively in use, while
avoiding the early-boot conflicts that occurred when Soft Reserved was
published into iomem before CXL window and region discovery.
Link: https://lore.kernel.org/linux-cxl/29312c0765224ae76862d59a17748c8188fb95f1.1692638817.git.alison.schofield@intel.com/
Co-developed-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
---
drivers/dax/hmem/hmem.c | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index 90978518e5f4..24a6e7e3d916 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -93,6 +93,40 @@ static void process_defer_work(struct work_struct *_work)
walk_hmem_resources(&pdev->dev, handle_deferred_cxl);
}
+static void remove_soft_reserved(void *data)
+{
+ struct resource *r = data;
+
+ remove_resource(r);
+ kfree(r);
+}
+
+static int add_soft_reserve_into_iomem(struct device *host,
+ const struct resource *res)
+{
+ struct resource *soft = kzalloc(sizeof(*soft), GFP_KERNEL);
+ int rc;
+
+ if (!soft)
+ return -ENOMEM;
+
+ *soft = DEFINE_RES_NAMED_DESC(res->start, (res->end - res->start + 1),
+ "Soft Reserved", IORESOURCE_MEM,
+ IORES_DESC_SOFT_RESERVED);
+
+ rc = insert_resource(&iomem_resource, soft);
+ if (rc) {
+ kfree(soft);
+ return rc;
+ }
+
+ rc = devm_add_action_or_reset(host, remove_soft_reserved, soft);
+ if (rc)
+ return rc;
+
+ return 0;
+}
+
static int hmem_register_device(struct device *host, int target_nid,
const struct resource *res)
{
@@ -125,6 +159,10 @@ static int hmem_register_device(struct device *host, int target_nid,
IORES_DESC_SOFT_RESERVED);
if (rc != REGION_INTERSECTS)
return 0;
+
+ rc = add_soft_reserve_into_iomem(host, res);
+ if (rc)
+ return rc;
#else
rc = region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
IORES_DESC_SOFT_RESERVED);
--
2.17.1
* [RFC PATCH 6/6] cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM deps
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
` (4 preceding siblings ...)
2025-08-22 3:42 ` [PATCH 5/6] dax/hmem: Reintroduce Soft Reserved ranges back into the iomem tree Smita Koralahalli
@ 2025-08-22 3:42 ` Smita Koralahalli
2025-09-01 6:21 ` Zhijian Li (Fujitsu)
2025-08-26 23:21 ` [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Alison Schofield
6 siblings, 1 reply; 14+ messages in thread
From: Smita Koralahalli @ 2025-08-22 3:42 UTC (permalink / raw)
To: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Smita Koralahalli, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati, Zhijian Li
Prevent cxl_region_probe() from unconditionally calling into
devm_cxl_add_dax_region() when the DEV_DAX_CXL driver is not enabled.
Wrap the call with IS_ENABLED(CONFIG_DEV_DAX_CXL) so region probe skips
DAX setup cleanly if no consumer is present.
In parallel, update DEV_DAX_HMEM’s Kconfig to depend on
!CXL_BUS || (CXL_ACPI && CXL_PCI) || m. This ensures that built-in (y)
HMEM is allowed only when CXL is disabled or when the full CXL discovery
stack is also built-in, while modular (m) HMEM remains always possible.
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
---
I did not want to override Dan’s original approach, so I am posting this
as an RFC.
This patch addresses a corner case when applied on top of Patches 1–5.
When DEV_DAX_HMEM=y and CXL=m, the DEV_DAX_CXL option ends up disabled.
In that configuration, with Patches 1–5 applied, ownership of the Soft
Reserved ranges falls back to dax_hmem. As a result, /proc/iomem looks
like this:
850000000-284fffffff : CXL Window 0
850000000-284fffffff : region3
850000000-284fffffff : Soft Reserved
850000000-284fffffff : dax0.0
850000000-284fffffff : System RAM (kmem)
2850000000-484fffffff : CXL Window 1
2850000000-484fffffff : region4
2850000000-484fffffff : Soft Reserved
2850000000-484fffffff : dax1.0
2850000000-484fffffff : System RAM (kmem)
4850000000-684fffffff : CXL Window 2
4850000000-684fffffff : region5
4850000000-684fffffff : Soft Reserved
4850000000-684fffffff : dax2.0
4850000000-684fffffff : System RAM (kmem)
In this case the dax devices are created by dax_hmem, not by dax_cxl.
Consequently, a "cxl disable-region <regionx>" operation does not
unregister these devices. In addition, the dmesg output can be misleading
to users, since it looks like the CXL region driver created the devdax
devices:
devm_cxl_add_region: cxl_acpi ACPI0017:00: decoder0.2: created region5
..
..
This patch addresses those situations. I am not entirely sure how clean
the approach of using “|| m” is, so I am sending it as RFC for feedback.
---
drivers/cxl/core/region.c | 4 +++-
drivers/dax/Kconfig | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 71cc42d05248..6a2c21e55dbc 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3617,7 +3617,9 @@ static int cxl_region_probe(struct device *dev)
p->res->start, p->res->end, cxlr,
is_system_ram) > 0)
return 0;
- return devm_cxl_add_dax_region(cxlr);
+ if (IS_ENABLED(CONFIG_DEV_DAX_CXL))
+ return devm_cxl_add_dax_region(cxlr);
+ return 0;
default:
dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
cxlr->mode);
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index 3683bb3f2311..fd12cca91c78 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -30,6 +30,7 @@ config DEV_DAX_PMEM
config DEV_DAX_HMEM
tristate "HMEM DAX: direct access to 'specific purpose' memory"
depends on EFI_SOFT_RESERVE
+ depends on !CXL_BUS || (CXL_ACPI && CXL_PCI) || m
select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
default DEV_DAX
help
--
2.17.1
* Re: [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL
2025-08-22 3:41 [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Smita Koralahalli
` (5 preceding siblings ...)
2025-08-22 3:42 ` [RFC PATCH 6/6] cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM deps Smita Koralahalli
@ 2025-08-26 23:21 ` Alison Schofield
2025-08-28 23:34 ` Koralahalli Channabasappa, Smita
6 siblings, 1 reply; 14+ messages in thread
From: Alison Schofield @ 2025-08-26 23:21 UTC (permalink / raw)
To: Smita Koralahalli
Cc: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm,
Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Vishal Verma,
Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Terry Bowman, Robert Richter, Benjamin Cheatham,
PradeepVineshReddy Kodamati, Zhijian Li
On Fri, Aug 22, 2025 at 03:41:56AM +0000, Smita Koralahalli wrote:
> This series aims to address long-standing conflicts between dax_hmem and
> CXL when handling Soft Reserved memory ranges.
Hi Smita,
I was able to try this out today and it looks good. See one question
about the !CXL_REGION case below.
The test case of hot replacing a dax region worked as expected. It
appeared with no Soft Reserved, and after tear down the same region
could be rebuilt in place.
The test case with CONFIG_CXL_REGION=N looks good too, in that DAX
consumed the entire resource. Do we intend the Soft Reserved resource
to remain like this:
c080000000-17dbfffffff : CXL Window 0
c080000000-c47fffffff : Soft Reserved
c080000000-c47fffffff : dax2.0
c080000000-c47fffffff : System RAM (kmem)
These other issues noted previously did not re-appear:
- kmem dax3.0: probe with driver kmem failed with error -16
- resource: Unaddressable device [ ] conflicts with []
-- Alison
snip
* Re: [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL
2025-08-26 23:21 ` [PATCH 0/6] dax/hmem, cxl: Coordinate Soft Reserved handling with CXL Alison Schofield
@ 2025-08-28 23:34 ` Koralahalli Channabasappa, Smita
0 siblings, 0 replies; 14+ messages in thread
From: Koralahalli Channabasappa, Smita @ 2025-08-28 23:34 UTC (permalink / raw)
To: Alison Schofield
Cc: linux-cxl, linux-kernel, nvdimm, linux-fsdevel, linux-pm,
Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Vishal Verma,
Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg KH,
Nathan Fontenot, Terry Bowman, Robert Richter, Benjamin Cheatham,
PradeepVineshReddy Kodamati, Zhijian Li
Hi Alison,
On 8/26/2025 4:21 PM, Alison Schofield wrote:
> On Fri, Aug 22, 2025 at 03:41:56AM +0000, Smita Koralahalli wrote:
>> This series aims to address long-standing conflicts between dax_hmem and
>> CXL when handling Soft Reserved memory ranges.
>
> Hi Smita,
>
> I was able to try this out today and it looks good. See one question
> about the !CXL_REGION case below.
>
> The test case of hot replacing a dax region worked as expected. It
> appeared with no Soft Reserved, and after tear down the same region
> could be rebuilt in place.
>
> The test case with CONFIG_CXL_REGION=N looks good too, in that DAX
> consumed the entire resource. Do we intend the Soft Reserved resource
> to remain like this:
> c080000000-17dbfffffff : CXL Window 0
> c080000000-c47fffffff : Soft Reserved
> c080000000-c47fffffff : dax2.0
> c080000000-c47fffffff : System RAM (kmem)
Yes, that is how it currently looks. Maybe we should also add a log
message to make it clear that this dax is coming from dax_hmem and not
dax_cxl?
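For example, a minimal untested sketch in hmem_register_device(); the
exact wording and log level are only a suggestion:

```c
/* Sketch: make the origin of the devdax device explicit in dmesg. */
dev_info(host, "Soft Reserved range %pr claimed by dax_hmem (no CXL consumer)\n",
	 res);
```

That way users could tell a dax_hmem-created device apart from a
dax_cxl one without walking /proc/iomem.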
Another thought I had is that if we hand off fully to CXL even with
regions disabled, we could avoid showing the Soft Reserved layer
entirely (along with the kmem and devdax under it). The question is
whether that approach would be preferable, since in that case the memory
would end up unclaimed and unavailable to Linux. It would be good to get
your perspective on this.
https://lore.kernel.org/all/a2e900b0-1b89-4e88-a6d4-8c0e6de50f52@amd.com/
Thanks
Smita
>
> These other issues noted previously did not re-appear:
> - kmem dax3.0: probe with driver kmem failed with error -16
> - resource: Unaddressable device [ ] conflicts with []
>
> -- Alison
>
> snip
>
* Re: [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready
2025-08-22 3:41 ` [PATCH 1/6] dax/hmem, e820, resource: Defer Soft Reserved registration until hmem is ready Smita Koralahalli
@ 2025-09-01 2:59 ` Zhijian Li (Fujitsu)
0 siblings, 0 replies; 14+ messages in thread
From: Zhijian Li (Fujitsu) @ 2025-09-01 2:59 UTC (permalink / raw)
To: Smita Koralahalli, linux-cxl@vger.kernel.org,
linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Xingtao Yao (Fujitsu), Peter Zijlstra,
Greg KH, Nathan Fontenot, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati
On 22/08/2025 11:41, Smita Koralahalli wrote:
> Insert Soft Reserved memory into a dedicated soft_reserve_resource tree
> instead of the iomem_resource tree at boot.
>
> Publishing Soft Reserved ranges into iomem too early causes conflicts with
> CXL hotplug and region assembly failure, especially when Soft Reserved
> overlaps CXL regions.
>
> Re-inserting these ranges into iomem will be handled in follow-up patches,
> after ensuring CXL window publication ordering is stabilized and when the
> dax_hmem is ready to consume them.
>
> This avoids trimming or deleting resources later and provides a cleaner
> handoff between EFI-defined memory and CXL resource management.
>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> arch/x86/kernel/e820.c | 2 +-
> drivers/dax/hmem/device.c | 4 +--
> drivers/dax/hmem/hmem.c | 8 +++++
> include/linux/ioport.h | 24 +++++++++++++
> kernel/resource.c | 73 +++++++++++++++++++++++++++++++++------
> 5 files changed, 97 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index c3acbd26408b..aef1ff2cabda 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -1153,7 +1153,7 @@ void __init e820__reserve_resources_late(void)
> res = e820_res;
> for (i = 0; i < e820_table->nr_entries; i++) {
> if (!res->parent && res->end)
> - insert_resource_expand_to_fit(&iomem_resource, res);
> + insert_resource_late(res);
> res++;
> }
>
> diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c
> index f9e1a76a04a9..22732b729017 100644
> --- a/drivers/dax/hmem/device.c
> +++ b/drivers/dax/hmem/device.c
> @@ -83,8 +83,8 @@ static __init int hmem_register_one(struct resource *res, void *data)
>
> static __init int hmem_init(void)
> {
> - walk_iomem_res_desc(IORES_DESC_SOFT_RESERVED,
> - IORESOURCE_MEM, 0, -1, NULL, hmem_register_one);
> + walk_soft_reserve_res_desc(IORES_DESC_SOFT_RESERVED, IORESOURCE_MEM, 0,
> + -1, NULL, hmem_register_one);
> return 0;
> }
>
> diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
> index c18451a37e4f..d5b8f06d531e 100644
> --- a/drivers/dax/hmem/hmem.c
> +++ b/drivers/dax/hmem/hmem.c
> @@ -73,10 +73,18 @@ static int hmem_register_device(struct device *host, int target_nid,
> return 0;
> }
>
> +#ifdef CONFIG_EFI_SOFT_RESERVE
Note that dax_kmem currently depends on CONFIG_EFI_SOFT_RESERVE, so this conditional check may be redundant.
> + rc = region_intersects_soft_reserve(res->start, resource_size(res),
> + IORESOURCE_MEM,
> + IORES_DESC_SOFT_RESERVED);
> + if (rc != REGION_INTERSECTS)
> + return 0;
> +#else
> rc = region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> IORES_DESC_SOFT_RESERVED);
> if (rc != REGION_INTERSECTS)
> return 0;
> +#endif
>
Additionally, please add a TODO note here (e.g., "Add soft-reserved memory back to iomem").
> id = memregion_alloc(GFP_KERNEL);
> if (id < 0) {
> diff --git a/include/linux/ioport.h b/include/linux/ioport.h
> index e8b2d6aa4013..889bc4982777 100644
> --- a/include/linux/ioport.h
> +++ b/include/linux/ioport.h
> @@ -232,6 +232,9 @@ struct resource_constraint {
> /* PC/ISA/whatever - the normal PC address spaces: IO and memory */
> extern struct resource ioport_resource;
> extern struct resource iomem_resource;
> +#ifdef CONFIG_EFI_SOFT_RESERVE
> +extern struct resource soft_reserve_resource;
> +#endif
>
> extern struct resource *request_resource_conflict(struct resource *root, struct resource *new);
> extern int request_resource(struct resource *root, struct resource *new);
> @@ -255,6 +258,22 @@ int adjust_resource(struct resource *res, resource_size_t start,
> resource_size_t size);
> resource_size_t resource_alignment(struct resource *res);
>
> +
> +#ifdef CONFIG_EFI_SOFT_RESERVE
> +static inline void insert_resource_late(struct resource *new)
> +{
> + if (new->desc == IORES_DESC_SOFT_RESERVED)
> + insert_resource_expand_to_fit(&soft_reserve_resource, new);
> + else
> + insert_resource_expand_to_fit(&iomem_resource, new);
> +}
> +#else
> +static inline void insert_resource_late(struct resource *new)
> +{
> + insert_resource_expand_to_fit(&iomem_resource, new);
> +}
> +#endif
> +
> /**
> * resource_set_size - Calculate resource end address from size and start
> * @res: Resource descriptor
> @@ -409,6 +428,11 @@ walk_system_ram_res_rev(u64 start, u64 end, void *arg,
> extern int
> walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start, u64 end,
> void *arg, int (*func)(struct resource *, void *));
> +int walk_soft_reserve_res_desc(unsigned long desc, unsigned long flags,
> + u64 start, u64 end, void *arg,
> + int (*func)(struct resource *, void *));
> +int region_intersects_soft_reserve(resource_size_t start, size_t size,
> + unsigned long flags, unsigned long desc);
>
> struct resource *devm_request_free_mem_region(struct device *dev,
> struct resource *base, unsigned long size);
> diff --git a/kernel/resource.c b/kernel/resource.c
> index f9bb5481501a..8479a99441e2 100644
> --- a/kernel/resource.c
> +++ b/kernel/resource.c
> @@ -321,13 +321,14 @@ static bool is_type_match(struct resource *p, unsigned long flags, unsigned long
> }
>
> /**
> - * find_next_iomem_res - Finds the lowest iomem resource that covers part of
> - * [@start..@end].
> + * find_next_res - Finds the lowest resource that covers part of
> + * [@start..@end].
> *
> * If a resource is found, returns 0 and @*res is overwritten with the part
> * of the resource that's within [@start..@end]; if none is found, returns
> * -ENODEV. Returns -EINVAL for invalid parameters.
> *
> + * @parent: resource tree root to search
> * @start: start address of the resource searched for
> * @end: end address of same resource
> * @flags: flags which the resource must have
> @@ -337,9 +338,9 @@ static bool is_type_match(struct resource *p, unsigned long flags, unsigned long
> * The caller must specify @start, @end, @flags, and @desc
> * (which may be IORES_DESC_NONE).
> */
> -static int find_next_iomem_res(resource_size_t start, resource_size_t end,
> - unsigned long flags, unsigned long desc,
> - struct resource *res)
> +static int find_next_res(struct resource *parent, resource_size_t start,
> + resource_size_t end, unsigned long flags,
> + unsigned long desc, struct resource *res)
> {
> struct resource *p;
>
> @@ -351,7 +352,7 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
>
> read_lock(&resource_lock);
>
> - for_each_resource(&iomem_resource, p, false) {
> + for_each_resource(parent, p, false) {
> /* If we passed the resource we are looking for, stop */
> if (p->start > end) {
> p = NULL;
> @@ -382,16 +383,23 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
> return p ? 0 : -ENODEV;
> }
>
> -static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
> - unsigned long flags, unsigned long desc,
> - void *arg,
> - int (*func)(struct resource *, void *))
> +static int find_next_iomem_res(resource_size_t start, resource_size_t end,
> + unsigned long flags, unsigned long desc,
> + struct resource *res)
> +{
> + return find_next_res(&iomem_resource, start, end, flags, desc, res);
> +}
> +
> +static int walk_res_desc(struct resource *parent, resource_size_t start,
> + resource_size_t end, unsigned long flags,
> + unsigned long desc, void *arg,
> + int (*func)(struct resource *, void *))
> {
> struct resource res;
> int ret = -EINVAL;
>
> while (start < end &&
> - !find_next_iomem_res(start, end, flags, desc, &res)) {
> + !find_next_res(parent, start, end, flags, desc, &res)) {
> ret = (*func)(&res, arg);
> if (ret)
> break;
> @@ -402,6 +410,15 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
> return ret;
> }
>
> +static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
> + unsigned long flags, unsigned long desc,
> + void *arg,
> + int (*func)(struct resource *, void *))
> +{
> + return walk_res_desc(&iomem_resource, start, end, flags, desc, arg, func);
> +}
> +
> +
> /**
> * walk_iomem_res_desc - Walks through iomem resources and calls func()
> * with matching resource ranges.
> @@ -426,6 +443,26 @@ int walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start,
> }
> EXPORT_SYMBOL_GPL(walk_iomem_res_desc);
>
> +#ifdef CONFIG_EFI_SOFT_RESERVE
> +struct resource soft_reserve_resource = {
> + .name = "Soft Reserved",
> + .start = 0,
> + .end = -1,
> + .desc = IORES_DESC_SOFT_RESERVED,
> + .flags = IORESOURCE_MEM,
> +};
> +EXPORT_SYMBOL_GPL(soft_reserve_resource);
> +
> +int walk_soft_reserve_res_desc(unsigned long desc, unsigned long flags,
> + u64 start, u64 end, void *arg,
> + int (*func)(struct resource *, void *))
> +{
> + return walk_res_desc(&soft_reserve_resource, start, end, flags, desc,
> + arg, func);
> +}
> +EXPORT_SYMBOL_GPL(walk_soft_reserve_res_desc);
> +#endif
> +
> /*
> * This function calls the @func callback against all memory ranges of type
> * System RAM which are marked as IORESOURCE_SYSTEM_RAM and IORESOUCE_BUSY.
> @@ -648,6 +685,20 @@ int region_intersects(resource_size_t start, size_t size, unsigned long flags,
> }
> EXPORT_SYMBOL_GPL(region_intersects);
>
> +int region_intersects_soft_reserve(resource_size_t start, size_t size,
> + unsigned long flags, unsigned long desc)
Shouldn't this function be implemented under `#ifdef CONFIG_EFI_SOFT_RESERVE`? Otherwise it may cause compilation failures when the config is disabled.
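If so, the usual fix would be to keep the prototype under the #ifdef and
add a static inline stub for the disabled case in include/linux/ioport.h.
An untested sketch; the REGION_DISJOINT return value is my assumption,
chosen only so that !CONFIG_EFI_SOFT_RESERVE builds keep compiling while
treating the dedicated tree as empty:

```c
#ifdef CONFIG_EFI_SOFT_RESERVE
int region_intersects_soft_reserve(resource_size_t start, size_t size,
				   unsigned long flags, unsigned long desc);
#else
static inline int region_intersects_soft_reserve(resource_size_t start,
						 size_t size,
						 unsigned long flags,
						 unsigned long desc)
{
	/* No soft_reserve_resource tree exists when the option is off. */
	return REGION_DISJOINT;
}
#endif
```

A stub like this would also let callers such as hmem.c drop their own
#ifdef/#else duplication.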
Thanks
Zhijian
* Re: [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges
2025-08-22 3:41 ` [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges Smita Koralahalli
@ 2025-09-01 3:08 ` Zhijian Li (Fujitsu)
0 siblings, 0 replies; 14+ messages in thread
From: Zhijian Li (Fujitsu) @ 2025-09-01 3:08 UTC (permalink / raw)
To: Smita Koralahalli, linux-cxl@vger.kernel.org,
linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Xingtao Yao (Fujitsu), Peter Zijlstra,
Greg KH, Nathan Fontenot, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati
I personally recommend that this patch be moved to position 1/6 in the series, as it specifically addresses the existing issue.
Thanks
Zhijian
On 22/08/2025 11:41, Smita Koralahalli wrote:
> Ensure that cxl_acpi has published CXL Window resources before dax_hmem
> walks Soft Reserved ranges.
>
> Replace MODULE_SOFTDEP("pre: cxl_acpi") with an explicit, synchronous
> request_module("cxl_acpi"). MODULE_SOFTDEP() only guarantees eventual
> loading, it does not enforce that the dependency has finished init
> before the current module runs. This can cause dax_hmem to start before
> cxl_acpi has populated the resource tree, breaking detection of overlaps
> between Soft Reserved and CXL Windows.
>
> Also, request cxl_pci before dax_hmem walks Soft Reserved ranges. Unlike
> cxl_acpi, cxl_pci attach is asynchronous and creates dependent devices
> that trigger further module loads. Asynchronous probe flushing
> (wait_for_device_probe()) is added later in the series in a deferred
> context before dax_hmem makes ownership decisions for Soft Reserved
> ranges.
>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> drivers/dax/hmem/hmem.c | 17 ++++++++++-------
> 1 file changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
> index d5b8f06d531e..9277e5ea0019 100644
> --- a/drivers/dax/hmem/hmem.c
> +++ b/drivers/dax/hmem/hmem.c
> @@ -146,6 +146,16 @@ static __init int dax_hmem_init(void)
> {
> int rc;
>
> + /*
> + * Ensure that cxl_acpi and cxl_pci have a chance to kick off
> + * CXL topology discovery at least once before scanning the
> + * iomem resource tree for IORES_DESC_CXL resources.
> + */
> + if (IS_ENABLED(CONFIG_DEV_DAX_CXL)) {
> + request_module("cxl_acpi");
> + request_module("cxl_pci");
> + }
> +
> rc = platform_driver_register(&dax_hmem_platform_driver);
> if (rc)
> return rc;
> @@ -166,13 +176,6 @@ static __exit void dax_hmem_exit(void)
> module_init(dax_hmem_init);
> module_exit(dax_hmem_exit);
>
> -/* Allow for CXL to define its own dax regions */
> -#if IS_ENABLED(CONFIG_CXL_REGION)
> -#if IS_MODULE(CONFIG_CXL_ACPI)
> -MODULE_SOFTDEP("pre: cxl_acpi");
> -#endif
> -#endif
> -
> MODULE_ALIAS("platform:hmem*");
> MODULE_ALIAS("platform:hmem_platform*");
> MODULE_DESCRIPTION("HMEM DAX: direct access to 'specific purpose' memory");
* Re: [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem
2025-08-22 3:41 ` [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem Smita Koralahalli
@ 2025-09-01 3:28 ` Zhijian Li (Fujitsu)
0 siblings, 0 replies; 14+ messages in thread
From: Zhijian Li (Fujitsu) @ 2025-09-01 3:28 UTC (permalink / raw)
To: Smita Koralahalli, linux-cxl@vger.kernel.org,
linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Xingtao Yao (Fujitsu), Peter Zijlstra,
Greg KH, Nathan Fontenot, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati
On 22/08/2025 11:41, Smita Koralahalli wrote:
> Update Kconfig and runtime checks to better coordinate dax_cxl and dax_hmem
> registration.
>
> Add explicit Kconfig ordering so that CXL_ACPI and CXL_PCI must be
> initialized before DEV_DAX_HMEM.
Is this dependency statement fully accurate? To clarify, another prerequisite for
this ordering to work correctly is that dax_hmem must explicitly call
`request_module("cxl_acpi")` and `request_module("cxl_pci")` during its initialization.
Therefore, I recommend consolidating the following patches into a single commit
to ensure atomic handling of the initialization order:
- [PATCH 2/6] dax/hmem: Request cxl_acpi and cxl_pci before walking Soft Reserved ranges
- [PATCH 3/6] dax/hmem, cxl: Tighten dependencies on DEV_DAX_CXL and dax_hmem
Thanks
Zhijian
> This prevents dax_hmem from consuming
> Soft Reserved ranges before CXL drivers have had a chance to claim them.
>
> Replace IS_ENABLED(CONFIG_CXL_REGION) with IS_ENABLED(CONFIG_DEV_DAX_CXL)
> so the code more precisely reflects when CXL-specific DAX coordination is
> expected.
>
> This ensures that ownership of Soft Reserved ranges is consistently
> handed off to the CXL stack when DEV_DAX_CXL is configured.
* Re: [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes
2025-08-22 3:42 ` [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes Smita Koralahalli
@ 2025-09-01 4:01 ` Zhijian Li (Fujitsu)
0 siblings, 0 replies; 14+ messages in thread
From: Zhijian Li (Fujitsu) @ 2025-09-01 4:01 UTC (permalink / raw)
To: Smita Koralahalli, linux-cxl@vger.kernel.org,
linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Xingtao Yao (Fujitsu), Peter Zijlstra,
Greg KH, Nathan Fontenot, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati
On 22/08/2025 11:42, Smita Koralahalli wrote:
> Previously, dax_hmem deferred to CXL only when an immediate resource
> intersection with a CXL window was detected. This left a gap: if cxl_acpi
> or cxl_pci probing or region assembly had not yet started, hmem could
> prematurely claim ranges.
>
> Fix this by introducing a dax_cxl_mode state machine and a deferred
> work mechanism.
>
> The new workqueue delays consideration of Soft Reserved overlaps until
> the CXL subsystem has had a chance to complete its discovery and region
> assembly. This avoids premature iomem claims, eliminates race conditions
> with async cxl_pci probe, and provides a cleaner handoff between hmem and
> CXL resource management.
>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> drivers/dax/hmem/hmem.c | 72 +++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
> index 7ada820cb177..90978518e5f4 100644
> --- a/drivers/dax/hmem/hmem.c
> +++ b/drivers/dax/hmem/hmem.c
> @@ -58,9 +58,45 @@ static void release_hmem(void *pdev)
> platform_device_unregister(pdev);
> }
>
> +static enum dax_cxl_mode {
> + DAX_CXL_MODE_DEFER,
> + DAX_CXL_MODE_REGISTER,
The patch looks good overall, but I have one question for the community:
Should we retain the `DAX_CXL_MODE_REGISTER` enum value for a feature
that has never been supported?
The idea of having a 'register' mode as a last resort for 'Soft Reserved'
memory might seem appealing, but it is not easy to implement. Instead, to
avoid increasing driver complexity, I would prefer that when we encounter
quirk/misconfiguration cases, we let the user reprogram/correct them.
However, this is beyond the scope of the current patchset.
Thanks
Zhijian
> + DAX_CXL_MODE_DROP,
> +} dax_cxl_mode;
> +
> +static int handle_deferred_cxl(struct device *host, int target_nid,
> + const struct resource *res)
> +{
> + if (region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> + IORES_DESC_CXL) != REGION_DISJOINT) {
> + if (dax_cxl_mode == DAX_CXL_MODE_DROP)
> + dev_dbg(host, "dropping CXL range: %pr\n", res);
> + }
> + return 0;
> +}
> +
> +struct dax_defer_work {
> + struct platform_device *pdev;
> + struct work_struct work;
> +};
> +
> +static void process_defer_work(struct work_struct *_work)
> +{
> + struct dax_defer_work *work = container_of(_work, typeof(*work), work);
> + struct platform_device *pdev = work->pdev;
> +
> + /* relies on cxl_acpi and cxl_pci having had a chance to load */
> + wait_for_device_probe();
> +
> + dax_cxl_mode = DAX_CXL_MODE_DROP;
> +
> + walk_hmem_resources(&pdev->dev, handle_deferred_cxl);
> +}
> +
> static int hmem_register_device(struct device *host, int target_nid,
> const struct resource *res)
> {
> + struct dax_defer_work *work = dev_get_drvdata(host);
> struct platform_device *pdev;
> struct memregion_info info;
> long id;
> @@ -69,8 +105,18 @@ static int hmem_register_device(struct device *host, int target_nid,
> if (IS_ENABLED(CONFIG_DEV_DAX_CXL) &&
> region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> IORES_DESC_CXL) != REGION_DISJOINT) {
> - dev_dbg(host, "deferring range to CXL: %pr\n", res);
> - return 0;
> + switch (dax_cxl_mode) {
> + case DAX_CXL_MODE_DEFER:
> + dev_dbg(host, "deferring range to CXL: %pr\n", res);
> + schedule_work(&work->work);
> + return 0;
> + case DAX_CXL_MODE_REGISTER:
> + dev_dbg(host, "registering CXL range: %pr\n", res);
> + break;
> + case DAX_CXL_MODE_DROP:
> + dev_dbg(host, "dropping CXL range: %pr\n", res);
> + return 0;
> + }
> }
>
> #ifdef CONFIG_EFI_SOFT_RESERVE
> @@ -130,8 +176,30 @@ static int hmem_register_device(struct device *host, int target_nid,
> return rc;
> }
>
> +static void kill_defer_work(void *_work)
> +{
> + struct dax_defer_work *work = container_of(_work, typeof(*work), work);
> +
> + cancel_work_sync(&work->work);
> + kfree(work);
> +}
> +
> static int dax_hmem_platform_probe(struct platform_device *pdev)
> {
> + struct dax_defer_work *work = kzalloc(sizeof(*work), GFP_KERNEL);
> + int rc;
> +
> + if (!work)
> + return -ENOMEM;
> +
> + work->pdev = pdev;
> + INIT_WORK(&work->work, process_defer_work);
> +
> + rc = devm_add_action_or_reset(&pdev->dev, kill_defer_work, work);
> + if (rc)
> + return rc;
> +
> + platform_set_drvdata(pdev, work);
> return walk_hmem_resources(&pdev->dev, hmem_register_device);
> }
>
* Re: [RFC PATCH 6/6] cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM deps
2025-08-22 3:42 ` [RFC PATCH 6/6] cxl/region, dax/hmem: Guard CXL DAX region creation and tighten HMEM deps Smita Koralahalli
@ 2025-09-01 6:21 ` Zhijian Li (Fujitsu)
0 siblings, 0 replies; 14+ messages in thread
From: Zhijian Li (Fujitsu) @ 2025-09-01 6:21 UTC (permalink / raw)
To: Smita Koralahalli, linux-cxl@vger.kernel.org,
linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
Vishal Verma, Ira Weiny, Dan Williams, Matthew Wilcox, Jan Kara,
Rafael J . Wysocki, Len Brown, Pavel Machek, Li Ming,
Jeff Johnson, Ying Huang, Xingtao Yao (Fujitsu), Peter Zijlstra,
Greg KH, Nathan Fontenot, Terry Bowman, Robert Richter,
Benjamin Cheatham, PradeepVineshReddy Kodamati
On 22/08/2025 11:42, Smita Koralahalli wrote:
> Prevent cxl_region_probe() from unconditionally calling into
> devm_cxl_add_dax_region() when the DEV_DAX_CXL driver is not enabled.
> Wrap the call with IS_ENABLED(CONFIG_DEV_DAX_CXL) so region probe skips
> DAX setup cleanly if no consumer is present.
A question came to mind:
Why is the case of `CXL_REGION && !DEV_DAX_CXL` necessary? It appears to fall back to the hmem driver in that scenario.
If so, could we instead simplify it as follows?
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -200,6 +200,7 @@ config CXL_REGION
depends on SPARSEMEM
select MEMREGION
select GET_FREE_REGION
+ select DEV_DAX_CXL
>
> In parallel, update DEV_DAX_HMEM’s Kconfig to depend on
> !CXL_BUS || (CXL_ACPI && CXL_PCI) || m. This ensures:
>
> Built-in (y) HMEM is allowed when CXL is disabled, or when the full
> CXL discovery stack is built-in. Module (m) HMEM remains always possible.
Hmm, IIUC, `dax_hmem` isn't exclusively designed for CXL. It could support other special-purpose memory types (e.g., HBM).
Thanks
Zhijian
>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
> ---
> I did not want to override Dan’s original approach, so I am posting this
> as an RFC.
>
> This patch addresses a corner case when applied on top of Patches 1–5.
>
> When DEV_DAX_HMEM=y and CXL=m, the DEV_DAX_CXL option ends up disabled.
> In that configuration, with Patches 1–5 applied, ownership of the Soft
> Reserved ranges falls back to dax_hmem. As a result, /proc/iomem looks
> like this:
>
> 850000000-284fffffff : CXL Window 0
> 850000000-284fffffff : region3
> 850000000-284fffffff : Soft Reserved
> 850000000-284fffffff : dax0.0
> 850000000-284fffffff : System RAM (kmem)
> 2850000000-484fffffff : CXL Window 1
> 2850000000-484fffffff : region4
> 2850000000-484fffffff : Soft Reserved
> 2850000000-484fffffff : dax1.0
> 2850000000-484fffffff : System RAM (kmem)
> 4850000000-684fffffff : CXL Window 2
> 4850000000-684fffffff : region5
> 4850000000-684fffffff : Soft Reserved
> 4850000000-684fffffff : dax2.0
> 4850000000-684fffffff : System RAM (kmem)
>
> In this case the dax devices are created by dax_hmem, not by dax_cxl.
> Consequently, a "cxl disable-region <regionx>" operation does not
> unregister these devices. In addition, the dmesg output can be misleading
> to users, since it looks like the CXL region driver created the devdax
> devices:
>
> devm_cxl_add_region: cxl_acpi ACPI0017:00: decoder0.2: created region5
> ..
> ..
>
> This patch addresses those situations. I am not entirely sure how clean
> the approach of using “|| m” is, so I am sending it as RFC for feedback.
> ---
> drivers/cxl/core/region.c | 4 +++-
> drivers/dax/Kconfig | 1 +
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 71cc42d05248..6a2c21e55dbc 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -3617,7 +3617,9 @@ static int cxl_region_probe(struct device *dev)
> p->res->start, p->res->end, cxlr,
> is_system_ram) > 0)
> return 0;
> - return devm_cxl_add_dax_region(cxlr);
> + if (IS_ENABLED(CONFIG_DEV_DAX_CXL))
> + return devm_cxl_add_dax_region(cxlr);
> + return 0;
> default:
> dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
> cxlr->mode);
> diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
> index 3683bb3f2311..fd12cca91c78 100644
> --- a/drivers/dax/Kconfig
> +++ b/drivers/dax/Kconfig
> @@ -30,6 +30,7 @@ config DEV_DAX_PMEM
> config DEV_DAX_HMEM
> tristate "HMEM DAX: direct access to 'specific purpose' memory"
> depends on EFI_SOFT_RESERVE
> + depends on !CXL_BUS || (CXL_ACPI && CXL_PCI) || m
> select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
> default DEV_DAX
> help