From: "Zhijian Li (Fujitsu)" <lizhijian@fujitsu.com>
To: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>,
"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
Dave Jiang <dave.jiang@intel.com>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
"Rafael J . Wysocki" <rafael@kernel.org>,
Len Brown <len.brown@intel.com>, Pavel Machek <pavel@kernel.org>,
Li Ming <ming.li@zohomail.com>,
Jeff Johnson <jeff.johnson@oss.qualcomm.com>,
Ying Huang <huang.ying.caritas@gmail.com>,
"Xingtao Yao (Fujitsu)" <yaoxt.fnst@fujitsu.com>,
Peter Zijlstra <peterz@infradead.org>,
Greg KH <gregkh@linuxfoundation.org>,
Nathan Fontenot <nathan.fontenot@amd.com>,
Terry Bowman <terry.bowman@amd.com>,
Robert Richter <rrichter@amd.com>,
Benjamin Cheatham <benjamin.cheatham@amd.com>,
PradeepVineshReddy Kodamati <PradeepVineshReddy.Kodamati@amd.com>
Subject: Re: [PATCH 4/6] dax/hmem: Defer Soft Reserved overlap handling until CXL region assembly completes
Date: Mon, 1 Sep 2025 04:01:06 +0000
Message-ID: <98a7baf6-1fb2-4d8e-be87-2ca6cf6cdc0d@fujitsu.com>
In-Reply-To: <20250822034202.26896-5-Smita.KoralahalliChannabasappa@amd.com>
On 22/08/2025 11:42, Smita Koralahalli wrote:
> Previously, dax_hmem deferred to CXL only when an immediate resource
> intersection with a CXL window was detected. This left a gap: if cxl_acpi
> or cxl_pci probing or region assembly had not yet started, hmem could
> prematurely claim ranges.
>
> Fix this by introducing a dax_cxl_mode state machine and a deferred
> work mechanism.
>
> The new workqueue delays consideration of Soft Reserved overlaps until
> the CXL subsystem has had a chance to complete its discovery and region
> assembly. This avoids premature iomem claims, eliminates race conditions
> with async cxl_pci probe, and provides a cleaner handoff between hmem and
> CXL resource management.
>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> drivers/dax/hmem/hmem.c | 72 +++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
> index 7ada820cb177..90978518e5f4 100644
> --- a/drivers/dax/hmem/hmem.c
> +++ b/drivers/dax/hmem/hmem.c
> @@ -58,9 +58,45 @@ static void release_hmem(void *pdev)
> platform_device_unregister(pdev);
> }
>
> +static enum dax_cxl_mode {
> + DAX_CXL_MODE_DEFER,
> + DAX_CXL_MODE_REGISTER,
The patch looks good overall, but I have one question for the community:
should we retain the `DAX_CXL_MODE_REGISTER` enum value, given that the
feature it represents has never been supported?

The idea of having a 'register' mode as a last resort for 'Soft Reserved'
memory might seem appealing, but it is not easy to implement. Instead, to
avoid increasing driver complexity, I would prefer that when we encounter
quirk/misconfiguration cases, we allow the user to reprogram/correct the
configuration. However, this is beyond the scope of the current patchset.

Thanks
Zhijian
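
For illustration, here is a rough, self-contained sketch of what the state
machine could look like with `DAX_CXL_MODE_REGISTER` dropped. The enum names
mirror the patch; the `hmem_claim_range()` helper and its int flags are
invented for this sketch and do not exist in the driver:

```c
#include <assert.h>

/* Sketch only: the patch's mode enum with DAX_CXL_MODE_REGISTER removed. */
enum dax_cxl_mode {
	DAX_CXL_MODE_DEFER,	/* CXL discovery/region assembly still pending */
	DAX_CXL_MODE_DROP,	/* CXL owns the range; hmem stands down */
};

/*
 * Hypothetical decision helper: returns 1 if hmem should register the
 * range itself, 0 if it should yield the range to CXL.
 */
static int hmem_claim_range(enum dax_cxl_mode mode, int intersects_cxl)
{
	if (!intersects_cxl)
		return 1;	/* no CXL window overlap: hmem proceeds */

	switch (mode) {
	case DAX_CXL_MODE_DEFER:	/* wait for region assembly */
	case DAX_CXL_MODE_DROP:		/* hand the range to CXL */
		return 0;
	}
	return 0;
}
```

With only two states, every CXL-intersecting range either defers or drops,
and the switch in hmem_register_device() loses the unreachable 'register'
arm entirely.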
> + DAX_CXL_MODE_DROP,
> +} dax_cxl_mode;
> +
> +static int handle_deferred_cxl(struct device *host, int target_nid,
> + const struct resource *res)
> +{
> + if (region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> + IORES_DESC_CXL) != REGION_DISJOINT) {
> + if (dax_cxl_mode == DAX_CXL_MODE_DROP)
> + dev_dbg(host, "dropping CXL range: %pr\n", res);
> + }
> + return 0;
> +}
> +
> +struct dax_defer_work {
> + struct platform_device *pdev;
> + struct work_struct work;
> +};
> +
> +static void process_defer_work(struct work_struct *_work)
> +{
> + struct dax_defer_work *work = container_of(_work, typeof(*work), work);
> + struct platform_device *pdev = work->pdev;
> +
> + /* relies on cxl_acpi and cxl_pci having had a chance to load */
> + wait_for_device_probe();
> +
> + dax_cxl_mode = DAX_CXL_MODE_DROP;
> +
> + walk_hmem_resources(&pdev->dev, handle_deferred_cxl);
> +}
> +
> static int hmem_register_device(struct device *host, int target_nid,
> const struct resource *res)
> {
> + struct dax_defer_work *work = dev_get_drvdata(host);
> struct platform_device *pdev;
> struct memregion_info info;
> long id;
> @@ -69,8 +105,18 @@ static int hmem_register_device(struct device *host, int target_nid,
> if (IS_ENABLED(CONFIG_DEV_DAX_CXL) &&
> region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> IORES_DESC_CXL) != REGION_DISJOINT) {
> - dev_dbg(host, "deferring range to CXL: %pr\n", res);
> - return 0;
> + switch (dax_cxl_mode) {
> + case DAX_CXL_MODE_DEFER:
> + dev_dbg(host, "deferring range to CXL: %pr\n", res);
> + schedule_work(&work->work);
> + return 0;
> + case DAX_CXL_MODE_REGISTER:
> + dev_dbg(host, "registering CXL range: %pr\n", res);
> + break;
> + case DAX_CXL_MODE_DROP:
> + dev_dbg(host, "dropping CXL range: %pr\n", res);
> + return 0;
> + }
> }
>
> #ifdef CONFIG_EFI_SOFT_RESERVE
> @@ -130,8 +176,30 @@ static int hmem_register_device(struct device *host, int target_nid,
> return rc;
> }
>
> +static void kill_defer_work(void *_work)
> +{
> + struct dax_defer_work *work = container_of(_work, typeof(*work), work);
> +
> + cancel_work_sync(&work->work);
> + kfree(work);
> +}
> +
> static int dax_hmem_platform_probe(struct platform_device *pdev)
> {
> + struct dax_defer_work *work = kzalloc(sizeof(*work), GFP_KERNEL);
> + int rc;
> +
> + if (!work)
> + return -ENOMEM;
> +
> + work->pdev = pdev;
> + INIT_WORK(&work->work, process_defer_work);
> +
> + rc = devm_add_action_or_reset(&pdev->dev, kill_defer_work, work);
> + if (rc)
> + return rc;
> +
> + platform_set_drvdata(pdev, work);
> return walk_hmem_resources(&pdev->dev, hmem_register_device);
> }
>