From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Cc: <linux-cxl@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<nvdimm@lists.linux.dev>, <linux-fsdevel@vger.kernel.org>,
<linux-pm@vger.kernel.org>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Yazen Ghannam <yazen.ghannam@amd.com>,
Dave Jiang <dave.jiang@intel.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
"Rafael J . Wysocki" <rafael@kernel.org>,
Len Brown <len.brown@intel.com>, Pavel Machek <pavel@kernel.org>,
Li Ming <ming.li@zohomail.com>,
Jeff Johnson <jeff.johnson@oss.qualcomm.com>,
"Ying Huang" <huang.ying.caritas@gmail.com>,
Yao Xingtao <yaoxt.fnst@fujitsu.com>,
Peter Zijlstra <peterz@infradead.org>,
Greg KH <gregkh@linuxfoundation.org>,
Nathan Fontenot <nathan.fontenot@amd.com>,
Terry Bowman <terry.bowman@amd.com>,
Robert Richter <rrichter@amd.com>,
Benjamin Cheatham <benjamin.cheatham@amd.com>,
Zhijian Li <lizhijian@fujitsu.com>,
Borislav Petkov <bp@alien8.de>, Ard Biesheuvel <ardb@kernel.org>
Subject: Re: [PATCH v4 1/9] dax/hmem, e820, resource: Defer Soft Reserved insertion until hmem is ready
Date: Wed, 17 Dec 2025 12:05:50 +0000 [thread overview]
Message-ID: <20251217120550.00003325@huawei.com> (raw)
In-Reply-To: <20251120031925.87762-2-Smita.KoralahalliChannabasappa@amd.com>
On Thu, 20 Nov 2025 03:19:17 +0000
Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com> wrote:
> From: Dan Williams <dan.j.williams@intel.com>
>
> Insert Soft Reserved memory into a dedicated soft_reserve_resource tree
> instead of the iomem_resource tree at boot. Delay publishing these ranges
> into the iomem hierarchy until ownership is resolved and the HMEM path
> is ready to consume them.
>
> Publishing Soft Reserved ranges into iomem too early conflicts with CXL
> hotplug and prevents region assembly when those ranges overlap CXL
> windows.
>
> Follow up patches will reinsert Soft Reserved ranges into iomem after CXL
> window publication is complete and HMEM is ready to claim the memory. This
> provides a cleaner handoff between EFI-defined memory ranges and CXL
> resource management without trimming or deleting resources later.
>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
A couple of general comments below. I don't feel particularly strongly
about any of them, so feel free to push back if you disagree! (other than
the ever-important number of blank lines!) :)
Jonathan
> diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c
> index f9e1a76a04a9..22732b729017 100644
> --- a/drivers/dax/hmem/device.c
> +++ b/drivers/dax/hmem/device.c
> @@ -83,8 +83,8 @@ static __init int hmem_register_one(struct resource *res, void *data)
>
> static __init int hmem_init(void)
> {
> - walk_iomem_res_desc(IORES_DESC_SOFT_RESERVED,
> - IORESOURCE_MEM, 0, -1, NULL, hmem_register_one);
> + walk_soft_reserve_res_desc(IORES_DESC_SOFT_RESERVED, IORESOURCE_MEM, 0,
Similar to below: if we are only ever putting IORESOURCE_MEM resources of
type IORES_DESC_SOFT_RESERVED in here, can we drop those two as parameters?
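i.e. something like this (untested sketch, with the wrapper hard-coding
IORESOURCE_MEM and IORES_DESC_SOFT_RESERVED internally rather than taking
them from the caller):

	static __init int hmem_init(void)
	{
		walk_soft_reserve_res_desc(0, -1, NULL, hmem_register_one);
		return 0;
	}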
> + -1, NULL, hmem_register_one);
> return 0;
> }
>
> diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
> index c18451a37e4f..48f4642f4bb8 100644
> --- a/drivers/dax/hmem/hmem.c
> +++ b/drivers/dax/hmem/hmem.c
> @@ -73,11 +73,14 @@ static int hmem_register_device(struct device *host, int target_nid,
> return 0;
> }
>
> - rc = region_intersects(res->start, resource_size(res), IORESOURCE_MEM,
> - IORES_DESC_SOFT_RESERVED);
> + rc = region_intersects_soft_reserve(res->start, resource_size(res),
> + IORESOURCE_MEM,
> + IORES_DESC_SOFT_RESERVED);
The flags and desc arguments seem redundant here. There is a trade-off
between matching the more complex existing functions and simplifying this.
Maybe push them down into the callee and just have
	rc = region_intersects_soft_reserve(res->start, resource_size(res));
here?
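The callee would then supply them itself; something like this (untested
sketch):

	int region_intersects_soft_reserve(resource_size_t start, size_t size)
	{
		int ret;

		read_lock(&resource_lock);
		ret = __region_intersects(&soft_reserve_resource, start, size,
					  IORESOURCE_MEM,
					  IORES_DESC_SOFT_RESERVED);
		read_unlock(&resource_lock);

		return ret;
	}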
> if (rc != REGION_INTERSECTS)
> return 0;
>
> + /* TODO: Add Soft-Reserved memory back to iomem */
> +
> id = memregion_alloc(GFP_KERNEL);
> if (id < 0) {
> dev_err(host, "memregion allocation failure for %pr\n", res);
> diff --git a/kernel/resource.c b/kernel/resource.c
> index b9fa2a4ce089..208eaafcc681 100644
> --- a/kernel/resource.c
> +++ b/kernel/resource.c
> @@ -402,6 +410,15 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
> return ret;
> }
>
> +static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
> + unsigned long flags, unsigned long desc,
> + void *arg,
> + int (*func)(struct resource *, void *))
> +{
> + return walk_res_desc(&iomem_resource, start, end, flags, desc, arg, func);
> +}
> +
Local style seems to be single blank lines - stick to that unless I'm
missing some reason this one is special.
> +
> /**
> /*
> * This function calls the @func callback against all memory ranges of type
> * System RAM which are marked as IORESOURCE_SYSTEM_RAM and IORESOUCE_BUSY.
> @@ -648,6 +685,22 @@ int region_intersects(resource_size_t start, size_t size, unsigned long flags,
> }
> EXPORT_SYMBOL_GPL(region_intersects);
>
> +#ifdef CONFIG_EFI_SOFT_RESERVE
> +int region_intersects_soft_reserve(resource_size_t start, size_t size,
> + unsigned long flags, unsigned long desc)
> +{
> + int ret;
> +
> + read_lock(&resource_lock);
> + ret = __region_intersects(&soft_reserve_resource, start, size, flags,
> + desc);
> + read_unlock(&resource_lock);
> +
> + return ret;
Perhaps the shortening of code makes it worth implementing this as:
guard(read_lock)(&resource_lock);
return __region_intersects();
Or ignore that until someone feels like making more general use of that
infrastructure in this file. Looks like there are a bunch of places where
I'd argue it would be worth doing.
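Spelled out, the whole function would then collapse to something like this
(untested):

	int region_intersects_soft_reserve(resource_size_t start, size_t size,
					   unsigned long flags, unsigned long desc)
	{
		guard(read_lock)(&resource_lock);
		return __region_intersects(&soft_reserve_resource, start, size,
					   flags, desc);
	}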
Jonathan
> +}
> +EXPORT_SYMBOL_GPL(region_intersects_soft_reserve);
> +#endif