From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Gregory Price <gourry@gourry.net>
Cc: <linux-cxl@vger.kernel.org>, <linux-doc@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <dave@stgolabs.net>,
<dave.jiang@intel.com>, <alison.schofield@intel.com>,
<vishal.l.verma@intel.com>, <ira.weiny@intel.com>,
<dan.j.williams@intel.com>, <corbet@lwn.net>,
<kernel-team@meta.com>, <alejandro.lucero-palau@amd.com>
Subject: Re: [PATCH] Documentation/driver-api/cxl: device hotplug section
Date: Thu, 18 Dec 2025 15:26:16 +0000 [thread overview]
Message-ID: <20251218152616.00005b73@huawei.com> (raw)
In-Reply-To: <20251218144636.1232527-1-gourry@gourry.net>
On Thu, 18 Dec 2025 09:46:36 -0500
Gregory Price <gourry@gourry.net> wrote:
> Describe cxl memory device hotplug implications, in particular how the
> platform CEDT CFMWS must be described to support successful hot-add of
> memory devices.
>
> Signed-off-by: Gregory Price <gourry@gourry.net>
Hi Gregory,
Thanks for drawing this up.
> ---
> Documentation/driver-api/cxl/index.rst | 1 +
> .../cxl/platform/device-hotplug.rst | 77 +++++++++++++++++++
> 2 files changed, 78 insertions(+)
> create mode 100644 Documentation/driver-api/cxl/platform/device-hotplug.rst
>
> diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst
> index c1106a68b67c..5a734988a5af 100644
> --- a/Documentation/driver-api/cxl/index.rst
> +++ b/Documentation/driver-api/cxl/index.rst
> @@ -30,6 +30,7 @@ that have impacts on each other. The docs here break up configurations steps.
> platform/acpi
> platform/cdat
> platform/example-configs
> + platform/device-hotplug
>
> .. toctree::
> :maxdepth: 2
> diff --git a/Documentation/driver-api/cxl/platform/device-hotplug.rst b/Documentation/driver-api/cxl/platform/device-hotplug.rst
> new file mode 100644
> index 000000000000..9af8988bd47a
> --- /dev/null
> +++ b/Documentation/driver-api/cxl/platform/device-hotplug.rst
> @@ -0,0 +1,77 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==================
> +CXL Device Hotplug
> +==================
> +
> +Device hotplug refers to *physical* hotplug of a device (addition or removal
> +of a physical device from the machine).
> +
> +Hot-Remove
> +==========
> +Hot removal of a device typically requires careful removal of software
> +constructs (memory regions, associated drivers) which manage these devices.
> +
> +Hard-removing a CXL.mem device without carefully tearing down driver stacks
> +is likely to cause the system to machine-check (or at least SIGBUS if memory
> +access is limited to user space).
> +
> +Memory Device Hot-Add
> +=====================
> +Hot-adding a memory device requires that the memory associated with that
> +device fits in a pre-defined (*static*) CXL Fixed Memory Window in the
> +:doc:`CEDT<acpi/cedt>`.
> +
> +There are two basic hot-add scenarios which may occur.
> +
> +Device Present at Boot
> +----------------------
> +A device present at boot likely had its capacity reported in the
> +:doc:`CEDT<acpi/cedt>`. If a device is removed and a new device hotplugged,
The concept of reporting in the CEDT is a little vague. Perhaps expand on that a
little with something like:
A device present at boot will be associated with a CFMWS reported in the
:doc:`CEDT<acpi/cedt>`, and that CFMWS may match the size of the device.
> +the capacity of the new device will be limited to the original CFMWS capacity.
> +
> +Adding a device larger than the original device will cause memory region
> +creation to fail if the region size is greater than the CFMWS size.
Adding capacity larger than the original device
(a subset of the new device's capacity can still be added)
> +
> +The CFMWS is *static* and cannot be adjusted. Platforms which may expect
> +different sized devices to be hotplugged must allocate sufficient CFMWS space
> +*at boot time* to cover all future expected devices.
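It might also help readers to see how the fixed window size constrains hot-add.
A rough sketch (untested, purely illustrative; the field layout is from my
reading of struct acpi_cedt_cfmws, so treat the offsets as indicative only):

```python
import struct

# CFMWS fixed-length header as described by struct acpi_cedt_cfmws
# (ACPI CEDT, structure type 1): type, reserved, record length,
# reserved, base HPA, window size, encoded interleave ways,
# interleave arithmetic, reserved, granularity, restrictions, QTG ID.
CFMWS_FMT = "<BBHIQQBBHIHH"
CFMWS_LEN = struct.calcsize(CFMWS_FMT)  # 36 bytes before the target list

def parse_cfmws(raw):
    (rtype, _, length, _, base_hpa, window_size, eniw, arith,
     _, granularity, restrictions, qtg_id) = struct.unpack_from(CFMWS_FMT, raw)
    assert rtype == 1, "not a CFMWS entry"
    return {"base": base_hpa, "size": window_size,
            "ways": 1 << eniw, "granularity": granularity}

def hot_add_fits(window, device_capacity):
    # The window is static: a region for the hot-added device can only
    # be created if it fits within the boot-time window size.
    return device_capacity <= window["size"]

# Hypothetical 256GB window at 0x10000000000, 1-way, 256B granularity.
raw = struct.pack(CFMWS_FMT, 1, 0, CFMWS_LEN, 0,
                  0x10000000000, 256 << 30, 0, 0, 0, 256, 0, 0)
win = parse_cfmws(raw)
```

i.e. hot-adding a 512GB device into that window fails region creation at
full size, while a 128GB region from the same device is fine.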
> +
> +No CXL Device Present at Boot
> +-----------------------------
> +When no CXL device is present on boot, most platforms omit the CFMWS in the
> +:doc:`CEDT<acpi/cedt>`. When this occurs, hot-add is not possible.
Relax to 'some platforms', just to future-proof the doc for when people start
mostly doing the sensible thing.
> +
> +For a platform to support hot-add of a memory device, it must allocate a
For a platform to support hot-add of a full memory device
(see above: adding partial capacity is fine)
> +CEDT CFMWS region with sufficient memory capacity to cover all future
> +potentially added capacity.
> +
> +Switches in the fabric should report the max possible memory capacity
> +expected to be hot-added so that platform software may construct the
> +appropriately sized CFMWS.
How do switches report this? I don't think they can, as it really has nothing
to do with the switch beyond maybe how many DSPs it has (which, incidentally,
is what is used to work out space for PCI hotplug, where the code divides up
the left-over space between hotplug-capable DSPs).
Obviously this excludes the weird switches out there that pretend
to be a single memory device, as those are not switches at all as far
as Linux is concerned.
> +
> +Interleave Sets
> +===============
> +
> +Host Bridge Interleave
> +----------------------
> +Host-bridge interleaved memory regions are defined *statically* in the
> +:doc:`CEDT<acpi/cedt>`. To apply cross-host-bridge interleave, a CFMWS entry
> +describing that interleave must have been provided *at boot*. Hotplugged
> +devices cannot add host-bridge interleave capabilities at hotplug time.
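Might be worth a concrete sizing example too. Something like this (my own
illustrative sketch, not from the patch) shows why the boot-time window has
to cover the whole interleave set:

```python
def required_window_size(per_device_capacity, ways):
    # A cross-host-bridge interleave spreads the HPA range across
    # `ways` member devices, so the single static CFMWS must span the
    # sum of all member capacities, declared at boot.
    return per_device_capacity * ways

def valid_granularity(g):
    # Host-bridge interleave granularity is a power of two in the
    # 256B..16KB range the CFMWS HBIG encodings can express.
    return 256 <= g <= 16384 and (g & (g - 1)) == 0

# Two 128GB devices interleaved across two host bridges need a single
# 256GB window present in the CEDT from boot.
```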
> +
> +See the :doc:`Flexible CEDT Configuration<example-configurations/flexible>`
> +example to see how a platform can provide this kind of flexibility regarding
> +hotplugged memory devices.
> +
> +Platform vendors should work with switch vendors to work out how this
> +HPA space reservation should work when one or more interleave options are
> +intended to be presented to a host.
Same as above. Nothing to do with switches as far as I understand things,
beyond them providing fan-out. So if you have

      HB0             HB1
    RP0   RP1         RP2
     |     |           |
   Empty Empty        USP
               ________|________
              |     |     |     |
             DSP   DSP   DSP   DSP
              |     |     |     |
                  All empty

you might provide more room for devices below HB1 than HB0 if you don't expect
to see switches being hot-added.
Jonathan
> +
> +HDM Interleave
> +--------------
> +Decoder-applied interleave can flexibly handle hotplugged devices, as decoders
> +can be re-programmed after hotplug.
> +
> +To add or remove a device to/from an existing HDM-applied interleaved region,
> +that region must be torn down and re-created.
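As an aside, it might be nice for this section to show the decoder arithmetic
that the re-programming affects. A rough sketch of the standard (non-XOR)
modulo decode, purely illustrative:

```python
def decode(hpa_offset, ways, granularity):
    # Standard modulo interleave: which member device services this
    # HPA offset, and at what DPA offset within that device.
    position = (hpa_offset // granularity) % ways
    dpa = (hpa_offset // (ways * granularity)) * granularity \
          + (hpa_offset % granularity)
    return position, dpa
```

Tearing the region down and re-creating it with a different `ways` changes
every HPA-to-DPA mapping, which is why membership changes can't be done in
place.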