* [PATCH v22 00/25] Type2 device basic support
@ 2025-12-05 11:52 alejandro.lucero-palau
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero
From: Alejandro Lucero <alucerop@amd.com>
The patchset should be applied on the base commit described below, after
applying Terry's v13 series for CXL error handling. The first 3 patches come
from Dan's for-6.18/cxl-probe-order branch with minor modifications.
This version introduces support for a Type2 decoder committed by
firmware, implying a CXL region automatically created during memdev
initialization. New patches 11, 13 and 14 add this core support,
with the sfc driver using it. The driver also keeps support for the
option used until now, where HDM decoders are not committed. That is the
case in certain scenarios and also after the driver has been unloaded,
which brings up the question of whether such a firmware-committed decoder
should be reset at driver unload, assuming no locked HDM, which this
patchset does not support.
v22 changes:
patch 1-3 from Dan's branch without any changes.
patch 11: new
patch 12: moved here from v21 patch 22
patch 13-14: new
patch 23: move check ahead of type3 only checks
All patches with sfc changes adapted to support both options.
v21 changes:
patch1-2: v20 patch1 split up, with the code move done in the second
patch in v21. (Jonathan)
patch1-4: adding my Signed-off tag along with Dan's
patch5: fix duplication of CXL_NR_PARTITION definition
patch7: dropped the cxl test fixes removing unused function. It was
sent independently ahead of this version.
patch12: optimization for max free space calculation (Jonathan)
patch19: optimization for returning on error (Jonathan)
v20 changes:
patch 1: using release helpers (Jonathan).
patch 6: minor fix in comments (Jonathan).
patch 7 & 8: change commit mentioning sfc changes
patch 11: Fix interleave_ways setting (Jonathan)
Change assignment location (Dave)
patch 13: changing error return order (Jonathan)
removing blank line (Dave)
patch 18: Add check for only supporting uncommitted decoders
(Ben, Dave)
Add check for returned value (Dave)
v19 changes:
Removal of cxl_acquire_endpoint and driver callback for unexpected cxl
module removal. Dan's patches made them unnecessary.
patch 4: remove code already moved by Terry's patches (Ben Cheatham)
patch 6: removed unrelated change (Ben Cheatham)
patch 7: fix error report inconsistencies (Jonathan, Dave)
patch 9: remove unnecessary comment (Ben Cheatham)
patch 11: fix __free usage (Jonathan Cameron, Ben Cheatham)
patch 13: style fixes (Jonathan Cameron, Dave Jiang)
patch 14: move code to previous patch (Jonathan Cameron)
patch 18: group code in one locking scope (Dave Jiang)
use __free helper (Ben Cheatham)
v18 changes:
patch 1: minor changes and fixing docs generation (Jonathan, Dan)
patch4: merged with v17 patch5
patch 5: merging v17 patches 6 and 7
patch 6: adding helpers for clarity
patch 9:
- minor changes (Dave)
- simplifying flags check (Dan)
patch 10: minor changes (Jonathan)
patch 11:
- minor changes (Dave)
- fix mess (Jonathan, Dave)
patch 18: minor changes (Jonathan, Dan)
v17 changes: (Dan Williams review)
- use devm for cxl_dev_state allocation
- using current cxl struct for checking capability registers found by
the driver.
- simplify dpa initialization without a mailbox not supporting pmem
- add cxl_acquire_endpoint for protection during initialization
- add callback/action to cxl_create_region for a driver notified about cxl
core kernel modules removal.
- add sfc function to disable CXL-based PIO buffers if such a callback
is invoked.
- Always manage a Type2 created region as private not allowing DAX.
v16 changes:
- rebase against rc4 (Dave Jiang)
- remove duplicate line (Ben Cheatham)
v15 changes:
- remove reference to unused header file (Jonathan Cameron)
- add proper kernel docs to exported functions (Alison Schofield)
- using an array to map the enums to strings (Alison Schofield)
- clarify comment when using bitmap_subset (Jonathan Cameron)
- specify link to type2 support in all patches (Alison Schofield)
Patches changed (minor): 4, 11
v14 changes:
- static null initialization of bitmaps (Jonathan Cameron)
- Fixing cxl tests (Alison Schofield)
- Fixing robot compilation problems
Patches changed (minor): 1, 4, 6, 13
v13 changes:
- use more consistent names for header checks (Jonathan Cameron)
- using helper for caps bit setting (Jonathan Cameron)
- provide generic function for reporting missing capabilities (Jonathan Cameron)
- rename cxl_pci_setup_memdev_regs to cxl_pci_accel_setup_memdev_regs (Jonathan Cameron)
- cxl_dpa_info size to be set by the Type2 driver (Jonathan Cameron)
- avoiding rc variable when possible (Jonathan Cameron)
- fix spelling (Simon Horman)
- use scoped_guard (Dave Jiang)
- use enum instead of bool (Dave Jiang)
- dropping patch with hardware symbols
v12 changes:
- use new macro cxl_dev_state_create in pci driver (Ben Cheatham)
- add public/private sections in now exported cxl_dev_state struct (Ben
Cheatham)
- fix cxl/pci.h regarding file name for checking if defined
- Clarify capabilities found vs expected in error message. (Ben
Cheatham)
- Clarify new CXL_DECODER_F flag (Ben Cheatham)
- Fix changes about cxl memdev creation support moving code to the
proper patch. (Ben Cheatham)
- Avoid debug and function duplications (Ben Cheatham)
v11 changes:
- Dropping the use of cxl_memdev_state and going back to using
cxl_dev_state.
- Using a helper for an accel driver to allocate its own cxl-related
struct embedding cxl_dev_state.
- Exporting the required structs in include/cxl/cxl.h so an accel
driver can know the cxl_dev_state size required in the previously
mentioned allocation helper.
- Avoid using any struct for dpa initialization by the accel driver
adding a specific function for creating dpa partitions by accel
drivers without a mailbox.
v10 changes:
- Using cxl_memdev_state instead of cxl_dev_state for type2, which has
memory after all, facilitating the setup.
- Adapt core to use cxl_memdev_state, allowing accel drivers to work
with it without further awareness of internal cxl structs.
- Using the latest DPA changes for creating DPA partitions, with the
accel driver hardcoding mds values when there is no mailbox.
- capabilities is not a new field but is built up when the current
register mapping is performed, and is returned to the caller for checking.
- HPA free space supporting interleaving.
- DPA free space dropping max-min for a simple alloc size.
v9 changes:
- adding forward definitions (Jonathan Cameron)
- using set_bit instead of bitmap_set (Jonathan Cameron)
- fix rebase problem (Jonathan Cameron)
- Improve error path (Jonathan Cameron)
- fix build problems with cxl region dependency (robot)
- fix error path (Simon Horman)
v8 changes:
- Change error path labeling inside sfc cxl code (Edward Cree)
- Properly handling checks and error in sfc cxl code (Simon Horman)
- Fix bug when checking resource_size (Simon Horman)
- Avoid bisect problems reordering patches (Edward Cree)
- Fix buffer allocation size in sfc (Simon Horman)
v7 changes:
- fixing kernel test robot complaints
- fix typo with Type3 mandatory capabilities (Zhi Wang)
- optimize code in cxl_request_resource (Kalesh Anakkur Purayil)
- add sanity check when dealing with resource arithmetic (Fan Ni)
- fix typos and blank lines (Fan Ni)
- keep previous log errors/warnings in sfc driver (Martin Habets)
- add WARN_ON_ONCE if region given is NULL
v6 changes:
- update sfc mcdi_pcol.h with full hardware changes, most not related to
this patchset. This file is automatically generated from hardware design
changes and not touched by software. It is updated from time to time,
and the sfc driver CXL support required an update.
- remove CXL capabilities definitions not used by the patchset or
previous kernel code. (Dave Jiang, Jonathan Cameron)
- Use bitmap_subset instead of reinventing the wheel ... (Ben Cheatham)
- Use cxl_accel_memdev for new device_type created (Ben Cheatham)
- Fix construct_region use of rwsem (Zhi Wang)
- Obtain region range instead of region params (Alison Schofield, Dave
Jiang)
v5 changes:
- Fix SFC configuration based on kernel CXL configuration
- Add subset check for capabilities.
- fix region creation when HDM decoders are programmed by firmware/BIOS (Ben
Cheatham)
- Add option for creating dax region based on driver decision (Ben
Cheatham)
- Using sfc probe_data struct for keeping sfc cxl data
v4 changes:
- Use bitmap for capabilities new field (Jonathan Cameron)
- Use cxl_mem attributes for sysfs based on device type (Dave Jiang)
- Add conditional cxl sfc compilation relying on kernel CXL config (kernel test robot)
- Add sfc changes in different patches for facilitating backport (Jonathan Cameron)
- Remove patch for dealing with cxl modules dependencies and using sfc kconfig plus
MODULE_SOFTDEP instead.
v3 changes:
- cxl_dev_state not defined as opaque but only manipulated by accel drivers
through accessors.
- accessors names not identified as only for accel drivers.
- move pci code from pci driver (drivers/cxl/pci.c) to generic pci code
(drivers/cxl/core/pci.c).
- capabilities field from u8 to u32 and initialised by CXL regs discovering
code.
- add capabilities check and removing current check by CXL regs discovering
code.
- Not fail if CXL Device Registers not found. Not mandatory for Type2.
- add timeout in acquire_endpoint for solving a race with the endpoint port
creation.
- handle EPROBE_DEFER by sfc driver.
- Limiting interleave ways to 1 for accel driver HPA/DPA requests.
- factoring out interleave ways and granularity helpers from type2 region
creation patch.
- restricting region_creation for type2 to one endpoint decoder.
v2 changes:
I have removed the introduction about the concerns with BIOS/UEFI after the
discussion, which confirmed the need for the functionality implemented, at
least in some scenarios.
There are two main changes from the RFC:
1) Following concerns about drivers using the CXL core without restrictions, the CXL
structs to work with are opaque to those drivers, so functions are
implemented for modifying or reading those structs indirectly.
2) The driver for using the added functionality is not a test driver but a real
one: the SFC ethernet network driver. It uses the CXL region mapped for PIO
buffers instead of regions inside PCIe BARs.
RFC:
Current CXL kernel code is focused on supporting Type3 CXL devices, aka memory
expanders. Type2 CXL devices, aka device accelerators, share some functionalities
but require some special handling.
First of all, Type2 devices are by definition specific to drivers doing something beyond
memory expansion, so they are expected to work with the CXL specifics. This implies the CXL
setup needs to be done by such a driver instead of by a generic CXL PCI driver
as for memory expanders. Most of that setup needs to use current CXL core code
and therefore needs to be accessible to those vendor drivers. This is accomplished by
exporting opaque CXL structs and by adding and exporting functions for working with
those structs indirectly.
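The opaque-struct pattern can be sketched roughly as follows; all the accel_* names are hypothetical placeholders, not the actual exported API (see the patches for the real symbols):

```c
/*
 * Hypothetical sketch only: the accel driver never dereferences
 * struct cxl_dev_state; it holds a pointer and calls exported
 * helpers. All accel_* names here are illustrative.
 */
struct cxl_dev_state;			/* opaque to the accel driver */

static int accel_probe(struct pci_dev *pdev)
{
	struct cxl_dev_state *cxlds;

	/* allocation helper exported by the cxl core */
	cxlds = accel_cxl_dev_state_create(&pdev->dev);
	if (IS_ERR(cxlds))
		return PTR_ERR(cxlds);

	/* fields are modified/read only through exported accessors */
	accel_cxl_set_serial(cxlds, pdev->dev.id);

	return 0;
}
```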
Some of the patches are based on a patchset sent by Dan Williams [1] which was only
partially integrated, mostly the parts making things ready for Type2 but none
related to specific Type2 support. Those patches based on Dan's work carry Dan's
signing as co-developer, and a link to the original patch.
A final note about CXL.cache is needed. This patchset does not cover it at all,
although the emulated Type2 device advertises it. From the kernel point of view,
supporting CXL.cache implies making sure the CXL path supports what the Type2
device needs. A device accelerator will likely be connected to a Root Switch,
but other configurations cannot be ruled out. Therefore the kernel will need to
check not just HPA, DPA, interleave and granularity, but also the available
CXL.cache support and resources in each switch in the CXL path to the Type2
device. I expect to contribute this support in the following months, and
it would be good to discuss it when possible.
[1] https://lore.kernel.org/linux-cxl/98b1f61a-e6c2-71d4-c368-50d958501b0c@intel.com/T/
Alejandro Lucero (22):
cxl: Add type2 device basic support
sfc: add cxl support
cxl: Move pci generic code
cxl/sfc: Map cxl component regs
cxl/sfc: Initialize dpa without a mailbox
cxl: Prepare memdev creation for type2
sfc: create type2 cxl memdev
cxl/hdm: Add support for getting region from committed decoder
cxl: Add function for obtaining region range
cxl: Export functions for unwinding cxl by accelerators
sfc: obtain decoder and region if committed by firmware
cxl: Define a driver interface for HPA free space enumeration
sfc: get root decoder
cxl: Define a driver interface for DPA allocation
sfc: get endpoint decoder
cxl: Make region type based on endpoint type
cxl/region: Factor out interleave ways setup
cxl/region: Factor out interleave granularity setup
cxl: Allow region creation by type2 drivers
cxl: Avoid dax creation for accelerators
sfc: create cxl region
sfc: support pio mapping based on cxl
Dan Williams (3):
cxl/mem: Arrange for always-synchronous memdev attach
cxl/port: Arrange for always synchronous endpoint attach
cxl/mem: Introduce a memdev creation ->probe() operation
drivers/cxl/Kconfig | 4 +-
drivers/cxl/core/core.h | 10 +-
drivers/cxl/core/hdm.c | 128 ++++++++
drivers/cxl/core/mbox.c | 63 +---
drivers/cxl/core/memdev.c | 207 +++++++++----
drivers/cxl/core/pci.c | 63 ++++
drivers/cxl/core/pci_drv.c | 87 +-----
drivers/cxl/core/port.c | 1 +
drivers/cxl/core/region.c | 422 +++++++++++++++++++++++---
drivers/cxl/core/regs.c | 2 +-
drivers/cxl/cxl.h | 125 +-------
drivers/cxl/cxlmem.h | 90 +-----
drivers/cxl/cxlpci.h | 21 +-
drivers/cxl/mem.c | 145 +++++----
drivers/cxl/port.c | 41 +++
drivers/cxl/private.h | 16 +
drivers/net/ethernet/sfc/Kconfig | 10 +
drivers/net/ethernet/sfc/Makefile | 1 +
drivers/net/ethernet/sfc/ef10.c | 50 ++-
drivers/net/ethernet/sfc/efx.c | 15 +-
drivers/net/ethernet/sfc/efx_cxl.c | 192 ++++++++++++
drivers/net/ethernet/sfc/efx_cxl.h | 41 +++
drivers/net/ethernet/sfc/net_driver.h | 12 +
drivers/net/ethernet/sfc/nic.h | 3 +
include/cxl/cxl.h | 296 ++++++++++++++++++
include/cxl/pci.h | 21 ++
tools/testing/cxl/test/mem.c | 5 +-
27 files changed, 1549 insertions(+), 522 deletions(-)
create mode 100644 drivers/cxl/private.h
create mode 100644 drivers/net/ethernet/sfc/efx_cxl.c
create mode 100644 drivers/net/ethernet/sfc/efx_cxl.h
create mode 100644 include/cxl/cxl.h
create mode 100644 include/cxl/pci.h
base-commit: 211ddde0823f1442e4ad052a2f30f050145ccada
prerequisite-patch-id: f8f1003c82226bdbd967c0755c41d6602f35884f
prerequisite-patch-id: 8bccb1a750b00b11bfc347f3f2e1a162990f6275
prerequisite-patch-id: d9142fe7f0c216b3ea219847b9514b5997df63be
prerequisite-patch-id: bbba5b3224f0c6a0a331769652e5d6a0a3c28934
prerequisite-patch-id: 7c9fa56417d63fdb17a09abf932de8048c5b334b
prerequisite-patch-id: f418c5b2aea8b65520742f750f4b79f8cf4f0c90
prerequisite-patch-id: 9205c9a8b15f9571c6ecf9ef46b526ac8c9d9b33
prerequisite-patch-id: 7390649b7e6b0c0628de8403d46a5047e1e12417
prerequisite-patch-id: 70e95c74c1777b9e281ba54add0024746f5ff5e1
prerequisite-patch-id: 5a2273b31ad4755e14fc8bca28362f2bff54a909
prerequisite-patch-id: e9dc88f1b91dce5dc3d46ff2b5bf184aba06439d
prerequisite-patch-id: 0c5c038156ff28f810a63cd08ddab7867619af23
prerequisite-patch-id: 7e719ed404f664ee8d9b98d56f58326f55ea2175
prerequisite-patch-id: ad0c7b6122a0398a2654c92ab0c0527cb8a968c6
prerequisite-patch-id: c2829969f73d41d63b50983b92fef4cf72f87d03
prerequisite-patch-id: e1d0d259bd20b59cd9dff76880f6214e88c1fe32
prerequisite-patch-id: db84a3b9aefceef39764452998967f7aef0a3796
prerequisite-patch-id: cfb91a38e8c55201344eda86b730c0991ab8d79e
prerequisite-patch-id: 9889b65c6eff79af627158dac6cfe67f2b10fc21
prerequisite-patch-id: a4e751c90817a7d5016f7840f64185108fe4393b
prerequisite-patch-id: e90c5457d242847534b1c7f657541ecc7c72f23a
prerequisite-patch-id: 16f41d388ef33e355d90b9a38d1bacfa9f5740d4
prerequisite-patch-id: 8654e54082d6dba5d83dfdfb2bc2fd85b12d4a12
prerequisite-patch-id: 1afa817cac87367bea6af9d6eed8582b070d8424
prerequisite-patch-id: f5c386200140e5b90cbe5914dba04076cbb79d2f
--
2.34.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH v22 01/25] cxl/mem: Arrange for always-synchronous memdev attach
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Dan Williams
From: Dan Williams <dan.williams@intel.com>
In preparation for CXL accelerator drivers that have a hard dependency on
CXL capability initialization, arrange for the endpoint probe result to be
conveyed to the caller of devm_cxl_add_memdev().
As it stands cxl_pci does not care about the attach state of the cxl_memdev
because all generic memory expansion functionality can be handled by the
cxl_core. For accelerators, that driver needs to know perform driver
specific initialization if CXL is available, or exectute a fallback to PCIe
only operation.
Moving devm_cxl_add_memdev() to cxl_mem.ko removes async module
loading as one reason that a memdev may not be attached upon return from
devm_cxl_add_memdev().
The diff is busy as this moves cxl_memdev_alloc() down below the definition
of cxl_memdev_fops and introduces devm_cxl_memdev_add_or_reset() to
preclude needing to export more symbols from the cxl_core.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/cxl/Kconfig | 4 +-
drivers/cxl/core/memdev.c | 97 ++++++++++++++++-----------------------
drivers/cxl/mem.c | 30 ++++++++++++
drivers/cxl/private.h | 11 +++++
4 files changed, 83 insertions(+), 59 deletions(-)
create mode 100644 drivers/cxl/private.h
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 360c78fa7e97..94a3102ce86b 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
- tristate "CXL (Compute Express Link) Devices Support"
+ bool "CXL (Compute Express Link) Devices Support"
depends on PCI
select FW_LOADER
select FW_UPLOAD
@@ -22,6 +22,7 @@ if CXL_BUS
config CXL_PCI
bool "PCI manageability"
default CXL_BUS
+ select CXL_MEM
help
The CXL specification defines a "CXL memory device" sub-class in the
PCI "memory controller" base class of devices. Device's identified by
@@ -89,7 +90,6 @@ config CXL_PMEM
config CXL_MEM
tristate "CXL: Memory Expansion"
- depends on CXL_PCI
default CXL_BUS
help
The CXL.mem protocol allows a device to act as a provider of "System
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index e370d733e440..3152e9ef41fc 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -8,6 +8,7 @@
#include <linux/idr.h>
#include <linux/pci.h>
#include <cxlmem.h>
+#include "private.h"
#include "trace.h"
#include "core.h"
@@ -648,42 +649,29 @@ static void detach_memdev(struct work_struct *work)
static struct lock_class_key cxl_memdev_key;
-static struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
- const struct file_operations *fops)
+struct cxl_memdev *devm_cxl_memdev_add_or_reset(struct device *host,
+ struct cxl_memdev *cxlmd)
{
- struct cxl_memdev *cxlmd;
- struct device *dev;
- struct cdev *cdev;
+ struct device *dev = &cxlmd->dev;
+ struct cdev *cdev = &cxlmd->cdev;
int rc;
- cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
- if (!cxlmd)
- return ERR_PTR(-ENOMEM);
-
- rc = ida_alloc_max(&cxl_memdev_ida, CXL_MEM_MAX_DEVS - 1, GFP_KERNEL);
- if (rc < 0)
- goto err;
- cxlmd->id = rc;
- cxlmd->depth = -1;
-
- dev = &cxlmd->dev;
- device_initialize(dev);
- lockdep_set_class(&dev->mutex, &cxl_memdev_key);
- dev->parent = cxlds->dev;
- dev->bus = &cxl_bus_type;
- dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
- dev->type = &cxl_memdev_type;
- device_set_pm_not_required(dev);
- INIT_WORK(&cxlmd->detach_work, detach_memdev);
-
- cdev = &cxlmd->cdev;
- cdev_init(cdev, fops);
+ rc = cdev_device_add(cdev, dev);
+ if (rc) {
+ /*
+ * The cdev was briefly live, shutdown any ioctl operations that
+ * saw that state.
+ */
+ cxl_memdev_shutdown(dev);
+ put_device(dev);
+ return ERR_PTR(rc);
+ }
+ rc = devm_add_action_or_reset(host, cxl_memdev_unregister, cxlmd);
+ if (rc)
+ return ERR_PTR(rc);
return cxlmd;
-
-err:
- kfree(cxlmd);
- return ERR_PTR(rc);
}
+EXPORT_SYMBOL_NS_GPL(devm_cxl_memdev_add_or_reset, "CXL");
static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
unsigned long arg)
@@ -1051,50 +1039,45 @@ static const struct file_operations cxl_memdev_fops = {
.llseek = noop_llseek,
};
-struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
- struct cxl_dev_state *cxlds)
+struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds)
{
struct cxl_memdev *cxlmd;
struct device *dev;
struct cdev *cdev;
int rc;
- cxlmd = cxl_memdev_alloc(cxlds, &cxl_memdev_fops);
- if (IS_ERR(cxlmd))
- return cxlmd;
+ cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
+ if (!cxlmd)
+ return ERR_PTR(-ENOMEM);
- dev = &cxlmd->dev;
- rc = dev_set_name(dev, "mem%d", cxlmd->id);
- if (rc)
+ rc = ida_alloc_max(&cxl_memdev_ida, CXL_MEM_MAX_DEVS - 1, GFP_KERNEL);
+ if (rc < 0)
goto err;
- /*
- * Activate ioctl operations, no cxl_memdev_rwsem manipulation
- * needed as this is ordered with cdev_add() publishing the device.
- */
+ cxlmd->id = rc;
+ cxlmd->depth = -1;
cxlmd->cxlds = cxlds;
cxlds->cxlmd = cxlmd;
- cdev = &cxlmd->cdev;
- rc = cdev_device_add(cdev, dev);
- if (rc)
- goto err;
+ dev = &cxlmd->dev;
+ device_initialize(dev);
+ lockdep_set_class(&dev->mutex, &cxl_memdev_key);
+ dev->parent = cxlds->dev;
+ dev->bus = &cxl_bus_type;
+ dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
+ dev->type = &cxl_memdev_type;
+ device_set_pm_not_required(dev);
+ INIT_WORK(&cxlmd->detach_work, detach_memdev);
- rc = devm_add_action_or_reset(host, cxl_memdev_unregister, cxlmd);
- if (rc)
- return ERR_PTR(rc);
+ cdev = &cxlmd->cdev;
+ cdev_init(cdev, &cxl_memdev_fops);
return cxlmd;
err:
- /*
- * The cdev was briefly live, shutdown any ioctl operations that
- * saw that state.
- */
- cxl_memdev_shutdown(dev);
- put_device(dev);
+ kfree(cxlmd);
return ERR_PTR(rc);
}
-EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, "CXL");
+EXPORT_SYMBOL_NS_GPL(cxl_memdev_alloc, "CXL");
static void sanitize_teardown_notifier(void *data)
{
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index d2155f45240d..ac354fee704c 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -7,6 +7,7 @@
#include "cxlmem.h"
#include "cxlpci.h"
+#include "private.h"
/**
* DOC: cxl mem
@@ -202,6 +203,34 @@ static int cxl_mem_probe(struct device *dev)
return devm_add_action_or_reset(dev, enable_suspend, NULL);
}
+/**
+ * devm_cxl_add_memdev - Add a CXL memory device
+ * @host: devres alloc/release context and parent for the memdev
+ * @cxlds: CXL device state to associate with the memdev
+ *
+ * Upon return the device will have had a chance to attach to the
+ * cxl_mem driver, but may fail if the CXL topology is not ready
+ * (hardware CXL link down, or software platform CXL root not attached)
+ */
+struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
+ struct cxl_dev_state *cxlds)
+{
+ struct cxl_memdev *cxlmd = cxl_memdev_alloc(cxlds);
+ int rc;
+
+ if (IS_ERR(cxlmd))
+ return cxlmd;
+
+ rc = dev_set_name(&cxlmd->dev, "mem%d", cxlmd->id);
+ if (rc) {
+ put_device(&cxlmd->dev);
+ return ERR_PTR(rc);
+ }
+
+ return devm_cxl_memdev_add_or_reset(host, cxlmd);
+}
+EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, "CXL");
+
static ssize_t trigger_poison_list_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
@@ -250,6 +279,7 @@ static struct cxl_driver cxl_mem_driver = {
.id = CXL_DEVICE_MEMORY_EXPANDER,
.drv = {
.dev_groups = cxl_mem_groups,
+ .probe_type = PROBE_FORCE_SYNCHRONOUS,
},
};
diff --git a/drivers/cxl/private.h b/drivers/cxl/private.h
new file mode 100644
index 000000000000..eff425822af3
--- /dev/null
+++ b/drivers/cxl/private.h
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2025 Intel Corporation. */
+
+/* Private interfaces betwen common drivers ("cxl_mem") and the cxl_core */
+
+#ifndef __CXL_PRIVATE_H__
+#define __CXL_PRIVATE_H__
+struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds);
+struct cxl_memdev *devm_cxl_memdev_add_or_reset(struct device *host,
+ struct cxl_memdev *cxlmd);
+#endif /* __CXL_PRIVATE_H__ */
--
2.34.1
* [PATCH v22 02/25] cxl/port: Arrange for always synchronous endpoint attach
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
From: Dan Williams <dan.j.williams@intel.com>
Make it so that, upon return from devm_cxl_add_endpoint(), cxl_mem_probe() can
assume that the endpoint has had a chance to complete cxl_port_probe(),
i.e. cxl_port module loading has completed prior to device registration.
MODULE_SOFTDEP() is not sufficient for this purpose, but a hard link-time
dependency is reliable.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/cxl/mem.c | 43 -------------------------------------------
drivers/cxl/port.c | 41 +++++++++++++++++++++++++++++++++++++++++
drivers/cxl/private.h | 8 ++++++--
3 files changed, 47 insertions(+), 45 deletions(-)
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index ac354fee704c..8569c01bf3c2 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -46,44 +46,6 @@ static int cxl_mem_dpa_show(struct seq_file *file, void *data)
return 0;
}
-static int devm_cxl_add_endpoint(struct device *host, struct cxl_memdev *cxlmd,
- struct cxl_dport *parent_dport)
-{
- struct cxl_port *parent_port = parent_dport->port;
- struct cxl_port *endpoint, *iter, *down;
- int rc;
-
- /*
- * Now that the path to the root is established record all the
- * intervening ports in the chain.
- */
- for (iter = parent_port, down = NULL; !is_cxl_root(iter);
- down = iter, iter = to_cxl_port(iter->dev.parent)) {
- struct cxl_ep *ep;
-
- ep = cxl_ep_load(iter, cxlmd);
- ep->next = down;
- }
-
- /* Note: endpoint port component registers are derived from @cxlds */
- endpoint = devm_cxl_add_port(host, &cxlmd->dev, CXL_RESOURCE_NONE,
- parent_dport);
- if (IS_ERR(endpoint))
- return PTR_ERR(endpoint);
-
- rc = cxl_endpoint_autoremove(cxlmd, endpoint);
- if (rc)
- return rc;
-
- if (!endpoint->dev.driver) {
- dev_err(&cxlmd->dev, "%s failed probe\n",
- dev_name(&endpoint->dev));
- return -ENXIO;
- }
-
- return 0;
-}
-
static int cxl_debugfs_poison_inject(void *data, u64 dpa)
{
struct cxl_memdev *cxlmd = data;
@@ -289,8 +251,3 @@ MODULE_DESCRIPTION("CXL: Memory Expansion");
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS("CXL");
MODULE_ALIAS_CXL(CXL_DEVICE_MEMORY_EXPANDER);
-/*
- * create_endpoint() wants to validate port driver attach immediately after
- * endpoint registration.
- */
-MODULE_SOFTDEP("pre: cxl_port");
diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
index 51c8f2f84717..ef65d983e1c8 100644
--- a/drivers/cxl/port.c
+++ b/drivers/cxl/port.c
@@ -6,6 +6,7 @@
#include "cxlmem.h"
#include "cxlpci.h"
+#include "private.h"
/**
* DOC: cxl port
@@ -156,10 +157,50 @@ static struct cxl_driver cxl_port_driver = {
.probe = cxl_port_probe,
.id = CXL_DEVICE_PORT,
.drv = {
+ .probe_type = PROBE_FORCE_SYNCHRONOUS,
.dev_groups = cxl_port_attribute_groups,
},
};
+int devm_cxl_add_endpoint(struct device *host, struct cxl_memdev *cxlmd,
+ struct cxl_dport *parent_dport)
+{
+ struct cxl_port *parent_port = parent_dport->port;
+ struct cxl_port *endpoint, *iter, *down;
+ int rc;
+
+ /*
+ * Now that the path to the root is established record all the
+ * intervening ports in the chain.
+ */
+ for (iter = parent_port, down = NULL; !is_cxl_root(iter);
+ down = iter, iter = to_cxl_port(iter->dev.parent)) {
+ struct cxl_ep *ep;
+
+ ep = cxl_ep_load(iter, cxlmd);
+ ep->next = down;
+ }
+
+ /* Note: endpoint port component registers are derived from @cxlds */
+ endpoint = devm_cxl_add_port(host, &cxlmd->dev, CXL_RESOURCE_NONE,
+ parent_dport);
+ if (IS_ERR(endpoint))
+ return PTR_ERR(endpoint);
+
+ rc = cxl_endpoint_autoremove(cxlmd, endpoint);
+ if (rc)
+ return rc;
+
+ if (!endpoint->dev.driver) {
+ dev_err(&cxlmd->dev, "%s failed probe\n",
+ dev_name(&endpoint->dev));
+ return -ENXIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(devm_cxl_add_endpoint, "CXL");
+
static int __init cxl_port_init(void)
{
return cxl_driver_register(&cxl_port_driver);
diff --git a/drivers/cxl/private.h b/drivers/cxl/private.h
index eff425822af3..93ff0101dd4b 100644
--- a/drivers/cxl/private.h
+++ b/drivers/cxl/private.h
@@ -1,11 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2025 Intel Corporation. */
-/* Private interfaces betwen common drivers ("cxl_mem") and the cxl_core */
-
+/*
+ * Private interfaces betwen common drivers ("cxl_mem", "cxl_port") and
+ * the cxl_core.
+ */
#ifndef __CXL_PRIVATE_H__
#define __CXL_PRIVATE_H__
struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds);
struct cxl_memdev *devm_cxl_memdev_add_or_reset(struct device *host,
struct cxl_memdev *cxlmd);
+int devm_cxl_add_endpoint(struct device *host, struct cxl_memdev *cxlmd,
+ struct cxl_dport *parent_dport);
#endif /* __CXL_PRIVATE_H__ */
--
2.34.1
* [PATCH v22 03/25] cxl/mem: Introduce a memdev creation ->probe() operation
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
From: Dan Williams <dan.j.williams@intel.com>
Allow a driver to pass a routine to be called in cxl_mem_probe()
context. This ability mirrors the semantics of faux_device_create() and
allows the caller to run CXL-topology-attach dependent logic.
This capability is needed for CXL accelerator device drivers that need to
make decisions about enabling CXL dependent functionality in the device, or
falling back to PCIe-only operation.
The probe callback runs after the port topology is successfully attached
for the given memdev.
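A minimal sketch of how an accelerator driver might wire this up, based on the cxl_memdev_ops and devm_cxl_add_memdev() signatures in this patch; the my_accel_* names are hypothetical:

```c
/*
 * Hypothetical accelerator usage: only the cxl_memdev_ops layout and
 * the devm_cxl_add_memdev() signature come from this patch; the
 * my_accel_* helpers are illustrative.
 */
static int my_accel_cxl_probe(struct cxl_memdev *cxlmd)
{
	/* runs in cxl_mem_probe() context, after port topology attach */
	return my_accel_enable_cxl(cxlmd);
}

static const struct cxl_memdev_ops my_accel_memdev_ops = {
	.probe = my_accel_cxl_probe,
};

/* in the accelerator's PCI probe path */
cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlds, &my_accel_memdev_ops);
if (IS_ERR(cxlmd))
	my_accel_pcie_fallback(pdev);	/* CXL unavailable, PCIe-only */
```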
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/cxl/core/memdev.c | 5 ++++-
drivers/cxl/core/pci_drv.c | 2 +-
drivers/cxl/cxlmem.h | 10 +++++++++-
drivers/cxl/mem.c | 33 ++++++++++++++++++++++++++++++---
drivers/cxl/private.h | 3 ++-
tools/testing/cxl/test/mem.c | 2 +-
6 files changed, 47 insertions(+), 8 deletions(-)
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 3152e9ef41fc..fd64f558c8fd 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -1039,7 +1039,8 @@ static const struct file_operations cxl_memdev_fops = {
.llseek = noop_llseek,
};
-struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds)
+struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
+ const struct cxl_memdev_ops *ops)
{
struct cxl_memdev *cxlmd;
struct device *dev;
@@ -1056,6 +1057,8 @@ struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds)
cxlmd->id = rc;
cxlmd->depth = -1;
+ cxlmd->ops = ops;
+ cxlmd->endpoint = ERR_PTR(-ENXIO);
cxlmd->cxlds = cxlds;
cxlds->cxlmd = cxlmd;
diff --git a/drivers/cxl/core/pci_drv.c b/drivers/cxl/core/pci_drv.c
index bc3c959f7eb6..f43590062efd 100644
--- a/drivers/cxl/core/pci_drv.c
+++ b/drivers/cxl/core/pci_drv.c
@@ -1007,7 +1007,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (rc)
dev_dbg(&pdev->dev, "No CXL Features discovered\n");
- cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlds);
+ cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlds, NULL);
if (IS_ERR(cxlmd))
return PTR_ERR(cxlmd);
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 434031a0c1f7..63b1957fddda 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -34,6 +34,10 @@
(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) != \
CXLMDEV_RESET_NEEDED_NOT)
+struct cxl_memdev_ops {
+ int (*probe)(struct cxl_memdev *cxlmd);
+};
+
/**
* struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
* @dev: driver core device object
@@ -43,6 +47,7 @@
* @cxl_nvb: coordinate removal of @cxl_nvd if present
* @cxl_nvd: optional bridge to an nvdimm if the device supports pmem
* @endpoint: connection to the CXL port topology for this memory device
+ * @ops: optional caller-specific probe routine
* @id: id number of this memdev instance.
* @depth: endpoint port depth
* @scrub_cycle: current scrub cycle set for this device
@@ -59,6 +64,7 @@ struct cxl_memdev {
struct cxl_nvdimm_bridge *cxl_nvb;
struct cxl_nvdimm *cxl_nvd;
struct cxl_port *endpoint;
+ const struct cxl_memdev_ops *ops;
int id;
int depth;
u8 scrub_cycle;
@@ -96,7 +102,9 @@ static inline bool is_cxl_endpoint(struct cxl_port *port)
}
struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
- struct cxl_dev_state *cxlds);
+ struct cxl_dev_state *cxlds,
+ const struct cxl_memdev_ops *ops);
+
int devm_cxl_sanitize_setup_notifier(struct device *host,
struct cxl_memdev *cxlmd);
struct cxl_memdev_state;
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 8569c01bf3c2..b36d8bb812a3 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -144,6 +144,12 @@ static int cxl_mem_probe(struct device *dev)
return rc;
}
+ if (cxlmd->ops) {
+ rc = cxlmd->ops->probe(cxlmd);
+ if (rc)
+ return rc;
+ }
+
rc = devm_cxl_memdev_edac_register(cxlmd);
if (rc)
dev_dbg(dev, "CXL memdev EDAC registration failed rc=%d\n", rc);
@@ -169,15 +175,17 @@ static int cxl_mem_probe(struct device *dev)
* devm_cxl_add_memdev - Add a CXL memory device
* @host: devres alloc/release context and parent for the memdev
* @cxlds: CXL device state to associate with the memdev
+ * @ops: optional operations to run in cxl_mem::{probe,remove}() context
*
* Upon return the device will have had a chance to attach to the
* cxl_mem driver, but may fail if the CXL topology is not ready
* (hardware CXL link down, or software platform CXL root not attached)
*/
struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
- struct cxl_dev_state *cxlds)
+ struct cxl_dev_state *cxlds,
+ const struct cxl_memdev_ops *ops)
{
- struct cxl_memdev *cxlmd = cxl_memdev_alloc(cxlds);
+ struct cxl_memdev *cxlmd = cxl_memdev_alloc(cxlds, ops);
int rc;
if (IS_ERR(cxlmd))
@@ -189,7 +197,26 @@ struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
return ERR_PTR(rc);
}
- return devm_cxl_memdev_add_or_reset(host, cxlmd);
+ cxlmd = devm_cxl_memdev_add_or_reset(host, cxlmd);
+ if (IS_ERR(cxlmd))
+ return cxlmd;
+
+ /*
+ * If ops is provided fail if the driver is not attached upon
+ * return. The ->endpoint ERR_PTR may have a more precise error
+ * code to convey. Note that failure here could be the result of
+ * a race to teardown the CXL port topology. I.e.
+ * cxl_mem_probe() could have succeeded and then cxl_mem unbound
+ * before the lock is acquired.
+ */
+ guard(device)(&cxlmd->dev);
+ if (ops && !cxlmd->dev.driver) {
+ if (IS_ERR(cxlmd->endpoint))
+ return ERR_CAST(cxlmd->endpoint);
+ return ERR_PTR(-ENXIO);
+ }
+
+ return cxlmd;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, "CXL");
diff --git a/drivers/cxl/private.h b/drivers/cxl/private.h
index 93ff0101dd4b..167a538efd18 100644
--- a/drivers/cxl/private.h
+++ b/drivers/cxl/private.h
@@ -7,7 +7,8 @@
*/
#ifndef __CXL_PRIVATE_H__
#define __CXL_PRIVATE_H__
-struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds);
+struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
+ const struct cxl_memdev_ops *ops);
struct cxl_memdev *devm_cxl_memdev_add_or_reset(struct device *host,
struct cxl_memdev *cxlmd);
int devm_cxl_add_endpoint(struct device *host, struct cxl_memdev *cxlmd,
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index d533481672b7..33d06ec5a4b9 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -1768,7 +1768,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
cxl_mock_add_event_logs(&mdata->mes);
- cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlds);
+ cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlds, NULL);
if (IS_ERR(cxlmd))
return PTR_ERR(cxlmd);
--
2.34.1
* [PATCH v22 04/25] cxl: Add type2 device basic support
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (2 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 03/25] cxl/mem: Introduce a memdev creation ->probe() operation alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 05/25] sfc: add cxl support alejandro.lucero-palau
` (21 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron, Alison Schofield,
Ben Cheatham
From: Alejandro Lucero <alucerop@amd.com>
Differentiate CXL memory expanders (type 3) from CXL device accelerators
(type 2) with a new function for initializing cxl_dev_state and a macro
helping accel drivers embed cxl_dev_state inside a private struct.
Move structs to include/cxl/ since an accel driver embedding
cxl_dev_state in its private struct needs to know the size of
cxl_dev_state.
Use the same new initialization in the type3 PCI driver.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
---
drivers/cxl/core/mbox.c | 12 +-
drivers/cxl/core/memdev.c | 32 +++++
drivers/cxl/core/pci_drv.c | 14 +--
drivers/cxl/cxl.h | 97 +--------------
drivers/cxl/cxlmem.h | 86 +------------
include/cxl/cxl.h | 226 +++++++++++++++++++++++++++++++++++
tools/testing/cxl/test/mem.c | 3 +-
7 files changed, 274 insertions(+), 196 deletions(-)
create mode 100644 include/cxl/cxl.h
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index fa6dd0c94656..bee84d0101d1 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1514,23 +1514,21 @@ int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host)
}
EXPORT_SYMBOL_NS_GPL(cxl_mailbox_init, "CXL");
-struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
+struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev, u64 serial,
+ u16 dvsec)
{
struct cxl_memdev_state *mds;
int rc;
- mds = devm_kzalloc(dev, sizeof(*mds), GFP_KERNEL);
+ mds = devm_cxl_dev_state_create(dev, CXL_DEVTYPE_CLASSMEM, serial,
+ dvsec, struct cxl_memdev_state, cxlds,
+ true);
if (!mds) {
dev_err(dev, "No memory available\n");
return ERR_PTR(-ENOMEM);
}
mutex_init(&mds->event.log_lock);
- mds->cxlds.dev = dev;
- mds->cxlds.reg_map.host = dev;
- mds->cxlds.cxl_mbox.host = dev;
- mds->cxlds.reg_map.resource = CXL_RESOURCE_NONE;
- mds->cxlds.type = CXL_DEVTYPE_CLASSMEM;
rc = devm_cxl_register_mce_notifier(dev, &mds->mce_notifier);
if (rc == -EOPNOTSUPP)
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index fd64f558c8fd..1dd6f0294030 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -649,6 +649,38 @@ static void detach_memdev(struct work_struct *work)
static struct lock_class_key cxl_memdev_key;
+static void cxl_dev_state_init(struct cxl_dev_state *cxlds, struct device *dev,
+ enum cxl_devtype type, u64 serial, u16 dvsec,
+ bool has_mbox)
+{
+ *cxlds = (struct cxl_dev_state) {
+ .dev = dev,
+ .type = type,
+ .serial = serial,
+ .cxl_dvsec = dvsec,
+ .reg_map.host = dev,
+ .reg_map.resource = CXL_RESOURCE_NONE,
+ };
+
+ if (has_mbox)
+ cxlds->cxl_mbox.host = dev;
+}
+
+struct cxl_dev_state *_devm_cxl_dev_state_create(struct device *dev,
+ enum cxl_devtype type,
+ u64 serial, u16 dvsec,
+ size_t size, bool has_mbox)
+{
+ struct cxl_dev_state *cxlds = devm_kzalloc(dev, size, GFP_KERNEL);
+
+ if (!cxlds)
+ return NULL;
+
+ cxl_dev_state_init(cxlds, dev, type, serial, dvsec, has_mbox);
+ return cxlds;
+}
+EXPORT_SYMBOL_NS_GPL(_devm_cxl_dev_state_create, "CXL");
+
struct cxl_memdev *devm_cxl_memdev_add_or_reset(struct device *host,
struct cxl_memdev *cxlmd)
{
diff --git a/drivers/cxl/core/pci_drv.c b/drivers/cxl/core/pci_drv.c
index f43590062efd..b4b8350ba44d 100644
--- a/drivers/cxl/core/pci_drv.c
+++ b/drivers/cxl/core/pci_drv.c
@@ -912,6 +912,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
int rc, pmu_count;
unsigned int i;
bool irq_avail;
+ u16 dvsec;
/*
* Double check the anonymous union trickery in struct cxl_regs
@@ -925,19 +926,18 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return rc;
pci_set_master(pdev);
- mds = cxl_memdev_state_create(&pdev->dev);
+ dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
+ PCI_DVSEC_CXL_DEVICE);
+ if (!dvsec)
+ pci_warn(pdev, "Device DVSEC not present, skip CXL.mem init\n");
+
+ mds = cxl_memdev_state_create(&pdev->dev, pci_get_dsn(pdev), dvsec);
if (IS_ERR(mds))
return PTR_ERR(mds);
cxlds = &mds->cxlds;
pci_set_drvdata(pdev, cxlds);
cxlds->rcd = is_cxl_restricted(pdev);
- cxlds->serial = pci_get_dsn(pdev);
- cxlds->cxl_dvsec = pci_find_dvsec_capability(
- pdev, PCI_VENDOR_ID_CXL, PCI_DVSEC_CXL_DEVICE);
- if (!cxlds->cxl_dvsec)
- dev_warn(&pdev->dev,
- "Device DVSEC not present, skip CXL.mem init\n");
rc = cxl_pci_setup_regs(pdev, CXL_REGLOC_RBI_MEMDEV, &map);
if (rc)
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index b7654d40dc9e..1517250b0ec2 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -12,6 +12,7 @@
#include <linux/node.h>
#include <linux/io.h>
#include <linux/range.h>
+#include <cxl/cxl.h>
extern const struct nvdimm_security_ops *cxl_security_ops;
@@ -201,97 +202,6 @@ static inline int ways_to_eiw(unsigned int ways, u8 *eiw)
#define CXLDEV_MBOX_BG_CMD_COMMAND_VENDOR_MASK GENMASK_ULL(63, 48)
#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20
-/*
- * Using struct_group() allows for per register-block-type helper routines,
- * without requiring block-type agnostic code to include the prefix.
- */
-struct cxl_regs {
- /*
- * Common set of CXL Component register block base pointers
- * @hdm_decoder: CXL 2.0 8.2.5.12 CXL HDM Decoder Capability Structure
- * @ras: CXL 2.0 8.2.5.9 CXL RAS Capability Structure
- */
- struct_group_tagged(cxl_component_regs, component,
- void __iomem *hdm_decoder;
- void __iomem *ras;
- );
- /*
- * Common set of CXL Device register block base pointers
- * @status: CXL 2.0 8.2.8.3 Device Status Registers
- * @mbox: CXL 2.0 8.2.8.4 Mailbox Registers
- * @memdev: CXL 2.0 8.2.8.5 Memory Device Registers
- */
- struct_group_tagged(cxl_device_regs, device_regs,
- void __iomem *status, *mbox, *memdev;
- );
-
- struct_group_tagged(cxl_pmu_regs, pmu_regs,
- void __iomem *pmu;
- );
-
- /*
- * RCH downstream port specific RAS register
- * @aer: CXL 3.0 8.2.1.1 RCH Downstream Port RCRB
- */
- struct_group_tagged(cxl_rch_regs, rch_regs,
- void __iomem *dport_aer;
- );
-
- /*
- * RCD upstream port specific PCIe cap register
- * @pcie_cap: CXL 3.0 8.2.1.2 RCD Upstream Port RCRB
- */
- struct_group_tagged(cxl_rcd_regs, rcd_regs,
- void __iomem *rcd_pcie_cap;
- );
-};
-
-struct cxl_reg_map {
- bool valid;
- int id;
- unsigned long offset;
- unsigned long size;
-};
-
-struct cxl_component_reg_map {
- struct cxl_reg_map hdm_decoder;
- struct cxl_reg_map ras;
-};
-
-struct cxl_device_reg_map {
- struct cxl_reg_map status;
- struct cxl_reg_map mbox;
- struct cxl_reg_map memdev;
-};
-
-struct cxl_pmu_reg_map {
- struct cxl_reg_map pmu;
-};
-
-/**
- * struct cxl_register_map - DVSEC harvested register block mapping parameters
- * @host: device for devm operations and logging
- * @base: virtual base of the register-block-BAR + @block_offset
- * @resource: physical resource base of the register block
- * @max_size: maximum mapping size to perform register search
- * @reg_type: see enum cxl_regloc_type
- * @component_map: cxl_reg_map for component registers
- * @device_map: cxl_reg_maps for device registers
- * @pmu_map: cxl_reg_maps for CXL Performance Monitoring Units
- */
-struct cxl_register_map {
- struct device *host;
- void __iomem *base;
- resource_size_t resource;
- resource_size_t max_size;
- u8 reg_type;
- union {
- struct cxl_component_reg_map component_map;
- struct cxl_device_reg_map device_map;
- struct cxl_pmu_reg_map pmu_map;
- };
-};
-
void cxl_probe_component_regs(struct device *dev, void __iomem *base,
struct cxl_component_reg_map *map);
void cxl_probe_device_regs(struct device *dev, void __iomem *base,
@@ -497,11 +407,6 @@ struct cxl_region_params {
resource_size_t cache_size;
};
-enum cxl_partition_mode {
- CXL_PARTMODE_RAM,
- CXL_PARTMODE_PMEM,
-};
-
/*
* Indicate whether this region has been assembled by autodetection or
* userspace assembly. Prevent endpoint decoders outside of automatic
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 63b1957fddda..05f4cb5aaed0 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -113,8 +113,6 @@ int devm_cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
resource_size_t base, resource_size_t len,
resource_size_t skipped);
-#define CXL_NR_PARTITIONS_MAX 2
-
struct cxl_dpa_info {
u64 size;
struct cxl_dpa_part_info {
@@ -373,87 +371,6 @@ struct cxl_security_state {
struct kernfs_node *sanitize_node;
};
-/*
- * enum cxl_devtype - delineate type-2 from a generic type-3 device
- * @CXL_DEVTYPE_DEVMEM - Vendor specific CXL Type-2 device implementing HDM-D or
- * HDM-DB, no requirement that this device implements a
- * mailbox, or other memory-device-standard manageability
- * flows.
- * @CXL_DEVTYPE_CLASSMEM - Common class definition of a CXL Type-3 device with
- * HDM-H and class-mandatory memory device registers
- */
-enum cxl_devtype {
- CXL_DEVTYPE_DEVMEM,
- CXL_DEVTYPE_CLASSMEM,
-};
-
-/**
- * struct cxl_dpa_perf - DPA performance property entry
- * @dpa_range: range for DPA address
- * @coord: QoS performance data (i.e. latency, bandwidth)
- * @cdat_coord: raw QoS performance data from CDAT
- * @qos_class: QoS Class cookies
- */
-struct cxl_dpa_perf {
- struct range dpa_range;
- struct access_coordinate coord[ACCESS_COORDINATE_MAX];
- struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
- int qos_class;
-};
-
-/**
- * struct cxl_dpa_partition - DPA partition descriptor
- * @res: shortcut to the partition in the DPA resource tree (cxlds->dpa_res)
- * @perf: performance attributes of the partition from CDAT
- * @mode: operation mode for the DPA capacity, e.g. ram, pmem, dynamic...
- */
-struct cxl_dpa_partition {
- struct resource res;
- struct cxl_dpa_perf perf;
- enum cxl_partition_mode mode;
-};
-
-/**
- * struct cxl_dev_state - The driver device state
- *
- * cxl_dev_state represents the CXL driver/device state. It provides an
- * interface to mailbox commands as well as some cached data about the device.
- * Currently only memory devices are represented.
- *
- * @dev: The device associated with this CXL state
- * @cxlmd: The device representing the CXL.mem capabilities of @dev
- * @reg_map: component and ras register mapping parameters
- * @regs: Parsed register blocks
- * @cxl_dvsec: Offset to the PCIe device DVSEC
- * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH)
- * @media_ready: Indicate whether the device media is usable
- * @dpa_res: Overall DPA resource tree for the device
- * @part: DPA partition array
- * @nr_partitions: Number of DPA partitions
- * @serial: PCIe Device Serial Number
- * @type: Generic Memory Class device or Vendor Specific Memory device
- * @cxl_mbox: CXL mailbox context
- * @cxlfs: CXL features context
- */
-struct cxl_dev_state {
- struct device *dev;
- struct cxl_memdev *cxlmd;
- struct cxl_register_map reg_map;
- struct cxl_regs regs;
- int cxl_dvsec;
- bool rcd;
- bool media_ready;
- struct resource dpa_res;
- struct cxl_dpa_partition part[CXL_NR_PARTITIONS_MAX];
- unsigned int nr_partitions;
- u64 serial;
- enum cxl_devtype type;
- struct cxl_mailbox cxl_mbox;
-#ifdef CONFIG_CXL_FEATURES
- struct cxl_features_state *cxlfs;
-#endif
-};
-
static inline resource_size_t cxl_pmem_size(struct cxl_dev_state *cxlds)
{
/*
@@ -858,7 +775,8 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds);
int cxl_await_media_ready(struct cxl_dev_state *cxlds);
int cxl_enumerate_cmds(struct cxl_memdev_state *mds);
int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info);
-struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev);
+struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev, u64 serial,
+ u16 dvsec);
void set_exclusive_cxl_commands(struct cxl_memdev_state *mds,
unsigned long *cmds);
void clear_exclusive_cxl_commands(struct cxl_memdev_state *mds,
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
new file mode 100644
index 000000000000..13d448686189
--- /dev/null
+++ b/include/cxl/cxl.h
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2020 Intel Corporation. */
+/* Copyright(c) 2025 Advanced Micro Devices, Inc. */
+
+#ifndef __CXL_CXL_H__
+#define __CXL_CXL_H__
+
+#include <linux/node.h>
+#include <linux/ioport.h>
+#include <cxl/mailbox.h>
+
+/**
+ * enum cxl_devtype - delineate type-2 from a generic type-3 device
+ * @CXL_DEVTYPE_DEVMEM: Vendor specific CXL Type-2 device implementing HDM-D or
+ * HDM-DB, no requirement that this device implements a
+ * mailbox, or other memory-device-standard manageability
+ * flows.
+ * @CXL_DEVTYPE_CLASSMEM: Common class definition of a CXL Type-3 device with
+ * HDM-H and class-mandatory memory device registers
+ */
+enum cxl_devtype {
+ CXL_DEVTYPE_DEVMEM,
+ CXL_DEVTYPE_CLASSMEM,
+};
+
+struct device;
+
+/*
+ * Using struct_group() allows for per register-block-type helper routines,
+ * without requiring block-type agnostic code to include the prefix.
+ */
+struct cxl_regs {
+ /*
+ * Common set of CXL Component register block base pointers
+ * @hdm_decoder: CXL 2.0 8.2.5.12 CXL HDM Decoder Capability Structure
+ * @ras: CXL 2.0 8.2.5.9 CXL RAS Capability Structure
+ */
+ struct_group_tagged(cxl_component_regs, component,
+ void __iomem *hdm_decoder;
+ void __iomem *ras;
+ );
+ /*
+ * Common set of CXL Device register block base pointers
+ * @status: CXL 2.0 8.2.8.3 Device Status Registers
+ * @mbox: CXL 2.0 8.2.8.4 Mailbox Registers
+ * @memdev: CXL 2.0 8.2.8.5 Memory Device Registers
+ */
+ struct_group_tagged(cxl_device_regs, device_regs,
+ void __iomem *status, *mbox, *memdev;
+ );
+
+ struct_group_tagged(cxl_pmu_regs, pmu_regs,
+ void __iomem *pmu;
+ );
+
+ /*
+ * RCH downstream port specific RAS register
+ * @aer: CXL 3.0 8.2.1.1 RCH Downstream Port RCRB
+ */
+ struct_group_tagged(cxl_rch_regs, rch_regs,
+ void __iomem *dport_aer;
+ );
+
+ /*
+ * RCD upstream port specific PCIe cap register
+ * @pcie_cap: CXL 3.0 8.2.1.2 RCD Upstream Port RCRB
+ */
+ struct_group_tagged(cxl_rcd_regs, rcd_regs,
+ void __iomem *rcd_pcie_cap;
+ );
+};
+
+struct cxl_reg_map {
+ bool valid;
+ int id;
+ unsigned long offset;
+ unsigned long size;
+};
+
+struct cxl_component_reg_map {
+ struct cxl_reg_map hdm_decoder;
+ struct cxl_reg_map ras;
+};
+
+struct cxl_device_reg_map {
+ struct cxl_reg_map status;
+ struct cxl_reg_map mbox;
+ struct cxl_reg_map memdev;
+};
+
+struct cxl_pmu_reg_map {
+ struct cxl_reg_map pmu;
+};
+
+/**
+ * struct cxl_register_map - DVSEC harvested register block mapping parameters
+ * @host: device for devm operations and logging
+ * @base: virtual base of the register-block-BAR + @block_offset
+ * @resource: physical resource base of the register block
+ * @max_size: maximum mapping size to perform register search
+ * @reg_type: see enum cxl_regloc_type
+ * @component_map: cxl_reg_map for component registers
+ * @device_map: cxl_reg_maps for device registers
+ * @pmu_map: cxl_reg_maps for CXL Performance Monitoring Units
+ */
+struct cxl_register_map {
+ struct device *host;
+ void __iomem *base;
+ resource_size_t resource;
+ resource_size_t max_size;
+ u8 reg_type;
+ union {
+ struct cxl_component_reg_map component_map;
+ struct cxl_device_reg_map device_map;
+ struct cxl_pmu_reg_map pmu_map;
+ };
+};
+
+/**
+ * struct cxl_dpa_perf - DPA performance property entry
+ * @dpa_range: range for DPA address
+ * @coord: QoS performance data (i.e. latency, bandwidth)
+ * @cdat_coord: raw QoS performance data from CDAT
+ * @qos_class: QoS Class cookies
+ */
+struct cxl_dpa_perf {
+ struct range dpa_range;
+ struct access_coordinate coord[ACCESS_COORDINATE_MAX];
+ struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
+ int qos_class;
+};
+
+enum cxl_partition_mode {
+ CXL_PARTMODE_RAM,
+ CXL_PARTMODE_PMEM,
+};
+
+/**
+ * struct cxl_dpa_partition - DPA partition descriptor
+ * @res: shortcut to the partition in the DPA resource tree (cxlds->dpa_res)
+ * @perf: performance attributes of the partition from CDAT
+ * @mode: operation mode for the DPA capacity, e.g. ram, pmem, dynamic...
+ */
+struct cxl_dpa_partition {
+ struct resource res;
+ struct cxl_dpa_perf perf;
+ enum cxl_partition_mode mode;
+};
+
+#define CXL_NR_PARTITIONS_MAX 2
+
+/**
+ * struct cxl_dev_state - The driver device state
+ *
+ * cxl_dev_state represents the CXL driver/device state. It provides an
+ * interface to mailbox commands as well as some cached data about the device.
+ * Currently only memory devices are represented.
+ *
+ * @dev: The device associated with this CXL state
+ * @cxlmd: The device representing the CXL.mem capabilities of @dev
+ * @reg_map: component and ras register mapping parameters
+ * @regs: Parsed register blocks
+ * @cxl_dvsec: Offset to the PCIe device DVSEC
+ * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH)
+ * @media_ready: Indicate whether the device media is usable
+ * @dpa_res: Overall DPA resource tree for the device
+ * @part: DPA partition array
+ * @nr_partitions: Number of DPA partitions
+ * @serial: PCIe Device Serial Number
+ * @type: Generic Memory Class device or Vendor Specific Memory device
+ * @cxl_mbox: CXL mailbox context
+ * @cxlfs: CXL features context
+ */
+struct cxl_dev_state {
+ /* public for Type2 drivers */
+ struct device *dev;
+ struct cxl_memdev *cxlmd;
+
+ /* private for Type2 drivers */
+ struct cxl_register_map reg_map;
+ struct cxl_regs regs;
+ int cxl_dvsec;
+ bool rcd;
+ bool media_ready;
+ struct resource dpa_res;
+ struct cxl_dpa_partition part[CXL_NR_PARTITIONS_MAX];
+ unsigned int nr_partitions;
+ u64 serial;
+ enum cxl_devtype type;
+ struct cxl_mailbox cxl_mbox;
+#ifdef CONFIG_CXL_FEATURES
+ struct cxl_features_state *cxlfs;
+#endif
+};
+
+struct cxl_dev_state *_devm_cxl_dev_state_create(struct device *dev,
+ enum cxl_devtype type,
+ u64 serial, u16 dvsec,
+ size_t size, bool has_mbox);
+
+/**
+ * devm_cxl_dev_state_create - safely create and cast a cxl_dev_state
+ * embedded in a driver-specific struct.
+ *
+ * @parent: device behind the request
+ * @type: CXL device type
+ * @serial: device identification
+ * @dvsec: dvsec capability offset
+ * @drv_struct: driver struct embedding a cxl_dev_state struct
+ * @member: drv_struct member as cxl_dev_state
+ * @mbox: true if mailbox supported
+ *
+ * Returns a pointer to the newly allocated drv_struct with its embedded
+ * cxl_dev_state initialized.
+ *
+ * Introduced for Type2 driver support.
+ */
+#define devm_cxl_dev_state_create(parent, type, serial, dvsec, drv_struct, member, mbox) \
+ ({ \
+ static_assert(__same_type(struct cxl_dev_state, \
+ ((drv_struct *)NULL)->member)); \
+ static_assert(offsetof(drv_struct, member) == 0); \
+ (drv_struct *)_devm_cxl_dev_state_create(parent, type, serial, dvsec, \
+ sizeof(drv_struct), mbox); \
+ })
+#endif /* __CXL_CXL_H__ */
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 33d06ec5a4b9..6fbe0af3e8f8 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -1717,7 +1717,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
if (rc)
return rc;
- mds = cxl_memdev_state_create(dev);
+ mds = cxl_memdev_state_create(dev, pdev->id + 1, 0);
if (IS_ERR(mds))
return PTR_ERR(mds);
@@ -1733,7 +1733,6 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);
- cxlds->serial = pdev->id + 1;
if (is_rcd(pdev))
cxlds->rcd = true;
--
2.34.1
* [PATCH v22 05/25] sfc: add cxl support
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (3 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 04/25] cxl: Add type2 device basic support alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 06/25] cxl: Move pci generic code alejandro.lucero-palau
` (20 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron, Edward Cree, Alison Schofield
From: Alejandro Lucero <alucerop@amd.com>
Add CXL initialization based on the new CXL API for accel drivers, and
make it dependent on the kernel CXL configuration.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/net/ethernet/sfc/Kconfig | 9 +++++
drivers/net/ethernet/sfc/Makefile | 1 +
drivers/net/ethernet/sfc/efx.c | 15 ++++++-
drivers/net/ethernet/sfc/efx_cxl.c | 56 +++++++++++++++++++++++++++
drivers/net/ethernet/sfc/efx_cxl.h | 40 +++++++++++++++++++
drivers/net/ethernet/sfc/net_driver.h | 10 +++++
6 files changed, 130 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/sfc/efx_cxl.c
create mode 100644 drivers/net/ethernet/sfc/efx_cxl.h
diff --git a/drivers/net/ethernet/sfc/Kconfig b/drivers/net/ethernet/sfc/Kconfig
index c4c43434f314..979f2801e2a8 100644
--- a/drivers/net/ethernet/sfc/Kconfig
+++ b/drivers/net/ethernet/sfc/Kconfig
@@ -66,6 +66,15 @@ config SFC_MCDI_LOGGING
Driver-Interface) commands and responses, allowing debugging of
driver/firmware interaction. The tracing is actually enabled by
a sysfs file 'mcdi_logging' under the PCI device.
+config SFC_CXL
+ bool "Solarflare SFC9100-family CXL support"
+ depends on SFC && CXL_BUS >= SFC
+ default SFC
+ help
+	  This enables CXL support in the SFC driver when the kernel CXL
+	  configuration allows it. An SFC device with CXL support and a
+	  CXL-aware firmware can map CTPIO buffers through CXL.mem,
+	  minimizing the latency of CTPIO sends.
source "drivers/net/ethernet/sfc/falcon/Kconfig"
source "drivers/net/ethernet/sfc/siena/Kconfig"
diff --git a/drivers/net/ethernet/sfc/Makefile b/drivers/net/ethernet/sfc/Makefile
index d99039ec468d..bb0f1891cde6 100644
--- a/drivers/net/ethernet/sfc/Makefile
+++ b/drivers/net/ethernet/sfc/Makefile
@@ -13,6 +13,7 @@ sfc-$(CONFIG_SFC_SRIOV) += sriov.o ef10_sriov.o ef100_sriov.o ef100_rep.o \
mae.o tc.o tc_bindings.o tc_counters.o \
tc_encap_actions.o tc_conntrack.o
+sfc-$(CONFIG_SFC_CXL) += efx_cxl.o
obj-$(CONFIG_SFC) += sfc.o
obj-$(CONFIG_SFC_FALCON) += falcon/
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index 112e55b98ed3..537668278375 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -34,6 +34,7 @@
#include "selftest.h"
#include "sriov.h"
#include "efx_devlink.h"
+#include "efx_cxl.h"
#include "mcdi_port_common.h"
#include "mcdi_pcol.h"
@@ -981,12 +982,15 @@ static void efx_pci_remove(struct pci_dev *pci_dev)
efx_pci_remove_main(efx);
efx_fini_io(efx);
+
+ probe_data = container_of(efx, struct efx_probe_data, efx);
+ efx_cxl_exit(probe_data);
+
pci_dbg(efx->pci_dev, "shutdown successful\n");
efx_fini_devlink_and_unlock(efx);
efx_fini_struct(efx);
free_netdev(efx->net_dev);
- probe_data = container_of(efx, struct efx_probe_data, efx);
kfree(probe_data);
};
@@ -1190,6 +1194,15 @@ static int efx_pci_probe(struct pci_dev *pci_dev,
if (rc)
goto fail2;
+ /* A successful cxl initialization implies a CXL region created to be
+ * used for PIO buffers. If there is no CXL support, or initialization
+	 * fails, cxl_pio_initialised will be false and legacy PIO buffers
+ * defined at specific PCI BAR regions will be used.
+ */
+ rc = efx_cxl_init(probe_data);
+ if (rc)
+ pci_err(pci_dev, "CXL initialization failed with error %d\n", rc);
+
rc = efx_pci_probe_post_io(efx);
if (rc) {
/* On failure, retry once immediately.
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
new file mode 100644
index 000000000000..8e0481d8dced
--- /dev/null
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/****************************************************************************
+ *
+ * Driver for AMD network controllers and boards
+ * Copyright (C) 2025, Advanced Micro Devices, Inc.
+ */
+
+#include <linux/pci.h>
+
+#include "net_driver.h"
+#include "efx_cxl.h"
+
+#define EFX_CTPIO_BUFFER_SIZE SZ_256M
+
+int efx_cxl_init(struct efx_probe_data *probe_data)
+{
+ struct efx_nic *efx = &probe_data->efx;
+ struct pci_dev *pci_dev = efx->pci_dev;
+ struct efx_cxl *cxl;
+ u16 dvsec;
+
+ probe_data->cxl_pio_initialised = false;
+
+ /* Is the device configured with and using CXL? */
+ if (!pcie_is_cxl(pci_dev))
+ return 0;
+
+ dvsec = pci_find_dvsec_capability(pci_dev, PCI_VENDOR_ID_CXL,
+ PCI_DVSEC_CXL_DEVICE);
+ if (!dvsec) {
+ pci_err(pci_dev, "CXL_DVSEC_PCIE_DEVICE capability not found\n");
+ return 0;
+ }
+
+ pci_dbg(pci_dev, "CXL_DVSEC_PCIE_DEVICE capability found\n");
+
+ /* Create a cxl_dev_state embedded in the cxl struct using cxl core api
+ * specifying no mbox available.
+ */
+ cxl = devm_cxl_dev_state_create(&pci_dev->dev, CXL_DEVTYPE_DEVMEM,
+ pci_dev->dev.id, dvsec, struct efx_cxl,
+ cxlds, false);
+
+ if (!cxl)
+ return -ENOMEM;
+
+ probe_data->cxl = cxl;
+
+ return 0;
+}
+
+void efx_cxl_exit(struct efx_probe_data *probe_data)
+{
+}
+
+MODULE_IMPORT_NS("CXL");
diff --git a/drivers/net/ethernet/sfc/efx_cxl.h b/drivers/net/ethernet/sfc/efx_cxl.h
new file mode 100644
index 000000000000..961639cef692
--- /dev/null
+++ b/drivers/net/ethernet/sfc/efx_cxl.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/****************************************************************************
+ * Driver for AMD network controllers and boards
+ * Copyright (C) 2025, Advanced Micro Devices, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation, incorporated herein by reference.
+ */
+
+#ifndef EFX_CXL_H
+#define EFX_CXL_H
+
+#ifdef CONFIG_SFC_CXL
+
+#include <cxl/cxl.h>
+
+struct cxl_root_decoder;
+struct cxl_port;
+struct cxl_endpoint_decoder;
+struct cxl_region;
+struct efx_probe_data;
+
+struct efx_cxl {
+ struct cxl_dev_state cxlds;
+ struct cxl_memdev *cxlmd;
+ struct cxl_root_decoder *cxlrd;
+ struct cxl_port *endpoint;
+ struct cxl_endpoint_decoder *cxled;
+ struct cxl_region *efx_region;
+ void __iomem *ctpio_cxl;
+};
+
+int efx_cxl_init(struct efx_probe_data *probe_data);
+void efx_cxl_exit(struct efx_probe_data *probe_data);
+#else
+static inline int efx_cxl_init(struct efx_probe_data *probe_data) { return 0; }
+static inline void efx_cxl_exit(struct efx_probe_data *probe_data) {}
+#endif
+#endif
diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
index b98c259f672d..3964b2c56609 100644
--- a/drivers/net/ethernet/sfc/net_driver.h
+++ b/drivers/net/ethernet/sfc/net_driver.h
@@ -1197,14 +1197,24 @@ struct efx_nic {
atomic_t n_rx_noskb_drops;
};
+#ifdef CONFIG_SFC_CXL
+struct efx_cxl;
+#endif
+
/**
* struct efx_probe_data - State after hardware probe
* @pci_dev: The PCI device
* @efx: Efx NIC details
+ * @cxl: details of related cxl objects
+ * @cxl_pio_initialised: cxl initialization outcome.
*/
struct efx_probe_data {
struct pci_dev *pci_dev;
struct efx_nic efx;
+#ifdef CONFIG_SFC_CXL
+ struct efx_cxl *cxl;
+ bool cxl_pio_initialised;
+#endif
};
static inline struct efx_nic *efx_netdev_priv(struct net_device *dev)
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* [PATCH v22 06/25] cxl: Move pci generic code
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (4 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 05/25] sfc: add cxl support alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 07/25] cxl/sfc: Map cxl component regs alejandro.lucero-palau
` (19 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Ben Cheatham, Fan Ni, Jonathan Cameron,
Alison Schofield
From: Alejandro Lucero <alucerop@amd.com>
Inside cxl/core/pci.c there are helpers for CXL PCIe initialization,
while cxl/pci_drv.c implements the functionality for Type3 device
initialization.
Move helper functions from cxl/core/pci_drv.c to cxl/core/pci.c so they
can be exported and shared with CXL Type2 device initialization.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/cxl/core/core.h | 3 ++
drivers/cxl/core/pci.c | 62 +++++++++++++++++++++++++++++++++
drivers/cxl/core/pci_drv.c | 70 --------------------------------------
drivers/cxl/core/regs.c | 1 -
drivers/cxl/cxl.h | 2 --
drivers/cxl/cxlpci.h | 13 +++++++
6 files changed, 78 insertions(+), 73 deletions(-)
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index a7a0838c8f23..2b2d3af0b5ec 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -232,4 +232,7 @@ static inline bool cxl_pci_drv_bound(struct pci_dev *pdev) { return false; };
static inline int cxl_pci_driver_init(void) { return 0; }
static inline void cxl_pci_driver_exit(void) { }
#endif
+
+resource_size_t cxl_rcd_component_reg_phys(struct device *dev,
+ struct cxl_dport *dport);
#endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
index a66f7a84b5c8..566d57ba0579 100644
--- a/drivers/cxl/core/pci.c
+++ b/drivers/cxl/core/pci.c
@@ -775,6 +775,68 @@ bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port)
}
EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, "CXL");
+static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
+ struct cxl_register_map *map,
+ struct cxl_dport *dport)
+{
+ resource_size_t component_reg_phys;
+
+ *map = (struct cxl_register_map) {
+ .host = &pdev->dev,
+ .resource = CXL_RESOURCE_NONE,
+ };
+
+ struct cxl_port *port __free(put_cxl_port) =
+ cxl_pci_find_port(pdev, &dport);
+ if (!port)
+ return -EPROBE_DEFER;
+
+ component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport);
+ if (component_reg_phys == CXL_RESOURCE_NONE)
+ return -ENXIO;
+
+ map->resource = component_reg_phys;
+ map->reg_type = CXL_REGLOC_RBI_COMPONENT;
+ map->max_size = CXL_COMPONENT_REG_BLOCK_SIZE;
+
+ return 0;
+}
+
+int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type,
+ struct cxl_register_map *map)
+{
+ int rc;
+
+ rc = cxl_find_regblock(pdev, type, map);
+
+ /*
+ * If the Register Locator DVSEC does not exist, check if it
+ * is an RCH and try to extract the Component Registers from
+ * an RCRB.
+ */
+ if (rc && type == CXL_REGLOC_RBI_COMPONENT && is_cxl_restricted(pdev)) {
+ struct cxl_dport *dport;
+ struct cxl_port *port __free(put_cxl_port) =
+ cxl_pci_find_port(pdev, &dport);
+ if (!port)
+ return -EPROBE_DEFER;
+
+ rc = cxl_rcrb_get_comp_regs(pdev, map, dport);
+ if (rc)
+ return rc;
+
+ rc = cxl_dport_map_rcd_linkcap(pdev, dport);
+ if (rc)
+ return rc;
+
+ } else if (rc) {
+ return rc;
+ }
+
+ return cxl_setup_regs(map);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_pci_setup_regs, "CXL");
+
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c)
{
int speed, bw;
diff --git a/drivers/cxl/core/pci_drv.c b/drivers/cxl/core/pci_drv.c
index b4b8350ba44d..761779528eb5 100644
--- a/drivers/cxl/core/pci_drv.c
+++ b/drivers/cxl/core/pci_drv.c
@@ -466,76 +466,6 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
return 0;
}
-/*
- * Assume that any RCIEP that emits the CXL memory expander class code
- * is an RCD
- */
-static bool is_cxl_restricted(struct pci_dev *pdev)
-{
- return pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_END;
-}
-
-static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
- struct cxl_register_map *map,
- struct cxl_dport *dport)
-{
- resource_size_t component_reg_phys;
-
- *map = (struct cxl_register_map) {
- .host = &pdev->dev,
- .resource = CXL_RESOURCE_NONE,
- };
-
- struct cxl_port *port __free(put_cxl_port) =
- cxl_pci_find_port(pdev, &dport);
- if (!port)
- return -EPROBE_DEFER;
-
- component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport);
- if (component_reg_phys == CXL_RESOURCE_NONE)
- return -ENXIO;
-
- map->resource = component_reg_phys;
- map->reg_type = CXL_REGLOC_RBI_COMPONENT;
- map->max_size = CXL_COMPONENT_REG_BLOCK_SIZE;
-
- return 0;
-}
-
-static int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type,
- struct cxl_register_map *map)
-{
- int rc;
-
- rc = cxl_find_regblock(pdev, type, map);
-
- /*
- * If the Register Locator DVSEC does not exist, check if it
- * is an RCH and try to extract the Component Registers from
- * an RCRB.
- */
- if (rc && type == CXL_REGLOC_RBI_COMPONENT && is_cxl_restricted(pdev)) {
- struct cxl_dport *dport;
- struct cxl_port *port __free(put_cxl_port) =
- cxl_pci_find_port(pdev, &dport);
- if (!port)
- return -EPROBE_DEFER;
-
- rc = cxl_rcrb_get_comp_regs(pdev, map, dport);
- if (rc)
- return rc;
-
- rc = cxl_dport_map_rcd_linkcap(pdev, dport);
- if (rc)
- return rc;
-
- } else if (rc) {
- return rc;
- }
-
- return cxl_setup_regs(map);
-}
-
static int cxl_pci_ras_unmask(struct pci_dev *pdev)
{
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
index fb70ffbba72d..fc7fbd4f39d2 100644
--- a/drivers/cxl/core/regs.c
+++ b/drivers/cxl/core/regs.c
@@ -641,4 +641,3 @@ resource_size_t cxl_rcd_component_reg_phys(struct device *dev,
return CXL_RESOURCE_NONE;
return __rcrb_to_component(dev, &dport->rcrb, CXL_RCRB_UPSTREAM);
}
-EXPORT_SYMBOL_NS_GPL(cxl_rcd_component_reg_phys, "CXL");
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 1517250b0ec2..536c9d99e0e6 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -222,8 +222,6 @@ int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type,
struct cxl_register_map *map);
int cxl_setup_regs(struct cxl_register_map *map);
struct cxl_dport;
-resource_size_t cxl_rcd_component_reg_phys(struct device *dev,
- struct cxl_dport *dport);
int cxl_dport_map_rcd_linkcap(struct pci_dev *pdev, struct cxl_dport *dport);
#define CXL_RESOURCE_NONE ((resource_size_t) -1)
diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h
index 3526e6d75f79..24aba9ff6d2e 100644
--- a/drivers/cxl/cxlpci.h
+++ b/drivers/cxl/cxlpci.h
@@ -74,6 +74,17 @@ static inline bool cxl_pci_flit_256(struct pci_dev *pdev)
return lnksta2 & PCI_EXP_LNKSTA2_FLIT;
}
+/*
+ * Assume that the caller has already validated that @pdev has CXL
+ * capabilities, any RCiEP with CXL capabilities is treated as a
+ * Restricted CXL Device (RCD) and finds upstream port and endpoint
+ * registers in a Root Complex Register Block (RCRB).
+ */
+static inline bool is_cxl_restricted(struct pci_dev *pdev)
+{
+ return pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_END;
+}
+
int devm_cxl_port_enumerate_dports(struct cxl_port *port);
struct cxl_dev_state;
void read_cdat_data(struct cxl_port *port);
@@ -89,4 +100,6 @@ static inline void cxl_uport_init_ras_reporting(struct cxl_port *port,
struct device *host) { }
#endif
+int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type,
+ struct cxl_register_map *map);
#endif /* __CXL_PCI_H__ */
--
2.34.1
* [PATCH v22 07/25] cxl/sfc: Map cxl component regs
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (5 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 06/25] cxl: Move pci generic code alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 08/25] cxl/sfc: Initialize dpa without a mailbox alejandro.lucero-palau
` (18 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron, Ben Cheatham
From: Alejandro Lucero <alucerop@amd.com>
Export cxl core functions so a Type2 driver is able to discover and map
the device component registers.
Use them in the sfc driver cxl initialization.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
---
drivers/cxl/core/pci.c | 1 +
drivers/cxl/core/pci_drv.c | 1 +
drivers/cxl/core/port.c | 1 +
drivers/cxl/core/regs.c | 1 +
drivers/cxl/cxl.h | 7 ------
drivers/cxl/cxlpci.h | 12 ----------
drivers/net/ethernet/sfc/efx_cxl.c | 35 ++++++++++++++++++++++++++++++
include/cxl/cxl.h | 19 ++++++++++++++++
include/cxl/pci.h | 21 ++++++++++++++++++
9 files changed, 79 insertions(+), 19 deletions(-)
create mode 100644 include/cxl/pci.h
diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
index 566d57ba0579..90a0763e72c4 100644
--- a/drivers/cxl/core/pci.c
+++ b/drivers/cxl/core/pci.c
@@ -6,6 +6,7 @@
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/pci-doe.h>
+#include <cxl/pci.h>
#include <linux/aer.h>
#include <cxlpci.h>
#include <cxlmem.h>
diff --git a/drivers/cxl/core/pci_drv.c b/drivers/cxl/core/pci_drv.c
index 761779528eb5..4a812765217e 100644
--- a/drivers/cxl/core/pci_drv.c
+++ b/drivers/cxl/core/pci_drv.c
@@ -11,6 +11,7 @@
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/io.h>
+#include <cxl/pci.h>
#include <cxl/mailbox.h>
#include "cxlmem.h"
#include "cxlpci.h"
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index d19ebf052d76..7c828c75e7b8 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -11,6 +11,7 @@
#include <linux/idr.h>
#include <linux/node.h>
#include <cxl/einj.h>
+#include <cxl/pci.h>
#include <cxlmem.h>
#include <cxlpci.h>
#include <cxl.h>
diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
index fc7fbd4f39d2..dcf444f1fe48 100644
--- a/drivers/cxl/core/regs.c
+++ b/drivers/cxl/core/regs.c
@@ -4,6 +4,7 @@
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/pci.h>
+#include <cxl/pci.h>
#include <cxlmem.h>
#include <cxlpci.h>
#include <pmu.h>
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 536c9d99e0e6..d7ddca6f7115 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -39,10 +39,6 @@ extern const struct nvdimm_security_ops *cxl_security_ops;
#define CXL_CM_CAP_HDR_ARRAY_SIZE_MASK GENMASK(31, 24)
#define CXL_CM_CAP_PTR_MASK GENMASK(31, 20)
-#define CXL_CM_CAP_CAP_ID_RAS 0x2
-#define CXL_CM_CAP_CAP_ID_HDM 0x5
-#define CXL_CM_CAP_CAP_HDM_VERSION 1
-
/* HDM decoders CXL 2.0 8.2.5.12 CXL HDM Decoder Capability Structure */
#define CXL_HDM_DECODER_CAP_OFFSET 0x0
#define CXL_HDM_DECODER_COUNT_MASK GENMASK(3, 0)
@@ -206,9 +202,6 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
struct cxl_component_reg_map *map);
void cxl_probe_device_regs(struct device *dev, void __iomem *base,
struct cxl_device_reg_map *map);
-int cxl_map_component_regs(const struct cxl_register_map *map,
- struct cxl_component_regs *regs,
- unsigned long map_mask);
int cxl_map_device_regs(const struct cxl_register_map *map,
struct cxl_device_regs *regs);
int cxl_map_pmu_regs(struct cxl_register_map *map, struct cxl_pmu_regs *regs);
diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h
index 24aba9ff6d2e..53760ce31af8 100644
--- a/drivers/cxl/cxlpci.h
+++ b/drivers/cxl/cxlpci.h
@@ -13,16 +13,6 @@
*/
#define CXL_PCI_DEFAULT_MAX_VECTORS 16
-/* Register Block Identifier (RBI) */
-enum cxl_regloc_type {
- CXL_REGLOC_RBI_EMPTY = 0,
- CXL_REGLOC_RBI_COMPONENT,
- CXL_REGLOC_RBI_VIRT,
- CXL_REGLOC_RBI_MEMDEV,
- CXL_REGLOC_RBI_PMU,
- CXL_REGLOC_RBI_TYPES
-};
-
/*
* Table Access DOE, CDAT Read Entry Response
*
@@ -100,6 +90,4 @@ static inline void cxl_uport_init_ras_reporting(struct cxl_port *port,
struct device *host) { }
#endif
-int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type,
- struct cxl_register_map *map);
#endif /* __CXL_PCI_H__ */
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 8e0481d8dced..34126bc4826c 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -7,6 +7,8 @@
#include <linux/pci.h>
+#include <cxl/cxl.h>
+#include <cxl/pci.h>
#include "net_driver.h"
#include "efx_cxl.h"
@@ -18,6 +20,7 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
struct pci_dev *pci_dev = efx->pci_dev;
struct efx_cxl *cxl;
u16 dvsec;
+ int rc;
probe_data->cxl_pio_initialised = false;
@@ -44,6 +47,38 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
if (!cxl)
return -ENOMEM;
+ rc = cxl_pci_setup_regs(pci_dev, CXL_REGLOC_RBI_COMPONENT,
+ &cxl->cxlds.reg_map);
+ if (rc) {
+ pci_err(pci_dev, "No component registers\n");
+ return rc;
+ }
+
+ if (!cxl->cxlds.reg_map.component_map.hdm_decoder.valid) {
+ pci_err(pci_dev, "Expected HDM component register not found\n");
+ return -ENODEV;
+ }
+
+ if (!cxl->cxlds.reg_map.component_map.ras.valid) {
+ pci_err(pci_dev, "Expected RAS component register not found\n");
+ return -ENODEV;
+ }
+
+ rc = cxl_map_component_regs(&cxl->cxlds.reg_map,
+ &cxl->cxlds.regs.component,
+ BIT(CXL_CM_CAP_CAP_ID_RAS));
+ if (rc) {
+ pci_err(pci_dev, "Failed to map RAS capability.\n");
+ return rc;
+ }
+
+ /*
+	 * Set media ready explicitly as there is neither a mailbox for checking
+	 * this state nor the CXL register involved, neither of which is
+	 * mandatory for type2.
+ */
+ cxl->cxlds.media_ready = true;
+
probe_data->cxl = cxl;
return 0;
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 13d448686189..7f2e23bce1f7 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -70,6 +70,10 @@ struct cxl_regs {
);
};
+#define CXL_CM_CAP_CAP_ID_RAS 0x2
+#define CXL_CM_CAP_CAP_ID_HDM 0x5
+#define CXL_CM_CAP_CAP_HDM_VERSION 1
+
struct cxl_reg_map {
bool valid;
int id;
@@ -223,4 +227,19 @@ struct cxl_dev_state *_devm_cxl_dev_state_create(struct device *dev,
(drv_struct *)_devm_cxl_dev_state_create(parent, type, serial, dvsec, \
sizeof(drv_struct), mbox); \
})
+
+/**
+ * cxl_map_component_regs - map cxl component registers
+ *
+ * @map: cxl register map to update with the mappings
+ * @regs: cxl component registers to work with
+ * @map_mask: cxl component regs to map
+ *
+ * Returns integer: success (0) or error (-ENOMEM)
+ *
+ * Made public for Type2 driver support.
+ */
+int cxl_map_component_regs(const struct cxl_register_map *map,
+ struct cxl_component_regs *regs,
+ unsigned long map_mask);
#endif /* __CXL_CXL_H__ */
diff --git a/include/cxl/pci.h b/include/cxl/pci.h
new file mode 100644
index 000000000000..a172439f08c6
--- /dev/null
+++ b/include/cxl/pci.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+
+#ifndef __CXL_CXL_PCI_H__
+#define __CXL_CXL_PCI_H__
+
+/* Register Block Identifier (RBI) */
+enum cxl_regloc_type {
+ CXL_REGLOC_RBI_EMPTY = 0,
+ CXL_REGLOC_RBI_COMPONENT,
+ CXL_REGLOC_RBI_VIRT,
+ CXL_REGLOC_RBI_MEMDEV,
+ CXL_REGLOC_RBI_PMU,
+ CXL_REGLOC_RBI_TYPES
+};
+
+struct cxl_register_map;
+
+int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type,
+ struct cxl_register_map *map);
+#endif
--
2.34.1
* [PATCH v22 08/25] cxl/sfc: Initialize dpa without a mailbox
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (6 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 07/25] cxl/sfc: Map cxl component regs alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 09/25] cxl: Prepare memdev creation for type2 alejandro.lucero-palau
` (17 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Ben Cheatham, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
Type3 relies on the mailbox CXL_MBOX_OP_IDENTIFY command to initialize
memdev state params, which end up being used for DPA initialization.
Allow a Type2 driver to initialize DPA simply by providing the size of
its volatile hardware partition.
Move related functions to memdev.
Add sfc driver as the client.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
drivers/cxl/core/core.h | 2 +
drivers/cxl/core/mbox.c | 51 +----------------------
drivers/cxl/core/memdev.c | 66 ++++++++++++++++++++++++++++++
drivers/net/ethernet/sfc/efx_cxl.c | 5 +++
include/cxl/cxl.h | 1 +
5 files changed, 75 insertions(+), 50 deletions(-)
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 2b2d3af0b5ec..1c1726856139 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -91,6 +91,8 @@ void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
struct dentry *cxl_debugfs_create_dir(const char *dir);
int cxl_dpa_set_part(struct cxl_endpoint_decoder *cxled,
enum cxl_partition_mode mode);
+struct cxl_memdev_state;
+int cxl_mem_get_partition_info(struct cxl_memdev_state *mds);
int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size);
int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
resource_size_t cxl_dpa_size(struct cxl_endpoint_decoder *cxled);
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index bee84d0101d1..d57a0c2d39fb 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1144,7 +1144,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, "CXL");
*
* See CXL @8.2.9.5.2.1 Get Partition Info
*/
-static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
+int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_partition_info pi;
@@ -1300,55 +1300,6 @@ int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd)
return -EBUSY;
}
-static void add_part(struct cxl_dpa_info *info, u64 start, u64 size, enum cxl_partition_mode mode)
-{
- int i = info->nr_partitions;
-
- if (size == 0)
- return;
-
- info->part[i].range = (struct range) {
- .start = start,
- .end = start + size - 1,
- };
- info->part[i].mode = mode;
- info->nr_partitions++;
-}
-
-int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
-{
- struct cxl_dev_state *cxlds = &mds->cxlds;
- struct device *dev = cxlds->dev;
- int rc;
-
- if (!cxlds->media_ready) {
- info->size = 0;
- return 0;
- }
-
- info->size = mds->total_bytes;
-
- if (mds->partition_align_bytes == 0) {
- add_part(info, 0, mds->volatile_only_bytes, CXL_PARTMODE_RAM);
- add_part(info, mds->volatile_only_bytes,
- mds->persistent_only_bytes, CXL_PARTMODE_PMEM);
- return 0;
- }
-
- rc = cxl_mem_get_partition_info(mds);
- if (rc) {
- dev_err(dev, "Failed to query partition information\n");
- return rc;
- }
-
- add_part(info, 0, mds->active_volatile_bytes, CXL_PARTMODE_RAM);
- add_part(info, mds->active_volatile_bytes, mds->active_persistent_bytes,
- CXL_PARTMODE_PMEM);
-
- return 0;
-}
-EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
-
int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 1dd6f0294030..e5def6f08f1c 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -584,6 +584,72 @@ bool is_cxl_memdev(const struct device *dev)
}
EXPORT_SYMBOL_NS_GPL(is_cxl_memdev, "CXL");
+static void add_part(struct cxl_dpa_info *info, u64 start, u64 size, enum cxl_partition_mode mode)
+{
+ int i = info->nr_partitions;
+
+ if (size == 0)
+ return;
+
+ info->part[i].range = (struct range) {
+ .start = start,
+ .end = start + size - 1,
+ };
+ info->part[i].mode = mode;
+ info->nr_partitions++;
+}
+
+int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
+{
+ struct cxl_dev_state *cxlds = &mds->cxlds;
+ struct device *dev = cxlds->dev;
+ int rc;
+
+ if (!cxlds->media_ready) {
+ info->size = 0;
+ return 0;
+ }
+
+ info->size = mds->total_bytes;
+
+ if (mds->partition_align_bytes == 0) {
+ add_part(info, 0, mds->volatile_only_bytes, CXL_PARTMODE_RAM);
+ add_part(info, mds->volatile_only_bytes,
+ mds->persistent_only_bytes, CXL_PARTMODE_PMEM);
+ return 0;
+ }
+
+ rc = cxl_mem_get_partition_info(mds);
+ if (rc) {
+ dev_err(dev, "Failed to query partition information\n");
+ return rc;
+ }
+
+ add_part(info, 0, mds->active_volatile_bytes, CXL_PARTMODE_RAM);
+ add_part(info, mds->active_volatile_bytes, mds->active_persistent_bytes,
+ CXL_PARTMODE_PMEM);
+
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
+
+/**
+ * cxl_set_capacity: initialize dpa by a driver without a mailbox.
+ *
+ * @cxlds: pointer to cxl_dev_state
+ * @capacity: device volatile memory size
+ */
+int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity)
+{
+ struct cxl_dpa_info range_info = {
+ .size = capacity,
+ };
+
+ add_part(&range_info, 0, capacity, CXL_PARTMODE_RAM);
+ return cxl_dpa_setup(cxlds, &range_info);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_set_capacity, "CXL");
+
/**
* set_exclusive_cxl_commands() - atomically disable user cxl commands
* @mds: The device state to operate on
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 34126bc4826c..0b10a2e6aceb 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -79,6 +79,11 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
*/
cxl->cxlds.media_ready = true;
+ if (cxl_set_capacity(&cxl->cxlds, EFX_CTPIO_BUFFER_SIZE)) {
+ pci_err(pci_dev, "dpa capacity setup failed\n");
+ return -ENODEV;
+ }
+
probe_data->cxl = cxl;
return 0;
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 7f2e23bce1f7..fb2f8f2395d5 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -242,4 +242,5 @@ struct cxl_dev_state *_devm_cxl_dev_state_create(struct device *dev,
int cxl_map_component_regs(const struct cxl_register_map *map,
struct cxl_component_regs *regs,
unsigned long map_mask);
+int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 09/25] cxl: Prepare memdev creation for type2
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (7 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 08/25] cxl/sfc: Initialize dpa without a mailbox alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 10/25] sfc: create type2 cxl memdev alejandro.lucero-palau
` (16 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Ben Cheatham, Jonathan Cameron,
Alison Schofield
From: Alejandro Lucero <alucerop@amd.com>
The current cxl core relies on a CXL_DEVTYPE_CLASSMEM type device when
creating a memdev, leading to problems when obtaining cxl_memdev_state
references from a CXL_DEVTYPE_DEVMEM type.
Modify the check for obtaining cxl_memdev_state, adding
CXL_DEVTYPE_DEVMEM support.
Make devm_cxl_add_memdev accessible from an accel driver.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
drivers/cxl/core/memdev.c | 15 +++++++++++--
drivers/cxl/cxlmem.h | 8 -------
drivers/cxl/mem.c | 45 +++++++++++++++++++++++++++++----------
include/cxl/cxl.h | 7 ++++++
4 files changed, 54 insertions(+), 21 deletions(-)
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index e5def6f08f1c..2d4828831ce1 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -7,6 +7,7 @@
#include <linux/slab.h>
#include <linux/idr.h>
#include <linux/pci.h>
+#include <cxl/cxl.h>
#include <cxlmem.h>
#include "private.h"
#include "trace.h"
@@ -578,9 +579,16 @@ static const struct device_type cxl_memdev_type = {
.groups = cxl_memdev_attribute_groups,
};
+static const struct device_type cxl_accel_memdev_type = {
+ .name = "cxl_accel_memdev",
+ .release = cxl_memdev_release,
+ .devnode = cxl_memdev_devnode,
+};
+
bool is_cxl_memdev(const struct device *dev)
{
- return dev->type == &cxl_memdev_type;
+ return (dev->type == &cxl_memdev_type ||
+ dev->type == &cxl_accel_memdev_type);
}
EXPORT_SYMBOL_NS_GPL(is_cxl_memdev, "CXL");
@@ -1166,7 +1174,10 @@ struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
dev->parent = cxlds->dev;
dev->bus = &cxl_bus_type;
dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
- dev->type = &cxl_memdev_type;
+ if (cxlds->type == CXL_DEVTYPE_DEVMEM)
+ dev->type = &cxl_accel_memdev_type;
+ else
+ dev->type = &cxl_memdev_type;
device_set_pm_not_required(dev);
INIT_WORK(&cxlmd->detach_work, detach_memdev);
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 05f4cb5aaed0..1eaf4e57554e 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -34,10 +34,6 @@
(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) != \
CXLMDEV_RESET_NEEDED_NOT)
-struct cxl_memdev_ops {
- int (*probe)(struct cxl_memdev *cxlmd);
-};
-
/**
* struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
* @dev: driver core device object
@@ -101,10 +97,6 @@ static inline bool is_cxl_endpoint(struct cxl_port *port)
return is_cxl_memdev(port->uport_dev);
}
-struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
- struct cxl_dev_state *cxlds,
- const struct cxl_memdev_ops *ops);
-
int devm_cxl_sanitize_setup_notifier(struct device *host,
struct cxl_memdev *cxlmd);
struct cxl_memdev_state;
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index b36d8bb812a3..6d0f2f0b332a 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -66,6 +66,26 @@ static int cxl_debugfs_poison_clear(void *data, u64 dpa)
DEFINE_DEBUGFS_ATTRIBUTE(cxl_poison_clear_fops, NULL,
cxl_debugfs_poison_clear, "%llx\n");
+static void cxl_memdev_poison_enable(struct cxl_memdev_state *mds,
+ struct cxl_memdev *cxlmd,
+ struct dentry *dentry)
+{
+ /*
+ * Avoid poison debugfs for DEVMEM aka accelerators as they rely on
+ * cxl_memdev_state.
+ */
+ if (!mds)
+ return;
+
+ if (test_bit(CXL_POISON_ENABLED_INJECT, mds->poison.enabled_cmds))
+ debugfs_create_file("inject_poison", 0200, dentry, cxlmd,
+ &cxl_poison_inject_fops);
+
+ if (test_bit(CXL_POISON_ENABLED_CLEAR, mds->poison.enabled_cmds))
+ debugfs_create_file("clear_poison", 0200, dentry, cxlmd,
+ &cxl_poison_clear_fops);
+}
+
static int cxl_mem_probe(struct device *dev)
{
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
@@ -93,12 +113,7 @@ static int cxl_mem_probe(struct device *dev)
dentry = cxl_debugfs_create_dir(dev_name(dev));
debugfs_create_devm_seqfile(dev, "dpamem", dentry, cxl_mem_dpa_show);
- if (test_bit(CXL_POISON_ENABLED_INJECT, mds->poison.enabled_cmds))
- debugfs_create_file("inject_poison", 0200, dentry, cxlmd,
- &cxl_poison_inject_fops);
- if (test_bit(CXL_POISON_ENABLED_CLEAR, mds->poison.enabled_cmds))
- debugfs_create_file("clear_poison", 0200, dentry, cxlmd,
- &cxl_poison_clear_fops);
+ cxl_memdev_poison_enable(mds, cxlmd, dentry);
rc = devm_add_action_or_reset(dev, remove_debugfs, dentry);
if (rc)
@@ -236,16 +251,24 @@ static ssize_t trigger_poison_list_store(struct device *dev,
}
static DEVICE_ATTR_WO(trigger_poison_list);
-static umode_t cxl_mem_visible(struct kobject *kobj, struct attribute *a, int n)
+static bool cxl_poison_attr_visible(struct kobject *kobj, struct attribute *a)
{
struct device *dev = kobj_to_dev(kobj);
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
- if (a == &dev_attr_trigger_poison_list.attr)
- if (!test_bit(CXL_POISON_ENABLED_LIST,
- mds->poison.enabled_cmds))
- return 0;
+ if (!mds ||
+ !test_bit(CXL_POISON_ENABLED_LIST, mds->poison.enabled_cmds))
+ return false;
+
+ return true;
+}
+
+static umode_t cxl_mem_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ if (a == &dev_attr_trigger_poison_list.attr &&
+ !cxl_poison_attr_visible(kobj, a))
+ return 0;
return a->mode;
}
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index fb2f8f2395d5..043fc31c764e 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -153,6 +153,10 @@ struct cxl_dpa_partition {
#define CXL_NR_PARTITIONS_MAX 2
+struct cxl_memdev_ops {
+ int (*probe)(struct cxl_memdev *cxlmd);
+};
+
/**
* struct cxl_dev_state - The driver device state
*
@@ -243,4 +247,7 @@ int cxl_map_component_regs(const struct cxl_register_map *map,
struct cxl_component_regs *regs,
unsigned long map_mask);
int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity);
+struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
+ struct cxl_dev_state *cxlds,
+ const struct cxl_memdev_ops *ops);
#endif /* __CXL_CXL_H__ */
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* [PATCH v22 10/25] sfc: create type2 cxl memdev
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (8 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 09/25] cxl: Prepare memdev creation for type2 alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder alejandro.lucero-palau
` (15 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Martin Habets, Fan Ni, Edward Cree,
Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
Use the CXL API to create a CXL memory device from the Type2
cxl_dev_state struct.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Martin Habets <habetsm.xilinx@gmail.com>
Reviewed-by: Fan Ni <fan.ni@samsung.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/net/ethernet/sfc/efx_cxl.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 0b10a2e6aceb..f6eda93e67e2 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -84,6 +84,12 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
return -ENODEV;
}
+ cxl->cxlmd = devm_cxl_add_memdev(&pci_dev->dev, &cxl->cxlds, NULL);
+ if (IS_ERR(cxl->cxlmd)) {
+ pci_err(pci_dev, "CXL accel memdev creation failed");
+ return PTR_ERR(cxl->cxlmd);
+ }
+
probe_data->cxl = cxl;
return 0;
--
2.34.1
* [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (9 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 10/25] sfc: create type2 cxl memdev alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-15 13:50 ` Jonathan Cameron
2025-12-05 11:52 ` [PATCH v22 12/25] cxl: Add function for obtaining region range alejandro.lucero-palau
` (14 subsequent siblings)
25 siblings, 1 reply; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero
From: Alejandro Lucero <alucerop@amd.com>
A Type2 device configured by the BIOS can already have its HDM
decoder committed. Add a cxl_get_committed_decoder() function for
checking this after memdev creation. In that case a CXL region should
have been created during memdev initialization, so a Type2 driver can
ask for that region in order to work with the HPA. If the HDM decoder
is not committed, a Type2 driver will create the region itself after
obtaining proper HPA and DPA space.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
---
drivers/cxl/core/hdm.c | 44 ++++++++++++++++++++++++++++++++++++++++++
include/cxl/cxl.h | 3 +++
2 files changed, 47 insertions(+)
diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index d3a094ca01ad..fa99657440d1 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -92,6 +92,7 @@ static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
{
struct cxl_hdm *cxlhdm;
+ struct cxl_port *port;
void __iomem *hdm;
u32 ctrl;
int i;
@@ -105,6 +106,10 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
if (!hdm)
return true;
+ port = cxlhdm->port;
+ if (is_cxl_endpoint(port))
+ return false;
+
/*
* If HDM decoders are present and the driver is in control of
* Mem_Enable skip DVSEC based emulation
@@ -686,6 +691,45 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size)
return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
}
+static int find_committed_decoder(struct device *dev, const void *data)
+{
+ struct cxl_endpoint_decoder *cxled;
+ struct cxl_port *port;
+
+ if (!is_endpoint_decoder(dev))
+ return 0;
+
+ cxled = to_cxl_endpoint_decoder(dev);
+ port = cxled_to_port(cxled);
+
+ return cxled->cxld.id == (port->hdm_end);
+}
+
+struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
+ struct cxl_region **cxlr)
+{
+ struct cxl_port *endpoint = cxlmd->endpoint;
+ struct cxl_endpoint_decoder *cxled;
+ struct device *cxled_dev;
+
+ if (!endpoint)
+ return NULL;
+
+ guard(rwsem_read)(&cxl_rwsem.dpa);
+ cxled_dev = device_find_child(&endpoint->dev, NULL,
+ find_committed_decoder);
+
+ if (!cxled_dev)
+ return NULL;
+
+ cxled = to_cxl_endpoint_decoder(cxled_dev);
+ *cxlr = cxled->cxld.region;
+
+ put_device(cxled_dev);
+ return cxled;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_committed_decoder, "CXL");
+
static void cxld_set_interleave(struct cxl_decoder *cxld, u32 *ctrl)
{
u16 eig;
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 043fc31c764e..2ff3c19c684c 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -250,4 +250,7 @@ int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity);
struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
struct cxl_dev_state *cxlds,
const struct cxl_memdev_ops *ops);
+struct cxl_region;
+struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
+ struct cxl_region **cxlr);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 12/25] cxl: Add function for obtaining region range
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (10 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators alejandro.lucero-palau
` (13 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Zhi Wang, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
A CXL region struct contains the physical address range to work with.
Type2 drivers can create a CXL region but have no access to the
related struct, as it is private to the kernel CXL core.
Add a function for obtaining the CXL region range, which a Type2
driver can then use for mapping that memory range.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Zhi Wang <zhiw@nvidia.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/cxl/core/region.c | 23 +++++++++++++++++++++++
include/cxl/cxl.h | 2 ++
2 files changed, 25 insertions(+)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index b06fee1978ba..8166a402373e 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2575,6 +2575,29 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
return ERR_PTR(rc);
}
+/**
+ * cxl_get_region_range - obtain range linked to a CXL region
+ *
+ * @region: a pointer to struct cxl_region
+ * @range: a pointer to a struct range to be set
+ *
+ * Returns 0 or error.
+ */
+int cxl_get_region_range(struct cxl_region *region, struct range *range)
+{
+ if (WARN_ON_ONCE(!region))
+ return -ENODEV;
+
+ if (!region->params.res)
+ return -ENOSPC;
+
+ range->start = region->params.res->start;
+ range->end = region->params.res->end;
+
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_region_range, "CXL");
+
static ssize_t __create_region_show(struct cxl_root_decoder *cxlrd, char *buf)
{
return sysfs_emit(buf, "region%u\n", atomic_read(&cxlrd->region_id));
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 2ff3c19c684c..f02dd817b40f 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -253,4 +253,6 @@ struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
struct cxl_region;
struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
struct cxl_region **cxlr);
+struct range;
+int cxl_get_region_range(struct cxl_region *region, struct range *range);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (11 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 12/25] cxl: Add function for obtaining region range alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-15 13:53 ` Jonathan Cameron
2025-12-05 11:52 ` [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware alejandro.lucero-palau
` (12 subsequent siblings)
25 siblings, 1 reply; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero
From: Alejandro Lucero <alucerop@amd.com>
Add unregister_region() and cxl_decoder_detach() to the accelerator
driver API for a clean exit.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
---
drivers/cxl/core/core.h | 5 -----
drivers/cxl/core/region.c | 4 +++-
include/cxl/cxl.h | 9 +++++++++
3 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 1c1726856139..9a6775845afe 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -15,11 +15,6 @@ extern const struct device_type cxl_pmu_type;
extern struct attribute_group cxl_base_attribute_group;
-enum cxl_detach_mode {
- DETACH_ONLY,
- DETACH_INVALIDATE,
-};
-
#ifdef CONFIG_CXL_REGION
extern struct device_attribute dev_attr_create_pmem_region;
extern struct device_attribute dev_attr_create_ram_region;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 8166a402373e..104caa33b7bb 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2199,6 +2199,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
}
return 0;
}
+EXPORT_SYMBOL_NS_GPL(cxl_decoder_detach, "CXL");
static int __attach_target(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled, int pos,
@@ -2393,7 +2394,7 @@ static struct cxl_region *to_cxl_region(struct device *dev)
return container_of(dev, struct cxl_region, dev);
}
-static void unregister_region(void *_cxlr)
+void unregister_region(void *_cxlr)
{
struct cxl_region *cxlr = _cxlr;
struct cxl_region_params *p = &cxlr->params;
@@ -2412,6 +2413,7 @@ static void unregister_region(void *_cxlr)
cxl_region_iomem_release(cxlr);
put_device(&cxlr->dev);
}
+EXPORT_SYMBOL_NS_GPL(unregister_region, "CXL");
static struct lock_class_key cxl_region_key;
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index f02dd817b40f..b8683c75dfde 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -255,4 +255,13 @@ struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
struct cxl_region **cxlr);
struct range;
int cxl_get_region_range(struct cxl_region *region, struct range *range);
+enum cxl_detach_mode {
+ DETACH_ONLY,
+ DETACH_INVALIDATE,
+};
+
+int cxl_decoder_detach(struct cxl_region *cxlr,
+ struct cxl_endpoint_decoder *cxled, int pos,
+ enum cxl_detach_mode mode);
+void unregister_region(void *_cxlr);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (12 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-15 13:57 ` Jonathan Cameron
2025-12-05 11:52 ` [PATCH v22 15/25] cxl: Define a driver interface for HPA free space enumeration alejandro.lucero-palau
` (11 subsequent siblings)
25 siblings, 1 reply; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero
From: Alejandro Lucero <alucerop@amd.com>
Check whether the device HDM decoder was already committed during
firmware/BIOS initialization.
If so, a CXL region should exist after memdev allocation/initialization.
Get the HPA range from the region and map it.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
---
drivers/net/ethernet/sfc/efx_cxl.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index f6eda93e67e2..ad1f49e76179 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -19,6 +19,7 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
struct efx_nic *efx = &probe_data->efx;
struct pci_dev *pci_dev = efx->pci_dev;
struct efx_cxl *cxl;
+ struct range range;
u16 dvsec;
int rc;
@@ -90,6 +91,26 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
return PTR_ERR(cxl->cxlmd);
}
+ cxl->cxled = cxl_get_committed_decoder(cxl->cxlmd, &cxl->efx_region);
+ if (cxl->cxled) {
+ if (!cxl->efx_region) {
+ pci_err(pci_dev, "CXL found committed decoder without a region");
+ return -ENODEV;
+ }
+ rc = cxl_get_region_range(cxl->efx_region, &range);
+ if (rc) {
+ pci_err(pci_dev,
+ "CXL getting regions params from a committed decoder failed");
+ return rc;
+ }
+
+ cxl->ctpio_cxl = ioremap(range.start, range.end - range.start + 1);
+ if (!cxl->ctpio_cxl) {
+ pci_err(pci_dev, "CXL ioremap region (%pra) failed", &range);
+ return -ENOMEM;
+ }
+ }
+
probe_data->cxl = cxl;
return 0;
@@ -97,6 +118,12 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
void efx_cxl_exit(struct efx_probe_data *probe_data)
{
+ if (!probe_data->cxl)
+ return;
+
+ iounmap(probe_data->cxl->ctpio_cxl);
+ cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0, DETACH_INVALIDATE);
+ unregister_region(probe_data->cxl->efx_region);
}
MODULE_IMPORT_NS("CXL");
--
2.34.1
* [PATCH v22 15/25] cxl: Define a driver interface for HPA free space enumeration
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (13 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 16/25] sfc: get root decoder alejandro.lucero-palau
` (10 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
CXL region creation involves allocating capacity from Device Physical
Address (DPA) and assigning it to decode a given Host Physical Address
(HPA). Before determining how much DPA to allocate, the amount of available
HPA must be determined. Also, not all HPA is created equal: some HPA
targets RAM, some targets PMEM, some is prepared for device-memory flows
like HDM-D and HDM-DB, and some is HDM-H (host-only).
In order to support Type2 CXL devices, wrap all of those concerns into
an API that retrieves a root decoder (platform CXL window) that fits the
specified constraints and the capacity available for a new region.
Add a complementary function for releasing the reference to such root
decoder.
Based on https://lore.kernel.org/linux-cxl/168592159290.1948938.13522227102445462976.stgit@dwillia2-xfh.jf.intel.com/
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
drivers/cxl/core/region.c | 165 ++++++++++++++++++++++++++++++++++++++
drivers/cxl/cxl.h | 3 +
include/cxl/cxl.h | 6 ++
3 files changed, 174 insertions(+)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 104caa33b7bb..be2b78fd6ee9 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -711,6 +711,171 @@ static int free_hpa(struct cxl_region *cxlr)
return 0;
}
+struct cxlrd_max_context {
+ struct device * const *host_bridges;
+ int interleave_ways;
+ unsigned long flags;
+ resource_size_t max_hpa;
+ struct cxl_root_decoder *cxlrd;
+};
+
+static int find_max_hpa(struct device *dev, void *data)
+{
+ struct cxlrd_max_context *ctx = data;
+ struct cxl_switch_decoder *cxlsd;
+ struct cxl_root_decoder *cxlrd;
+ struct resource *res, *prev;
+ struct cxl_decoder *cxld;
+ resource_size_t free = 0;
+ resource_size_t max;
+ int found = 0;
+
+ if (!is_root_decoder(dev))
+ return 0;
+
+ cxlrd = to_cxl_root_decoder(dev);
+ cxlsd = &cxlrd->cxlsd;
+ cxld = &cxlsd->cxld;
+
+ if ((cxld->flags & ctx->flags) != ctx->flags) {
+ dev_dbg(dev, "flags not matching: %08lx vs %08lx\n",
+ cxld->flags, ctx->flags);
+ return 0;
+ }
+
+ for (int i = 0; i < ctx->interleave_ways; i++) {
+ for (int j = 0; j < ctx->interleave_ways; j++) {
+ if (ctx->host_bridges[i] == cxlsd->target[j]->dport_dev) {
+ found++;
+ break;
+ }
+ }
+ }
+
+ if (found != ctx->interleave_ways) {
+ dev_dbg(dev,
+ "Not enough host bridges. Found %d for %d interleave ways requested\n",
+ found, ctx->interleave_ways);
+ return 0;
+ }
+
+ /*
+ * Walk the root decoder resource range relying on cxl_rwsem.region to
+ * preclude sibling arrival/departure and find the largest free space
+ * gap.
+ */
+ lockdep_assert_held_read(&cxl_rwsem.region);
+ res = cxlrd->res->child;
+
+ /* With no resource child the whole parent resource is available */
+ if (!res)
+ max = resource_size(cxlrd->res);
+ else
+ max = 0;
+
+ for (prev = NULL; res; prev = res, res = res->sibling) {
+
+ if (!prev && res->start == cxlrd->res->start &&
+ res->end == cxlrd->res->end) {
+ max = resource_size(cxlrd->res);
+ break;
+ }
+ /*
+ * Sanity check for preventing arithmetic problems below as a
+ * resource with size 0 could imply using the end field below
+ * when set to unsigned zero - 1 or all f in hex.
+ */
+ if (prev && !resource_size(prev))
+ continue;
+
+ if (!prev && res->start > cxlrd->res->start) {
+ free = res->start - cxlrd->res->start;
+ max = max(free, max);
+ }
+ if (prev && res->start > prev->end + 1) {
+ free = res->start - prev->end - 1;
+ max = max(free, max);
+ }
+ }
+
+ if (prev && prev->end < cxlrd->res->end) {
+ free = cxlrd->res->end - prev->end;
+ max = max(free, max);
+ }
+
+ dev_dbg(cxlrd_dev(cxlrd), "found %pa bytes of free space\n", &max);
+ if (max > ctx->max_hpa) {
+ if (ctx->cxlrd)
+ put_device(cxlrd_dev(ctx->cxlrd));
+ get_device(cxlrd_dev(cxlrd));
+ ctx->cxlrd = cxlrd;
+ ctx->max_hpa = max;
+ }
+ return 0;
+}
+
+/**
+ * cxl_get_hpa_freespace - find a root decoder with free capacity per constraints
+ * @cxlmd: the mem device requiring the HPA
+ * @interleave_ways: number of entries in @host_bridges
+ * @flags: CXL_DECODER_F flags for selecting RAM vs PMEM, and Type2 device
+ * @max_avail_contig: output parameter of max contiguous bytes available in the
+ * returned decoder
+ *
+ * Returns a pointer to a struct cxl_root_decoder
+ *
+ * The return tuple of a 'struct cxl_root_decoder' and 'bytes available given
+ * in (@max_avail_contig))' is a point in time snapshot. If by the time the
+ * caller goes to use this decoder and its capacity is reduced then caller needs
+ * to loop and retry.
+ *
+ * The returned root decoder has an elevated reference count that needs to be
+ * put with cxl_put_root_decoder(cxlrd).
+ */
+struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_memdev *cxlmd,
+ int interleave_ways,
+ unsigned long flags,
+ resource_size_t *max_avail_contig)
+{
+ struct cxlrd_max_context ctx = {
+ .flags = flags,
+ .interleave_ways = interleave_ways,
+ };
+ struct cxl_port *root_port;
+ struct cxl_port *endpoint;
+
+ endpoint = cxlmd->endpoint;
+ if (!endpoint) {
+ dev_dbg(&cxlmd->dev, "endpoint not linked to memdev\n");
+ return ERR_PTR(-ENXIO);
+ }
+
+ ctx.host_bridges = &endpoint->host_bridge;
+
+ struct cxl_root *root __free(put_cxl_root) = find_cxl_root(endpoint);
+ if (!root) {
+ dev_dbg(&endpoint->dev, "endpoint is not related to a root port\n");
+ return ERR_PTR(-ENXIO);
+ }
+
+ root_port = &root->port;
+ scoped_guard(rwsem_read, &cxl_rwsem.region)
+ device_for_each_child(&root_port->dev, &ctx, find_max_hpa);
+
+ if (!ctx.cxlrd)
+ return ERR_PTR(-ENOMEM);
+
+ *max_avail_contig = ctx.max_hpa;
+ return ctx.cxlrd;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_hpa_freespace, "CXL");
+
+void cxl_put_root_decoder(struct cxl_root_decoder *cxlrd)
+{
+ put_device(cxlrd_dev(cxlrd));
+}
+EXPORT_SYMBOL_NS_GPL(cxl_put_root_decoder, "CXL");
+
static ssize_t size_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t len)
{
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index d7ddca6f7115..78845e0e3e4f 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -679,6 +679,9 @@ struct cxl_root_decoder *to_cxl_root_decoder(struct device *dev);
struct cxl_switch_decoder *to_cxl_switch_decoder(struct device *dev);
struct cxl_endpoint_decoder *to_cxl_endpoint_decoder(struct device *dev);
bool is_root_decoder(struct device *dev);
+
+#define cxlrd_dev(cxlrd) (&(cxlrd)->cxlsd.cxld.dev)
+
bool is_switch_decoder(struct device *dev);
bool is_endpoint_decoder(struct device *dev);
struct cxl_root_decoder *cxl_root_decoder_alloc(struct cxl_port *port,
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index b8683c75dfde..f138bb4c2560 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -264,4 +264,10 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled, int pos,
enum cxl_detach_mode mode);
void unregister_region(void *_cxlr);
+struct cxl_port;
+struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_memdev *cxlmd,
+ int interleave_ways,
+ unsigned long flags,
+ resource_size_t *max);
+void cxl_put_root_decoder(struct cxl_root_decoder *cxlrd);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 16/25] sfc: get root decoder
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (14 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 15/25] cxl: Define a driver interface for HPA free space enumeration alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 17/25] cxl: Define a driver interface for DPA allocation alejandro.lucero-palau
` (9 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Martin Habets, Edward Cree, Jonathan Cameron,
Ben Cheatham
From: Alejandro Lucero <alucerop@amd.com>
Use the CXL API for getting the HPA (Host Physical Address) range to
use from a CXL root decoder.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
---
drivers/cxl/cxl.h | 15 ---------------
drivers/net/ethernet/sfc/Kconfig | 1 +
drivers/net/ethernet/sfc/efx_cxl.c | 30 +++++++++++++++++++++++++++---
drivers/net/ethernet/sfc/efx_cxl.h | 1 +
include/cxl/cxl.h | 15 +++++++++++++++
5 files changed, 44 insertions(+), 18 deletions(-)
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 78845e0e3e4f..5441a296c351 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -220,21 +220,6 @@ int cxl_dport_map_rcd_linkcap(struct pci_dev *pdev, struct cxl_dport *dport);
#define CXL_RESOURCE_NONE ((resource_size_t) -1)
#define CXL_TARGET_STRLEN 20
-/*
- * cxl_decoder flags that define the type of memory / devices this
- * decoder supports as well as configuration lock status See "CXL 2.0
- * 8.2.5.12.7 CXL HDM Decoder 0 Control Register" for details.
- * Additionally indicate whether decoder settings were autodetected,
- * user customized.
- */
-#define CXL_DECODER_F_RAM BIT(0)
-#define CXL_DECODER_F_PMEM BIT(1)
-#define CXL_DECODER_F_TYPE2 BIT(2)
-#define CXL_DECODER_F_TYPE3 BIT(3)
-#define CXL_DECODER_F_LOCK BIT(4)
-#define CXL_DECODER_F_ENABLE BIT(5)
-#define CXL_DECODER_F_MASK GENMASK(5, 0)
-
enum cxl_decoder_type {
CXL_DECODER_DEVMEM = 2,
CXL_DECODER_HOSTONLYMEM = 3,
diff --git a/drivers/net/ethernet/sfc/Kconfig b/drivers/net/ethernet/sfc/Kconfig
index 979f2801e2a8..e959d9b4f4ce 100644
--- a/drivers/net/ethernet/sfc/Kconfig
+++ b/drivers/net/ethernet/sfc/Kconfig
@@ -69,6 +69,7 @@ config SFC_MCDI_LOGGING
config SFC_CXL
bool "Solarflare SFC9100-family CXL support"
depends on SFC && CXL_BUS >= SFC
+ depends on CXL_REGION
default SFC
help
This enables SFC CXL support if the kernel is configuring CXL for
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index ad1f49e76179..d0e907034960 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -18,6 +18,7 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
{
struct efx_nic *efx = &probe_data->efx;
struct pci_dev *pci_dev = efx->pci_dev;
+ resource_size_t max_size;
struct efx_cxl *cxl;
struct range range;
u16 dvsec;
@@ -109,6 +110,24 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
pci_err(pci_dev, "CXL ioremap region (%pra) failed", &range);
return -ENOMEM;
}
+ cxl->hdm_was_committed = true;
+ } else {
+ cxl->cxlrd = cxl_get_hpa_freespace(cxl->cxlmd, 1,
+ CXL_DECODER_F_RAM |
+ CXL_DECODER_F_TYPE2,
+ &max_size);
+
+ if (IS_ERR(cxl->cxlrd)) {
+ dev_err(&pci_dev->dev, "cxl_get_hpa_freespace failed\n");
+ return PTR_ERR(cxl->cxlrd);
+ }
+
+ if (max_size < EFX_CTPIO_BUFFER_SIZE) {
+ dev_err(&pci_dev->dev, "%s: not enough free HPA space %pap < %u\n",
+ __func__, &max_size, EFX_CTPIO_BUFFER_SIZE);
+ cxl_put_root_decoder(cxl->cxlrd);
+ return -ENOSPC;
+ }
}
probe_data->cxl = cxl;
@@ -121,9 +140,14 @@ void efx_cxl_exit(struct efx_probe_data *probe_data)
if (!probe_data->cxl)
return;
- iounmap(probe_data->cxl->ctpio_cxl);
- cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0, DETACH_INVALIDATE);
- unregister_region(probe_data->cxl->efx_region);
+ if (probe_data->cxl->hdm_was_committed) {
+ iounmap(probe_data->cxl->ctpio_cxl);
+ cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
+ DETACH_INVALIDATE);
+ unregister_region(probe_data->cxl->efx_region);
+ } else {
+ cxl_put_root_decoder(probe_data->cxl->cxlrd);
+ }
}
MODULE_IMPORT_NS("CXL");
diff --git a/drivers/net/ethernet/sfc/efx_cxl.h b/drivers/net/ethernet/sfc/efx_cxl.h
index 961639cef692..9a92e386695b 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.h
+++ b/drivers/net/ethernet/sfc/efx_cxl.h
@@ -27,6 +27,7 @@ struct efx_cxl {
struct cxl_root_decoder *cxlrd;
struct cxl_port *endpoint;
struct cxl_endpoint_decoder *cxled;
+ bool hdm_was_committed;
struct cxl_region *efx_region;
void __iomem *ctpio_cxl;
};
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index f138bb4c2560..6fe5c15bd3c5 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -153,6 +153,21 @@ struct cxl_dpa_partition {
#define CXL_NR_PARTITIONS_MAX 2
+/*
+ * cxl_decoder flags that define the type of memory / devices this
+ * decoder supports as well as configuration lock status See "CXL 2.0
+ * 8.2.5.12.7 CXL HDM Decoder 0 Control Register" for details.
+ * Additionally indicate whether decoder settings were autodetected,
+ * user customized.
+ */
+#define CXL_DECODER_F_RAM BIT(0)
+#define CXL_DECODER_F_PMEM BIT(1)
+#define CXL_DECODER_F_TYPE2 BIT(2)
+#define CXL_DECODER_F_TYPE3 BIT(3)
+#define CXL_DECODER_F_LOCK BIT(4)
+#define CXL_DECODER_F_ENABLE BIT(5)
+#define CXL_DECODER_F_MASK GENMASK(5, 0)
+
struct cxl_memdev_ops {
int (*probe)(struct cxl_memdev *cxlmd);
};
--
2.34.1
* [PATCH v22 17/25] cxl: Define a driver interface for DPA allocation
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (15 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 16/25] sfc: get root decoder alejandro.lucero-palau
@ 2025-12-05 11:52 ` alejandro.lucero-palau
2025-12-05 11:52 ` [PATCH v22 18/25] sfc: get endpoint decoder alejandro.lucero-palau
` (8 subsequent siblings)
25 siblings, 0 replies; 36+ messages in thread
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
Region creation involves finding available DPA (device-physical-address)
capacity to map into HPA (host-physical-address) space.
In order to support CXL Type2 devices, define an API, cxl_request_dpa(),
that tries to allocate the DPA memory the driver requires to operate. The
memory requested should not be bigger than the maximum available HPA
obtained previously with cxl_get_hpa_freespace().
Based on https://lore.kernel.org/linux-cxl/168592158743.1948938.7622563891193802610.stgit@dwillia2-xfh.jf.intel.com/
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/cxl/core/hdm.c | 84 ++++++++++++++++++++++++++++++++++++++++++
drivers/cxl/cxl.h | 1 +
include/cxl/cxl.h | 5 +++
3 files changed, 90 insertions(+)
diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index fa99657440d1..5a2616129244 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -3,6 +3,7 @@
#include <linux/seq_file.h>
#include <linux/device.h>
#include <linux/delay.h>
+#include <cxl/cxl.h>
#include "cxlmem.h"
#include "core.h"
@@ -551,6 +552,12 @@ bool cxl_resource_contains_addr(const struct resource *res, const resource_size_
return resource_contains(res, &_addr);
}
+/**
+ * cxl_dpa_free - release DPA (Device Physical Address)
+ * @cxled: endpoint decoder linked to the DPA
+ *
+ * Returns 0 or error.
+ */
int cxl_dpa_free(struct cxl_endpoint_decoder *cxled)
{
struct cxl_port *port = cxled_to_port(cxled);
@@ -577,6 +584,7 @@ int cxl_dpa_free(struct cxl_endpoint_decoder *cxled)
devm_cxl_dpa_release(cxled);
return 0;
}
+EXPORT_SYMBOL_NS_GPL(cxl_dpa_free, "CXL");
int cxl_dpa_set_part(struct cxl_endpoint_decoder *cxled,
enum cxl_partition_mode mode)
@@ -608,6 +616,82 @@ int cxl_dpa_set_part(struct cxl_endpoint_decoder *cxled,
return 0;
}
+static int find_free_decoder(struct device *dev, const void *data)
+{
+ struct cxl_endpoint_decoder *cxled;
+ struct cxl_port *port;
+
+ if (!is_endpoint_decoder(dev))
+ return 0;
+
+ cxled = to_cxl_endpoint_decoder(dev);
+ port = cxled_to_port(cxled);
+
+ return cxled->cxld.id == (port->hdm_end + 1);
+}
+
+static struct cxl_endpoint_decoder *
+cxl_find_free_decoder(struct cxl_memdev *cxlmd)
+{
+ struct cxl_port *endpoint = cxlmd->endpoint;
+ struct device *dev;
+
+ guard(rwsem_read)(&cxl_rwsem.dpa);
+ dev = device_find_child(&endpoint->dev, NULL,
+ find_free_decoder);
+ if (!dev)
+ return NULL;
+
+ return to_cxl_endpoint_decoder(dev);
+}
+
+/**
+ * cxl_request_dpa - search and reserve DPA given input constraints
+ * @cxlmd: memdev with an endpoint port with available decoders
+ * @mode: CXL partition mode (ram vs pmem)
+ * @alloc: dpa size required
+ *
+ * Returns a pointer to a 'struct cxl_endpoint_decoder' on success or
+ * an errno encoded pointer on failure.
+ *
+ * Given that a region needs to allocate from limited HPA capacity it
+ * may be the case that a device has more mappable DPA capacity than
+ * available HPA. The expectation is that @alloc is a driver known
+ * value based on the device capacity but which could not be fully
+ * available due to HPA constraints.
+ *
+ * Returns a pinned cxl_decoder with at least @alloc bytes of capacity
+ * reserved, or an error pointer. The caller is also expected to own the
+ * lifetime of the memdev registration associated with the endpoint to
+ * pin the decoder registered as well.
+ */
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
+ enum cxl_partition_mode mode,
+ resource_size_t alloc)
+{
+ int rc;
+
+ if (!IS_ALIGNED(alloc, SZ_256M))
+ return ERR_PTR(-EINVAL);
+
+ struct cxl_endpoint_decoder *cxled __free(put_cxled) =
+ cxl_find_free_decoder(cxlmd);
+
+ if (!cxled)
+ return ERR_PTR(-ENODEV);
+
+ rc = cxl_dpa_set_part(cxled, mode);
+ if (rc)
+ return ERR_PTR(rc);
+
+ rc = cxl_dpa_alloc(cxled, alloc);
+ if (rc)
+ return ERR_PTR(rc);
+
+ return no_free_ptr(cxled);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, "CXL");
+
static int __cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size)
{
struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 5441a296c351..06a111392c3b 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -640,6 +640,7 @@ struct cxl_root *find_cxl_root(struct cxl_port *port);
DEFINE_FREE(put_cxl_root, struct cxl_root *, if (_T) put_device(&_T->port.dev))
DEFINE_FREE(put_cxl_port, struct cxl_port *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
+DEFINE_FREE(put_cxled, struct cxl_endpoint_decoder *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->cxld.dev))
DEFINE_FREE(put_cxl_root_decoder, struct cxl_root_decoder *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->cxlsd.cxld.dev))
DEFINE_FREE(put_cxl_region, struct cxl_region *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 6fe5c15bd3c5..7bd88e6b8598 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -7,6 +7,7 @@
#include <linux/node.h>
#include <linux/ioport.h>
+#include <linux/range.h>
#include <cxl/mailbox.h>
/**
@@ -285,4 +286,8 @@ struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_memdev *cxlmd,
unsigned long flags,
resource_size_t *max);
void cxl_put_root_decoder(struct cxl_root_decoder *cxlrd);
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
+ enum cxl_partition_mode mode,
+ resource_size_t alloc);
+int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 18/25] sfc: get endpoint decoder
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Martin Habets, Edward Cree, Jonathan Cameron,
Ben Cheatham
From: Alejandro Lucero <alucerop@amd.com>
Use the CXL API for reserving DPA (Device Physical Address) capacity to be
used through an endpoint decoder.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/net/ethernet/sfc/efx_cxl.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index d0e907034960..56e7104483a5 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -128,6 +128,14 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
cxl_put_root_decoder(cxl->cxlrd);
return -ENOSPC;
}
+
+ cxl->cxled = cxl_request_dpa(cxl->cxlmd, CXL_PARTMODE_RAM,
+ EFX_CTPIO_BUFFER_SIZE);
+ if (IS_ERR(cxl->cxled)) {
+ pci_err(pci_dev, "CXL accel request DPA failed");
+ cxl_put_root_decoder(cxl->cxlrd);
+ return PTR_ERR(cxl->cxled);
+ }
}
probe_data->cxl = cxl;
@@ -146,6 +154,7 @@ void efx_cxl_exit(struct efx_probe_data *probe_data)
DETACH_INVALIDATE);
unregister_region(probe_data->cxl->efx_region);
} else {
+ cxl_dpa_free(probe_data->cxl->cxled);
cxl_put_root_decoder(probe_data->cxl->cxlrd);
}
}
--
2.34.1
* [PATCH v22 19/25] cxl: Make region type based on endpoint type
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Zhi Wang, Jonathan Cameron, Ben Cheatham,
Alison Schofield, Davidlohr Bueso
From: Alejandro Lucero <alucerop@amd.com>
Current code expects Type3 (CXL_DECODER_HOSTONLYMEM) devices only.
Supporting Type2 implies the region type needs to be based on the
endpoint's target type, HDM-D[B], instead.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Zhi Wang <zhiw@nvidia.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Davidlohr Bueso <daves@stgolabs.net>
---
drivers/cxl/core/region.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index be2b78fd6ee9..9aeee87e647e 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2783,7 +2783,8 @@ static ssize_t create_ram_region_show(struct device *dev,
}
static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
- enum cxl_partition_mode mode, int id)
+ enum cxl_partition_mode mode, int id,
+ enum cxl_decoder_type target_type)
{
int rc;
@@ -2805,7 +2806,7 @@ static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
return ERR_PTR(-EBUSY);
}
- return devm_cxl_add_region(cxlrd, id, mode, CXL_DECODER_HOSTONLYMEM);
+ return devm_cxl_add_region(cxlrd, id, mode, target_type);
}
static ssize_t create_region_store(struct device *dev, const char *buf,
@@ -2819,7 +2820,7 @@ static ssize_t create_region_store(struct device *dev, const char *buf,
if (rc != 1)
return -EINVAL;
- cxlr = __create_region(cxlrd, mode, id);
+ cxlr = __create_region(cxlrd, mode, id, CXL_DECODER_HOSTONLYMEM);
if (IS_ERR(cxlr))
return PTR_ERR(cxlr);
@@ -3713,7 +3714,8 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
do {
cxlr = __create_region(cxlrd, cxlds->part[part].mode,
- atomic_read(&cxlrd->region_id));
+ atomic_read(&cxlrd->region_id),
+ cxled->cxld.target_type);
} while (IS_ERR(cxlr) && PTR_ERR(cxlr) == -EBUSY);
if (IS_ERR(cxlr)) {
--
2.34.1
* [PATCH v22 20/25] cxl/region: Factor out interleave ways setup
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Zhi Wang, Jonathan Cameron, Ben Cheatham,
Alison Schofield
From: Alejandro Lucero <alucerop@amd.com>
Region creation based on Type3 devices is triggered from user space,
allowing memory combination through interleaving.
In preparation for kernel-driven region creation, that is, Type2 drivers
triggering region creation backed by their advertised CXL memory, factor
out a common helper from the user-sysfs region setup for interleave ways.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Zhi Wang <zhiw@nvidia.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
---
drivers/cxl/core/region.c | 43 ++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 16 deletions(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 9aeee87e647e..157deee726a9 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -491,22 +491,14 @@ static ssize_t interleave_ways_show(struct device *dev,
static const struct attribute_group *get_cxl_region_target_group(void);
-static ssize_t interleave_ways_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t len)
+static int set_interleave_ways(struct cxl_region *cxlr, int val)
{
- struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev->parent);
+ struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
- struct cxl_region *cxlr = to_cxl_region(dev);
struct cxl_region_params *p = &cxlr->params;
- unsigned int val, save;
- int rc;
+ int save, rc;
u8 iw;
- rc = kstrtouint(buf, 0, &val);
- if (rc)
- return rc;
-
rc = ways_to_eiw(val, &iw);
if (rc)
return rc;
@@ -521,9 +513,7 @@ static ssize_t interleave_ways_store(struct device *dev,
return -EINVAL;
}
- ACQUIRE(rwsem_write_kill, rwsem)(&cxl_rwsem.region);
- if ((rc = ACQUIRE_ERR(rwsem_write_kill, &rwsem)))
- return rc;
+ lockdep_assert_held_write(&cxl_rwsem.region);
if (p->state >= CXL_CONFIG_INTERLEAVE_ACTIVE)
return -EBUSY;
@@ -531,10 +521,31 @@ static ssize_t interleave_ways_store(struct device *dev,
save = p->interleave_ways;
p->interleave_ways = val;
rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_target_group());
- if (rc) {
+ if (rc)
p->interleave_ways = save;
+
+ return rc;
+}
+
+static ssize_t interleave_ways_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t len)
+{
+ struct cxl_region *cxlr = to_cxl_region(dev);
+ unsigned int val;
+ int rc;
+
+ rc = kstrtouint(buf, 0, &val);
+ if (rc)
+ return rc;
+
+ ACQUIRE(rwsem_write_kill, rwsem)(&cxl_rwsem.region);
+ if ((rc = ACQUIRE_ERR(rwsem_write_kill, &rwsem)))
+ return rc;
+
+ rc = set_interleave_ways(cxlr, val);
+ if (rc)
return rc;
- }
return len;
}
--
2.34.1
* [PATCH v22 21/25] cxl/region: Factor out interleave granularity setup
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Zhi Wang, Jonathan Cameron, Ben Cheatham,
Alison Schofield
From: Alejandro Lucero <alucerop@amd.com>
Region creation based on Type3 devices is triggered from user space,
allowing memory combination through interleaving.
In preparation for kernel-driven region creation, that is, Type2 drivers
triggering region creation backed by their advertised CXL memory, factor
out a common helper from the user-sysfs region setup for interleave
granularity.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Zhi Wang <zhiw@nvidia.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
---
drivers/cxl/core/region.c | 39 +++++++++++++++++++++++++--------------
1 file changed, 25 insertions(+), 14 deletions(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 157deee726a9..21063f7a9468 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -565,21 +565,14 @@ static ssize_t interleave_granularity_show(struct device *dev,
return sysfs_emit(buf, "%d\n", p->interleave_granularity);
}
-static ssize_t interleave_granularity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t len)
+static int set_interleave_granularity(struct cxl_region *cxlr, int val)
{
- struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev->parent);
+ struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
- struct cxl_region *cxlr = to_cxl_region(dev);
struct cxl_region_params *p = &cxlr->params;
- int rc, val;
+ int rc;
u16 ig;
- rc = kstrtoint(buf, 0, &val);
- if (rc)
- return rc;
-
rc = granularity_to_eig(val, &ig);
if (rc)
return rc;
@@ -595,14 +588,32 @@ static ssize_t interleave_granularity_store(struct device *dev,
if (cxld->interleave_ways > 1 && val != cxld->interleave_granularity)
return -EINVAL;
- ACQUIRE(rwsem_write_kill, rwsem)(&cxl_rwsem.region);
- if ((rc = ACQUIRE_ERR(rwsem_write_kill, &rwsem)))
- return rc;
-
+ lockdep_assert_held_write(&cxl_rwsem.region);
if (p->state >= CXL_CONFIG_INTERLEAVE_ACTIVE)
return -EBUSY;
p->interleave_granularity = val;
+ return 0;
+}
+
+static ssize_t interleave_granularity_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t len)
+{
+ struct cxl_region *cxlr = to_cxl_region(dev);
+ int rc, val;
+
+ rc = kstrtoint(buf, 0, &val);
+ if (rc)
+ return rc;
+
+ ACQUIRE(rwsem_write_kill, rwsem)(&cxl_rwsem.region);
+ if ((rc = ACQUIRE_ERR(rwsem_write_kill, &rwsem)))
+ return rc;
+
+ rc = set_interleave_granularity(cxlr, val);
+ if (rc)
+ return rc;
return len;
}
--
2.34.1
* [PATCH v22 22/25] cxl: Allow region creation by type2 drivers
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
Creating a CXL region currently requires userspace intervention through
the cxl sysfs files. Type2 support should allow accelerator drivers to
create such a cxl region from kernel code.
Add that functionality and integrate it with the current support for
memory expanders.
Based on https://lore.kernel.org/linux-cxl/168592159835.1948938.1647215579839222774.stgit@dwillia2-xfh.jf.intel.com/
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/cxl/core/region.c | 131 ++++++++++++++++++++++++++++++++++++--
include/cxl/cxl.h | 3 +
2 files changed, 127 insertions(+), 7 deletions(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 21063f7a9468..694bb1e543cc 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2894,6 +2894,14 @@ cxl_find_region_by_name(struct cxl_root_decoder *cxlrd, const char *name)
return to_cxl_region(region_dev);
}
+static void drop_region(struct cxl_region *cxlr)
+{
+ struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
+ struct cxl_port *port = cxlrd_to_port(cxlrd);
+
+ devm_release_action(port->uport_dev, unregister_region, cxlr);
+}
+
static ssize_t delete_region_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
@@ -3724,14 +3732,12 @@ static int __construct_region(struct cxl_region *cxlr,
return 0;
}
-/* Establish an empty region covering the given HPA range */
-static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
- struct cxl_endpoint_decoder *cxled)
+static struct cxl_region *construct_region_begin(struct cxl_root_decoder *cxlrd,
+ struct cxl_endpoint_decoder *cxled)
{
struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
- struct cxl_port *port = cxlrd_to_port(cxlrd);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
- int rc, part = READ_ONCE(cxled->part);
+ int part = READ_ONCE(cxled->part);
struct cxl_region *cxlr;
do {
@@ -3740,13 +3746,26 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
cxled->cxld.target_type);
} while (IS_ERR(cxlr) && PTR_ERR(cxlr) == -EBUSY);
- if (IS_ERR(cxlr)) {
+ if (IS_ERR(cxlr))
dev_err(cxlmd->dev.parent,
"%s:%s: %s failed assign region: %ld\n",
dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev),
__func__, PTR_ERR(cxlr));
+
+ return cxlr;
+}
+
+/* Establish an empty region covering the given HPA range */
+static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
+ struct cxl_endpoint_decoder *cxled)
+{
+ struct cxl_port *port = cxlrd_to_port(cxlrd);
+ struct cxl_region *cxlr;
+ int rc;
+
+ cxlr = construct_region_begin(cxlrd, cxled);
+ if (IS_ERR(cxlr))
return cxlr;
- }
rc = __construct_region(cxlr, cxlrd, cxled);
if (rc) {
@@ -3757,6 +3776,104 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
return cxlr;
}
+DEFINE_FREE(cxl_region_drop, struct cxl_region *, if (_T) drop_region(_T))
+
+static struct cxl_region *
+__construct_new_region(struct cxl_root_decoder *cxlrd,
+ struct cxl_endpoint_decoder **cxled, int ways)
+{
+ struct cxl_memdev *cxlmd = cxled_to_memdev(cxled[0]);
+ struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
+ struct cxl_region_params *p;
+ resource_size_t size = 0;
+ int rc, i;
+
+ struct cxl_region *cxlr __free(cxl_region_drop) =
+ construct_region_begin(cxlrd, cxled[0]);
+ if (IS_ERR(cxlr))
+ return cxlr;
+
+ guard(rwsem_write)(&cxl_rwsem.region);
+
+ /*
+ * Sanity check. This should not happen with an accel driver handling
+ * the region creation.
+ */
+ p = &cxlr->params;
+ if (p->state >= CXL_CONFIG_INTERLEAVE_ACTIVE) {
+ dev_err(cxlmd->dev.parent,
+ "%s:%s: %s unexpected region state\n",
+ dev_name(&cxlmd->dev), dev_name(&cxled[0]->cxld.dev),
+ __func__);
+ return ERR_PTR(-EBUSY);
+ }
+
+ rc = set_interleave_ways(cxlr, ways);
+ if (rc)
+ return ERR_PTR(rc);
+
+ rc = set_interleave_granularity(cxlr, cxld->interleave_granularity);
+ if (rc)
+ return ERR_PTR(rc);
+
+ scoped_guard(rwsem_read, &cxl_rwsem.dpa) {
+ for (i = 0; i < ways; i++) {
+ if (!cxled[i]->dpa_res)
+ return ERR_PTR(-EINVAL);
+ size += resource_size(cxled[i]->dpa_res);
+ }
+
+ rc = alloc_hpa(cxlr, size);
+ if (rc)
+ return ERR_PTR(rc);
+
+ for (i = 0; i < ways; i++) {
+ rc = cxl_region_attach(cxlr, cxled[i], 0);
+ if (rc)
+ return ERR_PTR(rc);
+ }
+ }
+
+ rc = cxl_region_decode_commit(cxlr);
+ if (rc)
+ return ERR_PTR(rc);
+
+ p->state = CXL_CONFIG_COMMIT;
+
+ return no_free_ptr(cxlr);
+}
+
+/**
+ * cxl_create_region - Establish a region given an endpoint decoder
+ * @cxlrd: root decoder to allocate HPA
+ * @cxled: endpoint decoders with reserved DPA capacity
+ * @ways: interleave ways required
+ *
+ * Returns a fully formed region in the commit state and attached to the
+ * cxl_region driver.
+ */
+struct cxl_region *cxl_create_region(struct cxl_root_decoder *cxlrd,
+ struct cxl_endpoint_decoder **cxled,
+ int ways)
+{
+ struct cxl_region *cxlr;
+
+ mutex_lock(&cxlrd->range_lock);
+ cxlr = __construct_new_region(cxlrd, cxled, ways);
+ mutex_unlock(&cxlrd->range_lock);
+ if (IS_ERR(cxlr))
+ return cxlr;
+
+ if (device_attach(&cxlr->dev) <= 0) {
+ dev_err(&cxlr->dev, "failed to create region\n");
+ drop_region(cxlr);
+ return ERR_PTR(-ENODEV);
+ }
+
+ return cxlr;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_create_region, "CXL");
+
static struct cxl_region *
cxl_find_region_by_range(struct cxl_root_decoder *cxlrd, struct range *hpa)
{
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index 7bd88e6b8598..e6176677ea94 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -290,4 +290,7 @@ struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
enum cxl_partition_mode mode,
resource_size_t alloc);
int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
+struct cxl_region *cxl_create_region(struct cxl_root_decoder *cxlrd,
+ struct cxl_endpoint_decoder **cxled,
+ int ways);
#endif /* __CXL_CXL_H__ */
--
2.34.1
* [PATCH v22 23/25] cxl: Avoid dax creation for accelerators
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron, Davidlohr Bueso, Ben Cheatham
From: Alejandro Lucero <alucerop@amd.com>
By definition, a Type2 cxl device uses the host-managed device memory for
specific functionality; therefore it should not be made available for
other uses such as device-dax.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Davidlohr Bueso <daves@stgolabs.net>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ben Cheatham <benjamin.cheatham@amd.com>
---
drivers/cxl/core/region.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 694bb1e543cc..4d37561c07b2 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -4116,6 +4116,13 @@ static int cxl_region_probe(struct device *dev)
if (rc)
return rc;
+ /*
+ * HDM-D[B] (device-memory) regions have accelerator specific usage.
+ * Skip device-dax registration.
+ */
+ if (cxlr->type == CXL_DECODER_DEVMEM)
+ return 0;
+
/*
* From this point on any path that changes the region's state away from
* CXL_CONFIG_COMMIT is also responsible for releasing the driver.
--
2.34.1
* [PATCH v22 24/25] sfc: create cxl region
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
Use the CXL API to create a region backed by the endpoint decoder tied to
the previously reserved DPA range.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/net/ethernet/sfc/efx_cxl.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 56e7104483a5..18b487d0cac3 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -136,6 +136,14 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
cxl_put_root_decoder(cxl->cxlrd);
return PTR_ERR(cxl->cxled);
}
+
+ cxl->efx_region = cxl_create_region(cxl->cxlrd, &cxl->cxled, 1);
+ if (IS_ERR(cxl->efx_region)) {
+ pci_err(pci_dev, "CXL accel create region failed");
+ cxl_put_root_decoder(cxl->cxlrd);
+ cxl_dpa_free(cxl->cxled);
+ return PTR_ERR(cxl->efx_region);
+ }
}
probe_data->cxl = cxl;
@@ -152,11 +160,14 @@ void efx_cxl_exit(struct efx_probe_data *probe_data)
iounmap(probe_data->cxl->ctpio_cxl);
cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
DETACH_INVALIDATE);
- unregister_region(probe_data->cxl->efx_region);
} else {
+ cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
+ DETACH_INVALIDATE);
cxl_dpa_free(probe_data->cxl->cxled);
cxl_put_root_decoder(probe_data->cxl->cxlrd);
}
+
+ unregister_region(probe_data->cxl->efx_region);
}
MODULE_IMPORT_NS("CXL");
--
2.34.1
* [PATCH v22 25/25] sfc: support pio mapping based on cxl
From: alejandro.lucero-palau @ 2025-12-05 11:52 UTC (permalink / raw)
To: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero, Jonathan Cameron
From: Alejandro Lucero <alucerop@amd.com>
A PIO buffer is a region of device memory to which the driver can write a
packet for TX, with the device handling the transmit doorbell without
requiring a DMA to fetch the packet data, which helps reduce latency in
certain exchanges. With the CXL.mem protocol this latency can be lowered
further.
On a device supporting CXL that has been successfully initialised, use
the cxl region to map the memory range and use this mapping for the PIO
buffers.
Disable those CXL-based PIO buffers if the CXL core invokes the callback
for endpoint removal.
Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
---
drivers/net/ethernet/sfc/ef10.c | 50 +++++++++++++++++++++++----
drivers/net/ethernet/sfc/efx_cxl.c | 39 +++++++++++++++------
drivers/net/ethernet/sfc/net_driver.h | 2 ++
drivers/net/ethernet/sfc/nic.h | 3 ++
4 files changed, 77 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
index fcec81f862ec..2bb6d3136c7c 100644
--- a/drivers/net/ethernet/sfc/ef10.c
+++ b/drivers/net/ethernet/sfc/ef10.c
@@ -24,6 +24,7 @@
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <net/udp_tunnel.h>
+#include "efx_cxl.h"
/* Hardware control for EF10 architecture including 'Huntington'. */
@@ -106,7 +107,7 @@ static int efx_ef10_get_vf_index(struct efx_nic *efx)
static int efx_ef10_init_datapath_caps(struct efx_nic *efx)
{
- MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CAPABILITIES_V4_OUT_LEN);
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CAPABILITIES_V7_OUT_LEN);
struct efx_ef10_nic_data *nic_data = efx->nic_data;
size_t outlen;
int rc;
@@ -177,6 +178,12 @@ static int efx_ef10_init_datapath_caps(struct efx_nic *efx)
efx->num_mac_stats);
}
+ if (outlen < MC_CMD_GET_CAPABILITIES_V7_OUT_LEN)
+ nic_data->datapath_caps3 = 0;
+ else
+ nic_data->datapath_caps3 = MCDI_DWORD(outbuf,
+ GET_CAPABILITIES_V7_OUT_FLAGS3);
+
return 0;
}
@@ -919,6 +926,9 @@ static void efx_ef10_forget_old_piobufs(struct efx_nic *efx)
static void efx_ef10_remove(struct efx_nic *efx)
{
struct efx_ef10_nic_data *nic_data = efx->nic_data;
+#ifdef CONFIG_SFC_CXL
+ struct efx_probe_data *probe_data;
+#endif
int rc;
#ifdef CONFIG_SFC_SRIOV
@@ -949,7 +959,12 @@ static void efx_ef10_remove(struct efx_nic *efx)
efx_mcdi_rx_free_indir_table(efx);
+#ifdef CONFIG_SFC_CXL
+ probe_data = container_of(efx, struct efx_probe_data, efx);
+ if (nic_data->wc_membase && !probe_data->cxl_pio_in_use)
+#else
if (nic_data->wc_membase)
+#endif
iounmap(nic_data->wc_membase);
rc = efx_mcdi_free_vis(efx);
@@ -1140,6 +1155,9 @@ static int efx_ef10_dimension_resources(struct efx_nic *efx)
unsigned int channel_vis, pio_write_vi_base, max_vis;
struct efx_ef10_nic_data *nic_data = efx->nic_data;
unsigned int uc_mem_map_size, wc_mem_map_size;
+#ifdef CONFIG_SFC_CXL
+ struct efx_probe_data *probe_data;
+#endif
void __iomem *membase;
int rc;
@@ -1263,8 +1281,25 @@ static int efx_ef10_dimension_resources(struct efx_nic *efx)
iounmap(efx->membase);
efx->membase = membase;
- /* Set up the WC mapping if needed */
- if (wc_mem_map_size) {
+ if (!wc_mem_map_size)
+ goto skip_pio;
+
+ /* Set up the WC mapping */
+
+#ifdef CONFIG_SFC_CXL
+ probe_data = container_of(efx, struct efx_probe_data, efx);
+ if ((nic_data->datapath_caps3 &
+ (1 << MC_CMD_GET_CAPABILITIES_V7_OUT_CXL_CONFIG_ENABLE_LBN)) &&
+ probe_data->cxl_pio_initialised) {
+ /* Using PIO through CXL mapping? */
+ nic_data->pio_write_base = probe_data->cxl->ctpio_cxl +
+ (pio_write_vi_base * efx->vi_stride +
+ ER_DZ_TX_PIOBUF - uc_mem_map_size);
+ probe_data->cxl_pio_in_use = true;
+ } else
+#endif
+ {
+ /* Using legacy PIO BAR mapping */
nic_data->wc_membase = ioremap_wc(efx->membase_phys +
uc_mem_map_size,
wc_mem_map_size);
@@ -1279,12 +1314,13 @@ static int efx_ef10_dimension_resources(struct efx_nic *efx)
nic_data->wc_membase +
(pio_write_vi_base * efx->vi_stride + ER_DZ_TX_PIOBUF -
uc_mem_map_size);
-
- rc = efx_ef10_link_piobufs(efx);
- if (rc)
- efx_ef10_free_piobufs(efx);
}
+ rc = efx_ef10_link_piobufs(efx);
+ if (rc)
+ efx_ef10_free_piobufs(efx);
+
+skip_pio:
netif_dbg(efx, probe, efx->net_dev,
"memory BAR at %pa (virtual %p+%x UC, %p+%x WC)\n",
&efx->membase_phys, efx->membase, uc_mem_map_size,
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 18b487d0cac3..024a92632c56 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -11,6 +11,7 @@
#include <cxl/pci.h>
#include "net_driver.h"
#include "efx_cxl.h"
+#include "efx.h"
#define EFX_CTPIO_BUFFER_SIZE SZ_256M
@@ -140,15 +141,35 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
cxl->efx_region = cxl_create_region(cxl->cxlrd, &cxl->cxled, 1);
if (IS_ERR(cxl->efx_region)) {
pci_err(pci_dev, "CXL accel create region failed");
- cxl_put_root_decoder(cxl->cxlrd);
- cxl_dpa_free(cxl->cxled);
- return PTR_ERR(cxl->efx_region);
+ rc = PTR_ERR(cxl->efx_region);
+ goto err_dpa;
+ }
+
+ rc = cxl_get_region_range(cxl->efx_region, &range);
+ if (rc) {
+ pci_err(pci_dev, "CXL getting regions params failed");
+ goto err_detach;
+ }
+
+ cxl->ctpio_cxl = ioremap(range.start, range.end - range.start + 1);
+ if (!cxl->ctpio_cxl) {
+ pci_err(pci_dev, "CXL ioremap region (%pra) failed", &range);
+ rc = -ENOMEM;
+ goto err_detach;
}
}
probe_data->cxl = cxl;
+ probe_data->cxl_pio_initialised = true;
return 0;
+
+err_detach:
+ cxl_decoder_detach(NULL, cxl->cxled, 0, DETACH_INVALIDATE);
+err_dpa:
+ cxl_put_root_decoder(cxl->cxlrd);
+ cxl_dpa_free(cxl->cxled);
+ return rc;
}
void efx_cxl_exit(struct efx_probe_data *probe_data)
@@ -156,13 +177,11 @@ void efx_cxl_exit(struct efx_probe_data *probe_data)
if (!probe_data->cxl)
return;
- if (probe_data->cxl->hdm_was_committed) {
- iounmap(probe_data->cxl->ctpio_cxl);
- cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
- DETACH_INVALIDATE);
- } else {
- cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
- DETACH_INVALIDATE);
+ iounmap(probe_data->cxl->ctpio_cxl);
+ cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0,
+ DETACH_INVALIDATE);
+
+ if (!probe_data->cxl->hdm_was_committed) {
cxl_dpa_free(probe_data->cxl->cxled);
cxl_put_root_decoder(probe_data->cxl->cxlrd);
}
diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
index 3964b2c56609..bea4eecdf842 100644
--- a/drivers/net/ethernet/sfc/net_driver.h
+++ b/drivers/net/ethernet/sfc/net_driver.h
@@ -1207,6 +1207,7 @@ struct efx_cxl;
* @efx: Efx NIC details
* @cxl: details of related cxl objects
* @cxl_pio_initialised: cxl initialization outcome.
+ * @cxl_pio_in_use: PIO using CXL mapping
*/
struct efx_probe_data {
struct pci_dev *pci_dev;
@@ -1214,6 +1215,7 @@ struct efx_probe_data {
#ifdef CONFIG_SFC_CXL
struct efx_cxl *cxl;
bool cxl_pio_initialised;
+ bool cxl_pio_in_use;
#endif
};
diff --git a/drivers/net/ethernet/sfc/nic.h b/drivers/net/ethernet/sfc/nic.h
index 9fa5c4c713ab..c87cc9214690 100644
--- a/drivers/net/ethernet/sfc/nic.h
+++ b/drivers/net/ethernet/sfc/nic.h
@@ -152,6 +152,8 @@ enum {
* %MC_CMD_GET_CAPABILITIES response)
* @datapath_caps2: Further Capabilities of datapath firmware (FLAGS2 field of
* %MC_CMD_GET_CAPABILITIES response)
+ * @datapath_caps3: Further Capabilities of datapath firmware (FLAGS3 field of
+ * %MC_CMD_GET_CAPABILITIES response)
* @rx_dpcpu_fw_id: Firmware ID of the RxDPCPU
* @tx_dpcpu_fw_id: Firmware ID of the TxDPCPU
* @must_probe_vswitching: Flag: vswitching has yet to be setup after MC reboot
@@ -186,6 +188,7 @@ struct efx_ef10_nic_data {
bool must_check_datapath_caps;
u32 datapath_caps;
u32 datapath_caps2;
+ u32 datapath_caps3;
unsigned int rx_dpcpu_fw_id;
unsigned int tx_dpcpu_fw_id;
bool must_probe_vswitching;
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH v22 00/25] Type2 device basic support
2025-12-05 11:52 [PATCH v22 00/25] Type2 device basic support alejandro.lucero-palau
` (24 preceding siblings ...)
2025-12-05 11:52 ` [PATCH v22 25/25] sfc: support pio mapping based on cxl alejandro.lucero-palau
@ 2025-12-07 7:12 ` dan.j.williams
25 siblings, 0 replies; 36+ messages in thread
From: dan.j.williams @ 2025-12-07 7:12 UTC (permalink / raw)
To: alejandro.lucero-palau, linux-cxl, netdev, dan.j.williams,
edward.cree, davem, kuba, pabeni, edumazet, dave.jiang
Cc: Alejandro Lucero
alejandro.lucero-palau@ wrote:
> From: Alejandro Lucero <alucerop@amd.com>
>
> The patchset should be applied on the described base commit then applying
> Terry's v13 about CXL error handling. The first 3 patches come from Dan's
> for-6.18/cxl-probe-order branch with minor modifications.
>
> This last version introduces support for Type2 decoder committed by
> firmware, implying CXL region automatically created during memdev
> initialization. New patches 11, 13 and 14 show this new core support
> with the sfc driver using it.
"Using" in what aspect? Does your test platform auto-create Type-2
regions? I know that is expected on the platforms PJ is using, but I
want to get a sense of what is the highest priority for Linux to address
first.
My sense, from the trouble PJ has been having, is that regions committed
by firmware is a higher priority than driver created regions. Yes, the
subsystem will support both in the end, but in terms of staging this set
incrementally, I think we probably want to review one mode at a time.
> This driver has also support for the
> option used until today, where HDM decoders not committed. This is true
> under certain scenarios and also after the driver has been unload. This
> brings up the question if such firmware committer decoder should be
> reset at driver unload, assuming no locked HDM what this patchset does
> not support.
This question is asked and answered in Smita's Soft Reserve Recovery
effort. See this discussion [1]:
[1]:
http://lore.kernel.org/6930dacd6510f_198110020@dwillia2-mobl4.notmuch
The quick summary is that regions and decoders alive before the expander
or accelerator driver loaded should stay alive after the driver is
unloaded. Only explicit userspace driven de-commit can convert firmware
established regions back to driver established regions.
> v22 changes:
>
> patch 1-3 from Dan's branch without any changes.
Note for others following along, I am deleting that RFC branch in favor
of formal patches here [2].
[2]: http://lore.kernel.org/20251204022136.2573521-1-dan.j.williams@intel.com
The expectation is to use that set to finish Smita's series [3]. Then
finalize the port and error handling reworks in Terry's (which will end
up removing mapped CXL component registers from 'struct cxl_dev_state'),
and then queue this series on top.
[3]: http://lore.kernel.org/20251120031925.87762-1-Smita.KoralahalliChannabasappa@amd.com
> patch 11: new
>
> patch 12: moved here from v21 patch 22
>
> patch 13-14: new
>
> patch 23: move check ahead of type3 only checks
>
> All patches with sfc changes adapted to support both options.
Going forward the log of old changes can be replaced with a link to the
N-1 posting.
Given the backlog of the Soft Reserve Recovery, CXL Protocol Error
Handling, and this Accelerator series I want to see some of these
precursor patches land on a topic branch before moving on to dealing
with the full accelerator set. I.e. I think we are at the point where
this can stop posting on moving baselines and focus on getting the
dependencies into a topic branch in cxl.git.
[..]
How much of the changelog below is still relevant? It still talks about
the original RFC. Might it need a refresh given current learnings and
the passage of time? For example, no need to talk about CXL.cache, first
things first, just get non-cxl_pci based CXL.mem going.
> v2 changes:
>
> I have removed the introduction about the concerns with BIOS/UEFI after the
> discussion leading to confirm the need of the functionality implemented, at
> least is some scenarios.
>
> There are two main changes from the RFC:
>
> 1) Following concerns about drivers using CXL core without restrictions, the CXL
> struct to work with is opaque to those drivers, therefore functions are
> implemented for modifying or reading those structs indirectly.
>
> 2) The driver for using the added functionality is not a test driver but a real
> one: the SFC ethernet network driver. It uses the CXL region mapped for PIO
> buffers instead of regions inside PCIe BARs.
>
> RFC:
>
> Current CXL kernel code is focused on supporting Type3 CXL devices, aka memory
> expanders. Type2 CXL devices, aka device accelerators, share some functionalities
> but require some special handling.
>
> First of all, Type2 are by definition specific to drivers doing something and not just
> a memory expander, so it is expected to work with the CXL specifics. This implies the CXL
> setup needs to be done by such a driver instead of by a generic CXL PCI driver
> as for memory expanders. Most of such setup needs to use current CXL core code
> and therefore needs to be accessible to those vendor drivers. This is accomplished
> exporting opaque CXL structs and adding and exporting functions for working with
> those structs indirectly.
>
> Some of the patches are based on a patchset sent by Dan Williams [1] which was just
> partially integrated, most related to making things ready for Type2 but none
> related to specific Type2 support. Those patches based on Dan´s work have Dan´s
> signing as co-developer, and a link to the original patch.
>
> A final note about CXL.cache is needed. This patchset does not cover it at all,
> although the emulated Type2 device advertises it. From the kernel point of view
> supporting CXL.cache will imply to be sure the CXL path supports what the Type2
> device needs. A device accelerator will likely be connected to a Root Switch,
> but other configurations can not be discarded. Therefore the kernel will need to
> check not just HPA, DPA, interleave and granularity, but also the available
> CXL.cache support and resources in each switch in the CXL path to the Type2
> device. I expect to contribute to this support in the following months, and
> it would be good to discuss about it when possible.
>
> [1] https://lore.kernel.org/linux-cxl/98b1f61a-e6c2-71d4-c368-50d958501b0c@intel.com/T/
>
[..]
* Re: [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-05 11:52 ` [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder alejandro.lucero-palau
@ 2025-12-15 13:50 ` Jonathan Cameron
2025-12-18 11:52 ` Alejandro Lucero Palau
0 siblings, 1 reply; 36+ messages in thread
From: Jonathan Cameron @ 2025-12-15 13:50 UTC (permalink / raw)
To: alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang, Alejandro Lucero
On Fri, 5 Dec 2025 11:52:34 +0000
<alejandro.lucero-palau@amd.com> wrote:
> From: Alejandro Lucero <alucerop@amd.com>
>
> A Type2 device configured by the BIOS can already have its HDM
> committed. Add a cxl_get_committed_decoder() function for cheking
checking if this is so after memdev creation.
> so after memdev creation. A CXL region should have been created
> during memdev initialization, therefore a Type2 driver can ask for
> such a region for working with the HPA. If the HDM is not committed,
> a Type2 driver will create the region after obtaining proper HPA
> and DPA space.
>
> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
Hi Alejandro,
I'm in two minds about this. In general there are devices that have
been configured by the BIOS because they are already in use. I'm not sure
the driver you are working with here is necessarily set up to survive
that sort of live setup without interrupting data flows.
If it is, fair enough to support this; otherwise my inclination is to
tear down whatever the BIOS did and start again (unless locked - in which
case go grumble at your BIOS folk). Reasoning being that we then only
have to handle the equivalent of the hotplug flow in both cases rather
than having to handle 2.
There are also the TSP / encrypted link cases where we need to be careful.
I have no idea if that applies here.
So I'm not against this in general, just not sure there is an argument
for this approach 'yet'. If there is, give more breadcrumbs to it in this
commit message.
A few comments inline.
> ---
> drivers/cxl/core/hdm.c | 44 ++++++++++++++++++++++++++++++++++++++++++
> include/cxl/cxl.h | 3 +++
> 2 files changed, 47 insertions(+)
>
> diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
> index d3a094ca01ad..fa99657440d1 100644
> --- a/drivers/cxl/core/hdm.c
> +++ b/drivers/cxl/core/hdm.c
> @@ -92,6 +92,7 @@ static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
> static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
> {
> struct cxl_hdm *cxlhdm;
> + struct cxl_port *port;
> void __iomem *hdm;
> u32 ctrl;
> int i;
> @@ -105,6 +106,10 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
> if (!hdm)
> return true;
>
> + port = cxlhdm->port;
> + if (is_cxl_endpoint(port))
> + return false;
Why this change? If it was valid before this patch as an early exit
then do it in a patch that justifies that not buried in here.
> +
> /*
> * If HDM decoders are present and the driver is in control of
> * Mem_Enable skip DVSEC based emulation
> @@ -686,6 +691,45 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size)
> return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
> }
>
> +static int find_committed_decoder(struct device *dev, const void *data)
Function name rather suggests it finds committed decoders on 'whatever'
but it only works for the endpoint decoders. Rename it to avoid this
confusion.
> +{
> + struct cxl_endpoint_decoder *cxled;
> + struct cxl_port *port;
> +
> + if (!is_endpoint_decoder(dev))
> + return 0;
> +
> + cxled = to_cxl_endpoint_decoder(dev);
> + port = cxled_to_port(cxled);
> +
> + return cxled->cxld.id == (port->hdm_end);
Drop the ()
> +}
> +
> +struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
> + struct cxl_region **cxlr)
> +{
> + struct cxl_port *endpoint = cxlmd->endpoint;
> + struct cxl_endpoint_decoder *cxled;
> + struct device *cxled_dev;
> +
> + if (!endpoint)
> + return NULL;
> +
> + guard(rwsem_read)(&cxl_rwsem.dpa);
> + cxled_dev = device_find_child(&endpoint->dev, NULL,
> + find_committed_decoder);
> +
> + if (!cxled_dev)
> + return NULL;
> +
> + cxled = to_cxl_endpoint_decoder(cxled_dev);
> + *cxlr = cxled->cxld.region;
> +
> + put_device(cxled_dev);
Probably use a __free() for this.
> + return cxled;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_get_committed_decoder, "CXL");
> +
> static void cxld_set_interleave(struct cxl_decoder *cxld, u32 *ctrl)
> {
> u16 eig;
> diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
> index 043fc31c764e..2ff3c19c684c 100644
> --- a/include/cxl/cxl.h
> +++ b/include/cxl/cxl.h
> @@ -250,4 +250,7 @@ int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity);
> struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
> struct cxl_dev_state *cxlds,
> const struct cxl_memdev_ops *ops);
> +struct cxl_region;
> +struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
> + struct cxl_region **cxlr);
> #endif /* __CXL_CXL_H__ */
* Re: [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators
2025-12-05 11:52 ` [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators alejandro.lucero-palau
@ 2025-12-15 13:53 ` Jonathan Cameron
2025-12-18 12:07 ` Alejandro Lucero Palau
0 siblings, 1 reply; 36+ messages in thread
From: Jonathan Cameron @ 2025-12-15 13:53 UTC (permalink / raw)
To: alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang, Alejandro Lucero
On Fri, 5 Dec 2025 11:52:36 +0000
<alejandro.lucero-palau@amd.com> wrote:
> From: Alejandro Lucero <alucerop@amd.com>
>
> Add unregister_region() and cxl_decoder_detach() to the accelerator
> driver API for a clean exit.
>
> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
In general seems fine but comment on type safety inline.
Jonathan
> ---
> drivers/cxl/core/core.h | 5 -----
> drivers/cxl/core/region.c | 4 +++-
> include/cxl/cxl.h | 9 +++++++++
> 3 files changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
> index 1c1726856139..9a6775845afe 100644
> --- a/drivers/cxl/core/core.h
> +++ b/drivers/cxl/core/core.h
> @@ -15,11 +15,6 @@ extern const struct device_type cxl_pmu_type;
>
> extern struct attribute_group cxl_base_attribute_group;
>
> -enum cxl_detach_mode {
> - DETACH_ONLY,
> - DETACH_INVALIDATE,
> -};
> -
> #ifdef CONFIG_CXL_REGION
> extern struct device_attribute dev_attr_create_pmem_region;
> extern struct device_attribute dev_attr_create_ram_region;
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 8166a402373e..104caa33b7bb 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -2199,6 +2199,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
> }
> return 0;
> }
> +EXPORT_SYMBOL_NS_GPL(cxl_decoder_detach, "CXL");
>
> static int __attach_target(struct cxl_region *cxlr,
> struct cxl_endpoint_decoder *cxled, int pos,
> @@ -2393,7 +2394,7 @@ static struct cxl_region *to_cxl_region(struct device *dev)
> return container_of(dev, struct cxl_region, dev);
> }
>
> -static void unregister_region(void *_cxlr)
> +void unregister_region(void *_cxlr)
> {
> struct cxl_region *cxlr = _cxlr;
> struct cxl_region_params *p = &cxlr->params;
> @@ -2412,6 +2413,7 @@ static void unregister_region(void *_cxlr)
> cxl_region_iomem_release(cxlr);
> put_device(&cxlr->dev);
> }
> +EXPORT_SYMBOL_NS_GPL(unregister_region, "CXL");
>
> static struct lock_class_key cxl_region_key;
>
> diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
> index f02dd817b40f..b8683c75dfde 100644
> --- a/include/cxl/cxl.h
> +++ b/include/cxl/cxl.h
> @@ -255,4 +255,13 @@ struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
> struct cxl_region **cxlr);
> struct range;
> int cxl_get_region_range(struct cxl_region *region, struct range *range);
> +enum cxl_detach_mode {
> + DETACH_ONLY,
> + DETACH_INVALIDATE,
> +};
> +
> +int cxl_decoder_detach(struct cxl_region *cxlr,
> + struct cxl_endpoint_decoder *cxled, int pos,
> + enum cxl_detach_mode mode);
> +void unregister_region(void *_cxlr);
I'd wrap this for an exposed interface that isn't going to be used
as a devm callback so we can make it type safe. Maybe making the
existing devm callback the one doing wrapping is cleanest route.
> #endif /* __CXL_CXL_H__ */
* Re: [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware
2025-12-05 11:52 ` [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware alejandro.lucero-palau
@ 2025-12-15 13:57 ` Jonathan Cameron
2025-12-18 12:14 ` Alejandro Lucero Palau
0 siblings, 1 reply; 36+ messages in thread
From: Jonathan Cameron @ 2025-12-15 13:57 UTC (permalink / raw)
To: alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang, Alejandro Lucero
On Fri, 5 Dec 2025 11:52:37 +0000
<alejandro.lucero-palau@amd.com> wrote:
> From: Alejandro Lucero <alucerop@amd.com>
>
> Check if device HDM is already committed during firmware/BIOS
> initialization.
>
> A CXL region should exist if so after memdev allocation/initialization.
> Get HPA from region and map it.
I'm confused. If this only occurs if there is a committed decoder,
why is the exit cleanup unconditional?
Looks like you add logic around this in patch 16. I think that should be
moved back here for ease of reading, even if for some reason this isn't broken.
Jonathan
>
> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> ---
> drivers/net/ethernet/sfc/efx_cxl.c | 27 +++++++++++++++++++++++++++
> 1 file changed, 27 insertions(+)
>
> diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
> index f6eda93e67e2..ad1f49e76179 100644
> --- a/drivers/net/ethernet/sfc/efx_cxl.c
> +++ b/drivers/net/ethernet/sfc/efx_cxl.c
> @@ -19,6 +19,7 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
> struct efx_nic *efx = &probe_data->efx;
> struct pci_dev *pci_dev = efx->pci_dev;
> struct efx_cxl *cxl;
> + struct range range;
> u16 dvsec;
> int rc;
>
> @@ -90,6 +91,26 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
> return PTR_ERR(cxl->cxlmd);
> }
>
> + cxl->cxled = cxl_get_committed_decoder(cxl->cxlmd, &cxl->efx_region);
> + if (cxl->cxled) {
> + if (!cxl->efx_region) {
> + pci_err(pci_dev, "CXL found committed decoder without a region");
> + return -ENODEV;
> + }
> + rc = cxl_get_region_range(cxl->efx_region, &range);
> + if (rc) {
> + pci_err(pci_dev,
> + "CXL getting regions params from a committed decoder failed");
> + return rc;
> + }
> +
> + cxl->ctpio_cxl = ioremap(range.start, range.end - range.start + 1);
> + if (!cxl->ctpio_cxl) {
> + pci_err(pci_dev, "CXL ioremap region (%pra) failed", &range);
> + return -ENOMEM;
> + }
> + }
> +
> probe_data->cxl = cxl;
>
> return 0;
> @@ -97,6 +118,12 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
>
> void efx_cxl_exit(struct efx_probe_data *probe_data)
> {
> + if (!probe_data->cxl)
> + return;
> +
> + iounmap(probe_data->cxl->ctpio_cxl);
> + cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0, DETACH_INVALIDATE);
> + unregister_region(probe_data->cxl->efx_region);
> }
>
> MODULE_IMPORT_NS("CXL");
* Re: [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-15 13:50 ` Jonathan Cameron
@ 2025-12-18 11:52 ` Alejandro Lucero Palau
2025-12-18 15:03 ` Jonathan Cameron
0 siblings, 1 reply; 36+ messages in thread
From: Alejandro Lucero Palau @ 2025-12-18 11:52 UTC (permalink / raw)
To: Jonathan Cameron, alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
Hi Jonathan,
On 12/15/25 13:50, Jonathan Cameron wrote:
> On Fri, 5 Dec 2025 11:52:34 +0000
> <alejandro.lucero-palau@amd.com> wrote:
>
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> A Type2 device configured by the BIOS can already have its HDM
>> committed. Add a cxl_get_committed_decoder() function for cheking
> checking if this is so after memdev creation.
>
>> so after memdev creation. A CXL region should have been created
>> during memdev initialization, therefore a Type2 driver can ask for
>> such a region for working with the HPA. If the HDM is not committed,
>> a Type2 driver will create the region after obtaining proper HPA
>> and DPA space.
>>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> Hi Alejandro,
>
> I'm in two minds about this. In general there are devices that have
> been configured by the BIOS because they are already in use. I'm not sure
> the driver you are working with here is necessarily set up to survive
> that sort of live setup without interrupting data flows.
This is not mainly about my driver/device but something PJ and Dan
agreed to support as part of this Type2 patchset.
You can see the v21 discussions, but basically PJ cannot have his driver
use the decoders already committed by the BIOS. So this change addresses
that situation, and my driver/device can also benefit from it, as the
currently available BIOS commits decoders regardless of UEFI flags like
EFI_RESERVED_TYPE.
Neither in my case nor in PJ's case will the device be in use before the
kernel is executing, although PJ should confirm this.
>
> If it is fair enough to support this, otherwise my inclination is tear
> down whatever the bios did and start again (unless locked - in which
> case go grumble at your BIOS folk). Reasoning being that we then only
> have to handle the equivalent of the hotplug flow in both cases rather
> than having to handle 2.
Well, the automatic region discovery used for Type3 can be reused for
Type2 in this scenario, so we do not need to tear down what the BIOS
did. However, the open question is what we should do when the driver
exits; the current functionality added with this patchset tears down
the device and CXL bridge decoders. Dan seems keen on not doing this
tear-down even if the HDMs are not locked.
What I can say is that I have tested this patchset on an AMD system
with the BIOS committing the HDM decoders for my device: the first
time the driver loads it gets the region from the automatic discovery
done while creating the memdev, and the driver tears down the HDMs when
exiting. Subsequent driver loads do the HDM configuration as this
patchset has done from day one. So all works as expected.
I'm inclined to leave the functionality as it is now; your suggestion,
or Dan's, of keeping the HDMs as configured by the BIOS when the driver
exits would require, IMO, a good reason behind it.
> There are also the TSP / encrypted link cases where we need to be careful.
> I have no idea if that applies here.
I would say let's wait until this support is completed, but as far as I
know, this is not a requirement for the current Type2 clients (sfc and
Jump Trading).
> So I'm not against this in general, just not sure there is an argument
> for this approach 'yet'. If there is, give more breadcrumbs to it in this
> commit message.
>
> A few comments inline.
>
>> ---
>> drivers/cxl/core/hdm.c | 44 ++++++++++++++++++++++++++++++++++++++++++
>> include/cxl/cxl.h | 3 +++
>> 2 files changed, 47 insertions(+)
>>
>> diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
>> index d3a094ca01ad..fa99657440d1 100644
>> --- a/drivers/cxl/core/hdm.c
>> +++ b/drivers/cxl/core/hdm.c
>> @@ -92,6 +92,7 @@ static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
>> static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
>> {
>> struct cxl_hdm *cxlhdm;
>> + struct cxl_port *port;
>> void __iomem *hdm;
>> u32 ctrl;
>> int i;
>> @@ -105,6 +106,10 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
>> if (!hdm)
>> return true;
>>
>> + port = cxlhdm->port;
>> + if (is_cxl_endpoint(port))
>> + return false;
> Why this change? If it was valid before this patch as an early exit
> then do it in a patch that justifies that not buried in here.
Good catch. I needed this hack for the functionality described, because
the second time the driver loads this check turns out positive due to
the memory state. I think I understand the reason behind this decoder
emulation but, being honest, I do not understand what such emulation
depends on. I would say once the device advertises HDM, it should never
depend on other things, which seems to be the case now. I will explain
more about the problem in the following days.
>> +
>> /*
>> * If HDM decoders are present and the driver is in control of
>> * Mem_Enable skip DVSEC based emulation
>> @@ -686,6 +691,45 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size)
>> return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
>> }
>>
>> +static int find_committed_decoder(struct device *dev, const void *data)
> Function name rather suggests it finds committed decoders on 'whatever'
> but it only works for the endpoint decoders. Rename it to avoid this
> confusion.
OK
>
>> +{
>> + struct cxl_endpoint_decoder *cxled;
>> + struct cxl_port *port;
>> +
>> + if (!is_endpoint_decoder(dev))
>> + return 0;
>> +
>> + cxled = to_cxl_endpoint_decoder(dev);
>> + port = cxled_to_port(cxled);
>> +
>> + return cxled->cxld.id == (port->hdm_end);
> Drop the ()
Sure.
>
>> +}
>> +
>> +struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
>> + struct cxl_region **cxlr)
>> +{
>> + struct cxl_port *endpoint = cxlmd->endpoint;
>> + struct cxl_endpoint_decoder *cxled;
>> + struct device *cxled_dev;
>> +
>> + if (!endpoint)
>> + return NULL;
>> +
>> + guard(rwsem_read)(&cxl_rwsem.dpa);
>> + cxled_dev = device_find_child(&endpoint->dev, NULL,
>> + find_committed_decoder);
>> +
>> + if (!cxled_dev)
>> + return NULL;
>> +
>> + cxled = to_cxl_endpoint_decoder(cxled_dev);
>> + *cxlr = cxled->cxld.region;
>> +
>> + put_device(cxled_dev);
> Probably use a __free() for this.
I'll think about it.
Thanks!
>> + return cxled;
>> +}
>> +EXPORT_SYMBOL_NS_GPL(cxl_get_committed_decoder, "CXL");
>> +
>> static void cxld_set_interleave(struct cxl_decoder *cxld, u32 *ctrl)
>> {
>> u16 eig;
>> diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
>> index 043fc31c764e..2ff3c19c684c 100644
>> --- a/include/cxl/cxl.h
>> +++ b/include/cxl/cxl.h
>> @@ -250,4 +250,7 @@ int cxl_set_capacity(struct cxl_dev_state *cxlds, u64 capacity);
>> struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
>> struct cxl_dev_state *cxlds,
>> const struct cxl_memdev_ops *ops);
>> +struct cxl_region;
>> +struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
>> + struct cxl_region **cxlr);
>> #endif /* __CXL_CXL_H__ */
* Re: [PATCH v22 13/25] cxl: Export functions for unwinding cxl by accelerators
2025-12-15 13:53 ` Jonathan Cameron
@ 2025-12-18 12:07 ` Alejandro Lucero Palau
0 siblings, 0 replies; 36+ messages in thread
From: Alejandro Lucero Palau @ 2025-12-18 12:07 UTC (permalink / raw)
To: Jonathan Cameron, alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
On 12/15/25 13:53, Jonathan Cameron wrote:
> On Fri, 5 Dec 2025 11:52:36 +0000
> <alejandro.lucero-palau@amd.com> wrote:
>
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> Add unregister_region() and cxl_decoder_detach() to the accelerator
>> driver API for a clean exit.
>>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> In general seems fine but comment on type safety inline.
>
> Jonathan
>
>> ---
>> drivers/cxl/core/core.h | 5 -----
>> drivers/cxl/core/region.c | 4 +++-
>> include/cxl/cxl.h | 9 +++++++++
>> 3 files changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
>> index 1c1726856139..9a6775845afe 100644
>> --- a/drivers/cxl/core/core.h
>> +++ b/drivers/cxl/core/core.h
>> @@ -15,11 +15,6 @@ extern const struct device_type cxl_pmu_type;
>>
>> extern struct attribute_group cxl_base_attribute_group;
>>
>> -enum cxl_detach_mode {
>> - DETACH_ONLY,
>> - DETACH_INVALIDATE,
>> -};
>> -
>> #ifdef CONFIG_CXL_REGION
>> extern struct device_attribute dev_attr_create_pmem_region;
>> extern struct device_attribute dev_attr_create_ram_region;
>> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
>> index 8166a402373e..104caa33b7bb 100644
>> --- a/drivers/cxl/core/region.c
>> +++ b/drivers/cxl/core/region.c
>> @@ -2199,6 +2199,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
>> }
>> return 0;
>> }
>> +EXPORT_SYMBOL_NS_GPL(cxl_decoder_detach, "CXL");
>>
>> static int __attach_target(struct cxl_region *cxlr,
>> struct cxl_endpoint_decoder *cxled, int pos,
>> @@ -2393,7 +2394,7 @@ static struct cxl_region *to_cxl_region(struct device *dev)
>> return container_of(dev, struct cxl_region, dev);
>> }
>>
>> -static void unregister_region(void *_cxlr)
>> +void unregister_region(void *_cxlr)
>> {
>> struct cxl_region *cxlr = _cxlr;
>> struct cxl_region_params *p = &cxlr->params;
>> @@ -2412,6 +2413,7 @@ static void unregister_region(void *_cxlr)
>> cxl_region_iomem_release(cxlr);
>> put_device(&cxlr->dev);
>> }
>> +EXPORT_SYMBOL_NS_GPL(unregister_region, "CXL");
>>
>> static struct lock_class_key cxl_region_key;
>>
>> diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
>> index f02dd817b40f..b8683c75dfde 100644
>> --- a/include/cxl/cxl.h
>> +++ b/include/cxl/cxl.h
>> @@ -255,4 +255,13 @@ struct cxl_endpoint_decoder *cxl_get_committed_decoder(struct cxl_memdev *cxlmd,
>> struct cxl_region **cxlr);
>> struct range;
>> int cxl_get_region_range(struct cxl_region *region, struct range *range);
>> +enum cxl_detach_mode {
>> + DETACH_ONLY,
>> + DETACH_INVALIDATE,
>> +};
>> +
>> +int cxl_decoder_detach(struct cxl_region *cxlr,
>> + struct cxl_endpoint_decoder *cxled, int pos,
>> + enum cxl_detach_mode mode);
>> +void unregister_region(void *_cxlr);
> I'd wrap this for an exposed interface that isn't going to be used
> as a devm callback so we can make it type safe. Maybe making the
> existing devm callback the one doing wrapping is cleanest route.
I think it is a good idea. I will think about how to do it following your advice.
Thanks
>
>> #endif /* __CXL_CXL_H__ */
* Re: [PATCH v22 14/25] sfc: obtain decoder and region if committed by firmware
2025-12-15 13:57 ` Jonathan Cameron
@ 2025-12-18 12:14 ` Alejandro Lucero Palau
0 siblings, 0 replies; 36+ messages in thread
From: Alejandro Lucero Palau @ 2025-12-18 12:14 UTC (permalink / raw)
To: Jonathan Cameron, alejandro.lucero-palau
Cc: linux-cxl, netdev, dan.j.williams, edward.cree, davem, kuba,
pabeni, edumazet, dave.jiang
On 12/15/25 13:57, Jonathan Cameron wrote:
> On Fri, 5 Dec 2025 11:52:37 +0000
> <alejandro.lucero-palau@amd.com> wrote:
>
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> Check if device HDM is already committed during firmware/BIOS
>> initialization.
>>
>> A CXL region should exist if so after memdev allocation/initialization.
>> Get HPA from region and map it.
> I'm confused. If this only occurs if there is a committed decoder,
> why is the exit cleanup unconditional?
Not sure I follow, but the cleanup is unconditional because it relies on
the sfc CXL initialization having been successful: if probe_data->cxl is
not set, there is nothing to do.
> Looks like you add logic around this in patch 16. I think that should be
> back here for ease of reading even if for some reason this isn't broken.
I do not think so. The unwinding is different if the HDMs were committed
by firmware than if the driver committed them (indirectly through the
type2 API). This patch only covers the first case. The later patch adds
the second case and a conditional cleanup path. And in any case the
cleanup depends on the probe_data->cxl state.
> Jonathan
>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
>> ---
>> drivers/net/ethernet/sfc/efx_cxl.c | 27 +++++++++++++++++++++++++++
>> 1 file changed, 27 insertions(+)
>>
>> diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
>> index f6eda93e67e2..ad1f49e76179 100644
>> --- a/drivers/net/ethernet/sfc/efx_cxl.c
>> +++ b/drivers/net/ethernet/sfc/efx_cxl.c
>> @@ -19,6 +19,7 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
>> struct efx_nic *efx = &probe_data->efx;
>> struct pci_dev *pci_dev = efx->pci_dev;
>> struct efx_cxl *cxl;
>> + struct range range;
>> u16 dvsec;
>> int rc;
>>
>> @@ -90,6 +91,26 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
>> return PTR_ERR(cxl->cxlmd);
>> }
>>
>> + cxl->cxled = cxl_get_committed_decoder(cxl->cxlmd, &cxl->efx_region);
>> + if (cxl->cxled) {
>> + if (!cxl->efx_region) {
>> + pci_err(pci_dev, "CXL found committed decoder without a region");
>> + return -ENODEV;
>> + }
>> + rc = cxl_get_region_range(cxl->efx_region, &range);
>> + if (rc) {
>> + pci_err(pci_dev,
>> + "CXL getting regions params from a committed decoder failed");
>> + return rc;
>> + }
>> +
>> + cxl->ctpio_cxl = ioremap(range.start, range.end - range.start + 1);
>> + if (!cxl->ctpio_cxl) {
>> + pci_err(pci_dev, "CXL ioremap region (%pra) failed", &range);
>> + return -ENOMEM;
>> + }
>> + }
>> +
>> probe_data->cxl = cxl;
>>
>> return 0;
>> @@ -97,6 +118,12 @@ int efx_cxl_init(struct efx_probe_data *probe_data)
>>
>> void efx_cxl_exit(struct efx_probe_data *probe_data)
>> {
>> + if (!probe_data->cxl)
>> + return;
>> +
>> + iounmap(probe_data->cxl->ctpio_cxl);
>> + cxl_decoder_detach(NULL, probe_data->cxl->cxled, 0, DETACH_INVALIDATE);
>> + unregister_region(probe_data->cxl->efx_region);
>> }
>>
>> MODULE_IMPORT_NS("CXL");
* Re: [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-18 11:52 ` Alejandro Lucero Palau
@ 2025-12-18 15:03 ` Jonathan Cameron
2025-12-18 15:27 ` Alejandro Lucero Palau
0 siblings, 1 reply; 36+ messages in thread
From: Jonathan Cameron @ 2025-12-18 15:03 UTC (permalink / raw)
To: Alejandro Lucero Palau
Cc: alejandro.lucero-palau, linux-cxl, netdev, dan.j.williams,
edward.cree, davem, kuba, pabeni, edumazet, dave.jiang
On Thu, 18 Dec 2025 11:52:58 +0000
Alejandro Lucero Palau <alucerop@amd.com> wrote:
> Hi Jonathan,
>
>
> On 12/15/25 13:50, Jonathan Cameron wrote:
> > On Fri, 5 Dec 2025 11:52:34 +0000
> > <alejandro.lucero-palau@amd.com> wrote:
> >
> >> From: Alejandro Lucero <alucerop@amd.com>
> >>
> >> A Type2 device configured by the BIOS can already have its HDM
> >> committed. Add a cxl_get_committed_decoder() function for cheking
> > checking if this is so after memdev creation.
> >
> >> so after memdev creation. A CXL region should have been created
> >> during memdev initialization, therefore a Type2 driver can ask for
> >> such a region for working with the HPA. If the HDM is not committed,
> >> a Type2 driver will create the region after obtaining proper HPA
> >> and DPA space.
> >>
> >> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> > Hi Alejandro,
> >
> > I'm in two minds about this. In general there are devices that have
> > been configured by the BIOS because they are already in use. I'm not sure
> > the driver you are working with here is necessarily set up to survive
> > that sort of live setup without interrupting data flows.
>
>
> This is not mainly about my driver/device but something PJ and Dan agree
> on support along this type2 patchset.
>
> You can see the v21 discussions, but basically PJ can not have his
> driver using the committed decoders from BIOS. So this change addresses
> that situation which my driver/device can also benefit from as current
> BIOS available is committing decoders regardless of UEFI flags like
> EFI_RESERVED_TYPE.
>
>
> Neither in my case nor in PJ case the device will be in use before
> kernel is executing, although PJ should confirm this.
There was some discussion in that thread of whether the decoders are locked.
If they aren't (and if the device is not in use, or some other hard constraint
isn't requiring it, in my view they definitely shouldn't be!) I'd at least
like to consider the option of a 'cleanup pass' to tear them down and give
the driver a clean slate to build on. Kind of similar to what we do in
making PCI re-enumerate in the kernel if we really don't like what the bios did.
Might not be possible if there is another higher numbered decoder in use
though :(
>
>
> >
> > If it is fair enough to support this, otherwise my inclination is tear
> > down whatever the bios did and start again (unless locked - in which
> > case go grumble at your BIOS folk). Reasoning being that we then only
> > have to handle the equivalent of the hotplug flow in both cases rather
> > than having to handle 2.
>
>
> Well, the automatic discovery region used for Type3 can be reused for
> Type2 in this scenario, so we do not need to tear down what the BIOS
> did. However, the argument is what we should do when the driver exits
> which the current functionality added with the patchset being tearing
> down the device and CXL bridge decoders. Dan seems to be keen on not
> doing this tear down even if the HDMs are not locked.
That's the question that makes this interesting. What is the reasoning for
leaving bios stuff around in type 2 cases? I'd definitely like 'a way'
to blow it away even if another option keeps it in place.
A bios configures for what it can see at boot not necessarily what shows
up later. Similar cases exist in PCI such as resizeable BARs.
The OS knows a lot more about the workload than the bios ever does and
may choose to reconfigure because of hotplugged devices.
>
>
> What I can say is I have tested this patchset with an AMD system and
> with the BIOS committing the HDM decoders for my device, and the first
> time the driver loads it gets the region from the automatic discovery
> while creating memdev, and the driver does tear down the HDMs when
> exiting. Subsequent driver loads do the HDM configuration as this
> patchset had been doing from day one. So all works as expected.
>
>
> I'm inclined to leave the functionality as it is now, and your
> suggestion or Dan's one for keeping the HDMs, as they were configured by
> the BIOS, when driver exits should require, IMO, a good reason behind it.
I'd definitely not make the assumption that BIOS' always do things for
good reasons. They do things because someone once thought there was
a good reason - or some other OS relied on them doing some part of setup.
>
>
> > There are also the TSP / encrypted link cases where we need to be careful.
> > I have no idea if that applies here.
>
>
> I would say, let's wait until this support is completed, but as far as I
> know, this is not a requirement for current Type2 clients (sfc and jump
> trading).
Dealing with this later works for me. As long as it fails cleanly all good.
Jonathan
* Re: [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-18 15:03 ` Jonathan Cameron
@ 2025-12-18 15:27 ` Alejandro Lucero Palau
2025-12-19 11:02 ` Jonathan Cameron
0 siblings, 1 reply; 36+ messages in thread
From: Alejandro Lucero Palau @ 2025-12-18 15:27 UTC (permalink / raw)
To: Jonathan Cameron
Cc: alejandro.lucero-palau, linux-cxl, netdev, dan.j.williams,
edward.cree, davem, kuba, pabeni, edumazet, dave.jiang
On 12/18/25 15:03, Jonathan Cameron wrote:
> On Thu, 18 Dec 2025 11:52:58 +0000
> Alejandro Lucero Palau <alucerop@amd.com> wrote:
>
>> Hi Jonathan,
>>
>>
>> On 12/15/25 13:50, Jonathan Cameron wrote:
>>> On Fri, 5 Dec 2025 11:52:34 +0000
>>> <alejandro.lucero-palau@amd.com> wrote:
>>>
>>>> From: Alejandro Lucero <alucerop@amd.com>
>>>>
>>>> A Type2 device configured by the BIOS can already have its HDM
>>>> committed. Add a cxl_get_committed_decoder() function for cheking
>>> checking if this is so after memdev creation.
>>>
>>>> so after memdev creation. A CXL region should have been created
>>>> during memdev initialization, therefore a Type2 driver can ask for
>>>> such a region for working with the HPA. If the HDM is not committed,
>>>> a Type2 driver will create the region after obtaining proper HPA
>>>> and DPA space.
>>>>
>>>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
>>> Hi Alejandro,
>>>
>>> I'm in two minds about this. In general there are devices that have
>>> been configured by the BIOS because they are already in use. I'm not sure
>>> the driver you are working with here is necessarily set up to survive
>>> that sort of live setup without interrupting data flows.
>>
>> This is not mainly about my driver/device but something PJ and Dan agree
>> on support along this type2 patchset.
>>
>> You can see the v21 discussions, but basically PJ can not have his
>> driver using the committed decoders from BIOS. So this change addresses
>> that situation which my driver/device can also benefit from as current
>> BIOS available is committing decoders regardless of UEFI flags like
>> EFI_RESERVED_TYPE.
>>
>>
>> Neither in my case nor in PJ case the device will be in use before
>> kernel is executing, although PJ should confirm this.
> There was some discussion in that thread of whether the decoders are locked.
> If they aren't (and if the device is not in use, or some other hard constraint
> isn't requiring it, in my view they definitely shouldn't be!) I'd at least
> like to consider the option of a 'cleanup pass' to tear them down and give
> the driver a clean slate to build on. Kind of similar to what we do in
> making PCI re-enumerate in the kernel if we really don't like what the bios did.
I do not mind supporting that option, but could we do it as a follow-up?
> Might not be possible if there is another higher numbered decoder in use
> though :(
>
>>
>>> If it is fair enough to support this, otherwise my inclination is tear
>>> down whatever the bios did and start again (unless locked - in which
>>> case go grumble at your BIOS folk). Reasoning being that we then only
>>> have to handle the equivalent of the hotplug flow in both cases rather
>>> than having to handle 2.
>>
>> Well, the automatic discovery region used for Type3 can be reused for
>> Type2 in this scenario, so we do not need to tear down what the BIOS
>> did. However, the argument is what we should do when the driver exits
>> which the current functionality added with the patchset being tearing
>> down the device and CXL bridge decoders. Dan seems to be keen on not
>> doing this tear down even if the HDMs are not locked.
> That's the question that makes this interesting. What is reasoning for
> leaving bios stuff around in type 2 cases? I'd definitely like 'a way'
> to blow it away even if another option keeps it in place.
> A bios configures for what it can see at boot not necessarily what shows
> up later. Similar cases exist in PCI such as resizeable BARs.
> The OS knows a lot more about the workload than the bios ever does and
> may choose to reconfigure because of hotplugged devices.
The main reason seems to be an assumption from BIOSes that only
advertise CFMWS when there exists a CXL.mem enabled ... with the CXL Host
Bridge CFMWS being equal to the total CXL.mem advertised by those
devices discovered. This is something I have been talking about on
Discord and internally because I think it creates problems with
hotplugging and future FAM support, or maybe current DCD.
One case, theoretical but I think quite possible, is a device requiring
CXL.mem but not using the full capacity in all modes, likely because
that device memory is used for other purposes and kept hidden from the
host. So the one knowing what to do should be the driver, depending on
the device and likely on some other data, maybe even configurable from
user space.
So yes, I agree with you that the kernel should be able to do things far
better than the BIOS ...
>>
>> What I can say is I have tested this patchset with an AMD system and
>> with the BIOS committing the HDM decoders for my device, and the first
>> time the driver loads it gets the region from the automatic discovery
>> while creating memdev, and the driver does tear down the HDMs when
>> exiting. Subsequent driver loads do the HDM configuration as this
>> patchset had been doing from day one. So all works as expected.
>>
>>
>> I'm inclined to leave the functionality as it is now, and your
>> suggestion or Dan's one for keeping the HDMs, as they were configured by
>> the BIOS, when driver exits should require, IMO, a good reason behind it.
> I'd definitely not make the assumption that BIOS' always do things for
> good reasons. They do things because someone once thought there was
> a good reason - or some other OS relied on them doing some part of setup.
>
100% agreement again.
>>
>>> There are also the TSP / encrypted link cases where we need to be careful.
>>> I have no idea if that applies here.
>>
>> I would say, let's wait until this support is completed, but as far as I
>> know, this is not a requirement for current Type2 clients (sfc and jump
>> trading).
> Dealing with this later works for me. As long as it fails cleanly all good.
Great.
Thanks!
> Jonathan
>
* Re: [PATCH v22 11/25] cxl/hdm: Add support for getting region from committed decoder
2025-12-18 15:27 ` Alejandro Lucero Palau
@ 2025-12-19 11:02 ` Jonathan Cameron
0 siblings, 0 replies; 36+ messages in thread
From: Jonathan Cameron @ 2025-12-19 11:02 UTC (permalink / raw)
To: Alejandro Lucero Palau
Cc: alejandro.lucero-palau, linux-cxl, netdev, dan.j.williams,
edward.cree, davem, kuba, pabeni, edumazet, dave.jiang
On Thu, 18 Dec 2025 15:27:29 +0000
Alejandro Lucero Palau <alucerop@amd.com> wrote:
> On 12/18/25 15:03, Jonathan Cameron wrote:
> > On Thu, 18 Dec 2025 11:52:58 +0000
> > Alejandro Lucero Palau <alucerop@amd.com> wrote:
> >
> >> Hi Jonathan,
> >>
> >>
> >> On 12/15/25 13:50, Jonathan Cameron wrote:
> >>> On Fri, 5 Dec 2025 11:52:34 +0000
> >>> <alejandro.lucero-palau@amd.com> wrote:
> >>>
> >>>> From: Alejandro Lucero <alucerop@amd.com>
> >>>>
> >>>> A Type2 device configured by the BIOS can already have its HDM
> >>>> committed. Add a cxl_get_committed_decoder() function for cheking
> >>> checking if this is so after memdev creation.
> >>>
> >>>> so after memdev creation. A CXL region should have been created
> >>>> during memdev initialization, therefore a Type2 driver can ask for
> >>>> such a region for working with the HPA. If the HDM is not committed,
> >>>> a Type2 driver will create the region after obtaining proper HPA
> >>>> and DPA space.
> >>>>
> >>>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> >>> Hi Alejandro,
> >>>
> >>> I'm in two minds about this. In general there are devices that have
> >>> been configured by the BIOS because they are already in use. I'm not sure
> >>> the driver you are working with here is necessarily set up to survive
> >>> that sort of live setup without interrupting data flows.
> >>
> >> This is not mainly about my driver/device but something PJ and Dan agree
> >> on support along this type2 patchset.
> >>
> >> You can see the v21 discussions, but basically PJ can not have his
> >> driver using the committed decoders from BIOS. So this change addresses
> >> that situation which my driver/device can also benefit from as current
> >> BIOS available is committing decoders regardless of UEFI flags like
> >> EFI_RESERVED_TYPE.
> >>
> >>
> >> Neither in my case nor in PJ case the device will be in use before
> >> kernel is executing, although PJ should confirm this.
> > There was some discussion in that thread of whether the decoders are locked.
> > If they aren't (and if the device is not in use, or some other hard constraint
> > isn't requiring it, in my view they definitely shouldn't be!) I'd at least
> > like to consider the option of a 'cleanup pass' to tear them down and give
> > the driver a clean slate to build on. Kind of similar to what we do in
> > making PCI re-enumerate in the kernel if we really don't like what the bios did.
>
>
> I do not mind to support that option, but could we do it as a follow-up?
Sure. I'm wondering a bit whether it's a global flag similar to the
one for full PCI bus re-enumeration, or more like the logic that repairs
corners of PCI enumeration when the kernel doesn't like what it finds.
>
>
> > Might not be possible if there is another higher numbered decoder in use
> > though :(
> >
> >>
> >>> If it is fair enough to support this, otherwise my inclination is tear
> >>> down whatever the bios did and start again (unless locked - in which
> >>> case go grumble at your BIOS folk). Reasoning being that we then only
> >>> have to handle the equivalent of the hotplug flow in both cases rather
> >>> than having to handle 2.
> >>
> >> Well, the automatic discovery region used for Type3 can be reused for
> >> Type2 in this scenario, so we do not need to tear down what the BIOS
> >> did. However, the argument is what we should do when the driver exits
> >> which the current functionality added with the patchset being tearing
> >> down the device and CXL bridge decoders. Dan seems to be keen on not
> >> doing this tear down even if the HDMs are not locked.
> > That's the question that makes this interesting. What is reasoning for
> > leaving bios stuff around in type 2 cases? I'd definitely like 'a way'
> > to blow it away even if another option keeps it in place.
> > A bios configures for what it can see at boot not necessarily what shows
> > up later. Similar cases exist in PCI such as resizeable BARs.
> > The OS knows a lot more about the workload than the bios ever does and
> > may choose to reconfigure because of hotplugged devices.
>
>
> The main reason seems to be an assumption from BIOSes that only
> advertise CFMWS is there exists a CXL.mem enabled ... with the CXL Host
Just to confirm, do you mean CXL.mem is enabled for the device? I.e.
memory is in use at boot? If that config bit is set then we have
to leave it alone as we have very little idea what traffic is in flight.
Or just that there is some memory advertised by the device.
> Bridge CFMWS being equal to the total CXL.mem advertises by those
> devices discovered. This is something I have been talking about in
> discord and internally because I think that creates problems with
> hotplugging and future FAM support, or maybe current DCD.
For DCD it shouldn't matter as long as there is space for all the DC
regions. Whether that is backed by the device shouldn't be something
the bios cares about. For the others, I fully agree it's a wrong bios
writer assumption that we should try to get them to stop making!
>
>
> One case, theoretical but I think quite possible, is a device requiring
> the CXL.mem not using the full capacity in all modes, likely because
> that device memory used for other purposes and kept hidden from the
> host. So the one knowing what to do should be the driver and dependent
> on the device and likely some other data maybe even configurable from
> user space.
Yes. This is kind of similar to some of the things that happen with
resizeable BARs in PCI.
>
>
> So yes, I agree with you that the kernel should be able to do things far
> better than the BIOS ...
I'm sure everyone reading this email agrees: policy in the OS where
possible, not the BIOS :)
Jonathan