[PATCH 01/23] PCI: Add SNIA SDXI accelerator sub-class
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Add sub-class code for SNIA Smart Data Accelerator Interface (SDXI).
See PCI Code and ID Assignment spec r1.14, sec 1.19.
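For readers unfamiliar with PCI class codes, the new constant packs three bytes into a 24-bit value. A standalone illustrative sketch of the decode (the helper names are hypothetical, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: a PCI class code is laid out as base class,
 * sub-class, and programming interface bytes, so 0x120100 decodes as
 * base class 0x12 (processing accelerators), sub-class 0x01 (SDXI),
 * prog-if 0x00.
 */
#define PCI_BASE_CLASS_ACCELERATOR	0x12
#define PCI_CLASS_ACCELERATOR_SDXI	0x120100

static uint8_t pci_base_class(uint32_t class_code)
{
	return (class_code >> 16) & 0xff;	/* bits 23:16 */
}

static uint8_t pci_sub_class(uint32_t class_code)
{
	return (class_code >> 8) & 0xff;	/* bits 15:8 */
}

static uint8_t pci_prog_if(uint32_t class_code)
{
	return class_code & 0xff;		/* bits 7:0 */
}
```

This is also why the driver later matches with a 0xffffff class mask: all three bytes must match exactly.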
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
---
include/linux/pci_ids.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 406abf629be2..e3c68f90f3ad 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -154,6 +154,7 @@
#define PCI_BASE_CLASS_ACCELERATOR 0x12
#define PCI_CLASS_ACCELERATOR_PROCESSING 0x1200
+#define PCI_CLASS_ACCELERATOR_SDXI 0x120100
#define PCI_CLASS_OTHERS 0xff
--
2.53.0
[PATCH 02/23] MAINTAINERS: Add entry for SDXI driver
From: Nathan Lynch <nathan.lynch@amd.com>
Add an entry for the SDXI driver to MAINTAINERS. Wei and I will
maintain the driver.
The SDXI specification and other materials may be found at:
https://www.snia.org/sdxi
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index b38452804a2d..5aa7b6cd93f9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23831,6 +23831,13 @@ L: sdricohcs-devel@lists.sourceforge.net (subscribers-only)
S: Maintained
F: drivers/mmc/host/sdricoh_cs.c
+SDXI (Smart Data Accelerator Interface) DRIVER
+M: Nathan Lynch <nathan.lynch@amd.com>
+M: Wei Huang <wei.huang2@amd.com>
+L: dmaengine@vger.kernel.org
+S: Supported
+F: drivers/dma/sdxi/
+
SECO BOARDS CEC DRIVER
M: Ettore Chimenti <ek5.chimenti@gmail.com>
S: Maintained
--
2.53.0
[PATCH 03/23] dmaengine: sdxi: Add PCI initialization
From: Nathan Lynch <nathan.lynch@amd.com>
Add enough code to bind an SDXI device via the class code and map its
control registers and doorbell region. All device resources are
managed with devres at this point, so there is no explicit teardown
path.
While the SDXI specification includes a PCIe binding, the standard is
intended to be independent of the underlying I/O interconnect. So the
driver confines PCI-specific code to pci.c, and the rest (such as
device.c, introduced here) is bus-agnostic. Hence there is some
indirection: during probe, the bus code registers any matched device
with the generic SDXI core, supplying the device and a sdxi_bus_ops
vector. After the core associates a new sdxi_dev with the device,
bus-specific initialization proceeds via the sdxi_bus_ops->init()
callback.
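The register-then-init indirection described above can be sketched independently of kernel APIs: a registration function stores the ops vector and immediately invokes its init() callback. The names follow the patch, but the standalone stub device and flag are hypothetical (the real code allocates with devm_kzalloc() and stores the pointer as driver data):

```c
#include <stddef.h>

struct sdxi_dev;

struct sdxi_bus_ops {
	int (*init)(struct sdxi_dev *sdxi);
};

struct sdxi_dev {
	const struct sdxi_bus_ops *bus_ops;
	int initialized;	/* hypothetical flag, for the sketch only */
};

/* Minimal sketch of sdxi_register(): wire up the ops vector, then let
 * the bus-specific init() run (BAR mapping etc. in the real driver). */
static int sdxi_register_sketch(struct sdxi_dev *sdxi,
				const struct sdxi_bus_ops *ops)
{
	if (!ops || !ops->init)
		return -1;

	sdxi->bus_ops = ops;
	return sdxi->bus_ops->init(sdxi);
}

/* Stand-in for sdxi_pci_init(). */
static int fake_bus_init(struct sdxi_dev *sdxi)
{
	sdxi->initialized = 1;
	return 0;
}
```

The benefit of this shape is that a future non-PCI bus binding only needs to supply its own ops vector; device.c never sees bus-specific types.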
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/Kconfig | 2 ++
drivers/dma/Makefile | 1 +
drivers/dma/sdxi/Kconfig | 8 +++++
drivers/dma/sdxi/Makefile | 6 ++++
drivers/dma/sdxi/device.c | 26 ++++++++++++++
drivers/dma/sdxi/pci.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/sdxi.h | 45 ++++++++++++++++++++++++
7 files changed, 175 insertions(+)
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index e98e3e8c5036..5a19df2da7f2 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -783,6 +783,8 @@ source "drivers/dma/fsl-dpaa2-qdma/Kconfig"
source "drivers/dma/lgm/Kconfig"
+source "drivers/dma/sdxi/Kconfig"
+
source "drivers/dma/stm32/Kconfig"
# clients
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index df566c4958b6..3055ed87bc52 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
obj-$(CONFIG_ST_FDMA) += st_fdma.o
obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
obj-$(CONFIG_INTEL_LDMA) += lgm/
+obj-$(CONFIG_SDXI) += sdxi/
obj-y += amd/
obj-y += mediatek/
diff --git a/drivers/dma/sdxi/Kconfig b/drivers/dma/sdxi/Kconfig
new file mode 100644
index 000000000000..a568284cd583
--- /dev/null
+++ b/drivers/dma/sdxi/Kconfig
@@ -0,0 +1,8 @@
+config SDXI
+ tristate "SDXI support"
+ select DMA_ENGINE
+ help
+ Enable support for Smart Data Accelerator Interface (SDXI)
+ Platform Data Mover devices. SDXI is a vendor-neutral
+ standard for a memory-to-memory data mover and acceleration
+ interface.
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
new file mode 100644
index 000000000000..f84b87d53e27
--- /dev/null
+++ b/drivers/dma/sdxi/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_SDXI) += sdxi.o
+
+sdxi-objs += device.o
+
+sdxi-$(CONFIG_PCI_MSI) += pci.o
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
new file mode 100644
index 000000000000..b718ce04afa0
--- /dev/null
+++ b/drivers/dma/sdxi/device.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI hardware device driver
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <linux/device.h>
+#include <linux/slab.h>
+
+#include "sdxi.h"
+
+int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
+{
+ struct sdxi_dev *sdxi;
+
+ sdxi = devm_kzalloc(dev, sizeof(*sdxi), GFP_KERNEL);
+ if (!sdxi)
+ return -ENOMEM;
+
+ sdxi->dev = dev;
+ sdxi->bus_ops = ops;
+ dev_set_drvdata(dev, sdxi);
+
+ return sdxi->bus_ops->init(sdxi);
+}
diff --git a/drivers/dma/sdxi/pci.c b/drivers/dma/sdxi/pci.c
new file mode 100644
index 000000000000..f3f8485e50e3
--- /dev/null
+++ b/drivers/dma/sdxi/pci.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI PCI device code
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <linux/dev_printk.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/iomap.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "sdxi.h"
+
+enum sdxi_mmio_bars {
+ SDXI_PCI_BAR_CTL_REGS = 0,
+ SDXI_PCI_BAR_DOORBELL = 2,
+};
+
+static struct pci_dev *sdxi_to_pci_dev(const struct sdxi_dev *sdxi)
+{
+ return to_pci_dev(sdxi_to_dev(sdxi));
+}
+
+static int sdxi_pci_init(struct sdxi_dev *sdxi)
+{
+ struct pci_dev *pdev = sdxi_to_pci_dev(sdxi);
+ struct device *dev = &pdev->dev;
+ int ret;
+
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return dev_err_probe(dev, ret, "failed to enable device\n");
+
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (ret)
+ return dev_err_probe(dev, ret, "failed to set DMA masks\n");
+
+ sdxi->ctrl_regs = pcim_iomap_region(pdev, SDXI_PCI_BAR_CTL_REGS,
+ KBUILD_MODNAME);
+ if (IS_ERR(sdxi->ctrl_regs)) {
+ return dev_err_probe(dev, PTR_ERR(sdxi->ctrl_regs),
+ "failed to map control registers\n");
+ }
+
+ sdxi->dbs = pcim_iomap_region(pdev, SDXI_PCI_BAR_DOORBELL,
+ KBUILD_MODNAME);
+ if (IS_ERR(sdxi->dbs)) {
+ return dev_err_probe(dev, PTR_ERR(sdxi->dbs),
+ "failed to map doorbell region\n");
+ }
+
+ pci_set_master(pdev);
+ return 0;
+}
+
+static const struct sdxi_bus_ops sdxi_pci_ops = {
+ .init = sdxi_pci_init,
+};
+
+static int sdxi_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ return sdxi_register(&pdev->dev, &sdxi_pci_ops);
+}
+
+static const struct pci_device_id sdxi_id_table[] = {
+ { PCI_DEVICE_CLASS(PCI_CLASS_ACCELERATOR_SDXI, 0xffffff) },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, sdxi_id_table);
+
+static struct pci_driver sdxi_driver = {
+ .name = "sdxi",
+ .id_table = sdxi_id_table,
+ .probe = sdxi_pci_probe,
+ .sriov_configure = pci_sriov_configure_simple,
+};
+
+MODULE_AUTHOR("Wei Huang");
+MODULE_AUTHOR("Nathan Lynch");
+MODULE_DESCRIPTION(SDXI_DRV_DESC);
+MODULE_LICENSE("GPL");
+module_pci_driver(sdxi_driver);
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
new file mode 100644
index 000000000000..9430f3b8d0b3
--- /dev/null
+++ b/drivers/dma/sdxi/sdxi.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SDXI device driver header
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#ifndef DMA_SDXI_H
+#define DMA_SDXI_H
+
+#include <linux/compiler_types.h>
+#include <linux/types.h>
+
+#define SDXI_DRV_DESC "SDXI driver"
+
+struct sdxi_dev;
+
+/**
+ * struct sdxi_bus_ops - Bus-specific methods for SDXI devices.
+ */
+struct sdxi_bus_ops {
+ /**
+ * @init: Map control registers and doorbell region, allocate
+ * IRQ ranges. Invoked before bus-agnostic SDXI
+ * function initialization.
+ */
+ int (*init)(struct sdxi_dev *sdxi);
+};
+
+struct sdxi_dev {
+ struct device *dev;
+ void __iomem *ctrl_regs; /* virt addr of ctrl registers */
+ void __iomem *dbs; /* virt addr of doorbells */
+
+ const struct sdxi_bus_ops *bus_ops;
+};
+
+static inline struct device *sdxi_to_dev(const struct sdxi_dev *sdxi)
+{
+ return sdxi->dev;
+}
+
+int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops);
+
+#endif /* DMA_SDXI_H */
--
2.53.0
[PATCH 04/23] dmaengine: sdxi: Feature discovery and initial configuration
From: Nathan Lynch <nathan.lynch@amd.com>
After bus-specific initialization, force the SDXI function into the
stopped state. This is the expected state after reset, but kexec or
driver bugs can leave a function in other states, from which the
initialization code must be able to recover.
From the capability registers, discover the doorbell region stride,
the maximum supported context ID, the implemented operation groups,
and limits on buffer and control-structure sizes. The driver has the
option of writing more conservative limits to the ctl2 register, but
for now it uses the limits supplied by the implementation.
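As one concrete decode from the above: the doorbell stride is 4 KiB scaled by a power of two taken from CAP0's DB_STRIDE field (bits 22:20 in the patch's mmio.h). An illustrative standalone computation mirroring the driver's arithmetic:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K	4096u

/* Mirrors the driver's calculation: db_stride = 4 KiB << cap0.DB_STRIDE,
 * where DB_STRIDE occupies bits 22:20 of MMIO_CAP0. */
static uint32_t db_stride_bytes(uint64_t cap0)
{
	uint32_t stride_enc = (cap0 >> 20) & 0x7;

	return SZ_4K << stride_enc;
}
```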
Introduce device register definitions and associated masks via mmio.h.
Add convenience wrappers which are first used here:
- sdxi_dbg()
- sdxi_info()
- sdxi_err()
- sdxi_read64()
- sdxi_write64()
Report the version of the standard to which the device conforms, e.g.
sdxi 0000:00:03.0: SDXI 1.0 device found
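The version numbers in that message come from the MAJOR/MINOR fields of MMIO_VERSION. A standalone sketch of the decode, with the kernel's GENMASK_ULL()/FIELD_GET() macros reimplemented here purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the kernel's GENMASK_ULL() and FIELD_GET() macros. */
#define GENMASK64(h, l)		((~0ULL >> (63 - (h))) & (~0ULL << (l)))
#define FIELD_GET64(mask, val)	(((val) & (mask)) >> __builtin_ctzll(mask))

/* Field layout from the patch's mmio.h (SDXI 1.0 Table 9-8). */
#define SDXI_MMIO_VERSION_MINOR	GENMASK64(7, 0)
#define SDXI_MMIO_VERSION_MAJOR	GENMASK64(23, 16)
```

A raw register value of 0x10000 thus reads back as major 1, minor 0, i.e. the "SDXI 1.0" reported above.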
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/device.c | 149 +++++++++++++++++++++++++++++++++++++++++++++-
drivers/dma/sdxi/mmio.h | 51 ++++++++++++++++
drivers/dma/sdxi/sdxi.h | 23 +++++++
3 files changed, 222 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index b718ce04afa0..1083fdddd72f 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -5,14 +5,157 @@
* Copyright Advanced Micro Devices, Inc.
*/
+#include <linux/bitfield.h>
+#include <linux/delay.h>
#include <linux/device.h>
#include <linux/slab.h>
+#include "mmio.h"
#include "sdxi.h"
+enum sdxi_fn_gsv {
+ SDXI_GSV_STOP,
+ SDXI_GSV_INIT,
+ SDXI_GSV_ACTIVE,
+ SDXI_GSV_STOPG_SF,
+ SDXI_GSV_STOPG_HD,
+ SDXI_GSV_ERROR,
+};
+
+static const char *const gsv_strings[] = {
+ [SDXI_GSV_STOP] = "stopped",
+ [SDXI_GSV_INIT] = "initializing",
+ [SDXI_GSV_ACTIVE] = "active",
+ [SDXI_GSV_STOPG_SF] = "soft stopping",
+ [SDXI_GSV_STOPG_HD] = "hard stopping",
+ [SDXI_GSV_ERROR] = "error",
+};
+
+static const char *gsv_str(enum sdxi_fn_gsv gsv)
+{
+ if ((size_t)gsv < ARRAY_SIZE(gsv_strings))
+ return gsv_strings[(size_t)gsv];
+
+ WARN_ONCE(1, "unexpected gsv %u\n", gsv);
+
+ return "unknown";
+}
+
+enum sdxi_fn_gsr {
+ SDXI_GSRV_RESET,
+ SDXI_GSRV_STOP_SF,
+ SDXI_GSRV_STOP_HD,
+ SDXI_GSRV_ACTIVE,
+};
+
+static enum sdxi_fn_gsv sdxi_dev_gsv(const struct sdxi_dev *sdxi)
+{
+ return (enum sdxi_fn_gsv)FIELD_GET(SDXI_MMIO_STS0_FN_GSV,
+ sdxi_read64(sdxi, SDXI_MMIO_STS0));
+}
+
+static void sdxi_write_fn_gsr(struct sdxi_dev *sdxi, enum sdxi_fn_gsr cmd)
+{
+ u64 ctl0 = sdxi_read64(sdxi, SDXI_MMIO_CTL0);
+
+ FIELD_MODIFY(SDXI_MMIO_CTL0_FN_GSR, &ctl0, cmd);
+ sdxi_write64(sdxi, SDXI_MMIO_CTL0, ctl0);
+}
+
+/* Get the device to the GSV_STOP state. */
+static int sdxi_dev_stop(struct sdxi_dev *sdxi)
+{
+ unsigned long deadline = jiffies + msecs_to_jiffies(1000);
+ bool reset_issued = false;
+
+ do {
+ enum sdxi_fn_gsv status = sdxi_dev_gsv(sdxi);
+
+ sdxi_dbg(sdxi, "%s: function state: %s\n", __func__, gsv_str(status));
+
+ switch (status) {
+ case SDXI_GSV_ACTIVE:
+ sdxi_write_fn_gsr(sdxi, SDXI_GSRV_STOP_SF);
+ break;
+ case SDXI_GSV_ERROR:
+ if (!reset_issued) {
+ sdxi_info(sdxi,
+ "function in error state, issuing reset\n");
+ sdxi_write_fn_gsr(sdxi, SDXI_GSRV_RESET);
+ reset_issued = true;
+ } else {
+ fsleep(1000);
+ }
+ break;
+ case SDXI_GSV_STOP:
+ return 0;
+ case SDXI_GSV_INIT:
+ case SDXI_GSV_STOPG_SF:
+ case SDXI_GSV_STOPG_HD:
+ /* transitional states, wait */
+ sdxi_dbg(sdxi, "waiting for stop (gsv = %u)\n",
+ status);
+ fsleep(1000);
+ break;
+ default:
+ sdxi_err(sdxi, "unknown gsv %u, giving up\n", status);
+ return -EIO;
+ }
+ } while (time_before(jiffies, deadline));
+
+ sdxi_err(sdxi, "stop attempt timed out, current status %u\n",
+ sdxi_dev_gsv(sdxi));
+ return -ETIMEDOUT;
+}
+
+/*
+ * See SDXI 1.0 4.1.8 Activation of the SDXI Function by Software.
+ */
+static int sdxi_fn_activate(struct sdxi_dev *sdxi)
+{
+ u64 version, cap0, cap1, ctl2;
+ int err;
+
+ /*
+ * Clear any existing configuration from MMIO_CTL0 and ensure
+ * the function is in GSV_STOP state.
+ */
+ sdxi_write64(sdxi, SDXI_MMIO_CTL0, 0);
+ err = sdxi_dev_stop(sdxi);
+ if (err)
+ return err;
+
+ version = sdxi_read64(sdxi, SDXI_MMIO_VERSION);
+ sdxi_info(sdxi, "SDXI %llu.%llu device found\n",
+ FIELD_GET(SDXI_MMIO_VERSION_MAJOR, version),
+ FIELD_GET(SDXI_MMIO_VERSION_MINOR, version));
+
+ /* Read capabilities and features. */
+ cap0 = sdxi_read64(sdxi, SDXI_MMIO_CAP0);
+ sdxi->db_stride = SZ_4K;
+ sdxi->db_stride *= 1U << FIELD_GET(SDXI_MMIO_CAP0_DB_STRIDE, cap0);
+
+ cap1 = sdxi_read64(sdxi, SDXI_MMIO_CAP1);
+ sdxi->op_grp_cap = FIELD_GET(SDXI_MMIO_CAP1_OPB_000_CAP, cap1);
+ sdxi->max_cxtid = FIELD_GET(SDXI_MMIO_CAP1_MAX_CXT, cap1);
+
+ /* Apply our configuration. */
+ ctl2 = FIELD_PREP(SDXI_MMIO_CTL2_MAX_CXT, sdxi->max_cxtid);
+ ctl2 |= FIELD_PREP(SDXI_MMIO_CTL2_MAX_BUFFER,
+ FIELD_GET(SDXI_MMIO_CAP1_MAX_BUFFER, cap1));
+ ctl2 |= FIELD_PREP(SDXI_MMIO_CTL2_MAX_AKEY_SZ,
+ FIELD_GET(SDXI_MMIO_CAP1_MAX_AKEY_SZ, cap1));
+ ctl2 |= FIELD_PREP(SDXI_MMIO_CTL2_OPB_000_AVL,
+ FIELD_GET(SDXI_MMIO_CAP1_OPB_000_CAP, cap1));
+ sdxi_write64(sdxi, SDXI_MMIO_CTL2, ctl2);
+
+ return 0;
+}
+
int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
{
struct sdxi_dev *sdxi;
+ int err;
sdxi = devm_kzalloc(dev, sizeof(*sdxi), GFP_KERNEL);
if (!sdxi)
@@ -22,5 +165,9 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
sdxi->bus_ops = ops;
dev_set_drvdata(dev, sdxi);
- return sdxi->bus_ops->init(sdxi);
+ err = sdxi->bus_ops->init(sdxi);
+ if (err)
+ return err;
+
+ return sdxi_fn_activate(sdxi);
}
diff --git a/drivers/dma/sdxi/mmio.h b/drivers/dma/sdxi/mmio.h
new file mode 100644
index 000000000000..c9a11c3f2f76
--- /dev/null
+++ b/drivers/dma/sdxi/mmio.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+/*
+ * SDXI MMIO register offsets and layouts.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#ifndef DMA_SDXI_MMIO_H
+#define DMA_SDXI_MMIO_H
+
+#include <linux/bits.h>
+
+enum sdxi_reg {
+ /* SDXI 1.0 9.1 General Control and Status Registers */
+ SDXI_MMIO_CTL0 = 0x00000,
+ SDXI_MMIO_CTL2 = 0x00010,
+ SDXI_MMIO_STS0 = 0x00100,
+ SDXI_MMIO_CAP0 = 0x00200,
+ SDXI_MMIO_CAP1 = 0x00208,
+ SDXI_MMIO_VERSION = 0x00210,
+};
+
+/* SDXI 1.0 Table 9-2: MMIO_CTL0 */
+#define SDXI_MMIO_CTL0_FN_GSR GENMASK_ULL(1, 0)
+
+/* SDXI 1.0 Table 9-4: MMIO_CTL2 */
+#define SDXI_MMIO_CTL2_MAX_BUFFER GENMASK_ULL(3, 0)
+#define SDXI_MMIO_CTL2_MAX_AKEY_SZ GENMASK_ULL(15, 12)
+#define SDXI_MMIO_CTL2_MAX_CXT GENMASK_ULL(31, 16)
+#define SDXI_MMIO_CTL2_OPB_000_AVL GENMASK_ULL(63, 32)
+
+/* SDXI 1.0 Table 9-5: MMIO_STS0 */
+#define SDXI_MMIO_STS0_FN_GSV GENMASK_ULL(2, 0)
+
+/* SDXI 1.0 Table 9-6: MMIO_CAP0 */
+#define SDXI_MMIO_CAP0_SFUNC GENMASK_ULL(15, 0)
+#define SDXI_MMIO_CAP0_DB_STRIDE GENMASK_ULL(22, 20)
+#define SDXI_MMIO_CAP0_MAX_DS_RING_SZ GENMASK_ULL(28, 24)
+
+/* SDXI 1.0 Table 9-7: MMIO_CAP1 */
+#define SDXI_MMIO_CAP1_MAX_BUFFER GENMASK_ULL(3, 0)
+#define SDXI_MMIO_CAP1_MAX_AKEY_SZ GENMASK_ULL(15, 12)
+#define SDXI_MMIO_CAP1_MAX_CXT GENMASK_ULL(31, 16)
+#define SDXI_MMIO_CAP1_OPB_000_CAP GENMASK_ULL(63, 32)
+
+/* SDXI 1.0 Table 9-8: MMIO_VERSION */
+#define SDXI_MMIO_VERSION_MINOR GENMASK_ULL(7, 0)
+#define SDXI_MMIO_VERSION_MAJOR GENMASK_ULL(23, 16)
+
+#endif /* DMA_SDXI_MMIO_H */
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 9430f3b8d0b3..427118e60aa6 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -9,8 +9,12 @@
#define DMA_SDXI_H
#include <linux/compiler_types.h>
+#include <linux/dev_printk.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/types.h>
+#include "mmio.h"
+
#define SDXI_DRV_DESC "SDXI driver"
struct sdxi_dev;
@@ -32,6 +36,11 @@ struct sdxi_dev {
void __iomem *ctrl_regs; /* virt addr of ctrl registers */
void __iomem *dbs; /* virt addr of doorbells */
+ /* hardware capabilities (from cap0 & cap1) */
+ u32 db_stride; /* doorbell stride in bytes */
+ u16 max_cxtid; /* Maximum context ID allowed. */
+ u32 op_grp_cap; /* supported operation group cap */
+
const struct sdxi_bus_ops *bus_ops;
};
@@ -40,6 +49,20 @@ static inline struct device *sdxi_to_dev(const struct sdxi_dev *sdxi)
return sdxi->dev;
}
+#define sdxi_dbg(s, fmt, ...) dev_dbg(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
+#define sdxi_info(s, fmt, ...) dev_info(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
+#define sdxi_err(s, fmt, ...) dev_err(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
+
int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops);
+static inline u64 sdxi_read64(const struct sdxi_dev *sdxi, enum sdxi_reg reg)
+{
+ return ioread64(sdxi->ctrl_regs + reg);
+}
+
+static inline void sdxi_write64(struct sdxi_dev *sdxi, enum sdxi_reg reg, u64 val)
+{
+ iowrite64(val, sdxi->ctrl_regs + reg);
+}
+
#endif /* DMA_SDXI_H */
--
2.53.0
[PATCH 05/23] dmaengine: sdxi: Configure context tables
From: Nathan Lynch <nathan.lynch@amd.com>
SDXI uses a two-level table hierarchy to track contexts. There is a
single level 2 table per function which enumerates up to 512 level 1
tables. Each level 1 table enumerates up to 128 contexts.
Allocate and install the L2 table and a single L1 table, enough for
context IDs 0-127 (i.e. the admin context with reserved id 0, plus 127
client contexts). For now, to avoid dynamic management of additional
L1 tables, cap ctl2.max_cxt to 127.
Since the table allocations are devres-managed, there is no
corresponding cleanup code required.
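The two-level hierarchy above maps a 16-bit context ID to one of 512 L2 slots and one of 128 L1 slots, covering up to 512 * 128 = 65536 contexts. A hypothetical index split consistent with that description (the spec-defined table walk may differ in detail):

```c
#include <assert.h>
#include <stdint.h>

#define SDXI_L2_TABLE_ENTRIES	512
#define SDXI_L1_TABLE_ENTRIES	128

/* With 128 contexts per L1 table, the L2 slot is cxt_id / 128 and the
 * slot within that L1 table is cxt_id % 128. */
static unsigned int cxt_l2_index(uint16_t cxt_id)
{
	return cxt_id / SDXI_L1_TABLE_ENTRIES;
}

static unsigned int cxt_l1_index(uint16_t cxt_id)
{
	return cxt_id % SDXI_L1_TABLE_ENTRIES;
}
```

Capping max_cxt at 127 keeps every context ID in L2 slot 0, which is why installing a single L1 table suffices for now.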
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/device.c | 40 +++++++++++++++++++++++++++++--
drivers/dma/sdxi/hw.h | 61 +++++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/mmio.h | 6 +++++
drivers/dma/sdxi/sdxi.h | 5 ++++
4 files changed, 110 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 1083fdddd72f..7e772ce81365 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -8,8 +8,11 @@
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/log2.h>
#include <linux/slab.h>
+#include "hw.h"
#include "mmio.h"
#include "sdxi.h"
@@ -113,7 +116,8 @@ static int sdxi_dev_stop(struct sdxi_dev *sdxi)
*/
static int sdxi_fn_activate(struct sdxi_dev *sdxi)
{
- u64 version, cap0, cap1, ctl2;
+ u64 version, cap0, cap1, ctl2, cxt_l2, lv01_ptr;
+ struct sdxi_cxt_L2_ent *L2_ent;
int err;
/*
@@ -137,7 +141,13 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
cap1 = sdxi_read64(sdxi, SDXI_MMIO_CAP1);
sdxi->op_grp_cap = FIELD_GET(SDXI_MMIO_CAP1_OPB_000_CAP, cap1);
- sdxi->max_cxtid = FIELD_GET(SDXI_MMIO_CAP1_MAX_CXT, cap1);
+
+ /*
+ * Constrain the number of client contexts supported by the
+ * driver to what fits in a single L1 table.
+ */
+ sdxi->max_cxtid = min(SDXI_L1_TABLE_ENTRIES - 1,
+ FIELD_GET(SDXI_MMIO_CAP1_MAX_CXT, cap1));
/* Apply our configuration. */
ctl2 = FIELD_PREP(SDXI_MMIO_CTL2_MAX_CXT, sdxi->max_cxtid);
@@ -149,6 +159,32 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
FIELD_GET(SDXI_MMIO_CAP1_OPB_000_CAP, cap1));
sdxi_write64(sdxi, SDXI_MMIO_CTL2, ctl2);
+ /* SDXI 1.0 4.1.8.2 Context Level 2 Table Setup */
+ sdxi->L2_table = dmam_alloc_coherent(sdxi_to_dev(sdxi),
+ sizeof(*sdxi->L2_table),
+ &sdxi->L2_dma, GFP_KERNEL);
+ if (!sdxi->L2_table)
+ return -ENOMEM;
+
+ cxt_l2 = FIELD_PREP(SDXI_MMIO_CXT_L2_PTR, sdxi->L2_dma >> ilog2(SZ_4K));
+ sdxi_write64(sdxi, SDXI_MMIO_CXT_L2, cxt_l2);
+
+ /* SDXI 1.0 4.1.8.3 Context Level 1 Table Setup */
+ sdxi->L1_table = dmam_alloc_coherent(sdxi_to_dev(sdxi),
+ sizeof(*sdxi->L1_table),
+ &sdxi->L1_dma, GFP_KERNEL);
+ if (!sdxi->L1_table)
+ return -ENOMEM;
+ /*
+ * SDXI 1.0 4.1.8.3.c: Initialize the Context level 2 table to
+ * point to the Context Level 1 [table].
+ */
+ L2_ent = &sdxi->L2_table->entry[0];
+ lv01_ptr = FIELD_PREP(SDXI_CXT_L2_ENT_VL, 1);
+ lv01_ptr |= FIELD_PREP(SDXI_CXT_L2_ENT_LV01_PTR,
+ sdxi->L1_dma >> ilog2(SZ_4K));
+ L2_ent->lv01_ptr = cpu_to_le64(lv01_ptr);
+
return 0;
}
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
new file mode 100644
index 000000000000..df520ca7792b
--- /dev/null
+++ b/drivers/dma/sdxi/hw.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright Advanced Micro Devices, Inc. */
+
+/*
+ * Control structures and constants defined in the SDXI specification,
+ * with low-level accessors. The ordering of the structures here
+ * follows the order of their definitions in the SDXI spec.
+ *
+ * Names of structures, members, and subfields (bit ranges within
+ * members) are written to match the spec, generally. E.g. struct
+ * sdxi_cxt_L2_ent corresponds to CXT_L2_ENT in the spec.
+ *
+ * Note: a member can have a subfield whose name is identical to the
+ * member's name. E.g. CXT_L2_ENT's lv01_ptr.
+ *
+ * All reserved fields and bits (usually named "rsvd" or some
+ * variation) must be set to zero by the driver unless otherwise
+ * specified.
+ */
+
+#ifndef DMA_SDXI_HW_H
+#define DMA_SDXI_HW_H
+
+#include <linux/bits.h>
+#include <linux/build_bug.h>
+#include <linux/types.h>
+#include <asm/byteorder.h>
+
+/* SDXI 1.0 Table 3-2: Context Level 2 Table Entry (CXT_L2_ENT) */
+struct sdxi_cxt_L2_ent {
+ __le64 lv01_ptr;
+#define SDXI_CXT_L2_ENT_VL BIT_ULL(0)
+#define SDXI_CXT_L2_ENT_LV01_PTR GENMASK_ULL(63, 12)
+} __packed;
+static_assert(sizeof(struct sdxi_cxt_L2_ent) == 8);
+
+/* SDXI 1.0 3.2.1 Context Level 2 Table */
+#define SDXI_L2_TABLE_ENTRIES 512
+struct sdxi_cxt_L2_table {
+ struct sdxi_cxt_L2_ent entry[SDXI_L2_TABLE_ENTRIES];
+};
+static_assert(sizeof(struct sdxi_cxt_L2_table) == 4096);
+
+/* SDXI 1.0 Table 3-3: Context Level 1 Table Entry (CXT_L1_ENT) */
+struct sdxi_cxt_L1_ent {
+ __le64 cxt_ctl_ptr;
+ __le64 akey_ptr;
+ __le32 misc0;
+ __le32 opb_000_enb;
+ __u8 rsvd_0[8];
+} __packed;
+static_assert(sizeof(struct sdxi_cxt_L1_ent) == 32);
+
+/* SDXI 1.0 3.2.2 Context Level 1 Table */
+#define SDXI_L1_TABLE_ENTRIES 128
+struct sdxi_cxt_L1_table {
+ struct sdxi_cxt_L1_ent entry[SDXI_L1_TABLE_ENTRIES];
+};
+static_assert(sizeof(struct sdxi_cxt_L1_table) == 4096);
+
+#endif /* DMA_SDXI_HW_H */
diff --git a/drivers/dma/sdxi/mmio.h b/drivers/dma/sdxi/mmio.h
index c9a11c3f2f76..d8d631849938 100644
--- a/drivers/dma/sdxi/mmio.h
+++ b/drivers/dma/sdxi/mmio.h
@@ -19,6 +19,9 @@ enum sdxi_reg {
SDXI_MMIO_CAP0 = 0x00200,
SDXI_MMIO_CAP1 = 0x00208,
SDXI_MMIO_VERSION = 0x00210,
+
+ /* SDXI 1.0 9.2 Context and RKey Table Registers */
+ SDXI_MMIO_CXT_L2 = 0x10000,
};
/* SDXI 1.0 Table 9-2: MMIO_CTL0 */
@@ -48,4 +51,7 @@ enum sdxi_reg {
#define SDXI_MMIO_VERSION_MINOR GENMASK_ULL(7, 0)
#define SDXI_MMIO_VERSION_MAJOR GENMASK_ULL(23, 16)
+/* SDXI 1.0 Table 9-9: MMIO_CXT_L2 */
+#define SDXI_MMIO_CXT_L2_PTR GENMASK_ULL(63, 12)
+
#endif /* DMA_SDXI_MMIO_H */
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 427118e60aa6..185f58b725da 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -41,6 +41,11 @@ struct sdxi_dev {
u16 max_cxtid; /* Maximum context ID allowed. */
u32 op_grp_cap; /* supported operation group cap */
+ struct sdxi_cxt_L2_table *L2_table;
+ dma_addr_t L2_dma;
+ struct sdxi_cxt_L1_table *L1_table;
+ dma_addr_t L1_dma;
+
const struct sdxi_bus_ops *bus_ops;
};
--
2.53.0
[PATCH 06/23] dmaengine: sdxi: Allocate DMA pools
From: Nathan Lynch <nathan.lynch@amd.com>
Each SDXI context consists of several control structures in system
memory:
* Descriptor ring
* Access key (AKey) table
* Context control block (CXT_CTL)
* Context status block (CXT_STS)
* Write index
Of these, the write index, context control and context status blocks
are small enough to justify DMA pools.
SDXI descriptors may optionally have 32-byte completion status
blocks (CST_BLK) associated with them that software can poll for
completion.
Introduce the C structures for context control, context status, and
completion status blocks. Create a DMA pool for each of these objects
as well as write indexes during SDXI function initialization.
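The object sizes that motivate per-object pools can be checked with a standalone mock of the packed layouts. Field names follow the patch's hw.h; this is illustrative only, not the driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Context control block (CXT_CTL), SDXI 1.0 Table 3-4: 64 bytes. */
struct cxt_ctl {
	uint64_t ds_ring_ptr;
	uint32_t ds_ring_sz;
	uint8_t rsvd_0[4];
	uint64_t cxt_sts_ptr;
	uint64_t write_index_ptr;
	uint8_t rsvd_1[32];
} __attribute__((packed));

/* Context status block (CXT_STS), SDXI 1.0 Table 3-5: 16 bytes. */
struct cxt_sts {
	uint8_t state;
	uint8_t misc0;
	uint8_t rsvd_0[6];
	uint64_t read_index;
} __attribute__((packed));

/* Completion status block (CST_BLK), SDXI 1.0 Table 6-4: 32 bytes. */
struct cst_blk {
	uint64_t signal;
	uint32_t flags;
	uint8_t rsvd_0[20];
} __attribute__((packed));
```

At 8 to 64 bytes apiece, allocating these from page-granular coherent memory would waste most of each page; same-size DMA pools pack many objects per page instead.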
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/device.c | 34 +++++++++++++++++++++++++++++++++-
drivers/dma/sdxi/hw.h | 28 ++++++++++++++++++++++++++++
drivers/dma/sdxi/sdxi.h | 5 +++++
3 files changed, 66 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 7e772ce81365..80bd1bbd9c7c 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -9,6 +9,7 @@
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
#include <linux/log2.h>
#include <linux/slab.h>
@@ -188,6 +189,37 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
return 0;
}
+static int sdxi_create_dma_pool(struct sdxi_dev *sdxi, struct dma_pool **pool,
+ const char *name, size_t size)
+{
+ *pool = dmam_pool_create(name, sdxi_to_dev(sdxi), size, size, 0);
+ return *pool ? 0 : -ENOMEM;
+}
+
+static int sdxi_device_init(struct sdxi_dev *sdxi)
+{
+ int err;
+
+ if (sdxi_create_dma_pool(sdxi, &sdxi->write_index_pool,
+ "Write_Index", sizeof(__le64)))
+ return -ENOMEM;
+ if (sdxi_create_dma_pool(sdxi, &sdxi->cxt_sts_pool,
+ "CXT_STS", sizeof(struct sdxi_cxt_sts)))
+ return -ENOMEM;
+ if (sdxi_create_dma_pool(sdxi, &sdxi->cxt_ctl_pool,
+ "CXT_CTL", sizeof(struct sdxi_cxt_ctl)))
+ return -ENOMEM;
+ if (sdxi_create_dma_pool(sdxi, &sdxi->cst_blk_pool,
+ "CST_BLK", sizeof(struct sdxi_cst_blk)))
+ return -ENOMEM;
+
+ err = sdxi_fn_activate(sdxi);
+ if (err)
+ return err;
+
+ return 0;
+}
+
int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
{
struct sdxi_dev *sdxi;
@@ -205,5 +237,5 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
if (err)
return err;
- return sdxi_fn_activate(sdxi);
+ return sdxi_device_init(sdxi);
}
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index df520ca7792b..846c671c423f 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -58,4 +58,32 @@ struct sdxi_cxt_L1_table {
};
static_assert(sizeof(struct sdxi_cxt_L1_table) == 4096);
+/* SDXI 1.0 Table 3-4: Context Control (CXT_CTL) */
+struct sdxi_cxt_ctl {
+ __le64 ds_ring_ptr;
+ __le32 ds_ring_sz;
+ __u8 rsvd_0[4];
+ __le64 cxt_sts_ptr;
+ __le64 write_index_ptr;
+ __u8 rsvd_1[32];
+} __packed;
+static_assert(sizeof(struct sdxi_cxt_ctl) == 64);
+
+/* SDXI 1.0 Table 3-5: Context Status (CXT_STS) */
+struct sdxi_cxt_sts {
+ __u8 state;
+ __u8 misc0;
+ __u8 rsvd_0[6];
+ __le64 read_index;
+} __packed;
+static_assert(sizeof(struct sdxi_cxt_sts) == 16);
+
+/* SDXI 1.0 Table 6-4: CST_BLK (Completion Status Block) */
+struct sdxi_cst_blk {
+ __le64 signal;
+ __le32 flags;
+ __u8 rsvd_0[20];
+} __packed;
+static_assert(sizeof(struct sdxi_cst_blk) == 32);
+
#endif /* DMA_SDXI_HW_H */
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 185f58b725da..6cda60bb33c4 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -46,6 +46,11 @@ struct sdxi_dev {
struct sdxi_cxt_L1_table *L1_table;
dma_addr_t L1_dma;
+ struct dma_pool *write_index_pool;
+ struct dma_pool *cxt_sts_pool;
+ struct dma_pool *cxt_ctl_pool;
+ struct dma_pool *cst_blk_pool;
+
const struct sdxi_bus_ops *bus_ops;
};
--
2.53.0
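[Editorial note: patch 06 creates each pool with `dmam_pool_create(name, dev, size, size, 0)`, i.e. alignment equal to the object size, so a small control block such as the 16-byte CXT_STS or 64-byte CXT_CTL never straddles its natural alignment boundary. The following is a minimal userspace sketch of that allocation contract — it is NOT the kernel dmapool API; `toy_pool` and its helpers are hypothetical names, with `posix_memalign()` standing in for the size == align guarantee.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * Userspace model of the fixed-size pool pattern in patch 06.
 * Not the kernel API: posix_memalign() models the guarantee that
 * dmam_pool_create(name, dev, size, size, 0) hands out zeroed
 * objects aligned to their own size.
 */
struct toy_pool {
	size_t obj_size;	/* also the alignment, as in the driver */
};

static void *toy_pool_zalloc(struct toy_pool *pool)
{
	void *obj = NULL;

	/* alignment == size, mirroring the driver's pool creation */
	if (posix_memalign(&obj, pool->obj_size, pool->obj_size))
		return NULL;
	memset(obj, 0, pool->obj_size);
	return obj;
}

static int obj_is_naturally_aligned(const void *obj, size_t size)
{
	return ((uintptr_t)obj % size) == 0;
}
```

With a 64-byte object (the CXT_CTL size asserted in hw.h), every allocation lands on a 64-byte boundary.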
* [PATCH 07/23] dmaengine: sdxi: Allocate administrative context
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Create the control structure hierarchy in memory for the per-function
administrative context. Use devres to queue the corresponding cleanup
since the admin context is a device-scope resource. The context is
inert for now; changes to follow will make it functional.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Makefile | 4 +-
drivers/dma/sdxi/context.c | 128 +++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/context.h | 54 +++++++++++++++++++
drivers/dma/sdxi/device.c | 11 ++++
drivers/dma/sdxi/hw.h | 43 +++++++++++++++
drivers/dma/sdxi/sdxi.h | 2 +
6 files changed, 241 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index f84b87d53e27..2178f274831c 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -1,6 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_SDXI) += sdxi.o
-sdxi-objs += device.o
+sdxi-objs += \
+ context.o \
+ device.o
sdxi-$(CONFIG_PCI_MSI) += pci.o
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
new file mode 100644
index 000000000000..0a6821992776
--- /dev/null
+++ b/drivers/dma/sdxi/context.c
@@ -0,0 +1,128 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI context management
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#define pr_fmt(fmt) "SDXI: " fmt
+
+#include <linux/bug.h>
+#include <linux/cleanup.h>
+#include <linux/device/devres.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "context.h"
+#include "sdxi.h"
+
+#define DEFAULT_DESC_RING_ENTRIES 1024
+
+enum {
+ /*
+ * The admin context always has ID 0. See SDXI 1.0 3.5
+ * Administrative Context (Context 0).
+ */
+ SDXI_ADMIN_CXT_ID = 0,
+};
+
+/*
+ * Free context and its resources. @cxt may be partially allocated but
+ * must have ->sdxi set.
+ */
+static void sdxi_free_cxt(struct sdxi_cxt *cxt)
+{
+ struct sdxi_dev *sdxi = cxt->sdxi;
+ struct sdxi_sq *sq = cxt->sq;
+
+ if (cxt->cxt_ctl)
+ dma_pool_free(sdxi->cxt_ctl_pool, cxt->cxt_ctl,
+ cxt->cxt_ctl_dma);
+ if (cxt->akey_table)
+ dma_free_coherent(sdxi_to_dev(sdxi), sizeof(*cxt->akey_table),
+ cxt->akey_table, cxt->akey_table_dma);
+ if (sq && sq->write_index)
+ dma_pool_free(sdxi->write_index_pool, sq->write_index,
+ sq->write_index_dma);
+ if (sq && sq->cxt_sts)
+ dma_pool_free(sdxi->cxt_sts_pool, sq->cxt_sts, sq->cxt_sts_dma);
+ if (sq && sq->desc_ring)
+ dma_free_coherent(sdxi_to_dev(sdxi), sq->ring_size,
+ sq->desc_ring, sq->ring_dma);
+ kfree(cxt->sq);
+ kfree(cxt);
+}
+
+DEFINE_FREE(sdxi_cxt, struct sdxi_cxt *, if (_T) sdxi_free_cxt(_T))
+
+/* Allocate a context and its control structure hierarchy in memory. */
+static struct sdxi_cxt *sdxi_alloc_cxt(struct sdxi_dev *sdxi)
+{
+ struct device *dev = sdxi_to_dev(sdxi);
+ struct sdxi_sq *sq;
+ struct sdxi_cxt *cxt __free(sdxi_cxt) = kzalloc(sizeof(*cxt), GFP_KERNEL);
+
+ if (!cxt)
+ return NULL;
+
+ cxt->sdxi = sdxi;
+
+ cxt->sq = kzalloc_obj(*cxt->sq, GFP_KERNEL);
+ if (!cxt->sq)
+ return NULL;
+
+ cxt->akey_table = dma_alloc_coherent(dev, sizeof(*cxt->akey_table),
+ &cxt->akey_table_dma, GFP_KERNEL);
+ if (!cxt->akey_table)
+ return NULL;
+
+ cxt->cxt_ctl = dma_pool_zalloc(sdxi->cxt_ctl_pool, GFP_KERNEL,
+ &cxt->cxt_ctl_dma);
+ if (!cxt->cxt_ctl)
+ return NULL;
+
+ sq = cxt->sq;
+
+ sq->ring_entries = DEFAULT_DESC_RING_ENTRIES;
+ sq->ring_size = sq->ring_entries * sizeof(sq->desc_ring[0]);
+ sq->desc_ring = dma_alloc_coherent(dev, sq->ring_size, &sq->ring_dma,
+ GFP_KERNEL);
+ if (!sq->desc_ring)
+ return NULL;
+
+ sq->cxt_sts = dma_pool_zalloc(sdxi->cxt_sts_pool, GFP_KERNEL,
+ &sq->cxt_sts_dma);
+ if (!sq->cxt_sts)
+ return NULL;
+
+ sq->write_index = dma_pool_zalloc(sdxi->write_index_pool, GFP_KERNEL,
+ &sq->write_index_dma);
+ if (!sq->write_index)
+ return NULL;
+
+ return_ptr(cxt);
+}
+
+static void free_admin_cxt(void *ptr)
+{
+ struct sdxi_dev *sdxi = ptr;
+
+ sdxi_free_cxt(sdxi->admin_cxt);
+}
+
+int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
+{
+ struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
+ if (!cxt)
+ return -ENOMEM;
+
+ cxt->id = SDXI_ADMIN_CXT_ID;
+ cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;
+
+ sdxi->admin_cxt = no_free_ptr(cxt);
+
+ return devm_add_action_or_reset(sdxi_to_dev(sdxi), free_admin_cxt, sdxi);
+}
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
new file mode 100644
index 000000000000..800b4ead1dd9
--- /dev/null
+++ b/drivers/dma/sdxi/context.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#ifndef DMA_SDXI_CONTEXT_H
+#define DMA_SDXI_CONTEXT_H
+
+#include <linux/dma-mapping.h>
+#include <linux/types.h>
+
+#include "hw.h"
+#include "sdxi.h"
+
+/*
+ * The size of the AKey table is flexible, from 4KB to 1MB. Always use
+ * the minimum size for now.
+ */
+struct sdxi_akey_table {
+ struct sdxi_akey_ent entry[SZ_4K / sizeof(struct sdxi_akey_ent)];
+};
+
+/* Submission Queue */
+struct sdxi_sq {
+ u32 ring_entries;
+ u32 ring_size;
+ struct sdxi_desc *desc_ring;
+ dma_addr_t ring_dma;
+
+ __le64 *write_index;
+ dma_addr_t write_index_dma;
+
+ struct sdxi_cxt_sts *cxt_sts;
+ dma_addr_t cxt_sts_dma;
+};
+
+struct sdxi_cxt {
+ struct sdxi_dev *sdxi;
+ unsigned int id;
+
+ __le64 __iomem *db;
+
+ struct sdxi_cxt_ctl *cxt_ctl;
+ dma_addr_t cxt_ctl_dma;
+
+ struct sdxi_akey_table *akey_table;
+ dma_addr_t akey_table_dma;
+
+ struct sdxi_sq *sq;
+};
+
+int sdxi_admin_cxt_init(struct sdxi_dev *sdxi);
+
+#endif /* DMA_SDXI_CONTEXT_H */
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 80bd1bbd9c7c..636abc410dcd 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -13,6 +13,7 @@
#include <linux/log2.h>
#include <linux/slab.h>
+#include "context.h"
#include "hw.h"
#include "mmio.h"
#include "sdxi.h"
@@ -186,6 +187,16 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
sdxi->L1_dma >> ilog2(SZ_4K));
L2_ent->lv01_ptr = cpu_to_le64(lv01_ptr);
+ /*
+ * SDXI 1.0 4.1.8.4 Administrative Context
+ *
+ * The admin context will not consume descriptors until we
+ * write its doorbell later.
+ */
+ err = sdxi_admin_cxt_init(sdxi);
+ if (err)
+ return err;
+
return 0;
}
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index 846c671c423f..b66eb22f7f90 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -23,6 +23,7 @@
#include <linux/bits.h>
#include <linux/build_bug.h>
+#include <linux/stddef.h>
#include <linux/types.h>
#include <asm/byteorder.h>
@@ -72,12 +73,39 @@ static_assert(sizeof(struct sdxi_cxt_ctl) == 64);
/* SDXI 1.0 Table 3-5: Context Status (CXT_STS) */
struct sdxi_cxt_sts {
__u8 state;
+#define SDXI_CXT_STS_STATE GENMASK(3, 0)
__u8 misc0;
__u8 rsvd_0[6];
__le64 read_index;
} __packed;
static_assert(sizeof(struct sdxi_cxt_sts) == 16);
+/* SDXI 1.0 Table 3-6: CXT_STS.state Encoding */
+/* Valid values for FIELD_GET(SDXI_CXT_STS_STATE, sdxi_cxt_sts.state). */
+enum cxt_sts_state {
+ CXTV_STOP_SW = 0x0,
+ CXTV_RUN = 0x1,
+ CXTV_STOPG_SW = 0x2,
+ CXTV_STOP_FN = 0x4,
+ CXTV_STOPG_FN = 0x6,
+ CXTV_ERR_FN = 0xf,
+};
+
+/* SDXI 1.0 Table 3-7: AKey Table Entry (AKEY_ENT) */
+struct sdxi_akey_ent {
+ __le16 intr_num;
+#define SDXI_AKEY_ENT_VL BIT(0)
+#define SDXI_AKEY_ENT_IV BIT(1)
+#define SDXI_AKEY_ENT_INTR_NUM GENMASK(14, 4)
+ __le16 tgt_sfunc;
+ __le32 pasid;
+ __le16 stag;
+ __u8 rsvd_0[2];
+ __le16 rkey;
+ __u8 rsvd_1[2];
+} __packed;
+static_assert(sizeof(struct sdxi_akey_ent) == 16);
+
/* SDXI 1.0 Table 6-4: CST_BLK (Completion Status Block) */
struct sdxi_cst_blk {
__le64 signal;
@@ -86,4 +114,19 @@ struct sdxi_cst_blk {
} __packed;
static_assert(sizeof(struct sdxi_cst_blk) == 32);
+struct sdxi_desc {
+ union {
+ /*
+ * SDXI 1.0 Table 6-3: DSC_GENERIC SDXI Descriptor
+ * Common Header and Footer Format
+ */
+ struct_group_tagged(sdxi_dsc_generic, generic,
+ __le32 opcode;
+ __u8 operation[52];
+ __le64 csb_ptr;
+ );
+ };
+} __packed;
+static_assert(sizeof(struct sdxi_desc) == 64);
+
#endif /* DMA_SDXI_HW_H */
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 6cda60bb33c4..4ef893ae15f3 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -51,6 +51,8 @@ struct sdxi_dev {
struct dma_pool *cxt_ctl_pool;
struct dma_pool *cst_blk_pool;
+ struct sdxi_cxt *admin_cxt;
+
const struct sdxi_bus_ops *bus_ops;
};
--
2.53.0
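[Editorial note: the error handling in `sdxi_alloc_cxt()` above leans on the kernel's scope-based cleanup helpers (`DEFINE_FREE()`, `__free()`, `return_ptr()` from `<linux/cleanup.h>`), which wrap the compiler's cleanup attribute. Here is a sketch of that idiom with hypothetical names (`free_buf`, `alloc_or_cleanup`); it models, not reproduces, the kernel macros.]

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of the scope-based cleanup idiom behind DEFINE_FREE()/__free():
 * GCC/Clang invoke the cleanup function when the variable leaves scope,
 * and "stealing" the pointer (the kernel's return_ptr()/no_free_ptr())
 * suppresses the cleanup on the success path.
 */
static int cleanups_run;

static void free_buf(char **p)
{
	if (*p) {
		free(*p);
		cleanups_run++;
	}
}

static char *alloc_or_cleanup(int fail_midway)
{
	__attribute__((cleanup(free_buf))) char *buf = malloc(16);

	if (!buf || fail_midway)
		return NULL;	/* buf freed automatically on this path */

	/* success: steal the pointer so the cleanup sees NULL */
	char *ret = buf;

	buf = NULL;
	return ret;
}
```

On the failure path the buffer is released without any explicit `free()`, which is why `sdxi_alloc_cxt()` can simply `return NULL` after each failed allocation.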
* [PATCH 08/23] dmaengine: sdxi: Install administrative context
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Serialize the context control block, akey table, and L1 entry for the
admin context, making its descriptor ring, write index, and context
status block visible to the SDXI implementation once it is activated.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 162 +++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/context.h | 7 ++
drivers/dma/sdxi/hw.h | 15 +++++
drivers/dma/sdxi/sdxi.h | 9 +++
4 files changed, 193 insertions(+)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 0a6821992776..097d871e530f 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -7,16 +7,22 @@
#define pr_fmt(fmt) "SDXI: " fmt
+#include <linux/align.h>
+#include <linux/bitfield.h>
#include <linux/bug.h>
#include <linux/cleanup.h>
#include <linux/device/devres.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/errno.h>
+#include <linux/iommu.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <asm/barrier.h>
+#include <asm/rwonce.h>
#include "context.h"
+#include "hw.h"
#include "sdxi.h"
#define DEFAULT_DESC_RING_ENTRIES 1024
@@ -106,6 +112,152 @@ static struct sdxi_cxt *sdxi_alloc_cxt(struct sdxi_dev *sdxi)
return_ptr(cxt);
}
+struct sdxi_cxt_ctl_cfg {
+ dma_addr_t ds_ring_ptr;
+ dma_addr_t cxt_sts_ptr;
+ dma_addr_t write_index_ptr;
+ u32 ds_ring_sz;
+ u8 qos;
+ u8 csa;
+ bool se;
+};
+
+static int configure_cxt_ctl(struct sdxi_cxt_ctl *ctl, const struct sdxi_cxt_ctl_cfg *cfg)
+{
+ u64 ds_ring_ptr, cxt_sts_ptr, write_index_ptr;
+
+ write_index_ptr = FIELD_PREP(SDXI_CXT_CTL_WRITE_INDEX_PTR,
+ cfg->write_index_ptr >> WRT_INDEX_PTR_SHIFT);
+ cxt_sts_ptr = FIELD_PREP(SDXI_CXT_CTL_CXT_STS_PTR,
+ cfg->cxt_sts_ptr >> CXT_STATUS_PTR_SHIFT);
+
+ *ctl = (typeof(*ctl)) {
+ /*
+ * ds_ring_ptr contains the validity bit and is updated
+ * after a barrier is issued.
+ */
+ .ds_ring_sz = cpu_to_le32(cfg->ds_ring_sz),
+ .cxt_sts_ptr = cpu_to_le64(cxt_sts_ptr),
+ .write_index_ptr = cpu_to_le64(write_index_ptr),
+ };
+
+ ds_ring_ptr = FIELD_PREP(SDXI_CXT_CTL_VL, 1) |
+ FIELD_PREP(SDXI_CXT_CTL_QOS, cfg->qos) |
+ FIELD_PREP(SDXI_CXT_CTL_SE, cfg->se) |
+ FIELD_PREP(SDXI_CXT_CTL_CSA, cfg->csa) |
+ FIELD_PREP(SDXI_CXT_CTL_DS_RING_PTR,
+ cfg->ds_ring_ptr >> DESC_RING_BASE_PTR_SHIFT);
+ /* Ensure other fields are visible before hw sees vl=1. */
+ dma_wmb();
+ WRITE_ONCE(ctl->ds_ring_ptr, cpu_to_le64(ds_ring_ptr));
+
+ return 0;
+}
+
+/*
+ * Logical representation of CXT_L1_ENT subfields.
+ */
+struct sdxi_cxt_L1_cfg {
+ dma_addr_t cxt_ctl_ptr;
+ dma_addr_t akey_ptr;
+ u32 cxt_pasid;
+ u32 opb_000_enb;
+ u16 max_buffer;
+ u8 akey_sz;
+ bool ka;
+ bool pv;
+};
+
+static int configure_L1_entry(struct sdxi_cxt_L1_ent *ent,
+ const struct sdxi_cxt_L1_cfg *cfg)
+{
+ u64 cxt_ctl_ptr, akey_ptr;
+ u32 misc0;
+
+ if (WARN_ON_ONCE(!IS_ALIGNED(cfg->cxt_ctl_ptr, SZ_64)))
+ return -EFAULT;
+ if (WARN_ON_ONCE(!IS_ALIGNED(cfg->akey_ptr, SZ_4K)))
+ return -EFAULT;
+
+ akey_ptr = FIELD_PREP(SDXI_CXT_L1_ENT_AKEY_SZ, cfg->akey_sz) |
+ FIELD_PREP(SDXI_CXT_L1_ENT_AKEY_PTR,
+ cfg->akey_ptr >> L1_CXT_AKEY_PTR_SHIFT);
+
+ misc0 = FIELD_PREP(SDXI_CXT_L1_ENT_PASID, cfg->cxt_pasid) |
+ FIELD_PREP(SDXI_CXT_L1_ENT_MAX_BUFFER, cfg->max_buffer);
+
+ *ent = (typeof(*ent)) {
+ /*
+ * cxt_ctl_ptr contains the validity bit and is
+ * updated after a barrier is issued.
+ */
+ .akey_ptr = cpu_to_le64(akey_ptr),
+ .misc0 = cpu_to_le32(misc0),
+ .opb_000_enb = cpu_to_le32(cfg->opb_000_enb),
+ };
+
+ cxt_ctl_ptr = FIELD_PREP(SDXI_CXT_L1_ENT_VL, 1) |
+ FIELD_PREP(SDXI_CXT_L1_ENT_KA, cfg->ka) |
+ FIELD_PREP(SDXI_CXT_L1_ENT_PV, cfg->pv) |
+ FIELD_PREP(SDXI_CXT_L1_ENT_CXT_CTL_PTR,
+ cfg->cxt_ctl_ptr >> L1_CXT_CTRL_PTR_SHIFT);
+ /* Ensure other fields are visible before hw sees vl=1. */
+ dma_wmb();
+ WRITE_ONCE(ent->cxt_ctl_ptr, cpu_to_le64(cxt_ctl_ptr));
+
+ return 0;
+}
+
+/*
+ * Make the context control structure hierarchy valid from the POV of
+ * the SDXI implementation. This may eventually involve allocation of
+ * a L1 table page, so it needs to be fallible.
+ */
+static int sdxi_publish_cxt(const struct sdxi_cxt *cxt)
+{
+ struct sdxi_cxt_ctl_cfg ctl_cfg;
+ struct sdxi_cxt_L1_cfg L1_cfg;
+ struct sdxi_cxt_L1_ent *ent;
+ u8 l1_idx;
+ int err;
+
+ if (WARN_ONCE(cxt->id > cxt->sdxi->max_cxtid,
+ "can't install cxt with id %u (limit %u)",
+ cxt->id, cxt->sdxi->max_cxtid))
+ return -EINVAL;
+
+ ctl_cfg = (typeof(ctl_cfg)) {
+ .se = 1,
+ .csa = 1,
+ .ds_ring_ptr = cxt->sq->ring_dma,
+ .ds_ring_sz = cxt->sq->ring_size >> 6,
+ .cxt_sts_ptr = cxt->sq->cxt_sts_dma,
+ .write_index_ptr = cxt->sq->write_index_dma,
+ };
+
+ err = configure_cxt_ctl(cxt->cxt_ctl, &ctl_cfg);
+ if (err)
+ return err;
+
+ l1_idx = ID_TO_L1_INDEX(cxt->id);
+
+ ent = &cxt->sdxi->L1_table->entry[l1_idx];
+
+ L1_cfg = (typeof(L1_cfg)) {
+ .ka = 1,
+ .pv = 0,
+ .cxt_ctl_ptr = cxt->cxt_ctl_dma,
+ .akey_sz = akey_table_order(cxt->akey_table),
+ .akey_ptr = cxt->akey_table_dma,
+ .cxt_pasid = IOMMU_NO_PASID,
+ .max_buffer = 11, /* 4GB */
+ .opb_000_enb = cxt->sdxi->op_grp_cap,
+ };
+
+ return configure_L1_entry(ent, &L1_cfg);
+ /* todo: need to send DSC_CXT_UPD to admin */
+}
+
static void free_admin_cxt(void *ptr)
{
struct sdxi_dev *sdxi = ptr;
@@ -115,13 +267,23 @@ static void free_admin_cxt(void *ptr)
int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
{
+ int err;
+ struct sdxi_sq *sq;
+
struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
if (!cxt)
return -ENOMEM;
+ sq = cxt->sq;
+ /* SDXI 1.0 4.1.8.4.b: Set CXT_STS.state to CXTV_RUN. */
+ sq->cxt_sts->state = FIELD_PREP(SDXI_CXT_STS_STATE, CXTV_RUN);
cxt->id = SDXI_ADMIN_CXT_ID;
cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;
+ err = sdxi_publish_cxt(cxt);
+ if (err)
+ return err;
+
sdxi->admin_cxt = no_free_ptr(cxt);
return devm_add_action_or_reset(sdxi_to_dev(sdxi), free_admin_cxt, sdxi);
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index 800b4ead1dd9..bbde1fd49af3 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -20,6 +20,13 @@ struct sdxi_akey_table {
struct sdxi_akey_ent entry[SZ_4K / sizeof(struct sdxi_akey_ent)];
};
+/* For encoding the akey table size in CXT_L1_ENT's akey_sz. */
+static inline u8 akey_table_order(const struct sdxi_akey_table *tbl)
+{
+ static_assert(sizeof(*tbl) == SZ_4K);
+ return 0;
+}
+
/* Submission Queue */
struct sdxi_sq {
u32 ring_entries;
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index b66eb22f7f90..46424376f26f 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -45,8 +45,16 @@ static_assert(sizeof(struct sdxi_cxt_L2_table) == 4096);
/* SDXI 1.0 Table 3-3: Context Level 1 Table Entry (CXT_L1_ENT) */
struct sdxi_cxt_L1_ent {
__le64 cxt_ctl_ptr;
+#define SDXI_CXT_L1_ENT_VL BIT_ULL(0)
+#define SDXI_CXT_L1_ENT_KA BIT_ULL(1)
+#define SDXI_CXT_L1_ENT_PV BIT_ULL(2)
+#define SDXI_CXT_L1_ENT_CXT_CTL_PTR GENMASK_ULL(63, 6)
__le64 akey_ptr;
+#define SDXI_CXT_L1_ENT_AKEY_SZ GENMASK_ULL(3, 0)
+#define SDXI_CXT_L1_ENT_AKEY_PTR GENMASK_ULL(63, 12)
__le32 misc0;
+#define SDXI_CXT_L1_ENT_PASID GENMASK(19, 0)
+#define SDXI_CXT_L1_ENT_MAX_BUFFER GENMASK(23, 20)
__le32 opb_000_enb;
__u8 rsvd_0[8];
} __packed;
@@ -62,10 +70,17 @@ static_assert(sizeof(struct sdxi_cxt_L1_table) == 4096);
/* SDXI 1.0 Table 3-4: Context Control (CXT_CTL) */
struct sdxi_cxt_ctl {
__le64 ds_ring_ptr;
+#define SDXI_CXT_CTL_VL BIT_ULL(0)
+#define SDXI_CXT_CTL_QOS GENMASK_ULL(3, 2)
+#define SDXI_CXT_CTL_SE BIT_ULL(4)
+#define SDXI_CXT_CTL_CSA BIT_ULL(5)
+#define SDXI_CXT_CTL_DS_RING_PTR GENMASK_ULL(63, 6)
__le32 ds_ring_sz;
__u8 rsvd_0[4];
__le64 cxt_sts_ptr;
+#define SDXI_CXT_CTL_CXT_STS_PTR GENMASK_ULL(63, 4)
__le64 write_index_ptr;
+#define SDXI_CXT_CTL_WRITE_INDEX_PTR GENMASK_ULL(63, 3)
__u8 rsvd_1[32];
} __packed;
static_assert(sizeof(struct sdxi_cxt_ctl) == 64);
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 4ef893ae15f3..bbc14364a5c9 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -17,6 +17,15 @@
#define SDXI_DRV_DESC "SDXI driver"
+#define ID_TO_L1_INDEX(id) ((id) & 0x7F)
+
+#define DESC_RING_BASE_PTR_SHIFT 6
+#define CXT_STATUS_PTR_SHIFT 4
+#define WRT_INDEX_PTR_SHIFT 3
+
+#define L1_CXT_CTRL_PTR_SHIFT 6
+#define L1_CXT_AKEY_PTR_SHIFT 12
+
struct sdxi_dev;
/**
--
2.53.0
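[Editorial note: patch 08 serializes `CXT_CTL.ds_ring_ptr` with `FIELD_PREP()` against the `SDXI_CXT_CTL_*` masks — vl in bit 0, qos in bits 3:2, se/csa in bits 4/5, and the 64-byte-aligned ring address in 63:6, pre-shifted by `DESC_RING_BASE_PTR_SHIFT` (6). The sketch below uses simplified stand-ins for `GENMASK_ULL()`/`FIELD_PREP()` from `<linux/bits.h>`/`<linux/bitfield.h>`; the field layout follows the patch, the helper names are illustrative.]

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for GENMASK_ULL(h, l) */
#define GENMASK_U64(h, l) \
	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

/* Simplified stand-in for FIELD_PREP(): shift val to the mask's low bit */
static uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val << __builtin_ctzll(mask)) & mask;
}

/* Field layout per SDXI 1.0 Table 3-4, as encoded in the patch */
#define CXT_CTL_VL		GENMASK_U64(0, 0)
#define CXT_CTL_QOS		GENMASK_U64(3, 2)
#define CXT_CTL_SE		GENMASK_U64(4, 4)
#define CXT_CTL_CSA		GENMASK_U64(5, 5)
#define CXT_CTL_DS_RING_PTR	GENMASK_U64(63, 6)

static uint64_t pack_ds_ring_ptr(uint64_t ring_dma, unsigned int qos,
				 int se, int csa)
{
	/* ring_dma is 64-byte aligned, so bits 5:0 are zero before the shift */
	return field_prep(CXT_CTL_VL, 1) |
	       field_prep(CXT_CTL_QOS, qos) |
	       field_prep(CXT_CTL_SE, se) |
	       field_prep(CXT_CTL_CSA, csa) |
	       field_prep(CXT_CTL_DS_RING_PTR, ring_dma >> 6);
}
```

The same compose-then-publish shape applies to the L1 entry: every non-valid field is written first, and the word carrying the valid bit is stored last, after `dma_wmb()`.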
* [PATCH 09/23] dmaengine: sdxi: Start functions on probe, stop on remove
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Following admin context setup in the previous patch, drive each SDXI
function to active state during probe. This is done by writing
GSRV_ACTIVE to MMIO_CTL0.fn_gsr and polling MMIO_STS0.fn_gsv until the
function reaches GSV_ACTIVE or an error state. A 1-second timeout has
been sufficient in practice so far.
Introduce sdxi_unregister() to stop the function during remove and wire
it up via the pci_driver .remove callback.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/device.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++-
drivers/dma/sdxi/pci.c | 6 +++++
drivers/dma/sdxi/sdxi.h | 1 +
3 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 636abc410dcd..145aa098c269 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -67,6 +67,49 @@ static void sdxi_write_fn_gsr(struct sdxi_dev *sdxi, enum sdxi_fn_gsr cmd)
sdxi_write64(sdxi, SDXI_MMIO_CTL0, ctl0);
}
+static int sdxi_dev_start(struct sdxi_dev *sdxi)
+{
+ unsigned long deadline;
+ enum sdxi_fn_gsv status;
+
+ status = sdxi_dev_gsv(sdxi);
+ if (status != SDXI_GSV_STOP) {
+ sdxi_err(sdxi,
+ "can't activate busy device (unexpected gsv: %s)\n",
+ gsv_str(status));
+ return -EIO;
+ }
+
+ sdxi_write_fn_gsr(sdxi, SDXI_GSRV_ACTIVE);
+
+ deadline = jiffies + msecs_to_jiffies(1000);
+ do {
+ status = sdxi_dev_gsv(sdxi);
+ sdxi_dbg(sdxi, "%s: function state: %s\n", __func__, gsv_str(status));
+
+ switch (status) {
+ case SDXI_GSV_ACTIVE:
+ sdxi_dbg(sdxi, "activated\n");
+ return 0;
+ case SDXI_GSV_ERROR:
+ sdxi_err(sdxi, "went to error state\n");
+ return -EIO;
+ case SDXI_GSV_INIT:
+ case SDXI_GSV_STOP:
+ /* transitional states, wait */
+ fsleep(1000);
+ break;
+ default:
+ sdxi_err(sdxi, "unexpected gsv %u, giving up\n", status);
+ return -EIO;
+ }
+ } while (time_before(jiffies, deadline));
+
+ sdxi_err(sdxi, "activation timed out, current status %u\n",
+ sdxi_dev_gsv(sdxi));
+ return -ETIMEDOUT;
+}
+
/* Get the device to the GSV_STOP state. */
static int sdxi_dev_stop(struct sdxi_dev *sdxi)
{
@@ -197,7 +240,11 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
if (err)
return err;
- return 0;
+ /*
+ * SDXI 1.0 4.1.8.9: Set MMIO_CTL0.fn_gsr to GSRV_ACTIVE and
+ * wait for MMIO_STS0.fn_gsv to reach GSV_ACTIVE or GSV_ERROR.
+ */
+ return sdxi_dev_start(sdxi);
}
static int sdxi_create_dma_pool(struct sdxi_dev *sdxi, struct dma_pool **pool,
@@ -250,3 +297,10 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
return sdxi_device_init(sdxi);
}
+
+void sdxi_unregister(struct device *dev)
+{
+ struct sdxi_dev *sdxi = dev_get_drvdata(dev);
+
+ sdxi_dev_stop(sdxi);
+}
diff --git a/drivers/dma/sdxi/pci.c b/drivers/dma/sdxi/pci.c
index f3f8485e50e3..8e4dfde078ff 100644
--- a/drivers/dma/sdxi/pci.c
+++ b/drivers/dma/sdxi/pci.c
@@ -67,6 +67,11 @@ static int sdxi_pci_probe(struct pci_dev *pdev,
return sdxi_register(&pdev->dev, &sdxi_pci_ops);
}
+static void sdxi_pci_remove(struct pci_dev *pdev)
+{
+ sdxi_unregister(&pdev->dev);
+}
+
static const struct pci_device_id sdxi_id_table[] = {
{ PCI_DEVICE_CLASS(PCI_CLASS_ACCELERATOR_SDXI, 0xffffff) },
{ }
@@ -77,6 +82,7 @@ static struct pci_driver sdxi_driver = {
.name = "sdxi",
.id_table = sdxi_id_table,
.probe = sdxi_pci_probe,
+ .remove = sdxi_pci_remove,
.sriov_configure = pci_sriov_configure_simple,
};
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index bbc14364a5c9..426101875334 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -75,6 +75,7 @@ static inline struct device *sdxi_to_dev(const struct sdxi_dev *sdxi)
#define sdxi_err(s, fmt, ...) dev_err(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops);
+void sdxi_unregister(struct device *dev);
static inline u64 sdxi_read64(const struct sdxi_dev *sdxi, enum sdxi_reg reg)
{
--
2.53.0
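[Editorial note: `sdxi_dev_start()` above is a classic request-then-poll state machine: write GSRV_ACTIVE, then re-read the status until it reaches the target state, hits an error state, or the deadline expires. A userspace model of that control flow, with a bounded retry count standing in for the jiffies deadline — the enum values and names here are illustrative, not the SDXI MMIO encodings.]

```c
#include <assert.h>

/* Illustrative states; not the SDXI register encodings */
enum toy_gsv { GSV_STOP, GSV_INIT, GSV_ACTIVE, GSV_ERROR };

/*
 * Poll a status source until it reports active, errors out, or the
 * attempt budget (standing in for the 1-second jiffies deadline)
 * is exhausted.
 */
static int poll_until_active(enum toy_gsv (*read_gsv)(void *), void *dev,
			     int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		switch (read_gsv(dev)) {
		case GSV_ACTIVE:
			return 0;	/* activation complete */
		case GSV_ERROR:
			return -5;	/* -EIO */
		default:
			break;		/* transitional state: retry */
		}
	}
	return -110;			/* -ETIMEDOUT */
}

/* A fake device that becomes active after a few status reads */
struct toy_dev { int reads_left; };

static enum toy_gsv toy_read_gsv(void *p)
{
	struct toy_dev *d = p;

	return d->reads_left-- > 0 ? GSV_INIT : GSV_ACTIVE;
}
```

The real loop additionally sleeps between reads (`fsleep(1000)`) and distinguishes transitional states (GSV_INIT, GSV_STOP) from unexpected encodings, which it treats as fatal.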
* [PATCH 10/23] dmaengine: sdxi: Complete administrative context jump start
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Now that the SDXI function has been placed in active state, the admin
context can finally be started by writing its doorbell. Introduce
sdxi_cxt_push_doorbell() to pair the necessary barrier with the
doorbell update.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 8 ++++++++
drivers/dma/sdxi/context.h | 2 ++
drivers/dma/sdxi/device.c | 15 ++++++++++++++-
3 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 097d871e530f..934c487b4774 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -16,6 +16,7 @@
#include <linux/dmapool.h>
#include <linux/errno.h>
#include <linux/iommu.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <asm/barrier.h>
@@ -258,6 +259,13 @@ static int sdxi_publish_cxt(const struct sdxi_cxt *cxt)
/* todo: need to send DSC_CXT_UPD to admin */
}
+void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index)
+{
+ /* Ensure preceding write index increment is visible. */
+ dma_wmb();
+ iowrite64(index, cxt->db);
+}
+
static void free_admin_cxt(void *ptr)
{
struct sdxi_dev *sdxi = ptr;
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index bbde1fd49af3..c34acd730acb 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -58,4 +58,6 @@ struct sdxi_cxt {
int sdxi_admin_cxt_init(struct sdxi_dev *sdxi);
+void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index);
+
#endif /* DMA_SDXI_CONTEXT_H */
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 145aa098c269..15f61d1ce490 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -244,7 +244,20 @@ static int sdxi_fn_activate(struct sdxi_dev *sdxi)
* SDXI 1.0 4.1.8.9: Set MMIO_CTL0.fn_gsr to GSRV_ACTIVE and
* wait for MMIO_STS0.fn_gsv to reach GSV_ACTIVE or GSV_ERROR.
*/
- return sdxi_dev_start(sdxi);
+ err = sdxi_dev_start(sdxi);
+ if (err)
+ return err;
+
+ /*
+ * SDXI 1.0 4.1.8.10.b: Start the admin context using method
+ * #3 ("Jump Start 1") from 4.3.4 Starting A Context and
+ * Context Signaling. We haven't queued any descriptors to the
+ * admin context at this point, so the appropriate value for
+ * the doorbell is 0.
+ */
+ sdxi_cxt_push_doorbell(sdxi->admin_cxt, 0);
+
+ return 0;
}
static int sdxi_create_dma_pool(struct sdxi_dev *sdxi, struct dma_pool **pool,
--
2.53.0
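[Editorial note: the submission flow that patch 10 enables pairs two stores: software publishes a new write index, then rings the doorbell with that index so the device consumes descriptors up to it (index 0 for the jump start, since nothing is queued yet). A toy model of that producer arithmetic — `toy_ring` and `toy_submit` are hypothetical names, and the `dma_wmb()` ordering from `sdxi_cxt_push_doorbell()` is represented only by a comment.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy producer model: write_index counts descriptors produced by
 * software; doorbell is the last index made visible to the device.
 */
struct toy_ring {
	uint64_t write_index;
	uint64_t doorbell;
};

static void toy_submit(struct toy_ring *r, int ndesc)
{
	r->write_index += ndesc;
	/* dma_wmb() goes here: index update visible before doorbell */
	r->doorbell = r->write_index;
}
```

Ringing with `ndesc == 0` matches the jump start in `sdxi_fn_activate()`: the doorbell write starts the context without handing it any work.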
* [PATCH 11/23] dmaengine: sdxi: Add client context alloc and release APIs
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Expose sdxi_cxt_new() and sdxi_cxt_exit(), which are the rest of the
driver's entry points to creating and releasing SDXI contexts.
Track client contexts in a device-wide allocating xarray, mapping
context ID to the context object. The admin context always has ID 0,
so begin allocations at 1. Define a local class for ID allocation to
facilitate automatic release of an ID if an error is encountered when
"publishing" a context to the L1 table.
Introduce new code to invalidate a context's entry in the L1 table on
deallocation.
Support for starting and stopping contexts will be added in changes to
follow.
The only expected user of sdxi_cxt_new() and sdxi_cxt_exit() at this
point is the DMA engine provider code where a client context per
channel will be created.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 111 +++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/context.h | 13 ++++++
drivers/dma/sdxi/device.c | 8 ++++
drivers/dma/sdxi/sdxi.h | 2 +
4 files changed, 134 insertions(+)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 934c487b4774..7cae140c0a20 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -155,6 +155,16 @@ static int configure_cxt_ctl(struct sdxi_cxt_ctl *ctl, const struct sdxi_cxt_ctl
return 0;
}
+static void invalidate_cxtl_ctl(struct sdxi_cxt_ctl *ctl)
+{
+ u64 ds_ring_ptr = le64_to_cpu(ctl->ds_ring_ptr);
+
+ FIELD_MODIFY(SDXI_CXT_CTL_VL, &ds_ring_ptr, 0);
+ WRITE_ONCE(ctl->ds_ring_ptr, cpu_to_le64(ds_ring_ptr));
+ dma_wmb();
+ *ctl = (typeof(*ctl)) { 0 };
+}
+
/*
* Logical representation of CXT_L1_ENT subfields.
*/
@@ -209,6 +219,16 @@ static int configure_L1_entry(struct sdxi_cxt_L1_ent *ent,
return 0;
}
+static void invalidate_L1_entry(struct sdxi_cxt_L1_ent *ent)
+{
+ u64 cxt_ctl_ptr = le64_to_cpu(ent->cxt_ctl_ptr);
+
+ FIELD_MODIFY(SDXI_CXT_L1_ENT_VL, &cxt_ctl_ptr, 0);
+ WRITE_ONCE(ent->cxt_ctl_ptr, cpu_to_le64(cxt_ctl_ptr));
+ dma_wmb();
+ *ent = (typeof(*ent)) { 0 };
+}
+
/*
* Make the context control structure hierarchy valid from the POV of
* the SDXI implementation. This may eventually involve allocation of
@@ -259,6 +279,17 @@ static int sdxi_publish_cxt(const struct sdxi_cxt *cxt)
/* todo: need to send DSC_CXT_UPD to admin */
}
+/* Invalidate a context. */
+static void sdxi_rescind_cxt(struct sdxi_cxt *cxt)
+{
+ u8 l1_idx = ID_TO_L1_INDEX(cxt->id);
+ struct sdxi_cxt_L1_ent *ent = &cxt->sdxi->L1_table->entry[l1_idx];
+
+ invalidate_L1_entry(ent);
+ invalidate_cxtl_ctl(cxt->cxt_ctl);
+ /* todo: need to send DSC_CXT_UPD to admin */
+}
+
void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index)
{
/* Ensure preceding write index increment is visible. */
@@ -266,6 +297,61 @@ void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index)
iowrite64(index, cxt->db);
}
+/* Enable automatic cleanup of an allocated context ID */
+struct __class_sdxi_cxt_id {
+ struct sdxi_dev *sdxi;
+ s32 id;
+};
+
+#define sdxi_cxt_id_null ((struct __class_sdxi_cxt_id){ NULL, -1 })
+#define take_sdxi_cxt_id(x) __get_and_null(x, sdxi_cxt_id_null)
+
+DEFINE_CLASS(sdxi_alloc_cxt_id, struct __class_sdxi_cxt_id,
+ if (_T.id >= 0)
+ xa_erase(&_T.sdxi->client_cxts, _T.id),
+ ((struct __class_sdxi_cxt_id){
+ .sdxi = sdxi,
+ .id = ({
+ struct xa_limit limit = XA_LIMIT(1, sdxi->max_cxtid);
+ u32 id;
+ int err = xa_alloc(&sdxi->client_cxts, &id, cxt,
+ limit, GFP_KERNEL);
+ err ? err : id;
+ }),
+ }),
+ struct sdxi_dev *sdxi, struct sdxi_cxt *cxt)
+
+/*
+ * Allocate the context ID; link the context back to the device;
+ * perform some final initialization of the context based on the ID
+ * allocated; update the context tables.
+ */
+static int register_cxt(struct sdxi_dev *sdxi, struct sdxi_cxt *cxt)
+{
+ int err;
+
+ CLASS(sdxi_alloc_cxt_id, slot)(sdxi, cxt);
+ if (slot.id < 0)
+ return slot.id;
+
+ cxt->sdxi = sdxi;
+ cxt->id = slot.id;
+ cxt->db = sdxi->dbs + slot.id * sdxi->db_stride;
+
+ err = sdxi_publish_cxt(cxt);
+ if (err)
+ return err;
+
+ take_sdxi_cxt_id(slot);
+ return 0;
+}
+
+static void unregister_cxt(struct sdxi_cxt *cxt)
+{
+ sdxi_rescind_cxt(cxt);
+ xa_erase(&cxt->sdxi->client_cxts, cxt->id);
+}
+
static void free_admin_cxt(void *ptr)
{
struct sdxi_dev *sdxi = ptr;
@@ -296,3 +382,28 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
return devm_add_action_or_reset(sdxi_to_dev(sdxi), free_admin_cxt, sdxi);
}
+
+/*
+ * Allocate a context for in-kernel use. Starting the context is the
+ * caller's responsibility.
+ */
+struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi)
+{
+ struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
+ if (!cxt)
+ return NULL;
+
+ if (register_cxt(sdxi, cxt))
+ return NULL;
+
+ return_ptr(cxt);
+}
+
+void sdxi_cxt_exit(struct sdxi_cxt *cxt)
+{
+ if (WARN_ON(sdxi_cxt_is_admin(cxt)))
+ return;
+
+ unregister_cxt(cxt);
+ sdxi_free_cxt(cxt);
+}
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index c34acd730acb..5cd78e883c8d 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -58,6 +58,19 @@ struct sdxi_cxt {
int sdxi_admin_cxt_init(struct sdxi_dev *sdxi);
+struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi);
+void sdxi_cxt_exit(struct sdxi_cxt *cxt);
+
+static inline struct sdxi_cxt *to_admin_cxt(const struct sdxi_cxt *cxt)
+{
+ return cxt->sdxi->admin_cxt;
+}
+
+static inline bool sdxi_cxt_is_admin(const struct sdxi_cxt *cxt)
+{
+ return cxt == to_admin_cxt(cxt);
+}
+
void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index);
#endif /* DMA_SDXI_CONTEXT_H */
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 15f61d1ce490..aaff6b15325a 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -12,6 +12,7 @@
#include <linux/dmapool.h>
#include <linux/log2.h>
#include <linux/slab.h>
+#include <linux/xarray.h>
#include "context.h"
#include "hw.h"
@@ -302,6 +303,7 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
sdxi->dev = dev;
sdxi->bus_ops = ops;
+ xa_init_flags(&sdxi->client_cxts, XA_FLAGS_ALLOC1);
dev_set_drvdata(dev, sdxi);
err = sdxi->bus_ops->init(sdxi);
@@ -314,6 +316,12 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
void sdxi_unregister(struct device *dev)
{
struct sdxi_dev *sdxi = dev_get_drvdata(dev);
+ struct sdxi_cxt *cxt;
+ unsigned long index;
+
+ xa_for_each(&sdxi->client_cxts, index, cxt)
+ sdxi_cxt_exit(cxt);
+ xa_destroy(&sdxi->client_cxts);
sdxi_dev_stop(sdxi);
}
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index 426101875334..da33719735ab 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -12,6 +12,7 @@
#include <linux/dev_printk.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/types.h>
+#include <linux/xarray.h>
#include "mmio.h"
@@ -61,6 +62,7 @@ struct sdxi_dev {
struct dma_pool *cst_blk_pool;
struct sdxi_cxt *admin_cxt;
+ struct xarray client_cxts; /* context id -> (struct sdxi_cxt *) */
const struct sdxi_bus_ops *bus_ops;
};
--
2.53.0
* [PATCH 12/23] dmaengine: sdxi: Add descriptor ring management
From: Nathan Lynch <nathan.lynch@amd.com>
Introduce a library for managing SDXI descriptor ring state. It
encapsulates finding the next free space in the ring for depositing
descriptors, correctly updating the write index, and iterating over
slices (reservations) of the ring without dealing directly with ring
offsets and indexes.
The central abstraction is sdxi_ring_state, which maintains the write
index and a wait queue. An internal spin lock serializes checks for
space in the ring and updates to the write index.
Reservations (sdxi_ring_resv) are intended to be short-lived on-stack
objects representing slices of the ring for callers to populate with
descriptors. Both blocking and non-blocking reservation APIs are
provided.
Descriptor access within a reservation is provided via
sdxi_ring_resv_next() and sdxi_ring_resv_foreach().
Completion handlers must call sdxi_ring_wake_up() when descriptors
have been consumed so that blocked reservations can proceed.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Makefile | 3 +-
drivers/dma/sdxi/ring.c | 158 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/ring.h | 84 ++++++++++++++++++++++++
3 files changed, 244 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index 2178f274831c..23536a1defc3 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_SDXI) += sdxi.o
sdxi-objs += \
context.o \
- device.o
+ device.o \
+ ring.o
sdxi-$(CONFIG_PCI_MSI) += pci.o
diff --git a/drivers/dma/sdxi/ring.c b/drivers/dma/sdxi/ring.c
new file mode 100644
index 000000000000..d51b9e708a4f
--- /dev/null
+++ b/drivers/dma/sdxi/ring.c
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI descriptor ring state management. Handles advancing the write
+ * index correctly and supplies "reservations" i.e. slices of the ring
+ * to be filled with descriptors.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+#include <kunit/visibility.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/lockdep.h>
+#include <linux/range.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <asm/barrier.h>
+#include <asm/byteorder.h>
+#include <asm/div64.h>
+#include <asm/rwonce.h>
+
+#include "ring.h"
+#include "hw.h"
+
+/*
+ * Initialize ring management state. Caller is responsible for
+ * allocating, mapping, and initializing the actual control structures
+ * shared with hardware: the indexes and ring array.
+ */
+void sdxi_ring_state_init(struct sdxi_ring_state *rs, const __le64 *read_index,
+ __le64 *write_index, u32 entries,
+ struct sdxi_desc descs[static SZ_1K])
+{
+ WARN_ON_ONCE(!read_index);
+ WARN_ON_ONCE(!write_index);
+ /*
+ * See SDXI 1.0 Table 3-1 Memory Structure Summary. Minimum
+ * descriptor ring size in bytes is 64KB; thus 1024 64-byte
+ * entries.
+ */
+ WARN_ON_ONCE(entries < SZ_1K);
+
+ *rs = (typeof(*rs)) {
+ .write_index = le64_to_cpu(*write_index),
+ .write_index_ptr = write_index,
+ .read_index_ptr = read_index,
+ .entries = entries,
+ .entry = descs,
+ };
+ spin_lock_init(&rs->lock);
+ init_waitqueue_head(&rs->wqh);
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_ring_state_init);
+
+static u64 sdxi_ring_state_load_ridx(struct sdxi_ring_state *rs)
+{
+ lockdep_assert_held(&rs->lock);
+ return le64_to_cpu(READ_ONCE(*rs->read_index_ptr));
+}
+
+static void sdxi_ring_state_store_widx(struct sdxi_ring_state *rs, u64 new_widx)
+{
+ lockdep_assert_held(&rs->lock);
+ *rs->write_index_ptr = cpu_to_le64(rs->write_index = new_widx);
+}
+
+/* Non-blocking ring reservation. Callers must handle ring full (-EBUSY). */
+int sdxi_ring_try_reserve(struct sdxi_ring_state *rs, size_t nr,
+ struct sdxi_ring_resv *resv)
+{
+ u64 new_widx;
+
+ /*
+ * Caller bug, warn and reject.
+ */
+ if (WARN_ONCE(nr < 1 || nr > rs->entries,
+ "Reservation of size %zu requested from ring of size %u\n",
+ nr, rs->entries))
+ return -EINVAL;
+
+ scoped_guard(spinlock_irqsave, &rs->lock) {
+ u64 ridx = sdxi_ring_state_load_ridx(rs);
+
+ /*
+ * Bug: the read index should never exceed the write index.
+ * TODO: sdxi_err() or similar; need a reference to
+ * the device.
+ */
+ if (ridx > rs->write_index)
+ return -EIO;
+
+ new_widx = rs->write_index + nr;
+
+ /*
+ * Not enough space available right now.
+ * TODO: sdxi_dbg() or tracepoint here.
+ */
+ if (new_widx - ridx > rs->entries)
+ return -EBUSY;
+
+ sdxi_ring_state_store_widx(rs, new_widx);
+ }
+
+ *resv = (typeof(*resv)) {
+ .rs = rs,
+ .range = {
+ .start = new_widx - nr,
+ .end = new_widx - 1,
+ },
+ .iter = new_widx - nr,
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_ring_try_reserve);
+
+/* Blocking ring reservation. Retries until success or non-transient error. */
+int sdxi_ring_reserve(struct sdxi_ring_state *rs, size_t nr,
+ struct sdxi_ring_resv *resv)
+{
+ int ret;
+
+ wait_event(rs->wqh,
+ (ret = sdxi_ring_try_reserve(rs, nr, resv)) != -EBUSY);
+
+ return ret;
+}
+
+/* Completion code should call this whenever descriptors have been consumed. */
+void sdxi_ring_wake_up(struct sdxi_ring_state *rs)
+{
+ wake_up_all(&rs->wqh);
+}
+
+static struct sdxi_desc *
+sdxi_desc_ring_entry(const struct sdxi_ring_state *rs, u64 index)
+{
+ return &rs->entry[do_div(index, rs->entries)];
+}
+
+struct sdxi_desc *sdxi_ring_resv_next(struct sdxi_ring_resv *resv)
+{
+ if (resv->range.start <= resv->iter && resv->iter <= resv->range.end)
+ return sdxi_desc_ring_entry(resv->rs, resv->iter++);
+ /*
+ * Caller has iterated to the end of the reservation.
+ */
+ if (resv->iter == resv->range.end + 1)
+ return NULL;
+ /*
+ * Should happen only if caller messed with internal
+ * reservation state.
+ */
+ WARN_ONCE(1, "reservation[%llu,%llu] with iter %llu",
+ resv->range.start, resv->range.end, resv->iter);
+ return NULL;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_ring_resv_next);
diff --git a/drivers/dma/sdxi/ring.h b/drivers/dma/sdxi/ring.h
new file mode 100644
index 000000000000..d5682687c05c
--- /dev/null
+++ b/drivers/dma/sdxi/ring.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright Advanced Micro Devices, Inc. */
+#ifndef DMA_SDXI_RING_H
+#define DMA_SDXI_RING_H
+
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/range.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <asm/barrier.h>
+#include <asm/byteorder.h>
+#include <asm/div64.h>
+#include <asm/rwonce.h>
+
+#include "hw.h"
+
+/*
+ * struct sdxi_ring_state - Descriptor ring management.
+ *
+ * @lock: Guards *read_index_ptr (RO), *write_index_ptr (RW),
+ * write_index (RW). *read_index is incremented by hw.
+ * @write_index: Cached write index value, minimizes dereferences in
+ * critical sections.
+ * @write_index_ptr: Location of the architected write index shared with
+ * the SDXI implementation.
+ * @read_index_ptr: Location of the architected read index shared with
+ * the SDXI implementation.
+ * @entries: Number of entries in the ring.
+ * @entry: The descriptor ring itself, shared with the SDXI implementation.
+ * @wqh: Pending reservations.
+ */
+struct sdxi_ring_state {
+ spinlock_t lock;
+ u64 write_index; /* Cache current value of write index. */
+ __le64 *write_index_ptr;
+ const __le64 *read_index_ptr;
+ u32 entries;
+ struct sdxi_desc *entry;
+ wait_queue_head_t wqh;
+};
+
+/*
+ * Ring reservation and iteration state.
+ */
+struct sdxi_ring_resv {
+ const struct sdxi_ring_state *rs;
+ struct range range;
+ u64 iter;
+};
+
+void sdxi_ring_state_init(struct sdxi_ring_state *ring, const __le64 *read_index,
+ __le64 *write_index, u32 entries,
+ struct sdxi_desc descs[static SZ_1K]);
+void sdxi_ring_wake_up(struct sdxi_ring_state *rs);
+int sdxi_ring_reserve(struct sdxi_ring_state *ring, size_t nr,
+ struct sdxi_ring_resv *resv);
+int sdxi_ring_try_reserve(struct sdxi_ring_state *ring, size_t nr,
+ struct sdxi_ring_resv *resv);
+struct sdxi_desc *sdxi_ring_resv_next(struct sdxi_ring_resv *resv);
+
+/* Reset reservation's internal iterator. */
+static inline void sdxi_ring_resv_reset(struct sdxi_ring_resv *resv)
+{
+ resv->iter = resv->range.start;
+}
+
+/*
+ * Return the value that should be written to the doorbell after
+ * serializing descriptors for this reservation, i.e. the value of the
+ * write index after obtaining the reservation.
+ */
+static inline u64 sdxi_ring_resv_dbval(const struct sdxi_ring_resv *resv)
+{
+ return resv->range.end + 1;
+}
+
+#define sdxi_ring_resv_foreach(resv_, desc_) \
+ for (sdxi_ring_resv_reset(resv_), \
+ desc_ = sdxi_ring_resv_next(resv_); \
+ desc_; \
+ desc_ = sdxi_ring_resv_next(resv_))
+
+#endif /* DMA_SDXI_RING_H */
--
2.53.0
* [PATCH 13/23] dmaengine: sdxi: Add unit tests for descriptor ring reservations
From: Nathan Lynch <nathan.lynch@amd.com>
Add KUnit tests for the descriptor ring reservation API, covering:
- Valid reservations: full-ring and single-slot after advancing the
read pointer.
- Error paths: zero or over-capacity count (-EINVAL), inconsistent
index state (-EIO), and insufficient space (-EBUSY).
A .kunitconfig is included for ease of use:
$ tools/testing/kunit/kunit.py run \
--kunitconfig=drivers/dma/sdxi/.kunitconfig
No SDXI hardware is required to run these tests.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/.kunitconfig | 4 ++
drivers/dma/sdxi/Kconfig | 8 ++++
drivers/dma/sdxi/Makefile | 3 ++
drivers/dma/sdxi/ring_kunit.c | 105 ++++++++++++++++++++++++++++++++++++++++++
4 files changed, 120 insertions(+)
diff --git a/drivers/dma/sdxi/.kunitconfig b/drivers/dma/sdxi/.kunitconfig
new file mode 100644
index 000000000000..a98cf19770f0
--- /dev/null
+++ b/drivers/dma/sdxi/.kunitconfig
@@ -0,0 +1,4 @@
+CONFIG_KUNIT=y
+CONFIG_DMADEVICES=y
+CONFIG_SDXI=y
+CONFIG_SDXI_KUNIT_TEST=y
diff --git a/drivers/dma/sdxi/Kconfig b/drivers/dma/sdxi/Kconfig
index a568284cd583..e616d3e323bc 100644
--- a/drivers/dma/sdxi/Kconfig
+++ b/drivers/dma/sdxi/Kconfig
@@ -6,3 +6,11 @@ config SDXI
Platform Data Mover devices. SDXI is a vendor-neutral
standard for a memory-to-memory data mover and acceleration
interface.
+
+config SDXI_KUNIT_TEST
+ tristate "SDXI unit tests" if !KUNIT_ALL_TESTS
+ depends on SDXI && KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ KUnit tests for parts of the SDXI driver. Does not require
+ SDXI hardware.
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index 23536a1defc3..372f793c15b1 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -7,3 +7,6 @@ sdxi-objs += \
ring.o
sdxi-$(CONFIG_PCI_MSI) += pci.o
+
+obj-$(CONFIG_SDXI_KUNIT_TEST) += \
+ ring_kunit.o
diff --git a/drivers/dma/sdxi/ring_kunit.c b/drivers/dma/sdxi/ring_kunit.c
new file mode 100644
index 000000000000..3bc7073e0c39
--- /dev/null
+++ b/drivers/dma/sdxi/ring_kunit.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI descriptor ring management tests.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+#include <kunit/device.h>
+#include <kunit/test-bug.h>
+#include <kunit/test.h>
+#include <linux/container_of.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/packing.h>
+#include <linux/string.h>
+
+#include "ring.h"
+
+MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
+
+static void valid(struct kunit *t)
+{
+ __le64 wi, ri;
+ struct sdxi_ring_state r;
+ struct sdxi_ring_resv resv;
+ struct sdxi_desc *descs, *desc;
+
+
+ descs = kunit_kmalloc_array(t, SZ_1K, sizeof(descs[0]),
+ GFP_KERNEL | __GFP_ZERO);
+ KUNIT_ASSERT_NOT_NULL(t, descs);
+
+ ri = wi = 0;
+ sdxi_ring_state_init(&r, &ri, &wi, SZ_1K, descs);
+
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&r, r.entries, &resv), 0);
+ KUNIT_EXPECT_EQ(t, resv.range.start, 0);
+ KUNIT_EXPECT_EQ(t, resv.range.end, r.entries - 1);
+ KUNIT_EXPECT_EQ(t, le64_to_cpu(wi), r.entries);
+ sdxi_ring_resv_foreach(&resv, desc) {
+ KUNIT_EXPECT_NOT_NULL_MSG(t, desc,
+ "unexpected null descriptor for index %llu", resv.iter);
+ }
+
+ ri = cpu_to_le64(1);
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&r, 1, &resv), 0);
+ KUNIT_EXPECT_EQ(t, le64_to_cpu(wi), r.entries + 1);
+ KUNIT_EXPECT_NOT_NULL(t, sdxi_ring_resv_next(&resv));
+}
+
+static void invalid(struct kunit *t)
+{
+ __le64 wi, ri;
+ struct sdxi_ring_state rs;
+ struct sdxi_ring_resv resv;
+ struct sdxi_desc *descs;
+
+ descs = kunit_kmalloc_array(t, SZ_1K, sizeof(descs[0]),
+ GFP_KERNEL | __GFP_ZERO);
+ KUNIT_ASSERT_NOT_NULL(t, descs);
+
+ ri = wi = 0;
+ sdxi_ring_state_init(&rs, &ri, &wi, SZ_1K, descs);
+
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&rs, 0, &resv), -EINVAL);
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&rs, rs.entries + 1, &resv), -EINVAL);
+
+ ri = cpu_to_le64(1);
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&rs, 1, &resv), -EIO);
+
+ ri = 0;
+ wi = cpu_to_le64(rs.entries);
+ sdxi_ring_state_init(&rs, &ri, &wi, SZ_1K, descs);
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&rs, 1, &resv), -EBUSY);
+
+ ri = cpu_to_le64(rs.entries);
+ wi = cpu_to_le64(rs.entries + 1);
+ sdxi_ring_state_init(&rs, &ri, &wi, SZ_1K, descs);
+ KUNIT_EXPECT_EQ(t, sdxi_ring_try_reserve(&rs, rs.entries, &resv), -EBUSY);
+}
+
+static struct kunit_case testcases[] = {
+ KUNIT_CASE(valid),
+ KUNIT_CASE(invalid),
+ {}
+};
+
+static int setup_device(struct kunit *t)
+{
+ struct device *dev = kunit_device_register(t, "sdxi-mock-device");
+
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(t, dev);
+ t->priv = dev;
+ return 0;
+}
+
+static struct kunit_suite generic_desc_ts = {
+ .name = "SDXI descriptor ring management",
+ .test_cases = testcases,
+ .init = setup_device,
+};
+kunit_test_suite(generic_desc_ts);
+
+MODULE_DESCRIPTION("SDXI descriptor ring tests");
+MODULE_AUTHOR("Nathan Lynch");
+MODULE_LICENSE("GPL");
--
2.53.0
* [PATCH 14/23] dmaengine: sdxi: Attach descriptor ring state to contexts
From: Nathan Lynch <nathan.lynch@amd.com>
Attach an instance of struct sdxi_ring_state to each context upon
allocation. Each ring state has the same lifetime as its context and
is freed upon context release.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 13 +++++++++++++
drivers/dma/sdxi/context.h | 2 ++
2 files changed, 15 insertions(+)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 7cae140c0a20..792b5032203b 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -24,6 +24,7 @@
#include "context.h"
#include "hw.h"
+#include "ring.h"
#include "sdxi.h"
#define DEFAULT_DESC_RING_ENTRIES 1024
@@ -60,6 +61,7 @@ static void sdxi_free_cxt(struct sdxi_cxt *cxt)
dma_free_coherent(sdxi_to_dev(sdxi), sq->ring_size,
sq->desc_ring, sq->ring_dma);
kfree(cxt->sq);
+ kfree(cxt->ring_state);
kfree(cxt);
}
@@ -77,6 +79,10 @@ static struct sdxi_cxt *sdxi_alloc_cxt(struct sdxi_dev *sdxi)
cxt->sdxi = sdxi;
+ cxt->ring_state = kzalloc_obj(*cxt->ring_state, GFP_KERNEL);
+ if (!cxt->ring_state)
+ return NULL;
+
cxt->sq = kzalloc_obj(*cxt->sq, GFP_KERNEL);
if (!cxt->sq)
return NULL;
@@ -373,6 +379,8 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
sq->cxt_sts->state = FIELD_PREP(SDXI_CXT_STS_STATE, CXTV_RUN);
cxt->id = SDXI_ADMIN_CXT_ID;
cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;
+ sdxi_ring_state_init(cxt->ring_state, &sq->cxt_sts->read_index,
+ sq->write_index, sq->ring_entries, sq->desc_ring);
err = sdxi_publish_cxt(cxt);
if (err)
@@ -389,10 +397,15 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
*/
struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi)
{
+ struct sdxi_sq *sq;
+
struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
if (!cxt)
return NULL;
+ sq = cxt->sq;
+ sdxi_ring_state_init(cxt->ring_state, &sq->cxt_sts->read_index,
+ sq->write_index, sq->ring_entries, sq->desc_ring);
if (register_cxt(sdxi, cxt))
return NULL;
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index 5cd78e883c8d..9779b9aa4f86 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -54,6 +54,8 @@ struct sdxi_cxt {
dma_addr_t akey_table_dma;
struct sdxi_sq *sq;
+
+ struct sdxi_ring_state *ring_state;
};
int sdxi_admin_cxt_init(struct sdxi_dev *sdxi);
--
2.53.0
* [PATCH 15/23] dmaengine: sdxi: Per-context access key (AKey) table entry allocator
From: Nathan Lynch <nathan.lynch@amd.com>
Each SDXI context has a table of access keys (AKeys). SDXI descriptors
submitted to a context may refer to an AKey associated with that
context by its index in the table. AKeys describe properties of the
access that the descriptor is to perform, such as a PASID, a target
SDXI function, or an interrupt to trigger.
Use a per-context IDA to keep track of used entries in the table.
Provide sdxi_alloc_akey(), which claims an AKey table entry for the
caller to program directly; sdxi_akey_index(), which returns the
entry's index for programming into descriptors the caller intends to
submit; and sdxi_free_akey(), which clears the entry and makes it
available again.
The DMA engine provider is currently the only user and allocates a
single entry that encodes the access properties for copy operations
and a completion interrupt. More complex use patterns are possible
when user space gains access to SDXI contexts (not in this series).
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 5 +++++
drivers/dma/sdxi/context.h | 24 ++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 792b5032203b..04e0d3e6a337 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -15,6 +15,7 @@
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/errno.h>
+#include <linux/idr.h>
#include <linux/iommu.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/slab.h>
@@ -61,6 +62,7 @@ static void sdxi_free_cxt(struct sdxi_cxt *cxt)
dma_free_coherent(sdxi_to_dev(sdxi), sq->ring_size,
sq->desc_ring, sq->ring_dma);
kfree(cxt->sq);
+ ida_destroy(&cxt->akey_ida);
kfree(cxt->ring_state);
kfree(cxt);
}
@@ -381,6 +383,7 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;
sdxi_ring_state_init(cxt->ring_state, &sq->cxt_sts->read_index,
sq->write_index, sq->ring_entries, sq->desc_ring);
+ ida_init(&cxt->akey_ida);
err = sdxi_publish_cxt(cxt);
if (err)
@@ -406,6 +409,8 @@ struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi)
sq = cxt->sq;
sdxi_ring_state_init(cxt->ring_state, &sq->cxt_sts->read_index,
sq->write_index, sq->ring_entries, sq->desc_ring);
+ ida_init(&cxt->akey_ida);
+
if (register_cxt(sdxi, cxt))
return NULL;
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index 9779b9aa4f86..5310e51a668c 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -6,7 +6,10 @@
#ifndef DMA_SDXI_CONTEXT_H
#define DMA_SDXI_CONTEXT_H
+#include <linux/array_size.h>
#include <linux/dma-mapping.h>
+#include <linux/idr.h>
+#include <linux/string.h>
#include <linux/types.h>
#include "hw.h"
@@ -50,6 +53,7 @@ struct sdxi_cxt {
struct sdxi_cxt_ctl *cxt_ctl;
dma_addr_t cxt_ctl_dma;
+ struct ida akey_ida;
struct sdxi_akey_table *akey_table;
dma_addr_t akey_table_dma;
@@ -75,4 +79,24 @@ static inline bool sdxi_cxt_is_admin(const struct sdxi_cxt *cxt)
void sdxi_cxt_push_doorbell(struct sdxi_cxt *cxt, u64 index);
+static inline struct sdxi_akey_ent *sdxi_alloc_akey(struct sdxi_cxt *cxt)
+{
+ unsigned int max = ARRAY_SIZE(cxt->akey_table->entry) - 1;
+ int idx = ida_alloc_max(&cxt->akey_ida, max, GFP_KERNEL);
+
+ return idx < 0 ? NULL : &cxt->akey_table->entry[idx];
+}
+
+static inline unsigned int sdxi_akey_index(const struct sdxi_cxt *cxt,
+ const struct sdxi_akey_ent *akey)
+{
+ return akey - &cxt->akey_table->entry[0];
+}
+
+static inline void sdxi_free_akey(struct sdxi_cxt *cxt, struct sdxi_akey_ent *akey)
+{
+ memset(akey, 0, sizeof(*akey));
+ ida_free(&cxt->akey_ida, sdxi_akey_index(cxt, akey));
+}
+
#endif /* DMA_SDXI_CONTEXT_H */
--
2.53.0
* [PATCH 16/23] dmaengine: sdxi: Generic descriptor manipulation helpers
From: Nathan Lynch <nathan.lynch@amd.com>
Introduce small helper functions for manipulating certain common
properties of descriptors after their operation-specific encoding has
been performed but before they are submitted.
sdxi_desc_set_csb() associates an optional completion status block
with a descriptor.
sdxi_desc_set_fence() forces retirement of any prior descriptors in
the ring before the target descriptor is executed. This is useful for
interrupt descriptors that signal the completion of an operation.
sdxi_desc_set_sequential() ensures that all writes from prior
descriptor operations in the same context are made globally visible
prior to making writes from the target descriptor globally visible.
sdxi_desc_make_valid() sets the descriptor validity bit, transferring
ownership of the descriptor from software to the SDXI
implementation. (The implementation is allowed to execute the
descriptor at this point, but the caller is still obligated to push
the doorbell to ensure execution occurs.)
Each of the preceding functions will warn if invoked on a descriptor
that has already been released to the SDXI implementation (i.e. had
its validity bit set).
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/descriptor.h | 64 +++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/hw.h | 9 ++++++
2 files changed, 73 insertions(+)
diff --git a/drivers/dma/sdxi/descriptor.h b/drivers/dma/sdxi/descriptor.h
new file mode 100644
index 000000000000..c0f01b1be726
--- /dev/null
+++ b/drivers/dma/sdxi/descriptor.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef DMA_SDXI_DESCRIPTOR_H
+#define DMA_SDXI_DESCRIPTOR_H
+
+/*
+ * Facilities for encoding SDXI descriptors.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/ratelimit.h>
+#include <linux/types.h>
+#include <asm/byteorder.h>
+
+#include "hw.h"
+
+static inline void sdxi_desc_vl_expect(const struct sdxi_desc *desc, bool expected)
+{
+ u8 vl = FIELD_GET(SDXI_DSC_VL, le32_to_cpu(desc->opcode));
+
+ WARN_RATELIMIT(vl != expected, "expected vl=%u but got %u\n", expected, vl);
+}
+
+static inline void sdxi_desc_set_csb(struct sdxi_desc *desc, dma_addr_t addr)
+{
+ sdxi_desc_vl_expect(desc, 0);
+ desc->csb_ptr = cpu_to_le64(FIELD_PREP(SDXI_DSC_CSB_PTR, addr >> 5));
+}
+
+static inline void sdxi_desc_make_valid(struct sdxi_desc *desc)
+{
+ u32 opcode = le32_to_cpu(desc->opcode);
+
+ sdxi_desc_vl_expect(desc, 0);
+ FIELD_MODIFY(SDXI_DSC_VL, &opcode, 1);
+ /*
+ * Once vl is set, no more modifications to the descriptor
+ * payload are allowed. Ensure the vl update is ordered after
+ * all other initialization of the descriptor.
+ */
+ dma_wmb();
+ WRITE_ONCE(desc->opcode, cpu_to_le32(opcode));
+}
+
+static inline void sdxi_desc_set_fence(struct sdxi_desc *desc)
+{
+ u32 opcode = le32_to_cpu(desc->opcode);
+
+ sdxi_desc_vl_expect(desc, 0);
+ FIELD_MODIFY(SDXI_DSC_FE, &opcode, 1);
+ desc->opcode = cpu_to_le32(opcode);
+}
+
+static inline void sdxi_desc_set_sequential(struct sdxi_desc *desc)
+{
+ u32 opcode = le32_to_cpu(desc->opcode);
+
+ sdxi_desc_vl_expect(desc, 0);
+ FIELD_MODIFY(SDXI_DSC_SE, &opcode, 1);
+ desc->opcode = cpu_to_le32(opcode);
+}
+
+#endif /* DMA_SDXI_DESCRIPTOR_H */
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index 46424376f26f..cb1bed2f83f2 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -140,6 +140,15 @@ struct sdxi_desc {
__u8 operation[52];
__le64 csb_ptr;
);
+
+/* For opcode field */
+#define SDXI_DSC_VL BIT(0)
+#define SDXI_DSC_SE BIT(1)
+#define SDXI_DSC_FE BIT(2)
+
+/* For csb_ptr field */
+#define SDXI_DSC_CSB_PTR GENMASK_ULL(63, 5)
+
};
} __packed;
static_assert(sizeof(struct sdxi_desc) == 64);
--
2.53.0
* [PATCH 17/23] dmaengine: sdxi: Add completion status block API
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (15 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 16/23] dmaengine: sdxi: Generic descriptor manipulation helpers Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 18/23] dmaengine: sdxi: Encode context start, stop, and sync descriptors Nathan Lynch via B4 Relay
` (5 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Introduce an API for managing completion status blocks. These are
DMA-coherent buffers that may be optionally attached to SDXI
descriptors to signal completion. The SDXI implementation clears the
signal field (initialized to 1) upon completion, setting an
error bit in the flags field if problems were encountered executing
the descriptor.
Callers allocate completion blocks from a per-device DMA pool via
sdxi_completion_alloc(). sdxi_completion_attach() associates a
completion with a descriptor by encoding the completion's DMA address
into the descriptor's csb_ptr field.
sdxi_completion_poll() busy-waits until the signal field is cleared by
the implementation, and is intended for descriptors that are expected
to execute quickly.
sdxi_completion_signaled() and sdxi_completion_errored() query the
signal field and error flag of the completion, respectively.
struct sdxi_completion is kept opaque to callers. A DEFINE_FREE
cleanup handler is provided.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Makefile | 1 +
drivers/dma/sdxi/completion.c | 79 +++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/completion.h | 24 +++++++++++++
drivers/dma/sdxi/hw.h | 1 +
4 files changed, 105 insertions(+)
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index 372f793c15b1..dd08f4a5f723 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -2,6 +2,7 @@
obj-$(CONFIG_SDXI) += sdxi.o
sdxi-objs += \
+ completion.o \
context.o \
device.o \
ring.o
diff --git a/drivers/dma/sdxi/completion.c b/drivers/dma/sdxi/completion.c
new file mode 100644
index 000000000000..859c8334f0e7
--- /dev/null
+++ b/drivers/dma/sdxi/completion.c
@@ -0,0 +1,79 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI Descriptor Completion Status Block handling.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+#include <linux/cleanup.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/slab.h>
+
+#include "completion.h"
+#include "descriptor.h"
+#include "hw.h"
+
+struct sdxi_completion {
+ struct sdxi_dev *sdxi;
+ struct sdxi_cst_blk *cst_blk;
+ dma_addr_t cst_blk_dma;
+};
+
+struct sdxi_completion *sdxi_completion_alloc(struct sdxi_dev *sdxi)
+{
+ struct sdxi_cst_blk *cst_blk;
+ dma_addr_t cst_blk_dma;
+
+ /*
+ * Assume callers can't tolerate GFP_KERNEL and use
+ * GFP_NOWAIT. Add a gfp_t flags parameter if that changes.
+ */
+ struct sdxi_completion *sc __free(kfree) = kmalloc(sizeof(*sc), GFP_NOWAIT);
+ if (!sc)
+ return NULL;
+
+ cst_blk = dma_pool_zalloc(sdxi->cst_blk_pool, GFP_NOWAIT, &cst_blk_dma);
+ if (!cst_blk)
+ return NULL;
+
+ cst_blk->signal = cpu_to_le64(1);
+
+ *sc = (typeof(*sc)) {
+ .sdxi = sdxi,
+ .cst_blk = cst_blk,
+ .cst_blk_dma = cst_blk_dma,
+ };
+
+ return_ptr(sc);
+}
+
+void sdxi_completion_free(struct sdxi_completion *sc)
+{
+ dma_pool_free(sc->sdxi->cst_blk_pool, sc->cst_blk, sc->cst_blk_dma);
+ kfree(sc);
+}
+
+void sdxi_completion_poll(const struct sdxi_completion *sc)
+{
+ while (READ_ONCE(sc->cst_blk->signal) != 0)
+ cpu_relax();
+}
+
+bool sdxi_completion_signaled(const struct sdxi_completion *sc)
+{
+ dma_rmb();
+ return (sc->cst_blk->signal == 0);
+}
+
+bool sdxi_completion_errored(const struct sdxi_completion *sc)
+{
+ dma_rmb();
+ return FIELD_GET(SDXI_CST_BLK_ER_BIT, le32_to_cpu(sc->cst_blk->flags));
+}
+
+void sdxi_completion_attach(struct sdxi_desc *desc,
+ const struct sdxi_completion *cs)
+{
+ sdxi_desc_set_csb(desc, cs->cst_blk_dma);
+}
diff --git a/drivers/dma/sdxi/completion.h b/drivers/dma/sdxi/completion.h
new file mode 100644
index 000000000000..b3b2b85796ad
--- /dev/null
+++ b/drivers/dma/sdxi/completion.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright Advanced Micro Devices, Inc. */
+#ifndef DMA_SDXI_COMPLETION_H
+#define DMA_SDXI_COMPLETION_H
+
+#include "sdxi.h"
+
+/*
+ * Polled completion status block that can be attached to a
+ * descriptor.
+ */
+struct sdxi_completion;
+struct sdxi_desc;
+struct sdxi_completion *sdxi_completion_alloc(struct sdxi_dev *sdxi);
+void sdxi_completion_free(struct sdxi_completion *sc);
+void sdxi_completion_poll(const struct sdxi_completion *sc);
+void sdxi_completion_attach(struct sdxi_desc *desc,
+ const struct sdxi_completion *sc);
+bool sdxi_completion_signaled(const struct sdxi_completion *sc);
+bool sdxi_completion_errored(const struct sdxi_completion *sc);
+
+DEFINE_FREE(sdxi_completion, struct sdxi_completion *, if (_T) sdxi_completion_free(_T))
+
+#endif /* DMA_SDXI_COMPLETION_H */
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index cb1bed2f83f2..178161588bd0 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -125,6 +125,7 @@ static_assert(sizeof(struct sdxi_akey_ent) == 16);
struct sdxi_cst_blk {
__le64 signal;
__le32 flags;
+#define SDXI_CST_BLK_ER_BIT BIT(31)
__u8 rsvd_0[20];
} __packed;
static_assert(sizeof(struct sdxi_cst_blk) == 32);
--
2.53.0
* [PATCH 18/23] dmaengine: sdxi: Encode context start, stop, and sync descriptors
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (16 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 17/23] dmaengine: sdxi: Add completion status block API Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 19/23] dmaengine: sdxi: Provide context start and stop APIs Nathan Lynch via B4 Relay
` (4 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Introduce the low-level support for serializing three operation types
to the descriptor ring of the admin context: context start, context
stop, and sync. Each operation has its own distinct type that overlays
the generic struct sdxi_desc, along with a dedicated encoder function
that accepts an operation-specific parameter struct.
The parameter structs (sdxi_cxt_start, sdxi_cxt_stop, sdxi_sync)
expose only a necessary subset of the available descriptor fields to
callers, i.e. the target context range. These can be expanded over
time as needed.
Each encoder function is intended to 1) set any mandatory field values
for the descriptor type (e.g. SDXI_DSC_FE=1 for context start); and 2)
translate conventional kernel types (dma_addr_t, CPU-endian values)
from the parameter block to the descriptor in memory. While they're
expected to operate directly on descriptor ring memory, they do not
set the descriptor validity bit. That is left to the caller, which may
need to make other modifications to the descriptor, such as attaching a
completion block, before releasing it to the SDXI implementation.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Makefile | 1 +
drivers/dma/sdxi/descriptor.c | 91 +++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/descriptor.h | 46 ++++++++++++++++++++++
drivers/dma/sdxi/hw.h | 64 ++++++++++++++++++++++++++++++
4 files changed, 202 insertions(+)
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index dd08f4a5f723..08dd73a45dc7 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_SDXI) += sdxi.o
sdxi-objs += \
completion.o \
context.o \
+ descriptor.o \
device.o \
ring.o
diff --git a/drivers/dma/sdxi/descriptor.c b/drivers/dma/sdxi/descriptor.c
new file mode 100644
index 000000000000..be2a9244ce19
--- /dev/null
+++ b/drivers/dma/sdxi/descriptor.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI descriptor encoding.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <kunit/visibility.h>
+#include <linux/bitfield.h>
+#include <linux/types.h>
+#include <asm/byteorder.h>
+
+#include "hw.h"
+#include "descriptor.h"
+
+int sdxi_encode_cxt_start(struct sdxi_desc *desc,
+ const struct sdxi_cxt_start *params)
+{
+ u64 csb_ptr;
+ u32 opcode;
+
+ opcode = (FIELD_PREP(SDXI_DSC_FE, 1) |
+ FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_CXT_START_NM) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_ADMIN));
+
+ csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .cxt_start = (typeof(desc->cxt_start)) {
+ .opcode = cpu_to_le32(opcode),
+ .cxt_start = cpu_to_le16(params->range.cxt_start),
+ .cxt_end = cpu_to_le16(params->range.cxt_end),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_cxt_start);
+
+int sdxi_encode_cxt_stop(struct sdxi_desc *desc,
+ const struct sdxi_cxt_stop *params)
+{
+ u64 csb_ptr;
+ u32 opcode;
+
+ opcode = (FIELD_PREP(SDXI_DSC_FE, 1) |
+ FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_CXT_STOP) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_ADMIN));
+
+ csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .cxt_stop = (typeof(desc->cxt_stop)) {
+ .opcode = cpu_to_le32(opcode),
+ .cxt_start = cpu_to_le16(params->range.cxt_start),
+ .cxt_end = cpu_to_le16(params->range.cxt_end),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_cxt_stop);
+
+int sdxi_encode_sync(struct sdxi_desc *desc, const struct sdxi_sync *params)
+{
+ u64 csb_ptr;
+ u32 opcode;
+ u8 cflags;
+
+ opcode = (FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_SYNC) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_ADMIN));
+
+ cflags = FIELD_PREP(SDXI_DSC_SYNC_FLT, params->filter);
+
+ csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .sync = (typeof(desc->sync)) {
+ .opcode = cpu_to_le32(opcode),
+ .cflags = cflags,
+ .cxt_start = cpu_to_le16(params->range.cxt_start),
+ .cxt_end = cpu_to_le16(params->range.cxt_end),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_sync);
diff --git a/drivers/dma/sdxi/descriptor.h b/drivers/dma/sdxi/descriptor.h
index c0f01b1be726..5b8fd7cbaa03 100644
--- a/drivers/dma/sdxi/descriptor.h
+++ b/drivers/dma/sdxi/descriptor.h
@@ -9,6 +9,7 @@
*/
#include <linux/bitfield.h>
+#include <linux/minmax.h>
#include <linux/ratelimit.h>
#include <linux/types.h>
#include <asm/byteorder.h>
@@ -61,4 +62,49 @@ static inline void sdxi_desc_set_sequential(struct sdxi_desc *desc)
desc->opcode = cpu_to_le32(opcode);
}
+struct sdxi_cxt_range {
+ u16 cxt_start;
+ u16 cxt_end;
+};
+
+static inline struct sdxi_cxt_range sdxi_cxt_range(u16 a, u16 b)
+{
+ return (struct sdxi_cxt_range) {
+ .cxt_start = min(a, b),
+ .cxt_end = max(a, b),
+ };
+}
+
+static inline struct sdxi_cxt_range sdxi_cxt_range_single(u16 nr)
+{
+ return sdxi_cxt_range(nr, nr);
+}
+
+struct sdxi_cxt_start {
+ struct sdxi_cxt_range range;
+};
+
+int sdxi_encode_cxt_start(struct sdxi_desc *desc,
+ const struct sdxi_cxt_start *params);
+
+struct sdxi_cxt_stop {
+ struct sdxi_cxt_range range;
+};
+
+int sdxi_encode_cxt_stop(struct sdxi_desc *desc,
+ const struct sdxi_cxt_stop *params);
+
+struct sdxi_sync {
+ enum sdxi_sync_filter {
+ SDXI_SYNC_FLT_CXT = 0x0,
+ SDXI_SYNC_FLT_STOP = 0x1,
+ SDXI_SYNC_FLT_AKEY = 0x2,
+ SDXI_SYNC_FLT_RKEY = 0x3,
+ SDXI_SYNC_FLT_FN = 0x4,
+ } filter;
+ struct sdxi_cxt_range range;
+};
+
+int sdxi_encode_sync(struct sdxi_desc *desc, const struct sdxi_sync *params);
+
#endif /* DMA_SDXI_DESCRIPTOR_H */
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index 178161588bd0..4dcd0a3ff0fd 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -146,12 +146,76 @@ struct sdxi_desc {
#define SDXI_DSC_VL BIT(0)
#define SDXI_DSC_SE BIT(1)
#define SDXI_DSC_FE BIT(2)
+#define SDXI_DSC_SUBTYPE GENMASK(15, 8)
+#define SDXI_DSC_TYPE GENMASK(26, 16)
/* For csb_ptr field */
+#define SDXI_DSC_NP BIT_ULL(0)
#define SDXI_DSC_CSB_PTR GENMASK_ULL(63, 5)
+#define define_sdxi_dsc(tag_, name_, op_body_) \
+ struct tag_ { \
+ __le32 opcode; \
+ op_body_ \
+ __le64 csb_ptr; \
+ } __packed name_; \
+ static_assert(sizeof(struct tag_) == \
+ sizeof(struct sdxi_dsc_generic)); \
+ static_assert(offsetof(struct tag_, csb_ptr) == \
+ offsetof(struct sdxi_dsc_generic, csb_ptr))
+
+ /* SDXI 1.0 Table 6-14: DSC_CXT_START Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_cxt_start, cxt_start,
+ __u8 rsvd_0;
+ __u8 vflags;
+ __le16 vf_num;
+ __le16 cxt_start;
+ __le16 cxt_end;
+ __u8 rsvd_1[4];
+ __le64 db_value;
+ __u8 rsvd_2[32];
+ );
+
+ /* SDXI 1.0 Table 6-15: DSC_CXT_STOP Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_cxt_stop, cxt_stop,
+ __u8 rsvd_0;
+ __u8 vflags;
+ __le16 vf_num;
+ __le16 cxt_start;
+ __le16 cxt_end;
+ __u8 rsvd_1[44];
+ );
+
+ /* SDXI 1.0 Table 6-22: DSC_SYNC Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_sync, sync,
+ __u8 cflags;
+ __u8 vflags;
+ __le16 vf_num;
+ __le16 cxt_start;
+ __le16 cxt_end;
+ __le16 key_start;
+ __le16 key_end;
+ __u8 rsvd_0[40];
+ );
+/* For use with sync.cflags */
+#define SDXI_DSC_SYNC_FLT GENMASK(2, 0)
+
+#undef define_sdxi_dsc
};
} __packed;
static_assert(sizeof(struct sdxi_desc) == 64);
+/* SDXI 1.0 Table 6-1: SDXI Operation Groups */
+enum sdxi_dsc_type {
+ SDXI_DSC_OP_TYPE_ADMIN = 0x002,
+};
+
+/* SDXI 1.0 Table 6-2: SDXI Operation Groups, Types, and Subtypes */
+enum sdxi_dsc_subtype {
+ /* Administrative */
+ SDXI_DSC_OP_SUBTYPE_CXT_START_NM = 0x03,
+ SDXI_DSC_OP_SUBTYPE_CXT_STOP = 0x04,
+ SDXI_DSC_OP_SUBTYPE_SYNC = 0x06,
+};
+
#endif /* DMA_SDXI_HW_H */
--
2.53.0
* [PATCH 19/23] dmaengine: sdxi: Provide context start and stop APIs
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (17 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 18/23] dmaengine: sdxi: Encode context start, stop, and sync descriptors Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 20/23] dmaengine: sdxi: Encode nop, copy, and interrupt descriptors Nathan Lynch via B4 Relay
` (3 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Starting and stopping SDXI client contexts is implemented by submitting
special-purpose descriptors to a function's admin context.
Introduce high-level context start and stop APIs that operate on
struct sdxi_cxt objects, encapsulating the administrative descriptor
submission and completion signaling. These are intended for use by
clients such as the DMA engine provider to come.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/context.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/context.h | 3 ++
2 files changed, 80 insertions(+)
diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
index 04e0d3e6a337..fc6291f12ffe 100644
--- a/drivers/dma/sdxi/context.c
+++ b/drivers/dma/sdxi/context.c
@@ -23,7 +23,9 @@
#include <asm/barrier.h>
#include <asm/rwonce.h>
+#include "completion.h"
#include "context.h"
+#include "descriptor.h"
#include "hw.h"
#include "ring.h"
#include "sdxi.h"
@@ -394,6 +396,81 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
return devm_add_action_or_reset(sdxi_to_dev(sdxi), free_admin_cxt, sdxi);
}
+int sdxi_start_cxt(struct sdxi_cxt *cxt)
+{
+ struct sdxi_cxt *adm = to_admin_cxt(cxt);
+ struct sdxi_desc *desc;
+ struct sdxi_ring_resv resv;
+ int err;
+
+ might_sleep();
+
+ struct sdxi_completion *sc __free(sdxi_completion) =
+ sdxi_completion_alloc(cxt->sdxi);
+
+ if (!sc)
+ return -ENOMEM;
+
+ /* This is not how to start the admin context. */
+ if (WARN_ON(adm == cxt))
+ return -EINVAL;
+
+ err = sdxi_ring_reserve(adm->ring_state, 1, &resv);
+ if (err)
+ return err;
+
+ desc = sdxi_ring_resv_next(&resv);
+ sdxi_encode_cxt_start(desc, &(const struct sdxi_cxt_start) {
+ .range = sdxi_cxt_range_single(cxt->id),
+ });
+ sdxi_completion_attach(desc, sc);
+ sdxi_desc_make_valid(desc);
+ sdxi_cxt_push_doorbell(adm, sdxi_ring_resv_dbval(&resv));
+ sdxi_completion_poll(sc);
+
+ return 0;
+}
+
+void sdxi_stop_cxt(struct sdxi_cxt *cxt)
+{
+ struct sdxi_cxt *adm = to_admin_cxt(cxt);
+ struct sdxi_desc *stop, *sync;
+ struct sdxi_ring_resv resv;
+ int err;
+
+ might_sleep();
+
+ struct sdxi_completion *sc __free(sdxi_completion) =
+ sdxi_completion_alloc(cxt->sdxi);
+
+ if (!sc)
+ return;
+
+ /* This is not how to stop the admin context. */
+ if (WARN_ON(adm == cxt))
+ return;
+
+ err = sdxi_ring_reserve(adm->ring_state, 2, &resv);
+ if (WARN_ON_ONCE(err))
+ return;
+
+ stop = sdxi_ring_resv_next(&resv);
+ sync = sdxi_ring_resv_next(&resv);
+
+ sdxi_encode_cxt_stop(stop, &(const struct sdxi_cxt_stop) {
+ .range = sdxi_cxt_range_single(cxt->id),
+ });
+ sdxi_encode_sync(sync, &(const struct sdxi_sync) {
+ .filter = SDXI_SYNC_FLT_STOP,
+ .range = sdxi_cxt_range_single(cxt->id),
+ });
+ sdxi_completion_attach(sync, sc);
+ sdxi_desc_make_valid(stop);
+ sdxi_desc_make_valid(sync);
+ sdxi_cxt_push_doorbell(adm, sdxi_ring_resv_dbval(&resv));
+ sdxi_completion_poll(sc);
+}
+
/*
* Allocate a context for in-kernel use. Starting the context is the
* caller's responsibility.
diff --git a/drivers/dma/sdxi/context.h b/drivers/dma/sdxi/context.h
index 5310e51a668c..9061221e86cb 100644
--- a/drivers/dma/sdxi/context.h
+++ b/drivers/dma/sdxi/context.h
@@ -67,6 +67,9 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi);
struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi);
void sdxi_cxt_exit(struct sdxi_cxt *cxt);
+int sdxi_start_cxt(struct sdxi_cxt *cxt);
+void sdxi_stop_cxt(struct sdxi_cxt *cxt);
+
static inline struct sdxi_cxt *to_admin_cxt(const struct sdxi_cxt *cxt)
{
return cxt->sdxi->admin_cxt;
--
2.53.0
* [PATCH 20/23] dmaengine: sdxi: Encode nop, copy, and interrupt descriptors
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (18 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 19/23] dmaengine: sdxi: Provide context start and stop APIs Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 21/23] dmaengine: sdxi: Add unit tests for descriptor encoding Nathan Lynch via B4 Relay
` (2 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Introduce low-level support for serializing three operation types to
the descriptor ring of a client context: nop, copy, and interrupt.
As with the administrative descriptor support introduced earlier, each
operation has its own distinct type that overlays the generic struct
sdxi_desc, along with a dedicated encoder function that accepts an
operation-specific parameter struct.
Copy descriptors are used to implement memcpy offload for the DMA
engine provider, and interrupt descriptors are used to signal the
completion of preceding descriptors in the ring. Nops can be used in
error paths where a ring reservation has been obtained and the caller
needs to submit valid descriptors before returning.
Conditionally expose sdxi_encode_size32() for unit testing.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/descriptor.c | 107 ++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/descriptor.h | 25 ++++++++++
drivers/dma/sdxi/hw.h | 33 +++++++++++++
3 files changed, 165 insertions(+)
diff --git a/drivers/dma/sdxi/descriptor.c b/drivers/dma/sdxi/descriptor.c
index be2a9244ce19..41019e747528 100644
--- a/drivers/dma/sdxi/descriptor.c
+++ b/drivers/dma/sdxi/descriptor.c
@@ -7,12 +7,119 @@
#include <kunit/visibility.h>
#include <linux/bitfield.h>
+#include <linux/bug.h>
+#include <linux/range.h>
+#include <linux/sizes.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include "hw.h"
#include "descriptor.h"
+VISIBLE_IF_KUNIT int __must_check sdxi_encode_size32(u64 size, __le32 *dest)
+{
+ /*
+ * sizes are encoded as value - 1:
+ * value encoding
+ * 1 0
+ * 2 1
+ * ...
+ * 4G 0xffffffff
+ */
+ if (WARN_ON_ONCE(size > SZ_4G) ||
+ WARN_ON_ONCE(size == 0))
+ return -EINVAL;
+ size = clamp_val(size, 1, SZ_4G);
+ *dest = cpu_to_le32((u32)(size - 1));
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_size32);
+
+void sdxi_serialize_nop(struct sdxi_desc *desc)
+{
+ u32 opcode = (FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_NOP) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_DMAB));
+ u64 csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .nop = (typeof(desc->nop)) {
+ .opcode = cpu_to_le32(opcode),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+}
+
+int sdxi_encode_copy(struct sdxi_desc *desc, const struct sdxi_copy *params)
+{
+ u64 csb_ptr;
+ u32 opcode;
+ __le32 size;
+ int err;
+
+ err = sdxi_encode_size32(params->len, &size);
+ if (err)
+ return err;
+ /*
+ * Reject overlapping src and dst. "Software ... shall not
+ * overlap the source buffer, destination buffer, Atomic
+ * Return Data, or completion status block." - SDXI 1.0 5.6
+ * Memory Consistency Model
+ */
+ if (range_overlaps(&(const struct range) {
+ .start = params->src,
+ .end = params->src + params->len - 1,
+ },
+ &(const struct range) {
+ .start = params->dst,
+ .end = params->dst + params->len - 1,
+ }))
+ return -EINVAL;
+
+ opcode = (FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_COPY) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_DMAB));
+
+ csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .copy = (typeof(desc->copy)) {
+ .opcode = cpu_to_le32(opcode),
+ .size = size,
+ .akey0 = cpu_to_le16(params->src_akey),
+ .akey1 = cpu_to_le16(params->dst_akey),
+ .addr0 = cpu_to_le64(params->src),
+ .addr1 = cpu_to_le64(params->dst),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_copy);
+
+int sdxi_encode_intr(struct sdxi_desc *desc,
+ const struct sdxi_intr *params)
+{
+ u64 csb_ptr;
+ u32 opcode;
+
+ opcode = (FIELD_PREP(SDXI_DSC_SUBTYPE, SDXI_DSC_OP_SUBTYPE_INTR) |
+ FIELD_PREP(SDXI_DSC_TYPE, SDXI_DSC_OP_TYPE_INTR));
+
+ csb_ptr = FIELD_PREP(SDXI_DSC_NP, 1);
+
+ *desc = (typeof(*desc)) {
+ .intr = (typeof(desc->intr)) {
+ .opcode = cpu_to_le32(opcode),
+ .akey = cpu_to_le16(params->akey),
+ .csb_ptr = cpu_to_le64(csb_ptr),
+ },
+ };
+
+ return 0;
+}
+EXPORT_SYMBOL_IF_KUNIT(sdxi_encode_intr);
+
int sdxi_encode_cxt_start(struct sdxi_desc *desc,
const struct sdxi_cxt_start *params)
{
diff --git a/drivers/dma/sdxi/descriptor.h b/drivers/dma/sdxi/descriptor.h
index 5b8fd7cbaa03..14f92c8dea1d 100644
--- a/drivers/dma/sdxi/descriptor.h
+++ b/drivers/dma/sdxi/descriptor.h
@@ -9,6 +9,7 @@
*/
#include <linux/bitfield.h>
+#include <linux/kconfig.h>
#include <linux/minmax.h>
#include <linux/ratelimit.h>
#include <linux/types.h>
@@ -16,6 +17,10 @@
#include "hw.h"
+#if IS_ENABLED(CONFIG_KUNIT)
+int __must_check sdxi_encode_size32(u64 size, __le32 *dest);
+#endif
+
static inline void sdxi_desc_vl_expect(const struct sdxi_desc *desc, bool expected)
{
u8 vl = FIELD_GET(SDXI_DSC_VL, le32_to_cpu(desc->opcode));
@@ -80,6 +85,26 @@ static inline struct sdxi_cxt_range sdxi_cxt_range_single(u16 nr)
return sdxi_cxt_range(nr, nr);
}
+void sdxi_serialize_nop(struct sdxi_desc *desc);
+
+struct sdxi_copy {
+ dma_addr_t src;
+ dma_addr_t dst;
+ u64 len;
+ u16 src_akey;
+ u16 dst_akey;
+};
+
+int sdxi_encode_copy(struct sdxi_desc *desc,
+ const struct sdxi_copy *params);
+
+struct sdxi_intr {
+ u16 akey;
+};
+
+int sdxi_encode_intr(struct sdxi_desc *desc,
+ const struct sdxi_intr *params);
+
struct sdxi_cxt_start {
struct sdxi_cxt_range range;
};
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index 4dcd0a3ff0fd..11d88cfc8819 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -164,6 +164,30 @@ struct sdxi_desc {
static_assert(offsetof(struct tag_, csb_ptr) == \
offsetof(struct sdxi_dsc_generic, csb_ptr))
+ /* SDXI 1.0 Table 6-6: DSC_DMAB_NOP Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_dmab_nop, nop,
+ __u8 rsvd_0[52];
+ );
+
+ /* SDXI 1.0 Table 6-8: DSC_DMAB_COPY Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_dmab_copy, copy,
+ __le32 size;
+ __u8 attr;
+ __u8 rsvd_0[3];
+ __le16 akey0;
+ __le16 akey1;
+ __le64 addr0;
+ __le64 addr1;
+ __u8 rsvd_1[24];
+ );
+
+ /* SDXI 1.0 Table 6-12: DSC_INTR Descriptor Format */
+ define_sdxi_dsc(sdxi_dsc_intr, intr,
+ __u8 rsvd_0[8];
+ __le16 akey;
+ __u8 rsvd_1[42];
+ );
+
/* SDXI 1.0 Table 6-14: DSC_CXT_START Descriptor Format */
define_sdxi_dsc(sdxi_dsc_cxt_start, cxt_start,
__u8 rsvd_0;
@@ -207,11 +231,20 @@ static_assert(sizeof(struct sdxi_desc) == 64);
/* SDXI 1.0 Table 6-1: SDXI Operation Groups */
enum sdxi_dsc_type {
+ SDXI_DSC_OP_TYPE_DMAB = 0x001,
SDXI_DSC_OP_TYPE_ADMIN = 0x002,
+ SDXI_DSC_OP_TYPE_INTR = 0x004,
};
/* SDXI 1.0 Table 6-2: SDXI Operation Groups, Types, and Subtypes */
enum sdxi_dsc_subtype {
+ /* DMA Base */
+ SDXI_DSC_OP_SUBTYPE_NOP = 0x01,
+ SDXI_DSC_OP_SUBTYPE_COPY = 0x03,
+
+ /* Interrupt */
+ SDXI_DSC_OP_SUBTYPE_INTR = 0x00,
+
/* Administrative */
SDXI_DSC_OP_SUBTYPE_CXT_START_NM = 0x03,
SDXI_DSC_OP_SUBTYPE_CXT_STOP = 0x04,
--
2.53.0
* [PATCH 21/23] dmaengine: sdxi: Add unit tests for descriptor encoding
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (19 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 20/23] dmaengine: sdxi: Encode nop, copy, and interrupt descriptors Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 22/23] dmaengine: sdxi: MSI/MSI-X vector allocation and mapping Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 23/23] dmaengine: sdxi: Add DMA engine provider Nathan Lynch via B4 Relay
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Test the encoder function for each descriptor type currently used by
the driver.
The production code uses the GENMASK()/BIT() family of macros to
encode descriptors. The tests instead use the packing API to decode
the descriptors it produces, without relying on those bitmask
definitions.
By limiting what's shared between the real code and the tests we gain
confidence in both. If both the driver code and the tests rely on the
bitfield macros, and then upon adding a new descriptor field the
author mistranslates the bit numbering from the spec, that error is
more likely to propagate to the tests undetected than if the test code
relies on a separate mechanism for decoding descriptors.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Kconfig | 1 +
drivers/dma/sdxi/Makefile | 1 +
drivers/dma/sdxi/descriptor_kunit.c | 484 ++++++++++++++++++++++++++++++++++++
3 files changed, 486 insertions(+)
diff --git a/drivers/dma/sdxi/Kconfig b/drivers/dma/sdxi/Kconfig
index e616d3e323bc..39343eb85614 100644
--- a/drivers/dma/sdxi/Kconfig
+++ b/drivers/dma/sdxi/Kconfig
@@ -11,6 +11,7 @@ config SDXI_KUNIT_TEST
tristate "SDXI unit tests" if !KUNIT_ALL_TESTS
depends on SDXI && KUNIT
default KUNIT_ALL_TESTS
+ select PACKING
help
KUnit tests for parts of the SDXI driver. Does not require
SDXI hardware.
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index 08dd73a45dc7..419c71c2ef6a 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -11,4 +11,5 @@ sdxi-objs += \
sdxi-$(CONFIG_PCI_MSI) += pci.o
obj-$(CONFIG_SDXI_KUNIT_TEST) += \
+ descriptor_kunit.o \
ring_kunit.o
diff --git a/drivers/dma/sdxi/descriptor_kunit.c b/drivers/dma/sdxi/descriptor_kunit.c
new file mode 100644
index 000000000000..1f3c2e7ab2dd
--- /dev/null
+++ b/drivers/dma/sdxi/descriptor_kunit.c
@@ -0,0 +1,484 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI descriptor encoding tests.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ *
+ * While the driver code uses bitfield macros (BIT, GENMASK) to encode
+ * descriptors, these tests use the packing API to decode them.
+ * Capturing the descriptor layout using PACKED_FIELD() is basically a
+ * copy-paste exercise since SDXI defines control structure fields in
+ * terms of bit offsets. Eschewing the bitfield constants such as
+ * SDXI_DSC_VL in the test code makes it possible for the tests to
+ * detect any mistakes in defining them.
+ *
+ * Note that the checks in unpack_fields() can be quite time-consuming
+ * at build time. Uncomment '#define SKIP_PACKING_CHECKS' below if
+ * that's too annoying when working on this code.
+ */
+#include <kunit/device.h>
+#include <kunit/test-bug.h>
+#include <kunit/test.h>
+#include <linux/container_of.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/packing.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+
+#include "descriptor.h"
+
+/* #define SKIP_PACKING_CHECKS */
+
+MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
+
+enum {
+ SDXI_PACKING_QUIRKS = QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST,
+};
+
+
+#define desc_field(_high, _low, _target_struct, _member) \
+ PACKED_FIELD(_high, _low, _target_struct, _member)
+#define desc_flag(_bit, _target_struct, _member) \
+ desc_field(_bit, _bit, _target_struct, _member)
+
+/* DMAB_COPY */
+struct unpacked__copy {
+ u32 size;
+ u8 attr_src;
+ u8 attr_dst;
+ u16 akey0;
+ u16 akey1;
+ u64 addr0;
+ u64 addr1;
+};
+
+#define copy_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked__copy, _member)
+
+static const struct packed_field_u16 copy_subfields[] = {
+ copy_field(63, 32, size),
+ copy_field(67, 64, attr_src),
+ copy_field(71, 68, attr_dst),
+ copy_field(111, 96, akey0),
+ copy_field(127, 112, akey1),
+ copy_field(191, 128, addr0),
+ copy_field(255, 192, addr1),
+};
+
+/* DSC_INTR */
+struct unpacked__intr {
+ u16 akey;
+};
+
+#define intr_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked__intr, _member)
+
+static const struct packed_field_u16 intr_subfields[] = {
+ intr_field(111, 96, akey),
+};
+
+/* DSC_SYNC */
+struct unpacked__sync {
+ u8 flt;
+ bool vf;
+ u16 vf_num;
+ u16 cxt_start;
+ u16 cxt_end;
+ u16 key_start;
+ u16 key_end;
+};
+
+#define sync_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked__sync, _member)
+#define sync_flag(_bit, _member) sync_field(_bit, _bit, _member)
+
+static const struct packed_field_u16 sync_subfields[] = {
+ sync_field(34, 32, flt),
+ sync_flag(47, vf),
+ sync_field(63, 48, vf_num),
+ sync_field(79, 64, cxt_start),
+ sync_field(95, 80, cxt_end),
+ sync_field(111, 96, key_start),
+ sync_field(127, 112, key_end),
+};
+
+/* DSC_CXT_START */
+struct unpacked__cxt_start {
+ bool dv;
+ bool vf;
+ u16 vf_num;
+ u16 cxt_start;
+ u16 cxt_end;
+ u64 db_value;
+};
+
+#define cxt_start_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked__cxt_start, _member)
+#define cxt_start_flag(_bit, _member) cxt_start_field(_bit, _bit, _member)
+
+static const struct packed_field_u16 cxt_start_subfields[] = {
+ cxt_start_flag(46, dv),
+ cxt_start_flag(47, vf),
+ cxt_start_field(63, 48, vf_num),
+ cxt_start_field(79, 64, cxt_start),
+ cxt_start_field(95, 80, cxt_end),
+ cxt_start_field(191, 128, db_value),
+};
+
+/* DSC_CXT_STOP */
+struct unpacked__cxt_stop {
+ bool hs;
+ bool vf;
+ u16 vf_num;
+ u16 cxt_start;
+ u16 cxt_end;
+};
+
+#define cxt_stop_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked__cxt_stop, _member)
+#define cxt_stop_flag(_bit, _member) cxt_stop_field(_bit, _bit, _member)
+
+static const struct packed_field_u16 cxt_stop_subfields[] = {
+ cxt_stop_flag(45, hs),
+ cxt_stop_flag(47, vf),
+ cxt_stop_field(63, 48, vf_num),
+ cxt_stop_field(79, 64, cxt_start),
+ cxt_stop_field(95, 80, cxt_end),
+};
+
+/* DSC_GENERIC */
+struct unpacked_desc {
+ u64 csb_ptr;
+ u16 type;
+ u8 subtype;
+ bool vl;
+ bool se;
+ bool fe;
+ bool ch;
+ bool csr;
+ bool rb;
+ bool np;
+ union {
+ struct unpacked__copy copy;
+ struct unpacked__intr intr;
+ struct unpacked__sync sync;
+ struct unpacked__cxt_start cxt_start;
+ struct unpacked__cxt_stop cxt_stop;
+ };
+};
+
+#define generic_field(_high, _low, _member) \
+ desc_field(_high, _low, struct unpacked_desc, _member)
+#define generic_flag(_bit, _member) generic_field(_bit, _bit, _member)
+
+static const struct packed_field_u16 generic_subfields[] = {
+ generic_flag(0, vl),
+ generic_flag(1, se),
+ generic_flag(2, fe),
+ generic_flag(3, ch),
+ generic_flag(4, csr),
+ generic_flag(5, rb),
+ generic_field(15, 8, subtype),
+ generic_field(26, 16, type),
+ generic_flag(448, np),
+ generic_field(511, 453, csb_ptr),
+};
+
+#ifndef SKIP_PACKING_CHECKS
+#define define_unpack_fn(_T) \
+ static void unpack_ ## _T(struct unpacked_desc *to, \
+ const struct sdxi_desc *from) \
+ { \
+ unpack_fields(from, sizeof(*from), to, \
+ generic_subfields, SDXI_PACKING_QUIRKS); \
+ unpack_fields(from, sizeof(*from), &to->_T, \
+ _T ## _subfields, SDXI_PACKING_QUIRKS); \
+ }
+#else
+#define define_unpack_fn(_T) \
+ static void unpack_ ## _T(struct unpacked_desc *to, \
+ const struct sdxi_desc *from) \
+ { \
+ unpack_fields_u16(from, sizeof(*from), to, \
+ generic_subfields, \
+ ARRAY_SIZE(generic_subfields), \
+ SDXI_PACKING_QUIRKS); \
+ unpack_fields_u16(from, sizeof(*from), &to->_T, \
+ _T ## _subfields, \
+ ARRAY_SIZE(_T ## _subfields), \
+ SDXI_PACKING_QUIRKS); \
+ }
+#endif /* SKIP_PACKING_CHECKS */
+
+define_unpack_fn(intr)
+define_unpack_fn(copy)
+define_unpack_fn(sync)
+define_unpack_fn(cxt_start)
+define_unpack_fn(cxt_stop)
+
+static void desc_poison(struct sdxi_desc *d)
+{
+ memset(d, 0xff, sizeof(*d));
+}
+
+static void encode_size32(struct kunit *t)
+{
+ __le32 res = cpu_to_le32(U32_MAX);
+
+ /* Valid sizes. */
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_size32(1, &res));
+ KUNIT_EXPECT_EQ(t, 0, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_size32(SZ_4K, &res));
+ KUNIT_EXPECT_EQ(t, SZ_4K - 1, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_size32(SZ_4M, &res));
+ KUNIT_EXPECT_EQ(t, SZ_4M - 1, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_size32(SZ_4G - 1, &res));
+ KUNIT_EXPECT_EQ(t, SZ_4G - 2, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_size32(SZ_4G, &res));
+ KUNIT_EXPECT_EQ(t, SZ_4G - 1, le32_to_cpu(res));
+
+ /* Invalid sizes. Ensure the out parameter is unmodified. */
+#define RES_VAL 0x843829
+ res = cpu_to_le32(RES_VAL);
+
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_size32(0, &res));
+ KUNIT_EXPECT_EQ(t, RES_VAL, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_size32(SZ_4G + 1, &res));
+ KUNIT_EXPECT_EQ(t, RES_VAL, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_size32(SZ_8G, &res));
+ KUNIT_EXPECT_EQ(t, RES_VAL, le32_to_cpu(res));
+
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_size32(U64_MAX, &res));
+ KUNIT_EXPECT_EQ(t, RES_VAL, le32_to_cpu(res));
+
+#undef RES_VAL
+}
+
+static void copy(struct kunit *t)
+{
+ struct unpacked_desc unpacked;
+ struct sdxi_desc desc = {};
+ struct sdxi_copy copy = {
+ .src = 0x1000,
+ .dst = 0x2000,
+ .len = 4096,
+ .src_akey = 0,
+ .dst_akey = 0,
+ };
+
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_copy(&desc, ©));
+
+ unpack_copy(&unpacked, &desc);
+ KUNIT_EXPECT_EQ(t, unpacked.vl, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_COPY);
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_DMAB);
+ KUNIT_EXPECT_EQ(t, unpacked.csb_ptr, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.np, 1);
+
+ KUNIT_EXPECT_EQ(t, unpacked.copy.size, copy.len - 1);
+
+ /* Zero isn't a valid size. */
+ desc_poison(&desc);
+ copy.len = 0;
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_copy(&desc, ©));
+
+ /* But 1 is. */
+ desc_poison(&desc);
+ copy.len = 1;
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_copy(&desc, ©));
+ unpack_copy(&unpacked, &desc);
+ KUNIT_EXPECT_EQ(t, unpacked.copy.size, copy.len - 1);
+
+ /* SDXI forbids overlapping source and destination. */
+ desc_poison(&desc);
+ copy.len = 4097;
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_copy(&desc, ©));
+ copy = (typeof(copy)) {
+ .src = 0x4000,
+ .dst = 0x4000,
+ .len = 1,
+ .src_akey = 0,
+ .dst_akey = 0,
+ };
+ KUNIT_EXPECT_EQ(t, -EINVAL, sdxi_encode_copy(&desc, ©));
+
+ desc_poison(&desc);
+ KUNIT_EXPECT_EQ(t, 0,
+ sdxi_encode_copy(&desc,
+ &(struct sdxi_copy) {
+ .src = 0x1000,
+ .dst = 0x2000,
+ .len = 0x100,
+ .src_akey = 1,
+ .dst_akey = 2,
+ }));
+ KUNIT_EXPECT_EQ(t, 0x1000, le64_to_cpu(desc.copy.addr0));
+ KUNIT_EXPECT_EQ(t, 0x2000, le64_to_cpu(desc.copy.addr1));
+ KUNIT_EXPECT_EQ(t, 0x100, 1 + le32_to_cpu(desc.copy.size));
+ KUNIT_EXPECT_EQ(t, 1, le16_to_cpu(desc.copy.akey0));
+ KUNIT_EXPECT_EQ(t, 2, le16_to_cpu(desc.copy.akey1));
+
+ unpack_copy(&unpacked, &desc);
+ KUNIT_EXPECT_EQ(t, unpacked.vl, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_COPY);
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_DMAB);
+ KUNIT_EXPECT_EQ(t, unpacked.csb_ptr, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.np, 1);
+
+ KUNIT_EXPECT_EQ(t, unpacked.copy.size, 0x100 - 1);
+}
+
+static void intr(struct kunit *t)
+{
+ struct unpacked_desc unpacked;
+ struct sdxi_intr intr = {
+ .akey = 1234,
+ };
+ struct sdxi_desc desc;
+
+ desc_poison(&desc);
+ KUNIT_EXPECT_EQ(t, 0, sdxi_encode_intr(&desc, &intr));
+
+ unpack_intr(&unpacked, &desc);
+ KUNIT_EXPECT_EQ(t, unpacked.vl, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_INTR);
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_INTR);
+ KUNIT_EXPECT_EQ(t, unpacked.csb_ptr, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.np, 1);
+
+ KUNIT_EXPECT_EQ(t, unpacked.intr.akey, 1234);
+}
+
+static void cxt_start(struct kunit *t)
+{
+ struct unpacked_desc unpacked;
+ struct sdxi_cxt_start start = {
+ .range = sdxi_cxt_range_single(2),
+ };
+ struct sdxi_desc desc;
+
+ desc_poison(&desc);
+ KUNIT_ASSERT_EQ(t, 0, sdxi_encode_cxt_start(&desc, &start));
+
+ unpack_cxt_start(&unpacked, &desc);
+
+ /* Check op-specific fields. */
+ KUNIT_EXPECT_EQ(t, 0, desc.cxt_start.vflags);
+
+ /*
+ * Check generic fields. Some flags have mandatory values
+ * according to the operation type.
+ */
+ KUNIT_EXPECT_EQ(t, unpacked.vl, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.se, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.fe, 1);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_CXT_START_NM);
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_ADMIN);
+ KUNIT_EXPECT_EQ(t, unpacked.csb_ptr, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.np, 1);
+
+ KUNIT_EXPECT_FALSE(t, unpacked.cxt_start.dv);
+ KUNIT_EXPECT_FALSE(t, unpacked.cxt_start.vf);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_start.cxt_start, 2);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_start.cxt_end, 2);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_start.vf_num, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_start.db_value, 0);
+}
+
+static void cxt_stop(struct kunit *t)
+{
+ struct unpacked_desc unpacked;
+ struct sdxi_cxt_stop stop = {
+ .range = sdxi_cxt_range_single(2),
+ };
+ struct sdxi_desc desc;
+
+ desc_poison(&desc);
+ KUNIT_ASSERT_EQ(t, 0, sdxi_encode_cxt_stop(&desc, &stop));
+
+ unpack_cxt_stop(&unpacked, &desc);
+
+ /* Check op-specific fields. */
+ KUNIT_EXPECT_EQ(t, 0, desc.cxt_start.vflags);
+
+ /*
+ * Check generic fields. Some flags have mandatory values
+ * according to the operation type.
+ */
+ KUNIT_EXPECT_EQ(t, unpacked.vl, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.se, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.fe, 1);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_CXT_STOP);
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_ADMIN);
+ KUNIT_EXPECT_EQ(t, unpacked.csb_ptr, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.np, 1);
+
+ KUNIT_EXPECT_FALSE(t, unpacked.cxt_stop.hs);
+ KUNIT_EXPECT_FALSE(t, unpacked.cxt_stop.vf);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_stop.cxt_start, 2);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_stop.cxt_end, 2);
+ KUNIT_EXPECT_EQ(t, unpacked.cxt_stop.vf_num, 0);
+}
+
+static void sync(struct kunit *t)
+{
+ struct sdxi_sync sync = {
+ .filter = SDXI_SYNC_FLT_STOP,
+ .range = sdxi_cxt_range(1, U16_MAX),
+ };
+ struct sdxi_desc desc;
+ struct unpacked_desc unpacked;
+
+ desc_poison(&desc);
+ KUNIT_ASSERT_EQ(t, 0, sdxi_encode_sync(&desc, &sync));
+ unpack_sync(&unpacked, &desc);
+
+ KUNIT_EXPECT_EQ(t, unpacked.type, SDXI_DSC_OP_TYPE_ADMIN);
+ KUNIT_EXPECT_EQ(t, unpacked.subtype, SDXI_DSC_OP_SUBTYPE_SYNC);
+ KUNIT_EXPECT_EQ(t, unpacked.ch, 0);
+ KUNIT_EXPECT_EQ(t, unpacked.sync.flt, SDXI_SYNC_FLT_STOP);
+ KUNIT_EXPECT_EQ(t, unpacked.sync.cxt_start, 1);
+ KUNIT_EXPECT_EQ(t, unpacked.sync.cxt_end, U16_MAX);
+}
+
+static struct kunit_case generic_desc_tcs[] = {
+ KUNIT_CASE(encode_size32),
+ KUNIT_CASE(copy),
+ KUNIT_CASE(intr),
+ KUNIT_CASE(cxt_start),
+ KUNIT_CASE(cxt_stop),
+ KUNIT_CASE(sync),
+ {}
+};
+
+static int generic_desc_setup_device(struct kunit *t)
+{
+ struct device *dev = kunit_device_register(t, "sdxi-mock-device");
+
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(t, dev);
+ t->priv = dev;
+ return 0;
+}
+
+static struct kunit_suite generic_desc_ts = {
+ .name = "Generic SDXI descriptor encoding",
+ .test_cases = generic_desc_tcs,
+ .init = generic_desc_setup_device,
+};
+kunit_test_suite(generic_desc_ts);
+
+MODULE_DESCRIPTION("SDXI descriptor encoding tests");
+MODULE_AUTHOR("Nathan Lynch");
+MODULE_LICENSE("GPL");
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread* [PATCH 22/23] dmaengine: sdxi: MSI/MSI-X vector allocation and mapping
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (20 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 21/23] dmaengine: sdxi: Add unit tests for descriptor encoding Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
2026-04-10 13:07 ` [PATCH 23/23] dmaengine: sdxi: Add DMA engine provider Nathan Lynch via B4 Relay
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
During PCI probe, allocate a vector per context supported by the
function as reported by the capability register, plus one for the
error log interrupt, which is always vector 0. The rest of the vector
range is available for use with interrupt-generating descriptors.
Introduce sdxi_alloc_vector() and sdxi_free_vector() which are thin
wrappers around the IDA that tracks the allocated vector range.
Introduce sdxi_vector_to_irq() which invokes a new get_irq() bus op to
translate the device-relative index to the Linux IRQ number for use
with request_irq() etc. For PCI this dispatches to pci_irq_vector().
Code such as the DMA engine provider that intends to submit interrupt
descriptors should prepare by using sdxi_alloc_vector() and
sdxi_vector_to_irq(), and clean up by using sdxi_free_vector().
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/device.c | 4 ++++
drivers/dma/sdxi/pci.c | 29 +++++++++++++++++++++++-
drivers/dma/sdxi/sdxi.h | 57 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 89 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index aaff6b15325a..8b11197c5781 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -10,6 +10,7 @@
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
+#include <linux/idr.h>
#include <linux/log2.h>
#include <linux/slab.h>
#include <linux/xarray.h>
@@ -303,6 +304,7 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
sdxi->dev = dev;
sdxi->bus_ops = ops;
+ ida_init(&sdxi->vectors);
xa_init_flags(&sdxi->client_cxts, XA_FLAGS_ALLOC1);
dev_set_drvdata(dev, sdxi);
@@ -323,5 +325,7 @@ void sdxi_unregister(struct device *dev)
sdxi_cxt_exit(cxt);
xa_destroy(&sdxi->client_cxts);
+ ida_destroy(&sdxi->vectors);
+
sdxi_dev_stop(sdxi);
}
diff --git a/drivers/dma/sdxi/pci.c b/drivers/dma/sdxi/pci.c
index 8e4dfde078ff..99430eaa583d 100644
--- a/drivers/dma/sdxi/pci.c
+++ b/drivers/dma/sdxi/pci.c
@@ -5,6 +5,7 @@
* Copyright Advanced Micro Devices, Inc.
*/
+#include <linux/bitfield.h>
#include <linux/dev_printk.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
@@ -13,6 +14,7 @@
#include <linux/module.h>
#include <linux/pci.h>
+#include "mmio.h"
#include "sdxi.h"
enum sdxi_mmio_bars {
@@ -29,7 +31,8 @@ static int sdxi_pci_init(struct sdxi_dev *sdxi)
{
struct pci_dev *pdev = sdxi_to_pci_dev(sdxi);
struct device *dev = &pdev->dev;
- int ret;
+ unsigned int cap1_max_cxt;
+ int vecs, ret;
ret = pcim_enable_device(pdev);
if (ret)
@@ -53,12 +56,36 @@ static int sdxi_pci_init(struct sdxi_dev *sdxi)
"failed to map doorbell region\n");
}
+ /*
+ * Allocate the minimum required set of vectors plus one for
+ * each client context supported by the function.
+ */
+ cap1_max_cxt = FIELD_GET(SDXI_MMIO_CAP1_MAX_CXT,
+ sdxi_read64(sdxi, SDXI_MMIO_CAP1));
+ vecs = pci_alloc_irq_vectors(pdev, SDXI_MIN_VECTORS,
+ SDXI_MIN_VECTORS + cap1_max_cxt,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+ if (vecs < 0) {
+ return dev_err_probe(dev, vecs,
+ "failed to allocate MSIs (max_cxt=%u)\n",
+ cap1_max_cxt);
+ }
+
+ sdxi->nr_vectors = vecs;
+ sdxi_dbg(sdxi, "allocated %u vectors\n", sdxi->nr_vectors);
+
pci_set_master(pdev);
return 0;
}
+static int sdxi_pci_get_irq(struct sdxi_dev *sdxi, unsigned int nr)
+{
+ return pci_irq_vector(sdxi_to_pci_dev(sdxi), nr);
+}
+
static const struct sdxi_bus_ops sdxi_pci_ops = {
.init = sdxi_pci_init,
+ .get_irq = sdxi_pci_get_irq,
};
static int sdxi_pci_probe(struct pci_dev *pdev,
diff --git a/drivers/dma/sdxi/sdxi.h b/drivers/dma/sdxi/sdxi.h
index da33719735ab..d4e1236a775e 100644
--- a/drivers/dma/sdxi/sdxi.h
+++ b/drivers/dma/sdxi/sdxi.h
@@ -8,8 +8,10 @@
#ifndef DMA_SDXI_H
#define DMA_SDXI_H
+#include <linux/bug.h>
#include <linux/compiler_types.h>
#include <linux/dev_printk.h>
+#include <linux/idr.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/types.h>
#include <linux/xarray.h>
@@ -27,6 +29,21 @@
#define L1_CXT_CTRL_PTR_SHIFT 6
#define L1_CXT_AKEY_PTR_SHIFT 12
+enum {
+ /*
+ * Per SDXI 1.0 3.4 Error Log, the error log interrupt is
+ * always vector 0.
+ */
+ SDXI_ERROR_VECTOR = 0,
+
+ /*
+ * Request at least one vector to account for the error log
+ * interrupt. Increment this if the driver gains more
+ * dedicated interrupts (e.g. one for the admin context).
+ */
+ SDXI_MIN_VECTORS = 1,
+};
+
struct sdxi_dev;
/**
@@ -39,6 +56,10 @@ struct sdxi_bus_ops {
* function initialization.
*/
int (*init)(struct sdxi_dev *sdxi);
+ /**
+ * @get_irq: Map device interrupt index to Linux IRQ number.
+ */
+ int (*get_irq)(struct sdxi_dev *sdxi, unsigned int index);
};
struct sdxi_dev {
@@ -61,6 +82,9 @@ struct sdxi_dev {
struct dma_pool *cxt_ctl_pool;
struct dma_pool *cst_blk_pool;
+ unsigned int nr_vectors;
+ struct ida vectors;
+
struct sdxi_cxt *admin_cxt;
struct xarray client_cxts; /* context id -> (struct sdxi_cxt *) */
@@ -76,6 +100,39 @@ static inline struct device *sdxi_to_dev(const struct sdxi_dev *sdxi)
#define sdxi_info(s, fmt, ...) dev_info(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
#define sdxi_err(s, fmt, ...) dev_err(sdxi_to_dev(s), fmt, ## __VA_ARGS__)
+/**
+ * sdxi_alloc_vector() - Allocate an interrupt vector.
+ *
+ * A vector that will have the same lifetime as the device does not
+ * need to be released explicitly. Otherwise the vector must be
+ * released with sdxi_free_vector().
+ */
+static inline int sdxi_alloc_vector(struct sdxi_dev *sdxi)
+{
+ return ida_alloc_max(&sdxi->vectors, sdxi->nr_vectors - 1,
+ GFP_KERNEL);
+}
+
+/**
+ * sdxi_free_vector() - Release a previously allocated index.
+ */
+static inline void sdxi_free_vector(struct sdxi_dev *sdxi, unsigned int nr)
+{
+ ida_free(&sdxi->vectors, nr);
+}
+
+/**
+ * sdxi_vector_to_irq() - Translate an allocated interrupt vector to
+ * Linux IRQ number suitable for passing to
+ * request_irq() et al.
+ */
+static inline int sdxi_vector_to_irq(struct sdxi_dev *sdxi, unsigned int nr)
+{
+ /* Moan if the index isn't currently allocated. */
+ WARN_ON_ONCE(!ida_exists(&sdxi->vectors, nr));
+ return sdxi->bus_ops->get_irq(sdxi, nr);
+}
+
int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops);
void sdxi_unregister(struct device *dev);
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread* [PATCH 23/23] dmaengine: sdxi: Add DMA engine provider
2026-04-10 13:07 [PATCH 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
` (21 preceding siblings ...)
2026-04-10 13:07 ` [PATCH 22/23] dmaengine: sdxi: MSI/MSI-X vector allocation and mapping Nathan Lynch via B4 Relay
@ 2026-04-10 13:07 ` Nathan Lynch via B4 Relay
22 siblings, 0 replies; 24+ messages in thread
From: Nathan Lynch via B4 Relay @ 2026-04-10 13:07 UTC (permalink / raw)
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
Stephen Bates, PradeepVineshReddy.Kodamati, John.Kariuki,
linux-pci, linux-kernel, dmaengine, Nathan Lynch
From: Nathan Lynch <nathan.lynch@amd.com>
Register a DMA engine provider that implements memcpy. The number of
channels per SDXI function can be controlled via a module
parameter (dma_channels). The provider uses the virt-dma library.
This survives dmatest runs with both polled and interrupt-signaled
completion modes, with the following debug options and sanitizers
enabled:
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_KASAN=y
CONFIG_PROVE_LOCKING=y
CONFIG_SLUB_DEBUG_ON=y
CONFIG_UBSAN=y
Example test:
$ qemu-system-x86_64 -m 4G -smp 4 -kernel ~/bzImage -nographic \
-append 'console=ttyS0 debug sdxi.dma_channels=2 dmatest.polled=0 \
dmatest.iterations=10000 dmatest.run=1 dmatest.threads_per_chan=2 \
sdxi.dyndbg=+p' -device vfio-pci,host=0000:01:02.1 \
-initrd ~/rootfs.cpio -M q35 -accel kvm
[...]
# dmesg | grep -i -e sdxi -e dmatest
dmatest: No channels configured, continue with any
sdxi 0000:00:03.0: allocated 64 vectors
sdxi 0000:00:03.0: sdxi_dev_stop: function state: stopped
sdxi 0000:00:03.0: SDXI 1.0 device found
sdxi 0000:00:03.0: sdxi_dev_start: function state: active
sdxi 0000:00:03.0: activated
dmatest: Added 2 threads using dma0chan0
dmatest: Added 2 threads using dma0chan1
dmatest: Started 2 threads using dma0chan0
dmatest: Started 2 threads using dma0chan1
dmatest: dma0chan1-copy1: summary 10000 tests, 0 failures
dmatest: dma0chan1-copy0: summary 10000 tests, 0 failures
dmatest: dma0chan0-copy1: summary 10000 tests, 0 failures
dmatest: dma0chan0-copy0: summary 10000 tests, 0 failures
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
---
drivers/dma/sdxi/Kconfig | 1 +
drivers/dma/sdxi/Makefile | 1 +
drivers/dma/sdxi/device.c | 2 +
drivers/dma/sdxi/dma.c | 497 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/dma/sdxi/dma.h | 12 ++
5 files changed, 513 insertions(+)
diff --git a/drivers/dma/sdxi/Kconfig b/drivers/dma/sdxi/Kconfig
index 39343eb85614..41158e77b991 100644
--- a/drivers/dma/sdxi/Kconfig
+++ b/drivers/dma/sdxi/Kconfig
@@ -1,6 +1,7 @@
config SDXI
tristate "SDXI support"
select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
help
Enable support for Smart Data Accelerator Interface (SDXI)
Platform Data Mover devices. SDXI is a vendor-neutral
diff --git a/drivers/dma/sdxi/Makefile b/drivers/dma/sdxi/Makefile
index 419c71c2ef6a..80b1871fe7b5 100644
--- a/drivers/dma/sdxi/Makefile
+++ b/drivers/dma/sdxi/Makefile
@@ -6,6 +6,7 @@ sdxi-objs += \
context.o \
descriptor.o \
device.o \
+ dma.o \
ring.o
sdxi-$(CONFIG_PCI_MSI) += pci.o
diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
index 8b11197c5781..e159c9939fb4 100644
--- a/drivers/dma/sdxi/device.c
+++ b/drivers/dma/sdxi/device.c
@@ -16,6 +16,7 @@
#include <linux/xarray.h>
#include "context.h"
+#include "dma.h"
#include "hw.h"
#include "mmio.h"
#include "sdxi.h"
@@ -290,6 +291,7 @@ static int sdxi_device_init(struct sdxi_dev *sdxi)
if (err)
return err;
+ sdxi_dma_register(sdxi);
return 0;
}
diff --git a/drivers/dma/sdxi/dma.c b/drivers/dma/sdxi/dma.c
new file mode 100644
index 000000000000..238b3140c90f
--- /dev/null
+++ b/drivers/dma/sdxi/dma.c
@@ -0,0 +1,497 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SDXI dmaengine provider
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <linux/cleanup.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/container_of.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/overflow.h>
+#include <linux/spinlock.h>
+
+#include "../dmaengine.h"
+#include "../virt-dma.h"
+#include "completion.h"
+#include "context.h"
+#include "descriptor.h"
+#include "dma.h"
+#include "ring.h"
+#include "sdxi.h"
+
+static unsigned short dma_channels = 1;
+module_param(dma_channels, ushort, 0644);
+MODULE_PARM_DESC(dma_channels, "DMA channels per function (default: 1)");
+
+/*
+ * An SDXI context is allocated for each channel configured.
+ *
+ * Each context has a descriptor ring with a minimum of 1K entries.
+ * SDXI supports a variety of primitive operations, e.g. copy,
+ * interrupt, nop. Each Linux virtual DMA descriptor may be composed
+ * of a grouping of SDXI descriptors in the ring. E.g. two SDXI
+ * descriptors (copy, then interrupt) to implement a
+ * dma_async_tx_descriptor for memcpy with DMA_PREP_INTERRUPT flag.
+ *
+ * dma_device->device_prep_dma_* functions reserve space in the
+ * descriptor ring and serialize SDXI descriptors implementing the
+ * operation to the reserved slots, leaving their valid (vl) bits
+ * clear. A single virtual descriptor is added to the allocated list.
+ *
+ * dma_async_tx_descriptor->tx_submit() invokes vchan_tx_submit(),
+ * which merely assigns a cookie and moves the txd to the submitted
+ * list without entering the SDXI provider code.
+ *
+ * dma_device->device_issue_pending() (sdxi_dma_issue_pending()) sets vl
+ * on each SDXI descriptor reachable from the submitted list, then
+ * rings the context doorbell. The submitted txds are moved to the
+ * issued list via vchan_issue_pending().
+ */
+
+struct sdxi_dma_chan {
+ struct virt_dma_chan vchan;
+ struct sdxi_cxt *cxt;
+ unsigned int vector;
+ unsigned int irq;
+ struct sdxi_akey_ent *akey;
+};
+
+struct sdxi_dma_dev {
+ struct dma_device dma_dev;
+ size_t nr_channels;
+ struct sdxi_dma_chan sdchan[] __counted_by(nr_channels);
+};
+
+/*
+ * A virtual descriptor can correspond to a group of SDXI hardware descriptors.
+ */
+struct sdxi_dma_desc {
+ struct virt_dma_desc vdesc;
+ struct sdxi_ring_resv resv;
+ struct sdxi_completion *completion;
+};
+
+static struct sdxi_dma_chan *to_sdxi_dma_chan(const struct dma_chan *dma_chan)
+{
+ const struct virt_dma_chan *vchan;
+
+ vchan = container_of_const(dma_chan, struct virt_dma_chan, chan);
+ return container_of(vchan, struct sdxi_dma_chan, vchan);
+}
+
+static struct sdxi_dma_desc *
+to_sdxi_dma_desc(const struct virt_dma_desc *vdesc)
+{
+ return container_of(vdesc, struct sdxi_dma_desc, vdesc);
+}
+
+static void sdxi_tx_desc_free(struct virt_dma_desc *vdesc)
+{
+ struct sdxi_dma_desc *sddesc = to_sdxi_dma_desc(vdesc);
+
+ sdxi_completion_free(sddesc->completion);
+ kfree(to_sdxi_dma_desc(vdesc));
+}
+
+static struct sdxi_dma_desc *
+prep_memcpy_intr(struct dma_chan *dma_chan, const struct sdxi_copy *params)
+{
+ struct sdxi_cxt *cxt = to_sdxi_dma_chan(dma_chan)->cxt;
+ struct sdxi_akey_ent *akey = to_sdxi_dma_chan(dma_chan)->akey;
+ struct sdxi_desc *copy, *intr;
+
+ struct sdxi_completion *comp __free(sdxi_completion) = sdxi_completion_alloc(cxt->sdxi);
+ if (!comp)
+ return NULL;
+
+ struct sdxi_dma_desc *sddesc __free(kfree) = kzalloc(sizeof(*sddesc), GFP_NOWAIT);
+ if (!sddesc)
+ return NULL;
+
+ if (sdxi_ring_try_reserve(cxt->ring_state, 2, &sddesc->resv))
+ return NULL;
+
+ copy = sdxi_ring_resv_next(&sddesc->resv);
+ (void)sdxi_encode_copy(copy, params); /* Caller checked validity. */
+ sdxi_desc_set_fence(copy); /* Conservatively fence every descriptor. */
+ sdxi_completion_attach(copy, comp);
+
+ sddesc->completion = no_free_ptr(comp);
+
+ intr = sdxi_ring_resv_next(&sddesc->resv);
+ sdxi_encode_intr(intr, &(const struct sdxi_intr) {
+ .akey = sdxi_akey_index(cxt, akey),
+ });
+ /* Raise the interrupt only after the copy has completed. */
+ sdxi_desc_set_fence(intr);
+ return_ptr(sddesc);
+}
+
+static struct sdxi_dma_desc *
+prep_memcpy_polled(struct dma_chan *dma_chan, const struct sdxi_copy *params)
+{
+ struct sdxi_cxt *cxt = to_sdxi_dma_chan(dma_chan)->cxt;
+ struct sdxi_desc *copy;
+
+ struct sdxi_completion *comp __free(sdxi_completion) = sdxi_completion_alloc(cxt->sdxi);
+ if (!comp)
+ return NULL;
+
+ struct sdxi_dma_desc *sddesc __free(kfree) = kzalloc(sizeof(*sddesc), GFP_NOWAIT);
+ if (!sddesc)
+ return NULL;
+
+ if (sdxi_ring_try_reserve(cxt->ring_state, 1, &sddesc->resv))
+ return NULL;
+
+ copy = sdxi_ring_resv_next(&sddesc->resv);
+ (void)sdxi_encode_copy(copy, params); /* Caller checked validity. */
+ sdxi_completion_attach(copy, comp);
+
+ sddesc->completion = no_free_ptr(comp);
+ return_ptr(sddesc);
+}
+
+static struct dma_async_tx_descriptor *
+sdxi_dma_prep_memcpy(struct dma_chan *dma_chan, dma_addr_t dst,
+ dma_addr_t src, size_t len, unsigned long flags)
+{
+ struct sdxi_akey_ent *akey = to_sdxi_dma_chan(dma_chan)->akey;
+ struct sdxi_cxt *cxt = to_sdxi_dma_chan(dma_chan)->cxt;
+ u16 akey_index = sdxi_akey_index(cxt, akey);
+ struct sdxi_dma_desc *sddesc;
+ struct sdxi_copy copy = {
+ .src = src,
+ .dst = dst,
+ .src_akey = akey_index,
+ .dst_akey = akey_index,
+ .len = len,
+ };
+
+ /*
+ * Perform a trial encode to a dummy descriptor on the stack
+ * so we can reject bad inputs without touching the ring
+ * state.
+ */
+ if (sdxi_encode_copy(&(struct sdxi_desc){}, ©))
+ return NULL;
+
+ sddesc = (flags & DMA_PREP_INTERRUPT) ?
+ prep_memcpy_intr(dma_chan, ©) :
+ prep_memcpy_polled(dma_chan, ©);
+
+ if (!sddesc)
+ return NULL;
+
+ return vchan_tx_prep(to_virt_chan(dma_chan), &sddesc->vdesc, flags);
+}
+
+static enum dma_status sdxi_tx_status(struct dma_chan *chan,
+ dma_cookie_t cookie,
+ struct dma_tx_state *state)
+{
+ struct sdxi_dma_chan *sdchan = to_sdxi_dma_chan(chan);
+ struct sdxi_dma_desc *sddesc;
+ enum dma_status status;
+ struct virt_dma_desc *vdesc;
+
+ status = dma_cookie_status(chan, cookie, state);
+ if (status == DMA_COMPLETE)
+ return status;
+
+ guard(spinlock_irqsave)(&sdchan->vchan.lock);
+
+ vdesc = vchan_find_desc(&sdchan->vchan, cookie);
+ if (!vdesc)
+ return status;
+
+ sddesc = to_sdxi_dma_desc(vdesc);
+
+ if (WARN_ON_ONCE(!sddesc->completion))
+ return DMA_ERROR;
+
+ if (!sdxi_completion_signaled(sddesc->completion))
+ return DMA_IN_PROGRESS;
+
+ if (sdxi_completion_errored(sddesc->completion))
+ return DMA_ERROR;
+
+ list_del(&vdesc->node);
+ vchan_cookie_complete(vdesc);
+
+ return dma_cookie_status(chan, cookie, state);
+}
+
+static void sdxi_dma_issue_pending(struct dma_chan *dma_chan)
+{
+ struct virt_dma_chan *vchan = to_virt_chan(dma_chan);
+ struct virt_dma_desc *vdesc;
+ u64 dbval = 0;
+
+ scoped_guard(spinlock_irqsave, &vchan->lock) {
+ /*
+ * This can happen with racing submitters.
+ */
+ if (list_empty(&vchan->desc_submitted))
+ return;
+
+ list_for_each_entry(vdesc, &vchan->desc_submitted, node) {
+ struct sdxi_dma_desc *sddesc = to_sdxi_dma_desc(vdesc);
+ struct sdxi_desc *hwdesc;
+
+ sdxi_ring_resv_foreach(&sddesc->resv, hwdesc)
+ sdxi_desc_make_valid(hwdesc);
+ /*
+ * The reservations ought to be ordered
+ * ascending, but use umax() just in case.
+ */
+ dbval = umax(sdxi_ring_resv_dbval(&sddesc->resv), dbval);
+ }
+
+ vchan_issue_pending(vchan);
+ }
+
+ /*
+ * The implementation is required to handle out-of-order
+ * doorbell updates; we can do this after dropping the
+ * lock.
+ */
+ sdxi_cxt_push_doorbell(to_sdxi_dma_chan(dma_chan)->cxt, dbval);
+}
+
+static int sdxi_dma_terminate_all(struct dma_chan *dma_chan)
+{
+ struct virt_dma_chan *vchan = to_virt_chan(dma_chan);
+ u64 dbval = 0;
+
+ /*
+ * Allocated and submitted txds are in the ring but not valid
+ * yet. Overwrite them with nops and then set their valid
+ * bits.
+ *
+ * The implementation may start consuming these as soon as the
+ * valid bits flip. sdxi_dma_synchronize() will ensure they're
+ * all done.
+ */
+ scoped_guard(spinlock_irqsave, &vchan->lock) {
+ struct virt_dma_desc *vdesc;
+ LIST_HEAD(head);
+
+ list_splice_tail_init(&vchan->desc_allocated, &head);
+ list_splice_tail_init(&vchan->desc_submitted, &head);
+
+ if (list_empty(&head))
+ return 0;
+
+ list_for_each_entry(vdesc, &head, node) {
+ struct sdxi_dma_desc *sddesc = to_sdxi_dma_desc(vdesc);
+ struct sdxi_desc *hwdesc;
+
+ sdxi_ring_resv_foreach(&sddesc->resv, hwdesc) {
+ sdxi_serialize_nop(hwdesc);
+ sdxi_desc_make_valid(hwdesc);
+ }
+
+ dbval = umax(sdxi_ring_resv_dbval(&sddesc->resv), dbval);
+ }
+
+ list_splice_tail(&head, &vchan->desc_terminated);
+ }
+
+ sdxi_cxt_push_doorbell(to_sdxi_dma_chan(dma_chan)->cxt, dbval);
+
+ return 0;
+}
+
+static void sdxi_dma_synchronize(struct dma_chan *dma_chan)
+{
+ struct sdxi_cxt *cxt = to_sdxi_dma_chan(dma_chan)->cxt;
+ struct sdxi_ring_resv resv;
+ struct sdxi_desc *nop;
+
+ /* Submit a single nop with fence and wait for it to complete. */
+
+ if (sdxi_ring_reserve(cxt->ring_state, 1, &resv))
+ return;
+
+ struct sdxi_completion *comp __free(sdxi_completion) = sdxi_completion_alloc(cxt->sdxi);
+ if (!comp)
+ return;
+
+ nop = sdxi_ring_resv_next(&resv);
+ sdxi_serialize_nop(nop);
+ sdxi_completion_attach(nop, comp);
+ sdxi_desc_set_fence(nop);
+ sdxi_desc_make_valid(nop);
+ sdxi_cxt_push_doorbell(cxt, sdxi_ring_resv_dbval(&resv));
+ sdxi_completion_poll(comp);
+
+ vchan_synchronize(to_virt_chan(dma_chan));
+}
+
+static irqreturn_t sdxi_dma_cxt_irq(int irq, void *data)
+{
+ struct sdxi_dma_chan *sdchan = data;
+ struct virt_dma_chan *vchan = &sdchan->vchan;
+ struct virt_dma_desc *vdesc;
+ bool completed = false;
+
+ guard(spinlock_irqsave)(&vchan->lock);
+
+ while ((vdesc = vchan_next_desc(vchan))) {
+ struct sdxi_dma_desc *sddesc = to_sdxi_dma_desc(vdesc);
+
+ if (!sdxi_completion_signaled(sddesc->completion))
+ break;
+
+ list_del(&vdesc->node);
+ vchan_cookie_complete(&sddesc->vdesc);
+ completed = true;
+ }
+
+ if (completed)
+ sdxi_ring_wake_up(sdchan->cxt->ring_state);
+
+ return IRQ_HANDLED;
+}
+
+static int sdxi_dma_alloc_chan_resources(struct dma_chan *dma_chan)
+{
+ struct sdxi_dev *sdxi = dev_get_drvdata(dma_chan->device->dev);
+ struct sdxi_dma_chan *sdchan = to_sdxi_dma_chan(dma_chan);
+ int vector, irq, err;
+
+ sdchan->cxt = sdxi_cxt_new(sdxi);
+ if (!sdchan->cxt)
+ return -ENOMEM;
+ /*
+ * This irq and akey setup should perhaps all be pushed into
+ * the context allocation.
+ */
+ err = vector = sdxi_alloc_vector(sdxi);
+ if (vector < 0)
+ goto exit_cxt;
+
+ sdchan->vector = vector;
+
+ err = irq = sdxi_vector_to_irq(sdxi, vector);
+ if (irq < 0)
+ goto free_vector;
+
+ sdchan->irq = irq;
+
+ /*
+ * Note this akey entry is used for both the completion
+ * interrupt and source and destination access for copies.
+ */
+ sdchan->akey = sdxi_alloc_akey(sdchan->cxt);
+ if (!sdchan->akey) {
+ err = -ENOMEM;
+ goto free_vector;
+ }
+
+ *sdchan->akey = (typeof(*sdchan->akey)) {
+ .intr_num = cpu_to_le16(FIELD_PREP(SDXI_AKEY_ENT_VL, 1) |
+ FIELD_PREP(SDXI_AKEY_ENT_IV, 1) |
+ FIELD_PREP(SDXI_AKEY_ENT_INTR_NUM,
+ vector)),
+ };
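The akey initializer above packs the valid, interrupt-valid, and vector-number fields into one little-endian word with FIELD_PREP(). A userspace sketch of how FIELD_PREP() places a value under a contiguous bitmask (the mask constants here are illustrative only, not the real SDXI_AKEY_ENT_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative masks only; not the real SDXI_AKEY_ENT_* values. */
#define SKETCH_VL       0x0001u /* entry valid */
#define SKETCH_IV       0x0002u /* interrupt valid */
#define SKETCH_INTR_NUM 0x0ff0u /* interrupt vector number */

/*
 * Userspace equivalent of the kernel's FIELD_PREP(): multiply @val by
 * the lowest set bit of @mask (i.e. shift it into position), then
 * clamp the result to the mask.
 */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val * (mask & -mask)) & mask;
}
```

Composing `field_prep(SKETCH_VL, 1) | field_prep(SKETCH_IV, 1) | field_prep(SKETCH_INTR_NUM, vector)` mirrors the shape of the intr_num initializer, prior to the cpu_to_le16() conversion.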
+
+ err = request_irq(sdchan->irq, sdxi_dma_cxt_irq,
+ IRQF_TRIGGER_NONE, "SDXI DMAengine", sdchan);
+ if (err)
+ goto free_akey;
+
+ err = sdxi_start_cxt(sdchan->cxt);
+ if (err)
+ goto free_irq;
+
+ return 0;
+free_irq:
+ free_irq(sdchan->irq, sdchan);
+free_akey:
+ sdxi_free_akey(sdchan->cxt, sdchan->akey);
+free_vector:
+ sdxi_free_vector(sdxi, vector);
+exit_cxt:
+ sdxi_cxt_exit(sdchan->cxt);
+ return err;
+}
+
+static void sdxi_dma_free_chan_resources(struct dma_chan *dma_chan)
+{
+ struct sdxi_dma_chan *sdchan = to_sdxi_dma_chan(dma_chan);
+
+ sdxi_stop_cxt(sdchan->cxt);
+ free_irq(sdchan->irq, sdchan);
+ sdxi_free_vector(sdchan->cxt->sdxi, sdchan->vector);
+ sdxi_free_akey(sdchan->cxt, sdchan->akey);
+ vchan_free_chan_resources(to_virt_chan(dma_chan));
+ sdxi_cxt_exit(sdchan->cxt);
+}
+
+int sdxi_dma_register(struct sdxi_dev *sdxi)
+{
+ struct device *dev = sdxi_to_dev(sdxi);
+ struct sdxi_dma_dev *sddev;
+ struct dma_device *dma_dev;
+ int err;
+
+ if (!dma_channels)
+ return 0;
+ /*
+ * Note that this code assumes the device supports the
+ * interrupt operation group (IntrGrp), which is optional. See
+ * SDXI 1.0 Table 6-1 SDXI Operation Groups.
+ *
+ * TODO: check sdxi->op_grp_cap for IntrGrp support and error
+ * out if it's missing.
+ */
+
+ sddev = devm_kzalloc(dev, struct_size(sddev, sdchan, dma_channels),
+ GFP_KERNEL);
+ if (!sddev)
+ return -ENOMEM;
+
+ sddev->nr_channels = dma_channels;
+
+ dma_dev = &sddev->dma_dev;
+ *dma_dev = (typeof(*dma_dev)) {
+ .dev = sdxi_to_dev(sdxi),
+ .src_addr_widths = DMA_SLAVE_BUSWIDTH_64_BYTES,
+ .dst_addr_widths = DMA_SLAVE_BUSWIDTH_64_BYTES,
+ .directions = BIT(DMA_MEM_TO_MEM),
+ .residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR,
+
+ .device_alloc_chan_resources = sdxi_dma_alloc_chan_resources,
+ .device_free_chan_resources = sdxi_dma_free_chan_resources,
+
+ .device_prep_dma_memcpy = sdxi_dma_prep_memcpy,
+
+ .device_terminate_all = sdxi_dma_terminate_all,
+ .device_synchronize = sdxi_dma_synchronize,
+ .device_tx_status = sdxi_tx_status,
+ .device_issue_pending = sdxi_dma_issue_pending,
+ };
+
+ dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
+ dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ INIT_LIST_HEAD(&dma_dev->channels);
+
+ for (size_t i = 0; i < sddev->nr_channels; ++i) {
+ struct sdxi_dma_chan *sdchan = &sddev->sdchan[i];
+
+ sdchan->vchan.desc_free = sdxi_tx_desc_free;
+ vchan_init(&sdchan->vchan, &sddev->dma_dev);
+ }
+
+ err = dmaenginem_async_device_register(dma_dev);
+ if (err)
+ return dev_warn_probe(dev, err, "failed to register dma device\n");
+
+ return 0;
+}
diff --git a/drivers/dma/sdxi/dma.h b/drivers/dma/sdxi/dma.h
new file mode 100644
index 000000000000..4ff3c2cb67fc
--- /dev/null
+++ b/drivers/dma/sdxi/dma.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright Advanced Micro Devices, Inc. */
+
+#ifndef DMA_SDXI_DMA_H
+#define DMA_SDXI_DMA_H
+
+struct sdxi_dev;
+
+int sdxi_dma_register(struct sdxi_dev *sdxi);
+void sdxi_dma_unregister(struct sdxi_dev *sdxi);
+
+#endif /* DMA_SDXI_DMA_H */
--
2.53.0