From: Dan Williams <dan.j.williams@intel.com>
To: linux-coco@lists.linux.dev, linux-pci@vger.kernel.org
Cc: gregkh@linuxfoundation.org, aik@amd.com, aneesh.kumar@kernel.org,
	yilun.xu@linux.intel.com, bhelgaas@google.com,
	alistair23@gmail.com, lukas@wunner.de, jgg@nvidia.com,
	Samuel Ortiz <sameo@rivosinc.com>
Subject: [PATCH v2 13/19] samples/devsec: Introduce a PCI device-security bus + endpoint sample
Date: Mon,  2 Mar 2026 16:02:01 -0800	[thread overview]
Message-ID: <20260303000207.1836586-14-dan.j.williams@intel.com> (raw)
In-Reply-To: <20260303000207.1836586-1-dan.j.williams@intel.com>

Establish just enough emulated PCI infrastructure to register a sample
TSM (TEE Security Manager) driver and have it discover a device capable
of IDE (link encryption) and TDISP (TEE Device Interface Security
Protocol).

Use the existing CONFIG_PCI_BRIDGE_EMUL to emulate an IDE capable root
port, and open code the emulation of an endpoint device via simulated
configuration cycle responses.

The devsec_link_tsm driver responds to the PCI core TSM operations as if it
successfully exercised the given interface security protocol messages.

The devsec_bus and devsec_link_tsm drivers can be loaded in either order to
reflect cases like SEV-TIO where the TSM is PCI-device firmware, and cases
like TDX Connect where the TSM is a software agent running on the host CPU.

Follow-on patches exercise the IDE establishment core and the TDISP
operations that guests execute. For now, just successfully complete setup
and teardown of the DSM (device security manager) context as a building
block for management of TDI (TEE device interface) instances.

Note that two host bridges are emulated to regression test a bug report
[1] against the initial TSM-based IDE establishment.

Trimmed kernel logs from loading the devsec sample modules and connecting
a device:

 # modprobe devsec_bus
    faux_driver devsec_bus: PCI host bridge to bus 10000:00
    pci_bus 10000:00: root bus resource [bus 00-01]
    pci_bus 10000:00: root bus resource [mem 0xf0000000-0xf03fffff]
    pci 10000:00:00.0: [8086:ffff] type 01 class 0x060400 PCIe Root Port
    pci 10000:00:00.0: PCI bridge to [bus 01]
    pci 10000:00:00.0:   bridge window [mem 0xf0000000-0xf03fffff]
    pci 10000:01:00.0: [8086:ffff] type 00 class 0x120000 PCIe Endpoint
    pci 10000:01:00.1: [8086:ffff] type 00 class 0x120000 PCIe Endpoint
    pci 10000:01:00.0: BAR 0 [mem 0xf0000000-0xf01fffff]: assigned
    pci 10000:01:00.1: BAR 0 [mem 0xf0200000-0xf03fffff]: assigned
    pcieport 10000:00:00.0: enabling device (0000 -> 0002)
    faux_driver devsec_bus: PCI host bridge to bus 10001:02
    pci_bus 10001:02: root bus resource [bus 02-03]
    pci_bus 10001:02: root bus resource [mem 0xf0400000-0xf07fffff]
    pci 10001:02:00.0: [8086:ffff] type 01 class 0x060400 PCIe Root Port
    pci 10001:02:00.0: PCI bridge to [bus 03]
    pci 10001:02:00.0:   bridge window [mem 0xf0400000-0xf07fffff]
    pci 10001:03:00.0: [8086:ffff] type 00 class 0x120000 PCIe Endpoint
    pci 10001:03:00.1: [8086:ffff] type 00 class 0x120000 PCIe Endpoint
    pci 10001:03:00.0: BAR 0 [mem 0xf0400000-0xf05fffff]: assigned
    pci 10001:03:00.1: BAR 0 [mem 0xf0600000-0xf07fffff]: assigned
    pcieport 10001:02:00.0: enabling device (0000 -> 0002)

 # modprobe devsec_link_tsm
    link_sysfs_enable: pci 10000:01:00.0: PCI/TSM: Platform TEE Security Manager detected (IDE TEE)
    link_sysfs_enable: pci 10001:03:00.0: PCI/TSM: Platform TEE Security Manager detected (IDE TEE)

 # echo tsm0 > /sys/bus/pci/devices/10000:01:00.0/tsm/connect
    devsec_tsm_pf0_probe: pci 10000:01:00.0: devsec: TSM enabled
    link_sysfs_enable: pci 10000:01:00.1: PCI/TSM: Device Security Manager detected (TEE)

Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lukas Wunner <lukas@wunner.de>
Cc: Samuel Ortiz <sameo@rivosinc.com>
Cc: Alexey Kardashevskiy <aik@amd.com>
Cc: Xu Yilun <yilun.xu@linux.intel.com>
Link: http://lore.kernel.org/20260105093516.2645397-1-yilun.xu@linux.intel.com [1]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 samples/Kconfig           |  19 +
 samples/Makefile          |   1 +
 samples/devsec/Makefile   |  10 +
 samples/devsec/devsec.h   |  43 +++
 samples/devsec/bus.c      | 780 ++++++++++++++++++++++++++++++++++++++
 samples/devsec/common.c   |  28 ++
 samples/devsec/link_tsm.c | 192 ++++++++++
 MAINTAINERS               |   1 +
 8 files changed, 1074 insertions(+)
 create mode 100644 samples/devsec/Makefile
 create mode 100644 samples/devsec/devsec.h
 create mode 100644 samples/devsec/bus.c
 create mode 100644 samples/devsec/common.c
 create mode 100644 samples/devsec/link_tsm.c

diff --git a/samples/Kconfig b/samples/Kconfig
index 5bc7c9e5a59e..679c332abd10 100644
--- a/samples/Kconfig
+++ b/samples/Kconfig
@@ -324,6 +324,25 @@ source "samples/rust/Kconfig"
 
 source "samples/damon/Kconfig"
 
+config SAMPLE_DEVSEC
+	tristate "Build a sample TEE Security Manager with an emulated PCI endpoint"
+	depends on m
+	depends on PCI
+	depends on VIRT_DRIVERS
+	depends on PCI_DOMAINS
+	select PCI_BRIDGE_EMUL
+	select PCI_TSM
+	select TSM
+	help
+	  Build a sample platform TEE Security Manager (TSM) driver with a
+	  corresponding emulated PCIe topology. The resulting sample modules,
+	  devsec_bus and devsec_link_tsm, exercise device-security enumeration,
+	  PCI subsystem user ABIs, and device security flows. For example,
+	  exercise IDE (link encryption) establishment and TDISP state
+	  transitions via a Device Security Manager (DSM).
+
+	  If unsure, say N.
+
 endif # SAMPLES
 
 config HAVE_SAMPLE_FTRACE_DIRECT
diff --git a/samples/Makefile b/samples/Makefile
index 07641e177bd8..59b510ace9b2 100644
--- a/samples/Makefile
+++ b/samples/Makefile
@@ -45,3 +45,4 @@ obj-$(CONFIG_SAMPLE_DAMON_PRCL)		+= damon/
 obj-$(CONFIG_SAMPLE_DAMON_MTIER)	+= damon/
 obj-$(CONFIG_SAMPLE_HUNG_TASK)		+= hung_task/
 obj-$(CONFIG_SAMPLE_TSM_MR)		+= tsm-mr/
+obj-y					+= devsec/
diff --git a/samples/devsec/Makefile b/samples/devsec/Makefile
new file mode 100644
index 000000000000..da122eb8d23d
--- /dev/null
+++ b/samples/devsec/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_SAMPLE_DEVSEC) += devsec_common.o
+devsec_common-y := common.o
+
+obj-$(CONFIG_SAMPLE_DEVSEC) += devsec_bus.o
+devsec_bus-y := bus.o
+
+obj-$(CONFIG_SAMPLE_DEVSEC) += devsec_link_tsm.o
+devsec_link_tsm-y := link_tsm.o
diff --git a/samples/devsec/devsec.h b/samples/devsec/devsec.h
new file mode 100644
index 000000000000..e0ea9c6bb5e9
--- /dev/null
+++ b/samples/devsec/devsec.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 - 2026 Intel Corporation */
+
+#ifndef __DEVSEC_H__
+#define __DEVSEC_H__
+
+#define NR_DEVSEC_HOST_BRIDGES 2
+
+struct devsec_sysdata {
+#ifdef CONFIG_X86
+	/*
+	 * Must be first member so that x86::pci_domain_nr() can type
+	 * pun devsec_sysdata and pci_sysdata.
+	 */
+	struct pci_sysdata sd;
+#else
+	int domain_nr;
+#endif
+};
+
+#ifdef CONFIG_X86
+static inline void devsec_set_domain_nr(struct devsec_sysdata *sd,
+					int domain_nr)
+{
+	sd->sd.domain = domain_nr;
+}
+static inline int devsec_get_domain_nr(struct devsec_sysdata *sd)
+{
+	return sd->sd.domain;
+}
+#else
+static inline void devsec_set_domain_nr(struct devsec_sysdata *sd,
+					int domain_nr)
+{
+	sd->domain_nr = domain_nr;
+}
+static inline int devsec_get_domain_nr(struct devsec_sysdata *sd)
+{
+	return sd->domain_nr;
+}
+#endif
+extern struct devsec_sysdata *devsec_sysdata[NR_DEVSEC_HOST_BRIDGES];
+#endif /* __DEVSEC_H__ */
diff --git a/samples/devsec/bus.c b/samples/devsec/bus.c
new file mode 100644
index 000000000000..a55e9573c8bf
--- /dev/null
+++ b/samples/devsec/bus.c
@@ -0,0 +1,780 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 - 2026 Intel Corporation */
+
+#include <linux/bitfield.h>
+#include <linux/cleanup.h>
+#include <linux/device/faux.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/range.h>
+#include <uapi/linux/pci_regs.h>
+
+#include "../../drivers/pci/pci-bridge-emul.h"
+#include "devsec.h"
+
+#define NR_DEVSEC_BUSES 1 /* root ports per host-bridge */
+#define NR_DEVICES_PER_BUS 1
+#define NR_FUNCTIONS 2
+#define NR_DEVICES (NR_DEVICES_PER_BUS * NR_FUNCTIONS * NR_DEVSEC_BUSES)
+#define NR_PORT_STREAMS 1
+#define NR_PORT_LINK_STREAMS 4
+#define NR_ADDR_ASSOC 1
+
+struct devsec {
+	struct pci_host_bridge hb;
+	struct devsec_sysdata sysdata;
+	struct resource busnr_res;
+	struct resource mmio_res;
+	struct resource prefetch_res;
+	struct pci_bus *bus;
+	struct device *dev;
+	struct devsec_port {
+		union {
+			struct devsec_ide {
+				u32 cap;
+				u32 ctl;
+				struct devsec_link_stream {
+					u32 ctl;
+					u32 status;
+				} link_stream[NR_PORT_LINK_STREAMS];
+				struct devsec_stream {
+					u32 cap;
+					u32 ctl;
+					u32 status;
+					u32 rid1;
+					u32 rid2;
+					struct devsec_addr_assoc {
+						u32 assoc1;
+						u32 assoc2;
+						u32 assoc3;
+					} assoc[NR_ADDR_ASSOC];
+				} stream[NR_PORT_STREAMS];
+			} ide __packed;
+			char ide_regs[sizeof(struct devsec_ide)];
+		};
+		struct pci_bridge_emul bridge;
+	} *devsec_ports[NR_DEVSEC_BUSES];
+	struct devsec_dev {
+		struct devsec *devsec;
+		struct range mmio_range;
+		u8 __cfg[SZ_4K];
+		struct devsec_dev_doe {
+			int cap;
+			u32 req[SZ_4K / sizeof(u32)];
+			u32 rsp[SZ_4K / sizeof(u32)];
+			int write, read, read_ttl;
+		} doe;
+		u16 ide_pos;
+		union {
+			struct devsec_ide ide __packed;
+			char ide_regs[sizeof(struct devsec_ide)];
+		};
+	} *devsec_devs[NR_DEVICES];
+};
+
+#define devsec_base(x) ((void __force __iomem *) &(x)->__cfg[0])
+
+static struct devsec *bus_to_devsec(struct pci_bus *bus)
+{
+	return container_of(bus->sysdata, struct devsec, sysdata);
+}
+
+static int devsec_dev_config_read(struct devsec *devsec, struct pci_bus *bus,
+				  unsigned int devfn, int pos, int size,
+				  u32 *val)
+{
+	int fn = PCI_SLOT(devfn) * NR_FUNCTIONS + PCI_FUNC(devfn);
+	struct devsec_dev *devsec_dev;
+	struct devsec_dev_doe *doe;
+	void __iomem *base;
+
+	if (PCI_FUNC(devfn) > NR_FUNCTIONS ||
+	    PCI_SLOT(devfn) > NR_DEVICES_PER_BUS ||
+	    fn >= ARRAY_SIZE(devsec->devsec_devs))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	devsec_dev = devsec->devsec_devs[fn];
+	base = devsec_base(devsec_dev);
+	doe = &devsec_dev->doe;
+
+	if (doe->cap && pos == doe->cap + PCI_DOE_READ) {
+		if (doe->read_ttl > 0) {
+			*val = doe->rsp[doe->read];
+			dev_dbg(&bus->dev, "devfn: %#x doe read[%d]\n", devfn,
+				doe->read);
+		} else {
+			*val = 0;
+			dev_dbg(&bus->dev, "devfn: %#x doe no data\n", devfn);
+		}
+		return PCIBIOS_SUCCESSFUL;
+	} else if (doe->cap && pos == doe->cap + PCI_DOE_STATUS) {
+		if (doe->read_ttl > 0) {
+			*val = PCI_DOE_STATUS_DATA_OBJECT_READY;
+			dev_dbg(&bus->dev, "devfn: %#x object ready\n", devfn);
+		} else if (doe->read_ttl < 0) {
+			*val = PCI_DOE_STATUS_ERROR;
+			dev_dbg(&bus->dev, "devfn: %#x error\n", devfn);
+		} else {
+			*val = 0;
+			dev_dbg(&bus->dev, "devfn: %#x idle\n", devfn);
+		}
+		return PCIBIOS_SUCCESSFUL;
+	} else if (devsec_dev->ide_pos && pos >= devsec_dev->ide_pos &&
+		   pos < devsec_dev->ide_pos + sizeof(struct devsec_ide)) {
+		*val = *(u32 *) &devsec_dev->ide_regs[pos - devsec_dev->ide_pos];
+		return PCIBIOS_SUCCESSFUL;
+	}
+
+	switch (size) {
+	case 1:
+		*val = readb(base + pos);
+		break;
+	case 2:
+		*val = readw(base + pos);
+		break;
+	case 4:
+		*val = readl(base + pos);
+		break;
+	default:
+		PCI_SET_ERROR_RESPONSE(val);
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+	}
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static int devsec_port_config_read(struct devsec *devsec, unsigned int devfn,
+				   int pos, int size, u32 *val)
+{
+	struct devsec_port *devsec_port;
+
+	if (PCI_FUNC(devfn) != 0 ||
+	    PCI_SLOT(devfn) >= ARRAY_SIZE(devsec->devsec_ports))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	devsec_port = devsec->devsec_ports[PCI_SLOT(devfn)];
+	return pci_bridge_emul_conf_read(&devsec_port->bridge, pos, size, val);
+}
+
+static int devsec_pci_read(struct pci_bus *bus, unsigned int devfn, int pos,
+			   int size, u32 *val)
+{
+	struct devsec *devsec = bus_to_devsec(bus);
+
+	dev_vdbg(&bus->dev, "devfn: %#x pos: %#x size: %d\n", devfn, pos, size);
+
+	if (bus == devsec->hb.bus)
+		return devsec_port_config_read(devsec, devfn, pos, size, val);
+	else if (bus->parent == devsec->hb.bus)
+		return devsec_dev_config_read(devsec, bus, devfn, pos, size,
+					      val);
+
+	return PCIBIOS_DEVICE_NOT_FOUND;
+}
+
+#ifndef PCI_DOE_PROTOCOL_DISCOVERY
+#define PCI_DOE_PROTOCOL_DISCOVERY 0
+#define PCI_DOE_FEATURE_CMA 1
+#endif
+
+/* just indicate support for CMA */
+static void doe_process(struct devsec_dev_doe *doe)
+{
+	u8 type;
+	u16 vid;
+
+	vid = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, doe->req[0]);
+	type = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, doe->req[0]);
+
+	if (vid != PCI_VENDOR_ID_PCI_SIG) {
+		doe->read_ttl = -1;
+		return;
+	}
+
+	if (type != PCI_DOE_PROTOCOL_DISCOVERY) {
+		doe->read_ttl = -1;
+		return;
+	}
+
+	doe->rsp[0] = doe->req[0];
+	doe->rsp[1] = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, 3);
+	doe->read_ttl = 3;
+	doe->rsp[2] = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID,
+				 PCI_VENDOR_ID_PCI_SIG) |
+		      FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL,
+				 PCI_DOE_FEATURE_CMA) |
+		      FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX, 0);
+}
+
+static int devsec_dev_config_write(struct devsec *devsec, struct pci_bus *bus,
+				   unsigned int devfn, int pos, int size,
+				   u32 val)
+{
+	int fn = PCI_SLOT(devfn) * NR_FUNCTIONS + PCI_FUNC(devfn);
+	struct devsec_dev *devsec_dev;
+	struct devsec_dev_doe *doe;
+	struct devsec_ide *ide;
+	void __iomem *base;
+
+	dev_vdbg(&bus->dev, "devfn: %#x pos: %#x size: %d\n", devfn, pos, size);
+
+	if (PCI_FUNC(devfn) > NR_FUNCTIONS ||
+	    PCI_SLOT(devfn) > NR_DEVICES_PER_BUS ||
+	    fn >= ARRAY_SIZE(devsec->devsec_devs))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	devsec_dev = devsec->devsec_devs[fn];
+	base = devsec_base(devsec_dev);
+	doe = &devsec_dev->doe;
+	ide = &devsec_dev->ide;
+
+	if (pos >= PCI_BASE_ADDRESS_0 && pos <= PCI_BASE_ADDRESS_5) {
+		if (size != 4)
+			return PCIBIOS_BAD_REGISTER_NUMBER;
+		/* only one 64-bit mmio bar emulated for now */
+		if (pos == PCI_BASE_ADDRESS_0)
+			val &= ~lower_32_bits(range_len(&devsec_dev->mmio_range) - 1);
+		else if (pos == PCI_BASE_ADDRESS_1)
+			val &= ~upper_32_bits(range_len(&devsec_dev->mmio_range) - 1);
+		else
+			val = 0;
+	} else if (pos == PCI_ROM_ADDRESS) {
+		val = 0;
+	} else if (doe->cap && pos == doe->cap + PCI_DOE_CTRL) {
+		if (val & PCI_DOE_CTRL_GO) {
+			dev_dbg(&bus->dev, "devfn: %#x doe go\n", devfn);
+			doe_process(doe);
+		}
+		if (val & PCI_DOE_CTRL_ABORT) {
+			dev_dbg(&bus->dev, "devfn: %#x doe abort\n", devfn);
+			doe->write = 0;
+			doe->read = 0;
+			doe->read_ttl = 0;
+		}
+		return PCIBIOS_SUCCESSFUL;
+	} else if (doe->cap && pos == doe->cap + PCI_DOE_WRITE) {
+		if (doe->write < ARRAY_SIZE(doe->req))
+			doe->req[doe->write++] = val;
+		dev_dbg(&bus->dev, "devfn: %#x doe write[%d]\n", devfn,
+			doe->write - 1);
+		return PCIBIOS_SUCCESSFUL;
+	} else if (doe->cap && pos == doe->cap + PCI_DOE_READ) {
+		if (doe->read_ttl > 0) {
+			doe->read_ttl--;
+			doe->read++;
+			dev_dbg(&bus->dev, "devfn: %#x doe ack[%d]\n", devfn,
+				doe->read - 1);
+		}
+		return PCIBIOS_SUCCESSFUL;
+	} else if (devsec_dev->ide_pos && pos >= devsec_dev->ide_pos &&
+		   pos < devsec_dev->ide_pos + sizeof(struct devsec_ide)) {
+
+		*(u32 *) &devsec_dev->ide_regs[pos - devsec_dev->ide_pos] = val;
+
+		/*
+		 * Check if the control register is written and update the
+		 * status register accordingly
+		 */
+		for (int i = 0; i < NR_PORT_STREAMS; i++) {
+			struct devsec_stream *stream = &ide->stream[i];
+			u16 ide_off = pos - devsec_dev->ide_pos;
+
+			if (ide_off != offsetof(typeof(*ide), stream[i].ctl))
+				continue;
+
+			stream->status &= ~PCI_IDE_SEL_STS_STATE;
+			if (val & PCI_IDE_SEL_CTL_EN)
+				stream->status |= FIELD_PREP(
+					PCI_IDE_SEL_STS_STATE,
+					PCI_IDE_SEL_STS_STATE_SECURE);
+			else
+				stream->status |= FIELD_PREP(
+					PCI_IDE_SEL_STS_STATE,
+					PCI_IDE_SEL_STS_STATE_INSECURE);
+			break;
+		}
+		return PCIBIOS_SUCCESSFUL;
+	}
+
+	switch (size) {
+	case 1:
+		writeb(val, base + pos);
+		break;
+	case 2:
+		writew(val, base + pos);
+		break;
+	case 4:
+		writel(val, base + pos);
+		break;
+	default:
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+	}
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static int devsec_port_config_write(struct devsec *devsec, struct pci_bus *bus,
+				    unsigned int devfn, int pos, int size,
+				    u32 val)
+{
+	struct devsec_port *devsec_port;
+
+	dev_vdbg(&bus->dev, "devfn: %#x pos: %#x size: %d\n", devfn, pos, size);
+
+	if (PCI_FUNC(devfn) != 0 ||
+	    PCI_SLOT(devfn) >= ARRAY_SIZE(devsec->devsec_ports))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	devsec_port = devsec->devsec_ports[PCI_SLOT(devfn)];
+	return pci_bridge_emul_conf_write(&devsec_port->bridge, pos, size, val);
+}
+
+static int devsec_pci_write(struct pci_bus *bus, unsigned int devfn, int pos,
+			    int size, u32 val)
+{
+	struct devsec *devsec = bus_to_devsec(bus);
+
+	dev_vdbg(&bus->dev, "devfn: %#x pos: %#x size: %d\n", devfn, pos, size);
+
+	if (bus == devsec->hb.bus)
+		return devsec_port_config_write(devsec, bus, devfn, pos, size,
+						val);
+	else if (bus->parent == devsec->hb.bus)
+		return devsec_dev_config_write(devsec, bus, devfn, pos, size,
+					       val);
+	return PCIBIOS_DEVICE_NOT_FOUND;
+}
+
+static struct pci_ops devsec_ops = {
+	.read = devsec_pci_read,
+	.write = devsec_pci_write,
+};
+
+static void destroy_bus(void *data)
+{
+	struct pci_host_bridge *hb = data;
+
+	pci_stop_root_bus(hb->bus);
+	pci_remove_root_bus(hb->bus);
+}
+
+static u32 build_ext_cap_header(u32 id, u32 ver, u32 next)
+{
+	return FIELD_PREP(GENMASK(15, 0), id) |
+	       FIELD_PREP(GENMASK(19, 16), ver) |
+	       FIELD_PREP(GENMASK(31, 20), next);
+}
+
+static void init_ide(struct devsec_ide *ide)
+{
+	ide->cap =
+		PCI_IDE_CAP_LINK | PCI_IDE_CAP_SELECTIVE | PCI_IDE_CAP_IDE_KM |
+		PCI_IDE_CAP_TEE_LIMITED |
+		FIELD_PREP(PCI_IDE_CAP_LINK_TC_NUM, NR_PORT_LINK_STREAMS - 1) |
+		FIELD_PREP(PCI_IDE_CAP_SEL_NUM, NR_PORT_STREAMS - 1);
+
+	for (int i = 0; i < NR_PORT_STREAMS; i++)
+		ide->stream[i].cap =
+			FIELD_PREP(PCI_IDE_SEL_CAP_ASSOC_NUM, NR_ADDR_ASSOC);
+}
+
+static void init_dev_cfg(struct devsec_dev *devsec_dev, int fn)
+{
+	void __iomem *base = devsec_base(devsec_dev), *cap_base;
+	int pos, next;
+
+	/* BAR space */
+	writew(0x8086, base + PCI_VENDOR_ID);
+	writew(0xffff, base + PCI_DEVICE_ID);
+	writew(PCI_CLASS_ACCELERATOR_PROCESSING, base + PCI_CLASS_DEVICE);
+	writel(lower_32_bits(devsec_dev->mmio_range.start) |
+		       PCI_BASE_ADDRESS_MEM_TYPE_64 |
+		       PCI_BASE_ADDRESS_MEM_PREFETCH,
+	       base + PCI_BASE_ADDRESS_0);
+	writel(upper_32_bits(devsec_dev->mmio_range.start),
+	       base + PCI_BASE_ADDRESS_1);
+
+	/* PF0 is a DSM for this multifunction device */
+	writeb(PCI_HEADER_TYPE_MFD, base + PCI_HEADER_TYPE);
+
+	/* Capability init */
+	writew(PCI_STATUS_CAP_LIST, base + PCI_STATUS);
+	pos = 0x40;
+	writew(pos, base + PCI_CAPABILITY_LIST);
+
+	/* PCI-E Capability */
+	cap_base = base + pos;
+	writeb(PCI_CAP_ID_EXP, cap_base);
+	writew(PCI_EXP_TYPE_ENDPOINT, cap_base + PCI_EXP_FLAGS);
+	writew(PCI_EXP_LNKSTA_CLS_2_5GB | PCI_EXP_LNKSTA_NLW_X1, cap_base + PCI_EXP_LNKSTA);
+	writel(PCI_EXP_DEVCAP_FLR | PCI_EXP_DEVCAP_TEE, cap_base + PCI_EXP_DEVCAP);
+
+	/* PF0 has DOE and IDE, the rest of the functions are assignable TDIs */
+	if (fn > 0)
+		return;
+
+	/* DOE Extended Capability */
+	pos = PCI_CFG_SPACE_SIZE;
+	next = pos + PCI_DOE_CAP_SIZEOF;
+	cap_base = base + pos;
+	devsec_dev->doe.cap = pos;
+	writel(build_ext_cap_header(PCI_EXT_CAP_ID_DOE, 2, next), cap_base);
+
+	/* IDE Extended Capability */
+	pos = next;
+	cap_base = base + pos;
+	writel(build_ext_cap_header(PCI_EXT_CAP_ID_IDE, 1, 0), cap_base);
+	devsec_dev->ide_pos = pos + 4;
+	init_ide(&devsec_dev->ide);
+}
+
+#define MMIO_SIZE SZ_2M
+#define PREFETCH_SIZE SZ_2M
+
+static void destroy_devsec_dev(void *devsec_dev)
+{
+	kfree(devsec_dev);
+}
+
+static struct devsec_dev *devsec_dev_alloc(struct devsec *devsec, int rp, int fn)
+{
+	struct devsec_dev *devsec_dev __free(kfree) =
+		kzalloc(sizeof(*devsec_dev), GFP_KERNEL);
+	u64 start = devsec->prefetch_res.start +
+		    (rp * NR_FUNCTIONS + fn) * PREFETCH_SIZE;
+
+	if (!devsec_dev)
+		return ERR_PTR(-ENOMEM);
+
+	*devsec_dev = (struct devsec_dev) {
+		.mmio_range = {
+			.start = start,
+			.end = start + PREFETCH_SIZE - 1,
+		},
+		.devsec = devsec,
+	};
+	init_dev_cfg(devsec_dev, fn);
+
+	return_ptr(devsec_dev);
+}
+
+static int alloc_dev(struct devsec *devsec, int hb, int rp, int fn)
+{
+	struct devsec_dev *devsec_dev = devsec_dev_alloc(devsec, rp, fn);
+	int rc;
+
+	if (IS_ERR(devsec_dev))
+		return PTR_ERR(devsec_dev);
+	rc = devm_add_action_or_reset(devsec->dev, destroy_devsec_dev,
+				      devsec_dev);
+	if (rc)
+		return rc;
+	devsec->devsec_devs[rp * NR_FUNCTIONS + fn] = devsec_dev;
+
+	dev_dbg(devsec->dev, "added: hb: %d rp: %d fn: %d mmio: %pra\n",
+		hb, rp, fn, &devsec_dev->mmio_range);
+	return 0;
+}
+
+static pci_bridge_emul_read_status_t
+devsec_bridge_read_base(struct pci_bridge_emul *bridge, int pos, u32 *val)
+{
+	return PCI_BRIDGE_EMUL_NOT_HANDLED;
+}
+
+static pci_bridge_emul_read_status_t
+devsec_bridge_read_pcie(struct pci_bridge_emul *bridge, int pos, u32 *val)
+{
+	return PCI_BRIDGE_EMUL_NOT_HANDLED;
+}
+
+static pci_bridge_emul_read_status_t
+devsec_bridge_read_ext(struct pci_bridge_emul *bridge, int pos, u32 *val)
+{
+	struct devsec_port *devsec_port = bridge->data;
+
+	/* only one extended capability, IDE... */
+	if (pos == 0) {
+		*val = build_ext_cap_header(PCI_EXT_CAP_ID_IDE, 1, 0);
+		return PCI_BRIDGE_EMUL_HANDLED;
+	}
+
+	if (pos < 4)
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+
+	pos -= 4;
+	if (pos < sizeof(struct devsec_ide)) {
+		*val = *(u32 *)(&devsec_port->ide_regs[pos]);
+		return PCI_BRIDGE_EMUL_HANDLED;
+	}
+
+	return PCI_BRIDGE_EMUL_NOT_HANDLED;
+}
+
+static void devsec_bridge_write_base(struct pci_bridge_emul *bridge, int pos,
+				     u32 old, u32 new, u32 mask)
+{
+}
+
+static void devsec_bridge_write_pcie(struct pci_bridge_emul *bridge, int pos,
+				     u32 old, u32 new, u32 mask)
+{
+}
+
+static void devsec_bridge_write_ext(struct pci_bridge_emul *bridge, int pos,
+				    u32 old, u32 new, u32 mask)
+{
+	struct devsec_port *devsec_port = bridge->data;
+
+	if (pos < sizeof(struct devsec_ide))
+		*(u32 *)(&devsec_port->ide_regs[pos]) = new;
+}
+
+static const struct pci_bridge_emul_ops devsec_bridge_ops = {
+	.read_base = devsec_bridge_read_base,
+	.write_base = devsec_bridge_write_base,
+	.read_pcie = devsec_bridge_read_pcie,
+	.write_pcie = devsec_bridge_write_pcie,
+	.read_ext = devsec_bridge_read_ext,
+	.write_ext = devsec_bridge_write_ext,
+};
+
+static int init_port(struct devsec *devsec, struct devsec_port *devsec_port,
+		     int hb, int rp)
+{
+	const struct resource *mres = &devsec->mmio_res;
+	const struct resource *pres = &devsec->prefetch_res;
+	struct pci_bridge_emul *bridge = &devsec_port->bridge;
+	u16 membase = cpu_to_le16(upper_16_bits(mres->start + MMIO_SIZE * rp) &
+				  0xfff0);
+	u16 memlimit =
+		cpu_to_le16(upper_16_bits(mres->end + MMIO_SIZE * rp) & 0xfff0);
+	u16 pref_mem_base =
+		cpu_to_le16((upper_16_bits(lower_32_bits(pres->start +
+							 PREFETCH_SIZE * rp)) &
+			     0xfff0) |
+			    PCI_PREF_RANGE_TYPE_64);
+	u16 pref_mem_limit = cpu_to_le16(
+		(upper_16_bits(lower_32_bits(pres->end + PREFETCH_SIZE * rp)) &
+		 0xfff0) |
+		PCI_PREF_RANGE_TYPE_64);
+	u32 prefbaseupper =
+		cpu_to_le32(upper_32_bits(pres->start + PREFETCH_SIZE * rp));
+	u32 preflimitupper =
+		cpu_to_le32(upper_32_bits(pres->end + PREFETCH_SIZE * rp));
+	int primary_bus = hb * (NR_DEVSEC_BUSES + 1);
+
+	*bridge = (struct pci_bridge_emul) {
+		.conf = {
+			.vendor = cpu_to_le16(0x8086),
+			.device = cpu_to_le16(0xffff),
+			.class_revision = cpu_to_le32(0x1),
+			.primary_bus = primary_bus,
+			.secondary_bus = primary_bus + rp + 1,
+			.subordinate_bus = primary_bus + rp + 1,
+			.membase = membase,
+			.memlimit = memlimit,
+			.pref_mem_base = pref_mem_base,
+			.pref_mem_limit = pref_mem_limit,
+			.prefbaseupper = prefbaseupper,
+			.preflimitupper = preflimitupper,
+		},
+		.pcie_conf = {
+			.devcap = cpu_to_le16(PCI_EXP_DEVCAP_FLR),
+			.lnksta = cpu_to_le16(PCI_EXP_LNKSTA_CLS_2_5GB),
+		},
+		.subsystem_vendor_id = cpu_to_le16(0x8086),
+		.has_pcie = true,
+		.data = devsec_port,
+		.ops = &devsec_bridge_ops,
+	};
+
+	init_ide(&devsec_port->ide);
+
+	return pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_IO_FORWARD);
+}
+
+static void destroy_port(void *data)
+{
+	struct devsec_port *devsec_port = data;
+
+	pci_bridge_emul_cleanup(&devsec_port->bridge);
+	kfree(devsec_port);
+}
+
+static struct devsec_port *devsec_port_alloc(struct devsec *devsec, int hb, int rp)
+{
+	int rc;
+
+	struct devsec_port *devsec_port __free(kfree) =
+		kzalloc(sizeof(*devsec_port), GFP_KERNEL);
+
+	if (!devsec_port)
+		return ERR_PTR(-ENOMEM);
+
+	rc = init_port(devsec, devsec_port, hb, rp);
+	if (rc)
+		return ERR_PTR(rc);
+
+	return_ptr(devsec_port);
+}
+
+static int alloc_port(struct devsec *devsec, int hb, int rp)
+{
+	struct devsec_port *devsec_port = devsec_port_alloc(devsec, hb, rp);
+	int rc;
+
+	if (IS_ERR(devsec_port))
+		return PTR_ERR(devsec_port);
+	rc = devm_add_action_or_reset(devsec->dev, destroy_port, devsec_port);
+	if (rc)
+		return rc;
+	devsec->devsec_ports[rp] = devsec_port;
+
+	return 0;
+}
+
+static void release_mmio_region(void *res)
+{
+	remove_resource(res);
+}
+
+static void release_prefetch_region(void *res)
+{
+	remove_resource(res);
+}
+
+static int devsec_add_host_bridge(struct faux_device *fdev, int hb_idx)
+{
+	struct pci_bus *bus;
+	struct devsec *devsec;
+	struct devsec_sysdata *sd;
+	struct pci_host_bridge *hb;
+	int busnr_start, busnr_end, rc;
+	struct device *dev = &fdev->dev;
+
+	hb = devm_pci_alloc_host_bridge(
+		dev, sizeof(*devsec) - sizeof(struct pci_host_bridge));
+	if (!hb)
+		return -ENOMEM;
+
+	devsec = container_of(hb, struct devsec, hb);
+	devsec->dev = dev;
+
+	devsec->mmio_res.name = "DEVSEC MMIO";
+	devsec->mmio_res.flags = IORESOURCE_MEM;
+	rc = allocate_resource(&iomem_resource, &devsec->mmio_res,
+			       MMIO_SIZE * NR_DEVICES, 0, SZ_4G, MMIO_SIZE,
+			       NULL, NULL);
+	if (rc)
+		return rc;
+
+	rc = devm_add_action_or_reset(dev, release_mmio_region,
+				      &devsec->mmio_res);
+	if (rc)
+		return rc;
+
+	devsec->prefetch_res.name = "DEVSEC PREFETCH";
+	devsec->prefetch_res.flags = IORESOURCE_MEM | IORESOURCE_MEM_64 |
+				     IORESOURCE_PREFETCH;
+	rc = allocate_resource(&iomem_resource, &devsec->prefetch_res,
+			       PREFETCH_SIZE * NR_DEVICES, SZ_4G, U64_MAX,
+			       PREFETCH_SIZE, NULL, NULL);
+	if (rc)
+		return rc;
+
+	rc = devm_add_action_or_reset(dev, release_prefetch_region,
+				      &devsec->prefetch_res);
+	if (rc)
+		return rc;
+
+	for (int i = 0; i < NR_DEVSEC_BUSES; i++) {
+		rc = alloc_port(devsec, hb_idx, i);
+		if (rc)
+			return rc;
+
+		for (int j = 0; j < NR_FUNCTIONS; j++) {
+			rc = alloc_dev(devsec, hb_idx, i, j);
+			if (rc)
+				return rc;
+		}
+	}
+
+	busnr_start = hb_idx * (NR_DEVSEC_BUSES + 1);
+	busnr_end = busnr_start + NR_DEVSEC_BUSES;
+
+	devsec->busnr_res = (struct resource) {
+		.name = "DEVSEC BUSES",
+		.start = busnr_start,
+		.end = busnr_end,
+		.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+	};
+	pci_add_resource(&hb->windows, &devsec->busnr_res);
+	pci_add_resource(&hb->windows, &devsec->mmio_res);
+	pci_add_resource(&hb->windows, &devsec->prefetch_res);
+
+	sd = &devsec->sysdata;
+	devsec_sysdata[hb_idx] = sd;
+
+	/* Start devsec_bus emulation above the last ACPI segment */
+	hb->domain_nr = pci_bus_find_emul_domain_nr(0, 0x10000, INT_MAX);
+	if (hb->domain_nr < 0)
+		return hb->domain_nr;
+
+	/*
+	 * Note, domain_nr is set in devsec_sysdata for
+	 * !CONFIG_PCI_DOMAINS_GENERIC platforms
+	 */
+	devsec_set_domain_nr(sd, hb->domain_nr);
+
+	hb->dev.parent = dev;
+	hb->sysdata = sd;
+	hb->ops = &devsec_ops;
+
+	rc = pci_scan_root_bus_bridge(hb);
+	if (rc)
+		return rc;
+
+	bus = hb->bus;
+	rc = devm_add_action_or_reset(dev, destroy_bus, no_free_ptr(hb));
+	if (rc)
+		return rc;
+
+	pci_assign_unassigned_bus_resources(bus);
+	pci_bus_add_devices(bus);
+
+	return 0;
+}
+
+static int __init devsec_bus_probe(struct faux_device *fdev)
+{
+	for (int i = 0; i < NR_DEVSEC_HOST_BRIDGES; i++) {
+		int rc = devsec_add_host_bridge(fdev, i);
+
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
+static struct faux_device *devsec_bus;
+
+static struct faux_device_ops devsec_bus_ops = {
+	.probe = devsec_bus_probe,
+};
+
+static int __init devsec_bus_init(void)
+{
+	devsec_bus = faux_device_create("devsec_bus", NULL, &devsec_bus_ops);
+	if (!devsec_bus)
+		return -ENODEV;
+	return 0;
+}
+module_init(devsec_bus_init);
+
+static void __exit devsec_bus_exit(void)
+{
+	faux_device_destroy(devsec_bus);
+}
+module_exit(devsec_bus_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Device Security Sample Infrastructure: TDISP Device Emulation");
diff --git a/samples/devsec/common.c b/samples/devsec/common.c
new file mode 100644
index 000000000000..d0e8648dfe98
--- /dev/null
+++ b/samples/devsec/common.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 - 2026 Intel Corporation */
+
+#include <linux/pci.h>
+#include <linux/export.h>
+
+#include "devsec.h"
+
+/*
+ * devsec_bus and devsec_link_tsm need a common location for this data
+ * to avoid depending on each other. This enables load-order testing.
+ */
+struct devsec_sysdata *devsec_sysdata[NR_DEVSEC_HOST_BRIDGES];
+EXPORT_SYMBOL_FOR_MODULES(devsec_sysdata, "devsec*");
+
+static int __init common_init(void)
+{
+	return 0;
+}
+module_init(common_init);
+
+static void __exit common_exit(void)
+{
+}
+module_exit(common_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Device Security Sample Infrastructure: Shared data");
diff --git a/samples/devsec/link_tsm.c b/samples/devsec/link_tsm.c
new file mode 100644
index 000000000000..e5d66b877bca
--- /dev/null
+++ b/samples/devsec/link_tsm.c
@@ -0,0 +1,192 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 - 2026 Intel Corporation */
+
+#define dev_fmt(fmt) "devsec: " fmt
+#include <linux/device/faux.h>
+#include <linux/pci-tsm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/tsm.h>
+#include "devsec.h"
+
+struct devsec_tsm_pf0 {
+	struct pci_tsm_pf0 pci;
+#define NR_TSM_STREAMS 4
+};
+
+struct devsec_tsm_fn {
+	struct pci_tsm pci;
+};
+
+static struct devsec_tsm_pf0 *to_devsec_tsm_pf0(struct pci_tsm *tsm)
+{
+	return container_of(tsm, struct devsec_tsm_pf0, pci.base_tsm);
+}
+
+static struct devsec_tsm_fn *to_devsec_tsm_fn(struct pci_tsm *tsm)
+{
+	return container_of(tsm, struct devsec_tsm_fn, pci);
+}
+
+static struct device *pci_tsm_host(struct pci_dev *pdev)
+{
+	struct pci_tsm *tsm = READ_ONCE(pdev->tsm);
+
+	if (!tsm)
+		return NULL;
+	return tsm->tsm_dev->dev.parent;
+}
+
+static struct pci_tsm *devsec_tsm_pf0_probe(struct tsm_dev *tsm_dev,
+					    struct pci_dev *pdev)
+{
+	int rc;
+
+	dev_dbg(tsm_dev->dev.parent, "%s\n", pci_name(pdev));
+
+	struct devsec_tsm_pf0 *devsec_tsm __free(kfree) =
+		kzalloc(sizeof(*devsec_tsm), GFP_KERNEL);
+	if (!devsec_tsm)
+		return NULL;
+
+	rc = pci_tsm_pf0_constructor(pdev, &devsec_tsm->pci, tsm_dev);
+	if (rc)
+		return NULL;
+
+	pci_dbg(pdev, "TSM enabled\n");
+	return &no_free_ptr(devsec_tsm)->pci.base_tsm;
+}
+
+static struct pci_tsm *devsec_link_tsm_fn_probe(struct tsm_dev *tsm_dev,
+						struct pci_dev *pdev)
+{
+	int rc;
+
+	dev_dbg(tsm_dev->dev.parent, "%s\n", pci_name(pdev));
+
+	struct devsec_tsm_fn *devsec_tsm __free(kfree) =
+		kzalloc(sizeof(*devsec_tsm), GFP_KERNEL);
+	if (!devsec_tsm)
+		return NULL;
+
+	rc = pci_tsm_link_constructor(pdev, &devsec_tsm->pci, tsm_dev);
+	if (rc)
+		return NULL;
+
+	pci_dbg(pdev, "TSM (sub-function) enabled\n");
+	return &no_free_ptr(devsec_tsm)->pci;
+}
+
+static struct pci_tsm *devsec_link_tsm_pci_probe(struct tsm_dev *tsm_dev,
+						 struct pci_dev *pdev)
+{
+	int i;
+
+	for (i = 0; i < NR_DEVSEC_HOST_BRIDGES; i++)
+		if (pdev->sysdata == devsec_sysdata[i])
+			break;
+	if (i >= NR_DEVSEC_HOST_BRIDGES)
+		return NULL;
+
+	if (is_pci_tsm_pf0(pdev))
+		return devsec_tsm_pf0_probe(tsm_dev, pdev);
+	return devsec_link_tsm_fn_probe(tsm_dev, pdev);
+}
+
+static void devsec_link_tsm_pci_remove(struct pci_tsm *tsm)
+{
+	struct pci_dev *pdev = tsm->pdev;
+
+	dev_dbg(pci_tsm_host(pdev), "%s\n", pci_name(pdev));
+
+	if (is_pci_tsm_pf0(pdev)) {
+		struct devsec_tsm_pf0 *devsec_tsm = to_devsec_tsm_pf0(tsm);
+
+		pci_tsm_pf0_destructor(&devsec_tsm->pci);
+		kfree(devsec_tsm);
+	} else {
+		struct devsec_tsm_fn *devsec_tsm = to_devsec_tsm_fn(tsm);
+
+		kfree(devsec_tsm);
+	}
+}
+
+/*
+ * Reference consumer for a TSM driver "connect" operation callback. The
+ * low-level TSM driver understands details about the platform the PCI
+ * core does not, like number of available streams that can be
+ * established per host bridge. The expected flow is:
+ *
+ * 1/ Allocate platform specific Stream resource (TSM specific)
+ * 2/ Allocate Stream Ids in the endpoint and Root Port (PCI TSM helper)
+ * 3/ Register Stream Ids for the consumed resources from the last 2
+ *    steps to be accountable (via sysfs) to the admin (PCI TSM helper)
+ * 4/ Register the Stream with the TSM core so that either PCI sysfs or
+ *    TSM core sysfs can list the in-use resources (TSM core helper)
+ * 5/ Configure IDE settings in the endpoint and Root Port (PCI TSM helper)
+ * 6/ RPC call to TSM to perform IDE_KM and optionally enable the
+ *    stream (TSM specific)
+ * 7/ Enable the stream in the endpoint, and root port if TSM call did
+ *    not already handle that (PCI TSM helper)
+ *
+ * The expectation is that the referenced helpers are convenience
+ * "library" APIs for common operations, not a "midlayer" that enforces
+ * a specific use model or sequencing.
+ */
+static int devsec_link_tsm_connect(struct pci_dev *pdev)
+{
+	return -ENXIO;
+}
+
+static void devsec_link_tsm_disconnect(struct pci_dev *pdev)
+{
+}
+
+static struct pci_tsm_ops devsec_link_pci_ops = {
+	.probe = devsec_link_tsm_pci_probe,
+	.remove = devsec_link_tsm_pci_remove,
+	.connect = devsec_link_tsm_connect,
+	.disconnect = devsec_link_tsm_disconnect,
+};
+
+static void devsec_link_tsm_remove(void *tsm_dev)
+{
+	tsm_unregister(tsm_dev);
+}
+
+static int devsec_link_tsm_probe(struct faux_device *fdev)
+{
+	struct tsm_dev *tsm_dev;
+
+	tsm_dev = tsm_register(&fdev->dev, &devsec_link_pci_ops);
+	if (IS_ERR(tsm_dev))
+		return PTR_ERR(tsm_dev);
+
+	return devm_add_action_or_reset(&fdev->dev, devsec_link_tsm_remove,
+					tsm_dev);
+}
+
+static struct faux_device *devsec_link_tsm;
+
+static const struct faux_device_ops devsec_link_device_ops = {
+	.probe = devsec_link_tsm_probe,
+};
+
+static int __init devsec_link_tsm_init(void)
+{
+	devsec_link_tsm = faux_device_create("devsec_link_tsm", NULL,
+					     &devsec_link_device_ops);
+	if (!devsec_link_tsm)
+		return -ENOMEM;
+	return 0;
+}
+module_init(devsec_link_tsm_init);
+
+static void __exit devsec_link_tsm_exit(void)
+{
+	faux_device_destroy(devsec_link_tsm);
+}
+module_exit(devsec_link_tsm_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Device Security Sample Infrastructure: Platform Link-TSM Driver");
diff --git a/MAINTAINERS b/MAINTAINERS
index 3d5e2cbef71e..889546f66f2f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -26539,6 +26539,7 @@ F:	drivers/pci/tsm/
 F:	drivers/virt/coco/guest/
 F:	include/linux/*tsm*.h
 F:	include/uapi/linux/pci-tsm-netlink.h
+F:	samples/devsec/
 F:	samples/tsm-mr/
 
 TRUSTED SERVICES TEE DRIVER
-- 
2.52.0

