Linux PCI subsystem development
* [PATCH v3 0/2] vfio/dma-buf: add TPH support for peer-to-peer access
@ 2026-05-12 18:47 Zhiping Zhang
  2026-05-12 18:47 ` [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature Zhiping Zhang
  2026-05-12 18:47 ` [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr Zhiping Zhang
  0 siblings, 2 replies; 8+ messages in thread
From: Zhiping Zhang @ 2026-05-12 18:47 UTC (permalink / raw)
  To: Alex Williamson, Jason Gunthorpe, Leon Romanovsky
  Cc: Bjorn Helgaas, kvm, linux-rdma, linux-pci, netdev, dri-devel,
	Keith Busch, Yochai Cohen, Yishai Hadas, Zhiping Zhang

This series adds TLP Processing Hints (TPH) support to the VFIO dma-buf
export path, allowing importing drivers (e.g. mlx5) to use the exporter's
steering tag when performing peer-to-peer DMA into a VFIO-owned device.

Patch 1 adds the dma-buf get_tph callback and the new vfio uAPI.
Patch 2 wires up the mlx5 RDMA driver as a consumer.

Changes since v2:
  - uAPI now carries both the 8-bit ST and 16-bit Extended ST values,
    gated by a flags field, because the two are distinct namespaces in
    the PCIe TPH ST table and a numeric range check on a single value
    cannot pick the right one. 
  - Add pcie_tph_get_st_width() in <linux/pci-tph.h> so mlx5 doesn't
    dereference pci_dev::tph_req_type directly.
  - Validate that the TLP Processing Hint fits in the 2-bit spec field
    in VFIO_DEVICE_FEATURE_DMA_BUF_TPH; previously an out-of-range
    userspace value would be stored unchecked.
  - Publish/consume ordering for the TPH metadata: writers store the
    flags last with smp_store_release() under memory_lock; readers run
    lockless with smp_load_acquire() on the flags. This avoids an
    unprotected read in get_tph() and removes the AB-BA risk that
    would appear if the reader took memory_lock while an importer held
    dma_resv_lock.
  - Convert vfio_pci_dma_buf::revoked from a bitfield to bool to
    eliminate the cross-lock RMW on a shared bitfield byte (revoked is
    written under dma_resv_lock; the new TPH fields are written under
    memory_lock).
  - mlx5 reuses the dma_buf pointer that the umem already resolved
    instead of calling dma_buf_get(fd) a second time, closing a TOCTOU
    where a concurrent dup2() could substitute a different dma_buf
    between umem creation and the TPH lookup.
  - mlx5 now tracks per-MR ownership of the allocated steering-tag
    index (dmabuf_st_index / dmabuf_st_owned on mlx5_ib_mr) and frees
    it both when the firmware mkey is destroyed and when the MR is
    revoked-and-recycled into the FRMR pool. 

Previous link:
v2: https://lore.kernel.org/linux-pci/20260430200704.352228-1-zhipingz@meta.com/

Zhiping Zhang (2):
  vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature
  RDMA/mlx5: get tph for p2p access when registering dma-buf mr

 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   6 +
 drivers/infiniband/hw/mlx5/mr.c               |  72 ++++++++++-
 .../net/ethernet/mellanox/mlx5/core/lib/st.c  |  27 +++--
 drivers/pci/tph.c                             |  20 ++++
 drivers/vfio/pci/vfio_pci_core.c              |   3 +
 drivers/vfio/pci/vfio_pci_dmabuf.c            | 113 +++++++++++++++++-
 drivers/vfio/pci/vfio_pci_priv.h              |  11 ++
 include/linux/dma-buf.h                       |  21 ++++
 include/linux/mlx5/driver.h                   |   7 ++
 include/linux/pci-tph.h                       |   2 +
 include/uapi/linux/vfio.h                     |  35 ++++++
 11 files changed, 306 insertions(+), 11 deletions(-)

-- 
2.52.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature
  2026-05-12 18:47 [PATCH v3 0/2] vfio/dma-buf: add TPH support for peer-to-peer access Zhiping Zhang
@ 2026-05-12 18:47 ` Zhiping Zhang
  2026-05-13  1:33   ` fengchengwen
  2026-05-14  0:05   ` sashiko-bot
  2026-05-12 18:47 ` [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr Zhiping Zhang
  1 sibling, 2 replies; 8+ messages in thread
From: Zhiping Zhang @ 2026-05-12 18:47 UTC (permalink / raw)
  To: Alex Williamson, Jason Gunthorpe, Leon Romanovsky
  Cc: Bjorn Helgaas, kvm, linux-rdma, linux-pci, netdev, dri-devel,
	Keith Busch, Yochai Cohen, Yishai Hadas, Zhiping Zhang

Add a dma-buf callback that returns raw TPH metadata from the exporter
so peer devices can reuse the steering tag and processing hint
associated with a VFIO-exported buffer. Add a new
VFIO_DEVICE_FEATURE_DMA_BUF_TPH ioctl that takes the fd from
VFIO_DEVICE_FEATURE_DMA_BUF along with the TPH values, validates the fd
is a vfio-exported dma-buf belonging to this device, and stores the TPH
metadata under memory_lock. The existing VFIO_DEVICE_FEATURE_DMA_BUF
uAPI is unchanged.

8-bit ST and 16-bit Extended ST are distinct namespaces in the PCIe TPH
ST table (firmware reports them as separate fields with separate
validity bits in the ACPI _DSM ST table), so the uAPI carries both
values along with a flags field that indicates which value(s) are
valid for this device. The exporter selects the value that matches the
importer's requested width and returns -EOPNOTSUPP if that width is
not present, instead of substituting a value across namespaces.

Publish the TPH fields under memory_lock and gate readers on a
release/acquire on the flags field; this lets get_tph() run lockless
and avoids inverting the memory_lock -> dma_resv_lock ordering set up
by vfio_pci_dma_buf_move(). Convert the @revoked bitfield to a plain bool
so concurrent updates of @revoked (under dma_resv_lock) and the new TPH
fields (under memory_lock) cannot race on a shared bitfield byte.

Signed-off-by: Zhiping Zhang <zhipingz@meta.com>

---
 drivers/vfio/pci/vfio_pci_core.c   |   3 +
 drivers/vfio/pci/vfio_pci_dmabuf.c | 113 ++++++++++++++++++++++++++++-
 drivers/vfio/pci/vfio_pci_priv.h   |  11 +++
 include/linux/dma-buf.h            |  21 ++++++
 include/uapi/linux/vfio.h          |  35 +++++++++
 5 files changed, 182 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 3f8d093aacf8..94aa6dd95701 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1534,6 +1534,9 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
 		return vfio_pci_core_feature_token(vdev, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_DMA_BUF:
 		return vfio_pci_core_feature_dma_buf(vdev, flags, arg, argsz);
+	case VFIO_DEVICE_FEATURE_DMA_BUF_TPH:
+		return vfio_pci_core_feature_dma_buf_tph(vdev, flags, arg,
+							 argsz);
 	default:
 		return -ENOTTY;
 	}
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index f87fd32e4a01..28247602e359 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -19,7 +19,23 @@ struct vfio_pci_dma_buf {
 	u32 nr_ranges;
 	struct kref kref;
 	struct completion comp;
-	u8 revoked : 1;
+	/*
+	 * TPH metadata published by VFIO_DEVICE_FEATURE_DMA_BUF_TPH and
+	 * consumed by the @get_tph dma-buf callback.
+	 *
+	 * @tph_flags is the publish/consume gate: writers populate
+	 * @steering_tag, @steering_tag_ext and @ph first, then store
+	 * @tph_flags with smp_store_release(); readers do
+	 * smp_load_acquire(&tph_flags) before accessing the value fields.
+	 * @tph_flags == 0 means "TPH not set". Writers serialize via
+	 * vdev->memory_lock; readers are lockless to avoid AB-BA against
+	 * the dma_resv_lock held by importers.
+	 */
+	u32 tph_flags;
+	u16 steering_tag;
+	u16 steering_tag_ext;
+	u8 ph;
+	bool revoked;
 };
 
 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
@@ -69,6 +85,35 @@ vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
 	return ret;
 }
 
+static int vfio_pci_dma_buf_get_tph(struct dma_buf *dmabuf, u16 *steering_tag,
+				    u8 *ph, u8 st_width)
+{
+	struct vfio_pci_dma_buf *priv = dmabuf->priv;
+	u32 flags;
+
+	flags = smp_load_acquire(&priv->tph_flags);
+	if (!flags)
+		return -EOPNOTSUPP;
+
+	switch (st_width) {
+	case 8:
+		if (!(flags & VFIO_DMA_BUF_TPH_ST))
+			return -EOPNOTSUPP;
+		*steering_tag = priv->steering_tag;
+		break;
+	case 16:
+		if (!(flags & VFIO_DMA_BUF_TPH_ST_EXT))
+			return -EOPNOTSUPP;
+		*steering_tag = priv->steering_tag_ext;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*ph = priv->ph;
+	return 0;
+}
+
 static void vfio_pci_dma_buf_unmap(struct dma_buf_attachment *attachment,
 				   struct sg_table *sgt,
 				   enum dma_data_direction dir)
@@ -101,6 +146,7 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
 
 static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
 	.attach = vfio_pci_dma_buf_attach,
+	.get_tph = vfio_pci_dma_buf_get_tph,
 	.map_dma_buf = vfio_pci_dma_buf_map,
 	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
 	.release = vfio_pci_dma_buf_release,
@@ -331,6 +377,71 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
 	return ret;
 }
 
+int vfio_pci_core_feature_dma_buf_tph(struct vfio_pci_core_device *vdev,
+				      u32 flags,
+				      struct vfio_device_feature_dma_buf_tph __user *arg,
+				      size_t argsz)
+{
+	struct vfio_device_feature_dma_buf_tph set_tph;
+	struct vfio_pci_dma_buf *priv;
+	struct dma_buf *dmabuf;
+	int ret;
+
+	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET,
+				 sizeof(set_tph));
+	if (ret != 1)
+		return ret;
+
+	if (copy_from_user(&set_tph, arg, sizeof(set_tph)))
+		return -EFAULT;
+
+	if (set_tph.reserved[0] || set_tph.reserved[1] || set_tph.reserved[2])
+		return -EINVAL;
+
+	if (set_tph.flags & ~(VFIO_DMA_BUF_TPH_ST | VFIO_DMA_BUF_TPH_ST_EXT))
+		return -EINVAL;
+
+	if (!set_tph.flags)
+		return -EINVAL;
+
+	/* PCIe TLP Processing Hint is a 2-bit field. */
+	if (set_tph.ph & ~0x3)
+		return -EINVAL;
+
+	dmabuf = dma_buf_get(set_tph.dmabuf_fd);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	if (dmabuf->ops != &vfio_pci_dmabuf_ops) {
+		ret = -EINVAL;
+		goto out_put;
+	}
+
+	priv = dmabuf->priv;
+	down_write(&vdev->memory_lock);
+	if (priv->vdev != vdev) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	priv->steering_tag = set_tph.steering_tag;
+	priv->steering_tag_ext = set_tph.steering_tag_ext;
+	priv->ph = set_tph.ph;
+	/*
+	 * Publish the TPH values before the gate flag, so that lockless
+	 * readers in vfio_pci_dma_buf_get_tph() see fully-initialized
+	 * fields once they observe a non-zero tph_flags.
+	 */
+	smp_store_release(&priv->tph_flags, set_tph.flags);
+	ret = 0;
+
+out_unlock:
+	up_write(&vdev->memory_lock);
+out_put:
+	dma_buf_put(dmabuf);
+	return ret;
+}
+
 void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
 {
 	struct vfio_pci_dma_buf *priv;
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index fca9d0dfac90..200dd061ea5d 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -118,6 +118,10 @@ static inline bool vfio_pci_is_vga(struct pci_dev *pdev)
 int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
 				  struct vfio_device_feature_dma_buf __user *arg,
 				  size_t argsz);
+int vfio_pci_core_feature_dma_buf_tph(struct vfio_pci_core_device *vdev,
+				      u32 flags,
+				      struct vfio_device_feature_dma_buf_tph __user *arg,
+				      size_t argsz);
 void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev);
 void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
 #else
@@ -128,6 +132,13 @@ vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
 {
 	return -ENOTTY;
 }
+static inline int
+vfio_pci_core_feature_dma_buf_tph(struct vfio_pci_core_device *vdev, u32 flags,
+				  struct vfio_device_feature_dma_buf_tph __user *arg,
+				  size_t argsz)
+{
+	return -ENOTTY;
+}
 static inline void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
 {
 }
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index d1203da56fc5..d6a1b44052fc 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -113,6 +113,27 @@ struct dma_buf_ops {
 	 */
 	void (*unpin)(struct dma_buf_attachment *attach);
 
+	/**
+	 * @get_tph:
+	 * @dmabuf: DMA buffer for which to retrieve TPH metadata
+	 * @steering_tag: Returns the raw TPH steering tag for @st_width
+	 * @ph: Returns the TPH processing hint (2-bit value)
+	 * @st_width: Consumer's supported steering tag width in bits (8 or 16)
+	 *
+	 * Return the TPH (TLP Processing Hints) metadata associated with this
+	 * DMA buffer for the requested steering-tag width. 8-bit ST and 16-bit
+	 * Extended ST are distinct namespaces in the PCIe TPH ST table, so the
+	 * exporter must select the value that matches @st_width and must not
+	 * substitute one for the other.
+	 *
+	 * Return 0 on success, -EOPNOTSUPP if no metadata is available for the
+	 * requested width, or -EINVAL if @st_width is not 8 or 16.
+	 *
+	 * This callback is optional.
+	 */
+	int (*get_tph)(struct dma_buf *dmabuf, u16 *steering_tag, u8 *ph,
+		       u8 st_width);
+
 	/**
 	 * @map_dma_buf:
 	 *
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 5de618a3a5ee..53b2bbd9fc1e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1534,6 +1534,41 @@ struct vfio_device_feature_dma_buf {
  */
 #define VFIO_DEVICE_FEATURE_MIG_PRECOPY_INFOv2  12
 
+/**
+ * Upon VFIO_DEVICE_FEATURE_SET associate TPH (TLP Processing Hints) metadata
+ * with a vfio-exported dma-buf. The dma-buf must have been created by
+ * VFIO_DEVICE_FEATURE_DMA_BUF on this device.
+ *
+ * dmabuf_fd is the file descriptor returned by VFIO_DEVICE_FEATURE_DMA_BUF.
+ *
+ * 8-bit ST (steering_tag) and 16-bit Extended ST (steering_tag_ext) are
+ * distinct namespaces in the PCIe TPH ST table; userspace should populate
+ * the value(s) it has from the firmware ST table for this device and set
+ * the matching VFIO_DMA_BUF_TPH_ST / VFIO_DMA_BUF_TPH_ST_EXT bit in @flags.
+ * An importer requests a specific width and receives the matching value;
+ * if the requested width is not present, the importer is told TPH is
+ * unavailable for this dma-buf.
+ *
+ * ph is the 2-bit TLP Processing Hint and must be in the range [0, 3].
+ *
+ * The user must set TPH on the dma-buf before the importer consumes it.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+#define VFIO_DEVICE_FEATURE_DMA_BUF_TPH 13
+
+#define VFIO_DMA_BUF_TPH_ST		(1 << 0)  /* steering_tag valid */
+#define VFIO_DMA_BUF_TPH_ST_EXT		(1 << 1)  /* steering_tag_ext valid */
+
+struct vfio_device_feature_dma_buf_tph {
+	__s32	dmabuf_fd;
+	__u32	flags;
+	__u16	steering_tag;
+	__u16	steering_tag_ext;
+	__u8	ph;
+	__u8	reserved[3];
+};
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
-- 
2.52.0



* [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr
  2026-05-12 18:47 [PATCH v3 0/2] vfio/dma-buf: add TPH support for peer-to-peer access Zhiping Zhang
  2026-05-12 18:47 ` [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature Zhiping Zhang
@ 2026-05-12 18:47 ` Zhiping Zhang
  2026-05-13  1:49   ` fengchengwen
  2026-05-14  0:39   ` sashiko-bot
  1 sibling, 2 replies; 8+ messages in thread
From: Zhiping Zhang @ 2026-05-12 18:47 UTC (permalink / raw)
  To: Alex Williamson, Jason Gunthorpe, Leon Romanovsky
  Cc: Bjorn Helgaas, kvm, linux-rdma, linux-pci, netdev, dri-devel,
	Keith Busch, Yochai Cohen, Yishai Hadas, Zhiping Zhang

Query dma-buf TPH metadata when registering a dma-buf MR for
peer-to-peer access and translate the raw steering tag into an mlx5 steering
tag index. Factor mlx5_st_alloc_index() so callers that already have a
raw steering tag can allocate the corresponding mlx5 index directly.
Keep the DMAH path as the first priority and only fall back to dma-buf
metadata when no DMAH is supplied.

Add pcie_tph_get_st_width() so the mlx5 IB driver can query the
device's negotiated ST width without poking pci_dev::tph_req_type
directly (that field is gated by CONFIG_PCIE_TPH and would otherwise
break !CONFIG_PCIE_TPH builds). Pass the width to the dma-buf
get_tph() callback so the exporter can return the value that matches
the consumer's capability.

Pass the dma_buf pointer that the umem already resolved into
get_tph_mr_dmabuf() instead of re-resolving the user-supplied fd.
Re-resolving opens a TOCTOU where a concurrent dup2() can substitute a
different dma_buf between umem creation and TPH lookup.

Track the per-MR ownership of the allocated mlx5 ST index on
mlx5_ib_mr (dmabuf_st_index / dmabuf_st_owned) and release it once the
firmware mkey no longer references it. Both the cached path
(mlx5r_umr_revoke_mr_with_lock + ib_frmr_pool_push) and the
destroy_mkey path call mlx5_ib_mr_put_dmabuf_st() so the ST index does
not leak when the MR is reused from the FRMR pool.

Initialize ret in mlx5_st_create() so the cached steering-tag path
returns success cleanly under clang builds.

Signed-off-by: Zhiping Zhang <zhipingz@meta.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |  6 ++
 drivers/infiniband/hw/mlx5/mr.c               | 72 ++++++++++++++++++-
 .../net/ethernet/mellanox/mlx5/core/lib/st.c  | 27 ++++---
 drivers/pci/tph.c                             | 20 ++++++
 include/linux/mlx5/driver.h                   |  7 ++
 include/linux/pci-tph.h                       |  2 +
 6 files changed, 124 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index e156dc4d7529..4ab867392267 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -721,6 +721,12 @@ struct mlx5_ib_mr {
 			u8 revoked :1;
 			/* Indicates previous dmabuf page fault occurred */
 			u8 dmabuf_faulted:1;
+			/* Set when the MR owns dmabuf_st_index and must
+			 * release it via mlx5_st_dealloc_index() once the
+			 * firmware mkey is no longer referencing it.
+			 */
+			u8 dmabuf_st_owned:1;
+			u16 dmabuf_st_index;
 			struct mlx5_ib_mkey null_mmkey;
 		};
 	};
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3b6da45061a5..84d570f7cafb 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -38,6 +38,7 @@
 #include <linux/delay.h>
 #include <linux/dma-buf.h>
 #include <linux/dma-resv.h>
+#include <linux/pci-tph.h>
 #include <rdma/frmr_pools.h>
 #include <rdma/ib_umem_odp.h>
 #include "dm.h"
@@ -46,6 +47,8 @@
 #include "data_direct.h"
 #include "dmah.h"
 
+MODULE_IMPORT_NS("DMA_BUF");
+
 static int mkey_max_umr_order(struct mlx5_ib_dev *dev)
 {
 	if (MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
@@ -899,6 +902,54 @@ static struct dma_buf_attach_ops mlx5_ib_dmabuf_attach_ops = {
 	.invalidate_mappings = mlx5_ib_dmabuf_invalidate_cb,
 };
 
+/*
+ * Query TPH metadata from @dmabuf and translate the raw steering tag into
+ * an mlx5 ST index. On success, *@st_index is updated and the caller
+ * becomes its owner (it must be released with mlx5_st_dealloc_index()
+ * once the firmware mkey no longer references it). On any failure,
+ * *@st_index and *@ph are left as the no-TPH defaults set by the caller.
+ *
+ * @dmabuf must already be referenced by the caller (e.g. via the umem's
+ * attachment) so we don't re-resolve the user's fd here and avoid a
+ * dup2() TOCTOU between umem creation and TPH lookup.
+ */
+static void get_tph_mr_dmabuf(struct mlx5_ib_dev *dev, struct dma_buf *dmabuf,
+			      u16 *st_index, u8 *ph)
+{
+	u16 steering_tag;
+	u8 st_width;
+	int ret;
+
+	if (!dmabuf->ops->get_tph)
+		return;
+
+	st_width = pcie_tph_get_st_width(dev->mdev->pdev);
+	if (!st_width)
+		return;
+
+	ret = dmabuf->ops->get_tph(dmabuf, &steering_tag, ph, st_width);
+	if (ret) {
+		mlx5_ib_dbg(dev, "get_tph failed (%d)\n", ret);
+		*ph = MLX5_IB_NO_PH;
+		return;
+	}
+
+	ret = mlx5_st_alloc_index_by_tag(dev->mdev, steering_tag, st_index);
+	if (ret) {
+		*ph = MLX5_IB_NO_PH;
+		mlx5_ib_dbg(dev, "st_alloc_index_by_tag failed (%d)\n", ret);
+	}
+}
+
+static void mlx5_ib_mr_put_dmabuf_st(struct mlx5_ib_mr *mr)
+{
+	if (mr->umem && mr->dmabuf_st_owned) {
+		mlx5_st_dealloc_index(mr_to_mdev(mr)->mdev,
+				      mr->dmabuf_st_index);
+		mr->dmabuf_st_owned = 0;
+	}
+}
+
 static struct ib_mr *
 reg_user_mr_dmabuf(struct ib_pd *pd, struct device *dma_device,
 		   u64 offset, u64 length, u64 virt_addr,
@@ -941,16 +992,26 @@ reg_user_mr_dmabuf(struct ib_pd *pd, struct device *dma_device,
 		ph = dmah->ph;
 		if (dmah->valid_fields & BIT(IB_DMAH_CPU_ID_EXISTS))
 			st_index = mdmah->st_index;
+	} else {
+		get_tph_mr_dmabuf(dev, umem_dmabuf->attach->dmabuf,
+				  &st_index, &ph);
 	}
 
 	mr = alloc_cacheable_mr(pd, &umem_dmabuf->umem, virt_addr,
 				access_flags, access_mode,
 				st_index, ph);
 	if (IS_ERR(mr)) {
+		if (!dmah && st_index != MLX5_MKC_PCIE_TPH_NO_STEERING_TAG_INDEX)
+			mlx5_st_dealloc_index(dev->mdev, st_index);
 		ib_umem_release(&umem_dmabuf->umem);
 		return ERR_CAST(mr);
 	}
 
+	if (!dmah && st_index != MLX5_MKC_PCIE_TPH_NO_STEERING_TAG_INDEX) {
+		mr->dmabuf_st_index = st_index;
+		mr->dmabuf_st_owned = 1;
+	}
+
 	mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key);
 
 	atomic_add(ib_umem_num_pages(mr->umem), &dev->mdev->priv.reg_pages);
@@ -1378,8 +1439,15 @@ static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr)
 	int ret;
 
 	if (mr->ibmr.frmr.pool && !mlx5r_umr_revoke_mr_with_lock(mr) &&
-	    !ib_frmr_pool_push(mr->ibmr.device, &mr->ibmr))
+	    !ib_frmr_pool_push(mr->ibmr.device, &mr->ibmr)) {
+		/*
+		 * The mkey has been revoked: firmware no longer references
+		 * dmabuf_st_index, so release it before this mr re-enters
+		 * the FRMR cache for reuse by another registration.
+		 */
+		mlx5_ib_mr_put_dmabuf_st(mr);
 		return 0;
+	}
 
 	if (is_odp)
 		mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
@@ -1400,6 +1468,8 @@ static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr)
 		dma_resv_unlock(
 			to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
 	}
+	if (!ret)
+		mlx5_ib_mr_put_dmabuf_st(mr);
 	return ret;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
index 997be91f0a13..c5058557c7f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
@@ -29,7 +29,7 @@ struct mlx5_st *mlx5_st_create(struct mlx5_core_dev *dev)
 	u8 direct_mode = 0;
 	u16 num_entries;
 	u32 tbl_loc;
-	int ret;
+	int ret = 0;
 
 	if (!MLX5_CAP_GEN(dev, mkey_pcie_tph))
 		return NULL;
@@ -92,23 +92,18 @@ void mlx5_st_destroy(struct mlx5_core_dev *dev)
 	kfree(st);
 }
 
-int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
-			unsigned int cpu_uid, u16 *st_index)
+int mlx5_st_alloc_index_by_tag(struct mlx5_core_dev *dev, u16 tag,
+			       u16 *st_index)
 {
 	struct mlx5_st_idx_data *idx_data;
 	struct mlx5_st *st = dev->st;
 	unsigned long index;
 	u32 xa_id;
-	u16 tag;
-	int ret;
+	int ret = 0;
 
 	if (!st)
 		return -EOPNOTSUPP;
 
-	ret = pcie_tph_get_cpu_st(dev->pdev, mem_type, cpu_uid, &tag);
-	if (ret)
-		return ret;
-
 	if (st->direct_mode) {
 		*st_index = tag;
 		return 0;
@@ -152,6 +147,20 @@ int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
 	mutex_unlock(&st->lock);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(mlx5_st_alloc_index_by_tag);
+
+int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
+			unsigned int cpu_uid, u16 *st_index)
+{
+	u16 tag;
+	int ret;
+
+	ret = pcie_tph_get_cpu_st(dev->pdev, mem_type, cpu_uid, &tag);
+	if (ret)
+		return ret;
+
+	return mlx5_st_alloc_index_by_tag(dev, tag, st_index);
+}
 EXPORT_SYMBOL_GPL(mlx5_st_alloc_index);
 
 int mlx5_st_dealloc_index(struct mlx5_core_dev *dev, u16 st_index)
diff --git a/drivers/pci/tph.c b/drivers/pci/tph.c
index 91145e8d9d95..644fb5b1f27c 100644
--- a/drivers/pci/tph.c
+++ b/drivers/pci/tph.c
@@ -174,6 +174,26 @@ u32 pcie_tph_get_st_table_loc(struct pci_dev *pdev)
 }
 EXPORT_SYMBOL(pcie_tph_get_st_table_loc);
 
+/**
+ * pcie_tph_get_st_width - Return the device's negotiated Steering Tag width
+ * @pdev: PCI device to query
+ *
+ * Return: 16 if the TPH Requester is enabled in Extended TPH mode, 8 if
+ * enabled in regular TPH mode, 0 if TPH is not enabled or supported.
+ */
+u8 pcie_tph_get_st_width(struct pci_dev *pdev)
+{
+	switch (pdev->tph_req_type) {
+	case PCI_TPH_REQ_TPH_ONLY:
+		return 8;
+	case PCI_TPH_REQ_EXT_TPH:
+		return 16;
+	default:
+		return 0;
+	}
+}
+EXPORT_SYMBOL(pcie_tph_get_st_width);
+
 /*
  * Return the size of ST table. If ST table is not in TPH Requester Extended
  * Capability space, return 0. Otherwise return the ST Table Size + 1.
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 04b96c5abb57..523a9ab0ae1e 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1166,10 +1166,17 @@ int mlx5_dm_sw_icm_dealloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type
 			   u64 length, u16 uid, phys_addr_t addr, u32 obj_id);
 
 #ifdef CONFIG_PCIE_TPH
+int mlx5_st_alloc_index_by_tag(struct mlx5_core_dev *dev, u16 tag,
+			       u16 *st_index);
 int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
 			unsigned int cpu_uid, u16 *st_index);
 int mlx5_st_dealloc_index(struct mlx5_core_dev *dev, u16 st_index);
 #else
+static inline int mlx5_st_alloc_index_by_tag(struct mlx5_core_dev *dev,
+					     u16 tag, u16 *st_index)
+{
+	return -EOPNOTSUPP;
+}
 static inline int mlx5_st_alloc_index(struct mlx5_core_dev *dev,
 				      enum tph_mem_type mem_type,
 				      unsigned int cpu_uid, u16 *st_index)
diff --git a/include/linux/pci-tph.h b/include/linux/pci-tph.h
index be68cd17f2f8..679f94f68cef 100644
--- a/include/linux/pci-tph.h
+++ b/include/linux/pci-tph.h
@@ -30,6 +30,7 @@ void pcie_disable_tph(struct pci_dev *pdev);
 int pcie_enable_tph(struct pci_dev *pdev, int mode);
 u16 pcie_tph_get_st_table_size(struct pci_dev *pdev);
 u32 pcie_tph_get_st_table_loc(struct pci_dev *pdev);
+u8 pcie_tph_get_st_width(struct pci_dev *pdev);
 #else
 static inline int pcie_tph_set_st_entry(struct pci_dev *pdev,
 					unsigned int index, u16 tag)
@@ -41,6 +42,7 @@ static inline int pcie_tph_get_cpu_st(struct pci_dev *dev,
 static inline void pcie_disable_tph(struct pci_dev *pdev) { }
 static inline int pcie_enable_tph(struct pci_dev *pdev, int mode)
 { return -EINVAL; }
+static inline u8 pcie_tph_get_st_width(struct pci_dev *pdev) { return 0; }
 #endif
 
 #endif /* LINUX_PCI_TPH_H */
-- 
2.52.0



* Re: [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature
  2026-05-12 18:47 ` [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature Zhiping Zhang
@ 2026-05-13  1:33   ` fengchengwen
  2026-05-14  0:05   ` sashiko-bot
  1 sibling, 0 replies; 8+ messages in thread
From: fengchengwen @ 2026-05-13  1:33 UTC (permalink / raw)
  To: Zhiping Zhang, Alex Williamson, Jason Gunthorpe, Leon Romanovsky
  Cc: Bjorn Helgaas, kvm, linux-rdma, linux-pci, netdev, dri-devel,
	Keith Busch, Yochai Cohen, Yishai Hadas

Hi Zhiping,

I have several suggestions:

1. In struct vfio_device_feature_dma_buf_tph, steering_tag is defined as
   __u16, but the PCIe TPH base steering tag is only 8 bits wide. We can
   use __u8 for steering_tag to shrink the structure and reduce reserved
   padding.

2. The flags field seems unnecessary. We can use a value of 0 in
   steering_tag or steering_tag_ext to indicate that the corresponding ST
   entry is not available, which would simplify the uAPI design.

3. All TPH metadata fields (st, ext st, ph) fit within 32 bits. We can
   wrap them into a union with atomic_t, then use atomic read/write
   instead of memory_lock plus smp_load_acquire/smp_store_release. This
   makes lockless access cleaner and avoids ordering maintenance.

For details, see my inline comments below.

On 5/13/2026 2:47 AM, Zhiping Zhang wrote:
> Add a dma-buf callback that returns raw TPH metadata from the exporter
> so peer devices can reuse the steering tag and processing hint
> associated with a VFIO-exported buffer. Add a new
> VFIO_DEVICE_FEATURE_DMA_BUF_TPH ioctl that takes the fd from
> VFIO_DEVICE_FEATURE_DMA_BUF along with the TPH values, validates the fd
> is a vfio-exported dma-buf belonging to this device, and stores the TPH
> metadata under memory_lock. The existing VFIO_DEVICE_FEATURE_DMA_BUF
> uAPI is unchanged.
> 
> 8-bit ST and 16-bit Extended ST are distinct namespaces in the PCIe TPH
> ST table (firmware reports them as separate fields with separate
> validity bits in the ACPI _DSM ST table), so the uAPI carries both
> values along with a flags field that indicates which value(s) are
> valid for this device. The exporter selects the value that matches the
> importer's requested width and returns -EOPNOTSUPP if that width is
> not present, instead of substituting a value across namespaces.
> 
> Publish the TPH fields under memory_lock and gate readers on a
> release/acquire on the flags field; this lets get_tph() run lockless
> and avoids inverting the memory_lock -> dma_resv_lock ordering set up
> by vfio_pci_dma_buf_move(). Convert the @revoked bitfield to a plain bool
> so concurrent updates of @revoked (under dma_resv_lock) and the new TPH
> fields (under memory_lock) cannot race on a shared bitfield byte.

The commit log includes many implementation details; why not remove or simplify them?

> 
> Signed-off-by: Zhiping Zhang <zhipingz@meta.com>
> 
> ---
>  drivers/vfio/pci/vfio_pci_core.c   |   3 +
>  drivers/vfio/pci/vfio_pci_dmabuf.c | 113 ++++++++++++++++++++++++++++-
>  drivers/vfio/pci/vfio_pci_priv.h   |  11 +++
>  include/linux/dma-buf.h            |  21 ++++++
>  include/uapi/linux/vfio.h          |  35 +++++++++
>  5 files changed, 182 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 3f8d093aacf8..94aa6dd95701 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -1534,6 +1534,9 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
>  		return vfio_pci_core_feature_token(vdev, flags, arg, argsz);
>  	case VFIO_DEVICE_FEATURE_DMA_BUF:
>  		return vfio_pci_core_feature_dma_buf(vdev, flags, arg, argsz);
> +	case VFIO_DEVICE_FEATURE_DMA_BUF_TPH:
> +		return vfio_pci_core_feature_dma_buf_tph(vdev, flags, arg,
> +							 argsz);
>  	default:
>  		return -ENOTTY;
>  	}
> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> index f87fd32e4a01..28247602e359 100644
> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> @@ -19,7 +19,23 @@ struct vfio_pci_dma_buf {
>  	u32 nr_ranges;
>  	struct kref kref;
>  	struct completion comp;
> -	u8 revoked : 1;
> +	/*
> +	 * TPH metadata published by VFIO_DEVICE_FEATURE_DMA_BUF_TPH and
> +	 * consumed by the @get_tph dma-buf callback.
> +	 *
> +	 * @tph_flags is the publish/consume gate: writers populate
> +	 * @steering_tag, @steering_tag_ext and @ph first, then store
> +	 * @tph_flags with smp_store_release(); readers do
> +	 * smp_load_acquire(&tph_flags) before accessing the value fields.
> +	 * @tph_flags == 0 means "TPH not set". Writers serialize via
> +	 * vdev->memory_lock; readers are lockless to avoid AB-BA against
> +	 * the dma_resv_lock held by importers.
> +	 */
> +	u32 tph_flags;

As subsequent comments, can proceed without tph_flags

> +	u16 steering_tag;
> +	u16 steering_tag_ext;
> +	u8 ph;

struct dma_buf_tph {
        union {
                atomic_t val;
                struct {
                        u16 st_ext;
                        u8 st;
                        u8 ph;
                };
        };
};
Set and get are then done with atomic operations, so no lock is needed.

> +	bool revoked;
>  };
>  
>  static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
> @@ -69,6 +85,35 @@ vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
>  	return ret;
>  }
>  

...

>  
> +	/**
> +	 * @get_tph:
> +	 * @dmabuf: DMA buffer for which to retrieve TPH metadata
> +	 * @steering_tag: Returns the raw TPH steering tag for @st_width
> +	 * @ph: Returns the TPH processing hint (2-bit value)
> +	 * @st_width: Consumer's supported steering tag width in bits (8 or 16)
> +	 *
> +	 * Return the TPH (TLP Processing Hints) metadata associated with this
> +	 * DMA buffer for the requested steering-tag width. 8-bit ST and 16-bit
> +	 * Extended ST are distinct namespaces in the PCIe TPH ST table, so the
> +	 * exporter must select the value that matches @st_width and must not
> +	 * substitute one for the other.
> +	 *
> +	 * Return 0 on success, -EOPNOTSUPP if no metadata is available for the
> +	 * requested width, or -EINVAL if @st_width is not 8 or 16.
> +	 *
> +	 * This callback is optional.
> +	 */
> +	int (*get_tph)(struct dma_buf *dmabuf, u16 *steering_tag, u8 *ph,
> +		       u8 st_width);

How about renaming steering_tag to st?

> +
>  	/**
>  	 * @map_dma_buf:
>  	 *
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 5de618a3a5ee..53b2bbd9fc1e 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1534,6 +1534,41 @@ struct vfio_device_feature_dma_buf {
>   */
>  #define VFIO_DEVICE_FEATURE_MIG_PRECOPY_INFOv2  12
>  
> +/**
> + * Upon VFIO_DEVICE_FEATURE_SET associate TPH (TLP Processing Hints) metadata
> + * with a vfio-exported dma-buf. The dma-buf must have been created by
> + * VFIO_DEVICE_FEATURE_DMA_BUF on this device.
> + *
> + * dmabuf_fd is the file descriptor returned by VFIO_DEVICE_FEATURE_DMA_BUF.
> + *
> + * 8-bit ST (steering_tag) and 16-bit Extended ST (steering_tag_ext) are
> + * distinct namespaces in the PCIe TPH ST table; userspace should populate
> + * the value(s) it has from the firmware ST table for this device and set
> + * the matching VFIO_DMA_BUF_TPH_ST / VFIO_DMA_BUF_TPH_ST_EXT bit in @flags.
> + * An importer requests a specific width and receives the matching value;
> + * if the requested width is not present, the importer is told TPH is
> + * unavailable for this dma-buf.
> + *
> + * ph is the 2-bit TLP Processing Hint and must be in the range [0, 3].
> + *
> + * The user must set TPH on the dma-buf before the importer consumes it.
> + *
> + * Return: 0 on success, -errno on failure.

Nit: from userspace this ioctl returns -1 with errno set on failure.

> + */
> +#define VFIO_DEVICE_FEATURE_DMA_BUF_TPH 13
> +
> +#define VFIO_DMA_BUF_TPH_ST		(1 << 0)  /* steering_tag valid */
> +#define VFIO_DMA_BUF_TPH_ST_EXT		(1 << 1)  /* steering_tag_ext valid */

Validity could instead be inferred by checking whether steering_tag / steering_tag_ext is zero.

> +
> +struct vfio_device_feature_dma_buf_tph {
> +	__s32	dmabuf_fd;
> +	__u32	flags;
> +	__u16	steering_tag;
> +	__u16	steering_tag_ext;
> +	__u8	ph;
> +	__u8	reserved[3];

How about:
struct vfio_device_feature_dma_buf_tph {
	__s32	dmabuf_fd;
	__u16	st_ext;
	__u8	st;
	__u8	ph;
};
A non-zero st_ext would mean it is valid, and likewise for the st field.

Thanks

> +};
> +
>  /* -------- API for Type1 VFIO IOMMU -------- */
>  
>  /**


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr
  2026-05-12 18:47 ` [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr Zhiping Zhang
@ 2026-05-13  1:49   ` fengchengwen
  2026-05-13  6:37     ` Zhiping Zhang
  2026-05-14  0:39   ` sashiko-bot
  1 sibling, 1 reply; 8+ messages in thread
From: fengchengwen @ 2026-05-13  1:49 UTC (permalink / raw)
  To: Zhiping Zhang, Alex Williamson, Jason Gunthorpe, Leon Romanovsky
  Cc: Bjorn Helgaas, kvm, linux-rdma, linux-pci, netdev, dri-devel,
	Keith Busch, Yochai Cohen, Yishai Hadas

On 5/13/2026 2:47 AM, Zhiping Zhang wrote:
> Query dma-buf TPH metadata when registering a dma-buf MR for peer to
> peer access and translate the raw steering tag into an mlx5 steering
> tag index. Factor mlx5_st_alloc_index() so callers that already have a
> raw steering tag can allocate the corresponding mlx5 index directly.
> Keep the DMAH path as the first priority and only fall back to dma-buf
> metadata when no DMAH is supplied.
> 
> Add pcie_tph_get_st_width() so the mlx5 IB driver can query the
> device's negotiated ST width without poking pci_dev::tph_req_type
> directly (that field is gated by CONFIG_PCIE_TPH and would otherwise
> break !CONFIG_PCIE_TPH builds). Pass the width to the dma-buf
> get_tph() callback so the exporter can return the value that matches
> the consumer's capability.

1\ I recommend committing the PCI/TPH modification separately.
2\ How about renaming it to pcie_tph_enabled_req_type()? Then we could
   reuse the already defined macros:
#define   PCI_TPH_REQ_DISABLE		0x0 /* No TPH requests allowed */
#define   PCI_TPH_REQ_TPH_ONLY		0x1 /* TPH only requests allowed */
#define   PCI_TPH_REQ_EXT_TPH		0x3 /* Extended TPH requests allowed */

> 
> Pass the dma_buf pointer that the umem already resolved into
> get_tph_mr_dmabuf() instead of re-resolving the user-supplied fd.
> Re-resolving opens a TOCTOU where a concurrent dup2() can substitute a
> different dma_buf between umem creation and TPH lookup.
> 
> Track the per-MR ownership of the allocated mlx5 ST index on
> mlx5_ib_mr (dmabuf_st_index / dmabuf_st_owned) and release it once the
> firmware mkey no longer references it. Both the cached path
> (mlx5r_umr_revoke_mr_with_lock + ib_frmr_pool_push) and the
> destroy_mkey path call mlx5_ib_mr_put_dmabuf_st() so the ST index does
> not leak when the MR is reused from the FRMR pool.
> 
> Initialize ret in mlx5_st_create() so the cached steering-tag path
> returns success cleanly under clang builds.
> 
> Signed-off-by: Zhiping Zhang <zhipingz@meta.com>
> ---
>  drivers/infiniband/hw/mlx5/mlx5_ib.h          |  6 ++
>  drivers/infiniband/hw/mlx5/mr.c               | 72 ++++++++++++++++++-
>  .../net/ethernet/mellanox/mlx5/core/lib/st.c  | 27 ++++---
>  drivers/pci/tph.c                             | 20 ++++++
>  include/linux/mlx5/driver.h                   |  7 ++
>  include/linux/pci-tph.h                       |  2 +
>  6 files changed, 124 insertions(+), 10 deletions(-)
> 

...


* Re: [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr
  2026-05-13  1:49   ` fengchengwen
@ 2026-05-13  6:37     ` Zhiping Zhang
  0 siblings, 0 replies; 8+ messages in thread
From: Zhiping Zhang @ 2026-05-13  6:37 UTC (permalink / raw)
  To: fengchengwen
  Cc: Alex Williamson, Jason Gunthorpe, Leon Romanovsky, Bjorn Helgaas,
	kvm, linux-rdma, linux-pci, netdev, dri-devel, Keith Busch,
	Yochai Cohen, Yishai Hadas

On Tue, May 12, 2026 at 6:49 PM fengchengwen <fengchengwen@huawei.com> wrote:
>
> >
> On 5/13/2026 2:47 AM, Zhiping Zhang wrote:
> > Query dma-buf TPH metadata when registering a dma-buf MR for peer to
> > peer access and translate the raw steering tag into an mlx5 steering
> > tag index. Factor mlx5_st_alloc_index() so callers that already have a
> > raw steering tag can allocate the corresponding mlx5 index directly.
> > Keep the DMAH path as the first priority and only fall back to dma-buf
> > metadata when no DMAH is supplied.
> >
> > Add pcie_tph_get_st_width() so the mlx5 IB driver can query the
> > device's negotiated ST width without poking pci_dev::tph_req_type
> > directly (that field is gated by CONFIG_PCIE_TPH and would otherwise
> > break !CONFIG_PCIE_TPH builds). Pass the width to the dma-buf
> > get_tph() callback so the exporter can return the value that matches
> > the consumer's capability.
>
> 1\ I recommend committing the PCI/TPH modification separately.
> 2\ How about renaming it to pcie_tph_enabled_req_type()? Then we could
>    reuse the already defined macros:
> #define   PCI_TPH_REQ_DISABLE           0x0 /* No TPH requests allowed */
> #define   PCI_TPH_REQ_TPH_ONLY          0x1 /* TPH only requests allowed */
> #define   PCI_TPH_REQ_EXT_TPH           0x3 /* Extended TPH requests allowed */
>

Hi Chengwen,
  Thanks for the great suggestions.
  1. Splitting the PCI/TPH helper change into a separate prep patch
sounds reasonable.
  2. I see your point about exposing the enabled TPH request type! I
want to take one more pass over the overall flow and switch to that
if I don't find any issues.

  Zhiping

> >
> > Pass the dma_buf pointer that the umem already resolved into
> > get_tph_mr_dmabuf() instead of re-resolving the user-supplied fd.
> > Re-resolving opens a TOCTOU where a concurrent dup2() can substitute a
> > different dma_buf between umem creation and TPH lookup.
> >
> > Track the per-MR ownership of the allocated mlx5 ST index on
> > mlx5_ib_mr (dmabuf_st_index / dmabuf_st_owned) and release it once the
> > firmware mkey no longer references it. Both the cached path
> > (mlx5r_umr_revoke_mr_with_lock + ib_frmr_pool_push) and the
> > destroy_mkey path call mlx5_ib_mr_put_dmabuf_st() so the ST index does
> > not leak when the MR is reused from the FRMR pool.
> >
> > Initialize ret in mlx5_st_create() so the cached steering-tag path
> > returns success cleanly under clang builds.
> >
> > Signed-off-by: Zhiping Zhang <zhipingz@meta.com>
> > ---
> >  drivers/infiniband/hw/mlx5/mlx5_ib.h          |  6 ++
> >  drivers/infiniband/hw/mlx5/mr.c               | 72 ++++++++++++++++++-
> >  .../net/ethernet/mellanox/mlx5/core/lib/st.c  | 27 ++++---
> >  drivers/pci/tph.c                             | 20 ++++++
> >  include/linux/mlx5/driver.h                   |  7 ++
> >  include/linux/pci-tph.h                       |  2 +
> >  6 files changed, 124 insertions(+), 10 deletions(-)
> >
>
> ...


* Re: [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature
  2026-05-12 18:47 ` [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature Zhiping Zhang
  2026-05-13  1:33   ` fengchengwen
@ 2026-05-14  0:05   ` sashiko-bot
  1 sibling, 0 replies; 8+ messages in thread
From: sashiko-bot @ 2026-05-14  0:05 UTC (permalink / raw)
  To: Zhiping Zhang; +Cc: linux-pci

Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [High] Data Race / Lockless Concurrency Bug in TPH metadata updates
- [Medium] Data race on `priv->vdev` ownership check during cross-device ioctl invocation
--

commit d00a3ff646f832b56279c8a55984e20630463a25
Author: Zhiping Zhang <zhipingz@meta.com>

vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature

Add a dma-buf callback that returns raw TPH metadata from the exporter
and a new VFIO_DEVICE_FEATURE_DMA_BUF_TPH ioctl to set it.

> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> index f87fd32e4a017..28247602e3599 100644
> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c

[ ... ]

> @@ -331,6 +377,71 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
>  	return ret;
>  }
>  
> +int vfio_pci_core_feature_dma_buf_tph(struct vfio_pci_core_device *vdev,
> +				      u32 flags,
> +				      struct vfio_device_feature_dma_buf_tph __user *arg,
> +				      size_t argsz)
> +{

[ ... ]

> +	if (dmabuf->ops != &vfio_pci_dmabuf_ops) {
> +		ret = -EINVAL;
> +		goto out_put;
> +	}
> +
> +	priv = dmabuf->priv;
> +	down_write(&vdev->memory_lock);
> +	if (priv->vdev != vdev) {

Can this race with a concurrent cleanup of another device?

If userspace passes a dmabuf from device A to the ioctl for device B,
device B's memory_lock is held here. If device A is concurrently closed,
vfio_pci_dma_buf_cleanup() could be setting priv->vdev = NULL while holding
device A's memory_lock, leading to a data race on priv->vdev.

Could we use READ_ONCE()/WRITE_ONCE() for priv->vdev, or acquire
dma_resv_lock() before the check?

> +		ret = -EINVAL;
> +		goto out_unlock;
> +	}
> +
> +	priv->steering_tag = set_tph.steering_tag;
> +	priv->steering_tag_ext = set_tph.steering_tag_ext;
> +	priv->ph = set_tph.ph;
> +	/*
> +	 * Publish the TPH values before the gate flag, so that lockless
> +	 * readers in vfio_pci_dma_buf_get_tph() see fully-initialized
> +	 * fields once they observe a non-zero tph_flags.
> +	 */
> +	smp_store_release(&priv->tph_flags, set_tph.flags);

What happens if this ioctl is called multiple times on the same dmabuf?

If priv->tph_flags is already non-zero from a previous call, a concurrent
lockless reader in vfio_pci_dma_buf_get_tph() could pass the
smp_load_acquire() check and read the metadata fields while they are being
updated here.

Does this cause a data race or torn reads?

Also, if the second call drops a previously set flag, could the reader
consume uninitialized data from set_tph?

Should we reject updates if priv->tph_flags is already set, or use a
seqlock_t to serialize lockless readers?

> +	ret = 0;
> +
> +out_unlock:
> +	up_write(&vdev->memory_lock);
> +out_put:
> +	dma_buf_put(dmabuf);
> +	return ret;
> +}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512184755.4137227-1-zhipingz@meta.com?part=1


* Re: [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr
  2026-05-12 18:47 ` [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr Zhiping Zhang
  2026-05-13  1:49   ` fengchengwen
@ 2026-05-14  0:39   ` sashiko-bot
  1 sibling, 0 replies; 8+ messages in thread
From: sashiko-bot @ 2026-05-14  0:39 UTC (permalink / raw)
  To: Zhiping Zhang; +Cc: linux-pci

Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [High] Use-after-handoff / Race condition of MR object after pushing to FRMR pool
- [High] Memory leak of `mlx5_st_idx_data` in `mlx5_st_dealloc_index()`
--

commit 2db4c0c9ad5981da4be78d996f10a6b611b880e0
Author: Zhiping Zhang <zhipingz@meta.com>
RDMA/mlx5: get tph for p2p access when registering dma-buf mr

This commit queries dma-buf TPH metadata when registering a dma-buf MR for
peer-to-peer access and translates the raw steering tag into an mlx5 ST
index. It also tracks the per-MR ownership of the allocated mlx5 ST index
on the MR and releases it once the firmware mkey no longer references it.

> diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> index 3b6da45061a55..84d570f7cafb9 100644
> --- a/drivers/infiniband/hw/mlx5/mr.c
> +++ b/drivers/infiniband/hw/mlx5/mr.c
[ ... ]
> @@ -1378,8 +1439,15 @@ static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr)
>  	int ret;
>  
>  	if (mr->ibmr.frmr.pool && !mlx5_umr_revoke_mr_with_lock(mr) &&
> -	    !ib_frmr_pool_push(mr->ibmr.device, &mr->ibmr))
> +	    !ib_frmr_pool_push(mr->ibmr.device, &mr->ibmr)) {
> +		/*
> +		 * The mkey has been revoked: firmware no longer references
> +		 * dmabuf_st_index, so release it before this mr re-enters
> +		 * the FRMR cache for reuse by another registration.
> +		 */
> +		mlx5_ib_mr_put_dmabuf_st(mr);
>  		return 0;
> +	}

Does this introduce a race condition with MR reuse?

If ib_frmr_pool_push() succeeds, the MR is immediately inserted into the
FRMR pool and becomes available for concurrent reuse by other threads.
However, the boolean short-circuit evaluation causes
mlx5_ib_mr_put_dmabuf_st(mr) to execute after the push completes.

If another thread pops the MR and begins reinitialization before
mlx5_ib_mr_put_dmabuf_st(mr) executes, could the original thread read and
modify the newly owned MR state concurrently? This seems like it could
result in freeing the wrong steering tag index and corrupting the MR state.

> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
> index 997be91f0a135..c5058557c7f03 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/st.c
[ ... ]
> @@ -152,6 +147,20 @@ int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
>  	mutex_unlock(&st->lock);
>  	return ret;
>  }
> +EXPORT_SYMBOL_GPL(mlx5_st_alloc_index_by_tag);
> +
> +int mlx5_st_alloc_index(struct mlx5_core_dev *dev, enum tph_mem_type mem_type,
> +			unsigned int cpu_uid, u16 *st_index)
> +{
> +	u16 tag;
> +	int ret;
> +
> +	ret = pcie_tph_get_cpu_st(dev->pdev, mem_type, cpu_uid, &tag);
> +	if (ret)
> +		return ret;
> +
> +	return mlx5_st_alloc_index_by_tag(dev, tag, st_index);
> +}
>  EXPORT_SYMBOL_GPL(mlx5_st_alloc_index);
>  
>  int mlx5_st_dealloc_index(struct mlx5_core_dev *dev, u16 st_index)

While looking at these changes which tie the ST allocation to the MR
lifecycle, it appears there is a pre-existing memory leak in
mlx5_st_dealloc_index():

drivers/net/ethernet/mellanox/mlx5/core/lib/st.c:mlx5_st_dealloc_index() {
	if (refcount_dec_and_test(&idx_data->usecount)) {
		xa_erase(&st->idx_xa, st_index);
	}
}

When the reference count drops to 0, xa_erase() is called to remove the
element from the XArray, but the underlying pointer is never freed with
kfree().

Because this patch introduces per-MR allocation and deallocation of ST
indices tied to the lifetime of dma-buf MRs, could a user space application
repeatedly register and deregister dmabuf MRs to silently leak kernel
memory and eventually exhaust system memory?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512184755.4137227-1-zhipingz@meta.com?part=2


end of thread, other threads:[~2026-05-14  0:39 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-12 18:47 [PATCH v3 0/2] vfio/dma-buf: add TPH support for peer-to-peer access Zhiping Zhang
2026-05-12 18:47 ` [PATCH v3 1/2] vfio: add dma-buf get_tph callback and DMA_BUF_TPH feature Zhiping Zhang
2026-05-13  1:33   ` fengchengwen
2026-05-14  0:05   ` sashiko-bot
2026-05-12 18:47 ` [PATCH v3 2/2] RDMA/mlx5: get tph for p2p access when registering dma-buf mr Zhiping Zhang
2026-05-13  1:49   ` fengchengwen
2026-05-13  6:37     ` Zhiping Zhang
2026-05-14  0:39   ` sashiko-bot
