* [PATCH 0/9] Refine virtio mapping API
@ 2025-07-01  1:13 Jason Wang
  2025-07-01  1:13 ` [PATCH 1/9] virtio_ring: constify virtqueue pointer for DMA helpers Jason Wang
                   ` (9 more replies)
  0 siblings, 10 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

Hi all:

Virtio used to be coupled with the DMA API. This works fine for devices
that do real DMA, but not for the others. For example, VDUSE needs to
craft its own DMA ops in order to let the virtio-vdpa driver work.

This series tries to solve this issue by introducing a mapping API in
the virtio core, so a transport like vDPA can implement its own mapping
logic without the need to hack around the DMA API. The mapping API is
abstracted as a new set of map operations so it can be re-used by a
transport or device. A device like VDUSE can then implement its own
mapping logic.
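
For reference, the heart of the series is the dispatch pattern added by
patch 6: each mapping call in the virtio core goes either through
transport/device supplied map operations or through the DMA API, keyed
on an opaque token. The coherent allocation helper from patch 6 shows
the shape (quoted from the diff below, comments added):

void *virtqueue_map_alloc_coherent(struct virtio_device *vdev,
                                   void *map_token, size_t size,
                                   dma_addr_t *map_handle, gfp_t gfp)
{
        if (vdev->map)          /* device/transport map ops */
                return vdev->map->alloc(map_token, size, map_handle, gfp);
        else                    /* the token is still the DMA device */
                return dma_alloc_coherent(map_token, size,
                                          map_handle, gfp);
}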

Please review.

Thanks

Jason Wang (9):
  virtio_ring: constify virtqueue pointer for DMA helpers
  virtio_ring: switch to use dma_{map|unmap}_page()
  virtio: rename dma helpers
  virtio: rename dma_dev to map_token
  virtio_ring: rename dma_handle to map_handle
  virtio: introduce map ops in virtio core
  vdpa: rename dma_dev to map_token
  vdpa: introduce map ops
  vduse: switch to use virtio map API instead of DMA API

 drivers/net/virtio_net.c                 |  32 +-
 drivers/vdpa/alibaba/eni_vdpa.c          |   5 +-
 drivers/vdpa/ifcvf/ifcvf_main.c          |   5 +-
 drivers/vdpa/octeon_ep/octep_vdpa_main.c |   6 +-
 drivers/vdpa/pds/vdpa_dev.c              |   3 +-
 drivers/vdpa/solidrun/snet_main.c        |   4 +-
 drivers/vdpa/vdpa.c                      |   5 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c         |   4 +-
 drivers/vdpa/vdpa_user/iova_domain.c     |   8 +-
 drivers/vdpa/vdpa_user/iova_domain.h     |   5 +-
 drivers/vdpa/vdpa_user/vduse_dev.c       |  34 +-
 drivers/vdpa/virtio_pci/vp_vdpa.c        |   5 +-
 drivers/vhost/vdpa.c                     |  11 +-
 drivers/virtio/virtio_ring.c             | 440 ++++++++++++++---------
 drivers/virtio/virtio_vdpa.c             |  15 +-
 include/linux/vdpa.h                     |  22 +-
 include/linux/virtio.h                   |  36 +-
 include/linux/virtio_config.h            |  68 ++++
 include/linux/virtio_ring.h              |   6 +-
 19 files changed, 476 insertions(+), 238 deletions(-)

-- 
2.34.1


* [PATCH 1/9] virtio_ring: constify virtqueue pointer for DMA helpers
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 2/9] virtio_ring: switch to use dma_{map|unmap}_page() Jason Wang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

This patch constifies the virtqueue pointer for the DMA helpers.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_ring.c | 25 +++++++++++++------------
 include/linux/virtio.h       | 12 ++++++------
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b784aab66867..291d93d4a613 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -3141,12 +3141,12 @@ EXPORT_SYMBOL_GPL(virtqueue_get_vring);
  *
  * return DMA address. Caller should check that by virtqueue_dma_mapping_error().
  */
-dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr,
+dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr,
 					  size_t size,
 					  enum dma_data_direction dir,
 					  unsigned long attrs)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!vq->use_dma_api) {
 		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
@@ -3168,11 +3168,12 @@ EXPORT_SYMBOL_GPL(virtqueue_dma_map_single_attrs);
  * Unmap the address that is mapped by the virtqueue_dma_map_* APIs.
  *
  */
-void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
+void virtqueue_dma_unmap_single_attrs(const struct virtqueue *_vq,
+				      dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!vq->use_dma_api)
 		return;
@@ -3188,9 +3189,9 @@ EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_single_attrs);
  *
  * Returns 0 means dma valid. Other means invalid dma address.
  */
-int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr)
+int virtqueue_dma_mapping_error(const struct virtqueue *_vq, dma_addr_t addr)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!vq->use_dma_api)
 		return 0;
@@ -3209,9 +3210,9 @@ EXPORT_SYMBOL_GPL(virtqueue_dma_mapping_error);
  *
  * return bool
  */
-bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr)
+bool virtqueue_dma_need_sync(const struct virtqueue *_vq, dma_addr_t addr)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!vq->use_dma_api)
 		return false;
@@ -3232,12 +3233,12 @@ EXPORT_SYMBOL_GPL(virtqueue_dma_need_sync);
  * the DMA address really needs to be synchronized
  *
  */
-void virtqueue_dma_sync_single_range_for_cpu(struct virtqueue *_vq,
+void virtqueue_dma_sync_single_range_for_cpu(const struct virtqueue *_vq,
 					     dma_addr_t addr,
 					     unsigned long offset, size_t size,
 					     enum dma_data_direction dir)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 	struct device *dev = vring_dma_dev(vq);
 
 	if (!vq->use_dma_api)
@@ -3258,12 +3259,12 @@ EXPORT_SYMBOL_GPL(virtqueue_dma_sync_single_range_for_cpu);
  * Before calling this function, use virtqueue_dma_need_sync() to confirm that
  * the DMA address really needs to be synchronized
  */
-void virtqueue_dma_sync_single_range_for_device(struct virtqueue *_vq,
+void virtqueue_dma_sync_single_range_for_device(const struct virtqueue *_vq,
 						dma_addr_t addr,
 						unsigned long offset, size_t size,
 						enum dma_data_direction dir)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
+	const struct vring_virtqueue *vq = to_vvq(_vq);
 	struct device *dev = vring_dma_dev(vq);
 
 	if (!vq->use_dma_api)
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 64cb4b04be7a..8c0a3165e754 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -259,18 +259,18 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
 
-dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr, size_t size,
+dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr, size_t size,
 					  enum dma_data_direction dir, unsigned long attrs);
-void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
+void virtqueue_dma_unmap_single_attrs(const struct virtqueue *_vq, dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs);
-int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr);
+int virtqueue_dma_mapping_error(const struct virtqueue *_vq, dma_addr_t addr);
 
-bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr);
-void virtqueue_dma_sync_single_range_for_cpu(struct virtqueue *_vq, dma_addr_t addr,
+bool virtqueue_dma_need_sync(const struct virtqueue *_vq, dma_addr_t addr);
+void virtqueue_dma_sync_single_range_for_cpu(const struct virtqueue *_vq, dma_addr_t addr,
 					     unsigned long offset, size_t size,
 					     enum dma_data_direction dir);
-void virtqueue_dma_sync_single_range_for_device(struct virtqueue *_vq, dma_addr_t addr,
+void virtqueue_dma_sync_single_range_for_device(const struct virtqueue *_vq, dma_addr_t addr,
 						unsigned long offset, size_t size,
 						enum dma_data_direction dir);
 
-- 
2.34.1


* [PATCH 2/9] virtio_ring: switch to use dma_{map|unmap}_page()
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
  2025-07-01  1:13 ` [PATCH 1/9] virtio_ring: constify virtqueue pointer for DMA helpers Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 3/9] virtio: rename dma helpers Jason Wang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

This patch switches to dma_{map|unmap}_page() to reduce the coverage
of DMA operations. This helps the following rework of the virtio map
operations.
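
For linear-mapped kernel memory the two forms are equivalent; below is
a minimal illustrative sketch (not part of the patch) of the identity
this relies on, and of why vmalloc memory must now be rejected up
front:

static dma_addr_t map_linear_buf(struct device *dev, void *ptr,
                                 size_t size, enum dma_data_direction dir)
{
        /* vmalloc memory has no single backing page */
        if (is_vmalloc_addr(ptr))
                return DMA_MAPPING_ERROR;
        /* on lowmem this is exactly dma_map_single(dev, ptr, size, dir) */
        return dma_map_page(dev, virt_to_page(ptr),
                            offset_in_page(ptr), size, dir);
}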

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_ring.c | 55 +++++++++++++++---------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 291d93d4a613..04d88502a685 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -405,8 +405,8 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
 	if (!vq->use_dma_api)
 		return (dma_addr_t)virt_to_phys(cpu_addr);
 
-	return dma_map_single(vring_dma_dev(vq),
-			      cpu_addr, size, direction);
+	return virtqueue_dma_map_single_attrs(&vq->vq, cpu_addr,
+					      size, direction, 0);
 }
 
 static int vring_mapping_error(const struct vring_virtqueue *vq,
@@ -451,22 +451,14 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 	if (flags & VRING_DESC_F_INDIRECT) {
 		if (!vq->use_dma_api)
 			goto out;
+	} else if (!vring_need_unmap_buffer(vq, extra))
+		goto out;
 
-		dma_unmap_single(vring_dma_dev(vq),
-				 extra->addr,
-				 extra->len,
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		if (!vring_need_unmap_buffer(vq, extra))
-			goto out;
-
-		dma_unmap_page(vring_dma_dev(vq),
-			       extra->addr,
-			       extra->len,
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       extra->addr,
+		       extra->len,
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 
 out:
 	return extra->next;
@@ -1276,20 +1268,13 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 	if (flags & VRING_DESC_F_INDIRECT) {
 		if (!vq->use_dma_api)
 			return;
+	} else if (!vring_need_unmap_buffer(vq, extra))
+		return;
 
-		dma_unmap_single(vring_dma_dev(vq),
-				 extra->addr, extra->len,
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		if (!vring_need_unmap_buffer(vq, extra))
-			return;
-
-		dma_unmap_page(vring_dma_dev(vq),
-			       extra->addr, extra->len,
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       extra->addr, extra->len,
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
 static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,
@@ -3153,7 +3138,13 @@ dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr
 		return (dma_addr_t)virt_to_phys(ptr);
 	}
 
-	return dma_map_single_attrs(vring_dma_dev(vq), ptr, size, dir, attrs);
+	/* DMA must never operate on areas that might be remapped. */
+	if (dev_WARN_ONCE(&_vq->vdev->dev, is_vmalloc_addr(ptr),
+			  "rejecting DMA map of vmalloc memory\n"))
+		return DMA_MAPPING_ERROR;
+
+	return dma_map_page_attrs(vring_dma_dev(vq), virt_to_page(ptr),
+				  offset_in_page(ptr), size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_dma_map_single_attrs);
 
@@ -3178,7 +3169,7 @@ void virtqueue_dma_unmap_single_attrs(const struct virtqueue *_vq,
 	if (!vq->use_dma_api)
 		return;
 
-	dma_unmap_single_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
+	dma_unmap_page_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_single_attrs);
 
-- 
2.34.1


* [PATCH 3/9] virtio: rename dma helpers
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
  2025-07-01  1:13 ` [PATCH 1/9] virtio_ring: constify virtqueue pointer for DMA helpers Jason Wang
  2025-07-01  1:13 ` [PATCH 2/9] virtio_ring: switch to use dma_{map|unmap}_page() Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 4/9] virtio: rename dma_dev to map_token Jason Wang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

A following patch will introduce virtio mapping functions to avoid
abusing the DMA API for devices that don't do DMA. To ease that
introduction, this patch renames "dma" to "map" in the current DMA
mapping helpers.
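
After the rename, a driver that premaps its own buffers follows the
same sequence as before, just with the new names. A hedged sketch
modeled on the virtio_net hunks below:

        dma_addr_t addr;

        addr = virtqueue_map_single_attrs(vq, buf, len, DMA_FROM_DEVICE, 0);
        if (virtqueue_map_mapping_error(vq, addr))
                return -ENOMEM;

        if (virtqueue_map_need_sync(vq, addr))
                virtqueue_map_sync_single_range_for_cpu(vq, addr, 0, len,
                                                        DMA_FROM_DEVICE);

        virtqueue_unmap_single_attrs(vq, addr, len, DMA_FROM_DEVICE, 0);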

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c     |  28 ++++-----
 drivers/virtio/virtio_ring.c | 114 +++++++++++++++++------------------
 include/linux/virtio.h       |  12 ++--
 3 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e53ba600605a..39bcb85335d5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -913,7 +913,7 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 	if (dma->need_sync && len) {
 		offset = buf - (head + sizeof(*dma));
 
-		virtqueue_dma_sync_single_range_for_cpu(rq->vq, dma->addr,
+		virtqueue_map_sync_single_range_for_cpu(rq->vq, dma->addr,
 							offset, len,
 							DMA_FROM_DEVICE);
 	}
@@ -921,8 +921,8 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 	if (dma->ref)
 		return;
 
-	virtqueue_dma_unmap_single_attrs(rq->vq, dma->addr, dma->len,
-					 DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+	virtqueue_unmap_single_attrs(rq->vq, dma->addr, dma->len,
+				     DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 	put_page(page);
 }
 
@@ -989,13 +989,13 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
 
 		dma->len = alloc_frag->size - sizeof(*dma);
 
-		addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
-						      dma->len, DMA_FROM_DEVICE, 0);
-		if (virtqueue_dma_mapping_error(rq->vq, addr))
+		addr = virtqueue_map_single_attrs(rq->vq, dma + 1,
+						  dma->len, DMA_FROM_DEVICE, 0);
+		if (virtqueue_map_mapping_error(rq->vq, addr))
 			return NULL;
 
 		dma->addr = addr;
-		dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
+		dma->need_sync = virtqueue_map_need_sync(rq->vq, addr);
 
 		/* Add a reference to dma to prevent the entire dma from
 		 * being released during error handling. This reference
@@ -5892,9 +5892,9 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 	if (!rq->xsk_buffs)
 		return -ENOMEM;
 
-	hdr_dma = virtqueue_dma_map_single_attrs(sq->vq, &xsk_hdr, vi->hdr_len,
-						 DMA_TO_DEVICE, 0);
-	if (virtqueue_dma_mapping_error(sq->vq, hdr_dma)) {
+	hdr_dma = virtqueue_map_single_attrs(sq->vq, &xsk_hdr, vi->hdr_len,
+					     DMA_TO_DEVICE, 0);
+	if (virtqueue_map_mapping_error(sq->vq, hdr_dma)) {
 		err = -ENOMEM;
 		goto err_free_buffs;
 	}
@@ -5923,8 +5923,8 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 err_rq:
 	xsk_pool_dma_unmap(pool, 0);
 err_xsk_map:
-	virtqueue_dma_unmap_single_attrs(rq->vq, hdr_dma, vi->hdr_len,
-					 DMA_TO_DEVICE, 0);
+	virtqueue_unmap_single_attrs(rq->vq, hdr_dma, vi->hdr_len,
+				     DMA_TO_DEVICE, 0);
 err_free_buffs:
 	kvfree(rq->xsk_buffs);
 	return err;
@@ -5951,8 +5951,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
 
 	xsk_pool_dma_unmap(pool, 0);
 
-	virtqueue_dma_unmap_single_attrs(sq->vq, sq->xsk_hdr_dma_addr,
-					 vi->hdr_len, DMA_TO_DEVICE, 0);
+	virtqueue_unmap_single_attrs(sq->vq, sq->xsk_hdr_dma_addr,
+				     vi->hdr_len, DMA_TO_DEVICE, 0);
 	kvfree(rq->xsk_buffs);
 
 	return err;
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 04d88502a685..5961e77db6dc 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -166,7 +166,7 @@ struct vring_virtqueue {
 	bool packed_ring;
 
 	/* Is DMA API used? */
-	bool use_dma_api;
+	bool use_map_api;
 
 	/* Can we use weak barriers? */
 	bool weak_barriers;
@@ -268,7 +268,7 @@ static bool virtqueue_use_indirect(const struct vring_virtqueue *vq,
  * unconditionally on data path.
  */
 
-static bool vring_use_dma_api(const struct virtio_device *vdev)
+static bool vring_use_map_api(const struct virtio_device *vdev)
 {
 	if (!virtio_has_dma_quirk(vdev))
 		return true;
@@ -291,14 +291,14 @@ static bool vring_use_dma_api(const struct virtio_device *vdev)
 static bool vring_need_unmap_buffer(const struct vring_virtqueue *vring,
 				    const struct vring_desc_extra *extra)
 {
-	return vring->use_dma_api && (extra->addr != DMA_MAPPING_ERROR);
+	return vring->use_map_api && (extra->addr != DMA_MAPPING_ERROR);
 }
 
 size_t virtio_max_dma_size(const struct virtio_device *vdev)
 {
 	size_t max_segment_size = SIZE_MAX;
 
-	if (vring_use_dma_api(vdev))
+	if (vring_use_map_api(vdev))
 		max_segment_size = dma_max_mapping_size(vdev->dev.parent);
 
 	return max_segment_size;
@@ -309,7 +309,7 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 			       dma_addr_t *dma_handle, gfp_t flag,
 			       struct device *dma_dev)
 {
-	if (vring_use_dma_api(vdev)) {
+	if (vring_use_map_api(vdev)) {
 		return dma_alloc_coherent(dma_dev, size,
 					  dma_handle, flag);
 	} else {
@@ -343,7 +343,7 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
 			     void *queue, dma_addr_t dma_handle,
 			     struct device *dma_dev)
 {
-	if (vring_use_dma_api(vdev))
+	if (vring_use_map_api(vdev))
 		dma_free_coherent(dma_dev, size, queue, dma_handle);
 	else
 		free_pages_exact(queue, PAGE_ALIGN(size));
@@ -372,7 +372,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 
 	*len = sg->length;
 
-	if (!vq->use_dma_api) {
+	if (!vq->use_map_api) {
 		/*
 		 * If DMA is not used, KMSAN doesn't know that the scatterlist
 		 * is initialized by the hardware. Explicitly check/unpoison it
@@ -402,17 +402,17 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
 				   void *cpu_addr, size_t size,
 				   enum dma_data_direction direction)
 {
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return (dma_addr_t)virt_to_phys(cpu_addr);
 
-	return virtqueue_dma_map_single_attrs(&vq->vq, cpu_addr,
-					      size, direction, 0);
+	return virtqueue_map_single_attrs(&vq->vq, cpu_addr,
+					  size, direction, 0);
 }
 
 static int vring_mapping_error(const struct vring_virtqueue *vq,
 			       dma_addr_t addr)
 {
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return 0;
 
 	return dma_mapping_error(vring_dma_dev(vq), addr);
@@ -449,7 +449,7 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 	flags = extra->flags;
 
 	if (flags & VRING_DESC_F_INDIRECT) {
-		if (!vq->use_dma_api)
+		if (!vq->use_map_api)
 			goto out;
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		goto out;
@@ -782,7 +782,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 
 		extra = (struct vring_desc_extra *)&indir_desc[num];
 
-		if (vq->use_dma_api) {
+		if (vq->use_map_api) {
 			for (j = 0; j < num; j++)
 				vring_unmap_one_split(vq, &extra[j]);
 		}
@@ -1150,7 +1150,7 @@ static struct virtqueue *__vring_new_virtqueue_split(unsigned int index,
 	vq->broken = false;
 #endif
 	vq->dma_dev = dma_dev;
-	vq->use_dma_api = vring_use_dma_api(vdev);
+	vq->use_map_api = vring_use_map_api(vdev);
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!context;
@@ -1266,7 +1266,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 	flags = extra->flags;
 
 	if (flags & VRING_DESC_F_INDIRECT) {
-		if (!vq->use_dma_api)
+		if (!vq->use_map_api)
 			return;
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		return;
@@ -1351,7 +1351,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 			desc[i].addr = cpu_to_le64(addr);
 			desc[i].len = cpu_to_le32(len);
 
-			if (unlikely(vq->use_dma_api)) {
+			if (unlikely(vq->use_map_api)) {
 				extra[i].addr = premapped ? DMA_MAPPING_ERROR : addr;
 				extra[i].len = len;
 				extra[i].flags = n < out_sgs ?  0 : VRING_DESC_F_WRITE;
@@ -1373,7 +1373,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 				sizeof(struct vring_packed_desc));
 	vq->packed.vring.desc[head].id = cpu_to_le16(id);
 
-	if (vq->use_dma_api) {
+	if (vq->use_map_api) {
 		vq->packed.desc_extra[id].addr = addr;
 		vq->packed.desc_extra[id].len = total_sg *
 				sizeof(struct vring_packed_desc);
@@ -1515,7 +1515,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 			desc[i].len = cpu_to_le32(len);
 			desc[i].id = cpu_to_le16(id);
 
-			if (unlikely(vq->use_dma_api)) {
+			if (unlikely(vq->use_map_api)) {
 				vq->packed.desc_extra[curr].addr = premapped ?
 					DMA_MAPPING_ERROR : addr;
 				vq->packed.desc_extra[curr].len = len;
@@ -1650,7 +1650,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	vq->free_head = id;
 	vq->vq.num_free += state->num;
 
-	if (unlikely(vq->use_dma_api)) {
+	if (unlikely(vq->use_map_api)) {
 		curr = id;
 		for (i = 0; i < state->num; i++) {
 			vring_unmap_extra_packed(vq,
@@ -1668,7 +1668,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (!desc)
 			return;
 
-		if (vq->use_dma_api) {
+		if (vq->use_map_api) {
 			len = vq->packed.desc_extra[id].len;
 			num = len / sizeof(struct vring_packed_desc);
 
@@ -2121,7 +2121,7 @@ static struct virtqueue *__vring_new_virtqueue_packed(unsigned int index,
 #endif
 	vq->packed_ring = true;
 	vq->dma_dev = dma_dev;
-	vq->use_dma_api = vring_use_dma_api(vdev);
+	vq->use_map_api = vring_use_map_api(vdev);
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!context;
@@ -2429,7 +2429,7 @@ struct device *virtqueue_dma_dev(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (vq->use_dma_api)
+	if (vq->use_map_api)
 		return vring_dma_dev(vq);
 	else
 		return NULL;
@@ -3114,7 +3114,7 @@ const struct vring *virtqueue_get_vring(const struct virtqueue *vq)
 EXPORT_SYMBOL_GPL(virtqueue_get_vring);
 
 /**
- * virtqueue_dma_map_single_attrs - map DMA for _vq
+ * virtqueue_map_single_attrs - map DMA for _vq
  * @_vq: the struct virtqueue we're talking about.
  * @ptr: the pointer of the buffer to do dma
  * @size: the size of the buffer to do dma
@@ -3124,16 +3124,16 @@ EXPORT_SYMBOL_GPL(virtqueue_get_vring);
  * The caller calls this to do dma mapping in advance. The DMA address can be
  * passed to this _vq when it is in pre-mapped mode.
  *
- * return DMA address. Caller should check that by virtqueue_dma_mapping_error().
+ * return DMA address. Caller should check that by virtqueue_map_mapping_error().
  */
-dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr,
-					  size_t size,
-					  enum dma_data_direction dir,
-					  unsigned long attrs)
+dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr,
+				      size_t size,
+				      enum dma_data_direction dir,
+				      unsigned long attrs)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!vq->use_dma_api) {
+	if (!vq->use_map_api) {
 		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
 		return (dma_addr_t)virt_to_phys(ptr);
 	}
@@ -3146,85 +3146,85 @@ dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr
 	return dma_map_page_attrs(vring_dma_dev(vq), virt_to_page(ptr),
 				  offset_in_page(ptr), size, dir, attrs);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_map_single_attrs);
+EXPORT_SYMBOL_GPL(virtqueue_map_single_attrs);
 
 /**
- * virtqueue_dma_unmap_single_attrs - unmap DMA for _vq
+ * virtqueue_unmap_single_attrs - unmap a mapping for _vq
  * @_vq: the struct virtqueue we're talking about.
  * @addr: the dma address to unmap
  * @size: the size of the buffer
  * @dir: DMA direction
  * @attrs: DMA Attrs
  *
- * Unmap the address that is mapped by the virtqueue_dma_map_* APIs.
+ * Unmap the address that is mapped by the virtqueue_map_* APIs.
  *
  */
-void virtqueue_dma_unmap_single_attrs(const struct virtqueue *_vq,
-				      dma_addr_t addr,
-				      size_t size, enum dma_data_direction dir,
-				      unsigned long attrs)
+void virtqueue_unmap_single_attrs(const struct virtqueue *_vq,
+				  dma_addr_t addr,
+				  size_t size, enum dma_data_direction dir,
+				  unsigned long attrs)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return;
 
 	dma_unmap_page_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_unmap_single_attrs);
+EXPORT_SYMBOL_GPL(virtqueue_unmap_single_attrs);
 
 /**
- * virtqueue_dma_mapping_error - check dma address
+ * virtqueue_map_mapping_error - check dma address
  * @_vq: the struct virtqueue we're talking about.
  * @addr: DMA address
  *
  * Returns 0 means dma valid. Other means invalid dma address.
  */
-int virtqueue_dma_mapping_error(const struct virtqueue *_vq, dma_addr_t addr)
+int virtqueue_map_mapping_error(const struct virtqueue *_vq, dma_addr_t addr)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return 0;
 
 	return dma_mapping_error(vring_dma_dev(vq), addr);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_mapping_error);
+EXPORT_SYMBOL_GPL(virtqueue_map_mapping_error);
 
 /**
- * virtqueue_dma_need_sync - check a dma address needs sync
+ * virtqueue_map_need_sync - check a dma address needs sync
  * @_vq: the struct virtqueue we're talking about.
  * @addr: DMA address
  *
- * Check if the dma address mapped by the virtqueue_dma_map_* APIs needs to be
+ * Check if the dma address mapped by the virtqueue_map_* APIs needs to be
  * synchronized
  *
  * return bool
  */
-bool virtqueue_dma_need_sync(const struct virtqueue *_vq, dma_addr_t addr)
+bool virtqueue_map_need_sync(const struct virtqueue *_vq, dma_addr_t addr)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return false;
 
 	return dma_need_sync(vring_dma_dev(vq), addr);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_need_sync);
+EXPORT_SYMBOL_GPL(virtqueue_map_need_sync);
 
 /**
- * virtqueue_dma_sync_single_range_for_cpu - dma sync for cpu
+ * virtqueue_map_sync_single_range_for_cpu - map sync for cpu
  * @_vq: the struct virtqueue we're talking about.
  * @addr: DMA address
  * @offset: DMA address offset
  * @size: buf size for sync
  * @dir: DMA direction
  *
- * Before calling this function, use virtqueue_dma_need_sync() to confirm that
+ * Before calling this function, use virtqueue_map_need_sync() to confirm that
  * the DMA address really needs to be synchronized
  *
  */
-void virtqueue_dma_sync_single_range_for_cpu(const struct virtqueue *_vq,
+void virtqueue_map_sync_single_range_for_cpu(const struct virtqueue *_vq,
 					     dma_addr_t addr,
 					     unsigned long offset, size_t size,
 					     enum dma_data_direction dir)
@@ -3232,25 +3232,25 @@ void virtqueue_dma_sync_single_range_for_cpu(const struct virtqueue *_vq,
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 	struct device *dev = vring_dma_dev(vq);
 
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return;
 
 	dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(virtqueue_map_sync_single_range_for_cpu);
 
 /**
- * virtqueue_dma_sync_single_range_for_device - dma sync for device
+ * virtqueue_map_sync_single_range_for_device - map sync for device
  * @_vq: the struct virtqueue we're talking about.
  * @addr: DMA address
  * @offset: DMA address offset
  * @size: buf size for sync
  * @dir: DMA direction
  *
- * Before calling this function, use virtqueue_dma_need_sync() to confirm that
+ * Before calling this function, use virtqueue_map_need_sync() to confirm that
  * the DMA address really needs to be synchronized
  */
-void virtqueue_dma_sync_single_range_for_device(const struct virtqueue *_vq,
+void virtqueue_map_sync_single_range_for_device(const struct virtqueue *_vq,
 						dma_addr_t addr,
 						unsigned long offset, size_t size,
 						enum dma_data_direction dir)
@@ -3258,12 +3258,12 @@ void virtqueue_dma_sync_single_range_for_device(const struct virtqueue *_vq,
 	const struct vring_virtqueue *vq = to_vvq(_vq);
 	struct device *dev = vring_dma_dev(vq);
 
-	if (!vq->use_dma_api)
+	if (!vq->use_map_api)
 		return;
 
 	dma_sync_single_range_for_device(dev, addr, offset, size, dir);
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_sync_single_range_for_device);
+EXPORT_SYMBOL_GPL(virtqueue_map_sync_single_range_for_device);
 
 MODULE_DESCRIPTION("Virtio ring implementation");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 8c0a3165e754..0371b500ed19 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -259,18 +259,18 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
 
-dma_addr_t virtqueue_dma_map_single_attrs(const struct virtqueue *_vq, void *ptr, size_t size,
+dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr, size_t size,
 					  enum dma_data_direction dir, unsigned long attrs);
-void virtqueue_dma_unmap_single_attrs(const struct virtqueue *_vq, dma_addr_t addr,
+void virtqueue_unmap_single_attrs(const struct virtqueue *_vq, dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs);
-int virtqueue_dma_mapping_error(const struct virtqueue *_vq, dma_addr_t addr);
+int virtqueue_map_mapping_error(const struct virtqueue *_vq, dma_addr_t addr);
 
-bool virtqueue_dma_need_sync(const struct virtqueue *_vq, dma_addr_t addr);
-void virtqueue_dma_sync_single_range_for_cpu(const struct virtqueue *_vq, dma_addr_t addr,
+bool virtqueue_map_need_sync(const struct virtqueue *_vq, dma_addr_t addr);
+void virtqueue_map_sync_single_range_for_cpu(const struct virtqueue *_vq, dma_addr_t addr,
 					     unsigned long offset, size_t size,
 					     enum dma_data_direction dir);
-void virtqueue_dma_sync_single_range_for_device(const struct virtqueue *_vq, dma_addr_t addr,
+void virtqueue_map_sync_single_range_for_device(const struct virtqueue *_vq, dma_addr_t addr,
 						unsigned long offset, size_t size,
 						enum dma_data_direction dir);
 
-- 
2.34.1


* [PATCH 4/9] virtio: rename dma_dev to map_token
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (2 preceding siblings ...)
  2025-07-01  1:13 ` [PATCH 3/9] virtio: rename dma helpers Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 5/9] virtio_ring: rename dma_handle to map_handle Jason Wang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

A following patch will introduce mapping operations for virtio
devices, so this patch renames dma_dev to map_token to match that
rework. The idea is to allow the transport layer to pass a device
specific mapping token which will be used as a parameter for the
virtio mapping operations.
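
A caller that previously asked for the per-vq DMA device now asks for
the opaque token instead; for transports that keep using the DMA API
the token is still the struct device. A sketch modeled on the
virtio_net hunk below:

        void *token = virtqueue_map_token(rq->vq);

        /* NULL means this vq does not use the mapping API */
        if (!token || token != virtqueue_map_token(sq->vq))
                return -EINVAL;
        /* for DMA transports the token is still the struct device */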

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c     |   4 +-
 drivers/virtio/virtio_ring.c | 130 +++++++++++++++++------------------
 drivers/virtio/virtio_vdpa.c |   2 +-
 include/linux/virtio.h       |   2 +-
 include/linux/virtio_ring.h  |   6 +-
 5 files changed, 72 insertions(+), 72 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 39bcb85335d5..43711e4cc381 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -5879,10 +5879,10 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 	 * But vq->dma_dev allows every vq has the respective dma dev. So I
 	 * check the dma dev of vq and sq is the same dev.
 	 */
-	if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq))
+	if (virtqueue_map_token(rq->vq) != virtqueue_map_token(sq->vq))
 		return -EINVAL;
 
-	dma_dev = virtqueue_dma_dev(rq->vq);
+	dma_dev = virtqueue_map_token(rq->vq);
 	if (!dma_dev)
 		return -EINVAL;
 
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 5961e77db6dc..5f17f8d91f1a 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -210,8 +210,8 @@ struct vring_virtqueue {
 	/* DMA, allocation, and size information */
 	bool we_own_ring;
 
-	/* Device used for doing DMA */
-	struct device *dma_dev;
+	/* Transport specific token used for doing map */
+	void *map_token;
 
 #ifdef DEBUG
 	/* They're supposed to lock for us. */
@@ -307,10 +307,10 @@ EXPORT_SYMBOL_GPL(virtio_max_dma_size);
 
 static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 			       dma_addr_t *dma_handle, gfp_t flag,
-			       struct device *dma_dev)
+			       void *map_token)
 {
 	if (vring_use_map_api(vdev)) {
-		return dma_alloc_coherent(dma_dev, size,
+		return dma_alloc_coherent(map_token, size,
 					  dma_handle, flag);
 	} else {
 		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
@@ -341,22 +341,22 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 
 static void vring_free_queue(struct virtio_device *vdev, size_t size,
 			     void *queue, dma_addr_t dma_handle,
-			     struct device *dma_dev)
+			     void *map_token)
 {
 	if (vring_use_map_api(vdev))
-		dma_free_coherent(dma_dev, size, queue, dma_handle);
+		dma_free_coherent(map_token, size, queue, dma_handle);
 	else
 		free_pages_exact(queue, PAGE_ALIGN(size));
 }
 
 /*
- * The DMA ops on various arches are rather gnarly right now, and
- * making all of the arch DMA ops work on the vring device itself
+ * The map ops on various arches are rather gnarly right now, and
+ * making all of the arch map ops work on the vring device itself
  * is a mess.
  */
-static struct device *vring_dma_dev(const struct vring_virtqueue *vq)
+static void *vring_map_token(const struct vring_virtqueue *vq)
 {
-	return vq->dma_dev;
+	return vq->map_token;
 }
 
 /* Map one sg entry. */
@@ -388,11 +388,11 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 	 * the way it expects (we don't guarantee that the scatterlist
 	 * will exist for the lifetime of the mapping).
 	 */
-	*addr = dma_map_page(vring_dma_dev(vq),
+	*addr = dma_map_page(vring_map_token(vq),
 			    sg_page(sg), sg->offset, sg->length,
 			    direction);
 
-	if (dma_mapping_error(vring_dma_dev(vq), *addr))
+	if (dma_mapping_error(vring_map_token(vq), *addr))
 		return -ENOMEM;
 
 	return 0;
@@ -415,7 +415,7 @@ static int vring_mapping_error(const struct vring_virtqueue *vq,
 	if (!vq->use_map_api)
 		return 0;
 
-	return dma_mapping_error(vring_dma_dev(vq), addr);
+	return dma_mapping_error(vring_map_token(vq), addr);
 }
 
 static void virtqueue_init(struct vring_virtqueue *vq, u32 num)
@@ -454,7 +454,7 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		goto out;
 
-	dma_unmap_page(vring_dma_dev(vq),
+	dma_unmap_page(vring_map_token(vq),
 		       extra->addr,
 		       extra->len,
 		       (flags & VRING_DESC_F_WRITE) ?
@@ -1056,12 +1056,12 @@ static int vring_alloc_state_extra_split(struct vring_virtqueue_split *vring_spl
 }
 
 static void vring_free_split(struct vring_virtqueue_split *vring_split,
-			     struct virtio_device *vdev, struct device *dma_dev)
+			     struct virtio_device *vdev, void *map_token)
 {
 	vring_free_queue(vdev, vring_split->queue_size_in_bytes,
 			 vring_split->vring.desc,
 			 vring_split->queue_dma_addr,
-			 dma_dev);
+			 map_token);
 
 	kfree(vring_split->desc_state);
 	kfree(vring_split->desc_extra);
@@ -1072,7 +1072,7 @@ static int vring_alloc_queue_split(struct vring_virtqueue_split *vring_split,
 				   u32 num,
 				   unsigned int vring_align,
 				   bool may_reduce_num,
-				   struct device *dma_dev)
+				   void *map_token)
 {
 	void *queue = NULL;
 	dma_addr_t dma_addr;
@@ -1088,7 +1088,7 @@ static int vring_alloc_queue_split(struct vring_virtqueue_split *vring_split,
 		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
 					  &dma_addr,
 					  GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO,
-					  dma_dev);
+					  map_token);
 		if (queue)
 			break;
 		if (!may_reduce_num)
@@ -1102,7 +1102,7 @@ static int vring_alloc_queue_split(struct vring_virtqueue_split *vring_split,
 		/* Try to get a single page. You are my only hope! */
 		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
 					  &dma_addr, GFP_KERNEL | __GFP_ZERO,
-					  dma_dev);
+					  map_token);
 	}
 	if (!queue)
 		return -ENOMEM;
@@ -1126,7 +1126,7 @@ static struct virtqueue *__vring_new_virtqueue_split(unsigned int index,
 					       bool (*notify)(struct virtqueue *),
 					       void (*callback)(struct virtqueue *),
 					       const char *name,
-					       struct device *dma_dev)
+					       void *map_token)
 {
 	struct vring_virtqueue *vq;
 	int err;
@@ -1149,7 +1149,7 @@ static struct virtqueue *__vring_new_virtqueue_split(unsigned int index,
 #else
 	vq->broken = false;
 #endif
-	vq->dma_dev = dma_dev;
+	vq->map_token = map_token;
 	vq->use_map_api = vring_use_map_api(vdev);
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
@@ -1187,21 +1187,21 @@ static struct virtqueue *vring_create_virtqueue_split(
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
 	const char *name,
-	struct device *dma_dev)
+	void *map_token)
 {
 	struct vring_virtqueue_split vring_split = {};
 	struct virtqueue *vq;
 	int err;
 
 	err = vring_alloc_queue_split(&vring_split, vdev, num, vring_align,
-				      may_reduce_num, dma_dev);
+				      may_reduce_num, map_token);
 	if (err)
 		return NULL;
 
 	vq = __vring_new_virtqueue_split(index, &vring_split, vdev, weak_barriers,
-				   context, notify, callback, name, dma_dev);
+				   context, notify, callback, name, map_token);
 	if (!vq) {
-		vring_free_split(&vring_split, vdev, dma_dev);
+		vring_free_split(&vring_split, vdev, map_token);
 		return NULL;
 	}
 
@@ -1220,7 +1220,7 @@ static int virtqueue_resize_split(struct virtqueue *_vq, u32 num)
 	err = vring_alloc_queue_split(&vring_split, vdev, num,
 				      vq->split.vring_align,
 				      vq->split.may_reduce_num,
-				      vring_dma_dev(vq));
+				      vring_map_token(vq));
 	if (err)
 		goto err;
 
@@ -1238,7 +1238,7 @@ static int virtqueue_resize_split(struct virtqueue *_vq, u32 num)
 	return 0;
 
 err_state_extra:
-	vring_free_split(&vring_split, vdev, vring_dma_dev(vq));
+	vring_free_split(&vring_split, vdev, vring_map_token(vq));
 err:
 	virtqueue_reinit_split(vq);
 	return -ENOMEM;
@@ -1271,7 +1271,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		return;
 
-	dma_unmap_page(vring_dma_dev(vq),
+	dma_unmap_page(vring_map_token(vq),
 		       extra->addr, extra->len,
 		       (flags & VRING_DESC_F_WRITE) ?
 		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
@@ -1947,25 +1947,25 @@ static struct vring_desc_extra *vring_alloc_desc_extra(unsigned int num)
 
 static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
 			      struct virtio_device *vdev,
-			      struct device *dma_dev)
+			      void *map_token)
 {
 	if (vring_packed->vring.desc)
 		vring_free_queue(vdev, vring_packed->ring_size_in_bytes,
 				 vring_packed->vring.desc,
 				 vring_packed->ring_dma_addr,
-				 dma_dev);
+				 map_token);
 
 	if (vring_packed->vring.driver)
 		vring_free_queue(vdev, vring_packed->event_size_in_bytes,
 				 vring_packed->vring.driver,
 				 vring_packed->driver_event_dma_addr,
-				 dma_dev);
+				 map_token);
 
 	if (vring_packed->vring.device)
 		vring_free_queue(vdev, vring_packed->event_size_in_bytes,
 				 vring_packed->vring.device,
 				 vring_packed->device_event_dma_addr,
-				 dma_dev);
+				 map_token);
 
 	kfree(vring_packed->desc_state);
 	kfree(vring_packed->desc_extra);
@@ -1973,7 +1973,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
 
 static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 				    struct virtio_device *vdev,
-				    u32 num, struct device *dma_dev)
+				    u32 num, void *map_token)
 {
 	struct vring_packed_desc *ring;
 	struct vring_packed_desc_event *driver, *device;
@@ -1985,7 +1985,7 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 	ring = vring_alloc_queue(vdev, ring_size_in_bytes,
 				 &ring_dma_addr,
 				 GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO,
-				 dma_dev);
+				 map_token);
 	if (!ring)
 		goto err;
 
@@ -1998,7 +1998,7 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 	driver = vring_alloc_queue(vdev, event_size_in_bytes,
 				   &driver_event_dma_addr,
 				   GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO,
-				   dma_dev);
+				   map_token);
 	if (!driver)
 		goto err;
 
@@ -2009,7 +2009,7 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 	device = vring_alloc_queue(vdev, event_size_in_bytes,
 				   &device_event_dma_addr,
 				   GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO,
-				   dma_dev);
+				   map_token);
 	if (!device)
 		goto err;
 
@@ -2021,7 +2021,7 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
 	return 0;
 
 err:
-	vring_free_packed(vring_packed, vdev, dma_dev);
+	vring_free_packed(vring_packed, vdev, map_token);
 	return -ENOMEM;
 }
 
@@ -2097,7 +2097,7 @@ static struct virtqueue *__vring_new_virtqueue_packed(unsigned int index,
 					       bool (*notify)(struct virtqueue *),
 					       void (*callback)(struct virtqueue *),
 					       const char *name,
-					       struct device *dma_dev)
+					       void *map_token)
 {
 	struct vring_virtqueue *vq;
 	int err;
@@ -2120,7 +2120,7 @@ static struct virtqueue *__vring_new_virtqueue_packed(unsigned int index,
 	vq->broken = false;
 #endif
 	vq->packed_ring = true;
-	vq->dma_dev = dma_dev;
+	vq->map_token = map_token;
 	vq->use_map_api = vring_use_map_api(vdev);
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
@@ -2158,18 +2158,18 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
 	const char *name,
-	struct device *dma_dev)
+	void *map_token)
 {
 	struct vring_virtqueue_packed vring_packed = {};
 	struct virtqueue *vq;
 
-	if (vring_alloc_queue_packed(&vring_packed, vdev, num, dma_dev))
+	if (vring_alloc_queue_packed(&vring_packed, vdev, num, map_token))
 		return NULL;
 
 	vq = __vring_new_virtqueue_packed(index, &vring_packed, vdev, weak_barriers,
-					context, notify, callback, name, dma_dev);
+					context, notify, callback, name, map_token);
 	if (!vq) {
-		vring_free_packed(&vring_packed, vdev, dma_dev);
+		vring_free_packed(&vring_packed, vdev, map_token);
 		return NULL;
 	}
 
@@ -2185,7 +2185,7 @@ static int virtqueue_resize_packed(struct virtqueue *_vq, u32 num)
 	struct virtio_device *vdev = _vq->vdev;
 	int err;
 
-	if (vring_alloc_queue_packed(&vring_packed, vdev, num, vring_dma_dev(vq)))
+	if (vring_alloc_queue_packed(&vring_packed, vdev, num, vring_map_token(vq)))
 		goto err_ring;
 
 	err = vring_alloc_state_extra_packed(&vring_packed);
@@ -2202,7 +2202,7 @@ static int virtqueue_resize_packed(struct virtqueue *_vq, u32 num)
 	return 0;
 
 err_state_extra:
-	vring_free_packed(&vring_packed, vdev, vring_dma_dev(vq));
+	vring_free_packed(&vring_packed, vdev, vring_map_token(vq));
 err_ring:
 	virtqueue_reinit_packed(vq);
 	return -ENOMEM;
@@ -2420,21 +2420,21 @@ int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_premapped);
 
 /**
- * virtqueue_dma_dev - get the dma dev
+ * virtqueue_map_token - get the transport specific map token
  * @_vq: the struct virtqueue we're talking about.
  *
- * Returns the dma dev. That can been used for dma api.
+ * Returns the map token. It can be used with the map API.
  */
-struct device *virtqueue_dma_dev(struct virtqueue *_vq)
+void *virtqueue_map_token(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (vq->use_map_api)
-		return vring_dma_dev(vq);
+		return vring_map_token(vq);
 	else
 		return NULL;
 }
-EXPORT_SYMBOL_GPL(virtqueue_dma_dev);
+EXPORT_SYMBOL_GPL(virtqueue_map_token);
 
 /**
  * virtqueue_kick_prepare - first half of split virtqueue_kick call.
@@ -2727,7 +2727,7 @@ struct virtqueue *vring_create_virtqueue(
 }
 EXPORT_SYMBOL_GPL(vring_create_virtqueue);
 
-struct virtqueue *vring_create_virtqueue_dma(
+struct virtqueue *vring_create_virtqueue_map(
 	unsigned int index,
 	unsigned int num,
 	unsigned int vring_align,
@@ -2738,19 +2738,19 @@ struct virtqueue *vring_create_virtqueue_dma(
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
 	const char *name,
-	struct device *dma_dev)
+	void *map_token)
 {
 
 	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
 		return vring_create_virtqueue_packed(index, num, vring_align,
 				vdev, weak_barriers, may_reduce_num,
-				context, notify, callback, name, dma_dev);
+				context, notify, callback, name, map_token);
 
 	return vring_create_virtqueue_split(index, num, vring_align,
 			vdev, weak_barriers, may_reduce_num,
-			context, notify, callback, name, dma_dev);
+			context, notify, callback, name, map_token);
 }
-EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma);
+EXPORT_SYMBOL_GPL(vring_create_virtqueue_map);
 
 /**
  * virtqueue_resize - resize the vring of vq
@@ -2886,19 +2886,19 @@ static void vring_free(struct virtqueue *_vq)
 					 vq->packed.ring_size_in_bytes,
 					 vq->packed.vring.desc,
 					 vq->packed.ring_dma_addr,
-					 vring_dma_dev(vq));
+					 vring_map_token(vq));
 
 			vring_free_queue(vq->vq.vdev,
 					 vq->packed.event_size_in_bytes,
 					 vq->packed.vring.driver,
 					 vq->packed.driver_event_dma_addr,
-					 vring_dma_dev(vq));
+					 vring_map_token(vq));
 
 			vring_free_queue(vq->vq.vdev,
 					 vq->packed.event_size_in_bytes,
 					 vq->packed.vring.device,
 					 vq->packed.device_event_dma_addr,
-					 vring_dma_dev(vq));
+					 vring_map_token(vq));
 
 			kfree(vq->packed.desc_state);
 			kfree(vq->packed.desc_extra);
@@ -2907,7 +2907,7 @@ static void vring_free(struct virtqueue *_vq)
 					 vq->split.queue_size_in_bytes,
 					 vq->split.vring.desc,
 					 vq->split.queue_dma_addr,
-					 vring_dma_dev(vq));
+					 vring_map_token(vq));
 		}
 	}
 	if (!vq->packed_ring) {
@@ -3143,7 +3143,7 @@ dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr,
 			  "rejecting DMA map of vmalloc memory\n"))
 		return DMA_MAPPING_ERROR;
 
-	return dma_map_page_attrs(vring_dma_dev(vq), virt_to_page(ptr),
+	return dma_map_page_attrs(vring_map_token(vq), virt_to_page(ptr),
 				  offset_in_page(ptr), size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_single_attrs);
@@ -3169,7 +3169,7 @@ void virtqueue_unmap_single_attrs(const struct virtqueue *_vq,
 	if (!vq->use_map_api)
 		return;
 
-	dma_unmap_page_attrs(vring_dma_dev(vq), addr, size, dir, attrs);
+	dma_unmap_page_attrs(vring_map_token(vq), addr, size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_unmap_single_attrs);
 
@@ -3187,7 +3187,7 @@ int virtqueue_map_mapping_error(const struct virtqueue *_vq, dma_addr_t addr)
 	if (!vq->use_map_api)
 		return 0;
 
-	return dma_mapping_error(vring_dma_dev(vq), addr);
+	return dma_mapping_error(vring_map_token(vq), addr);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_mapping_error);
 
@@ -3208,7 +3208,7 @@ bool virtqueue_map_need_sync(const struct virtqueue *_vq, dma_addr_t addr)
 	if (!vq->use_map_api)
 		return false;
 
-	return dma_need_sync(vring_dma_dev(vq), addr);
+	return dma_need_sync(vring_map_token(vq), addr);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_need_sync);
 
@@ -3230,7 +3230,7 @@ void virtqueue_map_sync_single_range_for_cpu(const struct virtqueue *_vq,
 					     enum dma_data_direction dir)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
-	struct device *dev = vring_dma_dev(vq);
+	struct device *dev = vring_map_token(vq);
 
 	if (!vq->use_map_api)
 		return;
@@ -3256,7 +3256,7 @@ void virtqueue_map_sync_single_range_for_device(const struct virtqueue *_vq,
 						enum dma_data_direction dir)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
-	struct device *dev = vring_dma_dev(vq);
+	struct device *dev = vring_map_token(vq);
 
 	if (!vq->use_map_api)
 		return;
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 1f60c9d5cb18..59b53032f1e2 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -205,7 +205,7 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
 		dma_dev = ops->get_vq_dma_dev(vdpa, index);
 	else
 		dma_dev = vdpa_get_dma_dev(vdpa);
-	vq = vring_create_virtqueue_dma(index, max_num, align, vdev,
+	vq = vring_create_virtqueue_map(index, max_num, align, vdev,
 					true, may_reduce_num, ctx,
 					notify, callback, name, dma_dev);
 	if (!vq) {
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 0371b500ed19..3812661d3761 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -74,7 +74,7 @@ int virtqueue_add_sgs(struct virtqueue *vq,
 		      void *data,
 		      gfp_t gfp);
 
-struct device *virtqueue_dma_dev(struct virtqueue *vq);
+void *virtqueue_map_token(struct virtqueue *vq);
 
 bool virtqueue_kick(struct virtqueue *vq);
 
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 9b33df741b63..a995bca6785f 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -79,9 +79,9 @@ struct virtqueue *vring_create_virtqueue(unsigned int index,
 
 /*
  * Creates a virtqueue and allocates the descriptor ring with per
- * virtqueue DMA device.
+ * virtqueue mapping operations.
  */
-struct virtqueue *vring_create_virtqueue_dma(unsigned int index,
+struct virtqueue *vring_create_virtqueue_map(unsigned int index,
 					     unsigned int num,
 					     unsigned int vring_align,
 					     struct virtio_device *vdev,
@@ -91,7 +91,7 @@ struct virtqueue *vring_create_virtqueue_dma(unsigned int index,
 					     bool (*notify)(struct virtqueue *vq),
 					     void (*callback)(struct virtqueue *vq),
 					     const char *name,
-					     struct device *dma_dev);
+					     void *map_token);
 
 /*
  * Creates a virtqueue with a standard layout but a caller-allocated
-- 
2.34.1


* [PATCH 5/9] virtio_ring: rename dma_handle to map_handle
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (3 preceding siblings ...)
  2025-07-01  1:13 ` [PATCH 4/9] virtio: rename dma_dev to map_token Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 6/9] virtio: introduce map ops in virtio core Jason Wang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

A following patch will introduce virtio map operations, which means
the address is not necessarily used for DMA. Let's rename dma_handle
to map_handle first.
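
The contract is unchanged: the allocator hands back a CPU pointer plus
a bus address in *map_handle; the rename only stops the name from
implying that the handle came from the DMA API. An illustrative sketch
of a call, matching the signature in the hunk below:

        dma_addr_t map_handle;
        void *queue;

        queue = vring_alloc_queue(vdev, size, &map_handle,
                                  GFP_KERNEL | __GFP_ZERO, map_token);
        if (!queue)
                return -ENOMEM;
        /* map_handle (not a CPU address) is what the device gets to see */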

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_ring.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 5f17f8d91f1a..04e754874bec 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -306,18 +306,18 @@ size_t virtio_max_dma_size(const struct virtio_device *vdev)
 EXPORT_SYMBOL_GPL(virtio_max_dma_size);
 
 static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
-			       dma_addr_t *dma_handle, gfp_t flag,
+			       dma_addr_t *map_handle, gfp_t flag,
 			       void *map_token)
 {
 	if (vring_use_map_api(vdev)) {
 		return dma_alloc_coherent(map_token, size,
-					  dma_handle, flag);
+					  map_handle, flag);
 	} else {
 		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
 
 		if (queue) {
 			phys_addr_t phys_addr = virt_to_phys(queue);
-			*dma_handle = (dma_addr_t)phys_addr;
+			*map_handle = (dma_addr_t)phys_addr;
 
 			/*
 			 * Sanity check: make sure we dind't truncate
@@ -330,7 +330,7 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 			 * warning and abort if we end up with an
 			 * unrepresentable address.
 			 */
-			if (WARN_ON_ONCE(*dma_handle != phys_addr)) {
+			if (WARN_ON_ONCE(*map_handle != phys_addr)) {
 				free_pages_exact(queue, PAGE_ALIGN(size));
 				return NULL;
 			}
@@ -340,11 +340,11 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 }
 
 static void vring_free_queue(struct virtio_device *vdev, size_t size,
-			     void *queue, dma_addr_t dma_handle,
+			     void *queue, dma_addr_t map_handle,
 			     void *map_token)
 {
 	if (vring_use_map_api(vdev))
-		dma_free_coherent(map_token, size, queue, dma_handle);
+		dma_free_coherent(map_token, size, queue, map_handle);
 	else
 		free_pages_exact(queue, PAGE_ALIGN(size));
 }
-- 
2.34.1


* [PATCH 6/9] virtio: introduce map ops in virtio core
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (4 preceding siblings ...)
  2025-07-01  1:13 ` [PATCH 5/9] virtio_ring: rename dma_handle to map_handle Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01  1:13 ` [PATCH 7/9] vdpa: rename dma_dev to map_token Jason Wang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

This patch introduces map operations for virtio devices. Virtio used
to assume the DMA API, which is not necessarily valid since some
devices don't do DMA. Instead of using tricks and abusing the DMA API,
let's simply abstract the current mapping logic into virtio specific
mapping operations. A device or transport that doesn't do DMA can then
implement its own mapping logic without the need to trick the DMA
core. In this case the map_token is opaque to the virtio core and is
passed back to the transport or device specific map operations. For
other devices the DMA API is still used, so the map token remains the
DMA device, to minimize the changeset and the performance impact.

The mapping operations are abstracted as an independent structure
instead of being added to virtio_config_ops. This allows a transport
to simply reuse the structure for lower layers.

A set of new mapping helpers is introduced for devices that want to do
the mapping by themselves.
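
For orientation, here is a hedged sketch of the new ops structure.
Only ->alloc() and ->max_mapping_size() are visible in the hunks
quoted here; the remaining member names are assumptions inferred from
the new helpers (virtqueue_map_page_attrs() and friends) and may
differ from the actual include/linux/virtio_config.h definition:

struct virtio_map_ops {
        /* visible in the hunks below */
        void *(*alloc)(void *token, size_t size,
                       dma_addr_t *map_handle, gfp_t gfp);
        size_t (*max_mapping_size)(void *token);
        /* assumed members, named after the new helpers */
        void (*free)(void *token, size_t size, void *vaddr,
                     dma_addr_t map_handle);
        dma_addr_t (*map_page)(void *token, struct page *page,
                               unsigned long offset, size_t size,
                               enum dma_data_direction dir,
                               unsigned long attrs);
        void (*unmap_page)(void *token, dma_addr_t map_handle,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs);
        int (*mapping_error)(void *token, dma_addr_t map_handle);
        bool (*need_sync)(void *token, dma_addr_t map_handle);
};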

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_ring.c  | 174 +++++++++++++++++++++++++++++-----
 include/linux/virtio.h        |  22 +++++
 include/linux/virtio_config.h |  68 +++++++++++++
 3 files changed, 238 insertions(+), 26 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 04e754874bec..40b2f526832e 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -298,8 +298,14 @@ size_t virtio_max_dma_size(const struct virtio_device *vdev)
 {
 	size_t max_segment_size = SIZE_MAX;
 
-	if (vring_use_map_api(vdev))
-		max_segment_size = dma_max_mapping_size(vdev->dev.parent);
+	if (vring_use_map_api(vdev)) {
+		if (vdev->map)
+			max_segment_size =
+				vdev->map->max_mapping_size(vdev->dev.parent);
+		else
+			max_segment_size =
+				dma_max_mapping_size(vdev->dev.parent);
+	}
 
 	return max_segment_size;
 }
@@ -310,8 +316,8 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
 			       void *map_token)
 {
 	if (vring_use_map_api(vdev)) {
-		return dma_alloc_coherent(map_token, size,
-					  map_handle, flag);
+		return virtqueue_map_alloc_coherent(vdev, map_token, size,
+						    map_handle, flag);
 	} else {
 		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
 
@@ -344,7 +350,8 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
 			     void *map_token)
 {
 	if (vring_use_map_api(vdev))
-		dma_free_coherent(map_token, size, queue, map_handle);
+		virtqueue_map_free_coherent(vdev, map_token, size,
+					    queue, map_handle);
 	else
 		free_pages_exact(queue, PAGE_ALIGN(size));
 }
@@ -388,9 +395,9 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 	 * the way it expects (we don't guarantee that the scatterlist
 	 * will exist for the lifetime of the mapping).
 	 */
-	*addr = dma_map_page(vring_map_token(vq),
-			    sg_page(sg), sg->offset, sg->length,
-			    direction);
+	*addr = virtqueue_map_page_attrs(&vq->vq, sg_page(sg),
+					 sg->offset, sg->length,
+					 direction, 0);
 
 	if (dma_mapping_error(vring_map_token(vq), *addr))
 		return -ENOMEM;
@@ -454,11 +461,12 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		goto out;
 
-	dma_unmap_page(vring_map_token(vq),
-		       extra->addr,
-		       extra->len,
-		       (flags & VRING_DESC_F_WRITE) ?
-		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	virtqueue_unmap_page_attrs(&vq->vq,
+				   extra->addr,
+				   extra->len,
+				   (flags & VRING_DESC_F_WRITE) ?
+				   DMA_FROM_DEVICE : DMA_TO_DEVICE,
+				   0);
 
 out:
 	return extra->next;
@@ -1271,10 +1279,11 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 	} else if (!vring_need_unmap_buffer(vq, extra))
 		return;
 
-	dma_unmap_page(vring_map_token(vq),
-		       extra->addr, extra->len,
-		       (flags & VRING_DESC_F_WRITE) ?
-		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	virtqueue_unmap_page_attrs(&vq->vq,
+				   extra->addr, extra->len,
+				   (flags & VRING_DESC_F_WRITE) ?
+				   DMA_FROM_DEVICE : DMA_TO_DEVICE,
+				   0);
 }
 
 static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,
@@ -3113,6 +3122,105 @@ const struct vring *virtqueue_get_vring(const struct virtqueue *vq)
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_vring);
 
+/**
+ * virtqueue_map_alloc_coherent - alloc coherent mapping
+ * @vdev: the virtio device we are talking to
+ * @map_token: device specific mapping token
+ * @size: the size of the buffer
+ * @map_handle: the pointer to the mapped address
+ * @gfp: allocation flag (GFP_XXX)
+ *
+ * Returns the virtual address or NULL on error
+ */
+void *virtqueue_map_alloc_coherent(struct virtio_device *vdev,
+				   void *map_token, size_t size,
+				   dma_addr_t *map_handle, gfp_t gfp)
+{
+	if (vdev->map)
+		return vdev->map->alloc(map_token, size, map_handle, gfp);
+	else
+		return dma_alloc_coherent(map_token, size,
+					  map_handle, gfp);
+}
+EXPORT_SYMBOL_GPL(virtqueue_map_alloc_coherent);
+
+/**
+ * virtqueue_map_free_coherent - free coherent mapping
+ * @vdev: the virtio device we are talking to
+ * @map_token: device specific mapping token
+ * @size: the size of the buffer
+ * @vaddr: the virtual address of the buffer
+ * @map_handle: the mapped address that needs to be freed
+ *
+ */
+void virtqueue_map_free_coherent(struct virtio_device *vdev,
+				 void *map_token, size_t size, void *vaddr,
+				 dma_addr_t map_handle)
+{
+	if (vdev->map)
+		vdev->map->free(map_token, size, vaddr, map_handle, 0);
+	else
+		dma_free_coherent(map_token, size, vaddr, map_handle);
+}
+EXPORT_SYMBOL_GPL(virtqueue_map_free_coherent);
+
+/**
+ * virtqueue_map_page_attrs - map a page to the device
+ * @_vq: the virtqueue we are talking to
+ * @page: the page that will be mapped by the device
+ * @offset: the offset in the page for a buffer
+ * @size: the buffer size
+ * @dir: mapping direction
+ * @attrs: mapping attributes
+ *
+ * Returns the mapped address. The caller should check it with
+ * virtqueue_map_mapping_error().
+ */
+dma_addr_t virtqueue_map_page_attrs(const struct virtqueue *_vq,
+				    struct page *page,
+				    unsigned long offset,
+				    size_t size,
+				    enum dma_data_direction dir,
+				    unsigned long attrs)
+{
+	const struct vring_virtqueue *vq = to_vvq(_vq);
+	struct virtio_device *vdev = _vq->vdev;
+	void *map_token = vring_map_token(vq);
+
+	if (vdev->map)
+		return vdev->map->map_page(map_token,
+					   page, offset, size,
+					   dir, attrs);
+
+	return dma_map_page_attrs(map_token,
+				  page, offset, size,
+				  dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_map_page_attrs);
+
+/**
+ * virtqueue_unmap_page_attrs - unmap a page from the device
+ * @_vq: the virtqueue we are talking to
+ * @map_handle: the mapped address
+ * @size: the buffer size
+ * @dir: mapping direction
+ * @attrs: unmapping attributes
+ */
+void virtqueue_unmap_page_attrs(const struct virtqueue *_vq,
+				dma_addr_t map_handle,
+				size_t size, enum dma_data_direction dir,
+				unsigned long attrs)
+{
+	const struct vring_virtqueue *vq = to_vvq(_vq);
+	struct virtio_device *vdev = _vq->vdev;
+	void *map_token = vring_map_token(vq);
+
+	if (vdev->map)
+		vdev->map->unmap_page(map_token, map_handle,
+				      size, dir, attrs);
+	else
+		dma_unmap_page_attrs(map_token, map_handle, size, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(virtqueue_unmap_page_attrs);
+
 /**
  * virtqueue_map_single_attrs - map DMA for _vq
  * @_vq: the struct virtqueue we're talking about.
@@ -3124,7 +3232,7 @@ EXPORT_SYMBOL_GPL(virtqueue_get_vring);
  * The caller calls this to do dma mapping in advance. The DMA address can be
  * passed to this _vq when it is in pre-mapped mode.
  *
- * return DMA address. Caller should check that by virtqueue_mapping_error().
+ * return the mapped address. The caller should check it with
+ * virtqueue_map_mapping_error().
  */
 dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr,
 				      size_t size,
@@ -3143,8 +3251,8 @@ dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr,
 			  "rejecting DMA map of vmalloc memory\n"))
 		return DMA_MAPPING_ERROR;
 
-	return dma_map_page_attrs(vring_map_token(vq), virt_to_page(ptr),
-				  offset_in_page(ptr), size, dir, attrs);
+	return virtqueue_map_page_attrs(&vq->vq, virt_to_page(ptr),
+					offset_in_page(ptr), size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_single_attrs);
 
@@ -3169,7 +3277,7 @@ void virtqueue_unmap_single_attrs(const struct virtqueue *_vq,
 	if (!vq->use_map_api)
 		return;
 
-	dma_unmap_page_attrs(vring_map_token(vq), addr, size, dir, attrs);
+	virtqueue_unmap_page_attrs(_vq, addr, size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(virtqueue_unmap_single_attrs);
 
@@ -3204,11 +3312,16 @@ EXPORT_SYMBOL_GPL(virtqueue_map_mapping_error);
 bool virtqueue_map_need_sync(const struct virtqueue *_vq, dma_addr_t addr)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
+	struct virtio_device *vdev = _vq->vdev;
+	void *token = vring_map_token(vq);
 
 	if (!vq->use_map_api)
 		return false;
 
-	return dma_need_sync(vring_map_token(vq), addr);
+	if (vdev->map)
+		return vdev->map->need_sync(token, addr);
+	else
+		return dma_need_sync(token, addr);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_need_sync);
 
@@ -3230,12 +3343,16 @@ void virtqueue_map_sync_single_range_for_cpu(const struct virtqueue *_vq,
 					     enum dma_data_direction dir)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
-	struct device *dev = vring_map_token(vq);
+	struct virtio_device *vdev = _vq->vdev;
+	void *token = vring_map_token(vq);
 
 	if (!vq->use_map_api)
 		return;
 
-	dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
+	if (vdev->map)
+		vdev->map->sync_single_for_cpu(token, addr + offset, size, dir);
+	else
+		dma_sync_single_range_for_cpu(token, addr, offset, size, dir);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_sync_single_range_for_cpu);
 
@@ -3256,12 +3373,17 @@ void virtqueue_map_sync_single_range_for_device(const struct virtqueue *_vq,
 						enum dma_data_direction dir)
 {
 	const struct vring_virtqueue *vq = to_vvq(_vq);
-	struct device *dev = vring_map_token(vq);
+	struct virtio_device *vdev = _vq->vdev;
+	void *token = vring_map_token(vq);
 
 	if (!vq->use_map_api)
 		return;
 
-	dma_sync_single_range_for_device(dev, addr, offset, size, dir);
+	if (vdev->map)
+		vdev->map->sync_single_for_device(token, addr + offset,
+						  size, dir);
+	else
+		dma_sync_single_range_for_device(token, addr, offset, size, dir);
 }
 EXPORT_SYMBOL_GPL(virtqueue_map_sync_single_range_for_device);
 
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3812661d3761..6e8e9b350d05 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -158,6 +158,7 @@ struct virtio_device {
 	struct virtio_device_id id;
 	const struct virtio_config_ops *config;
 	const struct vringh_config_ops *vringh_config;
+	const struct virtio_map_ops *map;
 	struct list_head vqs;
 	u64 features;
 	void *priv;
@@ -259,6 +260,27 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
 
+
+void *virtqueue_map_alloc_coherent(struct virtio_device *vdev,
+				   void *map_token, size_t size,
+				   dma_addr_t *map_handle, gfp_t gfp);
+
+void virtqueue_map_free_coherent(struct virtio_device *vdev,
+				 void *map_token, size_t size, void *vaddr,
+				 dma_addr_t map_handle);
+
+dma_addr_t virtqueue_map_page_attrs(const struct virtqueue *_vq,
+				    struct page *page,
+				    unsigned long offset,
+				    size_t size,
+				    enum dma_data_direction dir,
+				    unsigned long attrs);
+
+void virtqueue_unmap_page_attrs(const struct virtqueue *_vq,
+				dma_addr_t map_handle,
+				size_t size, enum dma_data_direction dir,
+				unsigned long attrs);
+
 dma_addr_t virtqueue_map_single_attrs(const struct virtqueue *_vq, void *ptr, size_t size,
 					  enum dma_data_direction dir, unsigned long attrs);
 void virtqueue_unmap_single_attrs(const struct virtqueue *_vq, dma_addr_t addr,
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index b3e1d30c765b..706ebf7cb389 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -133,6 +133,74 @@ struct virtio_config_ops {
 	int (*enable_vq_after_reset)(struct virtqueue *vq);
 };
 
+/**
+ * struct virtio_map_ops - operations for mapping buffer for a virtio device
+ * Note: a transport that has its own mapping logic must implement
+ * all of the operations
+ * @map_page: map a buffer to the device
+ *      token: device specific mapping token
+ *      page: the page that will be mapped by the device
+ *      offset: the offset in the page for a buffer
+ *      size: the buffer size
+ *      dir: mapping direction
+ *      attrs: mapping attributes
+ *      Returns: the mapped address
+ * @unmap_page: unmap a buffer from the device
+ *      token: device specific mapping token
+ *      map_handle: the mapped address
+ *      size: the buffer size
+ *      dir: mapping direction
+ *      attrs: unmapping attributes
+ * @sync_single_for_cpu: sync a single buffer from device to cpu
+ *      token: device specific mapping token
+ *      map_handle: the mapped address to sync
+ *      size: the size of the buffer
+ *      dir: synchronization direction
+ * @sync_single_for_device: sync a single buffer from cpu to device
+ *      token: device specific mapping token
+ *      map_handle: the mapped address to sync
+ *      size: the size of the buffer
+ *      dir: synchronization direction
+ * @alloc: alloc a coherent buffer mapping
+ *      token: device specific mapping token
+ *      size: the size of the buffer
+ *      map_handle: the pointer to the mapped address
+ *      gfp: allocation flag (GFP_XXX)
+ *      Returns: virtual address of the allocated buffer
+ * @free: free a coherent buffer mapping
+ *      token: device specific mapping token
+ *      size: the size of the buffer
+ *      vaddr: virtual address of the buffer
+ *      map_handle: the mapped address of the buffer
+ *      attrs: unmapping attributes
+ * @need_sync: if the buffer needs synchronization
+ *      token: device specific mapping token
+ *      map_handle: the mapped address
+ *      Returns: whether the buffer needs synchronization
+ * @max_mapping_size: get the maximum buffer size that can be mapped
+ *      token: device specific mapping token
+ *      Returns: the maximum buffer size that can be mapped
+ */
+struct virtio_map_ops {
+	dma_addr_t (*map_page)(void *token, struct page *page,
+			       unsigned long offset, size_t size,
+			       enum dma_data_direction dir, unsigned long attrs);
+	void (*unmap_page)(void *token, dma_addr_t map_handle,
+			   size_t size, enum dma_data_direction dir,
+			   unsigned long attrs);
+	void (*sync_single_for_cpu)(void *token, dma_addr_t map_handle,
+				    size_t size, enum dma_data_direction dir);
+	void (*sync_single_for_device)(void *token,
+				       dma_addr_t map_handle, size_t size,
+				       enum dma_data_direction dir);
+	void *(*alloc)(void *token, size_t size,
+		       dma_addr_t *map_handle, gfp_t gfp);
+	void (*free)(void *token, size_t size, void *vaddr,
+		     dma_addr_t map_handle, unsigned long attrs);
+	bool (*need_sync)(void *token, dma_addr_t map_handle);
+	size_t (*max_mapping_size)(void *token);
+};
+
 /* If driver didn't advertise the feature, it will never appear. */
 void virtio_check_driver_offered_feature(const struct virtio_device *vdev,
 					 unsigned int fbit);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 7/9] vdpa: rename dma_dev to map_token
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (5 preceding siblings ...)
  2025-07-01  1:13 ` [PATCH 6/9] virtio: introduce map ops in virtio core Jason Wang
@ 2025-07-01  1:13 ` Jason Wang
  2025-07-01 21:25   ` kernel test robot
  2025-07-01  1:14 ` [PATCH 8/9] vdpa: introduce map ops Jason Wang
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:13 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

The virtio core has switched from a DMA device to a mapping token;
let's do the same for vDPA.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vdpa/alibaba/eni_vdpa.c          |  2 +-
 drivers/vdpa/ifcvf/ifcvf_main.c          |  2 +-
 drivers/vdpa/octeon_ep/octep_vdpa_main.c |  2 +-
 drivers/vdpa/vdpa.c                      |  2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c         |  2 +-
 drivers/vdpa/vdpa_user/vduse_dev.c       |  2 +-
 drivers/vdpa/virtio_pci/vp_vdpa.c        |  2 +-
 drivers/vhost/vdpa.c                     |  4 ++--
 drivers/virtio/virtio_vdpa.c             | 12 ++++++------
 include/linux/vdpa.h                     | 12 ++++++------
 10 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/drivers/vdpa/alibaba/eni_vdpa.c b/drivers/vdpa/alibaba/eni_vdpa.c
index ad7f3447fe90..34bf726dc660 100644
--- a/drivers/vdpa/alibaba/eni_vdpa.c
+++ b/drivers/vdpa/alibaba/eni_vdpa.c
@@ -496,7 +496,7 @@ static int eni_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	pci_set_master(pdev);
 	pci_set_drvdata(pdev, eni_vdpa);
 
-	eni_vdpa->vdpa.dma_dev = &pdev->dev;
+	eni_vdpa->vdpa.map_token = &pdev->dev;
 	eni_vdpa->queues = eni_vdpa_get_num_queues(eni_vdpa);
 
 	eni_vdpa->vring = devm_kcalloc(&pdev->dev, eni_vdpa->queues,
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index ccf64d7bbfaa..64d28ec97136 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -713,7 +713,7 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 
 	ifcvf_mgmt_dev->adapter = adapter;
 	adapter->pdev = pdev;
-	adapter->vdpa.dma_dev = &pdev->dev;
+	adapter->vdpa.map_token = &pdev->dev;
 	adapter->vdpa.mdev = mdev;
 	adapter->vf = vf;
 	vdpa_dev = &adapter->vdpa;
diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_main.c b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
index 9b49efd24391..42a4df4613dd 100644
--- a/drivers/vdpa/octeon_ep/octep_vdpa_main.c
+++ b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
@@ -516,7 +516,7 @@ static int octep_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	}
 
 	oct_vdpa->pdev = pdev;
-	oct_vdpa->vdpa.dma_dev = &pdev->dev;
+	oct_vdpa->vdpa.map_token = &pdev->dev;
 	oct_vdpa->vdpa.mdev = mdev;
 	oct_vdpa->oct_hw = oct_hw;
 	vdpa_dev = &oct_vdpa->vdpa;
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 8a372b51c21a..1cc4285ebd67 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -151,7 +151,7 @@ static void vdpa_release_dev(struct device *d)
  * Driver should use vdpa_alloc_device() wrapper macro instead of
  * using this directly.
  *
- * Return: Returns an error when parent/config/dma_dev is not set or fail to get
+ * Return: Returns an error when parent/config/map_token is not set or fails to get
  *	   ida.
  */
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index c204fc8e471a..7c8e468f2f8c 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -272,7 +272,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 		vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0],
 				 &vdpasim->iommu_lock);
 
-	vdpasim->vdpa.dma_dev = dev;
+	vdpasim->vdpa.map_token = dev;
 
 	return vdpasim;
 
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 6a9a37351310..7420e90488ef 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -2022,7 +2022,7 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 		return ret;
 	}
 	set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
-	vdev->vdpa.dma_dev = &vdev->vdpa.dev;
+	vdev->vdpa.map_token = &vdev->vdpa.dev;
 	vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
 
 	return 0;
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 8787407f75b0..6e22e95245fa 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -520,7 +520,7 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 
 	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
 
-	vp_vdpa->vdpa.dma_dev = &pdev->dev;
+	vp_vdpa->vdpa.map_token = &pdev->dev;
 	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
 	vp_vdpa->mdev = mdev;
 
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 5a49b5a6d496..732ed118c138 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -1320,7 +1320,7 @@ static int vhost_vdpa_alloc_domain(struct vhost_vdpa *v)
 {
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
-	struct device *dma_dev = vdpa_get_dma_dev(vdpa);
+	struct device *dma_dev = vdpa_get_map_token(vdpa);
 	int ret;
 
 	/* Device want to do DMA by itself */
@@ -1355,7 +1355,7 @@ static int vhost_vdpa_alloc_domain(struct vhost_vdpa *v)
 static void vhost_vdpa_free_domain(struct vhost_vdpa *v)
 {
 	struct vdpa_device *vdpa = v->vdpa;
-	struct device *dma_dev = vdpa_get_dma_dev(vdpa);
+	struct device *dma_dev = vdpa_get_map_token(vdpa);
 
 	if (v->domain) {
 		iommu_detach_device(v->domain, dma_dev);
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 59b53032f1e2..cb68458cd809 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -147,7 +147,6 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
 {
 	struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev);
 	struct vdpa_device *vdpa = vd_get_vdpa(vdev);
-	struct device *dma_dev;
 	const struct vdpa_config_ops *ops = vdpa->config;
 	struct virtio_vdpa_vq_info *info;
 	bool (*notify)(struct virtqueue *vq) = virtio_vdpa_notify;
@@ -159,6 +158,7 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
 	unsigned long flags;
 	u32 align, max_num, min_num = 1;
 	bool may_reduce_num = true;
+	void *map_token;
 	int err;
 
 	if (!name)
@@ -201,13 +201,13 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
 	/* Create the vring */
 	align = ops->get_vq_align(vdpa);
 
-	if (ops->get_vq_dma_dev)
-		dma_dev = ops->get_vq_dma_dev(vdpa, index);
+	if (ops->get_vq_map_token)
+		map_token = ops->get_vq_map_token(vdpa, index);
 	else
-		dma_dev = vdpa_get_dma_dev(vdpa);
+		map_token = vdpa_get_map_token(vdpa);
 	vq = vring_create_virtqueue_map(index, max_num, align, vdev,
 					true, may_reduce_num, ctx,
-					notify, callback, name, dma_dev);
+					notify, callback, name, map_token);
 	if (!vq) {
 		err = -ENOMEM;
 		goto error_new_virtqueue;
@@ -497,7 +497,7 @@ static int virtio_vdpa_probe(struct vdpa_device *vdpa)
 	if (!vd_dev)
 		return -ENOMEM;
 
-	vd_dev->vdev.dev.parent = vdpa_get_dma_dev(vdpa);
+	vd_dev->vdev.dev.parent = vdpa_get_map_token(vdpa);
 	vd_dev->vdev.dev.release = virtio_vdpa_release_dev;
 	vd_dev->vdev.config = &virtio_vdpa_config_ops;
 	vd_dev->vdpa = vdpa;
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 2e7a30fe6b92..352ca5609c9a 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -70,7 +70,7 @@ struct vdpa_mgmt_dev;
 /**
  * struct vdpa_device - representation of a vDPA device
  * @dev: underlying device
- * @dma_dev: the actual device that is performing DMA
+ * @map_token: the token passed to the upper layer to be used for mapping
  * @driver_override: driver name to force a match; do not set directly,
  *                   because core frees it; use driver_set_override() to
  *                   set or clear it.
@@ -87,7 +87,7 @@ struct vdpa_mgmt_dev;
  */
 struct vdpa_device {
 	struct device dev;
-	struct device *dma_dev;
+	void *map_token;
 	const char *driver_override;
 	const struct vdpa_config_ops *config;
 	struct rw_semaphore cf_lock; /* Protects get/set config */
@@ -352,7 +352,7 @@ struct vdpa_map_file {
  *				@vdev: vdpa device
  *				@asid: address space identifier
  *				Returns integer: success (0) or error (< 0)
- * @get_vq_dma_dev:		Get the dma device for a specific
+ * @get_vq_map_token:		Get the map token for a specific
  *				virtqueue (optional)
  *				@vdev: vdpa device
  *				@idx: virtqueue index
@@ -436,7 +436,7 @@ struct vdpa_config_ops {
 	int (*reset_map)(struct vdpa_device *vdev, unsigned int asid);
 	int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
 			      unsigned int asid);
-	struct device *(*get_vq_dma_dev)(struct vdpa_device *vdev, u16 idx);
+	struct device *(*get_vq_map_token)(struct vdpa_device *vdev, u16 idx);
 	int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm);
 	void (*unbind_mm)(struct vdpa_device *vdev);
 
@@ -520,9 +520,9 @@ static inline void vdpa_set_drvdata(struct vdpa_device *vdev, void *data)
 	dev_set_drvdata(&vdev->dev, data);
 }
 
-static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
+static inline void *vdpa_get_map_token(struct vdpa_device *vdev)
 {
-	return vdev->dma_dev;
+	return vdev->map_token;
 }
 
 static inline int vdpa_reset(struct vdpa_device *vdev, u32 flags)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 8/9] vdpa: introduce map ops
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (6 preceding siblings ...)
  2025-07-01  1:13 ` [PATCH 7/9] vdpa: rename dma_dev to map_token Jason Wang
@ 2025-07-01  1:14 ` Jason Wang
  2025-07-02  5:20   ` kernel test robot
  2025-07-01  1:14 ` [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API Jason Wang
  2025-07-01  7:04 ` [PATCH 0/9] Refine virtio mapping API Michael S. Tsirkin
  9 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:14 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

The virtio core now allows a transport to provide device or
transport specific mapping operations. This patch adds this support
to vDPA by simply allowing the vDPA parent to register its own
virtio_map_ops, as shown in the sketch below.
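
For example, a parent that provides its own mapping operations would
pass them at allocation time (a sketch; foo_map_ops is assumed to be
a fully populated struct virtio_map_ops of such a parent):

	foo = vdpa_alloc_device(struct foo_vdpa, vdpa, &pdev->dev,
				&foo_config_ops, &foo_map_ops,
				1, 1, NULL, false);

Parents that rely on the DMA API simply pass NULL instead, which is
what this patch does for all existing drivers.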

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vdpa/alibaba/eni_vdpa.c          |  3 ++-
 drivers/vdpa/ifcvf/ifcvf_main.c          |  3 ++-
 drivers/vdpa/octeon_ep/octep_vdpa_main.c |  4 ++--
 drivers/vdpa/pds/vdpa_dev.c              |  3 ++-
 drivers/vdpa/solidrun/snet_main.c        |  4 ++--
 drivers/vdpa/vdpa.c                      |  3 +++
 drivers/vdpa/vdpa_sim/vdpa_sim.c         |  2 +-
 drivers/vdpa/vdpa_user/iova_domain.c     |  6 ++++++
 drivers/vdpa/vdpa_user/iova_domain.h     |  3 +++
 drivers/vdpa/vdpa_user/vduse_dev.c       |  3 ++-
 drivers/vdpa/virtio_pci/vp_vdpa.c        |  3 ++-
 drivers/vhost/vdpa.c                     |  9 ++++++++-
 drivers/virtio/virtio_vdpa.c             |  1 +
 include/linux/vdpa.h                     | 10 +++++++---
 14 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/drivers/vdpa/alibaba/eni_vdpa.c b/drivers/vdpa/alibaba/eni_vdpa.c
index 34bf726dc660..4ddf23065087 100644
--- a/drivers/vdpa/alibaba/eni_vdpa.c
+++ b/drivers/vdpa/alibaba/eni_vdpa.c
@@ -478,7 +478,8 @@ static int eni_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return ret;
 
 	eni_vdpa = vdpa_alloc_device(struct eni_vdpa, vdpa,
-				     dev, &eni_vdpa_ops, 1, 1, NULL, false);
+				     dev, &eni_vdpa_ops, NULL,
+				     1, 1, NULL, false);
 	if (IS_ERR(eni_vdpa)) {
 		ENI_ERR(pdev, "failed to allocate vDPA structure\n");
 		return PTR_ERR(eni_vdpa);
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 64d28ec97136..1bc22020d0cc 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -705,7 +705,8 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	vf = &ifcvf_mgmt_dev->vf;
 	pdev = vf->pdev;
 	adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
-				    &pdev->dev, &ifc_vdpa_ops, 1, 1, NULL, false);
+				    &pdev->dev, &ifc_vdpa_ops,
+				    NULL, 1, 1, NULL, false);
 	if (IS_ERR(adapter)) {
 		IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
 		return PTR_ERR(adapter);
diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_main.c b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
index 42a4df4613dd..bb4a68b6cce5 100644
--- a/drivers/vdpa/octeon_ep/octep_vdpa_main.c
+++ b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
@@ -508,8 +508,8 @@ static int octep_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	u64 device_features;
 	int ret;
 
-	oct_vdpa = vdpa_alloc_device(struct octep_vdpa, vdpa, &pdev->dev, &octep_vdpa_ops, 1, 1,
-				     NULL, false);
+	oct_vdpa = vdpa_alloc_device(struct octep_vdpa, vdpa, &pdev->dev, &octep_vdpa_ops,
+				     NULL, 1, 1, NULL, false);
 	if (IS_ERR(oct_vdpa)) {
 		dev_err(&pdev->dev, "Failed to allocate vDPA structure for octep vdpa device");
 		return PTR_ERR(oct_vdpa);
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 301d95e08596..d2a017697827 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -632,7 +632,8 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	}
 
 	pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev,
-				 dev, &pds_vdpa_ops, 1, 1, name, false);
+				 dev, &pds_vdpa_ops, NULL,
+				 1, 1, name, false);
 	if (IS_ERR(pdsv)) {
 		dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv);
 		return PTR_ERR(pdsv);
diff --git a/drivers/vdpa/solidrun/snet_main.c b/drivers/vdpa/solidrun/snet_main.c
index 55ec51c17ab3..46f1743eb9f5 100644
--- a/drivers/vdpa/solidrun/snet_main.c
+++ b/drivers/vdpa/solidrun/snet_main.c
@@ -1008,8 +1008,8 @@ static int snet_vdpa_probe_vf(struct pci_dev *pdev)
 	}
 
 	/* Allocate vdpa device */
-	snet = vdpa_alloc_device(struct snet, vdpa, &pdev->dev, &snet_config_ops, 1, 1, NULL,
-				 false);
+	snet = vdpa_alloc_device(struct snet, vdpa, &pdev->dev, &snet_config_ops,
+				 NULL, 1, 1, NULL, false);
 	if (!snet) {
 		SNET_ERR(pdev, "Failed to allocate a vdpa device\n");
 		ret = -ENOMEM;
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 1cc4285ebd67..2715ffcda585 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -142,6 +142,7 @@ static void vdpa_release_dev(struct device *d)
  * initialized but before registered.
  * @parent: the parent device
  * @config: the bus operations that is supported by this device
+ * @map: the map operations supported by this device
  * @ngroups: number of groups supported by this device
  * @nas: number of address spaces supported by this device
  * @size: size of the parent structure that contains private data
@@ -156,6 +157,7 @@ static void vdpa_release_dev(struct device *d)
  */
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
+					const struct virtio_map_ops *map,
 					unsigned int ngroups, unsigned int nas,
 					size_t size, const char *name,
 					bool use_va)
@@ -187,6 +189,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 	vdev->dev.release = vdpa_release_dev;
 	vdev->index = err;
 	vdev->config = config;
+	vdev->map = map;
 	vdev->features_valid = false;
 	vdev->use_va = use_va;
 	vdev->ngroups = ngroups;
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 7c8e468f2f8c..89a795e2a44b 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -215,7 +215,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	else
 		ops = &vdpasim_config_ops;
 
-	vdpa = __vdpa_alloc_device(NULL, ops,
+	vdpa = __vdpa_alloc_device(NULL, ops, NULL,
 				   dev_attr->ngroups, dev_attr->nas,
 				   dev_attr->alloc_size,
 				   dev_attr->name, use_va);
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 58116f89d8da..019f3305c0ac 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -506,6 +506,12 @@ void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
 	free_pages_exact(phys_to_virt(pa), size);
 }
 
+bool vduse_domain_need_sync(struct vduse_iova_domain *domain,
+			    dma_addr_t dma_addr)
+{
+	return dma_addr < domain->bounce_size;
+}
+
 static vm_fault_t vduse_domain_mmap_fault(struct vm_fault *vmf)
 {
 	struct vduse_iova_domain *domain = vmf->vma->vm_private_data;
diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
index 7f3f0928ec78..846572b95c23 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.h
+++ b/drivers/vdpa/vdpa_user/iova_domain.h
@@ -70,6 +70,9 @@ void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
 				void *vaddr, dma_addr_t dma_addr,
 				unsigned long attrs);
 
+bool vduse_domain_need_sync(struct vduse_iova_domain *domain,
+			    dma_addr_t dma_addr);
+
 void vduse_domain_reset_bounce_map(struct vduse_iova_domain *domain);
 
 int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 7420e90488ef..64bc39722007 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -2009,7 +2009,8 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 		return -EEXIST;
 
 	vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
-				 &vduse_vdpa_config_ops, 1, 1, name, true);
+				 &vduse_vdpa_config_ops, NULL,
+				 1, 1, name, true);
 	if (IS_ERR(vdev))
 		return PTR_ERR(vdev);
 
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 6e22e95245fa..395996ec4608 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -511,7 +511,8 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	int ret, i;
 
 	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
-				    dev, &vp_vdpa_ops, 1, 1, name, false);
+				    dev, &vp_vdpa_ops, NULL,
+				    1, 1, name, false);
 
 	if (IS_ERR(vp_vdpa)) {
 		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 732ed118c138..4932271899ea 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -1320,13 +1320,20 @@ static int vhost_vdpa_alloc_domain(struct vhost_vdpa *v)
 {
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
-	struct device *dma_dev = vdpa_get_map_token(vdpa);
+	const struct virtio_map_ops *map = vdpa->map;
+	struct device *dma_dev;
 	int ret;
 
 	/* Device want to do DMA by itself */
 	if (ops->set_map || ops->dma_map)
 		return 0;
 
+	if (map) {
+		dev_warn(&v->dev, "Can't allocate a domain, device uses vendor specific mappings\n");
+		return -EINVAL;
+	}
+
+	dma_dev = vdpa_get_map_token(vdpa);
 	if (!device_iommu_capable(dma_dev, IOMMU_CAP_CACHE_COHERENCY)) {
 		dev_warn_once(&v->dev,
 			      "Failed to allocate domain, device is not IOMMU cache coherent capable\n");
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index cb68458cd809..286b60ce3637 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -500,6 +500,7 @@ static int virtio_vdpa_probe(struct vdpa_device *vdpa)
 	vd_dev->vdev.dev.parent = vdpa_get_map_token(vdpa);
 	vd_dev->vdev.dev.release = virtio_vdpa_release_dev;
 	vd_dev->vdev.config = &virtio_vdpa_config_ops;
+	vd_dev->vdev.map = vdpa->map;
 	vd_dev->vdpa = vdpa;
 	INIT_LIST_HEAD(&vd_dev->virtqueues);
 	spin_lock_init(&vd_dev->lock);
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 352ca5609c9a..cb51b7e2e569 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -75,6 +75,7 @@ struct vdpa_mgmt_dev;
  *                   because core frees it; use driver_set_override() to
  *                   set or clear it.
  * @config: the configuration ops for this device.
+ * @map: the map ops for this device
  * @cf_lock: Protects get and set access to configuration layout.
  * @index: device index
  * @features_valid: were features initialized? for legacy guests
@@ -90,6 +91,7 @@ struct vdpa_device {
 	void *map_token;
 	const char *driver_override;
 	const struct vdpa_config_ops *config;
+	const struct virtio_map_ops *map;
 	struct rw_semaphore cf_lock; /* Protects get/set config */
 	unsigned int index;
 	bool features_valid;
@@ -446,6 +448,7 @@ struct vdpa_config_ops {
 
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
+					const struct virtio_map_ops *map,
 					unsigned int ngroups, unsigned int nas,
 					size_t size, const char *name,
 					bool use_va);
@@ -457,6 +460,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
  * @member: the name of struct vdpa_device within the @dev_struct
  * @parent: the parent device
  * @config: the bus operations that is supported by this device
+ * @map: the map operations supported by this device
  * @ngroups: the number of virtqueue groups supported by this device
  * @nas: the number of address spaces
  * @name: name of the vdpa device
@@ -464,10 +468,10 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
  *
  * Return allocated data structure or ERR_PTR upon error
  */
-#define vdpa_alloc_device(dev_struct, member, parent, config, ngroups, nas, \
-			  name, use_va) \
+#define vdpa_alloc_device(dev_struct, member, parent, config, map, \
+	                  ngroups, nas, name, use_va) \
 			  container_of((__vdpa_alloc_device( \
-				       parent, config, ngroups, nas, \
+				       parent, config, map, ngroups, nas, \
 				       (sizeof(dev_struct) + \
 				       BUILD_BUG_ON_ZERO(offsetof( \
 				       dev_struct, member))), name, use_va)), \
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (7 preceding siblings ...)
  2025-07-01  1:14 ` [PATCH 8/9] vdpa: introduce map ops Jason Wang
@ 2025-07-01  1:14 ` Jason Wang
  2025-07-01  7:50   ` Michael S. Tsirkin
  2025-07-01  7:04 ` [PATCH 0/9] Refine virtio mapping API Michael S. Tsirkin
  9 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2025-07-01  1:14 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, eperezma
  Cc: virtualization, linux-kernel, hch, xieyongji

Lacking support for device specific mappings in virtio, VDUSE had to
trick the DMA API in order to make the virtio-vdpa transport work.
This was done by advertising the vDPA device as a DMA device with
VDUSE specific dma_ops, even though it doesn't do DMA at all.

Fix this by using the new mapping operations supported by virtio and
vDPA: VDUSE simply advertises its specific mapping operations to
virtio via virtio-vdpa, so the DMA API is no longer needed for VDUSE.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vdpa/vdpa_user/iova_domain.c |  2 +-
 drivers/vdpa/vdpa_user/iova_domain.h |  2 +-
 drivers/vdpa/vdpa_user/vduse_dev.c   | 31 ++++++++++++++++------------
 3 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 019f3305c0ac..8ea311692545 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -447,7 +447,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
 
 void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
 				  size_t size, dma_addr_t *dma_addr,
-				  gfp_t flag, unsigned long attrs)
+				  gfp_t flag)
 {
 	struct iova_domain *iovad = &domain->consistent_iovad;
 	unsigned long limit = domain->iova_limit;
diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
index 846572b95c23..a2316571671f 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.h
+++ b/drivers/vdpa/vdpa_user/iova_domain.h
@@ -64,7 +64,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
 
 void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
 				  size_t size, dma_addr_t *dma_addr,
-				  gfp_t flag, unsigned long attrs);
+				  gfp_t flag);
 
 void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
 				void *vaddr, dma_addr_t dma_addr,
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 64bc39722007..f86d7111e103 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -814,51 +814,55 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
 	.free			= vduse_vdpa_free,
 };
 
-static void vduse_dev_sync_single_for_device(struct device *dev,
+static void vduse_dev_sync_single_for_device(void *token,
 					     dma_addr_t dma_addr, size_t size,
 					     enum dma_data_direction dir)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	vduse_domain_sync_single_for_device(domain, dma_addr, size, dir);
 }
 
-static void vduse_dev_sync_single_for_cpu(struct device *dev,
+static void vduse_dev_sync_single_for_cpu(void *token,
 					     dma_addr_t dma_addr, size_t size,
 					     enum dma_data_direction dir)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	vduse_domain_sync_single_for_cpu(domain, dma_addr, size, dir);
 }
 
-static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page,
+static dma_addr_t vduse_dev_map_page(void *token, struct page *page,
 				     unsigned long offset, size_t size,
 				     enum dma_data_direction dir,
 				     unsigned long attrs)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	return vduse_domain_map_page(domain, page, offset, size, dir, attrs);
 }
 
-static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void vduse_dev_unmap_page(void *token, dma_addr_t dma_addr,
 				size_t size, enum dma_data_direction dir,
 				unsigned long attrs)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs);
 }
 
-static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
-					dma_addr_t *dma_addr, gfp_t flag,
-					unsigned long attrs)
+static void *vduse_dev_alloc_coherent(void *token, size_t size,
+				      dma_addr_t *dma_addr, gfp_t flag)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 	unsigned long iova;
@@ -866,7 +870,7 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
 
 	*dma_addr = DMA_MAPPING_ERROR;
 	addr = vduse_domain_alloc_coherent(domain, size,
-				(dma_addr_t *)&iova, flag, attrs);
+					   (dma_addr_t *)&iova, flag);
 	if (!addr)
 		return NULL;
 
@@ -875,25 +879,27 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
 	return addr;
 }
 
-static void vduse_dev_free_coherent(struct device *dev, size_t size,
+static void vduse_dev_free_coherent(void *token, size_t size,
 					void *vaddr, dma_addr_t dma_addr,
 					unsigned long attrs)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs);
 }
 
-static size_t vduse_dev_max_mapping_size(struct device *dev)
+static size_t vduse_dev_max_mapping_size(void *token)
 {
+	struct device *dev = token;
 	struct vduse_dev *vdev = dev_to_vduse(dev);
 	struct vduse_iova_domain *domain = vdev->domain;
 
 	return domain->bounce_size;
 }
 
-static const struct dma_map_ops vduse_dev_dma_ops = {
+static const struct virtio_map_ops vduse_map_ops = {
 	.sync_single_for_device = vduse_dev_sync_single_for_device,
 	.sync_single_for_cpu = vduse_dev_sync_single_for_cpu,
 	.map_page = vduse_dev_map_page,
@@ -2009,7 +2015,7 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 		return -EEXIST;
 
 	vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
-				 &vduse_vdpa_config_ops, NULL,
+				 &vduse_vdpa_config_ops, &vduse_map_ops,
 				 1, 1, name, true);
 	if (IS_ERR(vdev))
 		return PTR_ERR(vdev);
@@ -2022,7 +2028,6 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
 		put_device(&vdev->vdpa.dev);
 		return ret;
 	}
-	set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
 	vdev->vdpa.map_token = &vdev->vdpa.dev;
 	vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/9] Refine virtio mapping API
  2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
                   ` (8 preceding siblings ...)
  2025-07-01  1:14 ` [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API Jason Wang
@ 2025-07-01  7:04 ` Michael S. Tsirkin
  2025-07-01  8:00   ` Jason Wang
  9 siblings, 1 reply; 18+ messages in thread
From: Michael S. Tsirkin @ 2025-07-01  7:04 UTC (permalink / raw)
  To: Jason Wang
  Cc: xuanzhuo, eperezma, virtualization, linux-kernel, hch, xieyongji

On Tue, Jul 01, 2025 at 09:13:52AM +0800, Jason Wang wrote:
> Hi all:
> 
> Virtio used to be coupled with DMA API. This works fine for the device
> that do real DMA but not the others. For example, VDUSE nees to craft
> with DMA API in order to let the virtio-vdpa driver to work.
> 
> This series tries to solve this issue by introducing the mapping API
> in the virtio core. So transport like vDPA can implement their own
> mapping logic without the need to hack with DMA API. The mapping API
> are abstracted with a new map operations in order to be re-used by
> transprot or device. So device like VDUSE can implement its own
> mapping loigc.
> 
> Please review.
> 
> Thanks

Cost of all this extra indirection? Especially on systems with
software spectre mitigations/retpoline enabled.

> Jason Wang (9):
>   virtio_ring: constify virtqueue pointer for DMA helpers
>   virtio_ring: switch to use dma_{map|unmap}_page()
>   virtio: rename dma helpers
>   virtio: rename dma_dev to map_token
>   virtio_ring: rename dma_handle to map_handle
>   virtio: introduce map ops in virtio core
>   vdpa: rename dma_dev to map_token
>   vdpa: introduce map ops
>   vduse: switch to use virtio map API instead of DMA API
> 
>  drivers/net/virtio_net.c                 |  32 +-
>  drivers/vdpa/alibaba/eni_vdpa.c          |   5 +-
>  drivers/vdpa/ifcvf/ifcvf_main.c          |   5 +-
>  drivers/vdpa/octeon_ep/octep_vdpa_main.c |   6 +-
>  drivers/vdpa/pds/vdpa_dev.c              |   3 +-
>  drivers/vdpa/solidrun/snet_main.c        |   4 +-
>  drivers/vdpa/vdpa.c                      |   5 +-
>  drivers/vdpa/vdpa_sim/vdpa_sim.c         |   4 +-
>  drivers/vdpa/vdpa_user/iova_domain.c     |   8 +-
>  drivers/vdpa/vdpa_user/iova_domain.h     |   5 +-
>  drivers/vdpa/vdpa_user/vduse_dev.c       |  34 +-
>  drivers/vdpa/virtio_pci/vp_vdpa.c        |   5 +-
>  drivers/vhost/vdpa.c                     |  11 +-
>  drivers/virtio/virtio_ring.c             | 440 ++++++++++++++---------
>  drivers/virtio/virtio_vdpa.c             |  15 +-
>  include/linux/vdpa.h                     |  22 +-
>  include/linux/virtio.h                   |  36 +-
>  include/linux/virtio_config.h            |  68 ++++
>  include/linux/virtio_ring.h              |   6 +-
>  19 files changed, 476 insertions(+), 238 deletions(-)
> 
> -- 
> 2.34.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API
  2025-07-01  1:14 ` [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API Jason Wang
@ 2025-07-01  7:50   ` Michael S. Tsirkin
  2025-07-01  8:11     ` Jason Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Michael S. Tsirkin @ 2025-07-01  7:50 UTC (permalink / raw)
  To: Jason Wang
  Cc: xuanzhuo, eperezma, virtualization, linux-kernel, hch, xieyongji

On Tue, Jul 01, 2025 at 09:14:01AM +0800, Jason Wang wrote:
> Lacking the support of device specific mapping supported in virtio,
> VDUSE must trick the DMA API in order to make virtio-vdpa transport
> work. This is done by advertising vDPA device as dma device with a
> VDUSE specific dma_ops even if it doesn't do DMA at all.
> 
> This will be fixed by this patch. Thanks to the new mapping operations
> support by virtio and vDPA. VDUSE can simply switch to advertise its
> specific mappings operations to virtio via virtio-vdpa then DMA API is
> not needed for VDUSE any more.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

so what exactly is the issue fixed by all this pile of code?
I just don't really see it. yes the existing thing is a hack
but at least it is isolated within vduse which, let's be
frank, is not its only issue.


> ---
>  drivers/vdpa/vdpa_user/iova_domain.c |  2 +-
>  drivers/vdpa/vdpa_user/iova_domain.h |  2 +-
>  drivers/vdpa/vdpa_user/vduse_dev.c   | 31 ++++++++++++++++------------
>  3 files changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index 019f3305c0ac..8ea311692545 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -447,7 +447,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
>  
>  void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
>  				  size_t size, dma_addr_t *dma_addr,
> -				  gfp_t flag, unsigned long attrs)
> +				  gfp_t flag)
>  {
>  	struct iova_domain *iovad = &domain->consistent_iovad;
>  	unsigned long limit = domain->iova_limit;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
> index 846572b95c23..a2316571671f 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.h
> +++ b/drivers/vdpa/vdpa_user/iova_domain.h
> @@ -64,7 +64,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
>  
>  void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
>  				  size_t size, dma_addr_t *dma_addr,
> -				  gfp_t flag, unsigned long attrs);
> +				  gfp_t flag);
>  
>  void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
>  				void *vaddr, dma_addr_t dma_addr,
> diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> index 64bc39722007..f86d7111e103 100644
> --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> @@ -814,51 +814,55 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
>  	.free			= vduse_vdpa_free,
>  };
>  
> -static void vduse_dev_sync_single_for_device(struct device *dev,
> +static void vduse_dev_sync_single_for_device(void *token,
>  					     dma_addr_t dma_addr, size_t size,
>  					     enum dma_data_direction dir)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	vduse_domain_sync_single_for_device(domain, dma_addr, size, dir);
>  }
>  
> -static void vduse_dev_sync_single_for_cpu(struct device *dev,
> +static void vduse_dev_sync_single_for_cpu(void *token,
>  					     dma_addr_t dma_addr, size_t size,
>  					     enum dma_data_direction dir)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	vduse_domain_sync_single_for_cpu(domain, dma_addr, size, dir);
>  }
>  
> -static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page,
> +static dma_addr_t vduse_dev_map_page(void *token, struct page *page,
>  				     unsigned long offset, size_t size,
>  				     enum dma_data_direction dir,
>  				     unsigned long attrs)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	return vduse_domain_map_page(domain, page, offset, size, dir, attrs);
>  }
>  
> -static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr,
> +static void vduse_dev_unmap_page(void *token, dma_addr_t dma_addr,
>  				size_t size, enum dma_data_direction dir,
>  				unsigned long attrs)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs);
>  }
>  
> -static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
> -					dma_addr_t *dma_addr, gfp_t flag,
> -					unsigned long attrs)
> +static void *vduse_dev_alloc_coherent(void *token, size_t size,
> +				      dma_addr_t *dma_addr, gfp_t flag)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  	unsigned long iova;
> @@ -866,7 +870,7 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
>  
>  	*dma_addr = DMA_MAPPING_ERROR;
>  	addr = vduse_domain_alloc_coherent(domain, size,
> -				(dma_addr_t *)&iova, flag, attrs);
> +					   (dma_addr_t *)&iova, flag);
>  	if (!addr)
>  		return NULL;
>  
> @@ -875,25 +879,27 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
>  	return addr;
>  }
>  
> -static void vduse_dev_free_coherent(struct device *dev, size_t size,
> +static void vduse_dev_free_coherent(void *token, size_t size,
>  					void *vaddr, dma_addr_t dma_addr,
>  					unsigned long attrs)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs);
>  }
>  
> -static size_t vduse_dev_max_mapping_size(struct device *dev)
> +static size_t vduse_dev_max_mapping_size(void *token)
>  {
> +	struct device *dev = token;
>  	struct vduse_dev *vdev = dev_to_vduse(dev);
>  	struct vduse_iova_domain *domain = vdev->domain;
>  
>  	return domain->bounce_size;
>  }
>  
> -static const struct dma_map_ops vduse_dev_dma_ops = {
> +static const struct virtio_map_ops vduse_map_ops = {
>  	.sync_single_for_device = vduse_dev_sync_single_for_device,
>  	.sync_single_for_cpu = vduse_dev_sync_single_for_cpu,
>  	.map_page = vduse_dev_map_page,
> @@ -2009,7 +2015,7 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
>  		return -EEXIST;
>  
>  	vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
> -				 &vduse_vdpa_config_ops, NULL,
> +				 &vduse_vdpa_config_ops, &vduse_map_ops,
>  				 1, 1, name, true);
>  	if (IS_ERR(vdev))
>  		return PTR_ERR(vdev);
> @@ -2022,7 +2028,6 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
>  		put_device(&vdev->vdpa.dev);
>  		return ret;
>  	}
> -	set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
>  	vdev->vdpa.map_token = &vdev->vdpa.dev;
>  	vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
>  
> -- 
> 2.34.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/9] Refine virtio mapping API
  2025-07-01  7:04 ` [PATCH 0/9] Refine virtio mapping API Michael S. Tsirkin
@ 2025-07-01  8:00   ` Jason Wang
  2025-07-03  8:57     ` Christoph Hellwig
  0 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2025-07-01  8:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: xuanzhuo, eperezma, virtualization, linux-kernel, hch, xieyongji

On Tue, Jul 1, 2025 at 3:04 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Jul 01, 2025 at 09:13:52AM +0800, Jason Wang wrote:
> > Hi all:
> >
> > Virtio used to be coupled with DMA API. This works fine for the device
> > that do real DMA but not the others. For example, VDUSE nees to craft
> > with DMA API in order to let the virtio-vdpa driver to work.
> >
> > This series tries to solve this issue by introducing the mapping API
> > in the virtio core. So transport like vDPA can implement their own
> > mapping logic without the need to hack with DMA API. The mapping API
> > are abstracted with a new map operations in order to be re-used by
> > transprot or device. So device like VDUSE can implement its own
> > mapping loigc.
> >
> > Please review.
> >
> > Thanks
>
> Cost of all this extra indirection? Especially on systems with
> software spectre mitigations/retpoline enabled.

Actually no, it doesn't change how things work for devices that
already do DMA:

If device has its specific mapping ops
        go for device specific mapping ops
else
        go for DMA API
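
Concretely, the dispatch in virtqueue_map_page_attrs() (patch 6) is
just:

	if (vdev->map)
		return vdev->map->map_page(map_token,
					   page, offset, size,
					   dir, attrs);

	return dma_map_page_attrs(map_token,
				  page, offset, size,
				  dir, attrs);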

VDUSE is the only user for now, and an extra indirection has been
used for VDUSE even without this series (by abusing the DMA API).
This series switches from:

virtio core -> DMA API -> VDUSE DMA API -> iova domain ops

to

virtio core -> virtio map ops -> VDUSE map ops -> iova domain ops

Thanks



>
> > Jason Wang (9):
> >   virtio_ring: constify virtqueue pointer for DMA helpers
> >   virtio_ring: switch to use dma_{map|unmap}_page()
> >   virtio: rename dma helpers
> >   virtio: rename dma_dev to map_token
> >   virtio_ring: rename dma_handle to map_handle
> >   virtio: introduce map ops in virtio core
> >   vdpa: rename dma_dev to map_token
> >   vdpa: introduce map ops
> >   vduse: switch to use virtio map API instead of DMA API
> >
> >  drivers/net/virtio_net.c                 |  32 +-
> >  drivers/vdpa/alibaba/eni_vdpa.c          |   5 +-
> >  drivers/vdpa/ifcvf/ifcvf_main.c          |   5 +-
> >  drivers/vdpa/octeon_ep/octep_vdpa_main.c |   6 +-
> >  drivers/vdpa/pds/vdpa_dev.c              |   3 +-
> >  drivers/vdpa/solidrun/snet_main.c        |   4 +-
> >  drivers/vdpa/vdpa.c                      |   5 +-
> >  drivers/vdpa/vdpa_sim/vdpa_sim.c         |   4 +-
> >  drivers/vdpa/vdpa_user/iova_domain.c     |   8 +-
> >  drivers/vdpa/vdpa_user/iova_domain.h     |   5 +-
> >  drivers/vdpa/vdpa_user/vduse_dev.c       |  34 +-
> >  drivers/vdpa/virtio_pci/vp_vdpa.c        |   5 +-
> >  drivers/vhost/vdpa.c                     |  11 +-
> >  drivers/virtio/virtio_ring.c             | 440 ++++++++++++++---------
> >  drivers/virtio/virtio_vdpa.c             |  15 +-
> >  include/linux/vdpa.h                     |  22 +-
> >  include/linux/virtio.h                   |  36 +-
> >  include/linux/virtio_config.h            |  68 ++++
> >  include/linux/virtio_ring.h              |   6 +-
> >  19 files changed, 476 insertions(+), 238 deletions(-)
> >
> > --
> > 2.34.1
>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API
  2025-07-01  7:50   ` Michael S. Tsirkin
@ 2025-07-01  8:11     ` Jason Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-01  8:11 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: xuanzhuo, eperezma, virtualization, linux-kernel, hch, xieyongji

On Tue, Jul 1, 2025 at 3:50 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Jul 01, 2025 at 09:14:01AM +0800, Jason Wang wrote:
> > Lacking support for device-specific mapping in virtio, VDUSE must
> > trick the DMA API in order to make the virtio-vdpa transport work.
> > This is done by advertising the vDPA device as a DMA device with
> > VDUSE-specific dma_ops even though it doesn't do DMA at all.
> >
> > This patch fixes that. Thanks to the new mapping operations
> > supported by virtio and vDPA, VDUSE can simply advertise its
> > specific mapping operations to virtio via virtio-vdpa, and then the
> > DMA API is not needed for VDUSE any more.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
>
> so what exactly is the issue fixed by all this pile of code?

It avoids abusing the DMA API for VDUSE.

> I just don't really see it. Yes, the existing thing is a hack,
> but at least it is isolated within VDUSE, which, let's be
> frank, is not its only issue.

Christoph raised concerns when Eugenio tried to extend VDUSE for
multiple address space (AS) support:

https://lists.openwall.net/linux-kernel/2025/06/23/133

I think we need to reach some agreement here. I'm fine with leaving
the current code as is.

But we may have a problem:

Technically, we want the ability to have the control virtqueue backed
by an isolated iova domain in order to make shadow virtqueues work.
Though this might only be useful for vhost-vdpa, we should allow
virtio-vdpa to work in this case as well. This means the cvq should
have its own DMA device, which might be tricky for VDUSE to implement
(for example, it would need a hack on top of the existing hack, e.g.
creating a child device for it, which looks more like overkill).
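
With per-virtqueue map tokens that extra hack would not be needed.
A purely illustrative sketch, modelled on the existing get_vq_dma_dev
config op; the op name and the fields below are hypothetical, not
part of this series:

static void *vduse_get_vq_map_token(struct vdpa_device *vdpa, u16 idx)
{
	/* hypothetical conversion; today VDUSE converts from a
	 * struct device with dev_to_vduse() instead */
	struct vduse_dev *dev = vdpa_to_vduse(vdpa);

	/* back the cvq with its own isolated iova domain ... */
	if (idx == dev->cvq_index)
		return dev->cvq_domain;

	/* ... while the data virtqueues keep the default domain */
	return dev->domain;
}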

Thanks

>
>
> > ---
> >  drivers/vdpa/vdpa_user/iova_domain.c |  2 +-
> >  drivers/vdpa/vdpa_user/iova_domain.h |  2 +-
> >  drivers/vdpa/vdpa_user/vduse_dev.c   | 31 ++++++++++++++++------------
> >  3 files changed, 20 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> > index 019f3305c0ac..8ea311692545 100644
> > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > @@ -447,7 +447,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
> >
> >  void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
> >                                 size_t size, dma_addr_t *dma_addr,
> > -                               gfp_t flag, unsigned long attrs)
> > +                               gfp_t flag)
> >  {
> >       struct iova_domain *iovad = &domain->consistent_iovad;
> >       unsigned long limit = domain->iova_limit;
> > diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
> > index 846572b95c23..a2316571671f 100644
> > --- a/drivers/vdpa/vdpa_user/iova_domain.h
> > +++ b/drivers/vdpa/vdpa_user/iova_domain.h
> > @@ -64,7 +64,7 @@ void vduse_domain_unmap_page(struct vduse_iova_domain *domain,
> >
> >  void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain,
> >                                 size_t size, dma_addr_t *dma_addr,
> > -                               gfp_t flag, unsigned long attrs);
> > +                               gfp_t flag);
> >
> >  void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size,
> >                               void *vaddr, dma_addr_t dma_addr,
> > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > index 64bc39722007..f86d7111e103 100644
> > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > @@ -814,51 +814,55 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
> >       .free                   = vduse_vdpa_free,
> >  };
> >
> > -static void vduse_dev_sync_single_for_device(struct device *dev,
> > +static void vduse_dev_sync_single_for_device(void *token,
> >                                            dma_addr_t dma_addr, size_t size,
> >                                            enum dma_data_direction dir)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       vduse_domain_sync_single_for_device(domain, dma_addr, size, dir);
> >  }
> >
> > -static void vduse_dev_sync_single_for_cpu(struct device *dev,
> > +static void vduse_dev_sync_single_for_cpu(void *token,
> >                                            dma_addr_t dma_addr, size_t size,
> >                                            enum dma_data_direction dir)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       vduse_domain_sync_single_for_cpu(domain, dma_addr, size, dir);
> >  }
> >
> > -static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page,
> > +static dma_addr_t vduse_dev_map_page(void *token, struct page *page,
> >                                    unsigned long offset, size_t size,
> >                                    enum dma_data_direction dir,
> >                                    unsigned long attrs)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       return vduse_domain_map_page(domain, page, offset, size, dir, attrs);
> >  }
> >
> > -static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr,
> > +static void vduse_dev_unmap_page(void *token, dma_addr_t dma_addr,
> >                               size_t size, enum dma_data_direction dir,
> >                               unsigned long attrs)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs);
> >  }
> >
> > -static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
> > -                                     dma_addr_t *dma_addr, gfp_t flag,
> > -                                     unsigned long attrs)
> > +static void *vduse_dev_alloc_coherent(void *token, size_t size,
> > +                                   dma_addr_t *dma_addr, gfp_t flag)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >       unsigned long iova;
> > @@ -866,7 +870,7 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
> >
> >       *dma_addr = DMA_MAPPING_ERROR;
> >       addr = vduse_domain_alloc_coherent(domain, size,
> > -                             (dma_addr_t *)&iova, flag, attrs);
> > +                                        (dma_addr_t *)&iova, flag);
> >       if (!addr)
> >               return NULL;
> >
> > @@ -875,25 +879,27 @@ static void *vduse_dev_alloc_coherent(struct device *dev, size_t size,
> >       return addr;
> >  }
> >
> > -static void vduse_dev_free_coherent(struct device *dev, size_t size,
> > +static void vduse_dev_free_coherent(void *token, size_t size,
> >                                       void *vaddr, dma_addr_t dma_addr,
> >                                       unsigned long attrs)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs);
> >  }
> >
> > -static size_t vduse_dev_max_mapping_size(struct device *dev)
> > +static size_t vduse_dev_max_mapping_size(void *token)
> >  {
> > +     struct device *dev = token;
> >       struct vduse_dev *vdev = dev_to_vduse(dev);
> >       struct vduse_iova_domain *domain = vdev->domain;
> >
> >       return domain->bounce_size;
> >  }
> >
> > -static const struct dma_map_ops vduse_dev_dma_ops = {
> > +static const struct virtio_map_ops vduse_map_ops = {
> >       .sync_single_for_device = vduse_dev_sync_single_for_device,
> >       .sync_single_for_cpu = vduse_dev_sync_single_for_cpu,
> >       .map_page = vduse_dev_map_page,
> > @@ -2009,7 +2015,7 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
> >               return -EEXIST;
> >
> >       vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev,
> > -                              &vduse_vdpa_config_ops, NULL,
> > +                              &vduse_vdpa_config_ops, &vduse_map_ops,
> >                                1, 1, name, true);
> >       if (IS_ERR(vdev))
> >               return PTR_ERR(vdev);
> > @@ -2022,7 +2028,6 @@ static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name)
> >               put_device(&vdev->vdpa.dev);
> >               return ret;
> >       }
> > -     set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops);
> >       vdev->vdpa.map_token = &vdev->vdpa.dev;
> >       vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev;
> >
> > --
> > 2.34.1
>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 7/9] vdpa: rename dma_dev to map_token
  2025-07-01  1:13 ` [PATCH 7/9] vdpa: rename dma_dev to map_token Jason Wang
@ 2025-07-01 21:25   ` kernel test robot
  0 siblings, 0 replies; 18+ messages in thread
From: kernel test robot @ 2025-07-01 21:25 UTC (permalink / raw)
  To: Jason Wang, mst, xuanzhuo, eperezma
  Cc: oe-kbuild-all, virtualization, linux-kernel, hch, xieyongji

Hi Jason,

kernel test robot noticed the following build errors:

[auto build test ERROR on mst-vhost/linux-next]
[also build test ERROR on linus/master v6.16-rc4 next-20250701]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Wang/virtio_ring-constify-virtqueue-pointer-for-DMA-helpers/20250701-091746
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
patch link:    https://lore.kernel.org/r/20250701011401.74851-8-jasowang%40redhat.com
patch subject: [PATCH 7/9] vdpa: rename dma_dev to map_token
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20250702/202507020521.PEt0EuaY-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250702/202507020521.PEt0EuaY-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507020521.PEt0EuaY-lkp@intel.com/

All error/warnings (new ones prefixed by >>):

   drivers/vdpa/mlx5/net/mlx5_vnet.c: In function 'mlx5_get_vq_dma_dev':
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3405:28: error: 'struct vdpa_device' has no member named 'dma_dev'; did you mean 'mdev'?
    3405 |         return mvdev->vdev.dma_dev;
         |                            ^~~~~~~
         |                            mdev
   drivers/vdpa/mlx5/net/mlx5_vnet.c: At top level:
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:10: error: 'const struct vdpa_config_ops' has no member named 'get_vq_dma_dev'; did you mean 'get_vq_ready'?
    3687 |         .get_vq_dma_dev = mlx5_get_vq_dma_dev,
         |          ^~~~~~~~~~~~~~
         |          get_vq_ready
>> drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:27: error: positional initialization of field in 'struct' declared with 'designated_init' attribute [-Werror=designated-init]
    3687 |         .get_vq_dma_dev = mlx5_get_vq_dma_dev,
         |                           ^~~~~~~~~~~~~~~~~~~
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:27: note: (near initialization for 'mlx5_vdpa_ops')
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:27: error: initialization of 'void (*)(struct vdpa_device *)' from incompatible pointer type 'struct device * (*)(struct vdpa_device *, u16)' {aka 'struct device * (*)(struct vdpa_device *, short unsigned int)'} [-Werror=incompatible-pointer-types]
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:27: note: (near initialization for 'mlx5_vdpa_ops.unbind_mm')
   drivers/vdpa/mlx5/net/mlx5_vnet.c: In function 'mlx5_vdpa_dev_add':
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3966:21: error: 'struct vdpa_device' has no member named 'dma_dev'; did you mean 'mdev'?
    3966 |         mvdev->vdev.dma_dev = &mdev->pdev->dev;
         |                     ^~~~~~~
         |                     mdev
   drivers/vdpa/mlx5/net/mlx5_vnet.c: In function 'mlx5_get_vq_dma_dev':
>> drivers/vdpa/mlx5/net/mlx5_vnet.c:3406:1: warning: control reaches end of non-void function [-Wreturn-type]
    3406 | }
         | ^
   cc1: some warnings being treated as errors


vim +3687 drivers/vdpa/mlx5/net/mlx5_vnet.c

8fcd20c307042b Eli Cohen          2022-07-14  3652  
1a86b377aa2147 Eli Cohen          2020-08-04  3653  static const struct vdpa_config_ops mlx5_vdpa_ops = {
1a86b377aa2147 Eli Cohen          2020-08-04  3654  	.set_vq_address = mlx5_vdpa_set_vq_address,
1a86b377aa2147 Eli Cohen          2020-08-04  3655  	.set_vq_num = mlx5_vdpa_set_vq_num,
1a86b377aa2147 Eli Cohen          2020-08-04  3656  	.kick_vq = mlx5_vdpa_kick_vq,
1a86b377aa2147 Eli Cohen          2020-08-04  3657  	.set_vq_cb = mlx5_vdpa_set_vq_cb,
1a86b377aa2147 Eli Cohen          2020-08-04  3658  	.set_vq_ready = mlx5_vdpa_set_vq_ready,
1a86b377aa2147 Eli Cohen          2020-08-04  3659  	.get_vq_ready = mlx5_vdpa_get_vq_ready,
1a86b377aa2147 Eli Cohen          2020-08-04  3660  	.set_vq_state = mlx5_vdpa_set_vq_state,
1a86b377aa2147 Eli Cohen          2020-08-04  3661  	.get_vq_state = mlx5_vdpa_get_vq_state,
1892a3d425bf52 Eli Cohen          2022-05-18  3662  	.get_vendor_vq_stats = mlx5_vdpa_get_vendor_vq_stats,
1a86b377aa2147 Eli Cohen          2020-08-04  3663  	.get_vq_notification = mlx5_get_vq_notification,
1a86b377aa2147 Eli Cohen          2020-08-04  3664  	.get_vq_irq = mlx5_get_vq_irq,
1a86b377aa2147 Eli Cohen          2020-08-04  3665  	.get_vq_align = mlx5_vdpa_get_vq_align,
d4821902e43453 Gautam Dawar       2022-03-30  3666  	.get_vq_group = mlx5_vdpa_get_vq_group,
03dd63c8fae459 Dragos Tatulea     2023-10-18  3667  	.get_vq_desc_group = mlx5_vdpa_get_vq_desc_group, /* Op disabled if not supported. */
a64917bc2e9b1e Eli Cohen          2022-01-05  3668  	.get_device_features = mlx5_vdpa_get_device_features,
c695964474f3a8 Eugenio Pérez      2023-07-03  3669  	.get_backend_features = mlx5_vdpa_get_backend_features,
a64917bc2e9b1e Eli Cohen          2022-01-05  3670  	.set_driver_features = mlx5_vdpa_set_driver_features,
a64917bc2e9b1e Eli Cohen          2022-01-05  3671  	.get_driver_features = mlx5_vdpa_get_driver_features,
1a86b377aa2147 Eli Cohen          2020-08-04  3672  	.set_config_cb = mlx5_vdpa_set_config_cb,
1a86b377aa2147 Eli Cohen          2020-08-04  3673  	.get_vq_num_max = mlx5_vdpa_get_vq_num_max,
1a86b377aa2147 Eli Cohen          2020-08-04  3674  	.get_device_id = mlx5_vdpa_get_device_id,
1a86b377aa2147 Eli Cohen          2020-08-04  3675  	.get_vendor_id = mlx5_vdpa_get_vendor_id,
1a86b377aa2147 Eli Cohen          2020-08-04  3676  	.get_status = mlx5_vdpa_get_status,
1a86b377aa2147 Eli Cohen          2020-08-04  3677  	.set_status = mlx5_vdpa_set_status,
0686082dbf7a20 Xie Yongji         2021-08-31  3678  	.reset = mlx5_vdpa_reset,
2eacf4b5e3ebe7 Si-Wei Liu         2023-10-21  3679  	.compat_reset = mlx5_vdpa_compat_reset,
442706f9f94d28 Stefano Garzarella 2021-03-15  3680  	.get_config_size = mlx5_vdpa_get_config_size,
1a86b377aa2147 Eli Cohen          2020-08-04  3681  	.get_config = mlx5_vdpa_get_config,
1a86b377aa2147 Eli Cohen          2020-08-04  3682  	.set_config = mlx5_vdpa_set_config,
1a86b377aa2147 Eli Cohen          2020-08-04  3683  	.get_generation = mlx5_vdpa_get_generation,
1a86b377aa2147 Eli Cohen          2020-08-04  3684  	.set_map = mlx5_vdpa_set_map,
2eacf4b5e3ebe7 Si-Wei Liu         2023-10-21  3685  	.reset_map = mlx5_vdpa_reset_map,
8fcd20c307042b Eli Cohen          2022-07-14  3686  	.set_group_asid = mlx5_set_group_asid,
36871fb92b7059 Jason Wang         2023-01-19 @3687  	.get_vq_dma_dev = mlx5_get_vq_dma_dev,
1a86b377aa2147 Eli Cohen          2020-08-04  3688  	.free = mlx5_vdpa_free,
cae15c2ed8e6e0 Eli Cohen          2022-07-14  3689  	.suspend = mlx5_vdpa_suspend,
145096937b8a6a Dragos Tatulea     2023-12-25  3690  	.resume = mlx5_vdpa_resume, /* Op disabled if not supported. */
1a86b377aa2147 Eli Cohen          2020-08-04  3691  };
1a86b377aa2147 Eli Cohen          2020-08-04  3692  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 8/9] vdpa: introduce map ops
  2025-07-01  1:14 ` [PATCH 8/9] vdpa: introduce map ops Jason Wang
@ 2025-07-02  5:20   ` kernel test robot
  2025-07-02  6:59     ` Jason Wang
  0 siblings, 1 reply; 18+ messages in thread
From: kernel test robot @ 2025-07-02  5:20 UTC (permalink / raw)
  To: Jason Wang, mst, xuanzhuo, eperezma
  Cc: llvm, oe-kbuild-all, virtualization, linux-kernel, hch, xieyongji

Hi Jason,

kernel test robot noticed the following build errors:

[auto build test ERROR on mst-vhost/linux-next]
[also build test ERROR on linus/master v6.16-rc4 next-20250701]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Wang/virtio_ring-constify-virtqueue-pointer-for-DMA-helpers/20250701-091746
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
patch link:    https://lore.kernel.org/r/20250701011401.74851-9-jasowang%40redhat.com
patch subject: [PATCH 8/9] vdpa: introduce map ops
config: x86_64-randconfig-073-20250702 (https://download.01.org/0day-ci/archive/20250702/202507021212.rhQmuuvi-lkp@intel.com/config)
compiler: clang version 20.1.7 (https://github.com/llvm/llvm-project 6146a88f60492b520a36f8f8f3231e15f3cc6082)
rustc: rustc 1.78.0 (9b00956e5 2024-04-29)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250702/202507021212.rhQmuuvi-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507021212.rhQmuuvi-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/vdpa/mlx5/net/mlx5_vnet.c:3405:21: error: no member named 'dma_dev' in 'struct vdpa_device'
    3405 |         return mvdev->vdev.dma_dev;
         |                ~~~~~~~~~~~ ^
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3687:3: error: field designator 'get_vq_dma_dev' does not refer to any field in type 'const struct vdpa_config_ops'
    3687 |         .get_vq_dma_dev = mlx5_get_vq_dma_dev,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/vdpa/mlx5/net/mlx5_vnet.c:3880:59: error: too few arguments provided to function-like macro invocation
    3880 |                                  MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
         |                                                                                       ^
   include/linux/vdpa.h:471:9: note: macro 'vdpa_alloc_device' defined here
     471 | #define vdpa_alloc_device(dev_struct, member, parent, config, map, \
         |         ^
>> drivers/vdpa/mlx5/net/mlx5_vnet.c:3879:9: error: use of undeclared identifier 'vdpa_alloc_device'; did you mean '__vdpa_alloc_device'?
    3879 |         ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mgtdev->vdpa_ops,
         |                ^~~~~~~~~~~~~~~~~
         |                __vdpa_alloc_device
   include/linux/vdpa.h:449:21: note: '__vdpa_alloc_device' declared here
     449 | struct vdpa_device *__vdpa_alloc_device(struct device *parent,
         |                     ^
   drivers/vdpa/mlx5/net/mlx5_vnet.c:3966:14: error: no member named 'dma_dev' in 'struct vdpa_device'
    3966 |         mvdev->vdev.dma_dev = &mdev->pdev->dev;
         |         ~~~~~~~~~~~ ^
   5 errors generated.


vim +3879 drivers/vdpa/mlx5/net/mlx5_vnet.c

bc9a2b3e686e32 Eli Cohen           2023-06-07  3817  
d8ca2fa5be1bdb Parav Pandit        2021-10-26  3818  static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
d8ca2fa5be1bdb Parav Pandit        2021-10-26  3819  			     const struct vdpa_dev_set_config *add_config)
1a86b377aa2147 Eli Cohen           2020-08-04  3820  {
58926c8aab104d Eli Cohen           2021-04-08  3821  	struct mlx5_vdpa_mgmtdev *mgtdev = container_of(v_mdev, struct mlx5_vdpa_mgmtdev, mgtdev);
1a86b377aa2147 Eli Cohen           2020-08-04  3822  	struct virtio_net_config *config;
7c9f131f366ab4 Eli Cohen           2021-04-22  3823  	struct mlx5_core_dev *pfmdev;
1a86b377aa2147 Eli Cohen           2020-08-04  3824  	struct mlx5_vdpa_dev *mvdev;
1a86b377aa2147 Eli Cohen           2020-08-04  3825  	struct mlx5_vdpa_net *ndev;
58926c8aab104d Eli Cohen           2021-04-08  3826  	struct mlx5_core_dev *mdev;
deeacf35c922da Si-Wei Liu          2023-02-06  3827  	u64 device_features;
1a86b377aa2147 Eli Cohen           2020-08-04  3828  	u32 max_vqs;
246fd1caf0f442 Eli Cohen           2021-09-09  3829  	u16 mtu;
1a86b377aa2147 Eli Cohen           2020-08-04  3830  	int err;
1a86b377aa2147 Eli Cohen           2020-08-04  3831  
58926c8aab104d Eli Cohen           2021-04-08  3832  	if (mgtdev->ndev)
58926c8aab104d Eli Cohen           2021-04-08  3833  		return -ENOSPC;
58926c8aab104d Eli Cohen           2021-04-08  3834  
58926c8aab104d Eli Cohen           2021-04-08  3835  	mdev = mgtdev->madev->mdev;
deeacf35c922da Si-Wei Liu          2023-02-06  3836  	device_features = mgtdev->mgtdev.supported_features;
deeacf35c922da Si-Wei Liu          2023-02-06  3837  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
deeacf35c922da Si-Wei Liu          2023-02-06  3838  		if (add_config->device_features & ~device_features) {
deeacf35c922da Si-Wei Liu          2023-02-06  3839  			dev_warn(mdev->device,
deeacf35c922da Si-Wei Liu          2023-02-06  3840  				 "The provisioned features 0x%llx are not supported by this device with features 0x%llx\n",
deeacf35c922da Si-Wei Liu          2023-02-06  3841  				 add_config->device_features, device_features);
deeacf35c922da Si-Wei Liu          2023-02-06  3842  			return -EINVAL;
deeacf35c922da Si-Wei Liu          2023-02-06  3843  		}
deeacf35c922da Si-Wei Liu          2023-02-06  3844  		device_features &= add_config->device_features;
791a1cb7b8591e Eli Cohen           2023-03-21  3845  	} else {
791a1cb7b8591e Eli Cohen           2023-03-21  3846  		device_features &= ~BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
deeacf35c922da Si-Wei Liu          2023-02-06  3847  	}
deeacf35c922da Si-Wei Liu          2023-02-06  3848  	if (!(device_features & BIT_ULL(VIRTIO_F_VERSION_1) &&
deeacf35c922da Si-Wei Liu          2023-02-06  3849  	      device_features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM))) {
deeacf35c922da Si-Wei Liu          2023-02-06  3850  		dev_warn(mdev->device,
deeacf35c922da Si-Wei Liu          2023-02-06  3851  			 "Must provision minimum features 0x%llx for this device",
deeacf35c922da Si-Wei Liu          2023-02-06  3852  			 BIT_ULL(VIRTIO_F_VERSION_1) | BIT_ULL(VIRTIO_F_ACCESS_PLATFORM));
deeacf35c922da Si-Wei Liu          2023-02-06  3853  		return -EOPNOTSUPP;
deeacf35c922da Si-Wei Liu          2023-02-06  3854  	}
deeacf35c922da Si-Wei Liu          2023-02-06  3855  
879753c816dbbd Eli Cohen           2021-08-11  3856  	if (!(MLX5_CAP_DEV_VDPA_EMULATION(mdev, virtio_queue_type) &
879753c816dbbd Eli Cohen           2021-08-11  3857  	    MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT)) {
879753c816dbbd Eli Cohen           2021-08-11  3858  		dev_warn(mdev->device, "missing support for split virtqueues\n");
879753c816dbbd Eli Cohen           2021-08-11  3859  		return -EOPNOTSUPP;
879753c816dbbd Eli Cohen           2021-08-11  3860  	}
879753c816dbbd Eli Cohen           2021-08-11  3861  
acde3929492bcb Eli Cohen           2022-05-16  3862  	max_vqs = min_t(int, MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues),
acde3929492bcb Eli Cohen           2022-05-16  3863  			1 << MLX5_CAP_GEN(mdev, log_max_rqt_size));
75560522eaef2f Eli Cohen           2022-01-05  3864  	if (max_vqs < 2) {
75560522eaef2f Eli Cohen           2022-01-05  3865  		dev_warn(mdev->device,
75560522eaef2f Eli Cohen           2022-01-05  3866  			 "%d virtqueues are supported. At least 2 are required\n",
75560522eaef2f Eli Cohen           2022-01-05  3867  			 max_vqs);
75560522eaef2f Eli Cohen           2022-01-05  3868  		return -EAGAIN;
75560522eaef2f Eli Cohen           2022-01-05  3869  	}
75560522eaef2f Eli Cohen           2022-01-05  3870  
75560522eaef2f Eli Cohen           2022-01-05  3871  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
75560522eaef2f Eli Cohen           2022-01-05  3872  		if (add_config->net.max_vq_pairs > max_vqs / 2)
75560522eaef2f Eli Cohen           2022-01-05  3873  			return -EINVAL;
75560522eaef2f Eli Cohen           2022-01-05  3874  		max_vqs = min_t(u32, max_vqs, 2 * add_config->net.max_vq_pairs);
75560522eaef2f Eli Cohen           2022-01-05  3875  	} else {
75560522eaef2f Eli Cohen           2022-01-05  3876  		max_vqs = 2;
75560522eaef2f Eli Cohen           2022-01-05  3877  	}
1a86b377aa2147 Eli Cohen           2020-08-04  3878  
03dd63c8fae459 Dragos Tatulea      2023-10-18 @3879  	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mgtdev->vdpa_ops,
8fcd20c307042b Eli Cohen           2022-07-14 @3880  				 MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
1a86b377aa2147 Eli Cohen           2020-08-04  3881  	if (IS_ERR(ndev))
74c9729dd892a1 Leon Romanovsky     2020-10-04  3882  		return PTR_ERR(ndev);
1a86b377aa2147 Eli Cohen           2020-08-04  3883  
1a86b377aa2147 Eli Cohen           2020-08-04  3884  	ndev->mvdev.max_vqs = max_vqs;
1a86b377aa2147 Eli Cohen           2020-08-04  3885  	mvdev = &ndev->mvdev;
1a86b377aa2147 Eli Cohen           2020-08-04  3886  	mvdev->mdev = mdev;
439252e167ac45 Konstantin Shkolnyy 2025-02-04  3887  	/* cpu_to_mlx5vdpa16() below depends on this flag */
439252e167ac45 Konstantin Shkolnyy 2025-02-04  3888  	mvdev->actual_features =
439252e167ac45 Konstantin Shkolnyy 2025-02-04  3889  			(device_features & BIT_ULL(VIRTIO_F_VERSION_1));
75560522eaef2f Eli Cohen           2022-01-05  3890  
75560522eaef2f Eli Cohen           2022-01-05  3891  	ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
75560522eaef2f Eli Cohen           2022-01-05  3892  	ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
75560522eaef2f Eli Cohen           2022-01-05  3893  	if (!ndev->vqs || !ndev->event_cbs) {
75560522eaef2f Eli Cohen           2022-01-05  3894  		err = -ENOMEM;
75560522eaef2f Eli Cohen           2022-01-05  3895  		goto err_alloc;
75560522eaef2f Eli Cohen           2022-01-05  3896  	}
1835ed4a5d49d2 Dragos Tatulea      2024-06-26  3897  	ndev->cur_num_vqs = MLX5V_DEFAULT_VQ_COUNT;
75560522eaef2f Eli Cohen           2022-01-05  3898  
4a19f2942a0fe5 Dragos Tatulea      2024-06-26  3899  	mvqs_set_defaults(ndev);
bc9a2b3e686e32 Eli Cohen           2023-06-07  3900  	allocate_irqs(ndev);
759ae7f9bf1e6b Eli Cohen           2022-05-18  3901  	init_rwsem(&ndev->reslock);
1a86b377aa2147 Eli Cohen           2020-08-04  3902  	config = &ndev->config;
1e00e821e4ca63 Eli Cohen           2022-02-21  3903  
1e00e821e4ca63 Eli Cohen           2022-02-21  3904  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU)) {
1e00e821e4ca63 Eli Cohen           2022-02-21  3905  		err = config_func_mtu(mdev, add_config->net.mtu);
1e00e821e4ca63 Eli Cohen           2022-02-21  3906  		if (err)
759ae7f9bf1e6b Eli Cohen           2022-05-18  3907  			goto err_alloc;
1e00e821e4ca63 Eli Cohen           2022-02-21  3908  	}
1e00e821e4ca63 Eli Cohen           2022-02-21  3909  
deeacf35c922da Si-Wei Liu          2023-02-06  3910  	if (device_features & BIT_ULL(VIRTIO_NET_F_MTU)) {
246fd1caf0f442 Eli Cohen           2021-09-09  3911  		err = query_mtu(mdev, &mtu);
1a86b377aa2147 Eli Cohen           2020-08-04  3912  		if (err)
759ae7f9bf1e6b Eli Cohen           2022-05-18  3913  			goto err_alloc;
1a86b377aa2147 Eli Cohen           2020-08-04  3914  
246fd1caf0f442 Eli Cohen           2021-09-09  3915  		ndev->config.mtu = cpu_to_mlx5vdpa16(mvdev, mtu);
033779a708f0b0 Si-Wei Liu          2023-02-06  3916  	}
1a86b377aa2147 Eli Cohen           2020-08-04  3917  
deeacf35c922da Si-Wei Liu          2023-02-06  3918  	if (device_features & BIT_ULL(VIRTIO_NET_F_STATUS)) {
edf747affc41a1 Eli Cohen           2021-09-09  3919  		if (get_link_state(mvdev))
edf747affc41a1 Eli Cohen           2021-09-09  3920  			ndev->config.status |= cpu_to_mlx5vdpa16(mvdev, VIRTIO_NET_S_LINK_UP);
edf747affc41a1 Eli Cohen           2021-09-09  3921  		else
edf747affc41a1 Eli Cohen           2021-09-09  3922  			ndev->config.status &= cpu_to_mlx5vdpa16(mvdev, ~VIRTIO_NET_S_LINK_UP);
033779a708f0b0 Si-Wei Liu          2023-02-06  3923  	}
edf747affc41a1 Eli Cohen           2021-09-09  3924  
a007d940040c0b Eli Cohen           2021-10-26  3925  	if (add_config->mask & (1 << VDPA_ATTR_DEV_NET_CFG_MACADDR)) {
a007d940040c0b Eli Cohen           2021-10-26  3926  		memcpy(ndev->config.mac, add_config->net.mac, ETH_ALEN);
deeacf35c922da Si-Wei Liu          2023-02-06  3927  	/* No bother setting mac address in config if not going to provision _F_MAC */
deeacf35c922da Si-Wei Liu          2023-02-06  3928  	} else if ((add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) == 0 ||
deeacf35c922da Si-Wei Liu          2023-02-06  3929  		   device_features & BIT_ULL(VIRTIO_NET_F_MAC)) {
1a86b377aa2147 Eli Cohen           2020-08-04  3930  		err = mlx5_query_nic_vport_mac_address(mdev, 0, 0, config->mac);
1a86b377aa2147 Eli Cohen           2020-08-04  3931  		if (err)
759ae7f9bf1e6b Eli Cohen           2022-05-18  3932  			goto err_alloc;
a007d940040c0b Eli Cohen           2021-10-26  3933  	}
1a86b377aa2147 Eli Cohen           2020-08-04  3934  
7c9f131f366ab4 Eli Cohen           2021-04-22  3935  	if (!is_zero_ether_addr(config->mac)) {
7c9f131f366ab4 Eli Cohen           2021-04-22  3936  		pfmdev = pci_get_drvdata(pci_physfn(mdev->pdev));
7c9f131f366ab4 Eli Cohen           2021-04-22  3937  		err = mlx5_mpfs_add_mac(pfmdev, config->mac);
7c9f131f366ab4 Eli Cohen           2021-04-22  3938  		if (err)
759ae7f9bf1e6b Eli Cohen           2022-05-18  3939  			goto err_alloc;
deeacf35c922da Si-Wei Liu          2023-02-06  3940  	} else if ((add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) == 0) {
deeacf35c922da Si-Wei Liu          2023-02-06  3941  		/*
deeacf35c922da Si-Wei Liu          2023-02-06  3942  		 * We used to clear _F_MAC feature bit if seeing
deeacf35c922da Si-Wei Liu          2023-02-06  3943  		 * zero mac address when device features are not
deeacf35c922da Si-Wei Liu          2023-02-06  3944  		 * specifically provisioned. Keep the behaviour
deeacf35c922da Si-Wei Liu          2023-02-06  3945  		 * so old scripts do not break.
deeacf35c922da Si-Wei Liu          2023-02-06  3946  		 */
deeacf35c922da Si-Wei Liu          2023-02-06  3947  		device_features &= ~BIT_ULL(VIRTIO_NET_F_MAC);
deeacf35c922da Si-Wei Liu          2023-02-06  3948  	} else if (device_features & BIT_ULL(VIRTIO_NET_F_MAC)) {
deeacf35c922da Si-Wei Liu          2023-02-06  3949  		/* Don't provision zero mac address for _F_MAC */
deeacf35c922da Si-Wei Liu          2023-02-06  3950  		mlx5_vdpa_warn(&ndev->mvdev,
deeacf35c922da Si-Wei Liu          2023-02-06  3951  			       "No mac address provisioned?\n");
deeacf35c922da Si-Wei Liu          2023-02-06  3952  		err = -EINVAL;
deeacf35c922da Si-Wei Liu          2023-02-06  3953  		goto err_alloc;
7c9f131f366ab4 Eli Cohen           2021-04-22  3954  	}
7c9f131f366ab4 Eli Cohen           2021-04-22  3955  
1e8dac7bb6ca9c Dragos Tatulea      2024-06-26  3956  	if (device_features & BIT_ULL(VIRTIO_NET_F_MQ)) {
acde3929492bcb Eli Cohen           2022-05-16  3957  		config->max_virtqueue_pairs = cpu_to_mlx5vdpa16(mvdev, max_vqs / 2);
1e8dac7bb6ca9c Dragos Tatulea      2024-06-26  3958  		ndev->rqt_size = max_vqs / 2;
1e8dac7bb6ca9c Dragos Tatulea      2024-06-26  3959  	} else {
1e8dac7bb6ca9c Dragos Tatulea      2024-06-26  3960  		ndev->rqt_size = 1;
1e8dac7bb6ca9c Dragos Tatulea      2024-06-26  3961  	}
deeacf35c922da Si-Wei Liu          2023-02-06  3962  
1fcdf43ea69e97 Dragos Tatulea      2024-08-16  3963  	mlx5_cmd_init_async_ctx(mdev, &mvdev->async_ctx);
1fcdf43ea69e97 Dragos Tatulea      2024-08-16  3964  
deeacf35c922da Si-Wei Liu          2023-02-06  3965  	ndev->mvdev.mlx_features = device_features;
7d23dcdf213c2e Eli Cohen           2021-06-06  3966  	mvdev->vdev.dma_dev = &mdev->pdev->dev;
1a86b377aa2147 Eli Cohen           2020-08-04  3967  	err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
1a86b377aa2147 Eli Cohen           2020-08-04  3968  	if (err)
83e445e64f48bd Dragos Tatulea      2024-11-05  3969  		goto err_alloc;
1a86b377aa2147 Eli Cohen           2020-08-04  3970  
f30a1232b6979c Dragos Tatulea      2024-08-30  3971  	err = mlx5_vdpa_init_mr_resources(mvdev);
f30a1232b6979c Dragos Tatulea      2024-08-30  3972  	if (err)
83e445e64f48bd Dragos Tatulea      2024-11-05  3973  		goto err_alloc;
f16d65124380ac Dragos Tatulea      2023-12-25  3974  
6f5312f801836e Eli Cohen           2021-06-02  3975  	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
049cbeab861ef4 Dragos Tatulea      2023-10-18  3976  		err = mlx5_vdpa_create_dma_mr(mvdev);
1a86b377aa2147 Eli Cohen           2020-08-04  3977  		if (err)
83e445e64f48bd Dragos Tatulea      2024-11-05  3978  			goto err_alloc;
6f5312f801836e Eli Cohen           2021-06-02  3979  	}
6f5312f801836e Eli Cohen           2021-06-02  3980  
1f5d6476f12152 Dragos Tatulea      2024-06-26  3981  	err = alloc_fixed_resources(ndev);
6f5312f801836e Eli Cohen           2021-06-02  3982  	if (err)
83e445e64f48bd Dragos Tatulea      2024-11-05  3983  		goto err_alloc;
1a86b377aa2147 Eli Cohen           2020-08-04  3984  
55ebf0d60e3cc6 Jason Wang          2022-03-29  3985  	ndev->cvq_ent.mvdev = mvdev;
55ebf0d60e3cc6 Jason Wang          2022-03-29  3986  	INIT_WORK(&ndev->cvq_ent.work, mlx5_cvq_kick_handler);
218bdd20e56cab Eli Cohen           2021-09-09  3987  	mvdev->wq = create_singlethread_workqueue("mlx5_vdpa_wq");
5262912ef3cfc5 Eli Cohen           2021-08-23  3988  	if (!mvdev->wq) {
5262912ef3cfc5 Eli Cohen           2021-08-23  3989  		err = -ENOMEM;
83e445e64f48bd Dragos Tatulea      2024-11-05  3990  		goto err_alloc;
5262912ef3cfc5 Eli Cohen           2021-08-23  3991  	}
5262912ef3cfc5 Eli Cohen           2021-08-23  3992  
58926c8aab104d Eli Cohen           2021-04-08  3993  	mvdev->vdev.mdev = &mgtdev->mgtdev;
acde3929492bcb Eli Cohen           2022-05-16  3994  	err = _vdpa_register_device(&mvdev->vdev, max_vqs + 1);
1a86b377aa2147 Eli Cohen           2020-08-04  3995  	if (err)
1a86b377aa2147 Eli Cohen           2020-08-04  3996  		goto err_reg;
1a86b377aa2147 Eli Cohen           2020-08-04  3997  
58926c8aab104d Eli Cohen           2021-04-08  3998  	mgtdev->ndev = ndev;
ffb1aae43ed507 Dragos Tatulea      2024-06-26  3999  
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4000  	/* For virtio-vdpa, the device was set up during device register. */
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4001  	if (ndev->setup)
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4002  		return 0;
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4003  
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4004  	down_write(&ndev->reslock);
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4005  	err = setup_vq_resources(ndev, false);
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4006  	up_write(&ndev->reslock);
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4007  	if (err)
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4008  		goto err_setup_vq_res;
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4009  
74c9729dd892a1 Leon Romanovsky     2020-10-04  4010  	return 0;
1a86b377aa2147 Eli Cohen           2020-08-04  4011  
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4012  err_setup_vq_res:
ffb1aae43ed507 Dragos Tatulea      2024-06-26  4013  	_vdpa_unregister_device(&mvdev->vdev);
1a86b377aa2147 Eli Cohen           2020-08-04  4014  err_reg:
5262912ef3cfc5 Eli Cohen           2021-08-23  4015  	destroy_workqueue(mvdev->wq);
75560522eaef2f Eli Cohen           2022-01-05  4016  err_alloc:
1a86b377aa2147 Eli Cohen           2020-08-04  4017  	put_device(&mvdev->vdev.dev);
74c9729dd892a1 Leon Romanovsky     2020-10-04  4018  	return err;
1a86b377aa2147 Eli Cohen           2020-08-04  4019  }
1a86b377aa2147 Eli Cohen           2020-08-04  4020  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 8/9] vdpa: introduce map ops
  2025-07-02  5:20   ` kernel test robot
@ 2025-07-02  6:59     ` Jason Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2025-07-02  6:59 UTC (permalink / raw)
  To: kernel test robot
  Cc: mst, xuanzhuo, eperezma, llvm, oe-kbuild-all, virtualization,
	linux-kernel, hch, xieyongji

On Wed, Jul 2, 2025 at 1:21 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi Jason,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on mst-vhost/linux-next]
> [also build test ERROR on linus/master v6.16-rc4 next-20250701]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Wang/virtio_ring-constify-virtqueue-pointer-for-DMA-helpers/20250701-091746
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
> patch link:    https://lore.kernel.org/r/20250701011401.74851-9-jasowang%40redhat.com
> patch subject: [PATCH 8/9] vdpa: introduce map ops
> config: x86_64-randconfig-073-20250702 (https://download.01.org/0day-ci/archive/20250702/202507021212.rhQmuuvi-lkp@intel.com/config)
> compiler: clang version 20.1.7 (https://github.com/llvm/llvm-project 6146a88f60492b520a36f8f8f3231e15f3cc6082)
> rustc: rustc 1.78.0 (9b00956e5 2024-04-29)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250702/202507021212.rhQmuuvi-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202507021212.rhQmuuvi-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
>    drivers/vdpa/mlx5/net/mlx5_vnet.c:3405:21: error: no member named 'dma_dev' in 'struct vdpa_device'
>     3405 |         return mvdev->vdev.dma_dev;
>          |                ~~~~~~~~~~~ ^

It seems I forgot to convert the mlx5 drivers. Will do that in the
next version (if we agree this is the right direction).
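
The conversion itself should be mechanical, roughly (a sketch based
on the renames in this series, not the actual follow-up patch):

	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev,
				 mdev->device, &mgtdev->vdpa_ops,
				 NULL /* no device-specific map ops */,
				 MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS,
				 name, false);
	/* ... */
	/* mlx5 does real DMA, so the map token stays the PCI device */
	mvdev->vdev.map_token = &mdev->pdev->dev;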

Thanks


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/9] Refine virtio mapping API
  2025-07-01  8:00   ` Jason Wang
@ 2025-07-03  8:57     ` Christoph Hellwig
  0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2025-07-03  8:57 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, xuanzhuo, eperezma, virtualization,
	linux-kernel, hch, xieyongji

On Tue, Jul 01, 2025 at 04:00:31PM +0800, Jason Wang wrote:
> Actually not, it doesn't change how things work for the device that
> does DMA already like:
> 
> If device has its specific mapping ops
>         go for device specific mapping ops
> else
>         go for DMA API
> 
> VDUSE is the only user now, and extra indirection has been used for
> VDUSE even without this series (via abusing DMA API). This series
> switch from:
> 
> virtio core -> DMA API -> VDUSE DMA API -> iova domain ops
> 
> to
> 
> virtio core -> virtio map ops -> VDUSE map ops -> iova domain ops

And that's exactly how it should be done.

Thanks for doing the work!  I'll go through it and flag any nitpicks
I find, but the concept is the only right one here.


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2025-07-03  8:57 UTC | newest]

Thread overview: 18+ messages
2025-07-01  1:13 [PATCH 0/9] Refine virtio mapping API Jason Wang
2025-07-01  1:13 ` [PATCH 1/9] virtio_ring: constify virtqueue pointer for DMA helpers Jason Wang
2025-07-01  1:13 ` [PATCH 2/9] virtio_ring: switch to use dma_{map|unmap}_page() Jason Wang
2025-07-01  1:13 ` [PATCH 3/9] virtio: rename dma helpers Jason Wang
2025-07-01  1:13 ` [PATCH 4/9] virtio: rename dma_dev to map_token Jason Wang
2025-07-01  1:13 ` [PATCH 5/9] virtio_ring: rename dma_handle to map_handle Jason Wang
2025-07-01  1:13 ` [PATCH 6/9] virtio: introduce map ops in virtio core Jason Wang
2025-07-01  1:13 ` [PATCH 7/9] vdpa: rename dma_dev to map_token Jason Wang
2025-07-01 21:25   ` kernel test robot
2025-07-01  1:14 ` [PATCH 8/9] vdpa: introduce map ops Jason Wang
2025-07-02  5:20   ` kernel test robot
2025-07-02  6:59     ` Jason Wang
2025-07-01  1:14 ` [PATCH 9/9] vduse: switch to use virtio map API instead of DMA API Jason Wang
2025-07-01  7:50   ` Michael S. Tsirkin
2025-07-01  8:11     ` Jason Wang
2025-07-01  7:04 ` [PATCH 0/9] Refine virtio mapping API Michael S. Tsirkin
2025-07-01  8:00   ` Jason Wang
2025-07-03  8:57     ` Christoph Hellwig
