* [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
@ 2023-09-28 16:33 Dragos Tatulea
  0 siblings, 0 replies; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:33 UTC (permalink / raw)
  To: eperezma, gal, Jason Wang, Leon Romanovsky, Michael S. Tsirkin,
	Saeed Mahameed, Xuan Zhuo
  Cc: Dragos Tatulea, kvm, linux-kernel, linux-rdma, netdev,
	virtualization

This patch series adds support for vq descriptor table mappings which
are used to improve vdpa live migration downtime. The improvement comes
from using smaller mappings which take less time to create and destroy
in hw.

The first part adds the vdpa core changes from Si-Wei [0].

The second part adds support in mlx5_vdpa:
- Refactor the mr code to be able to cleanly add descriptor mappings.
- Add hardware descriptor mr support.
- Properly update iotlb for cvq during ASID switch.

Changes in v2:

- The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
  was split off into two patches to avoid merge conflicts with Linus'
  tree.

  The first patch contains only changes to mlx5_ifc.h. It must be
  applied to the mlx5-next tree [1] first. Once this patch is applied
  on mlx5-next, the change has to be pulled from mlx5-next into the
  vhost tree, and only then can the remaining patches be applied.

[0] https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
[1] https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next

Dragos Tatulea (13):
  vdpa/mlx5: Expose descriptor group mkey hw capability
  vdpa/mlx5: Create helper function for dma mappings
  vdpa/mlx5: Decouple cvq iotlb handling from hw mapping code
  vdpa/mlx5: Take cvq iotlb lock during refresh
  vdpa/mlx5: Collapse "dvq" mr add/delete functions
  vdpa/mlx5: Rename mr destroy functions
  vdpa/mlx5: Allow creation/deletion of any given mr struct
  vdpa/mlx5: Move mr mutex out of mr struct
  vdpa/mlx5: Improve mr update flow
  vdpa/mlx5: Introduce mr for vq descriptor
  vdpa/mlx5: Enable hw support for vq descriptor mapping
  vdpa/mlx5: Make iotlb helper functions more generic
  vdpa/mlx5: Update cvq iotlb mapping on ASID change

Si-Wei Liu (3):
  vdpa: introduce dedicated descriptor group for virtqueue
  vhost-vdpa: introduce descriptor group backend feature
  vhost-vdpa: uAPI to get dedicated descriptor group id

 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  31 +++--
 drivers/vdpa/mlx5/core/mr.c        | 191 ++++++++++++++++-------------
 drivers/vdpa/mlx5/core/resources.c |   6 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 100 ++++++++++-----
 drivers/vhost/vdpa.c               |  27 ++++
 include/linux/mlx5/mlx5_ifc.h      |   8 +-
 include/linux/mlx5/mlx5_ifc_vdpa.h |   7 +-
 include/linux/vdpa.h               |  11 ++
 include/uapi/linux/vhost.h         |   8 ++
 include/uapi/linux/vhost_types.h   |   5 +
 10 files changed, 264 insertions(+), 130 deletions(-)

-- 
2.41.0



* [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
@ 2023-09-28 16:45 Dragos Tatulea
  2023-09-28 16:45 ` [PATCH mlx5-next 01/16] vdpa/mlx5: Expose descriptor group mkey hw capability Dragos Tatulea
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:45 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang, Saeed Mahameed,
	Leon Romanovsky, Xuan Zhuo
  Cc: virtualization, Dragos Tatulea, netdev, linux-rdma, linux-kernel,
	kvm

This patch series adds support for vq descriptor table mappings which
are used to improve vdpa live migration downtime. The improvement comes
from using smaller mappings which take less time to create and destroy
in hw.

The first part adds the vdpa core changes from Si-Wei [0].

The second part adds support in mlx5_vdpa:
- Refactor the mr code to be able to cleanly add descriptor mappings.
- Add hardware descriptor mr support.
- Properly update iotlb for cvq during ASID switch.

Changes in v2:

- The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
  was split off into two patches to avoid merge conflicts with Linus'
  tree.

  The first patch contains only changes to mlx5_ifc.h. It must be
  applied to the mlx5-next tree [1] first. Once this patch is applied
  on mlx5-next, the change has to be pulled from mlx5-next into the
  vhost tree, and only then can the remaining patches be applied.

[0] https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
[1] https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next

Dragos Tatulea (13):
  vdpa/mlx5: Expose descriptor group mkey hw capability
  vdpa/mlx5: Create helper function for dma mappings
  vdpa/mlx5: Decouple cvq iotlb handling from hw mapping code
  vdpa/mlx5: Take cvq iotlb lock during refresh
  vdpa/mlx5: Collapse "dvq" mr add/delete functions
  vdpa/mlx5: Rename mr destroy functions
  vdpa/mlx5: Allow creation/deletion of any given mr struct
  vdpa/mlx5: Move mr mutex out of mr struct
  vdpa/mlx5: Improve mr update flow
  vdpa/mlx5: Introduce mr for vq descriptor
  vdpa/mlx5: Enable hw support for vq descriptor mapping
  vdpa/mlx5: Make iotlb helper functions more generic
  vdpa/mlx5: Update cvq iotlb mapping on ASID change

Si-Wei Liu (3):
  vdpa: introduce dedicated descriptor group for virtqueue
  vhost-vdpa: introduce descriptor group backend feature
  vhost-vdpa: uAPI to get dedicated descriptor group id

 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  31 +++--
 drivers/vdpa/mlx5/core/mr.c        | 191 ++++++++++++++++-------------
 drivers/vdpa/mlx5/core/resources.c |   6 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 100 ++++++++++-----
 drivers/vhost/vdpa.c               |  27 ++++
 include/linux/mlx5/mlx5_ifc.h      |   8 +-
 include/linux/mlx5/mlx5_ifc_vdpa.h |   7 +-
 include/linux/vdpa.h               |  11 ++
 include/uapi/linux/vhost.h         |   8 ++
 include/uapi/linux/vhost_types.h   |   5 +
 10 files changed, 264 insertions(+), 130 deletions(-)

-- 
2.41.0



* [PATCH mlx5-next 01/16] vdpa/mlx5: Expose descriptor group mkey hw capability
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
@ 2023-09-28 16:45 ` Dragos Tatulea
  2023-09-28 16:45 ` [PATCH vhost 03/16] vhost-vdpa: introduce descriptor group backend feature Dragos Tatulea
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:45 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang, Saeed Mahameed,
	Leon Romanovsky
  Cc: virtualization, Dragos Tatulea, netdev, linux-rdma, linux-kernel

Necessary for improved live migration flow. Actual support will be added
in a downstream patch.

Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
---
 include/linux/mlx5/mlx5_ifc.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index dd8421d021cf..ec15330b970d 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1231,7 +1231,13 @@ struct mlx5_ifc_virtio_emulation_cap_bits {
 	u8         max_emulated_devices[0x8];
 	u8         max_num_virtio_queues[0x18];
 
-	u8         reserved_at_a0[0x60];
+	u8         reserved_at_a0[0x20];
+
+	u8	   reserved_at_c0[0x13];
+	u8         desc_group_mkey_supported[0x1];
+	u8         reserved_at_d4[0xc];
+
+	u8         reserved_at_e0[0x20];
 
 	u8         umem_1_buffer_param_a[0x20];
 
-- 
2.41.0



* [PATCH vhost 03/16] vhost-vdpa: introduce descriptor group backend feature
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
  2023-09-28 16:45 ` [PATCH mlx5-next 01/16] vdpa/mlx5: Expose descriptor group mkey hw capability Dragos Tatulea
@ 2023-09-28 16:45 ` Dragos Tatulea
  2023-09-28 16:45 ` [PATCH vhost 04/16] vhost-vdpa: uAPI to get dedicated descriptor group id Dragos Tatulea
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:45 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang
  Cc: virtualization, Si-Wei Liu, kvm, netdev, linux-kernel

From: Si-Wei Liu <si-wei.liu@oracle.com>

Userspace can tell whether the device has a dedicated descriptor group
by checking this feature bit.

It's only exposed if the vdpa driver backend implements the
.get_vq_desc_group() operation callback. Userspace trying to negotiate
this feature when it or the dependent _F_IOTLB_ASID feature hasn't
been exposed will result in an error.
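
Not part of the patch: a minimal userspace sketch of negotiating this
feature bit, assuming an already-open vhost-vdpa device fd and uAPI
headers that carry the new VHOST_BACKEND_F_DESC_ASID definition. The
function name and error handling are illustrative only:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/vhost.h>
  #include <linux/vhost_types.h>

  static int negotiate_desc_asid(int vdpa_fd)
  {
          uint64_t features;

          if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features))
                  return -1;

          /* Only advertised when the parent vdpa driver implements
           * .get_vq_desc_group(). */
          if (!(features & (1ULL << VHOST_BACKEND_F_DESC_ASID)))
                  return -1;

          /* _F_DESC_ASID must be acked together with _F_IOTLB_ASID,
           * otherwise the ioctl below fails with -EINVAL. */
          features = (1ULL << VHOST_BACKEND_F_IOTLB_ASID) |
                     (1ULL << VHOST_BACKEND_F_DESC_ASID);
          return ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features);
  }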

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vdpa.c             | 17 +++++++++++++++++
 include/uapi/linux/vhost_types.h |  5 +++++
 2 files changed, 22 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 78379ffd2336..2f21798a37ee 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -389,6 +389,14 @@ static bool vhost_vdpa_can_resume(const struct vhost_vdpa *v)
 	return ops->resume;
 }
 
+static bool vhost_vdpa_has_desc_group(const struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	return ops->get_vq_desc_group;
+}
+
 static long vhost_vdpa_get_features(struct vhost_vdpa *v, u64 __user *featurep)
 {
 	struct vdpa_device *vdpa = v->vdpa;
@@ -690,6 +698,7 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 		if (copy_from_user(&features, featurep, sizeof(features)))
 			return -EFAULT;
 		if (features & ~(VHOST_VDPA_BACKEND_FEATURES |
+				 BIT_ULL(VHOST_BACKEND_F_DESC_ASID) |
 				 BIT_ULL(VHOST_BACKEND_F_SUSPEND) |
 				 BIT_ULL(VHOST_BACKEND_F_RESUME) |
 				 BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
@@ -700,6 +709,12 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 		if ((features & BIT_ULL(VHOST_BACKEND_F_RESUME)) &&
 		     !vhost_vdpa_can_resume(v))
 			return -EOPNOTSUPP;
+		if ((features & BIT_ULL(VHOST_BACKEND_F_DESC_ASID)) &&
+		    !(features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)))
+			return -EINVAL;
+		if ((features & BIT_ULL(VHOST_BACKEND_F_DESC_ASID)) &&
+		     !vhost_vdpa_has_desc_group(v))
+			return -EOPNOTSUPP;
 		vhost_set_backend_features(&v->vdev, features);
 		return 0;
 	}
@@ -753,6 +768,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 			features |= BIT_ULL(VHOST_BACKEND_F_SUSPEND);
 		if (vhost_vdpa_can_resume(v))
 			features |= BIT_ULL(VHOST_BACKEND_F_RESUME);
+		if (vhost_vdpa_has_desc_group(v))
+			features |= BIT_ULL(VHOST_BACKEND_F_DESC_ASID);
 		features |= vhost_vdpa_get_backend_features(v);
 		if (copy_to_user(featurep, &features, sizeof(features)))
 			r = -EFAULT;
diff --git a/include/uapi/linux/vhost_types.h b/include/uapi/linux/vhost_types.h
index 2d827d22cd99..18ad6ae7ab5c 100644
--- a/include/uapi/linux/vhost_types.h
+++ b/include/uapi/linux/vhost_types.h
@@ -185,5 +185,10 @@ struct vhost_vdpa_iova_range {
  * DRIVER_OK
  */
 #define VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK  0x6
+/* Device may expose the virtqueue's descriptor area, driver area and
+ * device area to a different group for ASID binding than where its
+ * buffers may reside. Requires VHOST_BACKEND_F_IOTLB_ASID.
+ */
+#define VHOST_BACKEND_F_DESC_ASID    0x7
 
 #endif
-- 
2.41.0



* [PATCH vhost 04/16] vhost-vdpa: uAPI to get dedicated descriptor group id
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
  2023-09-28 16:45 ` [PATCH mlx5-next 01/16] vdpa/mlx5: Expose descriptor group mkey hw capability Dragos Tatulea
  2023-09-28 16:45 ` [PATCH vhost 03/16] vhost-vdpa: introduce descriptor group backend feature Dragos Tatulea
@ 2023-09-28 16:45 ` Dragos Tatulea
  2023-09-28 16:45 ` [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping Dragos Tatulea
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:45 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang
  Cc: virtualization, Si-Wei Liu, kvm, netdev, linux-kernel

From: Si-Wei Liu <si-wei.liu@oracle.com>

With the _F_DESC_ASID backend feature, the device can now support the
VHOST_VDPA_GET_VRING_DESC_GROUP ioctl, and it may expose the descriptor
table (including the avail and used rings) in a different group than
the buffers it contains. This new uAPI fetches the group ID of the
descriptor table.
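
Not part of the patch: a minimal userspace sketch of the new ioctl
(hypothetical helper, no error handling beyond the ioctl itself):

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/vhost.h>
  #include <linux/vhost_types.h>

  /* Return the descriptor-table group of virtqueue `idx`, or -1. */
  static int get_vring_desc_group(int vdpa_fd, unsigned int idx,
                                  uint32_t *group)
  {
          struct vhost_vring_state s = { .index = idx };

          /* Read: .index selects the vq; write: .num returns the group. */
          if (ioctl(vdpa_fd, VHOST_VDPA_GET_VRING_DESC_GROUP, &s))
                  return -1;

          *group = s.num;
          return 0;
  }

As the hunk below shows, the kernel returns -EOPNOTSUPP when the parent
vdpa driver does not implement .get_vq_desc_group().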

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vdpa.c       | 10 ++++++++++
 include/uapi/linux/vhost.h |  8 ++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 2f21798a37ee..851535f57b95 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -613,6 +613,16 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
 		else if (copy_to_user(argp, &s, sizeof(s)))
 			return -EFAULT;
 		return 0;
+	case VHOST_VDPA_GET_VRING_DESC_GROUP:
+		if (!vhost_vdpa_has_desc_group(v))
+			return -EOPNOTSUPP;
+		s.index = idx;
+		s.num = ops->get_vq_desc_group(vdpa, idx);
+		if (s.num >= vdpa->ngroups)
+			return -EIO;
+		else if (copy_to_user(argp, &s, sizeof(s)))
+			return -EFAULT;
+		return 0;
 	case VHOST_VDPA_SET_GROUP_ASID:
 		if (copy_from_user(&s, argp, sizeof(s)))
 			return -EFAULT;
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index f5c48b61ab62..649560c685f1 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -219,4 +219,12 @@
  */
 #define VHOST_VDPA_RESUME		_IO(VHOST_VIRTIO, 0x7E)
 
+/* Get the group for the descriptor table including driver & device areas
+ * of a virtqueue: read index, write group in num.
+ * The virtqueue index is stored in the index field of vhost_vring_state.
+ * The group ID of the descriptor table for this specific virtqueue
+ * is returned via num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_GET_VRING_DESC_GROUP	_IOWR(VHOST_VIRTIO, 0x7F,	\
+					      struct vhost_vring_state)
 #endif
-- 
2.41.0



* [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
                   ` (2 preceding siblings ...)
  2023-09-28 16:45 ` [PATCH vhost 04/16] vhost-vdpa: uAPI to get dedicated descriptor group id Dragos Tatulea
@ 2023-09-28 16:45 ` Dragos Tatulea
  2023-10-05  9:42   ` Eugenio Perez Martin
  2023-10-02 10:16 ` [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Leon Romanovsky
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Dragos Tatulea @ 2023-09-28 16:45 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang, Xuan Zhuo,
	Leon Romanovsky, Saeed Mahameed
  Cc: virtualization, Dragos Tatulea, linux-kernel, linux-rdma, netdev

Vq descriptor mappings are supported in hardware by filling in an
additional mkey which contains the descriptor mappings to the hw vq.

A previous patch in this series added support for hw mkey (mr) creation
for ASID 1.

This patch fills in both the vq data and vq descriptor mkeys based on
group ASID mapping.

The feature is signaled to the vdpa core through the presence of the
.get_vq_desc_group op.
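
For context (not part of the patch), a rough userspace sketch of how a
live migration stack could combine the new uAPIs with this support:
query the descriptor group of a data vq and bind that group to its own
ASID, so descriptor mappings can be updated separately from buffer
mappings. The function name, vq index and ASID value are illustrative
only:

  #include <sys/ioctl.h>
  #include <linux/vhost.h>
  #include <linux/vhost_types.h>

  static int move_desc_group_to_asid1(int vdpa_fd)
  {
          struct vhost_vring_state s = { .index = 0 }; /* data vq 0 */

          /* Which group holds the descriptor table of vq 0? */
          if (ioctl(vdpa_fd, VHOST_VDPA_GET_VRING_DESC_GROUP, &s))
                  return -1;

          /* Bind that group to ASID 1; buffer groups stay in ASID 0. */
          s.index = s.num; /* group id */
          s.num = 1;       /* target ASID */
          return ioctl(vdpa_fd, VHOST_VDPA_SET_GROUP_ASID, &s);
  }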

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++++++++++++++--
 include/linux/mlx5/mlx5_ifc_vdpa.h |  7 ++++++-
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 25bd2c324f5b..46441e41892c 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -823,6 +823,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
 	u32 out[MLX5_ST_SZ_DW(create_virtio_net_q_out)] = {};
 	struct mlx5_vdpa_dev *mvdev = &ndev->mvdev;
 	struct mlx5_vdpa_mr *vq_mr;
+	struct mlx5_vdpa_mr *vq_desc_mr;
 	void *obj_context;
 	u16 mlx_features;
 	void *cmd_hdr;
@@ -878,6 +879,11 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
 	vq_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP]];
 	if (vq_mr)
 		MLX5_SET(virtio_q, vq_ctx, virtio_q_mkey, vq_mr->mkey);
+
+	vq_desc_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_DESC_GROUP]];
+	if (vq_desc_mr)
+		MLX5_SET(virtio_q, vq_ctx, desc_group_mkey, vq_desc_mr->mkey);
+
 	MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
 	MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
 	MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
@@ -2265,6 +2271,16 @@ static u32 mlx5_vdpa_get_vq_group(struct vdpa_device *vdev, u16 idx)
 	return MLX5_VDPA_DATAVQ_GROUP;
 }
 
+static u32 mlx5_vdpa_get_vq_desc_group(struct vdpa_device *vdev, u16 idx)
+{
+	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+
+	if (is_ctrl_vq_idx(mvdev, idx))
+		return MLX5_VDPA_CVQ_GROUP;
+
+	return MLX5_VDPA_DATAVQ_DESC_GROUP;
+}
+
 static u64 mlx_to_vritio_features(u16 dev_features)
 {
 	u64 result = 0;
@@ -3139,7 +3155,7 @@ static int mlx5_set_group_asid(struct vdpa_device *vdev, u32 group,
 {
 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
 
-	if (group >= MLX5_VDPA_NUMVQ_GROUPS)
+	if (group >= MLX5_VDPA_NUMVQ_GROUPS || asid >= MLX5_VDPA_NUM_AS)
 		return -EINVAL;
 
 	mvdev->group2asid[group] = asid;
@@ -3160,6 +3176,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.get_vq_irq = mlx5_get_vq_irq,
 	.get_vq_align = mlx5_vdpa_get_vq_align,
 	.get_vq_group = mlx5_vdpa_get_vq_group,
+	.get_vq_desc_group = mlx5_vdpa_get_vq_desc_group, /* Op disabled if not supported. */
 	.get_device_features = mlx5_vdpa_get_device_features,
 	.set_driver_features = mlx5_vdpa_set_driver_features,
 	.get_driver_features = mlx5_vdpa_get_driver_features,
@@ -3258,6 +3275,7 @@ struct mlx5_vdpa_mgmtdev {
 	struct vdpa_mgmt_dev mgtdev;
 	struct mlx5_adev *madev;
 	struct mlx5_vdpa_net *ndev;
+	struct vdpa_config_ops vdpa_ops;
 };
 
 static int config_func_mtu(struct mlx5_core_dev *mdev, u16 mtu)
@@ -3371,7 +3389,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 		max_vqs = 2;
 	}
 
-	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
+	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mgtdev->vdpa_ops,
 				 MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
 	if (IS_ERR(ndev))
 		return PTR_ERR(ndev);
@@ -3546,6 +3564,10 @@ static int mlx5v_probe(struct auxiliary_device *adev,
 		MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues) + 1;
 	mgtdev->mgtdev.supported_features = get_supported_features(mdev);
 	mgtdev->madev = madev;
+	mgtdev->vdpa_ops = mlx5_vdpa_ops;
+
+	if (!MLX5_CAP_DEV_VDPA_EMULATION(mdev, desc_group_mkey_supported))
+		mgtdev->vdpa_ops.get_vq_desc_group = NULL;
 
 	err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
 	if (err)
diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h b/include/linux/mlx5/mlx5_ifc_vdpa.h
index 9becdc3fa503..b86d51a855f6 100644
--- a/include/linux/mlx5/mlx5_ifc_vdpa.h
+++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
@@ -74,7 +74,11 @@ struct mlx5_ifc_virtio_q_bits {
 	u8    reserved_at_320[0x8];
 	u8    pd[0x18];
 
-	u8    reserved_at_340[0xc0];
+	u8    reserved_at_340[0x20];
+
+	u8    desc_group_mkey[0x20];
+
+	u8    reserved_at_380[0x80];
 };
 
 struct mlx5_ifc_virtio_net_q_object_bits {
@@ -141,6 +145,7 @@ enum {
 	MLX5_VIRTQ_MODIFY_MASK_STATE                    = (u64)1 << 0,
 	MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_PARAMS      = (u64)1 << 3,
 	MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_DUMP_ENABLE = (u64)1 << 4,
+	MLX5_VIRTQ_MODIFY_MASK_DESC_GROUP_MKEY          = (u64)1 << 14,
 };
 
 enum {
-- 
2.41.0



* Re: [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
                   ` (3 preceding siblings ...)
  2023-09-28 16:45 ` [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping Dragos Tatulea
@ 2023-10-02 10:16 ` Leon Romanovsky
  2023-10-02 10:16 ` (subset) " Leon Romanovsky
  2023-10-05 17:31 ` Michael S. Tsirkin
  6 siblings, 0 replies; 14+ messages in thread
From: Leon Romanovsky @ 2023-10-02 10:16 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: eperezma, gal, Michael S . Tsirkin, Jason Wang, Saeed Mahameed,
	Xuan Zhuo, virtualization, netdev, linux-rdma, linux-kernel, kvm

On Thu, Sep 28, 2023 at 07:45:11PM +0300, Dragos Tatulea wrote:
> This patch series adds support for vq descriptor table mappings which
> are used to improve vdpa live migration downtime. The improvement comes
> from using smaller mappings which take less time to create and destroy
> in hw.
> 
> The first part adds the vdpa core changes from Si-Wei [0].
> 
> The second part adds support in mlx5_vdpa:
> - Refactor the mr code to be able to cleanly add descriptor mappings.
> - Add hardware descriptor mr support.
> - Properly update iotlb for cvq during ASID switch.
> 
> Changes in v2:
> 
> - The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
>   was split off into two patches to avoid merge conflicts with Linus'
>   tree.
> 
>   The first patch contains only changes to mlx5_ifc.h. It must be
>   applied to the mlx5-next tree [1] first. Once this patch is applied
>   on mlx5-next, the change has to be pulled from mlx5-next into the
>   vhost tree, and only then can the remaining patches be applied.
> 
> [0] https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next
> 
> Dragos Tatulea (13):
>   vdpa/mlx5: Expose descriptor group mkey hw capability

I prepared shared branch with this patch.
https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-vhost

Thanks


* Re: (subset) [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
                   ` (4 preceding siblings ...)
  2023-10-02 10:16 ` [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Leon Romanovsky
@ 2023-10-02 10:16 ` Leon Romanovsky
  2023-10-05 17:31 ` Michael S. Tsirkin
  6 siblings, 0 replies; 14+ messages in thread
From: Leon Romanovsky @ 2023-10-02 10:16 UTC (permalink / raw)
  To: eperezma, gal, Michael S . Tsirkin, Jason Wang, Saeed Mahameed,
	Xuan Zhuo, Dragos Tatulea
  Cc: virtualization, netdev, linux-rdma, linux-kernel, kvm


On Thu, 28 Sep 2023 19:45:11 +0300, Dragos Tatulea wrote:
> This patch series adds support for vq descriptor table mappings which
> are used to improve vdpa live migration downtime. The improvement comes
> from using smaller mappings which take less time to create and destroy
> in hw.
> 
> The first part adds the vdpa core changes from Si-Wei [0].
> 
> [...]

Applied, thanks!

[01/16] vdpa/mlx5: Expose descriptor group mkey hw capability
        https://git.kernel.org/rdma/rdma/c/d424348b060d87

Best regards,
-- 
Leon Romanovsky <leon@kernel.org>


* Re: [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping
  2023-09-28 16:45 ` [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping Dragos Tatulea
@ 2023-10-05  9:42   ` Eugenio Perez Martin
  2023-10-05 12:16     ` Dragos Tatulea
  0 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2023-10-05  9:42 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: gal, Michael S . Tsirkin, Jason Wang, Xuan Zhuo, Leon Romanovsky,
	Saeed Mahameed, virtualization, linux-kernel, linux-rdma, netdev

On Thu, Sep 28, 2023 at 6:50 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> Vq descriptor mappings are supported in hardware by filling in an
> additional mkey which contains the descriptor mappings to the hw vq.
>
> A previous patch in this series added support for hw mkey (mr) creation
> for ASID 1.
>
> This patch fills in both the vq data and vq descriptor mkeys based on
> group ASID mapping.
>
> The feature is signaled to the vdpa core through the presence of the
> .get_vq_desc_group op.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++++++++++++++--
>  include/linux/mlx5/mlx5_ifc_vdpa.h |  7 ++++++-
>  2 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 25bd2c324f5b..46441e41892c 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -823,6 +823,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>         u32 out[MLX5_ST_SZ_DW(create_virtio_net_q_out)] = {};
>         struct mlx5_vdpa_dev *mvdev = &ndev->mvdev;
>         struct mlx5_vdpa_mr *vq_mr;
> +       struct mlx5_vdpa_mr *vq_desc_mr;
>         void *obj_context;
>         u16 mlx_features;
>         void *cmd_hdr;
> @@ -878,6 +879,11 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>         vq_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP]];
>         if (vq_mr)
>                 MLX5_SET(virtio_q, vq_ctx, virtio_q_mkey, vq_mr->mkey);
> +
> +       vq_desc_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_DESC_GROUP]];
> +       if (vq_desc_mr)
> +               MLX5_SET(virtio_q, vq_ctx, desc_group_mkey, vq_desc_mr->mkey);
> +
>         MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
>         MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
>         MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
> @@ -2265,6 +2271,16 @@ static u32 mlx5_vdpa_get_vq_group(struct vdpa_device *vdev, u16 idx)
>         return MLX5_VDPA_DATAVQ_GROUP;
>  }
>
> +static u32 mlx5_vdpa_get_vq_desc_group(struct vdpa_device *vdev, u16 idx)
> +{
> +       struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> +
> +       if (is_ctrl_vq_idx(mvdev, idx))
> +               return MLX5_VDPA_CVQ_GROUP;
> +
> +       return MLX5_VDPA_DATAVQ_DESC_GROUP;
> +}
> +
>  static u64 mlx_to_vritio_features(u16 dev_features)
>  {
>         u64 result = 0;
> @@ -3139,7 +3155,7 @@ static int mlx5_set_group_asid(struct vdpa_device *vdev, u32 group,
>  {
>         struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
>
> -       if (group >= MLX5_VDPA_NUMVQ_GROUPS)
> +       if (group >= MLX5_VDPA_NUMVQ_GROUPS || asid >= MLX5_VDPA_NUM_AS)

Nit: the check for asid >= MLX5_VDPA_NUM_AS is redundant, as it will
be already checked by VHOST_VDPA_SET_GROUP_ASID handler in
drivers/vhost/vdpa.c:vhost_vdpa_vring_ioctl. Not a big deal.

>                 return -EINVAL;
>
>         mvdev->group2asid[group] = asid;
> @@ -3160,6 +3176,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
>         .get_vq_irq = mlx5_get_vq_irq,
>         .get_vq_align = mlx5_vdpa_get_vq_align,
>         .get_vq_group = mlx5_vdpa_get_vq_group,
> +       .get_vq_desc_group = mlx5_vdpa_get_vq_desc_group, /* Op disabled if not supported. */
>         .get_device_features = mlx5_vdpa_get_device_features,
>         .set_driver_features = mlx5_vdpa_set_driver_features,
>         .get_driver_features = mlx5_vdpa_get_driver_features,
> @@ -3258,6 +3275,7 @@ struct mlx5_vdpa_mgmtdev {
>         struct vdpa_mgmt_dev mgtdev;
>         struct mlx5_adev *madev;
>         struct mlx5_vdpa_net *ndev;
> +       struct vdpa_config_ops vdpa_ops;
>  };
>
>  static int config_func_mtu(struct mlx5_core_dev *mdev, u16 mtu)
> @@ -3371,7 +3389,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>                 max_vqs = 2;
>         }
>
> -       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
> +       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mgtdev->vdpa_ops,
>                                  MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
>         if (IS_ERR(ndev))
>                 return PTR_ERR(ndev);
> @@ -3546,6 +3564,10 @@ static int mlx5v_probe(struct auxiliary_device *adev,
>                 MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues) + 1;
>         mgtdev->mgtdev.supported_features = get_supported_features(mdev);
>         mgtdev->madev = madev;
> +       mgtdev->vdpa_ops = mlx5_vdpa_ops;
> +
> +       if (!MLX5_CAP_DEV_VDPA_EMULATION(mdev, desc_group_mkey_supported))
> +               mgtdev->vdpa_ops.get_vq_desc_group = NULL;

I think this is better handled by splitting mlx5_vdpa_ops in two: one
with .get_vq_desc_group and the other without it. You can see an
example of this in the simulator, where one version supports
incremental updating with .dma_map and the other supports .set_map.
Otherwise, this can get messy if more members opt out or opt in.

But I'm ok with this too, so whatever version you choose:

Acked-by: Eugenio Pérez <eperezma@redhat.com>

>
>         err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
>         if (err)
> diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h b/include/linux/mlx5/mlx5_ifc_vdpa.h
> index 9becdc3fa503..b86d51a855f6 100644
> --- a/include/linux/mlx5/mlx5_ifc_vdpa.h
> +++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
> @@ -74,7 +74,11 @@ struct mlx5_ifc_virtio_q_bits {
>         u8    reserved_at_320[0x8];
>         u8    pd[0x18];
>
> -       u8    reserved_at_340[0xc0];
> +       u8    reserved_at_340[0x20];
> +
> +       u8    desc_group_mkey[0x20];
> +
> +       u8    reserved_at_380[0x80];
>  };
>
>  struct mlx5_ifc_virtio_net_q_object_bits {
> @@ -141,6 +145,7 @@ enum {
>         MLX5_VIRTQ_MODIFY_MASK_STATE                    = (u64)1 << 0,
>         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_PARAMS      = (u64)1 << 3,
>         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_DUMP_ENABLE = (u64)1 << 4,
> +       MLX5_VIRTQ_MODIFY_MASK_DESC_GROUP_MKEY          = (u64)1 << 14,
>  };
>
>  enum {
> --
> 2.41.0
>



* Re: [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping
  2023-10-05  9:42   ` Eugenio Perez Martin
@ 2023-10-05 12:16     ` Dragos Tatulea
  2023-10-05 14:48       ` Eugenio Perez Martin
  0 siblings, 1 reply; 14+ messages in thread
From: Dragos Tatulea @ 2023-10-05 12:16 UTC (permalink / raw)
  To: eperezma@redhat.com
  Cc: linux-rdma@vger.kernel.org, xuanzhuo@linux.alibaba.com,
	virtualization@lists.linux-foundation.org, Gal Pressman,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	jasowang@redhat.com, leon@kernel.org, Saeed Mahameed,
	mst@redhat.com

On Thu, 2023-10-05 at 11:42 +0200, Eugenio Perez Martin wrote:
> On Thu, Sep 28, 2023 at 6:50 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> > 
> > Vq descriptor mappings are supported in hardware by filling in an
> > additional mkey which contains the descriptor mappings to the hw vq.
> > 
> > A previous patch in this series added support for hw mkey (mr) creation
> > for ASID 1.
> > 
> > This patch fills in both the vq data and vq descriptor mkeys based on
> > group ASID mapping.
> > 
> > The feature is signaled to the vdpa core through the presence of the
> > .get_vq_desc_group op.
> > 
> > Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> > ---
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++++++++++++++--
> >  include/linux/mlx5/mlx5_ifc_vdpa.h |  7 ++++++-
> >  2 files changed, 30 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index 25bd2c324f5b..46441e41892c 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -823,6 +823,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev,
> > struct mlx5_vdpa_virtque
> >         u32 out[MLX5_ST_SZ_DW(create_virtio_net_q_out)] = {};
> >         struct mlx5_vdpa_dev *mvdev = &ndev->mvdev;
> >         struct mlx5_vdpa_mr *vq_mr;
> > +       struct mlx5_vdpa_mr *vq_desc_mr;
> >         void *obj_context;
> >         u16 mlx_features;
> >         void *cmd_hdr;
> > @@ -878,6 +879,11 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev,
> > struct mlx5_vdpa_virtque
> >         vq_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP]];
> >         if (vq_mr)
> >                 MLX5_SET(virtio_q, vq_ctx, virtio_q_mkey, vq_mr->mkey);
> > +
> > +       vq_desc_mr = mvdev->mr[mvdev-
> > >group2asid[MLX5_VDPA_DATAVQ_DESC_GROUP]];
> > +       if (vq_desc_mr)
> > +               MLX5_SET(virtio_q, vq_ctx, desc_group_mkey, vq_desc_mr-
> > >mkey);
> > +
> >         MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
> >         MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
> >         MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
> > @@ -2265,6 +2271,16 @@ static u32 mlx5_vdpa_get_vq_group(struct vdpa_device
> > *vdev, u16 idx)
> >         return MLX5_VDPA_DATAVQ_GROUP;
> >  }
> > 
> > +static u32 mlx5_vdpa_get_vq_desc_group(struct vdpa_device *vdev, u16 idx)
> > +{
> > +       struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > +
> > +       if (is_ctrl_vq_idx(mvdev, idx))
> > +               return MLX5_VDPA_CVQ_GROUP;
> > +
> > +       return MLX5_VDPA_DATAVQ_DESC_GROUP;
> > +}
> > +
> >  static u64 mlx_to_vritio_features(u16 dev_features)
> >  {
> >         u64 result = 0;
> > @@ -3139,7 +3155,7 @@ static int mlx5_set_group_asid(struct vdpa_device
> > *vdev, u32 group,
> >  {
> >         struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > 
> > -       if (group >= MLX5_VDPA_NUMVQ_GROUPS)
> > +       if (group >= MLX5_VDPA_NUMVQ_GROUPS || asid >= MLX5_VDPA_NUM_AS)
> 
> Nit: the check for asid >= MLX5_VDPA_NUM_AS is redundant, as it will
> be already checked by VHOST_VDPA_SET_GROUP_ASID handler in
> drivers/vhost/vdpa.c:vhost_vdpa_vring_ioctl. Not a big deal.
Ack.

> 
> >                 return -EINVAL;
> > 
> >         mvdev->group2asid[group] = asid;
> > @@ -3160,6 +3176,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> >         .get_vq_irq = mlx5_get_vq_irq,
> >         .get_vq_align = mlx5_vdpa_get_vq_align,
> >         .get_vq_group = mlx5_vdpa_get_vq_group,
> > +       .get_vq_desc_group = mlx5_vdpa_get_vq_desc_group, /* Op disabled if
> > not supported. */
> >         .get_device_features = mlx5_vdpa_get_device_features,
> >         .set_driver_features = mlx5_vdpa_set_driver_features,
> >         .get_driver_features = mlx5_vdpa_get_driver_features,
> > @@ -3258,6 +3275,7 @@ struct mlx5_vdpa_mgmtdev {
> >         struct vdpa_mgmt_dev mgtdev;
> >         struct mlx5_adev *madev;
> >         struct mlx5_vdpa_net *ndev;
> > +       struct vdpa_config_ops vdpa_ops;
> >  };
> > 
> >  static int config_func_mtu(struct mlx5_core_dev *mdev, u16 mtu)
> > @@ -3371,7 +3389,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev
> > *v_mdev, const char *name,
> >                 max_vqs = 2;
> >         }
> > 
> > -       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev-
> > >device, &mlx5_vdpa_ops,
> > +       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev-
> > >device, &mgtdev->vdpa_ops,
> >                                  MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS,
> > name, false);
> >         if (IS_ERR(ndev))
> >                 return PTR_ERR(ndev);
> > @@ -3546,6 +3564,10 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> >                 MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues) +
> > 1;
> >         mgtdev->mgtdev.supported_features = get_supported_features(mdev);
> >         mgtdev->madev = madev;
> > +       mgtdev->vdpa_ops = mlx5_vdpa_ops;
> > +
> > +       if (!MLX5_CAP_DEV_VDPA_EMULATION(mdev, desc_group_mkey_supported))
> > +               mgtdev->vdpa_ops.get_vq_desc_group = NULL;
> 
> I think this is better handled by splitting mlx5_vdpa_ops in two: one
> with .get_vq_desc_group and the other without it. You can see an
> example of this in the simulator, where one version supports
> incremental updating with .dma_map and the other supports .set_map.
> Otherwise, this can get messy if more members opt out or opt in.
> 
I implemented it this way because the upcoming resumable vq support will also
need to selectively implement .resume if the hw capability is there. That would
result in needing 4 different ops for all combinations. The other option would
be to force these two ops together (.get_vq_desc_group and .resume). But I would
prefer to not do that.
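
For illustration only, a standalone C mock (hypothetical names, not the
actual mlx5_vdpa code) of the copy-and-clear approach discussed above,
which avoids declaring one const ops table per combination of optional
ops:

  #include <stdio.h>

  struct ops {
          void (*get_vq_desc_group)(void); /* optional op */
          void (*resume)(void);            /* optional op */
  };

  static void desc_group_op(void) { puts("get_vq_desc_group"); }
  static void resume_op(void)     { puts("resume"); }

  static const struct ops template_ops = {
          .get_vq_desc_group = desc_group_op,
          .resume = resume_op,
  };

  int main(void)
  {
          struct ops dev_ops = template_ops; /* copy, then trim at "probe" */
          int has_desc_group_cap = 0;        /* pretend hw lacks this cap */
          int has_resume_cap = 1;

          if (!has_desc_group_cap)
                  dev_ops.get_vq_desc_group = NULL;
          if (!has_resume_cap)
                  dev_ops.resume = NULL;

          /* Callers (the vdpa core, in the real driver) check for NULL. */
          if (dev_ops.get_vq_desc_group)
                  dev_ops.get_vq_desc_group();
          if (dev_ops.resume)
                  dev_ops.resume();

          return 0;
  }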

> But I'm ok with this too, so whatever version you choose:
> 
> Acked-by: Eugenio Pérez <eperezma@redhat.com>
> 
> > 
> >         err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
> >         if (err)
> > diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h
> > b/include/linux/mlx5/mlx5_ifc_vdpa.h
> > index 9becdc3fa503..b86d51a855f6 100644
> > --- a/include/linux/mlx5/mlx5_ifc_vdpa.h
> > +++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
> > @@ -74,7 +74,11 @@ struct mlx5_ifc_virtio_q_bits {
> >         u8    reserved_at_320[0x8];
> >         u8    pd[0x18];
> > 
> > -       u8    reserved_at_340[0xc0];
> > +       u8    reserved_at_340[0x20];
> > +
> > +       u8    desc_group_mkey[0x20];
> > +
> > +       u8    reserved_at_380[0x80];
> >  };
> > 
> >  struct mlx5_ifc_virtio_net_q_object_bits {
> > @@ -141,6 +145,7 @@ enum {
> >         MLX5_VIRTQ_MODIFY_MASK_STATE                    = (u64)1 << 0,
> >         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_PARAMS      = (u64)1 << 3,
> >         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_DUMP_ENABLE = (u64)1 << 4,
> > +       MLX5_VIRTQ_MODIFY_MASK_DESC_GROUP_MKEY          = (u64)1 << 14,
> >  };
> > 
> >  enum {
> > --
> > 2.41.0
> > 
> 



* Re: [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping
  2023-10-05 12:16     ` Dragos Tatulea
@ 2023-10-05 14:48       ` Eugenio Perez Martin
  0 siblings, 0 replies; 14+ messages in thread
From: Eugenio Perez Martin @ 2023-10-05 14:48 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: linux-rdma@vger.kernel.org, xuanzhuo@linux.alibaba.com,
	virtualization@lists.linux-foundation.org, Gal Pressman,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	jasowang@redhat.com, leon@kernel.org, Saeed Mahameed,
	mst@redhat.com

On Thu, Oct 5, 2023 at 2:16 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> On Thu, 2023-10-05 at 11:42 +0200, Eugenio Perez Martin wrote:
> > On Thu, Sep 28, 2023 at 6:50 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> > >
> > > Vq descriptor mappings are supported in hardware by filling in an
> > > additional mkey which contains the descriptor mappings to the hw vq.
> > >
> > > A previous patch in this series added support for hw mkey (mr) creation
> > > for ASID 1.
> > >
> > > This patch fills in both the vq data and vq descriptor mkeys based on
> > > group ASID mapping.
> > >
> > > The feature is signaled to the vdpa core through the presence of the
> > > .get_vq_desc_group op.
> > >
> > > Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> > > ---
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++++++++++++++--
> > >  include/linux/mlx5/mlx5_ifc_vdpa.h |  7 ++++++-
> > >  2 files changed, 30 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 25bd2c324f5b..46441e41892c 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -823,6 +823,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev,
> > > struct mlx5_vdpa_virtque
> > >         u32 out[MLX5_ST_SZ_DW(create_virtio_net_q_out)] = {};
> > >         struct mlx5_vdpa_dev *mvdev = &ndev->mvdev;
> > >         struct mlx5_vdpa_mr *vq_mr;
> > > +       struct mlx5_vdpa_mr *vq_desc_mr;
> > >         void *obj_context;
> > >         u16 mlx_features;
> > >         void *cmd_hdr;
> > > @@ -878,6 +879,11 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev,
> > > struct mlx5_vdpa_virtque
> > >         vq_mr = mvdev->mr[mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP]];
> > >         if (vq_mr)
> > >                 MLX5_SET(virtio_q, vq_ctx, virtio_q_mkey, vq_mr->mkey);
> > > +
> > > +       vq_desc_mr = mvdev->mr[mvdev-
> > > >group2asid[MLX5_VDPA_DATAVQ_DESC_GROUP]];
> > > +       if (vq_desc_mr)
> > > +               MLX5_SET(virtio_q, vq_ctx, desc_group_mkey, vq_desc_mr-
> > > >mkey);
> > > +
> > >         MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
> > >         MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
> > >         MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
> > > @@ -2265,6 +2271,16 @@ static u32 mlx5_vdpa_get_vq_group(struct vdpa_device
> > > *vdev, u16 idx)
> > >         return MLX5_VDPA_DATAVQ_GROUP;
> > >  }
> > >
> > > +static u32 mlx5_vdpa_get_vq_desc_group(struct vdpa_device *vdev, u16 idx)
> > > +{
> > > +       struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > +
> > > +       if (is_ctrl_vq_idx(mvdev, idx))
> > > +               return MLX5_VDPA_CVQ_GROUP;
> > > +
> > > +       return MLX5_VDPA_DATAVQ_DESC_GROUP;
> > > +}
> > > +
> > >  static u64 mlx_to_vritio_features(u16 dev_features)
> > >  {
> > >         u64 result = 0;
> > > @@ -3139,7 +3155,7 @@ static int mlx5_set_group_asid(struct vdpa_device
> > > *vdev, u32 group,
> > >  {
> > >         struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > >
> > > -       if (group >= MLX5_VDPA_NUMVQ_GROUPS)
> > > +       if (group >= MLX5_VDPA_NUMVQ_GROUPS || asid >= MLX5_VDPA_NUM_AS)
> >
> > Nit: the check for asid >= MLX5_VDPA_NUM_AS is redundant, as it will
> > be already checked by VHOST_VDPA_SET_GROUP_ASID handler in
> > drivers/vhost/vdpa.c:vhost_vdpa_vring_ioctl. Not a big deal.
> Ack.
>
> >
> > >                 return -EINVAL;
> > >
> > >         mvdev->group2asid[group] = asid;
> > > @@ -3160,6 +3176,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > >         .get_vq_irq = mlx5_get_vq_irq,
> > >         .get_vq_align = mlx5_vdpa_get_vq_align,
> > >         .get_vq_group = mlx5_vdpa_get_vq_group,
> > > +       .get_vq_desc_group = mlx5_vdpa_get_vq_desc_group, /* Op disabled if
> > > not supported. */
> > >         .get_device_features = mlx5_vdpa_get_device_features,
> > >         .set_driver_features = mlx5_vdpa_set_driver_features,
> > >         .get_driver_features = mlx5_vdpa_get_driver_features,
> > > @@ -3258,6 +3275,7 @@ struct mlx5_vdpa_mgmtdev {
> > >         struct vdpa_mgmt_dev mgtdev;
> > >         struct mlx5_adev *madev;
> > >         struct mlx5_vdpa_net *ndev;
> > > +       struct vdpa_config_ops vdpa_ops;
> > >  };
> > >
> > >  static int config_func_mtu(struct mlx5_core_dev *mdev, u16 mtu)
> > > @@ -3371,7 +3389,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev
> > > *v_mdev, const char *name,
> > >                 max_vqs = 2;
> > >         }
> > >
> > > -       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev-
> > > >device, &mlx5_vdpa_ops,
> > > +       ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev-
> > > >device, &mgtdev->vdpa_ops,
> > >                                  MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS,
> > > name, false);
> > >         if (IS_ERR(ndev))
> > >                 return PTR_ERR(ndev);
> > > @@ -3546,6 +3564,10 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> > >                 MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues) +
> > > 1;
> > >         mgtdev->mgtdev.supported_features = get_supported_features(mdev);
> > >         mgtdev->madev = madev;
> > > +       mgtdev->vdpa_ops = mlx5_vdpa_ops;
> > > +
> > > +       if (!MLX5_CAP_DEV_VDPA_EMULATION(mdev, desc_group_mkey_supported))
> > > +               mgtdev->vdpa_ops.get_vq_desc_group = NULL;
> >
> > I think this is better handled by splitting mlx5_vdpa_ops in two: one
> > with .get_vq_desc_group and the other without it. You can see an
> > example of this in the simulator, where one version supports
> > incremental updating with .dma_map and the other supports .set_map.
> > Otherwise, this can get messy if more members opt out or opt in.
> >
> I implemented it this way because the upcoming resumable vq support will also
> need to selectively implement .resume if the hw capability is there. That would
> result in needing 4 different ops for all combinations. The other option would
> be to force these two ops together (.get_vq_desc_group and .resume). But I would
> prefer to not do that.
>

That's a good point. As more features are optional per device, maybe
this approach is better.

I'm not sure what Jason prefers, but I think it would be easy to
change it on top.

Thanks!

> > But I'm ok with this too, so whatever version you choose:
> >
> > Acked-by: Eugenio Pérez <eperezma@redhat.com>
> >
> > >
> > >         err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
> > >         if (err)
> > > diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h
> > > b/include/linux/mlx5/mlx5_ifc_vdpa.h
> > > index 9becdc3fa503..b86d51a855f6 100644
> > > --- a/include/linux/mlx5/mlx5_ifc_vdpa.h
> > > +++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
> > > @@ -74,7 +74,11 @@ struct mlx5_ifc_virtio_q_bits {
> > >         u8    reserved_at_320[0x8];
> > >         u8    pd[0x18];
> > >
> > > -       u8    reserved_at_340[0xc0];
> > > +       u8    reserved_at_340[0x20];
> > > +
> > > +       u8    desc_group_mkey[0x20];
> > > +
> > > +       u8    reserved_at_380[0x80];
> > >  };
> > >
> > >  struct mlx5_ifc_virtio_net_q_object_bits {
> > > @@ -141,6 +145,7 @@ enum {
> > >         MLX5_VIRTQ_MODIFY_MASK_STATE                    = (u64)1 << 0,
> > >         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_PARAMS      = (u64)1 << 3,
> > >         MLX5_VIRTQ_MODIFY_MASK_DIRTY_BITMAP_DUMP_ENABLE = (u64)1 << 4,
> > > +       MLX5_VIRTQ_MODIFY_MASK_DESC_GROUP_MKEY          = (u64)1 << 14,
> > >  };
> > >
> > >  enum {
> > > --
> > > 2.41.0
> > >
> >
>



* Re: [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
  2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
                   ` (5 preceding siblings ...)
  2023-10-02 10:16 ` (subset) " Leon Romanovsky
@ 2023-10-05 17:31 ` Michael S. Tsirkin
  2023-10-05 17:44   ` Dragos Tatulea
  6 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2023-10-05 17:31 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: eperezma, gal, Jason Wang, Saeed Mahameed, Leon Romanovsky,
	Xuan Zhuo, virtualization, netdev, linux-rdma, linux-kernel, kvm

On Thu, Sep 28, 2023 at 07:45:11PM +0300, Dragos Tatulea wrote:
> This patch series adds support for vq descriptor table mappings which
> are used to improve vdpa live migration downtime. The improvement comes
> from using smaller mappings which take less time to create and destroy
> in hw.
> 
> The first part adds the vdpa core changes from Si-Wei [0].
> 
> The second part adds support in mlx5_vdpa:
> - Refactor the mr code to be able to cleanly add descriptor mappings.
> - Add hardware descriptor mr support.
> - Properly update iotlb for cvq during ASID switch.
> 
> Changes in v2:
> 
> - The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
>   was split off into two patches to avoid merge conflicts with Linus'
>   tree.
> 
>   The first patch contains only changes to mlx5_ifc.h. It must be
>   applied to the mlx5-next tree [1] first. Once this patch is applied
>   on mlx5-next, the change has to be pulled from mlx5-next into the
>   vhost tree, and only then can the remaining patches be applied.


I get it you plan v3?

> [0] https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next
> 
> Dragos Tatulea (13):
>   vdpa/mlx5: Expose descriptor group mkey hw capability
>   vdpa/mlx5: Create helper function for dma mappings
>   vdpa/mlx5: Decouple cvq iotlb handling from hw mapping code
>   vdpa/mlx5: Take cvq iotlb lock during refresh
>   vdpa/mlx5: Collapse "dvq" mr add/delete functions
>   vdpa/mlx5: Rename mr destroy functions
>   vdpa/mlx5: Allow creation/deletion of any given mr struct
>   vdpa/mlx5: Move mr mutex out of mr struct
>   vdpa/mlx5: Improve mr update flow
>   vdpa/mlx5: Introduce mr for vq descriptor
>   vdpa/mlx5: Enable hw support for vq descriptor mapping
>   vdpa/mlx5: Make iotlb helper functions more generic
>   vdpa/mlx5: Update cvq iotlb mapping on ASID change
> 
> Si-Wei Liu (3):
>   vdpa: introduce dedicated descriptor group for virtqueue
>   vhost-vdpa: introduce descriptor group backend feature
>   vhost-vdpa: uAPI to get dedicated descriptor group id
> 
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  31 +++--
>  drivers/vdpa/mlx5/core/mr.c        | 191 ++++++++++++++++-------------
>  drivers/vdpa/mlx5/core/resources.c |   6 +-
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 100 ++++++++++-----
>  drivers/vhost/vdpa.c               |  27 ++++
>  include/linux/mlx5/mlx5_ifc.h      |   8 +-
>  include/linux/mlx5/mlx5_ifc_vdpa.h |   7 +-
>  include/linux/vdpa.h               |  11 ++
>  include/uapi/linux/vhost.h         |   8 ++
>  include/uapi/linux/vhost_types.h   |   5 +
>  10 files changed, 264 insertions(+), 130 deletions(-)
> 
> -- 
> 2.41.0



* Re: [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
  2023-10-05 17:31 ` Michael S. Tsirkin
@ 2023-10-05 17:44   ` Dragos Tatulea
  2023-10-05 19:18     ` Michael S. Tsirkin
  0 siblings, 1 reply; 14+ messages in thread
From: Dragos Tatulea @ 2023-10-05 17:44 UTC (permalink / raw)
  To: mst@redhat.com
  Cc: linux-rdma@vger.kernel.org, xuanzhuo@linux.alibaba.com,
	virtualization@lists.linux-foundation.org, Gal Pressman,
	eperezma@redhat.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, jasowang@redhat.com, leon@kernel.org,
	Saeed Mahameed, kvm@vger.kernel.org

On Thu, 2023-10-05 at 13:31 -0400, Michael S. Tsirkin wrote:
> On Thu, Sep 28, 2023 at 07:45:11PM +0300, Dragos Tatulea wrote:
> > This patch series adds support for vq descriptor table mappings which
> > are used to improve vdpa live migration downtime. The improvement comes
> > from using smaller mappings which take less time to create and destroy
> > in hw.
> > 
> > The first part adds the vdpa core changes from Si-Wei [0].
> > 
> > The second part adds support in mlx5_vdpa:
> > - Refactor the mr code to be able to cleanly add descriptor mappings.
> > - Add hardware descriptor mr support.
> > - Properly update iotlb for cvq during ASID switch.
> > 
> > Changes in v2:
> > 
> > - The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
> >   was split off into two patches to avoid merge conflicts with Linus'
> >   tree.
> > 
> >   The first patch contains only changes to mlx5_ifc.h. It must be
> >   applied to the mlx5-next tree [1] first. Once this patch is applied
> >   on mlx5-next, the change has to be pulled from mlx5-next into the
> >   vhost tree, and only then can the remaining patches be applied.
> 
> 
> I get it you plan v3?
There are some very small improvements (commit message in 13/16 and fix in
16/16) that could make a v3. The latter can be addressed as a separate patch
when moving dup_iotlb to vhost/iotlb. What do you think?

> 
> > [0]
> > https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
> > [1]
> > https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next
> > 
> > Dragos Tatulea (13):
> >   vdpa/mlx5: Expose descriptor group mkey hw capability
> >   vdpa/mlx5: Create helper function for dma mappings
> >   vdpa/mlx5: Decouple cvq iotlb handling from hw mapping code
> >   vdpa/mlx5: Take cvq iotlb lock during refresh
> >   vdpa/mlx5: Collapse "dvq" mr add/delete functions
> >   vdpa/mlx5: Rename mr destroy functions
> >   vdpa/mlx5: Allow creation/deletion of any given mr struct
> >   vdpa/mlx5: Move mr mutex out of mr struct
> >   vdpa/mlx5: Improve mr update flow
> >   vdpa/mlx5: Introduce mr for vq descriptor
> >   vdpa/mlx5: Enable hw support for vq descriptor mapping
> >   vdpa/mlx5: Make iotlb helper functions more generic
> >   vdpa/mlx5: Update cvq iotlb mapping on ASID change
> > 
> > Si-Wei Liu (3):
> >   vdpa: introduce dedicated descriptor group for virtqueue
> >   vhost-vdpa: introduce descriptor group backend feature
> >   vhost-vdpa: uAPI to get dedicated descriptor group id
> > 
> >  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  31 +++--
> >  drivers/vdpa/mlx5/core/mr.c        | 191 ++++++++++++++++-------------
> >  drivers/vdpa/mlx5/core/resources.c |   6 +-
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 100 ++++++++++-----
> >  drivers/vhost/vdpa.c               |  27 ++++
> >  include/linux/mlx5/mlx5_ifc.h      |   8 +-
> >  include/linux/mlx5/mlx5_ifc_vdpa.h |   7 +-
> >  include/linux/vdpa.h               |  11 ++
> >  include/uapi/linux/vhost.h         |   8 ++
> >  include/uapi/linux/vhost_types.h   |   5 +
> >  10 files changed, 264 insertions(+), 130 deletions(-)
> > 
> > -- 
> > 2.41.0
> 



* Re: [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings
  2023-10-05 17:44   ` Dragos Tatulea
@ 2023-10-05 19:18     ` Michael S. Tsirkin
  0 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2023-10-05 19:18 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: linux-rdma@vger.kernel.org, xuanzhuo@linux.alibaba.com,
	virtualization@lists.linux-foundation.org, Gal Pressman,
	eperezma@redhat.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, jasowang@redhat.com, leon@kernel.org,
	Saeed Mahameed, kvm@vger.kernel.org

On Thu, Oct 05, 2023 at 05:44:01PM +0000, Dragos Tatulea wrote:
> On Thu, 2023-10-05 at 13:31 -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 28, 2023 at 07:45:11PM +0300, Dragos Tatulea wrote:
> > > This patch series adds support for vq descriptor table mappings which
> > > are used to improve vdpa live migration downtime. The improvement comes
> > > from using smaller mappings which take less time to create and destroy
> > > in hw.
> > > 
> > > The first part adds the vdpa core changes from Si-Wei [0].
> > > 
> > > The second part adds support in mlx5_vdpa:
> > > - Refactor the mr code to be able to cleanly add descriptor mappings.
> > > - Add hardware descriptor mr support.
> > > - Properly update iotlb for cvq during ASID switch.
> > > 
> > > Changes in v2:
> > > 
> > > - The "vdpa/mlx5: Enable hw support for vq descriptor mapping" change
> > >   was split off into two patches to avoid merge conflicts with Linus'
> > >   tree.
> > > 
> > >   The first patch contains only changes to mlx5_ifc.h. It must be
> > >   applied to the mlx5-next tree [1] first. Once this patch is applied
> > >   on mlx5-next, the change has to be pulled from mlx5-next into the
> > >   vhost tree, and only then can the remaining patches be applied.
> > 
> > 
> > I get it you plan v3?
> There are some very small improvements (commit message in 13/16 and fix in
> 16/16) that could make a v3. The latter can be addressed as a separate patch
> when moving dup_iotlb to vhost/iotlb. What do you think?


if there's a fix by all means post v3.

> > 
> > > [0]
> > > https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu@oracle.com
> > > [1]
> > > https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next
> > > 
> > > Dragos Tatulea (13):
> > >   vdpa/mlx5: Expose descriptor group mkey hw capability
> > >   vdpa/mlx5: Create helper function for dma mappings
> > >   vdpa/mlx5: Decouple cvq iotlb handling from hw mapping code
> > >   vdpa/mlx5: Take cvq iotlb lock during refresh
> > >   vdpa/mlx5: Collapse "dvq" mr add/delete functions
> > >   vdpa/mlx5: Rename mr destroy functions
> > >   vdpa/mlx5: Allow creation/deletion of any given mr struct
> > >   vdpa/mlx5: Move mr mutex out of mr struct
> > >   vdpa/mlx5: Improve mr update flow
> > >   vdpa/mlx5: Introduce mr for vq descriptor
> > >   vdpa/mlx5: Enable hw support for vq descriptor mapping
> > >   vdpa/mlx5: Make iotlb helper functions more generic
> > >   vdpa/mlx5: Update cvq iotlb mapping on ASID change
> > > 
> > > Si-Wei Liu (3):
> > >   vdpa: introduce dedicated descriptor group for virtqueue
> > >   vhost-vdpa: introduce descriptor group backend feature
> > >   vhost-vdpa: uAPI to get dedicated descriptor group id
> > > 
> > >  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  31 +++--
> > >  drivers/vdpa/mlx5/core/mr.c        | 191 ++++++++++++++++-------------
> > >  drivers/vdpa/mlx5/core/resources.c |   6 +-
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 100 ++++++++++-----
> > >  drivers/vhost/vdpa.c               |  27 ++++
> > >  include/linux/mlx5/mlx5_ifc.h      |   8 +-
> > >  include/linux/mlx5/mlx5_ifc_vdpa.h |   7 +-
> > >  include/linux/vdpa.h               |  11 ++
> > >  include/uapi/linux/vhost.h         |   8 ++
> > >  include/uapi/linux/vhost_types.h   |   5 +
> > >  10 files changed, 264 insertions(+), 130 deletions(-)
> > > 
> > > -- 
> > > 2.41.0
> > 
> 



Thread overview: 14+ messages
2023-09-28 16:45 [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Dragos Tatulea
2023-09-28 16:45 ` [PATCH mlx5-next 01/16] vdpa/mlx5: Expose descriptor group mkey hw capability Dragos Tatulea
2023-09-28 16:45 ` [PATCH vhost 03/16] vhost-vdpa: introduce descriptor group backend feature Dragos Tatulea
2023-09-28 16:45 ` [PATCH vhost 04/16] vhost-vdpa: uAPI to get dedicated descriptor group id Dragos Tatulea
2023-09-28 16:45 ` [PATCH vhost 14/16] vdpa/mlx5: Enable hw support for vq descriptor mapping Dragos Tatulea
2023-10-05  9:42   ` Eugenio Perez Martin
2023-10-05 12:16     ` Dragos Tatulea
2023-10-05 14:48       ` Eugenio Perez Martin
2023-10-02 10:16 ` [PATCH vhost v2 00/16] vdpa: Add support for vq descriptor mappings Leon Romanovsky
2023-10-02 10:16 ` (subset) " Leon Romanovsky
2023-10-05 17:31 ` Michael S. Tsirkin
2023-10-05 17:44   ` Dragos Tatulea
2023-10-05 19:18     ` Michael S. Tsirkin
  -- strict thread matches above, loose matches on Subject: below --
2023-09-28 16:33 Dragos Tatulea
