* [RFC 0/7] vhost-vdpa: add support for iommufd
@ 2023-05-03 9:13 Cindy Lu
2023-05-03 9:13 ` [RFC 1/7] vhost: introduce new UAPI to support IOMMUFD Cindy Lu
` (8 more replies)
0 siblings, 9 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
Hi All,
This is an RFC series to add IOMMUFD support to the vdpa device.
Any comments are welcome.
Thanks
Cindy
Cindy Lu (7):
vhost: introduce new UAPI to support IOMMUFD
qapi: support iommufd in vdpa
virtio : add a ptr for vdpa_iommufd in VirtIODevice
net/vhost-vdpa: Add the check for iommufd
vhost-vdpa: Add the iommufd support in the map/unmap function
vhost-vdpa: init iommufd function in vhost_vdpa start
vhost-vdpa-iommufd: Add iommufd support for vdpa
hw/virtio/meson.build | 2 +-
hw/virtio/vhost-vdpa-iommufd.c | 240 +++++++++++++++++++++++++++++++++
hw/virtio/vhost-vdpa.c | 74 +++++++++-
include/hw/virtio/vhost-vdpa.h | 47 +++++++
include/hw/virtio/virtio.h | 5 +
linux-headers/linux/vhost.h | 72 ++++++++++
net/vhost-vdpa.c | 31 +++--
qapi/net.json | 1 +
8 files changed, 451 insertions(+), 21 deletions(-)
create mode 100644 hw/virtio/vhost-vdpa-iommufd.c
--
2.34.3
^ permalink raw reply [flat|nested] 14+ messages in thread
* [RFC 1/7] vhost: introduce new UAPI to support IOMMUFD
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 2/7] qapi: support iommufd in vdpa Cindy Lu
` (7 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
Add three new UAPIs:
VHOST_VDPA_SET_IOMMU_FD: bind the vdpa device to an iommufd
VDPA_DEVICE_ATTACH_IOMMUFD_AS: attach a new IOAS to the iommufd
VDPA_DEVICE_DETACH_IOMMUFD_AS: detach all IOASes from the iommufd
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
linux-headers/linux/vhost.h | 72 +++++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index f9f115a7c7..bf426177f3 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -180,4 +180,76 @@
*/
#define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D)
+/* vhost vdpa set iommufd
+ * Input parameters:
+ * @iommufd: file descriptor from /dev/iommu; pass -1 to unset
+ * @group_id: identifier of the group that a virtqueue belongs to
+ * @ioas_id: IOAS identifier returned from ioctl(IOMMU_IOAS_ALLOC)
+ * Output parameters:
+ * @out_dev_id: device identifier
+ * @out_hwpt_id: hardware IO pagetable identifier
+ */
+struct vhost_vdpa_set_iommufd {
+ __s32 iommufd;
+ __u32 group_id;
+ __u32 ioas_id;
+ __u32 out_devid;
+ __u32 out_hwptid;
+};
+
+#define VHOST_VDPA_SET_IOMMU_FD \
+ _IOW(VHOST_VIRTIO, 0x7e, struct vhost_vdpa_set_iommufd)
+
+/*
+ * VDPA_DEVICE_ATTACH_IOMMUFD_AS -
+ * _IOW(VHOST_VIRTIO, 0x7f, struct vdpa_device_attach_iommufd_as)
+ *
+ * Attach a vdpa device to an iommufd address space specified by IOAS
+ * id.
+ *
+ * Available only after a device has been bound to iommufd via
+ * VHOST_VDPA_SET_IOMMU_FD
+ *
+ * Undo by VDPA_DEVICE_DETACH_IOMMUFD_AS or device fd close.
+ *
+ * @argsz: user filled size of this data.
+ * @flags: must be 0.
+ * @ioas_id: Input the target id which can represent an ioas
+ * allocated via iommufd subsystem.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vdpa_device_attach_iommufd_as {
+ __u32 argsz;
+ __u32 flags;
+ __u32 ioas_id;
+};
+
+#define VDPA_DEVICE_ATTACH_IOMMUFD_AS \
+ _IOW(VHOST_VIRTIO, 0x7f, struct vdpa_device_attach_iommufd_as)
+
+
+/*
+ * VDPA_DEVICE_DETACH_IOMMUFD_AS
+ *
+ * Detach a vdpa device from the iommufd address space it has been
+ * attached to. After this, the device should be in a DMA-blocked state.
+ *
+ * Available only after a device has been bound to iommufd via
+ * VHOST_VDPA_SET_IOMMU_FD
+ *
+ * @argsz: user filled size of this data.
+ * @flags: must be 0.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vdpa_device_detach_iommufd_as {
+ __u32 argsz;
+ __u32 flags;
+};
+
+#define VDPA_DEVICE_DETACH_IOMMUFD_AS \
+ _IOW(VHOST_VIRTIO, 0x83, struct vdpa_device_detach_iommufd_as)
+
+
#endif
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 2/7] qapi: support iommufd in vdpa
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
2023-05-03 9:13 ` [RFC 1/7] vhost: introduce new UAPI to support IOMMUFD Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 3/7] virtio : add a ptr for vdpa_iommufd in VirtIODevice Cindy Lu
` (6 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
Add a new option, iommufd. The usage is:
....
-object iommufd,id=iommufd0 \
-device virtio-net-pci,netdev=vhost-vdpa1,disable-legacy=on,disable-modern=off\
-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa1,iommufd=iommufd0\
...
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
qapi/net.json | 1 +
1 file changed, 1 insertion(+)
diff --git a/qapi/net.json b/qapi/net.json
index 522ac582ed..fffaf9bb5e 100644
--- a/qapi/net.json
+++ b/qapi/net.json
@@ -461,6 +461,7 @@
'*vhostdev': 'str',
'*vhostfd': 'str',
'*queues': 'int',
+ '*iommufd': 'str',
'*x-svq': {'type': 'bool', 'features' : [ 'unstable'] } } }
##
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 3/7] virtio : add a ptr for vdpa_iommufd in VirtIODevice
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
2023-05-03 9:13 ` [RFC 1/7] vhost: introduce new UAPI to support IOMMUFD Cindy Lu
2023-05-03 9:13 ` [RFC 2/7] qapi: support iommufd in vdpa Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 4/7] net/vhost-vdpa: Add the check for iommufd Cindy Lu
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
To support iommufd, vdpa needs to save the ioas_id and the ASID,
which need to be shared among all vhost_vdpa devices.
So add a pointer in VirtIODevice.
A vdpa device needs to initialize it at device start, and all vdpa
devices will then read/write this same pointer. TODO: add a lock for
read and write.
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
include/hw/virtio/vhost-vdpa.h | 23 +++++++++++++++++++++++
include/hw/virtio/virtio.h | 5 +++++
2 files changed, 28 insertions(+)
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 7997f09a8d..309d4ffc70 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -18,6 +18,10 @@
#include "hw/virtio/vhost-shadow-virtqueue.h"
#include "hw/virtio/virtio.h"
#include "standard-headers/linux/vhost_types.h"
+//#include "sysemu/iommufd.h"
+#include "qemu/osdep.h"
+#include "sysemu/sysemu.h"
+
/*
* ASID dedicated to map guest's addresses. If SVQ is disabled it maps GPA to
@@ -30,6 +34,8 @@ typedef struct VhostVDPAHostNotifier {
void *addr;
} VhostVDPAHostNotifier;
+typedef struct IOMMUFDBackend IOMMUFDBackend;
+
typedef struct vhost_vdpa {
int device_fd;
int index;
@@ -51,6 +57,23 @@ typedef struct vhost_vdpa {
VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
} VhostVDPA;
+
+typedef struct vdpa_iommufd {
+ IOMMUFDBackend *iommufd;
+ struct vhost_dev *dev;
+ /* ioas_id returned by IOMMUFD; iommufd needs this id for map/unmap */
+ uint32_t ioas_id;
+ /*ASID used for vq*/
+ uint32_t asid;
+ __u32 devid; /* not used */
+ __u32 hwptid; /* not used */
+ AddressSpace *as;
+ struct vdpa_iommufd *next;
+ // QLIST_ENTRY(vdpa_iommufd) iommufd_next;
+
+} VDPAIOMMUFDState;
+
+
int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 77c6c55929..36b4783466 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -152,6 +152,11 @@ struct VirtIODevice
uint8_t device_endian;
bool use_guest_notifier_mask;
AddressSpace *dma_as;
+ /* Pointer to a struct vdpa_iommufd; will change to a QLIST if needed.
+ * This struct saves the ioas_id/ASID that iommufd needs for
+ * map/unmap; the ioas_id/ASID is shared between vqs, so we keep
+ * the pointer here. */
+ void *iommufd_ptr;
QLIST_HEAD(, VirtQueue) *vector_queues;
QTAILQ_ENTRY(VirtIODevice) next;
EventNotifier config_notifier;
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 4/7] net/vhost-vdpa: Add the check for iommufd
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (2 preceding siblings ...)
2023-05-03 9:13 ` [RFC 3/7] virtio : add a ptr for vdpa_iommufd in VirtIODevice Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 5/7] vhost-vdpa: Add the iommufd support in the map/unmap function Cindy Lu
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
Add a check for the iommufd object: if iommufd is enabled,
pass the information to vhost_vdpa.
The vhost_vdpa device start will check this flag and connect
to the iommufd.
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
net/vhost-vdpa.c | 31 +++++++++++++++++--------------
1 file changed, 17 insertions(+), 14 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1a13a34d35..d4819c28e1 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -659,16 +659,12 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
.avail_handler = vhost_vdpa_net_handle_ctrl_avail,
};
-static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
- const char *device,
- const char *name,
- int vdpa_device_fd,
- int queue_pair_index,
- int nvqs,
- bool is_datapath,
- bool svq,
- struct vhost_vdpa_iova_range iova_range,
- VhostIOVATree *iova_tree)
+static NetClientState *
+net_vhost_vdpa_init(NetClientState *peer, const char *device, const char *name,
+ int vdpa_device_fd, int queue_pair_index, int nvqs,
+ bool is_datapath, bool svq, bool enable_iommufd,
+ struct vhost_vdpa_iova_range iova_range,
+ VhostIOVATree *iova_tree)
{
NetClientState *nc = NULL;
VhostVDPAState *s;
@@ -691,6 +687,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->vhost_vdpa.iova_range = iova_range;
s->vhost_vdpa.shadow_data = svq;
s->vhost_vdpa.iova_tree = iova_tree;
+ s->vhost_vdpa.enable_iommufd = enable_iommufd;
if (!is_datapath) {
s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
vhost_vdpa_net_cvq_cmd_page_len());
@@ -793,6 +790,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
}
}
+ bool enable_iommufd = false;
+ if (opts->iommufd) {
+ enable_iommufd = true;
+ printf("[%s] %d called\n", __func__, __LINE__);
+ }
+
r = vhost_vdpa_get_features(vdpa_device_fd, &features, errp);
if (unlikely(r < 0)) {
goto err;
@@ -825,15 +828,15 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
for (i = 0; i < queue_pairs; i++) {
ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
vdpa_device_fd, i, 2, true, opts->x_svq,
- iova_range, iova_tree);
+ enable_iommufd, iova_range, iova_tree);
if (!ncs[i])
goto err;
}
if (has_cvq) {
- nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
- vdpa_device_fd, i, 1, false,
- opts->x_svq, iova_range, iova_tree);
+ nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd, i,
+ 1, false, opts->x_svq, enable_iommufd,
+ iova_range, iova_tree);
if (!nc)
goto err;
}
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 5/7] vhost-vdpa: Add the iommufd support in the map/unmap function
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (3 preceding siblings ...)
2023-05-03 9:13 ` [RFC 4/7] net/vhost-vdpa: Add the check for iommufd Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 6/7] vhost-vdpa: init iommufd function in vhost_vdpa start Cindy Lu
` (3 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
1. Rename the map/unmap functions to legacy_map/unmap.
2. Add a check for iommufd support:
   a. If iommufd is supported, call the iommufd-related function.
   b. In order to reuse the kernel's iotlb processing, still send the
      legacy-mode iotlb message; the kernel will check and
      skip the legacy iotlb message if iommufd is enabled.
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/virtio/vhost-vdpa.c | 56 ++++++++++++++++++++++++++++++----
include/hw/virtio/vhost-vdpa.h | 24 +++++++++++++++
2 files changed, 74 insertions(+), 6 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 542e003101..85240926b2 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -26,6 +26,7 @@
#include "cpu.h"
#include "trace.h"
#include "qapi/error.h"
+#include "sysemu/iommufd.h"
/*
* Return one past the end of the end of section. Be careful with uint64_t
@@ -76,8 +77,9 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
* The caller must set asid = 0 if the device does not support asid.
* This is not an ABI break since it is set to 0 by the initializer anyway.
*/
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
- hwaddr size, void *vaddr, bool readonly)
+
+int vhost_vdpa_leagcy_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
@@ -103,13 +105,32 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
return ret;
}
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly)
+{
+ struct vhost_dev *dev = v->dev;
+
+ if ((v->enable_iommufd) && (v->ops == NULL)) {
+ vdpa_backend_iommufd_ops_class_init(v);
+ }
+
+ struct vdpa_iommu_backend_ops *ops = v->ops;
+ /* In order to reuse the kernel's iotlb processing, we still need to
+ send the legacy-mode mapping; in the kernel, the legacy-mode
+ mapping is replaced by iommufd */
+ if (v->enable_iommufd) {
+ ops->dma_map(dev, asid, iova, size, vaddr, readonly);
+ }
+ return vhost_vdpa_leagcy_dma_map(v, asid, iova, size, vaddr, readonly);
+}
/*
* The caller must set asid = 0 if the device does not support asid.
* This is not an ABI break since it is set to 0 by the initializer anyway.
*/
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
- hwaddr size)
+
+int vhost_vdpa_leagcy_dma_unmap(struct vhost_vdpa *v, uint32_t asid,
+ hwaddr iova, hwaddr size)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
@@ -132,6 +153,26 @@ int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
return ret;
}
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size)
+{
+ struct vhost_dev *dev = v->dev;
+
+ if ((v->enable_iommufd) && (v->ops == NULL)) {
+ vdpa_backend_iommufd_ops_class_init(v);
+ }
+
+
+ /* In order to reuse the kernel's iotlb processing, we still need to
+ send the legacy-mode mapping; in the kernel, the legacy-mode
+ mapping is replaced by iommufd */
+ if (v->enable_iommufd) {
+ struct vdpa_iommu_backend_ops *ops = v->ops;
+
+ ops->dma_unmap(dev, asid, iova, size);
+ }
+ return vhost_vdpa_leagcy_dma_unmap(v, asid, iova, size);
+}
static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
{
@@ -423,13 +464,14 @@ static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)
v->shadow_vqs = g_steal_pointer(&shadow_vqs);
}
-
+int g_iommufd;
static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
{
struct vhost_vdpa *v;
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
trace_vhost_vdpa_init(dev, opaque);
int ret;
+ printf("[%s] %d called\n", __func__, __LINE__);
/*
* Similar to VFIO, we end up pinning all guest memory and have to
@@ -580,7 +622,9 @@ static int vhost_vdpa_cleanup(struct vhost_dev *dev)
vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
memory_listener_unregister(&v->listener);
vhost_vdpa_svq_cleanup(dev);
-
+ if (vhost_vdpa_first_dev(dev)) {
+ v->ops->detach_device(v);
+ }
dev->opaque = NULL;
ram_block_discard_disable(false);
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 309d4ffc70..aa0e3ed65b 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -55,6 +55,10 @@ typedef struct vhost_vdpa {
void *shadow_vq_ops_opaque;
struct vhost_dev *dev;
VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
+ /*iommufd related*/
+ struct vdpa_iommu_backend_ops *ops;
+ bool enable_iommufd;
+
} VhostVDPA;
@@ -76,9 +80,29 @@ typedef struct vdpa_iommufd {
int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
+int vhost_vdpa_leagcy_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly);
+int vhost_vdpa_leagcy_dma_unmap(struct vhost_vdpa *v, uint32_t asid,
+ hwaddr iova, hwaddr size);
+
int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
hwaddr size, void *vaddr, bool readonly);
int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
hwaddr size);
+struct vdpa_iommu_backend_ops {
+ /*< private >*/
+ ObjectClass parent_class;
+ int (*dma_map)(struct vhost_dev *dev, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly);
+ int (*dma_unmap)(struct vhost_dev *dev, uint32_t asid, hwaddr iova,
+ hwaddr size);
+ int (*attach_device)(struct vhost_vdpa *dev, AddressSpace *as,
+ Error **errp);
+ void (*detach_device)(struct vhost_vdpa *dev);
+ int (*reset)(VDPAIOMMUFDState *vdpa_iommufd);
+};
+
+void vdpa_backend_iommufd_ops_class_init(struct vhost_vdpa *v);
+
#endif
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 6/7] vhost-vdpa: init iommufd function in vhost_vdpa start
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (4 preceding siblings ...)
2023-05-03 9:13 ` [RFC 5/7] vhost-vdpa: Add the iommufd support in the map/unmap function Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-03 9:13 ` [RFC 7/7] vhost-vdpa-iommufd: Add iommufd support for vdpa Cindy Lu
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
Add support for iommufd: initialize vdpa_iommufd in vhost_vdpa start.
In this step, the driver will bind to the iommufd device
and attach the default ASID (asid 0) to the iommufd.
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/virtio/vhost-vdpa.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 85240926b2..6c01e3b44f 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1158,6 +1158,24 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
trace_vhost_vdpa_dev_start(dev, started);
if (started) {
+ if ((v->enable_iommufd) && (vhost_vdpa_first_dev(dev))) {
+ struct vdpa_iommufd *vdpa_iommufd;
+
+ vdpa_backend_iommufd_ops_class_init(v);
+
+ if (dev->vdev->iommufd_ptr == NULL) {
+ vdpa_iommufd = g_malloc(sizeof(VDPAIOMMUFDState));
+
+ vdpa_iommufd->iommufd = g_malloc(sizeof(IOMMUFDBackend));
+ dev->vdev->iommufd_ptr = vdpa_iommufd;
+
+ qemu_mutex_init(&vdpa_iommufd->iommufd->lock);
+ iommufd_backend_connect(vdpa_iommufd->iommufd, NULL);
+
+ v->ops->attach_device(v, dev->vdev->dma_as, NULL);
+ }
+ }
+
vhost_vdpa_host_notifiers_init(dev);
ok = vhost_vdpa_svqs_start(dev);
if (unlikely(!ok)) {
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC 7/7] vhost-vdpa-iommufd: Add iommufd support for vdpa
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (5 preceding siblings ...)
2023-05-03 9:13 ` [RFC 6/7] vhost-vdpa: init iommufd function in vhost_vdpa start Cindy Lu
@ 2023-05-03 9:13 ` Cindy Lu
2023-05-05 3:29 ` [RFC 0/7] vhost-vdpa: add support for iommufd Jason Wang
2023-09-13 13:31 ` Michael S. Tsirkin
8 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-05-03 9:13 UTC (permalink / raw)
To: lulu, mst, jasowang, qemu-devel
This file adds iommufd support for vdpa, covering the following:
1> iommufd bind/unbind:
bind the vdpa device to iommufd and attach ASID 0 to the iommufd
2> iommufd map/unmap. The map function works as follows:
a. Check whether the asid has been used before.
b. If this is a new asid, get a new ioas_id and attach it to the
iommufd; save this information in vdpa_iommufd.
c. Use the ioas_id for the mapping.
The unmap logic is the same.
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/virtio/meson.build | 2 +-
hw/virtio/vhost-vdpa-iommufd.c | 240 +++++++++++++++++++++++++++++++++
2 files changed, 241 insertions(+), 1 deletion(-)
create mode 100644 hw/virtio/vhost-vdpa-iommufd.c
diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
index f93be2e137..848fdb18eb 100644
--- a/hw/virtio/meson.build
+++ b/hw/virtio/meson.build
@@ -13,7 +13,7 @@ if have_vhost
specific_virtio_ss.add(files('vhost-user.c'))
endif
if have_vhost_vdpa
- specific_virtio_ss.add(files('vhost-vdpa.c', 'vhost-shadow-virtqueue.c'))
+ specific_virtio_ss.add(files('vhost-vdpa.c', 'vhost-shadow-virtqueue.c','vhost-vdpa-iommufd.c'))
endif
else
softmmu_virtio_ss.add(files('vhost-stub.c'))
diff --git a/hw/virtio/vhost-vdpa-iommufd.c b/hw/virtio/vhost-vdpa-iommufd.c
new file mode 100644
index 0000000000..6a0875c0a4
--- /dev/null
+++ b/hw/virtio/vhost-vdpa-iommufd.c
@@ -0,0 +1,240 @@
+
+#include "qemu/osdep.h"
+#include <sys/ioctl.h>
+#include <linux/vhost.h>
+#include <linux/vfio.h>
+#include <linux/iommufd.h>
+#include "sysemu/iommufd.h"
+#include "hw/virtio/vhost.h"
+
+#include "hw/virtio/vhost-vdpa.h"
+
+static int vdpa_device_attach_ioas(struct vhost_vdpa *dev,
+ VDPAIOMMUFDState *vdpa_iommufd)
+{
+ int ret;
+
+ struct vdpa_device_attach_iommufd_as attach_data = {
+ .argsz = sizeof(attach_data),
+ .flags = 0,
+ .ioas_id = vdpa_iommufd->ioas_id,
+ };
+ /* Attach device to an ioas within iommufd */
+ ret = ioctl(dev->device_fd, VDPA_DEVICE_ATTACH_IOMMUFD_AS, &attach_data);
+ if (ret) {
+ error_report("fail to bind device fd=%d to ioas_id=%d", dev->device_fd,
+ vdpa_iommufd->ioas_id);
+ return ret;
+ }
+
+ return 0;
+}
+static VDPAIOMMUFDState *vdpa_get_ioas_by_asid(struct vhost_dev *hdev,
+ uint32_t asid)
+{
+ VDPAIOMMUFDState *vdpa_iommufd_ptr = hdev->vdev->iommufd_ptr;
+ while (vdpa_iommufd_ptr != NULL) {
+ if (asid == vdpa_iommufd_ptr->asid) {
+ return vdpa_iommufd_ptr;
+ }
+
+ vdpa_iommufd_ptr = vdpa_iommufd_ptr->next;
+ }
+
+ return NULL;
+}
+static VDPAIOMMUFDState *vdpa_add_new_ioas_id(struct vhost_dev *hdev,
+ uint32_t asid)
+{
+ int ret;
+ uint32_t ioas_id;
+
+ struct vhost_vdpa *v = hdev->opaque;
+ VDPAIOMMUFDState *vdpa_iommufd_ptr = hdev->vdev->iommufd_ptr;
+ VDPAIOMMUFDState *vdpa_iommufd_new = g_malloc(sizeof(VDPAIOMMUFDState));
+
+ vdpa_iommufd_new->dev = hdev;
+ vdpa_iommufd_new->asid = asid;
+ vdpa_iommufd_new->iommufd = vdpa_iommufd_ptr->iommufd;
+
+ ret = iommufd_backend_get_ioas(vdpa_iommufd_new->iommufd, &ioas_id);
+ if (ret < 0) {
+ error_report("Failed to alloc ioas (%s)", strerror(errno));
+ return NULL;
+ }
+
+ vdpa_iommufd_new->ioas_id = ioas_id;
+ /* this is a new asid, attach it to iommufd */
+ ret = vdpa_device_attach_ioas(v, vdpa_iommufd_new);
+ if (ret < 0) {
+ error_report("Failed to attach ioas (%s)", strerror(errno));
+ return NULL;
+ }
+ while (vdpa_iommufd_ptr->next != NULL) {
+ vdpa_iommufd_ptr = vdpa_iommufd_ptr->next;
+ }
+ /* save this vdpa_iommufd in the list */
+ vdpa_iommufd_ptr->next = vdpa_iommufd_new;
+ vdpa_iommufd_new->next = NULL;
+ return vdpa_iommufd_new;
+}
+static int vdpa_iommufd_map(struct vhost_dev *hdev, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly)
+{
+ VDPAIOMMUFDState *vdpa_iommufd;
+
+ if (hdev->vdev == NULL) {
+ error_report("Failed to get vdev (%s)", strerror(errno));
+ return 0;
+ }
+ /* check whether this asid was attached to iommufd before */
+ vdpa_iommufd = vdpa_get_ioas_by_asid(hdev, asid);
+ if (vdpa_iommufd == NULL) {
+ /* first use of this asid: allocate an ioas and add it to iommufd */
+ vdpa_iommufd = vdpa_add_new_ioas_id(hdev, asid);
+ }
+ return iommufd_backend_map_dma(vdpa_iommufd->iommufd, vdpa_iommufd->ioas_id,
+ iova, size, vaddr, readonly);
+}
+
+
+static int vdpa_iommufd_unmap(struct vhost_dev *hdev, uint32_t asid,
+ hwaddr iova, hwaddr size)
+{
+ VDPAIOMMUFDState *vdpa_iommufd;
+ if (hdev->vdev == NULL) {
+ error_report("Failed to get vdev (%s)", strerror(errno));
+ return 0;
+ }
+ /* check whether this asid was attached to iommufd before */
+
+ vdpa_iommufd = vdpa_get_ioas_by_asid(hdev, asid);
+ if (vdpa_iommufd == NULL) {
+ error_report("Failed to get ioas (%s)", strerror(errno));
+ return 0;
+ }
+ return iommufd_backend_unmap_dma(vdpa_iommufd->iommufd,
+ vdpa_iommufd->ioas_id, iova, size);
+}
+
+
+static void vdpa_device_detach_iommufd(struct vhost_vdpa *v,
+ VDPAIOMMUFDState *vdpa_iommufd,
+ Error **errp)
+{
+ struct vdpa_device_detach_iommufd_as detach_data = {
+ .argsz = sizeof(detach_data),
+ .flags = 0,
+ };
+
+ if (ioctl(v->device_fd, VDPA_DEVICE_DETACH_IOMMUFD_AS, &detach_data)) {
+ error_report("error detach device fd=%d ", v->device_fd);
+ return;
+ }
+}
+
+
+static int vdpa_device_bind_iommufd(struct vhost_vdpa *dev,
+ VDPAIOMMUFDState *vdpa_iommufd,
+ Error **errp)
+{
+ struct vhost_vdpa_set_iommufd bind = {
+ .iommufd = vdpa_iommufd->iommufd->fd,
+ .ioas_id = vdpa_iommufd->ioas_id,
+ };
+
+ int ret;
+ /* Bind device to iommufd */
+ ret = ioctl(dev->device_fd, VHOST_VDPA_SET_IOMMU_FD, &bind);
+ if (ret) {
+ error_report("error bind device fd=%d to iommufd=%d", dev->device_fd,
+ bind.iommufd);
+ return ret;
+ }
+
+ vdpa_iommufd->devid = bind.out_devid;
+ vdpa_iommufd->hwptid = bind.out_hwptid;
+
+ return vdpa_device_attach_ioas(dev, vdpa_iommufd);
+}
+
+static void vdpa_iommufd_destroy(VDPAIOMMUFDState *vdpa_iommufd)
+{
+ g_free(vdpa_iommufd);
+}
+
+/*attach the device to iommufd */
+static int vdpa_iommufd_attach_device(struct vhost_vdpa *v, AddressSpace *as,
+ Error **errp)
+{
+ VDPAIOMMUFDState *vdpa_iommufd;
+ int ret;
+ uint32_t ioas_id;
+ Error *err = NULL;
+ struct vhost_dev *dev = v->dev;
+ vdpa_iommufd = dev->vdev->iommufd_ptr;
+
+ /*allocate a new IOAS */
+ ret = iommufd_backend_get_ioas(vdpa_iommufd->iommufd, &ioas_id);
+ if (ret < 0) {
+ close(v->device_fd);
+ error_report("Failed to alloc ioas (%s)", strerror(errno));
+ return ret;
+ }
+
+ vdpa_iommufd->ioas_id = ioas_id;
+ vdpa_iommufd->dev = dev;
+ /* use the default ASID*/
+ vdpa_iommufd->asid = VHOST_VDPA_GUEST_PA_ASID;
+ vdpa_iommufd->next = NULL;
+
+ vdpa_iommufd->as = as;
+ /*bind the default ASID to iommufd*/
+ ret = vdpa_device_bind_iommufd(v, vdpa_iommufd, &err);
+ if (ret) {
+ /* todo check if fail */
+ error_report("Failed to vdpa_device_bind_iommufd (%s)",
+ strerror(errno));
+ iommufd_backend_put_ioas(vdpa_iommufd->iommufd, ioas_id);
+
+ vdpa_iommufd_destroy(vdpa_iommufd);
+ return ret;
+ }
+
+ return ret;
+}
+
+static void vdpa_iommufd_detach_device(struct vhost_vdpa *v)
+{
+ VDPAIOMMUFDState *vdpa_iommufd;
+
+ VDPAIOMMUFDState *vdpa_iommufd_tmp;
+ Error *err = NULL;
+
+ struct vhost_dev *dev = v->dev;
+ if (!dev->vdev) {
+ return;
+ }
+ vdpa_iommufd = dev->vdev->iommufd_ptr;
+ vdpa_device_detach_iommufd(v, vdpa_iommufd, &err);
+
+ while (vdpa_iommufd != NULL) {
+ iommufd_backend_put_ioas(vdpa_iommufd->iommufd, vdpa_iommufd->ioas_id);
+ vdpa_iommufd_tmp = vdpa_iommufd;
+ vdpa_iommufd = vdpa_iommufd->next;
+
+ vdpa_iommufd_destroy(vdpa_iommufd_tmp);
+ }
+}
+
+struct vdpa_iommu_backend_ops iommufd_ops = {
+ .dma_map = vdpa_iommufd_map,
+ .dma_unmap = vdpa_iommufd_unmap,
+ .attach_device = vdpa_iommufd_attach_device,
+ .detach_device = vdpa_iommufd_detach_device,
+};
+
+void vdpa_backend_iommufd_ops_class_init(struct vhost_vdpa *v)
+{
+ v->ops = &iommufd_ops;
+}
--
2.34.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (6 preceding siblings ...)
2023-05-03 9:13 ` [RFC 7/7] vhost-vdpa-iommufd: Add iommufd support for vdpa Cindy Lu
@ 2023-05-05 3:29 ` Jason Wang
2023-05-05 6:29 ` Cindy Lu
2023-09-13 13:31 ` Michael S. Tsirkin
8 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2023-05-05 3:29 UTC (permalink / raw)
To: Cindy Lu; +Cc: mst, qemu-devel
Hi Cindy
On Wed, May 3, 2023 at 5:13 PM Cindy Lu <lulu@redhat.com> wrote:
>
> Hi All
> There is the RFC to support the IOMMUFD in vdpa device
> any comments are welcome
> Thanks
> Cindy
Please post the kernel patch as well as a reference.
Thanks
>
> Cindy Lu (7):
> vhost: introduce new UAPI to support IOMMUFD
> qapi: support iommufd in vdpa
> virtio : add a ptr for vdpa_iommufd in VirtIODevice
> net/vhost-vdpa: Add the check for iommufd
> vhost-vdpa: Add the iommufd support in the map/unmap function
> vhost-vdpa: init iommufd function in vhost_vdpa start
> vhost-vdpa-iommufd: Add iommufd support for vdpa
>
> hw/virtio/meson.build | 2 +-
> hw/virtio/vhost-vdpa-iommufd.c | 240 +++++++++++++++++++++++++++++++++
> hw/virtio/vhost-vdpa.c | 74 +++++++++-
> include/hw/virtio/vhost-vdpa.h | 47 +++++++
> include/hw/virtio/virtio.h | 5 +
> linux-headers/linux/vhost.h | 72 ++++++++++
> net/vhost-vdpa.c | 31 +++--
> qapi/net.json | 1 +
> 8 files changed, 451 insertions(+), 21 deletions(-)
> create mode 100644 hw/virtio/vhost-vdpa-iommufd.c
>
> --
> 2.34.3
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-05-05 3:29 ` [RFC 0/7] vhost-vdpa: add support for iommufd Jason Wang
@ 2023-05-05 6:29 ` Cindy Lu
2023-06-05 5:41 ` Michael S. Tsirkin
0 siblings, 1 reply; 14+ messages in thread
From: Cindy Lu @ 2023-05-05 6:29 UTC (permalink / raw)
To: Jason Wang; +Cc: mst, qemu-devel
On Fri, May 5, 2023 at 11:29 AM Jason Wang <jasowang@redhat.com> wrote:
>
> Hi Cindy
>
> On Wed, May 3, 2023 at 5:13 PM Cindy Lu <lulu@redhat.com> wrote:
> >
> > Hi All
> > There is the RFC to support the IOMMUFD in vdpa device
> > any comments are welcome
> > Thanks
> > Cindy
>
> Please post the kernel patch as well as a reference.
>
> Thanks
>
sure,will do
Thanks
cindy
> >
> > Cindy Lu (7):
> > vhost: introduce new UAPI to support IOMMUFD
> > qapi: support iommufd in vdpa
> > virtio : add a ptr for vdpa_iommufd in VirtIODevice
> > net/vhost-vdpa: Add the check for iommufd
> > vhost-vdpa: Add the iommufd support in the map/unmap function
> > vhost-vdpa: init iommufd function in vhost_vdpa start
> > vhost-vdpa-iommufd: Add iommufd support for vdpa
> >
> > hw/virtio/meson.build | 2 +-
> > hw/virtio/vhost-vdpa-iommufd.c | 240 +++++++++++++++++++++++++++++++++
> > hw/virtio/vhost-vdpa.c | 74 +++++++++-
> > include/hw/virtio/vhost-vdpa.h | 47 +++++++
> > include/hw/virtio/virtio.h | 5 +
> > linux-headers/linux/vhost.h | 72 ++++++++++
> > net/vhost-vdpa.c | 31 +++--
> > qapi/net.json | 1 +
> > 8 files changed, 451 insertions(+), 21 deletions(-)
> > create mode 100644 hw/virtio/vhost-vdpa-iommufd.c
> >
> > --
> > 2.34.3
> >
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-05-05 6:29 ` Cindy Lu
@ 2023-06-05 5:41 ` Michael S. Tsirkin
2023-06-05 8:04 ` Cindy Lu
0 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2023-06-05 5:41 UTC (permalink / raw)
To: Cindy Lu; +Cc: Jason Wang, qemu-devel
On Fri, May 05, 2023 at 02:29:23PM +0800, Cindy Lu wrote:
> On Fri, May 5, 2023 at 11:29 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > Hi Cindy
> >
> > On Wed, May 3, 2023 at 5:13 PM Cindy Lu <lulu@redhat.com> wrote:
> > >
> > > Hi All
> > > There is the RFC to support the IOMMUFD in vdpa device
> > > any comments are welcome
> > > Thanks
> > > Cindy
> >
> > Please post the kernel patch as well as a reference.
> >
> > Thanks
> >
> sure,will do
> Thanks
> cindy
Is this effort going anywhere? It will soon be too late for
the next merge window.
> > >
> > > [...]
> >
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-06-05 5:41 ` Michael S. Tsirkin
@ 2023-06-05 8:04 ` Cindy Lu
0 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-06-05 8:04 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: Jason Wang, qemu-devel
On Mon, Jun 5, 2023 at 1:41 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, May 05, 2023 at 02:29:23PM +0800, Cindy Lu wrote:
> > On Fri, May 5, 2023 at 11:29 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > Hi Cindy
> > >
> > > On Wed, May 3, 2023 at 5:13 PM Cindy Lu <lulu@redhat.com> wrote:
> > > >
> > > > Hi All
> > > > There is the RFC to support the IOMMUFD in vdpa device
> > > > any comments are welcome
> > > > Thanks
> > > > Cindy
> > >
> > > Please post the kernel patch as well as a reference.
> > >
> > > Thanks
> > >
> > sure,will do
> > Thanks
> > cindy
>
> Is this effort going anywhere? It will soon be too late for
> the next merge window.
>
Hi Michael
I'm currently working on some VDUSE issues; I will get back to working on IOMMUFD soon.
Thanks
Cindy
> > > >
> > > > [...]
> > >
>
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
` (7 preceding siblings ...)
2023-05-05 3:29 ` [RFC 0/7] vhost-vdpa: add support for iommufd Jason Wang
@ 2023-09-13 13:31 ` Michael S. Tsirkin
2023-09-14 5:44 ` Cindy Lu
8 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2023-09-13 13:31 UTC (permalink / raw)
To: Cindy Lu; +Cc: jasowang, qemu-devel
On Wed, May 03, 2023 at 05:13:30PM +0800, Cindy Lu wrote:
> Hi All
> There is the RFC to support the IOMMUFD in vdpa device
> any comments are welcome
> Thanks
> Cindy
Any plans to work on this or should I consider this abandoned?
> [...]
* Re: [RFC 0/7] vhost-vdpa: add support for iommufd
2023-09-13 13:31 ` Michael S. Tsirkin
@ 2023-09-14 5:44 ` Cindy Lu
0 siblings, 0 replies; 14+ messages in thread
From: Cindy Lu @ 2023-09-14 5:44 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: jasowang, qemu-devel
Hi Michael,
Really sorry for the delay; I was on sick leave for almost 2 months,
which delayed the development of this feature. I will
continue working on this feature soon.
Thanks
Cindy
On Wed, Sep 13, 2023 at 9:31 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, May 03, 2023 at 05:13:30PM +0800, Cindy Lu wrote:
> > Hi All
> > There is the RFC to support the IOMMUFD in vdpa device
> > any comments are welcome
> > Thanks
> > Cindy
>
> Any plans to work on this or should I consider this abandoned?
>
>
> > [...]
>
end of thread, other threads:[~2023-09-14 5:46 UTC | newest]
Thread overview: 14+ messages
2023-05-03 9:13 [RFC 0/7] vhost-vdpa: add support for iommufd Cindy Lu
2023-05-03 9:13 ` [RFC 1/7] vhost: introduce new UAPI to support IOMMUFD Cindy Lu
2023-05-03 9:13 ` [RFC 2/7] qapi: support iommufd in vdpa Cindy Lu
2023-05-03 9:13 ` [RFC 3/7] virtio : add a ptr for vdpa_iommufd in VirtIODevice Cindy Lu
2023-05-03 9:13 ` [RFC 4/7] net/vhost-vdpa: Add the check for iommufd Cindy Lu
2023-05-03 9:13 ` [RFC 5/7] vhost-vdpa: Add the iommufd support in the map/unmap function Cindy Lu
2023-05-03 9:13 ` [RFC 6/7] vhost-vdpa: init iommufd function in vhost_vdpa start Cindy Lu
2023-05-03 9:13 ` [RFC 7/7] vhost-vdpa-iommufd: Add iommufd support for vdpa Cindy Lu
2023-05-05 3:29 ` [RFC 0/7] vhost-vdpa: add support for iommufd Jason Wang
2023-05-05 6:29 ` Cindy Lu
2023-06-05 5:41 ` Michael S. Tsirkin
2023-06-05 8:04 ` Cindy Lu
2023-09-13 13:31 ` Michael S. Tsirkin
2023-09-14 5:44 ` Cindy Lu