* [RFC v2 0/3] Add packed virtqueue to shadow virtqueue
@ 2024-07-26 9:58 Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements Sahil Siddiq
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Sahil Siddiq @ 2024-07-26 9:58 UTC (permalink / raw)
To: eperezma, sgarzare; +Cc: mst, qemu-devel, Sahil Siddiq
Hi,
I have made some progress in this project and thought I would
send these changes first before continuing. I split patch v1 [1]
into two commits (#1 and #2) to make it easy to review. There are
very few changes in the first commit. The second commit has no
changes.
There are a few things that I am not entirely sure of in commit #3.
Q1.
In virtio_ring.h [2], new aliases with memory alignment enforcement
such as "vring_desc_t" have been created. I am not sure if this
is required for the packed vq descriptor ring (vring_packed_desc)
as well. I don't see a type alias that enforces memory alignment
for "vring_packed_desc" in the linux kernel. I haven't used any
alias either.
Q2.
I see that parts of the "vhost-vdpa" implementation are based on
the assumption that SVQ uses the split vq format. For example,
"vhost_vdpa_svq_map_rings" [3], calls "vhost_svq_device_area_size"
which is specific to split vqs. The "vhost_vring_addr" [4] struct
is also specific to split vqs.
My idea is to have a generic "vhost_vring_addr" structure that
wraps around split and packed vq specific structures, rather
than using them directly in if-else conditions wherever the
vhost-vdpa functions require their usage. However, this will
involve checking their impact in several other places where this
struct is currently being used (eg.: "vhost-user", "vhost-backend",
"libvhost-user").
Is this approach alright or is there a better alternative? I would
like to get your thoughts on this before working on this portion of
the project.
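As a rough illustration, the wrapper I have in mind would look
something like this (all names here are hypothetical, none of these
types exist in QEMU today):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: a tagged union so vhost-vdpa can pass one
 * address type around regardless of the ring format. */
struct svq_addr_split {
    uint64_t desc_user_addr;
    uint64_t avail_user_addr;
    uint64_t used_user_addr;
};

struct svq_addr_packed {
    uint64_t desc_user_addr;
    uint64_t driver_user_addr;   /* driver event suppression area */
    uint64_t device_user_addr;   /* device event suppression area */
};

struct svq_addr {
    bool is_packed;
    union {
        struct svq_addr_split split;
        struct svq_addr_packed packed;
    } u;
};
```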
Thanks,
Sahil
[1] https://lists.nongnu.org/archive/html/qemu-devel/2024-06/msg03417.html
[2] https://gitlab.com/qemu-project/qemu/-/blob/master/include/standard-headers/linux/virtio_ring.h#L149
[3] https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-vdpa.c#L1178
[4] https://gitlab.com/qemu-project/qemu/-/blob/master/include/standard-headers/linux/vhost_types.h#L30
Changes v1 -> v2:
* Split commit from RFC v1 into two commits.
* vhost-shadow-virtqueue.c
(vhost_svq_add_packed):
- Merge with "vhost_svq_vring_write_descs_packed()"
- Remove "num == 0" check
(vhost_svq_add): Use "is_packed" to check vq format.
(vhost_svq_get_vring_addr): Rename function.
(vhost_svq_get_vring_addr_packed): New function; not yet implemented.
(vhost_svq_memory_packed): New function.
(vhost_svq_start): Support packed vq format.
* vhost-shadow-virtqueue.h
(struct VhostShadowVirtqueue): New member "is_packed"
(vhost_svq_get_vring_addr): Renamed function.
(vhost_svq_get_vring_addr_packed): New function.
(vhost_svq_memory_packed): Likewise.
* vhost-vdpa.c
(vhost_svq_get_vring_addr): Rename function.
Sahil Siddiq (3):
vhost: Introduce packed vq and add buffer elements
vhost: Data structure changes to support packed vqs
vhost: Allocate memory for packed vring.
hw/virtio/vhost-shadow-virtqueue.c | 161 ++++++++++++++++++++++++++---
hw/virtio/vhost-shadow-virtqueue.h | 76 +++++++++-----
hw/virtio/vhost-vdpa.c | 4 +-
3 files changed, 198 insertions(+), 43 deletions(-)
--
2.45.2
^ permalink raw reply [flat|nested] 14+ messages in thread
* [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
2024-07-26 9:58 [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Sahil Siddiq
@ 2024-07-26 9:58 ` Sahil Siddiq
2024-07-26 13:48 ` Eugenio Perez Martin
2024-07-26 9:58 ` [RFC v2 2/3] vhost: Data structure changes to support packed vqs Sahil Siddiq
` (2 subsequent siblings)
3 siblings, 1 reply; 14+ messages in thread
From: Sahil Siddiq @ 2024-07-26 9:58 UTC (permalink / raw)
To: eperezma, sgarzare; +Cc: mst, qemu-devel, Sahil Siddiq
This is the first patch in a series to add support for packed
virtqueues in vhost_shadow_virtqueue. This patch implements the
insertion of available buffers in the descriptor area. It takes
into account descriptor chains, but does not consider indirect
descriptors.
Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
---
Changes v1 -> v2:
* Split commit from RFC v1 into two commits.
* vhost-shadow-virtqueue.c
(vhost_svq_add_packed):
- Merge with "vhost_svq_vring_write_descs_packed()"
- Remove "num == 0" check
hw/virtio/vhost-shadow-virtqueue.c | 93 +++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index fc5f408f77..c7b7e0c477 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -217,6 +217,91 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
return true;
}
+static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
+ const struct iovec *out_sg, size_t out_num,
+ const struct iovec *in_sg, size_t in_num,
+ unsigned *head)
+{
+ bool ok;
+ uint16_t head_flags = 0;
+ g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
+
+ *head = svq->vring_packed.next_avail_idx;
+
+ /* We need some descriptors here */
+ if (unlikely(!out_num && !in_num)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "Guest provided element with no descriptors");
+ return false;
+ }
+
+ uint16_t id, curr, i;
+ unsigned n;
+ struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
+
+ i = *head;
+ id = svq->free_head;
+ curr = id;
+
+ size_t num = out_num + in_num;
+
+ ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
+ if (unlikely(!ok)) {
+ return false;
+ }
+
+ ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
+ if (unlikely(!ok)) {
+ return false;
+ }
+
+ /* Write descriptors to SVQ packed vring */
+ for (n = 0; n < num; n++) {
+ uint16_t flags = cpu_to_le16(svq->vring_packed.avail_used_flags |
+ (n < out_num ? 0 : VRING_DESC_F_WRITE) |
+ (n + 1 == num ? 0 : VRING_DESC_F_NEXT));
+ if (i == *head) {
+ head_flags = flags;
+ } else {
+ descs[i].flags = flags;
+ }
+
+ descs[i].addr = cpu_to_le64(sgs[n]);
+ descs[i].id = id;
+ if (n < out_num) {
+ descs[i].len = cpu_to_le32(out_sg[n].iov_len);
+ } else {
+ descs[i].len = cpu_to_le32(in_sg[n - out_num].iov_len);
+ }
+
+ curr = cpu_to_le16(svq->desc_next[curr]);
+
+ if (++i >= svq->vring_packed.vring.num) {
+ i = 0;
+ svq->vring_packed.avail_used_flags ^=
+ 1 << VRING_PACKED_DESC_F_AVAIL |
+ 1 << VRING_PACKED_DESC_F_USED;
+ }
+ }
+
+ if (i <= *head) {
+ svq->vring_packed.avail_wrap_counter ^= 1;
+ }
+
+ svq->vring_packed.next_avail_idx = i;
+ svq->free_head = curr;
+
+ /*
+ * A driver MUST NOT make the first descriptor in the list
+ * available before all subsequent descriptors comprising
+ * the list are made available.
+ */
+ smp_wmb();
+ svq->vring_packed.vring.desc[*head].flags = head_flags;
+
+ return true;
+}
+
static void vhost_svq_kick(VhostShadowVirtqueue *svq)
{
bool needs_kick;
@@ -258,7 +343,13 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
return -ENOSPC;
}
- ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
+ if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
+ ok = vhost_svq_add_packed(svq, out_sg, out_num,
+ in_sg, in_num, &qemu_head);
+ } else {
+ ok = vhost_svq_add_split(svq, out_sg, out_num,
+ in_sg, in_num, &qemu_head);
+ }
if (unlikely(!ok)) {
return -EINVAL;
}
--
2.45.2
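The avail/used wrap-counter toggling in vhost_svq_add_packed can be
sketched standalone as follows (a minimal illustration with a made-up
helper name, not QEMU code; the bit positions are the ones from the
VirtIO spec):

```c
#include <stdint.h>

/* Bit positions for the packed descriptor flags, per the VirtIO spec. */
#define VRING_PACKED_DESC_F_AVAIL 7
#define VRING_PACKED_DESC_F_USED  15

/*
 * Mirror of the "avail_used_flags ^= ..." toggle in the patch: each
 * time the write index wraps past the ring size, the meaning of the
 * avail and used bits inverts.
 */
uint16_t wrap_avail_used_flags(uint16_t flags)
{
    return flags ^ ((1 << VRING_PACKED_DESC_F_AVAIL) |
                    (1 << VRING_PACKED_DESC_F_USED));
}
```

Starting from the initial driver state (avail bit set, used bit
clear), one wrap flips to the inverse pattern and a second wrap
restores the original one.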
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC v2 2/3] vhost: Data structure changes to support packed vqs
2024-07-26 9:58 [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements Sahil Siddiq
@ 2024-07-26 9:58 ` Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 3/3] vhost: Allocate memory for packed vring Sahil Siddiq
2024-07-26 13:40 ` [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Eugenio Perez Martin
3 siblings, 0 replies; 14+ messages in thread
From: Sahil Siddiq @ 2024-07-26 9:58 UTC (permalink / raw)
To: eperezma, sgarzare; +Cc: mst, qemu-devel, Sahil Siddiq
Introduce "struct vring_packed".
Modify VhostShadowVirtqueue so it can support split
and packed virtqueue formats.
Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
---
No changes since v1.
hw/virtio/vhost-shadow-virtqueue.h | 66 ++++++++++++++++++++----------
1 file changed, 44 insertions(+), 22 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 19c842a15b..ee1a87f523 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -46,10 +46,53 @@ typedef struct VhostShadowVirtqueueOps {
VirtQueueAvailCallback avail_handler;
} VhostShadowVirtqueueOps;
+struct vring_packed {
+ /* Actual memory layout for this queue. */
+ struct {
+ unsigned int num;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+ } vring;
+
+ /* Avail used flags. */
+ uint16_t avail_used_flags;
+
+ /* Index of the next avail descriptor. */
+ uint16_t next_avail_idx;
+
+ /* Driver ring wrap counter */
+ bool avail_wrap_counter;
+};
+
/* Shadow virtqueue to relay notifications */
typedef struct VhostShadowVirtqueue {
+ /* Virtio queue shadowing */
+ VirtQueue *vq;
+
+ /* Virtio device */
+ VirtIODevice *vdev;
+
+ /* SVQ vring descriptors state */
+ SVQDescState *desc_state;
+
+ /*
+ * Backup next field for each descriptor so we can recover securely, not
+ * needing to trust the device access.
+ */
+ uint16_t *desc_next;
+
+ /* Next free descriptor */
+ uint16_t free_head;
+
+ /* Size of SVQ vring free descriptors */
+ uint16_t num_free;
+
/* Shadow vring */
- struct vring vring;
+ union {
+ struct vring vring;
+ struct vring_packed vring_packed;
+ };
/* Shadow kick notifier, sent to vhost */
EventNotifier hdev_kick;
@@ -69,27 +112,12 @@ typedef struct VhostShadowVirtqueue {
/* Guest's call notifier, where the SVQ calls guest. */
EventNotifier svq_call;
- /* Virtio queue shadowing */
- VirtQueue *vq;
-
- /* Virtio device */
- VirtIODevice *vdev;
-
/* IOVA mapping */
VhostIOVATree *iova_tree;
- /* SVQ vring descriptors state */
- SVQDescState *desc_state;
-
/* Next VirtQueue element that guest made available */
VirtQueueElement *next_guest_avail_elem;
- /*
- * Backup next field for each descriptor so we can recover securely, not
- * needing to trust the device access.
- */
- uint16_t *desc_next;
-
/* Caller callbacks */
const VhostShadowVirtqueueOps *ops;
@@ -99,17 +127,11 @@ typedef struct VhostShadowVirtqueue {
/* Next head to expose to the device */
uint16_t shadow_avail_idx;
- /* Next free descriptor */
- uint16_t free_head;
-
/* Last seen used idx */
uint16_t shadow_used_idx;
/* Next head to consume from the device */
uint16_t last_used_idx;
-
- /* Size of SVQ vring free descriptors */
- uint16_t num_free;
} VhostShadowVirtqueue;
bool vhost_svq_valid_features(uint64_t features, Error **errp);
--
2.45.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC v2 3/3] vhost: Allocate memory for packed vring.
2024-07-26 9:58 [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 2/3] vhost: Data structure changes to support packed vqs Sahil Siddiq
@ 2024-07-26 9:58 ` Sahil Siddiq
2024-07-26 14:28 ` Eugenio Perez Martin
2024-07-26 13:40 ` [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Eugenio Perez Martin
3 siblings, 1 reply; 14+ messages in thread
From: Sahil Siddiq @ 2024-07-26 9:58 UTC (permalink / raw)
To: eperezma, sgarzare; +Cc: mst, qemu-devel, Sahil Siddiq
Allocate memory for the packed vq format and support
packed vq in the SVQ "start" operation.
Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
---
Changes v1 -> v2:
* vhost-shadow-virtqueue.h
(struct VhostShadowVirtqueue): New member "is_packed"
(vhost_svq_get_vring_addr): Renamed function.
(vhost_svq_get_vring_addr_packed): New function.
(vhost_svq_memory_packed): Likewise.
* vhost-shadow-virtqueue.c:
(vhost_svq_add): Use "is_packed" to check vq format.
(vhost_svq_get_vring_addr): Rename function.
(vhost_svq_get_vring_addr_packed): New function; not yet implemented.
(vhost_svq_memory_packed): New function.
(vhost_svq_start): Support packed vq format.
* vhost-vdpa.c
(vhost_svq_get_vring_addr): Rename function.
hw/virtio/vhost-shadow-virtqueue.c | 70 ++++++++++++++++++++++--------
hw/virtio/vhost-shadow-virtqueue.h | 10 ++++-
hw/virtio/vhost-vdpa.c | 4 +-
3 files changed, 63 insertions(+), 21 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index c7b7e0c477..045c07304c 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -343,7 +343,7 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
return -ENOSPC;
}
- if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
+ if (svq->is_packed) {
ok = vhost_svq_add_packed(svq, out_sg, out_num,
in_sg, in_num, &qemu_head);
} else {
@@ -679,18 +679,29 @@ void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd)
}
/**
- * Get the shadow vq vring address.
+ * Get the split shadow vq vring address.
* @svq: Shadow virtqueue
* @addr: Destination to store address
*/
-void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
- struct vhost_vring_addr *addr)
+void vhost_svq_get_vring_addr_split(const VhostShadowVirtqueue *svq,
+ struct vhost_vring_addr *addr)
{
addr->desc_user_addr = (uint64_t)(uintptr_t)svq->vring.desc;
addr->avail_user_addr = (uint64_t)(uintptr_t)svq->vring.avail;
addr->used_user_addr = (uint64_t)(uintptr_t)svq->vring.used;
}
+/**
+ * Get the packed shadow vq vring address.
+ * @svq: Shadow virtqueue
+ * @addr: Destination to store address
+ */
+void vhost_svq_get_vring_addr_packed(const VhostShadowVirtqueue *svq,
+ struct vhost_vring_addr *addr)
+{
+ /* TODO */
+}
+
size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq)
{
size_t desc_size = sizeof(vring_desc_t) * svq->vring.num;
@@ -707,6 +718,16 @@ size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq)
return ROUND_UP(used_size, qemu_real_host_page_size());
}
+size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq)
+{
+ size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
+ size_t driver_event_suppression = sizeof(struct vring_packed_desc_event);
+ size_t device_event_suppression = sizeof(struct vring_packed_desc_event);
+
+ return ROUND_UP(desc_size + driver_event_suppression + device_event_suppression,
+ qemu_real_host_page_size());
+}
+
/**
* Set a new file descriptor for the guest to kick the SVQ and notify for avail
*
@@ -759,19 +780,34 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
svq->vq = vq;
svq->iova_tree = iova_tree;
- svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
- svq->num_free = svq->vring.num;
- svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
- PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
- -1, 0);
- desc_size = sizeof(vring_desc_t) * svq->vring.num;
- svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
- svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
- PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
- -1, 0);
- svq->desc_state = g_new0(SVQDescState, svq->vring.num);
- svq->desc_next = g_new0(uint16_t, svq->vring.num);
- for (unsigned i = 0; i < svq->vring.num - 1; i++) {
+ if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
+ svq->is_packed = true;
+ svq->vring_packed.vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
+ svq->num_free = svq->vring_packed.vring.num;
+ svq->vring_packed.vring.desc = mmap(NULL, vhost_svq_memory_packed(svq),
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+ -1, 0);
+ desc_size = sizeof(struct vring_packed_desc) * svq->vring.num;
+ svq->vring_packed.vring.driver = (void *)((char *)svq->vring_packed.vring.desc + desc_size);
+ svq->vring_packed.vring.device = (void *)((char *)svq->vring_packed.vring.driver +
+ sizeof(struct vring_packed_desc_event));
+ } else {
+ svq->is_packed = false;
+ svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
+ svq->num_free = svq->vring.num;
+ svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+ -1, 0);
+ desc_size = sizeof(vring_desc_t) * svq->vring.num;
+ svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
+ svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+ -1, 0);
+ }
+
+ svq->desc_state = g_new0(SVQDescState, svq->num_free);
+ svq->desc_next = g_new0(uint16_t, svq->num_free);
+ for (unsigned i = 0; i < svq->num_free - 1; i++) {
svq->desc_next[i] = cpu_to_le16(i + 1);
}
}
diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index ee1a87f523..b396daf57d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -67,6 +67,9 @@ struct vring_packed {
/* Shadow virtqueue to relay notifications */
typedef struct VhostShadowVirtqueue {
+ /* True if packed virtqueue */
+ bool is_packed;
+
/* Virtio queue shadowing */
VirtQueue *vq;
@@ -146,10 +149,13 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq, size_t num);
void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd);
-void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
- struct vhost_vring_addr *addr);
+void vhost_svq_get_vring_addr_split(const VhostShadowVirtqueue *svq,
+ struct vhost_vring_addr *addr);
+void vhost_svq_get_vring_addr_packed(const VhostShadowVirtqueue *svq,
+ struct vhost_vring_addr *addr);
size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq);
size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
+size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq);
void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
VirtQueue *vq, VhostIOVATree *iova_tree);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3cdaa12ed5..688de4a662 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1130,7 +1130,7 @@ static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
struct vhost_vdpa *v = dev->opaque;
struct vhost_vring_addr svq_addr;
- vhost_svq_get_vring_addr(svq, &svq_addr);
+ vhost_svq_get_vring_addr_split(svq, &svq_addr);
vhost_vdpa_svq_unmap_ring(v, svq_addr.desc_user_addr);
@@ -1189,7 +1189,7 @@ static bool vhost_vdpa_svq_map_rings(struct vhost_dev *dev,
size_t avail_offset;
bool ok;
- vhost_svq_get_vring_addr(svq, &svq_addr);
+ vhost_svq_get_vring_addr_split(svq, &svq_addr);
driver_region = (DMAMap) {
.translated_addr = svq_addr.desc_user_addr,
--
2.45.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [RFC v2 0/3] Add packed virtqueue to shadow virtqueue
2024-07-26 9:58 [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Sahil Siddiq
` (2 preceding siblings ...)
2024-07-26 9:58 ` [RFC v2 3/3] vhost: Allocate memory for packed vring Sahil Siddiq
@ 2024-07-26 13:40 ` Eugenio Perez Martin
2024-07-26 17:11 ` Sahil
3 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2024-07-26 13:40 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
>
> Hi,
>
> I have made some progress in this project and thought I would
> send these changes first before continuing. I split patch v1 [1]
> into two commits (#1 and #2) to make it easy to review. There are
> very few changes in the first commit. The second commit has no
> changes.
>
> There are a few things that I am not entirely sure of in commit #3.
>
> Q1.
> In virtio_ring.h [2], new aliases with memory alignment enforcement
> such as "vring_desc_t" have been created. I am not sure if this
> is required for the packed vq descriptor ring (vring_packed_desc)
> as well. I don't see a type alias that enforces memory alignment
> for "vring_packed_desc" in the linux kernel. I haven't used any
> alias either.
>
The alignment is required to be 16 for the descriptor ring and 4 for
the device and driver areas by the standard [1]. In QEMU, this is
solved by calling mmap, which always returns page-aligned addresses.
> Q2.
> I see that parts of the "vhost-vdpa" implementation are based on
> the assumption that SVQ uses the split vq format. For example,
> "vhost_vdpa_svq_map_rings" [3], calls "vhost_svq_device_area_size"
> which is specific to split vqs. The "vhost_vring_addr" [4] struct
> is also specific to split vqs.
>
> My idea is to have a generic "vhost_vring_addr" structure that
> wraps around split and packed vq specific structures, rather
> than using them directly in if-else conditions wherever the
> vhost-vdpa functions require their usage. However, this will
> involve checking their impact in several other places where this
> struct is currently being used (eg.: "vhost-user", "vhost-backend",
> "libvhost-user").
>
Ok I've just found this is under-documented actually :).
As you mention, vhost-user is already using this same struct for
packed vqs [2], just translating the driver area from the avail vring
and the device area from the used vring. So the best option is to
stick with that, unless I'm missing something.
> Is this approach alright or is there a better alternative? I would
> like to get your thoughts on this before working on this portion of
> the project.
>
> Thanks,
> Sahil
>
[1] https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html
[2] https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe43/lib/vhost/vhost_user.c#L841C1-L841C25
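In other words, something along these lines (the struct name is
illustrative; the real vhost_vring_addr has additional fields):

```c
#include <stdint.h>

/*
 * Sketch of the vhost-user convention described above: reuse the
 * existing three-field address struct for packed rings by carrying the
 * driver area in the avail slot and the device area in the used slot.
 */
struct vring_addr_sketch {
    uint64_t desc_user_addr;
    uint64_t avail_user_addr;
    uint64_t used_user_addr;
};

void fill_packed_addr(struct vring_addr_sketch *addr,
                      uint64_t desc, uint64_t driver, uint64_t device)
{
    addr->desc_user_addr  = desc;
    addr->avail_user_addr = driver;  /* driver event suppression area */
    addr->used_user_addr  = device;  /* device event suppression area */
}
```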
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
2024-07-26 9:58 ` [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements Sahil Siddiq
@ 2024-07-26 13:48 ` Eugenio Perez Martin
2024-07-28 17:37 ` Sahil
0 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2024-07-26 13:48 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
>
> This is the first patch in a series to add support for packed
> virtqueues in vhost_shadow_virtqueue. This patch implements the
> insertion of available buffers in the descriptor area. It takes
> into account descriptor chains, but does not consider indirect
> descriptors.
>
> Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> ---
> Changes v1 -> v2:
> * Split commit from RFC v1 into two commits.
> * vhost-shadow-virtqueue.c
> (vhost_svq_add_packed):
> - Merge with "vhost_svq_vring_write_descs_packed()"
> - Remove "num == 0" check
>
> hw/virtio/vhost-shadow-virtqueue.c | 93 +++++++++++++++++++++++++++++-
> 1 file changed, 92 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index fc5f408f77..c7b7e0c477 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -217,6 +217,91 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
> return true;
> }
>
> +static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> + const struct iovec *out_sg, size_t out_num,
> + const struct iovec *in_sg, size_t in_num,
> + unsigned *head)
> +{
> + bool ok;
> + uint16_t head_flags = 0;
> + g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
> +
> + *head = svq->vring_packed.next_avail_idx;
> +
> + /* We need some descriptors here */
> + if (unlikely(!out_num && !in_num)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "Guest provided element with no descriptors");
> + return false;
> + }
> +
> + uint16_t id, curr, i;
> + unsigned n;
> + struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> +
> + i = *head;
> + id = svq->free_head;
> + curr = id;
> +
> + size_t num = out_num + in_num;
> +
> + ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
> + if (unlikely(!ok)) {
> + return false;
> + }
> +
> + ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
> + if (unlikely(!ok)) {
> + return false;
> + }
> +
(sorry I missed this from the RFC v1) I think all of the above should
be in the caller, shouldn't it? It is duplicated with split.
Also, declarations should be at the beginning of blocks per QEMU
coding style [1].
The rest looks good to me by visual inspection.
> + /* Write descriptors to SVQ packed vring */
> + for (n = 0; n < num; n++) {
> + uint16_t flags = cpu_to_le16(svq->vring_packed.avail_used_flags |
> + (n < out_num ? 0 : VRING_DESC_F_WRITE) |
> + (n + 1 == num ? 0 : VRING_DESC_F_NEXT));
> + if (i == *head) {
> + head_flags = flags;
> + } else {
> + descs[i].flags = flags;
> + }
> +
> + descs[i].addr = cpu_to_le64(sgs[n]);
> + descs[i].id = id;
> + if (n < out_num) {
> + descs[i].len = cpu_to_le32(out_sg[n].iov_len);
> + } else {
> + descs[i].len = cpu_to_le32(in_sg[n - out_num].iov_len);
> + }
> +
> + curr = cpu_to_le16(svq->desc_next[curr]);
> +
> + if (++i >= svq->vring_packed.vring.num) {
> + i = 0;
> + svq->vring_packed.avail_used_flags ^=
> + 1 << VRING_PACKED_DESC_F_AVAIL |
> + 1 << VRING_PACKED_DESC_F_USED;
> + }
> + }
> +
> + if (i <= *head) {
> + svq->vring_packed.avail_wrap_counter ^= 1;
> + }
> +
> + svq->vring_packed.next_avail_idx = i;
> + svq->free_head = curr;
> +
> + /*
> + * A driver MUST NOT make the first descriptor in the list
> + * available before all subsequent descriptors comprising
> + * the list are made available.
> + */
> + smp_wmb();
> + svq->vring_packed.vring.desc[*head].flags = head_flags;
> +
> + return true;
> +}
> +
> static void vhost_svq_kick(VhostShadowVirtqueue *svq)
> {
> bool needs_kick;
> @@ -258,7 +343,13 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> return -ENOSPC;
> }
>
> - ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
> + if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> + ok = vhost_svq_add_packed(svq, out_sg, out_num,
> + in_sg, in_num, &qemu_head);
> + } else {
> + ok = vhost_svq_add_split(svq, out_sg, out_num,
> + in_sg, in_num, &qemu_head);
> + }
> if (unlikely(!ok)) {
> return -EINVAL;
> }
> --
> 2.45.2
>
[1] https://www.qemu.org/docs/master/devel/style.html#declarations
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC v2 3/3] vhost: Allocate memory for packed vring.
2024-07-26 9:58 ` [RFC v2 3/3] vhost: Allocate memory for packed vring Sahil Siddiq
@ 2024-07-26 14:28 ` Eugenio Perez Martin
2024-07-28 13:41 ` Sahil
0 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2024-07-26 14:28 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
On Fri, Jul 26, 2024 at 11:59 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
>
> Allocate memory for the packed vq format and support
> packed vq in the SVQ "start" operation.
>
> Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> ---
> Changes v1 -> v2:
> * vhost-shadow-virtqueue.h
> (struct VhostShadowVirtqueue): New member "is_packed"
> (vhost_svq_get_vring_addr): Renamed function.
> (vhost_svq_get_vring_addr_packed): New function.
> (vhost_svq_memory_packed): Likewise.
> * vhost-shadow-virtqueue.c:
> (vhost_svq_add): Use "is_packed" to check vq format.
> (vhost_svq_get_vring_addr): Rename function.
> (vhost_svq_get_vring_addr_packed): New function; not yet implemented.
> (vhost_svq_memory_packed): New function.
> (vhost_svq_start): Support packed vq format.
> * vhost-vdpa.c
> (vhost_svq_get_vring_addr): Rename function.
>
>
> hw/virtio/vhost-shadow-virtqueue.c | 70 ++++++++++++++++++++++--------
> hw/virtio/vhost-shadow-virtqueue.h | 10 ++++-
> hw/virtio/vhost-vdpa.c | 4 +-
> 3 files changed, 63 insertions(+), 21 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index c7b7e0c477..045c07304c 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -343,7 +343,7 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> return -ENOSPC;
> }
>
> - if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> + if (svq->is_packed) {
> ok = vhost_svq_add_packed(svq, out_sg, out_num,
> in_sg, in_num, &qemu_head);
> } else {
> @@ -679,18 +679,29 @@ void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd)
> }
>
> /**
> - * Get the shadow vq vring address.
> + * Get the split shadow vq vring address.
> * @svq: Shadow virtqueue
> * @addr: Destination to store address
> */
> -void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
> - struct vhost_vring_addr *addr)
> +void vhost_svq_get_vring_addr_split(const VhostShadowVirtqueue *svq,
> + struct vhost_vring_addr *addr)
> {
> addr->desc_user_addr = (uint64_t)(uintptr_t)svq->vring.desc;
> addr->avail_user_addr = (uint64_t)(uintptr_t)svq->vring.avail;
> addr->used_user_addr = (uint64_t)(uintptr_t)svq->vring.used;
> }
>
> +/**
> + * Get the packed shadow vq vring address.
> + * @svq: Shadow virtqueue
> + * @addr: Destination to store address
> + */
> +void vhost_svq_get_vring_addr_packed(const VhostShadowVirtqueue *svq,
> + struct vhost_vring_addr *addr)
> +{
> + /* TODO */
> +}
> +
> size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq)
> {
> size_t desc_size = sizeof(vring_desc_t) * svq->vring.num;
> @@ -707,6 +718,16 @@ size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq)
> return ROUND_UP(used_size, qemu_real_host_page_size());
> }
>
> +size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq)
> +{
> + size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
> + size_t driver_event_suppression = sizeof(struct vring_packed_desc_event);
> + size_t device_event_suppression = sizeof(struct vring_packed_desc_event);
> +
> + return ROUND_UP(desc_size + driver_event_suppression + device_event_suppression,
> + qemu_real_host_page_size());
> +}
> +
> /**
> * Set a new file descriptor for the guest to kick the SVQ and notify for avail
> *
> @@ -759,19 +780,34 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> svq->vq = vq;
> svq->iova_tree = iova_tree;
>
> - svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> - svq->num_free = svq->vring.num;
> - svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> - -1, 0);
> - desc_size = sizeof(vring_desc_t) * svq->vring.num;
> - svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> - svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> - -1, 0);
> - svq->desc_state = g_new0(SVQDescState, svq->vring.num);
> - svq->desc_next = g_new0(uint16_t, svq->vring.num);
> - for (unsigned i = 0; i < svq->vring.num - 1; i++) {
> + if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> + svq->is_packed = true;
> + svq->vring_packed.vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> + svq->num_free = svq->vring_packed.vring.num;
> + svq->vring_packed.vring.desc = mmap(NULL, vhost_svq_memory_packed(svq),
> + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> + -1, 0);
> + desc_size = sizeof(struct vring_packed_desc) * svq->vring.num;
> + svq->vring_packed.vring.driver = (void *)((char *)svq->vring_packed.vring.desc + desc_size);
(Expanding on the cover letter comment) Here the driver area is
aligned properly too, as each descriptor is 16 bytes in length and the
required alignment for the driver area is 4 bytes by the VirtIO standard.
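A quick way to convince oneself (stand-in structs mirroring the spec
layout, not the QEMU definitions):

```c
#include <stddef.h>
#include <stdint.h>

/* 16 bytes, like struct vring_packed_desc. */
struct pdesc {
    uint64_t addr;
    uint32_t len;
    uint16_t id;
    uint16_t flags;
};

/* 4 bytes, like struct vring_packed_desc_event. */
struct pdesc_event {
    uint16_t off_wrap;
    uint16_t flags;
};

/*
 * Offset of the driver area when placed right after the descriptor
 * ring: a multiple of 16, hence always 4-byte aligned.
 */
size_t driver_area_offset(unsigned num)
{
    return sizeof(struct pdesc) * num;
}
```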
> + svq->vring_packed.vring.device = (void *)((char *)svq->vring_packed.vring.driver +
> + sizeof(struct vring_packed_desc_event));
> + } else {
> + svq->is_packed = false;
> + svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> + svq->num_free = svq->vring.num;
Nitpicks:
The variables is_packed and num_free can be merged out of the if/else
to reduce code duplication.
Also, it is clearer to me to assign svq->is_packed =
virtio_vdev_has_feature(...) and then check the variable in the if.
But other parts of QEMU do it as you do here, so I don't have a strong
opinion.
> + svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> + -1, 0);
> + desc_size = sizeof(vring_desc_t) * svq->vring.num;
> + svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> + svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> + -1, 0);
> + }
> +
> + svq->desc_state = g_new0(SVQDescState, svq->num_free);
> + svq->desc_next = g_new0(uint16_t, svq->num_free);
> + for (unsigned i = 0; i < svq->num_free - 1; i++) {
> svq->desc_next[i] = cpu_to_le16(i + 1);
> }
> }
> diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> index ee1a87f523..b396daf57d 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.h
> +++ b/hw/virtio/vhost-shadow-virtqueue.h
> @@ -67,6 +67,9 @@ struct vring_packed {
>
> /* Shadow virtqueue to relay notifications */
> typedef struct VhostShadowVirtqueue {
> + /* True if packed virtqueue */
> + bool is_packed;
> +
> /* Virtio queue shadowing */
> VirtQueue *vq;
>
> @@ -146,10 +149,13 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq, size_t num);
>
> void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
> void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd);
> -void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
> - struct vhost_vring_addr *addr);
> +void vhost_svq_get_vring_addr_split(const VhostShadowVirtqueue *svq,
> + struct vhost_vring_addr *addr);
> +void vhost_svq_get_vring_addr_packed(const VhostShadowVirtqueue *svq,
> + struct vhost_vring_addr *addr);
> size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq);
> size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
> +size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq);
Ok, now I understand the question in the cover letter better.
It is ok to reuse the already-present functions; there is no need to
create new ones to export in this header.
Thanks!
>
> void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> VirtQueue *vq, VhostIOVATree *iova_tree);
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 3cdaa12ed5..688de4a662 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -1130,7 +1130,7 @@ static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
> struct vhost_vdpa *v = dev->opaque;
> struct vhost_vring_addr svq_addr;
>
> - vhost_svq_get_vring_addr(svq, &svq_addr);
> + vhost_svq_get_vring_addr_split(svq, &svq_addr);
>
> vhost_vdpa_svq_unmap_ring(v, svq_addr.desc_user_addr);
>
> @@ -1189,7 +1189,7 @@ static bool vhost_vdpa_svq_map_rings(struct vhost_dev *dev,
> size_t avail_offset;
> bool ok;
>
> - vhost_svq_get_vring_addr(svq, &svq_addr);
> + vhost_svq_get_vring_addr_split(svq, &svq_addr);
>
> driver_region = (DMAMap) {
> .translated_addr = svq_addr.desc_user_addr,
> --
> 2.45.2
>
* Re: [RFC v2 0/3] Add packed virtqueue to shadow virtqueue
2024-07-26 13:40 ` [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Eugenio Perez Martin
@ 2024-07-26 17:11 ` Sahil
2024-07-26 18:25 ` Eugenio Perez Martin
0 siblings, 1 reply; 14+ messages in thread
From: Sahil @ 2024-07-26 17:11 UTC (permalink / raw)
To: Eugenio Perez Martin; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
Hi,
On Friday, July 26, 2024 7:10:24 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > [...]
> > Q1.
> > In virtio_ring.h [2], new aliases with memory alignment enforcement
> > such as "vring_desc_t" have been created. I am not sure if this
> > is required for the packed vq descriptor ring (vring_packed_desc)
> > as well. I don't see a type alias that enforces memory alignment
> > for "vring_packed_desc" in the linux kernel. I haven't used any
> > alias either.
>
> The alignment is required to be 16 for the descriptor ring and 4 for
> the device and driver areas by the standard [1]. In QEMU, this is
> solved by calling mmap, which always returns page-aligned addresses.
Ok, I understand this now.
> > Q2.
> > I see that parts of the "vhost-vdpa" implementation are based on
> > the assumption that SVQ uses the split vq format. For example,
> > "vhost_vdpa_svq_map_rings" [3], calls "vhost_svq_device_area_size"
> > which is specific to split vqs. The "vhost_vring_addr" [4] struct
> > is also specific to split vqs.
> >
> > My idea is to have a generic "vhost_vring_addr" structure that
> > wraps around split and packed vq specific structures, rather
> > than using them directly in if-else conditions wherever the
> > vhost-vdpa functions require their usage. However, this will
> > involve checking their impact in several other places where this
> > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > "libvhost-user").
>
> Ok I've just found this is under-documented actually :).
>
> As you mention, vhost-user is already using this same struct for
> packed vqs [2], just translating the driver area from the avail vring
> and the device area from the used vring. So the best option is to
> stick with that, unless I'm missing something.
>
>
> [1] https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html
> [2]
> https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe43/
> lib/vhost/vhost_user.c#L841C1-L841C25
Sorry, I am a little confused. I was referring to QEMU's vhost-user
implementation here.
From what I have understood, "vhost_vring_addr" is only used for split
vqs in QEMU's vhost-user implementation, and elsewhere too; the
implementation does not take packed vqs into account.
I was going through DPDK's source. In DPDK's implementation of vhost-user [1],
the same struct (vhost_virtqueue) is being used for split vqs and packed vqs. This
is possible since "vhost_virtqueue" [2] uses a union to wrap around the split and
packed versions of the vq.
> > My idea is to have a generic "vhost_vring_addr" structure that
> > wraps around split and packed vq specific structures, rather
> > than using them directly in if-else conditions wherever the
> > vhost-vdpa functions require their usage. However, this will
> > involve checking their impact in several other places where this
> > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > "libvhost-user").
I was referring to something similar by this. The current "vhost_vring_addr"
can be renamed to "vhost_vring_addr_split" for example, and a new struct
"vhost_vring_addr" can wrap around this and "vhost_vring_addr_packed"
using a union.
I liked the idea of using three unions, one for each member of the
virtqueue format, instead of having one union for the whole format. I
hadn't thought of this. I think that with the three unions, the
"vhost_svq_get_vring_addr" [3] function won't have to be split into two
new functions to handle the split and packed formats separately.
I am not sure if this is what you were referring to.
Thanks,
Sahil
[1] https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe43/lib/vhost/vhost_user.c#L861
[2] https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe43/lib/vhost/vhost.h#L275
[3] https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L595
* Re: [RFC v2 0/3] Add packed virtqueue to shadow virtqueue
2024-07-26 17:11 ` Sahil
@ 2024-07-26 18:25 ` Eugenio Perez Martin
2024-07-28 16:42 ` Sahil
0 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2024-07-26 18:25 UTC (permalink / raw)
To: Sahil; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
On Fri, Jul 26, 2024 at 7:11 PM Sahil <icegambit91@gmail.com> wrote:
>
> Hi,
>
> On Friday, July 26, 2024 7:10:24 PM GMT+5:30 Eugenio Perez Martin wrote:
> > On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > > [...]
> > > Q1.
> > > In virtio_ring.h [2], new aliases with memory alignment enforcement
> > > such as "vring_desc_t" have been created. I am not sure if this
> > > is required for the packed vq descriptor ring (vring_packed_desc)
> > > as well. I don't see a type alias that enforces memory alignment
> > > for "vring_packed_desc" in the linux kernel. I haven't used any
> > > alias either.
> >
> > The alignment is required to be 16 for the descriptor ring and 4 for
> > the device and driver areas by the standard [1]. In QEMU, this is
> > solved by calling mmap, which always returns page-aligned addresses.
>
> Ok, I understand this now.
>
> > > Q2.
> > > I see that parts of the "vhost-vdpa" implementation are based on
> > > the assumption that SVQ uses the split vq format. For example,
> > > "vhost_vdpa_svq_map_rings" [3], calls "vhost_svq_device_area_size"
> > > which is specific to split vqs. The "vhost_vring_addr" [4] struct
> > > is also specific to split vqs.
> > >
> > > My idea is to have a generic "vhost_vring_addr" structure that
> > > wraps around split and packed vq specific structures, rather
> > > than using them directly in if-else conditions wherever the
> > > vhost-vdpa functions require their usage. However, this will
> > > involve checking their impact in several other places where this
> > > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > > "libvhost-user").
> >
> > Ok I've just found this is under-documented actually :).
> >
> > As you mention, vhost-user is already using this same struct for
> > packed vqs [2], just translating the driver area from the avail vring
> > and the device area from the used vring. So the best option is to
> > stick with that, unless I'm missing something.
> >
> >
> > [1] https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html
> > [2]
> > https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe43/
> > lib/vhost/vhost_user.c#L841C1-L841C25
>
> Sorry, I am a little confused here. I was referring to QEMU's vhost-user
> implementation here.
>
> Based on what I have understood, "vhost_vring_addr" is only being used
> for split vqs in QEMU's vhost-user and in other places too. The implementation
> does not take into account packed vqs.
>
> I was going through DPDK's source. In DPDK's implementation of vhost-user [1],
> the same struct (vhost_virtqueue) is being used for split vqs and packed vqs. This
> is possible since "vhost_virtqueue" [2] uses a union to wrap around the split and
> packed versions of the vq.
>
Ok, now I get you better. Let me start again from a different angle :).
vhost_vring_addr is already part of the API that QEMU uses between
itself and vhost devices: vhost-kernel, vhost-user and vhost-vdpa.
Making non-backward-compatible changes to it is impossible, as it
would involve changes in all of these elements.
QEMU and DPDK, using vhost-user, already send and receive packed
virtqueue addresses using the current structure layout. QEMU's
hw/virtio/vhost.c:vhost_virtqueue_set_addr already sets vq->desc,
vq->avail and vq->used, which hold the values of the desc, driver and
device areas. In that sense, I recommend not modifying it.
On the other hand, DPDK's vhost_virtqueue is not the same struct as
vhost_vring_addr. It is internal to DPDK so it can be modified. We
need to do something similar for the SVQ, yes.
To do that union trick piece by piece in VhostShadowVirtqueue is
possible, but it requires modifying all the usages of the current
vring. I think it is easier for us to follow the kernel's
virtio_ring.c model, as it is a driver too, and create a vring_packed.
We can create an anonymous union and suffix all the members with _packed
so we don't need to modify the current split usage.
Let me know what you think.
> > > My idea is to have a generic "vhost_vring_addr" structure that
> > > wraps around split and packed vq specific structures, rather
> > > than using them directly in if-else conditions wherever the
> > > vhost-vdpa functions require their usage. However, this will
> > > involve checking their impact in several other places where this
> > > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > > "libvhost-user").
>
> I was referring to something similar by this. The current "vhost_vring_addr"
> can be renamed to "vhost_vring_addr_split" for example, and a new struct
> "vhost_vring_addr" can wrap around this and "vhost_vring_addr_packed"
> using a union.
>
> I liked the idea of using three unions for each member in the virtqueue format
> instead of having one union for the whole format. I didn't think of this. I think
> by having three unions the "vhost_svq_get_vring_addr" [3] function won't have
> to be split into two new functions to handle split and packed formats separately.
> I am not sure if this is what you were referring to.
>
But you need the if/else anyway, as the members are not vring_desc_t
but vring_packed_desc, and same with vring_avail_t and vring_used_t
with struct vring_packed_desc_event. Or am I missing something?
Thanks!
* Re: [RFC v2 3/3] vhost: Allocate memory for packed vring.
2024-07-26 14:28 ` Eugenio Perez Martin
@ 2024-07-28 13:41 ` Sahil
0 siblings, 0 replies; 14+ messages in thread
From: Sahil @ 2024-07-28 13:41 UTC (permalink / raw)
To: Eugenio Perez Martin; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
Hi,
On Friday, July 26, 2024 7:58:49 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Jul 26, 2024 at 11:59 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > [...]
> > @@ -759,19 +780,34 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> > svq->vq = vq;
> > svq->iova_tree = iova_tree;
> >
> > - svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> > - svq->num_free = svq->vring.num;
> > - svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> > - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > - -1, 0);
> > - desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > - svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> > - svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> > - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > - -1, 0);
> > - svq->desc_state = g_new0(SVQDescState, svq->vring.num);
> > - svq->desc_next = g_new0(uint16_t, svq->vring.num);
> > - for (unsigned i = 0; i < svq->vring.num - 1; i++) {
> > + if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> > + svq->is_packed = true;
> > + svq->vring_packed.vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> > + svq->num_free = svq->vring_packed.vring.num;
> > + svq->vring_packed.vring.desc = mmap(NULL, vhost_svq_memory_packed(svq),
> > + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > + -1, 0);
> > + desc_size = sizeof(struct vring_packed_desc) * svq->vring.num;
> > + svq->vring_packed.vring.driver = (void *)((char *)svq->vring_packed.vring.desc + desc_size);
> (Expanding on the cover letter comment) Here the driver area is
> aligned properly too as each descriptor is 16 bytes length, and
> required alignment for the driver area is 4 bytes by VirtIO standard.
Ok, this makes sense now.
> > + svq->vring_packed.vring.device = (void *)((char *)svq->vring_packed.vring.driver +
> > + sizeof(struct vring_packed_desc_event));
> > + } else {
> > + svq->is_packed = false;
> > + svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> > + svq->num_free = svq->vring.num;
>
> Nitpicks,
>
> The variables is_packed and num_free can be merged out of the if/else
> to reduce code duplication.
>
> Also it is clearer to me to assign svq->is_packed =
> virtio_vdev_has_feature(...) and then check the variable in the if.
> But other parts of QEMU do as you do here so I don't have a strong
> opinion.
I think your suggestion will reduce code duplication. I'll change this.
> > [...]
> > @@ -146,10 +149,13 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq, size_t num);
> >
> > void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
> > void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd);
> > -void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
> > - struct vhost_vring_addr *addr);
> > +void vhost_svq_get_vring_addr_split(const VhostShadowVirtqueue *svq,
> > + struct vhost_vring_addr *addr);
> > +void vhost_svq_get_vring_addr_packed(const VhostShadowVirtqueue *svq,
> > + struct vhost_vring_addr *addr);
> >
> > size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq);
> > size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
> >
> > +size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq);
>
> Ok now I get the question on the cover letter better,
>
> It is ok to reuse the already present functions, no need to create new
> ones to export in this header.
I think "vhost_svq_memory_packed" will be required, while the others can be
removed.
Thanks,
Sahil
* Re: [RFC v2 0/3] Add packed virtqueue to shadow virtqueue
2024-07-26 18:25 ` Eugenio Perez Martin
@ 2024-07-28 16:42 ` Sahil
0 siblings, 0 replies; 14+ messages in thread
From: Sahil @ 2024-07-28 16:42 UTC (permalink / raw)
To: Eugenio Perez Martin; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
Hi,
On Friday, July 26, 2024 11:55:14 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Jul 26, 2024 at 7:11 PM Sahil <icegambit91@gmail.com> wrote:
> > [...]
> > > > Q2.
> > > > I see that parts of the "vhost-vdpa" implementation are based on
> > > > the assumption that SVQ uses the split vq format. For example,
> > > > "vhost_vdpa_svq_map_rings" [3], calls "vhost_svq_device_area_size"
> > > > which is specific to split vqs. The "vhost_vring_addr" [4] struct
> > > > is also specific to split vqs.
> > > >
> > > > My idea is to have a generic "vhost_vring_addr" structure that
> > > > wraps around split and packed vq specific structures, rather
> > > > than using them directly in if-else conditions wherever the
> > > > vhost-vdpa functions require their usage. However, this will
> > > > involve checking their impact in several other places where this
> > > > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > > > "libvhost-user").
> > >
> > > Ok I've just found this is under-documented actually :).
> > >
> > > As you mention, vhost-user is already using this same struct for
> > > packed vqs [2], just translating the driver area from the avail vring
> > > and the device area from the used vring. So the best option is to
> > > stick with that, unless I'm missing something.
> > >
> > >
> > > [1] https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html
> > > [2]
> > > https://github.com/DPDK/dpdk/blob/82c47f005b9a0a1e3a649664b7713443d18abe
> > > 43/
> > > lib/vhost/vhost_user.c#L841C1-L841C25
> >
> > Sorry, I am a little confused here. I was referring to QEMU's vhost-user
> > implementation here.
> >
> > Based on what I have understood, "vhost_vring_addr" is only being used
> > for split vqs in QEMU's vhost-user and in other places too. The
> > implementation does not take into account packed vqs.
> >
> > I was going through DPDK's source. In DPDK's implementation of vhost-user
> > [1], the same struct (vhost_virtqueue) is being used for split vqs and
> > packed vqs. This is possible since "vhost_virtqueue" [2] uses a union to
> > wrap around the split and packed versions of the vq.
>
> Ok, now I get you better. Let me start again from a different angle :).
>
> vhost_vring_addr is already part of the API that QEMU uses between
> itself and vhost devices, all vhost-kernel, vhost-user and vhost-vdpa.
> To make non-backward compatible changes to it is impossible, as it
> involves changes in all of these elements.
>
> QEMU and DPDK, using vhost-user, already send and receive packed
> virtqueues addresses using the current structure layout. QEMU's
> hw/virtio/vhost.c:vhost_virtqueue_set_addr already sets vq->desc,
> vq->avail and vq->used, which hold the values of the desc, driver and
> device. In that sense, I recommend not to modify it.
>
> On the other hand, DPDK's vhost_virtqueue is not the same struct as
> vhost_vring_addr. It is internal to DPDK so it can be modified. We
> need to do something similar for the SVQ, yes.
>
> To do that union trick piece by piece in VhostShadowVirtqueue is
> possible, but it requires modifying all the usages of the current
> vring. I think it is easier for us to follow the kernel's
> virtio_ring.c model, as it is a driver too, and create a vring_packed.
> We can create an anonymous union and suffix all members with a _packed
> so we don't need to modify current split usage.
>
> Let me know what you think.
Thank you for the detailed explanation. This makes sense to me now.
Since the three *_addr members (not counting log_guest_addr) in
"vhost_vring_addr" simply store addresses, a union is not required
here and these members can be reused in the case of packed format
as well.
> > > > My idea is to have a generic "vhost_vring_addr" structure that
> > > > wraps around split and packed vq specific structures, rather
> > > > than using them directly in if-else conditions wherever the
> > > > vhost-vdpa functions require their usage. However, this will
> > > > involve checking their impact in several other places where this
> > > > struct is currently being used (eg.: "vhost-user", "vhost-backend",
> > > > "libvhost-user").
> >
> > I was referring to something similar by this. The current
> > "vhost_vring_addr" can be renamed to "vhost_vring_addr_split" for
> > example, and a new struct "vhost_vring_addr" can wrap around this and
> > "vhost_vring_addr_packed" using a union.
> >
> > I liked the idea of using three unions for each member in the virtqueue
> > format instead of having one union for the whole format. I didn't think
> > of this. I think by having three unions the "vhost_svq_get_vring_addr"
> > [3] function won't have to be split into two new functions to handle
> > split and packed formats separately. I am not sure if this is what you
> > were referring to.
>
> But you need the if/else anyway, as the members are not vring_desc_t
> but vring_packed_desc, and same with vring_avail_t and vring_used_t
> with struct vring_packed_desc_event. Or am I missing something?
I was referring to having a union for each member in "vhost_vring_addr".
But based on your explanation above I don't think this would be required
either.
Thanks,
Sahil
* Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
2024-07-26 13:48 ` Eugenio Perez Martin
@ 2024-07-28 17:37 ` Sahil
2024-07-29 8:21 ` Eugenio Perez Martin
0 siblings, 1 reply; 14+ messages in thread
From: Sahil @ 2024-07-28 17:37 UTC (permalink / raw)
To: Eugenio Perez Martin; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
Hi,
On Friday, July 26, 2024 7:18:28 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > This is the first patch in a series to add support for packed
> > virtqueues in vhost_shadow_virtqueue. This patch implements the
> > insertion of available buffers in the descriptor area. It takes
> > into account descriptor chains, but does not consider indirect
> > descriptors.
> >
> > Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> > ---
> > Changes v1 -> v2:
> > * Split commit from RFC v1 into two commits.
> > * vhost-shadow-virtqueue.c
> >
> > (vhost_svq_add_packed):
> > - Merge with "vhost_svq_vring_write_descs_packed()"
> > - Remove "num == 0" check
> >
> > hw/virtio/vhost-shadow-virtqueue.c | 93 +++++++++++++++++++++++++++++-
> > 1 file changed, 92 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c
> > b/hw/virtio/vhost-shadow-virtqueue.c index fc5f408f77..c7b7e0c477 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -217,6 +217,91 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
> > return true;
> >
> > }
> >
> > +static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> > + const struct iovec *out_sg, size_t out_num,
> > + const struct iovec *in_sg, size_t in_num,
> > + unsigned *head)
> > +{
> > + bool ok;
> > + uint16_t head_flags = 0;
> > + g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
> > +
> > + *head = svq->vring_packed.next_avail_idx;
> > +
> > + /* We need some descriptors here */
> > + if (unlikely(!out_num && !in_num)) {
> > + qemu_log_mask(LOG_GUEST_ERROR,
> > + "Guest provided element with no descriptors");
> > + return false;
> > + }
> > +
> > + uint16_t id, curr, i;
> > + unsigned n;
> > + struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> > +
> > + i = *head;
> > + id = svq->free_head;
> > + curr = id;
> > +
> > + size_t num = out_num + in_num;
> > +
> > + ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
> > + if (unlikely(!ok)) {
> > + return false;
> > + }
> > +
> > + ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
> > + if (unlikely(!ok)) {
> > + return false;
> > + }
> > +
>
> (sorry I missed this from the RFC v1) I think all of the above should
> be in the caller, isn't it? It is duplicated with split.
I don't think this will be straightforward. While they perform the same logical
step in both cases, their implementations are a little different. For example, the
"sgs" pointer is created a little differently in the two cases. The parameters to
"vhost_svq_translate_addr" are also a little different. I think if they are moved to
the caller, they will end up in both "svq->is_packed" branches (in "vhost_svq_add").
> Also, declarations should be at the beginning of blocks per QEMU
> coding style [1].
Sorry, I missed this. I'll rectify this.
Thanks,
Sahil
* Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
2024-07-28 17:37 ` Sahil
@ 2024-07-29 8:21 ` Eugenio Perez Martin
2024-08-02 11:26 ` Sahil
0 siblings, 1 reply; 14+ messages in thread
From: Eugenio Perez Martin @ 2024-07-29 8:21 UTC (permalink / raw)
To: Sahil; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
On Sun, Jul 28, 2024 at 7:37 PM Sahil <icegambit91@gmail.com> wrote:
>
> Hi,
>
> On Friday, July 26, 2024 7:18:28 PM GMT+5:30 Eugenio Perez Martin wrote:
> > On Fri, Jul 26, 2024 at 11:58 AM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > > This is the first patch in a series to add support for packed
> > > virtqueues in vhost_shadow_virtqueue. This patch implements the
> > > insertion of available buffers in the descriptor area. It takes
> > > into account descriptor chains, but does not consider indirect
> > > descriptors.
> > >
> > > Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> > > ---
> > > Changes v1 -> v2:
> > > * Split commit from RFC v1 into two commits.
> > > * vhost-shadow-virtqueue.c
> > >
> > > (vhost_svq_add_packed):
> > > - Merge with "vhost_svq_vring_write_descs_packed()"
> > > - Remove "num == 0" check
> > >
> > > hw/virtio/vhost-shadow-virtqueue.c | 93 +++++++++++++++++++++++++++++-
> > > 1 file changed, 92 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/virtio/vhost-shadow-virtqueue.c
> > > b/hw/virtio/vhost-shadow-virtqueue.c index fc5f408f77..c7b7e0c477 100644
> > > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > @@ -217,6 +217,91 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
> > > return true;
> > >
> > > }
> > >
> > > +static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> > > + const struct iovec *out_sg, size_t out_num,
> > > + const struct iovec *in_sg, size_t in_num,
> > > + unsigned *head)
> > > +{
> > > + bool ok;
> > > + uint16_t head_flags = 0;
> > > + g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
> > > +
> > > + *head = svq->vring_packed.next_avail_idx;
> > > +
> > > + /* We need some descriptors here */
> > > + if (unlikely(!out_num && !in_num)) {
> > > + qemu_log_mask(LOG_GUEST_ERROR,
> > > + "Guest provided element with no descriptors");
> > > + return false;
> > > + }
> > > +
> > > + uint16_t id, curr, i;
> > > + unsigned n;
> > > + struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> > > +
> > > + i = *head;
> > > + id = svq->free_head;
> > > + curr = id;
> > > +
> > > + size_t num = out_num + in_num;
> > > +
> > > + ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
> > > + if (unlikely(!ok)) {
> > > + return false;
> > > + }
> > > +
> > > + ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
> > > + if (unlikely(!ok)) {
> > > + return false;
> > > + }
> > > +
> >
> > (sorry I missed this from the RFC v1) I think all of the above should
> > be in the caller, isn't it? It is duplicated with split.
>
> I don't think this will be straightforward. While they perform the same logical
> step in both cases, their implementation is a little different. For example, the
> "sgs" pointer is created a little differently in both cases.
Do you mean because MAX() vs in_num+out_num? It is ok to convert both
to the latter.
> The parameters to
> "vhost_svq_translate_addr" are also a little different. I think if they are moved to
> the caller, they will be in both "svq->is_packed" branches (in "vhost_svq_add").
>
I don't see any difference apart from calling it with the in and out sgs
separately versus calling it for the whole array; am I missing something?
> > Also, declarations should be at the beginning of blocks per QEMU
> > coding style [1].
>
> Sorry, I missed this. I'll rectify this.
>
No worries!
You can run scripts/checkpatch.pl in QEMU for the next series; it
should catch many of these small issues.
Thanks!
* Re: [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements
2024-07-29 8:21 ` Eugenio Perez Martin
@ 2024-08-02 11:26 ` Sahil
0 siblings, 0 replies; 14+ messages in thread
From: Sahil @ 2024-08-02 11:26 UTC (permalink / raw)
To: Eugenio Perez Martin; +Cc: sgarzare, mst, qemu-devel, Sahil Siddiq
Hi,
On Monday, July 29, 2024 1:51:27 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Sun, Jul 28, 2024 at 7:37 PM Sahil <icegambit91@gmail.com> wrote:
> > [...]
> > > > +static bool vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> > > > + const struct iovec *out_sg, size_t out_num,
> > > > + const struct iovec *in_sg, size_t in_num,
> > > > + unsigned *head)
> > > > +{
> > > > + bool ok;
> > > > + uint16_t head_flags = 0;
> > > > + g_autofree hwaddr *sgs = g_new(hwaddr, out_num + in_num);
> > > > +
> > > > + *head = svq->vring_packed.next_avail_idx;
> > > > +
> > > > + /* We need some descriptors here */
> > > > + if (unlikely(!out_num && !in_num)) {
> > > > + qemu_log_mask(LOG_GUEST_ERROR,
> > > > + "Guest provided element with no descriptors");
> > > > + return false;
> > > > + }
> > > > +
> > > > + uint16_t id, curr, i;
> > > > + unsigned n;
> > > > + struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> > > > +
> > > > + i = *head;
> > > > + id = svq->free_head;
> > > > + curr = id;
> > > > +
> > > > + size_t num = out_num + in_num;
> > > > +
> > > > + ok = vhost_svq_translate_addr(svq, sgs, out_sg, out_num);
> > > > + if (unlikely(!ok)) {
> > > > + return false;
> > > > + }
> > > > +
> > > > + ok = vhost_svq_translate_addr(svq, sgs + out_num, in_sg, in_num);
> > > > + if (unlikely(!ok)) {
> > > > + return false;
> > > > + }
> > > > +
> > >
> > > (sorry I missed this from the RFC v1) I think all of the above should
> > > be in the caller, isn't it? It is duplicated with split.
> >
> > I don't think this will be straightforward. While they perform the same
> > logical step in both cases, their implementation is a little different.
> > For example, the "sgs" pointer is created a little differently in both
> > cases.
>
> Do you mean because MAX() vs in_num+out_num? It is ok to convert both
> to the latter.
>
> > The parameters to
> > "vhost_svq_translate_addr" are also a little different. I think if they are
> > moved to the caller, they will be in both "svq->is_packed" branches (in
> > "vhost_svq_add").
> I don't see any difference apart from calling it with in and out sgs
> separately or calling it for all of the array, am I missing something?
>
I tried refactoring this and have sent a new patch series. Please let me
know if I have missed something.
Thanks,
Sahil
Thread overview: 14+ messages
2024-07-26 9:58 [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 1/3] vhost: Introduce packed vq and add buffer elements Sahil Siddiq
2024-07-26 13:48 ` Eugenio Perez Martin
2024-07-28 17:37 ` Sahil
2024-07-29 8:21 ` Eugenio Perez Martin
2024-08-02 11:26 ` Sahil
2024-07-26 9:58 ` [RFC v2 2/3] vhost: Data structure changes to support packed vqs Sahil Siddiq
2024-07-26 9:58 ` [RFC v2 3/3] vhost: Allocate memory for packed vring Sahil Siddiq
2024-07-26 14:28 ` Eugenio Perez Martin
2024-07-28 13:41 ` Sahil
2024-07-26 13:40 ` [RFC v2 0/3] Add packed virtqueue to shadow virtqueue Eugenio Perez Martin
2024-07-26 17:11 ` Sahil
2024-07-26 18:25 ` Eugenio Perez Martin
2024-07-28 16:42 ` Sahil