From: Sahil Siddiq <icegambit91@gmail.com>
To: eperezma@redhat.com, sgarzare@redhat.com
Cc: mst@redhat.com, qemu-devel@nongnu.org, sahilcdq@proton.me
Subject: Re: [RFC v5 3/7] vhost: Forward descriptors to device via packed SVQ
Date: Mon, 24 Mar 2025 19:44:22 +0530
Message-ID: <a878b49b-0d00-483a-a5b6-27d048c4ebc7@gmail.com>
In-Reply-To: <20250324135929.74945-4-sahilcdq@proton.me>
Hi,
I have a few queries about this patch.
On 3/24/25 7:29 PM, Sahil Siddiq wrote:
> Implement the insertion of available buffers in the descriptor area of
> packed shadow virtqueues. It takes into account descriptor chains, but
> does not consider indirect descriptors.
>
> Enable the packed SVQ to forward the descriptors to the device.
>
> Signed-off-by: Sahil Siddiq <sahilcdq@proton.me>
> ---
> Changes from v4 -> v5:
> - This was commit #2 in v4. This has been reordered to commit #3
> based on review comments.
> - vhost-shadow-virtqueue.c:
> (vhost_svq_valid_features): Move addition of enums to commit #6
> based on review comments.
> (vhost_svq_add_packed): Set head_idx to buffer id instead of vring's
> index.
> (vhost_svq_kick): Split into vhost_svq_kick_split and
> vhost_svq_kick_packed.
> (vhost_svq_add): Use new vhost_svq_kick_* functions.
>
> hw/virtio/vhost-shadow-virtqueue.c | 117 +++++++++++++++++++++++++++--
> 1 file changed, 112 insertions(+), 5 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 4f74ad402a..6e16cd4bdf 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -193,10 +193,83 @@ static void vhost_svq_add_split(VhostShadowVirtqueue *svq,
>      /* Update the avail index after write the descriptor */
>      smp_wmb();
>      avail->idx = cpu_to_le16(svq->shadow_avail_idx);
> +}
> +
> +/**
> + * Write descriptors to SVQ packed vring
> + *
> + * @svq: The shadow virtqueue
> + * @out_sg: The iovec to the guest
> + * @out_num: Outgoing iovec length
> + * @in_sg: The iovec from the guest
> + * @in_num: Incoming iovec length
> + * @sgs: Cache for hwaddr
> + * @head: Saves current free_head
> + */
> +static void vhost_svq_add_packed(VhostShadowVirtqueue *svq,
> +                                 const struct iovec *out_sg, size_t out_num,
> +                                 const struct iovec *in_sg, size_t in_num,
> +                                 hwaddr *sgs, unsigned *head)
> +{
> +    uint16_t id, curr, i, head_flags = 0, head_idx;
> +    size_t num = out_num + in_num;
> +    unsigned n;
> +
> +    struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
> +
> +    head_idx = svq->vring_packed.next_avail_idx;
Since "svq->vring_packed.next_avail_idx" is part of QEMU internals and not
stored in guest memory, no endianness conversion is required here, right?
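To make the distinction I have in mind concrete, here is how I currently read
the two kinds of fields (these lines are taken from this patch and only placed
side by side for illustration):

    /* lands in ring memory shared with the device, so it is converted */
    descs[i].addr = cpu_to_le64(sgs[n]);
    /* QEMU-internal bookkeeping, so it stays in host byte order */
    svq->vring_packed.next_avail_idx = i;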
> +    i = head_idx;
> +    id = svq->free_head;
> +    curr = id;
> +    *head = id;
Should *head be the buffer id, or the index in the descriptor ring at which
the first descriptor of the chain is inserted?
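To spell out the two readings I am weighing (variable names as in this patch;
this is just an illustration, not a proposed change):

    *head = id;        /* buffer id, i.e. svq->free_head; the device echoes
                        * this back in the id field of the used descriptor */
    *head = head_idx;  /* position in the ring where the chain's first
                        * descriptor is written, i.e. next_avail_idx on entry */

In the split ring the two coincide (the id in the used element is the head
descriptor index), so the distinction only matters for the packed layout.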
> +    /* Write descriptors to SVQ packed vring */
> +    for (n = 0; n < num; n++) {
> +        uint16_t flags = cpu_to_le16(svq->vring_packed.avail_used_flags |
> +                                     (n < out_num ? 0 : VRING_DESC_F_WRITE) |
> +                                     (n + 1 == num ? 0 : VRING_DESC_F_NEXT));
> +        if (i == head_idx) {
> +            head_flags = flags;
> +        } else {
> +            descs[i].flags = flags;
> +        }
> +
> +        descs[i].addr = cpu_to_le64(sgs[n]);
> +        descs[i].id = id;
> +        if (n < out_num) {
> +            descs[i].len = cpu_to_le32(out_sg[n].iov_len);
> +        } else {
> +            descs[i].len = cpu_to_le32(in_sg[n - out_num].iov_len);
> +        }
> +
> +        curr = cpu_to_le16(svq->desc_next[curr]);
> +
> +        if (++i >= svq->vring_packed.vring.num) {
> +            i = 0;
> +            svq->vring_packed.avail_used_flags ^=
> +                1 << VRING_PACKED_DESC_F_AVAIL |
> +                1 << VRING_PACKED_DESC_F_USED;
> +        }
> +    }
> 
> +    if (i <= head_idx) {
> +        svq->vring_packed.avail_wrap_counter ^= 1;
> +    }
> +
> +    svq->vring_packed.next_avail_idx = i;
> +    svq->shadow_avail_idx = i;
> +    svq->free_head = curr;
> +
> +    /*
> +     * A driver MUST NOT make the first descriptor in the list
> +     * available before all subsequent descriptors comprising
> +     * the list are made available.
> +     */
> +    smp_wmb();
> +    svq->vring_packed.vring.desc[head_idx].flags = head_flags;
>  }
>
> -static void vhost_svq_kick(VhostShadowVirtqueue *svq)
> +static void vhost_svq_kick_split(VhostShadowVirtqueue *svq)
>  {
>      bool needs_kick;
> 
> @@ -209,7 +282,8 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
>      if (virtio_vdev_has_feature(svq->vdev, VIRTIO_RING_F_EVENT_IDX)) {
>          uint16_t avail_event = le16_to_cpu(
>                  *(uint16_t *)(&svq->vring.used->ring[svq->vring.num]));
> -        needs_kick = vring_need_event(avail_event, svq->shadow_avail_idx, svq->shadow_avail_idx - 1);
> +        needs_kick = vring_need_event(avail_event, svq->shadow_avail_idx,
> +                                      svq->shadow_avail_idx - 1);
>      } else {
>          needs_kick =
>              !(svq->vring.used->flags & cpu_to_le16(VRING_USED_F_NO_NOTIFY));
> @@ -222,6 +296,30 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
>      event_notifier_set(&svq->hdev_kick);
>  }
> 
> +static void vhost_svq_kick_packed(VhostShadowVirtqueue *svq)
> +{
> +    bool needs_kick;
> +
> +    /*
> +     * We need to expose the available array entries before checking
> +     * notification suppressions.
> +     */
> +    smp_mb();
> +
> +    if (virtio_vdev_has_feature(svq->vdev, VIRTIO_RING_F_EVENT_IDX)) {
> +        return;
> +    } else {
> +        needs_kick = (svq->vring_packed.vring.device->flags !=
> +                      cpu_to_le16(VRING_PACKED_EVENT_FLAG_DISABLE));
> +    }
> +
> +    if (!needs_kick) {
> +        return;
> +    }
> +
> +    event_notifier_set(&svq->hdev_kick);
> +}
> +
>  /**
>   * Add an element to a SVQ.
>   *
> @@ -258,13 +356,22 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
>          return -EINVAL;
>      }
> 
> -    vhost_svq_add_split(svq, out_sg, out_num, in_sg,
> -                        in_num, sgs, &qemu_head);
> +    if (svq->is_packed) {
> +        vhost_svq_add_packed(svq, out_sg, out_num, in_sg,
> +                             in_num, sgs, &qemu_head);
> +    } else {
> +        vhost_svq_add_split(svq, out_sg, out_num, in_sg,
> +                            in_num, sgs, &qemu_head);
> +    }
> 
>      svq->num_free -= ndescs;
>      svq->desc_state[qemu_head].elem = elem;
>      svq->desc_state[qemu_head].ndescs = ndescs;
The value written to *head in vhost_svq_add_packed() is stored in "qemu_head" here.
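For my own understanding, the flow I am picturing is roughly the following
(both lines are from the patch, shown together only for illustration):

    *head = id;                              /* in vhost_svq_add_packed()  */
    svq->desc_state[qemu_head].elem = elem;  /* in vhost_svq_add(), here   */

i.e. desc_state[] ends up indexed by the buffer id, so the same id has to be
usable later to look the element up again once the device marks the chain as
used. Please correct me if that is not the intention.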
> -    vhost_svq_kick(svq);
> +    if (svq->is_packed) {
> +        vhost_svq_kick_packed(svq);
> +    } else {
> +        vhost_svq_kick_split(svq);
> +    }
>      return 0;
>  }
>
Thanks,
Sahil