From: Lei Yang <leiyang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: Hawkins Jiawei <yin31149@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	18801353760@163.com, qemu-devel@nongnu.org,
	qemu-stable@nongnu.org
Subject: Re: [PATCH RESEND] vhost: fix possible wrap in SVQ descriptor ring
Date: Wed, 10 May 2023 23:27:14 +0800	[thread overview]
Message-ID: <CAPpAL=xwMPQbC0HTOyQeBDdFiwR7SO0VT=UoZe-cWxXC7LVpYA@mail.gmail.com> (raw)
In-Reply-To: <CAPpAL=yWNtKG4j1395=FipdW1YAfoJ2N_CBAPQtE5QadcUOyTQ@mail.gmail.com>


QE applied this patch and ran sanity testing on vhost-vdpa; no regressions
were found.

Tested-by: Lei Yang <leiyang@redhat.com>


On Wed, May 10, 2023 at 9:32 AM Lei Yang <leiyang@redhat.com> wrote:

> QE applied this patch and ran sanity testing on vhost-net; no regressions
> were found.
>
> Tested-by: Lei Yang <leiyang@redhat.com>
>
>
>
> On Tue, May 9, 2023 at 1:28 AM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >
> > On Sat, May 6, 2023 at 5:01 PM Hawkins Jiawei <yin31149@gmail.com> wrote:
> > >
> > > QEMU invokes vhost_svq_add() when adding a guest's element into SVQ.
> > > In vhost_svq_add(), it uses vhost_svq_available_slots() to check
> > > whether QEMU can add the element into the SVQ. If there is
> > > enough space, then QEMU combines some out descriptors and
> > > some in descriptors into one descriptor chain, and adds it into
> > > svq->vring.desc by vhost_svq_vring_write_descs().
> > >
> > > Yet the problem is that `svq->shadow_avail_idx - svq->shadow_used_idx`
> > > in vhost_svq_available_slots() returns the number of occupied elements,
> > > i.e. the number of descriptor chains, instead of the number of occupied
> > > descriptors, which may cause wrapping in the SVQ descriptor ring.
> > >
> > > Here is an example. In vhost_handle_guest_kick(), QEMU forwards as
> > > many available buffers as possible to the device by virtqueue_pop()
> > > and vhost_svq_add_element(). virtqueue_pop() returns a guest's element,
> > > and vhost_svq_add_element(), a wrapper to vhost_svq_add(), adds this
> > > element into SVQ. If QEMU invokes virtqueue_pop() and
> > > vhost_svq_add_element() `svq->vring.num` times,
> > > vhost_svq_available_slots() thinks QEMU just ran out of slots and
> > > everything should work fine. But in fact, virtqueue_pop() returns
> > > `svq->vring.num` elements, i.e. descriptor chains, which may occupy
> > > more than `svq->vring.num` descriptors due to guest memory
> > > fragmentation, and this causes wrapping in the SVQ descriptor ring.
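
To make the mismatch concrete, here is a hypothetical worked example; the
numbers below are illustrative and not taken from the patch:

    /* Assume svq->vring.num == 256 and guest memory fragmentation splits
     * every buffer into a 2-descriptor chain. Each chain occupies two
     * ring entries but advances shadow_avail_idx by only one.
     */
    uint16_t vring_num  = 256;                /* svq->vring.num */
    uint16_t chains     = 128;                /* virtqueue_pop() calls so far */
    uint16_t descs_used = chains * 2;         /* 256: the ring is already full */
    uint16_t old_slots  = vring_num - chains; /* 128: what the old code reports */

The old vhost_svq_available_slots() still reports 128 free slots here, so
the next chain is written over in-flight descriptors and the ring wraps.
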
> > >
> >
> > The bug is valid even before marking the descriptors used. If the
> > guest memory is fragmented, SVQ must add descriptor chains, so it can
> > end up trying to add more descriptors than the ring actually has.
> >
> > > Therefore, this patch adds a `num_free` field to the
> > > VhostShadowVirtqueue structure and updates it in vhost_svq_add() and
> > > vhost_svq_get_buf() to record the number of free descriptors.
> > > Then we can avoid wrapping the SVQ descriptor ring by refactoring
> > > vhost_svq_available_slots().
> > >
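
Continuing the illustrative numbers above, a sketch of how the `num_free`
bookkeeping avoids the wrap (same hypothetical assumptions as before, not
code added on top of the patch):

    uint16_t num_free = 256;  /* initialised to svq->vring.num in vhost_svq_start() */
    num_free -= 2 * 128;      /* vhost_svq_add() subtracts ndescs per chain */
    /* vhost_svq_available_slots() now returns num_free == 0, so the caller
     * sees the queue as full and a 129th chain is not added, instead of
     * wrapping the ring; num_free grows back in vhost_svq_get_buf() as the
     * device consumes chains.
     */
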
> > > Fixes: 100890f7ca ("vhost: Shadow virtqueue buffers forwarding")
> > > Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
> > > ---
> > >  hw/virtio/vhost-shadow-virtqueue.c | 9 ++++++++-
> > >  hw/virtio/vhost-shadow-virtqueue.h | 3 +++
> > >  2 files changed, 11 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > > index 8361e70d1b..e1c6952b10 100644
> > > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > @@ -68,7 +68,7 @@ bool vhost_svq_valid_features(uint64_t features, Error **errp)
> > >   */
> > >  static uint16_t vhost_svq_available_slots(const VhostShadowVirtqueue *svq)
> > >  {
> > > -    return svq->vring.num - (svq->shadow_avail_idx - svq->shadow_used_idx);
> > > +    return svq->num_free;
> > >  }
> > >
> > >  /**
> > > @@ -263,6 +263,9 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> > >          return -EINVAL;
> > >      }
> > >
> > > +    /* Update the size of SVQ vring free descriptors */
> > > +    svq->num_free -= ndescs;
> > > +
> > >      svq->desc_state[qemu_head].elem = elem;
> > >      svq->desc_state[qemu_head].ndescs = ndescs;
> > >      vhost_svq_kick(svq);
> > > @@ -450,6 +453,9 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
> > >      svq->desc_next[last_used_chain] = svq->free_head;
> > >      svq->free_head = used_elem.id;
> > >
> > > +    /* Update the size of SVQ vring free descriptors */
> >
> > No need for this comment.
> >
> > Apart from that,
> >
> > Acked-by: Eugenio Pérez <eperezma@redhat.com>
> >
> > > +    svq->num_free += num;
> > > +
> > >      *len = used_elem.len;
> > >      return g_steal_pointer(&svq->desc_state[used_elem.id].elem);
> > >  }
> > > @@ -659,6 +665,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> > >      svq->iova_tree = iova_tree;
> > >
> > >      svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> > > +    svq->num_free = svq->vring.num;
> > >      driver_size = vhost_svq_driver_area_size(svq);
> > >      device_size = vhost_svq_device_area_size(svq);
> > >      svq->vring.desc = qemu_memalign(qemu_real_host_page_size(), driver_size);
> > > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > > index 926a4897b1..6efe051a70 100644
> > > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > > @@ -107,6 +107,9 @@ typedef struct VhostShadowVirtqueue {
> > >
> > >      /* Next head to consume from the device */
> > >      uint16_t last_used_idx;
> > > +
> > > +    /* Size of SVQ vring free descriptors */
> > > +    uint16_t num_free;
> > >  } VhostShadowVirtqueue;
> > >
> > >  bool vhost_svq_valid_features(uint64_t features, Error **errp);
> > > --
> > > 2.25.1
> > >
> >
> >
>

Thread overview: 5+ messages
2023-05-06 15:01 [PATCH RESEND] vhost: fix possible wrap in SVQ descriptor ring Hawkins Jiawei
2023-05-08 17:26 ` Eugenio Perez Martin
2023-05-09  6:55   ` Hawkins Jiawei
2023-05-10  1:32   ` Lei Yang
2023-05-10 15:27     ` Lei Yang [this message]
