From: Wei Xu <wexu@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: qemu-devel@nongnu.org, tiwei.bie@intel.com, mst@redhat.com,
jfreiman@redhat.com, maxime.coquelin@redhat.com
Subject: Re: [Qemu-devel] [PATCH v4 10/11] virtio: migration support for packed ring
Date: Tue, 19 Feb 2019 19:00:03 +0800
Message-ID: <20190219110003.GE15343@wei-ubt>
In-Reply-To: <af2abdf0-3f96-6e97-d5c0-5ab2211222e1@redhat.com>
On Tue, Feb 19, 2019 at 03:30:41PM +0800, Jason Wang wrote:
>
> On 2019/2/14 12:26 PM, wexu@redhat.com wrote:
> >From: Wei Xu <wexu@redhat.com>
> >
> >Both userspace and vhost-net/user are supported with this patch.
> >
> >A new subsection is introduced for packed ring; only 'last_avail_idx'
> >and 'last_avail_wrap_counter' are saved/loaded, on the assumption that
> >the other relevant data (inuse, used/avail index and wrap count)
> >are the same.
>
>
> This is probably only true for net device, see comment in virtio_load():
>
>     /*
>      * Some devices migrate VirtQueueElements that have been popped
>      * from the avail ring but not yet returned to the used ring.
>      * Since max ring size < UINT16_MAX it's safe to use modulo
>      * UINT16_MAX + 1 subtraction.
>      */
>     vdev->vq[i].inuse = (uint16_t)(vdev->vq[i].last_avail_idx -
>                                    vdev->vq[i].used_idx);
>
>
> So you need to migrate used_idx and used_wrap_counter since we don't have
> used idx.
This is trying to align with vhost-net/user as we discussed. Since all we
have done so far is support the virtio-net device for packed ring, maybe
we can consider supporting other devices after this has been verified.
>
>
> >
> >Signed-off-by: Wei Xu <wexu@redhat.com>
> >---
> > hw/virtio/virtio.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 66 insertions(+), 3 deletions(-)
> >
> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> >index 8cfc7b6..7c5de07 100644
> >--- a/hw/virtio/virtio.c
> >+++ b/hw/virtio/virtio.c
> >@@ -2349,6 +2349,13 @@ static bool virtio_virtqueue_needed(void *opaque)
> > return virtio_host_has_feature(vdev, VIRTIO_F_VERSION_1);
> > }
> >+static bool virtio_packed_virtqueue_needed(void *opaque)
> >+{
> >+ VirtIODevice *vdev = opaque;
> >+
> >+ return virtio_host_has_feature(vdev, VIRTIO_F_RING_PACKED);
> >+}
> >+
> > static bool virtio_ringsize_needed(void *opaque)
> > {
> > VirtIODevice *vdev = opaque;
> >@@ -2390,6 +2397,17 @@ static const VMStateDescription vmstate_virtqueue = {
> > }
> > };
> >+static const VMStateDescription vmstate_packed_virtqueue = {
> >+ .name = "packed_virtqueue_state",
> >+ .version_id = 1,
> >+ .minimum_version_id = 1,
> >+ .fields = (VMStateField[]) {
> >+ VMSTATE_UINT16(last_avail_idx, struct VirtQueue),
> >+ VMSTATE_BOOL(last_avail_wrap_counter, struct VirtQueue),
> >+ VMSTATE_END_OF_LIST()
> >+ }
> >+};
> >+
> > static const VMStateDescription vmstate_virtio_virtqueues = {
> > .name = "virtio/virtqueues",
> > .version_id = 1,
> >@@ -2402,6 +2420,18 @@ static const VMStateDescription vmstate_virtio_virtqueues = {
> > }
> > };
> >+static const VMStateDescription vmstate_virtio_packed_virtqueues = {
> >+ .name = "virtio/packed_virtqueues",
> >+ .version_id = 1,
> >+ .minimum_version_id = 1,
> >+ .needed = &virtio_packed_virtqueue_needed,
> >+ .fields = (VMStateField[]) {
> >+ VMSTATE_STRUCT_VARRAY_POINTER_KNOWN(vq, struct VirtIODevice,
> >+ VIRTIO_QUEUE_MAX, 0, vmstate_packed_virtqueue, VirtQueue),
> >+ VMSTATE_END_OF_LIST()
> >+ }
> >+};
> >+
> > static const VMStateDescription vmstate_ringsize = {
> > .name = "ringsize_state",
> > .version_id = 1,
> >@@ -2522,6 +2552,7 @@ static const VMStateDescription vmstate_virtio = {
> > &vmstate_virtio_ringsize,
> > &vmstate_virtio_broken,
> > &vmstate_virtio_extra_state,
> >+ &vmstate_virtio_packed_virtqueues,
> > NULL
> > }
> > };
> >@@ -2794,6 +2825,17 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
> > virtio_queue_update_rings(vdev, i);
> > }
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ vdev->vq[i].shadow_avail_idx = vdev->vq[i].last_avail_idx;
> >+ vdev->vq[i].avail_wrap_counter =
> >+ vdev->vq[i].last_avail_wrap_counter;
> >+
> >+ vdev->vq[i].used_idx = vdev->vq[i].last_avail_idx;
> >+ vdev->vq[i].used_wrap_counter =
> >+ vdev->vq[i].last_avail_wrap_counter;
> >+ continue;
> >+ }
> >+
> > nheads = vring_avail_idx(&vdev->vq[i]) - vdev->vq[i].last_avail_idx;
> > /* Check it isn't doing strange things with descriptor numbers. */
> > if (nheads > vdev->vq[i].vring.num) {
> >@@ -2955,17 +2997,34 @@ hwaddr virtio_queue_get_used_size(VirtIODevice *vdev, int n)
> > uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n)
> > {
> >- return vdev->vq[n].last_avail_idx;
> >+ uint16_t idx;
> >+
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ idx = vdev->vq[n].last_avail_idx;
> >+ idx |= ((int)vdev->vq[n].avail_wrap_counter) << 15;
> >+ } else {
> >+ idx = (int)vdev->vq[n].last_avail_idx;
> >+ }
> >+ return idx;
> > }
> > void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
> > {
> >- vdev->vq[n].last_avail_idx = idx;
> >- vdev->vq[n].shadow_avail_idx = idx;
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ vdev->vq[n].last_avail_idx = idx & 0x7fff;
> >+ vdev->vq[n].avail_wrap_counter = !!(idx & 0x8000);
>
>
> Let's define some macros for those magic number.
OK.
>
>
> >+ } else {
> >+ vdev->vq[n].last_avail_idx = idx;
> >+ vdev->vq[n].shadow_avail_idx = idx;
> >+ }
> > }
> > void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n)
> > {
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ return;
> >+ }
>
>
> Why doesn't packed ring care about this?
As elaborated above, the used idx/wrap_counter are supposed to be the
same as the avail ones for vhost-net/user.
>
>
> >+
> > rcu_read_lock();
> > if (vdev->vq[n].vring.desc) {
> > vdev->vq[n].last_avail_idx = vring_used_idx(&vdev->vq[n]);
> >@@ -2976,6 +3035,10 @@ void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n)
> > void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
> > {
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ return;
> >+ }
>
>
> And this?
Same as above.
Wei
>
> Thanks
>
>
> >+
> > rcu_read_lock();
> > if (vdev->vq[n].vring.desc) {
> > vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);