From: "Michael S. Tsirkin" <mst@redhat.com>
To: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Cc: Jason Wang <jasowang@redhat.com>,
	qemu-devel@nongnu.org, stefanha@redhat.com, dgilbert@redhat.com
Subject: Re: [PATCH v4 2/2] virtio-net: prevent offloads reset on migration
Date: Fri, 11 Oct 2019 10:13:27 -0400	[thread overview]
Message-ID: <20191011101256-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <1570802284-3064-2-git-send-email-mikhail.sennikovskii@cloud.ionos.com>

On Fri, Oct 11, 2019 at 03:58:04PM +0200, Mikhail Sennikovsky wrote:
> Currently, offloads disabled by the guest via the
> VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on VM
> migration. Instead, all offloads reported by the guest features (via
> VIRTIO_PCI_GUEST_FEATURES) get enabled.
> What happens is: first, VirtIONet::curr_guest_offloads is restored and
> the offloads are set correctly:
> 
>  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
>  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
>  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
>      at migration/vmstate.c:168
>  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
>  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> 
> However, later on the features are restored, and the offloads are reset
> to everything supported by those features:
> 
>  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
>  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
>  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
>  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
>  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> 
> Fix this by preserving the state in a new saved_guest_offloads field and
> deferring offload initialization to the new post_load hook.
> 
> Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>

Kind of ugly, but it works:

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

Jason, can you merge this and the previous patch, please?
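
For readers who don't have patch 1/2 ("virtio: new post_load hook") in
front of them: the hook used below is the new vdc->post_load callback
added there. A simplified sketch of the resulting load-time ordering in
hw/virtio/virtio.c — unrelated steps and most error handling are
abbreviated, so treat this as an illustration rather than the exact
code:

/*
 * Sketch of the ordering inside virtio_load() once patch 1/2 is
 * applied.  Names match the series; surrounding details elided.
 */
int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
{
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
    uint64_t features;

    /* ... config space and virtqueue state are loaded here ... */

    /* Step 1: the device vmsd is loaded.  For virtio-net this runs
     * virtio_net_post_load_device(), which now stashes
     * curr_guest_offloads into saved_guest_offloads. */
    if (vdc->vmsd) {
        int ret = vmstate_load_state(f, vdc->vmsd, vdev, version_id);
        if (ret) {
            return ret;
        }
    }

    /* Step 2: guest features are restored.  virtio_net_set_features()
     * resets curr_guest_offloads to everything the features allow --
     * this is the overwrite the patch works around. */
    if (virtio_set_features_nocheck(vdev, features) < 0) {
        return -1;
    }

    /* Step 3 (new in patch 1/2): the transport-level hook runs last.
     * virtio_net_post_load_virtio() restores curr_guest_offloads from
     * saved_guest_offloads and re-applies the offloads. */
    if (vdc->post_load) {
        int ret = vdc->post_load(vdev);
        if (ret) {
            return ret;
        }
    }

    return 0;
}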

> ---
>  hw/net/virtio-net.c            | 27 ++++++++++++++++++++++++---
>  include/hw/virtio/virtio-net.h |  2 ++
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index b9e1cd7..6adb0fe 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -2330,9 +2330,13 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
>      }
>  
> -    if (peer_has_vnet_hdr(n)) {
> -        virtio_net_apply_guest_offloads(n);
> -    }
> +    /*
> +     * curr_guest_offloads will later be overwritten by the
> +     * virtio_set_features_nocheck call made from virtio_load.
> +     * Save it here so that it can be restored in the
> +     * virtio_net_post_load_virtio callback.
> +     */
> +    n->saved_guest_offloads = n->curr_guest_offloads;
>  
>      virtio_net_set_queues(n);
>  
> @@ -2367,6 +2371,22 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>      return 0;
>  }
>  
> +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> +{
> +    VirtIONet *n = VIRTIO_NET(vdev);
> +    /*
> +     * The state we actually need is now in saved_guest_offloads;
> +     * see virtio_net_post_load_device for details.
> +     * Restore it and apply the desired offloads.
> +     */
> +    n->curr_guest_offloads = n->saved_guest_offloads;
> +    if (peer_has_vnet_hdr(n)) {
> +        virtio_net_apply_guest_offloads(n);
> +    }
> +
> +    return 0;
> +}
> +
>  /* tx_waiting field of a VirtIONetQueue */
>  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
>      .name = "virtio-net-queue-tx_waiting",
> @@ -2909,6 +2929,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
>      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
>      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
>      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> +    vdc->post_load = virtio_net_post_load_virtio;
>      vdc->vmsd = &vmstate_virtio_net_device;
>  }
>  
> diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> index b96f0c6..07a9319 100644
> --- a/include/hw/virtio/virtio-net.h
> +++ b/include/hw/virtio/virtio-net.h
> @@ -182,6 +182,8 @@ struct VirtIONet {
>      char *netclient_name;
>      char *netclient_type;
>      uint64_t curr_guest_offloads;
> +    /* used during saved-state restore to preserve curr_guest_offloads */
> +    uint64_t saved_guest_offloads;
>      AnnounceTimer announce_timer;
>      bool needs_vnet_hdr_swap;
>      bool mtu_bypass_backend;
> -- 
> 2.7.4
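
As background (not part of this patch): curr_guest_offloads is normally
set when the guest sends VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET on the
control virtqueue, e.g. after ethtool -K in a Linux guest. A simplified
sketch of the existing handler in hw/net/virtio-net.c, with validation
and error-path details abbreviated:

/*
 * Sketch of the existing VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET handling;
 * this is the state the migration fix above preserves.
 */
static int virtio_net_handle_offloads(VirtIONet *n, uint8_t cmd,
                                      struct iovec *iov,
                                      unsigned int iov_cnt)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(n);
    uint64_t offloads;

    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
        return VIRTIO_NET_ERR;
    }
    if (iov_to_buf(iov, iov_cnt, 0, &offloads, sizeof(offloads)) !=
        sizeof(offloads)) {
        return VIRTIO_NET_ERR;
    }

    if (cmd == VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET) {
        offloads = virtio_ldq_p(vdev, &offloads);
        /* Reject anything beyond what the negotiated features allow. */
        if (offloads & ~virtio_net_supported_guest_offloads(n)) {
            return VIRTIO_NET_ERR;
        }
        /* Record the guest's choice and push it to the backend.
         * Before this fix, this value was lost across migration. */
        n->curr_guest_offloads = offloads;
        virtio_net_apply_guest_offloads(n);
        return VIRTIO_NET_OK;
    }
    return VIRTIO_NET_ERR;
}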

