qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Cc: Jason Wang <jasowang@redhat.com>,
	qemu-devel@nongnu.org, stefanha@redhat.com,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [RFC 2/2] virtio-net: use post load hook
Date: Fri, 11 Oct 2019 05:51:48 -0400	[thread overview]
Message-ID: <20191011055111-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CALHVEJYqJ+4nvuXb27-1CKkZ=EL0QyEQY_qYRDYGk9kUWdR7kw@mail.gmail.com>

On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> Hi Michael,
> 
> Unfortunately your approach will not work, because the
> VirtIONet::curr_guest_offloads would still be reset in
> virtio_net_set_features:
> --
> if (n->has_vnet_hdr) {
>     n->curr_guest_offloads =
>         virtio_net_guest_offloads_by_features(features);

So let's move that part to the new hook too.

> --
> ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> 
> I.e. although virtio_net_apply_guest_offloads would now be called
> after the virtio_net_set_features, by the time it is called the
> VirtIONet::curr_guest_offloads would be reset to a full list of
> features.
> 
> Regards,
> Mikhail
> 
> Am Do., 10. Okt. 2019 um 20:04 Uhr schrieb Michael S. Tsirkin <mst@redhat.com>:
> >
> > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > command are not preserved on VM migration.
> > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > get enabled.
> > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > and offloads are getting set correctly:
> >
> >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> >      at migration/vmstate.c:168
> >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> >
> > However later on the features are getting restored, and offloads get reset to
> > everything supported by features:
> >
> >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> >
> > Fix this by pushing out offload initialization to the new post load hook.
> >
> > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > ---
> >  hw/net/virtio-net.c | 14 ++++++++++----
> >  1 file changed, 10 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 9f11422337..62fb858e2d 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> >      }
> >
> > -    if (peer_has_vnet_hdr(n)) {
> > -        virtio_net_apply_guest_offloads(n);
> > -    }
> > -
> >      virtio_net_set_queues(n);
> >
> >      /* Find the first multicast entry in the saved MAC filter */
> > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >      return 0;
> >  }
> >
> > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > +{
> > +    if (peer_has_vnet_hdr(n)) {
> > +        virtio_net_apply_guest_offloads(n);
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >  /* tx_waiting field of a VirtIONetQueue */
> >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> >      .name = "virtio-net-queue-tx_waiting",
> > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > +    vdc->post_load = virtio_net_post_load_virtio;
> >      vdc->vmsd = &vmstate_virtio_net_device;
> >  }
> >
> > --
> > MST
> >


  reply	other threads:[~2019-10-11  9:55 UTC|newest]

Thread overview: 12+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-10-10 18:04 [RFC 1/2] virtio: new post_load hook Michael S. Tsirkin
2019-10-10 18:04 ` [RFC 2/2] virtio-net: use post load hook Michael S. Tsirkin
2019-10-11  2:51   ` Jason Wang
2019-10-11  9:46   ` Mikhail Sennikovsky
2019-10-11  9:51     ` Michael S. Tsirkin [this message]
2019-10-11  9:58       ` Mikhail Sennikovsky
2019-10-11 10:08         ` Michael S. Tsirkin
2019-10-11 10:30           ` Mikhail Sennikovsky
2019-10-11 10:34             ` Mikhail Sennikovsky
2019-10-11 12:51               ` Michael S. Tsirkin
2019-10-11 12:34             ` Michael S. Tsirkin
2019-10-11 13:15 ` [RFC 1/2] virtio: new post_load hook Alex Bennée

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20191011055111-mutt-send-email-mst@kernel.org \
    --to=mst@redhat.com \
    --cc=dgilbert@redhat.com \
    --cc=jasowang@redhat.com \
    --cc=mikhail.sennikovskii@cloud.ionos.com \
    --cc=qemu-devel@nongnu.org \
    --cc=stefanha@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).