qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jonah Palmer <jonah.palmer@oracle.com>
Cc: Jason Wang <jasowang@redhat.com>,
	qemu-devel@nongnu.org, peterx@redhat.com, farosas@suse.de,
	eblake@redhat.com, armbru@redhat.com, si-wei.liu@oracle.com,
	eperezma@redhat.com, boris.ostrovsky@oracle.com
Subject: Re: [RFC 0/6] virtio-net: initial iterative live migration support
Date: Fri, 25 Jul 2025 05:33:40 -0400	[thread overview]
Message-ID: <20250725053122-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <0f5b804d-3852-4159-b151-308a57f1ec74@oracle.com>

On Thu, Jul 24, 2025 at 05:59:20PM -0400, Jonah Palmer wrote:
> 
> 
> On 7/23/25 1:51 AM, Jason Wang wrote:
> > On Tue, Jul 22, 2025 at 8:41 PM Jonah Palmer <jonah.palmer@oracle.com> wrote:
> > > 
> > > This series is an RFC initial implementation of iterative live
> > > migration for virtio-net devices.
> > > 
> > > The main motivation behind implementing iterative migration for
> > > virtio-net devices is to start on heavy, time-consuming operations
> > > for the destination while the source is still active (i.e. before
> > > the stop-and-copy phase).
> > 
> > It would be better to explain which kind of operations were heavy and
> > time-consuming and how iterative migration help.
> > 
> 
> You're right. Apologies for being vague here.
> 
> I did do some profiling of the virtio_load call for virtio-net to try and
> narrow down where exactly most of the downtime is coming from during the
> stop-and-copy phase.
> 
> Pretty much the entirety of the downtime comes from the vmstate_load_state
> call for the vmstate_virtio's subsections:
> 
> /* Subsections */
> ret = vmstate_load_state(f, &vmstate_virtio, vdev, 1);
> if (ret) {
>     return ret;
> }
> 
> More specifically, the vmstate_virtio_virtqueues and
> vmstate_virtio_extra_state subsections.
> 
> For example, currently (with no iterative migration), for a virtio-net
> device, the virtio_load call took 13.29ms to finish. 13.20ms of that time
> was spent in vmstate_load_state(f, &vmstate_virtio, vdev, 1).
> 
> Of that 13.21ms, ~6.83ms was spent migrating vmstate_virtio_virtqueues and
> ~6.33ms was spent migrating the vmstate_virtio_extra_state subsections. And
> I believe this is from walking VIRTIO_QUEUE_MAX virtqueues, twice.

Can we optimize it simply by sending a bitmap of used vqs?

> vmstate_load_state virtio-net v11
> vmstate_load_state PCIDevice v2
> vmstate_load_state_end PCIDevice end/0
> vmstate_load_state virtio-net-device v11
> vmstate_load_state virtio-net-queue-tx_waiting v0
> vmstate_load_state_end virtio-net-queue-tx_waiting end/0
> vmstate_load_state virtio-net-vnet v0
> vmstate_load_state_end virtio-net-vnet end/0
> vmstate_load_state virtio-net-ufo v0
> vmstate_load_state_end virtio-net-ufo end/0
> vmstate_load_state virtio-net-tx_waiting v0
> vmstate_load_state virtio-net-queue-tx_waiting v0
> vmstate_load_state_end virtio-net-queue-tx_waiting end/0
> vmstate_load_state virtio-net-queue-tx_waiting v0
> vmstate_load_state_end virtio-net-queue-tx_waiting end/0
> vmstate_load_state virtio-net-queue-tx_waiting v0
> vmstate_load_state_end virtio-net-queue-tx_waiting end/0
> vmstate_load_state_end virtio-net-tx_waiting end/0
> vmstate_load_state_end virtio-net-device end/0
> vmstate_load_state virtio v1
> vmstate_load_state virtio/64bit_features v1
> vmstate_load_state_end virtio/64bit_features end/0
> vmstate_load_state virtio/virtqueues v1
> vmstate_load_state virtqueue_state v1  <--- Queue idx 0
> ...
> vmstate_load_state_end virtqueue_state end/0
> vmstate_load_state virtqueue_state v1  <--- Queue idx 1023
> vmstate_load_state_end virtqueue_state end/0
> vmstate_load_state_end virtio/virtqueues end/0
> vmstate_load_state virtio/extra_state v1
> vmstate_load_state virtio_pci v1
> vmstate_load_state virtio_pci/modern_state v1
> vmstate_load_state virtio_pci/modern_queue_state v1  <--- Queue idx 0
> vmstate_load_state_end virtio_pci/modern_queue_state end/0
> ...
> vmstate_load_state virtio_pci/modern_queue_state v1  <--- Queue idx 1023
> vmstate_load_state_end virtio_pci/modern_queue_state end/0
> vmstate_load_state_end virtio_pci/modern_state end/0
> vmstate_load_state_end virtio_pci end/0
> vmstate_load_state_end virtio/extra_state end/0
> vmstate_load_state virtio/started v1
> vmstate_load_state_end virtio/started end/0
> vmstate_load_state_end virtio end/0
> vmstate_load_state_end virtio-net end/0
> vmstate_downtime_load type=non-iterable idstr=0000:00:03.0/virtio-net
> instance_id=0 downtime=13260
> 
> With iterative migration for virtio-net (maybe all virtio devices?), we can
> send this early while the source is still running and then only send the
> deltas during the stop-and-copy phase. It's likely that the source wont be
> using all VIRTIO_QUEUE_MAX virtqueues during the migration period, so this
> could really minimize a large majority of the downtime contributed by
> virtio-net.
> 
> This could be one example.
> 
> > > 
> > > The motivation behind this RFC series specifically is to provide an
> > > initial framework for such an implementation and get feedback on the
> > > design and direction.
> > > -------
> > > 
> > > This implementation of iterative live migration for a virtio-net device
> > > is enabled via setting the migration capability 'virtio-iterative' to
> > > on for both the source & destination, e.g. (HMP):
> > > 
> > > (qemu) migrate_set_capability virtio-iterative on
> > > 
> > > The virtio-net device's SaveVMHandlers hooks are registered/unregistered
> > > during the device's realize/unrealize phase.
> > 
> > I wonder about the plan for libvirt support.
> > 
> 
> Could you elaborate on this a bit?
> 
> > > 
> > > Currently, this series only sends and loads the vmstate at the start of
> > > migration. The vmstate is still sent (again) during the stop-and-copy
> > > phase, as it is today, to handle any deltas in the state since it was
> > > initially sent. A future patch in this series could avoid having to
> > > re-send and re-load the entire state again and instead focus only on the
> > > deltas.
> > > 
> > > There is a slight, modest improvement in guest-visible downtime from
> > > this series. More specifically, when using iterative live migration with
> > > a virtio-net device, the downtime contributed by migrating a virtio-net
> > > device decreased from ~3.2ms to ~1.4ms on average:
> > 
> > Are you testing this via a software virtio device or hardware one?
> > 
> 
> Just software (virtio-device, vhost-net) with these numbers. I can run some
> tests with vDPA hardware though.
> 
> Those numbers were from a simple, 1 queue-pair virtio-net device.
> 
> > > 
> > > Before:
> > > -------
> > > vmstate_downtime_load type=non-iterable idstr=0000:00:03.0/virtio-net
> > >    instance_id=0 downtime=3594
> > > 
> > > After:
> > > ------
> > > vmstate_downtime_load type=non-iterable idstr=0000:00:03.0/virtio-net
> > >    instance_id=0 downtime=1607
> > > 
> > > This slight improvement is likely due to the initial vmstate_load_state
> > > call "warming up" pages in memory such that, when it's called a second
> > > time during the stop-and-copy phase, allocation and page-fault latencies
> > > are reduced.
> > > -------
> > > 
> > > Comments, suggestions, etc. are welcome here.
> > > 
> > > Jonah Palmer (6):
> > >    migration: Add virtio-iterative capability
> > >    virtio-net: Reorder vmstate_virtio_net and helpers
> > >    virtio-net: Add SaveVMHandlers for iterative migration
> > >    virtio-net: iter live migration - migrate vmstate
> > >    virtio,virtio-net: skip consistency check in virtio_load for iterative
> > >      migration
> > >    virtio-net: skip vhost_started assertion during iterative migration
> > > 
> > >   hw/net/virtio-net.c            | 246 +++++++++++++++++++++++++++------
> > >   hw/virtio/virtio.c             |  32 +++--
> > >   include/hw/virtio/virtio-net.h |   8 ++
> > >   include/hw/virtio/virtio.h     |   7 +
> > >   migration/savevm.c             |   1 +
> > >   qapi/migration.json            |   7 +-
> > >   6 files changed, 247 insertions(+), 54 deletions(-)
> > > 
> > > --
> > > 2.47.1
> > 
> > Thanks
> > 
> > > 
> > 



  parent reply	other threads:[~2025-07-25  9:34 UTC|newest]

Thread overview: 66+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-07-22 12:41 [RFC 0/6] virtio-net: initial iterative live migration support Jonah Palmer
2025-07-22 12:41 ` [RFC 1/6] migration: Add virtio-iterative capability Jonah Palmer
2025-08-06 15:58   ` Peter Xu
2025-08-07 12:50     ` Jonah Palmer
2025-08-07 13:13       ` Peter Xu
2025-08-07 14:20         ` Jonah Palmer
2025-08-08 10:48   ` Markus Armbruster
2025-08-11 12:18     ` Jonah Palmer
2025-08-25 12:44       ` Markus Armbruster
2025-08-25 14:57         ` Jonah Palmer
2025-08-26  6:11           ` Markus Armbruster
2025-08-26 18:08             ` Jonah Palmer
2025-08-27  6:37               ` Markus Armbruster
2025-08-28 15:29                 ` Jonah Palmer
2025-08-29  9:24                   ` Markus Armbruster
2025-09-01 14:10                     ` Jonah Palmer
2025-07-22 12:41 ` [RFC 2/6] virtio-net: Reorder vmstate_virtio_net and helpers Jonah Palmer
2025-07-22 12:41 ` [RFC 3/6] virtio-net: Add SaveVMHandlers for iterative migration Jonah Palmer
2025-07-22 12:41 ` [RFC 4/6] virtio-net: iter live migration - migrate vmstate Jonah Palmer
2025-07-23  6:51   ` Michael S. Tsirkin
2025-07-24 14:45     ` Jonah Palmer
2025-07-25  9:31       ` Michael S. Tsirkin
2025-07-28 12:30         ` Jonah Palmer
2025-07-22 12:41 ` [RFC 5/6] virtio, virtio-net: skip consistency check in virtio_load for iterative migration Jonah Palmer via
2025-07-28 15:30   ` [RFC 5/6] virtio,virtio-net: " Eugenio Perez Martin
2025-07-28 16:23     ` Jonah Palmer
2025-07-30  8:59       ` Eugenio Perez Martin
2025-08-06 16:27   ` Peter Xu
2025-08-07 14:18     ` Jonah Palmer
2025-08-07 16:31       ` Peter Xu
2025-08-11 12:30         ` Jonah Palmer
2025-08-11 13:39           ` Peter Xu
2025-08-11 21:26             ` Jonah Palmer
2025-08-11 21:55               ` Peter Xu
2025-08-12 15:51                 ` Jonah Palmer
2025-08-13  9:25                 ` Eugenio Perez Martin
2025-08-13 14:06                   ` Peter Xu
2025-08-14  9:28                     ` Eugenio Perez Martin
2025-08-14 16:16                       ` Dragos Tatulea
2025-08-14 20:27                       ` Peter Xu
2025-08-15 14:50                       ` Jonah Palmer
2025-08-15 19:35                         ` Si-Wei Liu
2025-08-18  6:51                         ` Eugenio Perez Martin
2025-08-18 14:46                           ` Jonah Palmer
2025-08-18 16:21                             ` Peter Xu
2025-08-19  7:20                               ` Eugenio Perez Martin
2025-08-19  7:10                             ` Eugenio Perez Martin
2025-08-19 15:10                               ` Jonah Palmer
2025-08-20  7:59                                 ` Eugenio Perez Martin
2025-08-25 12:16                                   ` Jonah Palmer
2025-08-27 16:55                                   ` Jonah Palmer
2025-09-01  6:57                                     ` Eugenio Perez Martin
2025-09-01 13:17                                       ` Jonah Palmer
2025-09-02  7:31                                         ` Eugenio Perez Martin
2025-07-22 12:41 ` [RFC 6/6] virtio-net: skip vhost_started assertion during " Jonah Palmer
2025-07-23  5:51 ` [RFC 0/6] virtio-net: initial iterative live migration support Jason Wang
2025-07-24 21:59   ` Jonah Palmer
2025-07-25  9:18     ` Lei Yang
2025-07-25  9:33     ` Michael S. Tsirkin [this message]
2025-07-28  7:09       ` Jason Wang
2025-07-28  7:35         ` Jason Wang
2025-07-28 12:41           ` Jonah Palmer
2025-07-28 14:51           ` Eugenio Perez Martin
2025-07-28 15:38             ` Eugenio Perez Martin
2025-07-29  2:38             ` Jason Wang
2025-07-29 12:41               ` Jonah Palmer

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20250725053122-mutt-send-email-mst@kernel.org \
    --to=mst@redhat.com \
    --cc=armbru@redhat.com \
    --cc=boris.ostrovsky@oracle.com \
    --cc=eblake@redhat.com \
    --cc=eperezma@redhat.com \
    --cc=farosas@suse.de \
    --cc=jasowang@redhat.com \
    --cc=jonah.palmer@oracle.com \
    --cc=peterx@redhat.com \
    --cc=qemu-devel@nongnu.org \
    --cc=si-wei.liu@oracle.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).