From: Peter Xu <peterx@redhat.com>
To: Steve Sistare <steven.sistare@oracle.com>
Cc: qemu-devel@nongnu.org, "Juan Quintela" <quintela@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>
Subject: Re: [PATCH V4 01/11] cpus: pass runstate to vm_prepare_start
Date: Wed, 30 Aug 2023 11:52:00 -0400
Message-ID: <ZO9loC9zzEAwZJjK@x1n>
In-Reply-To: <1693333086-392798-2-git-send-email-steven.sistare@oracle.com>
On Tue, Aug 29, 2023 at 11:17:56AM -0700, Steve Sistare wrote:
> When a vm in the suspended state is migrated, we must call vm_prepare_start
> on the destination, so a later system_wakeup properly resumes the guest,
> when main_loop_should_exit calls resume_all_vcpus. However, the runstate
> should remain suspended until system_wakeup is called, so allow the caller
> to pass the new state to vm_prepare_start, rather than assume the new state
> is RUN_STATE_RUNNING. Modify vm state change handlers that check
> RUN_STATE_RUNNING to instead use the running parameter.
>
> No functional change.
>
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
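
To restate my understanding of the caller side (just a sketch to confirm
the intent, not quoting the diff; the "..." stands for whatever other
parameters vm_prepare_start takes, see the patch for the exact signature):

    /* Instead of vm_prepare_start assuming the new state is
     * RUN_STATE_RUNNING, the caller passes the state to transition to: */
    vm_prepare_start(..., RUN_STATE_RUNNING);   /* the usual vm_start() path */
    vm_prepare_start(..., RUN_STATE_SUSPENDED); /* dest of a suspended-guest
                                                   migration; the runstate
                                                   stays suspended until
                                                   system_wakeup */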
I think all the call sites should indeed be covered, via:
qemu_add_vm_change_state_handler_prio
  qdev_add_vm_change_state_handler
    virtio_blk_device_realize[1653] qdev_add_vm_change_state_handler(dev, virtio_blk_dma_restart_cb, s);
    scsi_qdev_realize[289] dev->vmsentry = qdev_add_vm_change_state_handler(DEVICE(dev),
    vfio_migration_init[796] migration->vm_state = qdev_add_vm_change_state_handler(vbasedev->dev,
    virtio_init[3189] vdev->vmstate = qdev_add_vm_change_state_handler(DEVICE(vdev),
  qemu_add_vm_change_state_handler
    xen_init[106] qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
    audio_init[1827] e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
    tpm_emulator_inst_init[978] qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change,
    blk_root_activate[223] blk->vmsh = qemu_add_vm_change_state_handler(blk_vm_state_changed,
    gdbserver_start[384] qemu_add_vm_change_state_handler(gdb_vm_state_change, NULL);
    pflash_post_load[1038] pfl->vmstate = qemu_add_vm_change_state_handler(postload_update_cb,
    qxl_realize_common[2202] qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
    kvmclock_realize[233] qemu_add_vm_change_state_handler(kvmclock_vm_state_change, s);
    kvm_pit_realizefn[298] qemu_add_vm_change_state_handler(kvm_pit_vm_state_change, s);
    vapic_post_load[796] qemu_add_vm_change_state_handler(kvmvapic_vm_state_change, s);
    ide_bus_register_restart_cb[2767] bus->vmstate = qemu_add_vm_change_state_handler(ide_restart_cb, bus);
    kvm_arm_its_realize[122] qemu_add_vm_change_state_handler(vm_change_state_handler, s);
    kvm_arm_gicv3_realize[888] qemu_add_vm_change_state_handler(vm_change_state_handler, s);
    kvmppc_xive_connect[794] xive->change = qemu_add_vm_change_state_handler(
    via1_post_load[971] v1s->vmstate = qemu_add_vm_change_state_handler(
    e1000e_core_pci_realize[3379] qemu_add_vm_change_state_handler(e1000e_vm_state_change, core);
    igb_core_pci_realize[4012] core->vmstate = qemu_add_vm_change_state_handler(igb_vm_state_change, core);
    spapr_nvram_post_load[235] nvram->vmstate = qemu_add_vm_change_state_handler(postload_update_cb,
    ppc_booke_timers_init[366] qemu_add_vm_change_state_handler(cpu_state_change_handler, cpu);
    spapr_machine_init[3070] qemu_add_vm_change_state_handler(cpu_ppc_clock_vm_state_change,
    kvm_s390_tod_realize[133] qemu_add_vm_change_state_handler(kvm_s390_tod_vm_state_change, td);
    usb_ehci_realize[2540] s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
    usb_host_auto_check[1912] usb_vmstate = qemu_add_vm_change_state_handler(usb_host_vm_state, NULL);
    usbredir_realize[1466] qemu_add_vm_change_state_handler(usbredir_vm_state_change, dev);
    virtio_rng_device_realize[226] vrng->vmstate = qemu_add_vm_change_state_handler(virtio_rng_vm_state_change,
    xen_do_ioreq_register[825] qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
    net_init_clients[1644] qemu_add_vm_change_state_handler(net_vm_change_state_handler, NULL);
    memory_global_dirty_log_stop[2978] vmstate_change = qemu_add_vm_change_state_handler(
    hvf_arch_init[2036] qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
    kvm_arch_init_vcpu[567] qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
    kvm_arch_init_vcpu[2191] cpu->vmsentry = qemu_add_vm_change_state_handler(cpu_update_state, env);
    sev_kvm_init[1014] qemu_add_vm_change_state_handler(sev_vm_state_change, sev);
    whpx_init_vcpu[2248] qemu_add_vm_change_state_handler(whpx_cpu_update_state, cpu->env_ptr);
    kvm_arch_init_vcpu[70] qemu_add_vm_change_state_handler(kvm_mips_update_state, cs);
    kvm_arch_init_vcpu[891] qemu_add_vm_change_state_handler(kvm_riscv_vm_state_change, cs);
    gtk_display_init[2410] qemu_add_vm_change_state_handler(gd_change_runstate, s);
    qemu_spice_display_init_done[651] qemu_add_vm_change_state_handler(vm_change_state_handler, NULL);
    qemu_spice_add_interface[868] qemu_add_vm_change_state_handler(vm_change_state_handler, NULL);
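
All of these take a VMChangeStateHandler, i.e. (from
include/sysemu/runstate.h):

    typedef void VMChangeStateHandler(void *opaque, bool running,
                                      RunState state);

so each conversion follows the same pattern; a minimal sketch with a
made-up device (MyDevState / mydev_restart are hypothetical, only the
shape matters):

    static void mydev_vm_state_change(void *opaque, bool running,
                                      RunState state)
    {
        MyDevState *s = opaque;

        /* was: if (state == RUN_STATE_RUNNING); testing the running flag
         * instead also covers a vm_prepare_start() call where the new
         * state is not RUN_STATE_RUNNING, per the commit message above */
        if (running) {
            mydev_restart(s);
        }
    }

    /* registration at realize time, as in the call sites listed above */
    qemu_add_vm_change_state_handler(mydev_vm_state_change, s);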
Looks all correct:
Reviewed-by: Peter Xu <peterx@redhat.com>
--
Peter Xu
Thread overview: 27+ messages
2023-08-29 18:17 [PATCH V4 00/11] fix migration of suspended runstate Steve Sistare
2023-08-29 18:17 ` [PATCH V4 01/11] cpus: pass runstate to vm_prepare_start Steve Sistare
2023-08-30 15:52 ` Peter Xu [this message]
2023-08-30 15:56 ` Steven Sistare
2023-08-29 18:17 ` [PATCH V4 02/11] migration: preserve suspended runstate Steve Sistare
2023-08-30 16:07 ` Peter Xu
2023-08-29 18:17 ` [PATCH V4 03/11] migration: add runstate function Steve Sistare
2023-08-30 16:11 ` Peter Xu
2023-08-29 18:17 ` [PATCH V4 04/11] migration: preserve suspended for snapshot Steve Sistare
2023-08-30 16:22 ` Peter Xu
2023-11-13 18:32 ` Steven Sistare
2023-08-29 18:18 ` [PATCH V4 05/11] migration: preserve suspended for bg_migration Steve Sistare
2023-08-30 16:35 ` Peter Xu
2023-11-13 18:32 ` Steven Sistare
2023-08-29 18:18 ` [PATCH V4 06/11] migration: preserve cpu ticks if suspended Steve Sistare
2023-08-30 16:47 ` Peter Xu
2023-09-07 15:50 ` Steven Sistare
2023-11-13 18:32 ` Steven Sistare
2023-08-29 18:18 ` [PATCH V4 07/11] tests/qtest: migration events Steve Sistare
2023-08-30 17:00 ` Peter Xu
2023-11-13 18:33 ` Steven Sistare
2023-11-13 19:20 ` Steven Sistare
2023-08-29 18:18 ` [PATCH V4 08/11] tests/qtest: option to suspend during migration Steve Sistare
2023-08-30 17:01 ` Peter Xu
2023-08-29 18:18 ` [PATCH V4 09/11] tests/qtest: precopy migration with suspend Steve Sistare
2023-08-29 18:18 ` [PATCH V4 10/11] tests/qtest: postcopy " Steve Sistare
2023-08-29 18:18 ` [PATCH V4 11/11] tests/qtest: background " Steve Sistare