From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: skrtbhtngr@gmail.com, dmitry.fleytman@gmail.com,
qemu-devel@nongnu.org, yuval.shaia@oracle.com
Subject: Re: [Qemu-devel] [PATCH] hw/net: fix vmxnet3 live migration
Date: Wed, 7 Aug 2019 18:12:42 +0100
Message-ID: <20190807171242.GB27871@work-vm>
In-Reply-To: <20190705010711.23277-1-marcel.apfelbaum@gmail.com>
* Marcel Apfelbaum (marcel.apfelbaum@gmail.com) wrote:
> At some point vmxnet3 live migration stopped working, and git-bisect
> didn't help find a working version.
> The issue is that the PCI configuration space is not being migrated
> successfully and MSI-X remains masked at the destination.
>
> Remove the migration differentiation between PCI and PCIe, since
> that logic now resides inside VMSTATE_PCI_DEVICE.
> Also remove the VMXNET3_COMPAT_FLAG_DISABLE_PCIE based differentiation:
> whether the device is PCI or PCIe is decided at 'realize' time,
> so the above macro is enough.
>
> Take the opportunity to move to the standard VMSTATE_MSIX
> instead of the deprecated SaveVMHandlers.
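>
> For reference, the resulting vmstate declares both the PCI(e) config
> space and the MSI-X state directly; a minimal sketch of the pattern
> (trimmed, not the full descriptor):
>
>     static const VMStateDescription vmstate_vmxnet3 = {
>         .name = "vmxnet3",
>         .version_id = 1,
>         .minimum_version_id = 1,
>         .fields = (VMStateField[]) {
>             /* config space, including the MSI-X enable/mask bits */
>             VMSTATE_PCI_DEVICE(parent_obj, VMXNET3State),
>             /* MSI-X table and pending-bit array */
>             VMSTATE_MSIX(parent_obj, VMXNET3State),
>             /* ... device-specific fields ... */
>             VMSTATE_END_OF_LIST()
>         }
>     };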
>
> Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Queued
> ---
> hw/net/vmxnet3.c | 52 ++----------------------------------------------
> 1 file changed, 2 insertions(+), 50 deletions(-)
>
> diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
> index 10d01d0058..8b17548b02 100644
> --- a/hw/net/vmxnet3.c
> +++ b/hw/net/vmxnet3.c
> @@ -2141,21 +2141,6 @@ vmxnet3_cleanup_msi(VMXNET3State *s)
> msi_uninit(d);
> }
>
> -static void
> -vmxnet3_msix_save(QEMUFile *f, void *opaque)
> -{
> - PCIDevice *d = PCI_DEVICE(opaque);
> - msix_save(d, f);
> -}
> -
> -static int
> -vmxnet3_msix_load(QEMUFile *f, void *opaque, int version_id)
> -{
> - PCIDevice *d = PCI_DEVICE(opaque);
> - msix_load(d, f);
> - return 0;
> -}
> -
> static const MemoryRegionOps b0_ops = {
> .read = vmxnet3_io_bar0_read,
> .write = vmxnet3_io_bar0_write,
> @@ -2176,11 +2161,6 @@ static const MemoryRegionOps b1_ops = {
> },
> };
>
> -static SaveVMHandlers savevm_vmxnet3_msix = {
> - .save_state = vmxnet3_msix_save,
> - .load_state = vmxnet3_msix_load,
> -};
> -
> static uint64_t vmxnet3_device_serial_num(VMXNET3State *s)
> {
> uint64_t dsn_payload;
> @@ -2203,7 +2183,6 @@ static uint64_t vmxnet3_device_serial_num(VMXNET3State *s)
>
> static void vmxnet3_pci_realize(PCIDevice *pci_dev, Error **errp)
> {
> - DeviceState *dev = DEVICE(pci_dev);
> VMXNET3State *s = VMXNET3(pci_dev);
> int ret;
>
> @@ -2249,8 +2228,6 @@ static void vmxnet3_pci_realize(PCIDevice *pci_dev, Error **errp)
> pcie_dev_ser_num_init(pci_dev, VMXNET3_DSN_OFFSET,
> vmxnet3_device_serial_num(s));
> }
> -
> - register_savevm_live(dev, "vmxnet3-msix", -1, 1, &savevm_vmxnet3_msix, s);
> }
>
> static void vmxnet3_instance_init(Object *obj)
> @@ -2440,29 +2417,6 @@ static const VMStateDescription vmstate_vmxnet3_int_state = {
> }
> };
>
> -static bool vmxnet3_vmstate_need_pcie_device(void *opaque)
> -{
> - VMXNET3State *s = VMXNET3(opaque);
> -
> - return !(s->compat_flags & VMXNET3_COMPAT_FLAG_DISABLE_PCIE);
> -}
> -
> -static bool vmxnet3_vmstate_test_pci_device(void *opaque, int version_id)
> -{
> - return !vmxnet3_vmstate_need_pcie_device(opaque);
> -}
> -
> -static const VMStateDescription vmstate_vmxnet3_pcie_device = {
> - .name = "vmxnet3/pcie",
> - .version_id = 1,
> - .minimum_version_id = 1,
> - .needed = vmxnet3_vmstate_need_pcie_device,
> - .fields = (VMStateField[]) {
> - VMSTATE_PCI_DEVICE(parent_obj, VMXNET3State),
> - VMSTATE_END_OF_LIST()
> - }
> -};
> -
> static const VMStateDescription vmstate_vmxnet3 = {
> .name = "vmxnet3",
> .version_id = 1,
> @@ -2470,9 +2424,8 @@ static const VMStateDescription vmstate_vmxnet3 = {
> .pre_save = vmxnet3_pre_save,
> .post_load = vmxnet3_post_load,
> .fields = (VMStateField[]) {
> - VMSTATE_STRUCT_TEST(parent_obj, VMXNET3State,
> - vmxnet3_vmstate_test_pci_device, 0,
> - vmstate_pci_device, PCIDevice),
> + VMSTATE_PCI_DEVICE(parent_obj, VMXNET3State),
> + VMSTATE_MSIX(parent_obj, VMXNET3State),
> VMSTATE_BOOL(rx_packets_compound, VMXNET3State),
> VMSTATE_BOOL(rx_vlan_stripping, VMXNET3State),
> VMSTATE_BOOL(lro_supported, VMXNET3State),
> @@ -2508,7 +2461,6 @@ static const VMStateDescription vmstate_vmxnet3 = {
> },
> .subsections = (const VMStateDescription*[]) {
> &vmxstate_vmxnet3_mcast_list,
> - &vmstate_vmxnet3_pcie_device,
> NULL
> }
> };
> --
> 2.17.1
>
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK