From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Thomas Huth <thuth@redhat.com>,
qemu-devel@nongnu.org
Subject: Re: [PATCH v3 1/5] multifd: Make sure that we don't do any IO after an error
Date: Thu, 16 Jan 2020 18:20:55 +0000
Message-ID: <20200116182055.GM3108@work-vm>
In-Reply-To: <20200116154616.11569-2-quintela@redhat.com>
* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
> migration/ram.c | 22 +++++++++++++---------
> 1 file changed, 13 insertions(+), 9 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index ba6e0eea15..8f9f3bba5b 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3442,7 +3442,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
> {
> RAMState **temp = opaque;
> RAMState *rs = *temp;
> - int ret;
> + int ret = 0;
> int i;
> int64_t t0;
> int done = 0;
> @@ -3521,12 +3521,14 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
> ram_control_after_iterate(f, RAM_CONTROL_ROUND);
>
> out:
> - multifd_send_sync_main(rs);
> - qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> - qemu_fflush(f);
> - ram_counters.transferred += 8;
> + if (ret >= 0) {
> + multifd_send_sync_main(rs);
> + qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> + qemu_fflush(f);
> + ram_counters.transferred += 8;
>
> - ret = qemu_file_get_error(f);
> + ret = qemu_file_get_error(f);
> + }
> if (ret < 0) {
> return ret;
> }
> @@ -3578,9 +3580,11 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
> ram_control_after_iterate(f, RAM_CONTROL_FINISH);
> }
>
> - multifd_send_sync_main(rs);
> - qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> - qemu_fflush(f);
> + if (ret >= 0) {
> + multifd_send_sync_main(rs);
> + qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> + qemu_fflush(f);
> + }
>
> return ret;
> }
> --
> 2.24.1
>
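For anyone skimming the hunks above: the change initialises ret to 0 and guards the end-of-iteration stream IO (multifd sync, the RAM_SAVE_FLAG_EOS marker, the flush and the qemu_file_get_error() check) behind ret >= 0, so an earlier failure is returned directly instead of being followed by more writes to a possibly-broken channel. A minimal standalone sketch of that pattern (plain C, not QEMU code; all names below are made up for illustration):

/* Sketch of the "no IO after an error" pattern from this patch.
 * do_send_pass() stands in for the per-page send loop,
 * flush_stream() for the EOS marker + flush at the end. */
#include <stdio.h>

static int do_send_pass(int fail)
{
    return fail ? -5 /* pretend -EIO */ : 0;
}

static int flush_stream(void)
{
    puts("flushing stream");
    return 0;
}

static int save_iterate(int fail)
{
    int ret = 0;               /* initialised, as the patch now does */

    ret = do_send_pass(fail);

    if (ret >= 0) {            /* only touch the stream if it is healthy */
        ret = flush_stream();
    }
    return ret;                /* the first error is what gets reported */
}

int main(void)
{
    printf("ok path:    %d\n", save_iterate(0));
    printf("error path: %d\n", save_iterate(1));
    return 0;
}

On the error path flush_stream() is never called, which is the behaviour the two hunks above introduce for ram_save_iterate() and ram_save_complete().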
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 13+ messages
2020-01-16 15:46 [PATCH v3 0/5] Fix multifd + cancel + multifd Juan Quintela
2020-01-16 15:46 ` [PATCH v3 1/5] multifd: Make sure that we don't do any IO after an error Juan Quintela
2020-01-16 18:20 ` Dr. David Alan Gilbert [this message]
2020-01-16 15:46 ` [PATCH v3 2/5] migration: Create MigrationState active field Juan Quintela
2020-01-17 16:26 ` Dr. David Alan Gilbert
2020-01-17 18:35 ` Juan Quintela
2020-01-21 11:08 ` Juan Quintela
2020-01-16 15:46 ` [PATCH v3 3/5] migration: Don't wait in semaphore for thread we know has finished Juan Quintela
2020-01-17 16:45 ` Dr. David Alan Gilbert
2020-01-21 11:10 ` Juan Quintela
2020-01-16 15:46 ` [PATCH v3 4/5] qemu-file: Don't do IO after shutdown Juan Quintela
2020-01-16 15:46 ` [PATCH v3 5/5] migration-test: Make sure that multifd and cancel works Juan Quintela
2020-01-16 15:59 ` Thomas Huth