From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <50FD18EA.80709@redhat.com>
Date: Mon, 21 Jan 2013 11:31:06 +0100
From: Paolo Bonzini
To: Juan Quintela
Cc: qemu-devel@nongnu.org
References: <1358510033-17268-1-git-send-email-quintela@redhat.com> <1358510033-17268-4-git-send-email-quintela@redhat.com>
In-Reply-To: <1358510033-17268-4-git-send-email-quintela@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage
List-Id: qemu-devel@nongnu.org

On 18/01/2013 12:53, Juan Quintela wrote:
> This means that we only have one memory loop for the iterate and
> complete phase.

I think this is premature.  One important difference between iterate
and complete is that ultimately iterate will run without the BQL,
while that's not necessarily true of complete.  So we may end up
reverting this patch.
> Signed-off-by: Juan Quintela
> ---
>  arch_init.c | 16 ----------------
>  migration.c | 12 ++++++++++++
>  2 files changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 9f7d44d..9eef10a 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -651,23 +651,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
>      qemu_mutex_lock_ramlist();
> -    migration_bitmap_sync();
> -
> -    /* try transferring iterative blocks of memory */
> -
> -    /* flush all remaining blocks regardless of rate limiting */
> -    while (true) {
> -        int bytes_sent;
> -
> -        bytes_sent = ram_save_block(f);
> -        /* no more blocks to sent */
> -        if (bytes_sent == 0) {
> -            break;
> -        }
> -        bytes_transferred += bytes_sent;
> -    }
>      migration_end();
> -
>      qemu_mutex_unlock_ramlist();
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>
> diff --git a/migration.c b/migration.c
> index e74ce49..de665f7 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -717,6 +717,18 @@ static void *buffered_file_thread(void *opaque)
>          } else {
>              vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
>          }
> +
> +        /* 8 is the size of an end of section mark, so empty section */
> +        while ((ret = qemu_savevm_state_iterate(s->file, free_space))
> +               > 8) {
> +            ret = buffered_flush(s);
> +            if (ret < 0) {
> +                qemu_mutex_unlock_iothread();
> +                break;
> +            }
> +            free_space = s->buffer_capacity - s->buffer_size;
> +        }
> +

If you really want to apply this patch, however, move this loop to
qemu_savevm_state_complete.  do_savevm has a similar loop:

    do {
        ret = qemu_savevm_state_iterate(f);
        if (ret < 0)
            goto out;
    } while (ret == 0);

and then you can unify buffered_file_thread and do_savevm's code.

Paolo

>          ret = qemu_savevm_state_complete(s->file);
>          if (ret < 0) {
>              qemu_mutex_unlock_iothread();