From: Paolo Bonzini
Date: Thu, 18 Oct 2012 10:55:33 +0200
Subject: Re: [Qemu-devel] [PATCH 22/30] migration: unfold rest of migrate_fd_put_ready() into thread
To: Juan Quintela
Cc: qemu-devel@nongnu.org
Message-ID: <507FC405.6050005@redhat.com>
In-Reply-To: <507FC038.4030007@redhat.com>
References: <1350545426-23172-1-git-send-email-quintela@redhat.com>
 <1350545426-23172-23-git-send-email-quintela@redhat.com>
 <507FC038.4030007@redhat.com>

On 18/10/2012 10:39, Paolo Bonzini wrote:
> in this function. Perhaps it is better if you make an invariant that
> the loop is entered and exited with the BQL taken, and it is only
> unlocked in the middle. It makes sense once you fold everything in
> migration.c.

i.e. this:

diff --git a/migration.c b/migration.c
index 554d79a..bd9fe2d 100644
--- a/migration.c
+++ b/migration.c
@@ -661,23 +661,18 @@ static void *buffered_file_thread(void *opaque)
     ret = qemu_savevm_state_begin(m->file, &m->params);
     if (ret < 0) {
         DPRINTF("failed, %d\n", ret);
-        qemu_mutex_unlock_iothread();
         goto out;
     }
-    qemu_mutex_unlock_iothread();
 
     while (true) {
         int64_t current_time = qemu_get_clock_ms(rt_clock);
         uint64_t pending_size;
 
-        qemu_mutex_lock_iothread();
         if (m->state != MIG_STATE_ACTIVE) {
             DPRINTF("put_ready returning because of non-active state\n");
-            qemu_mutex_unlock_iothread();
             break;
         }
         if (m->complete) {
-            qemu_mutex_unlock_iothread();
             break;
         }
         if (s->bytes_xfer < s->xfer_limit) {
@@ -687,7 +682,6 @@
             if (pending_size && pending_size >= max_size) {
                 ret = qemu_savevm_state_iterate(m->file);
                 if (ret < 0) {
-                    qemu_mutex_unlock_iothread();
                     break;
                 }
             } else {
@@ -708,7 +702,6 @@
                 printf("vm_stop 2 %ld\n", end_time - start_time);
                 ret = qemu_savevm_state_complete(m->file);
                 if (ret < 0) {
-                    qemu_mutex_unlock_iothread();
                     break;
                 } else {
                     end_time = qemu_get_clock_ms(rt_clock);
@@ -729,8 +722,8 @@
                 last_round = true;
             }
         }
-        qemu_mutex_unlock_iothread();
 
+        qemu_mutex_unlock_iothread();
         if (current_time >= initial_time + BUFFER_DELAY) {
             uint64_t transferred_bytes = s->bytes_xfer;
             uint64_t time_spent = current_time - initial_time;
@@ -749,9 +742,11 @@
             usleep((initial_time + BUFFER_DELAY - current_time)*1000);
         }
         buffered_flush(s);
+        qemu_mutex_lock_iothread();
     }
 
 out:
+    qemu_mutex_unlock_iothread();
     if (ret < 0) {
         migrate_fd_error(m);
     }
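In case it helps to see the intended locking discipline in isolation, here is a
minimal, self-contained illustration of the same pattern; a plain pthread mutex
stands in for the BQL and the usleep() stands in for the buffered I/O, so none
of the names below are QEMU code:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static bool active = true;

static void *worker_thread(void *opaque)
{
    int rounds = 3;

    (void)opaque;
    pthread_mutex_lock(&big_lock);           /* enter the loop with the lock held */
    while (true) {
        if (!active || rounds-- == 0) {
            break;                           /* every exit path leaves with the lock held */
        }
        printf("state touched under the lock\n");

        pthread_mutex_unlock(&big_lock);     /* drop it only for the slow middle part */
        usleep(1000);                        /* stand-in for buffered_flush()/throttling */
        pthread_mutex_lock(&big_lock);       /* re-take it before the next iteration */
    }
    pthread_mutex_unlock(&big_lock);         /* single unlock on the way out */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker_thread, NULL);
    pthread_join(tid, NULL);
    return 0;
}

The point is just that every path into and out of the loop runs with the lock
held, so the single unlock after the loop (the one after "out:" in the patch
above) is always balanced.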
BTW the completion part (basically the "else" branch starting at
"int old_vm_running = runstate_is_running()") looks much nicer in a
separate function, like:

    if (pending_size && pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(m->file);
    } else {
        ret = migration_complete(m);
        last_round = true;
    }
    if (ret < 0) {
        break;
    }

Paolo
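For completeness, here is a rough idea of what such a migration_complete()
helper could contain, pieced together from the completion branch visible in the
diff above. The name, the MigrationState argument and the vm_stop_force_state()
call are assumptions made for illustration, not code from the posted series:

/*
 * Hypothetical helper, sketched from the completion path above; the
 * signature and the exact calls are assumptions, not the final patch.
 */
static int migration_complete(MigrationState *m)
{
    int ret;

    /* Stop the guest so the final round sends a consistent state. */
    vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);

    /* Flush the remaining RAM and the device state. */
    ret = qemu_savevm_state_complete(m->file);

    return ret;
}

The old_vm_running bookkeeping and the timing printf could either stay in the
caller or move into the helper as well; either way the caller collapses to the
if/else sketched above.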