From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:48460)
	by lists.gnu.org with esmtp (Exim 4.71)
	(envelope-from <quintela@redhat.com>) id 1fVmXI-0001en-La
	for qemu-devel@nongnu.org; Wed, 20 Jun 2018 19:29:13 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from <quintela@redhat.com>) id 1fVmXH-0007H5-P6
	for qemu-devel@nongnu.org; Wed, 20 Jun 2018 19:29:12 -0400
Received: from mx3-rdu2.redhat.com ([66.187.233.73]:46182 helo=mx1.redhat.com)
	by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32)
	(Exim 4.71) (envelope-from <quintela@redhat.com>) id 1fVmXH-0007Gq-K9
	for qemu-devel@nongnu.org; Wed, 20 Jun 2018 19:29:11 -0400
Received: from smtp.corp.redhat.com
	(int-mx06.intmail.prod.int.rdu2.redhat.com [10.11.54.6])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx1.redhat.com (Postfix) with ESMTPS id 3D1674022905
	for <qemu-devel@nongnu.org>; Wed, 20 Jun 2018 23:29:11 +0000 (UTC)
From: Juan Quintela <quintela@redhat.com>
Date: Thu, 21 Jun 2018 01:28:49 +0200
Message-Id: <20180620232851.7152-11-quintela@redhat.com>
In-Reply-To: <20180620232851.7152-1-quintela@redhat.com>
References: <20180620232851.7152-1-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH v15 10/12] migration: Wait for blocking IO
To: qemu-devel@nongnu.org
Cc: dgilbert@redhat.com, lvivier@redhat.com, peterx@redhat.com

We have three conditions here:
- the channel fails -> error
- we have to quit: we close the channel and the read fails
- a normal read succeeds, and we are in business

So forget the complications of waiting on a semaphore.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 09df573441..2c3a452a7d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -602,8 +602,6 @@ typedef struct {
     bool running;
     /* should this thread finish */
     bool quit;
-    /* thread has work to do */
-    bool pending_job;
     /* array of pages to receive */
     MultiFDPages_t *pages;
     /* packet allocated len */
@@ -1207,14 +1205,6 @@ static void multifd_recv_sync_main(void)
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
 
-        trace_multifd_recv_sync_main_signal(p->id);
-        qemu_mutex_lock(&p->mutex);
-        p->pending_job = true;
-        qemu_mutex_unlock(&p->mutex);
-    }
-    for (i = 0; i < migrate_multifd_channels(); i++) {
-        MultiFDRecvParams *p = &multifd_recv_state->params[i];
-
         trace_multifd_recv_sync_main_wait(p->id);
         qemu_sem_wait(&multifd_recv_state->sem_sync);
         qemu_mutex_lock(&p->mutex);
@@ -1227,7 +1217,6 @@ static void multifd_recv_sync_main(void)
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
 
         trace_multifd_recv_sync_main_signal(p->id);
-        qemu_sem_post(&p->sem_sync);
     }
     trace_multifd_recv_sync_main(atomic_read(&multifd_recv_state->packet_num));
 }
@@ -1264,7 +1253,6 @@ static void *multifd_recv_thread(void *opaque)
         used = p->pages->used;
         flags = p->flags;
         trace_multifd_recv(p->id, p->packet_num, used, flags);
-        p->pending_job = false;
         p->num_packets++;
         p->num_pages += used;
         qemu_mutex_unlock(&p->mutex);
@@ -1314,7 +1302,6 @@ int multifd_load_setup(void)
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
         p->quit = false;
-        p->pending_job = false;
         p->id = i;
         p->pages = multifd_pages_init(page_count);
         p->packet_len = sizeof(MultiFDPacket_t)
-- 
2.17.1
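
For readers following along, here is a minimal, self-contained C sketch of
the control flow the commit message describes: a receive loop driven purely
by a blocking read, where the read result alone distinguishes the three
outcomes. ReadStatus, channel_read() and recv_loop() are hypothetical
stand-ins, not QEMU APIs; the real code uses QIOChannel reads (e.g.
qio_channel_read_all()) inside multifd_recv_thread().

#include <stddef.h>
#include <stdio.h>

typedef enum {
    READ_OK,      /* normal read that succeeds: we are in business */
    READ_CLOSED,  /* we were asked to quit: the channel was closed */
    READ_ERROR,   /* the channel failed */
} ReadStatus;

/* Hypothetical blocking read; here it simply pretends the peer closed. */
static ReadStatus channel_read(void *channel, char *buf, size_t len)
{
    (void)channel; (void)buf; (void)len;
    return READ_CLOSED;
}

/*
 * The blocking read distinguishes all three cases by itself, so the
 * loop needs no pending_job flag and no semaphore handshake with the
 * main thread.
 */
static int recv_loop(void *channel)
{
    char packet[64];

    for (;;) {
        switch (channel_read(channel, packet, sizeof(packet))) {
        case READ_OK:
            /* process the packet, then block on the next read */
            continue;
        case READ_CLOSED:
            /* clean shutdown: the main thread closed the channel */
            return 0;
        case READ_ERROR:
            /* genuine I/O error: report it and bail out */
            return -1;
        }
    }
}

int main(void)
{
    printf("recv_loop() -> %d\n", recv_loop(NULL));
    return 0;
}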