From mboxrd@z Thu Jan  1 00:00:00 1970
From: Peter Xu
Subject: Re: [PATCH 01/12] migration: do not wait if no free thread
Date: Mon, 11 Jun 2018 15:39:20 +0800
Message-ID: <20180611073920.GJ7736@xz-mi>
References: <20180604095520.8563-1-xiaoguangrong@tencent.com>
 <20180604095520.8563-2-xiaoguangrong@tencent.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20180604095520.8563-2-xiaoguangrong@tencent.com>
To: guangrong.xiao@gmail.com
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
 Xiao Guangrong, dgilbert@redhat.com, qemu-devel@nongnu.org,
 wei.w.wang@intel.com, jiang.biao2@zte.com.cn, pbonzini@redhat.com
Errors-To: qemu-devel-bounces+gceq-qemu-devel2=m.gmane.org@nongnu.org
Sender: "Qemu-devel"
List-Id: kvm.vger.kernel.org

On Mon, Jun 04, 2018 at 05:55:09PM +0800, guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong
> 
> Instead of putting the main thread to sleep to wait for a free
> compression thread, we can directly post the page out as a normal
> page, which reduces latency and uses the CPUs more efficiently.

The feature looks good, though I'm not sure whether we should add a
capability flag for it, since otherwise it will be hard to switch back
to the old always-compress behaviour, for whatever reason.  Would that
be a problem?

> 
> Signed-off-by: Xiao Guangrong
> ---
>  migration/ram.c | 34 +++++++++++++++-------------------
>  1 file changed, 15 insertions(+), 19 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 5bcbf7a9f9..0caf32ab0a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1423,25 +1423,18 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
> 
>      thread_count = migrate_compress_threads();
>      qemu_mutex_lock(&comp_done_lock);

Can we drop this lock in this case?

> -    while (true) {
> -        for (idx = 0; idx < thread_count; idx++) {
> -            if (comp_param[idx].done) {
> -                comp_param[idx].done = false;
> -                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
> -                qemu_mutex_lock(&comp_param[idx].mutex);
> -                set_compress_params(&comp_param[idx], block, offset);
> -                qemu_cond_signal(&comp_param[idx].cond);
> -                qemu_mutex_unlock(&comp_param[idx].mutex);
> -                pages = 1;
> -                ram_counters.normal++;
> -                ram_counters.transferred += bytes_xmit;
> -                break;
> -            }
> -        }
> -        if (pages > 0) {
> +    for (idx = 0; idx < thread_count; idx++) {
> +        if (comp_param[idx].done) {
> +            comp_param[idx].done = false;
> +            bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
> +            qemu_mutex_lock(&comp_param[idx].mutex);
> +            set_compress_params(&comp_param[idx], block, offset);
> +            qemu_cond_signal(&comp_param[idx].cond);
> +            qemu_mutex_unlock(&comp_param[idx].mutex);
> +            pages = 1;
> +            ram_counters.normal++;
> +            ram_counters.transferred += bytes_xmit;
>              break;
> -        } else {
> -            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
>          }
>      }
>      qemu_mutex_unlock(&comp_done_lock);

Regards,

-- 
Peter Xu