Date: Wed, 27 Apr 2016 16:02:49 +0100
From: "Daniel P. Berrange"
To: "Dr. David Alan Gilbert"
Cc: liang.z.li@intel.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Hang with migration multi-thread compression under high load
Message-ID: <20160427150249.GH17937@redhat.com>
In-Reply-To: <20160427142929.GC2290@work-vm>
References: <20160427142023.GC17937@redhat.com> <20160427142929.GC2290@work-vm>

On Wed, Apr 27, 2016 at 03:29:30PM +0100, Dr. David Alan Gilbert wrote:
> ccing in Liang Li
> 
> * Daniel P. Berrange (berrange@redhat.com) wrote:
> > For some reason it isn't shown in the stack trace for thread
> > 1 above, but when initially connecting GDB it says the main thread
> > is at:
> > 
> >   decompress_data_with_multi_threads (len=702, host=0x7fd78fe06000,
> >       f=0x55901af09950) at /home/berrange/src/virt/qemu/migration/ram.c:2254
> >   2254        for (idx = 0; idx < thread_count; idx++) {
> > 
> > Looking at the target QEMU, we see the do_data_decompress method
> > is waiting on a condition var:
> > 
> >     while (!param->start && !quit_decomp_thread) {
> >         qemu_cond_wait(&param->cond, &param->mutex);
> >         ....do stuff..
> >         param->start = false;
> >     }
> > 
> > Now decompress_data_with_multi_threads() is checking param->start without
> > holding the param->mutex lock.
> > 
> > Changing decompress_data_with_multi_threads() to acquire the param->mutex
> > lock makes it work, but isn't ideal, since that now blocks the
> > decompress_data_with_multi_threads() method on the completion of
> > each thread, which defeats the point of having multiple threads.

FWIW, the following patch also appears to "fix" the issue, presumably by
just making the race much less likely to hit:

diff --git a/migration/ram.c b/migration/ram.c
index 3f05738..be0233f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2271,6 +2271,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
         if (idx < thread_count) {
             break;
         }
+        sched_yield();
     }
 }

Incidentally, IIUC, decompress_data_with_multi_threads() is just busy
waiting for a thread to become free, which seems pretty wasteful of CPU
resources. I wonder if there's a more effective way to structure this, so
that instead of having decompress_data_with_multi_threads() choose which
thread to pass the decompression job to, it just puts the job into a
queue, and lets all the threads pull from that shared queue. IOW, whichever
thread the kernel decides to wake up would get the job, without us having
to explicitly assign a thread to the job.

Regards,
Daniel
-- 
|: http://berrange.com       -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org        -o-                http://virt-manager.org   :|
|: http://autobuild.org      -o-        http://search.cpan.org/~danberr/  :|
|: http://entangle-photo.org -o-          http://live.gnome.org/gtk-vnc   :|