From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <5004232A.9010301@redhat.com>
Date: Mon, 16 Jul 2012 16:20:26 +0200
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 07/12] qemu-thread: add QemuSemaphore
In-Reply-To: <50042086.3050101@siemens.com>
References: <1342435377-25897-1-git-send-email-pbonzini@redhat.com> <1342435377-25897-8-git-send-email-pbonzini@redhat.com> <50040251.5000700@siemens.com> <50041510.5070105@redhat.com> <50041869.20006@siemens.com> <500418BC.8080200@redhat.com> <50041CE7.80501@siemens.com> <50041F28.3030203@redhat.com> <50042086.3050101@siemens.com>
To: Jan Kiszka
Cc: "kwolf@redhat.com", "aliguori@linux.vnet.ibm.com", "sw@weilnetz.de", "qemu-devel@nongnu.org", "stefanha@linux.vnet.ibm.com"

On 16/07/2012 16:09, Jan Kiszka wrote:
> On 2012-07-16 16:03, Paolo Bonzini wrote:
>> On 16/07/2012 15:53, Jan Kiszka wrote:
>>>>>
>>>>> qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
>>>>> the algorithm relies on that.
>>> I see. But this doesn't look awfully complex. Just move the waker
>>> signaling from within cond_wait under the mutex as well, maybe add
>>> another flag if there is really someone waiting, and that's it. The
>>> costs should be hard to measure, even in lines of code.
>>
>> There is still a race after WaitForSingleObject times out. You need to
>> grab the mutex before decreasing the number of waiters, and during that
>> window somebody can broadcast the condition variable.
>
> ...and that's why you check what needs to be done to handle this race
> after grabbing the mutex. IOW, replicate the state information that the
> Windows semaphore contains into the emulated condition variable object.

It is already there (cv->waiters), but it is accessed atomically. To do
what you suggest I would need to add a mutex.

>> I'm not saying it's impossible, just that it's hard and I dislike
>> qemu_cond_timedwait as much as you dislike semaphores. :)
>
> As you cannot eliminate our use cases for condition variables, let's
> focus on the existing synchronization patterns instead of introducing
> new ones + new mechanisms.

I'm not introducing ... semaphores have the advantage of having better
support on Windows (at least on older versions; Microsoft saw the light
starting with Vista---yes, I know the oxymoron---and added native
condition variables).

Paolo