Date: Tue, 04 Jan 2011 16:27:26 +0200
From: Avi Kivity
Subject: [Qemu-devel] Re: Role of qemu_fair_mutex
To: Anthony Liguori
Cc: Marcelo Tosatti, Jan Kiszka, qemu-devel, kvm
Message-ID: <4D232E4E.5030600@redhat.com>
In-Reply-To: <4D232BF6.6050102@codemonkey.ws>

On 01/04/2011 04:17 PM, Anthony Liguori wrote:
> On 01/03/2011 04:01 AM, Avi Kivity wrote:
>> On 01/03/2011 11:46 AM, Jan Kiszka wrote:
>>> Hi,
>>>
>>> at least in kvm mode, the qemu_fair_mutex seems to have lost its
>>> function of balancing qemu_global_mutex access between the io-thread
>>> and vcpus.  It's now only taken by the latter, isn't it?
>>>
>>> This and the fact that qemu-kvm does not use this kind of lock made me
>>> wonder what its role is and if it is still relevant in practice.  I'd
>>> like to unify the execution models of qemu-kvm and qemu, and this lock
>>> is the most obvious difference (there are surely more subtle ones as
>>> well...).
>>>
>>
>> IIRC it was used for tcg, which has a problem that kvm doesn't have:
>> a tcg vcpu needs to hold qemu_mutex when it runs, which means there
>> will always be contention on qemu_mutex.  In the absence of fairness,
>> the tcg thread could dominate qemu_mutex and starve the iothread.
>
> No, it's actually the opposite IIRC.
>
> TCG relies on the following behavior.  A guest VCPU runs until 1) it
> encounters a HLT instruction 2) an event occurs that forces the TCG
> execution to break.
>
> (2) really means that the TCG thread receives a signal.  Usually, this
> is the periodic timer signal.

What about a completion?  An I/O completes, the I/O thread wakes up,
needs to acquire the global lock (and force tcg off it), injects an
interrupt, and goes back to sleep.

> When the TCG thread breaks, it needs to let the IO thread run for at
> least one iteration.  Coordinating the execution of the IO thread such
> that it's guaranteed to run at least once and then having it drop the
> qemu mutex long enough for the TCG thread to acquire it is the purpose
> of the qemu_fair_mutex.

That doesn't compute - the iothread doesn't hog the global lock (it
sleeps most of the time, and drops the lock while sleeping), so the
iothread cannot starve out tcg.

On the other hand, tcg does hog the global lock, so it needs to be made
to give it up so that the iothread can run, as in my completion example
above.

I think the abstraction we need here is a priority lock, with higher
priority given to the iothread.  A lock() operation that takes
precedence would atomically signal the current owner to drop the lock.

Under kvm we'd use a normal mutex, so the vcpu threads wouldn't need to
take the extra mutex.

-- 
error compiling committee.c: too many arguments to function