Date: Wed, 26 Jan 2011 18:38:55 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC][PATCH 11/12] qcow2: Convert qcow2 to use coroutines for async I/O
Message-ID: <4D404E1F.50809@redhat.com>
In-Reply-To: <4D404B9F.7040801@codemonkey.ws>
To: Anthony Liguori
Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel@nongnu.org

On 01/26/2011 06:28 PM, Anthony Liguori wrote:
> On 01/26/2011 10:13 AM, Avi Kivity wrote:
>> Serializing against a global mutex has the advantage that it can be
>> treated as a global lock that is decomposed into fine-grained locks.
>>
>> For example, we can start the code conversion from an explicit async
>> model to a threaded sync model by converting the mutex into a
>> shared/exclusive lock.
>> Operations like read and write take the lock
>> for shared access (and take a fine-grained mutex on the metadata
>> cache entry), while operations like creating a snapshot take the lock
>> for exclusive access. That doesn't work with freeze/thaw.
>
> The trouble with this is that you increase the amount of re-entrance,
> whereas freeze/thaw doesn't.
>
> The code from the beginning of the request to where the mutex is
> acquired will be executed for every single request, even while requests
> are blocked at the mutex acquisition.

It's just a few instructions.

> With freeze/thaw, you freeze the queue and prevent any request from
> starting until you thaw. You only thaw and return control to allow
> another request to execute when you begin executing an asynchronous
> I/O callback.

What do you actually save? The longjmp() to the coroutine code, linking
into the mutex wait queue, and another switch back to the main
coroutine? Given that we don't expect to block often, it hardly seems a
cost worth optimizing.

> I think my previous example was wrong; you really want to do:
>
> qcow2_aio_writev() {
>     coroutine {
>         freeze();
>         sync_io();  // existing qcow2 code
>         thaw();
>         // existing non-I/O code
>         bdrv_aio_writev(callback);  // no explicit freeze/thaw needed
>     }
> }
>
> This is equivalent to our existing code because no new re-entrance is
> introduced. The only re-entrancy points are in the
> bdrv_aio_{readv,writev} calls.

This requires you to know which code is sync and which code is async.
My conversion allows you to wrap the code blindly with a mutex and have
it do the right thing automatically. This is most useful where the same
code can be sync or async depending on data (which is the case for
qcow2).

-- 
error compiling committee.c: too many arguments to function