From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1Nwc2I-0001yb-Sp
	for qemu-devel@nongnu.org; Tue, 30 Mar 2010 10:03:18 -0400
Received: from [140.186.70.92] (port=51596 helo=eggs.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43) id 1Nwc2A-0001r0-LO
	for qemu-devel@nongnu.org; Tue, 30 Mar 2010 10:03:16 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.69)
	(envelope-from ) id 1Nwc29-00067Q-4i
	for qemu-devel@nongnu.org; Tue, 30 Mar 2010 10:03:10 -0400
Received: from mx1.redhat.com ([209.132.183.28]:12902)
	by eggs.gnu.org with esmtp (Exim 4.69)
	(envelope-from ) id 1Nwc28-00067I-S9
	for qemu-devel@nongnu.org; Tue, 30 Mar 2010 10:03:09 -0400
Message-ID: <4BB20496.3080102@redhat.com>
Date: Tue, 30 Mar 2010 17:03:02 +0300
From: Avi Kivity
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH -V3 09/32] virtio-9p: Implement P9_TWRITE /
	Thread model in QEMU
References: <1269535420-31206-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
	<1269535420-31206-10-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
	<4BB04A59.1040204@linux.vnet.ibm.com>
	<4BB0C09D.7090204@linux.vnet.ibm.com> <4BB10E30.70908@redhat.com>
	<4BB110B3.8040300@linux.vnet.ibm.com> <4BB1136B.6050506@redhat.com>
	<4BB11A48.4060805@linux.vnet.ibm.com> <4BB1D143.5040308@redhat.com>
	<4BB1F915.6020800@linux.vnet.ibm.com> <4BB1FC83.6000202@redhat.com>
	<4BB202A5.8020105@linux.vnet.ibm.com>
In-Reply-To: <4BB202A5.8020105@linux.vnet.ibm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Anthony Liguori
Cc: ericvh@gmail.com, Kevin Wolf, jvrao, "Aneesh Kumar K.V",
	qemu-devel@nongnu.org

On 03/30/2010 04:54 PM, Anthony Liguori wrote:
> On 03/30/2010 08:28 AM, Avi Kivity wrote:
>>> But that offers no advantage to what we have, which fails the
>>> proof-by-example that threading makes the situation better.
>>
>> It has an advantage: qcow2 is currently synchronous in parts:
>>
>> block/qcow2-cluster.c:  ret = bdrv_write(s->hd, (cluster_offset >> 9) + n_start,
>
> block/qcow2-cluster.c:  ret = bdrv_read(bs->backing_hd, sector_num, buf, n1);
>
> The two of these happen in copy_sectors().  copy_sectors() runs from
> qcow2_alloc_cluster_link_l2() which is called from qcow_aio_write_cb()
> and preallocate().
>
>> block/qcow2.c:  bdrv_write(s->hd, (meta.cluster_offset >> 9) + num - 1, buf, 1);
>
> This only happens during creation (for preallocation).
>
>> block/qcow2.c:  bdrv_write(bs, sector_num, buf, s->cluster_sectors);
>> block/qcow2-cluster.c:  ret = bdrv_read(s->hd, coffset >> 9, s->cluster_data, nb_csectors);
>
> These two are only for compressed images.

Are compressed images supposed to block vcpus?

> So it looks like we really only have one operation
> (qcow2_alloc_cluster_link_l2) that blocks.  Do we really think that
> it's sufficiently difficult to make this function asynchronous that it
> justifies threading the block layer?

There are also tons of bdrv_pread()s and bdrv_pwrite()s.  Isn't growing
some of the tables synchronous?  How about creating a snapshot?

If it were just one operation in qcow2, threading would be overkill.
But threading also fixes all of the other format drivers, and makes
working on qcow2 easier.

-- 
Do not meddle in the internals of kernels, for they are subtle and
quick to panic.