From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1MiQxQ-0005kq-Su
	for qemu-devel@nongnu.org; Tue, 01 Sep 2009 06:51:24 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1MiQxM-0005kW-9s
	for qemu-devel@nongnu.org; Tue, 01 Sep 2009 06:51:24 -0400
Received: from [199.232.76.173] (port=57764 helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43) id 1MiQxM-0005kS-70
	for qemu-devel@nongnu.org; Tue, 01 Sep 2009 06:51:20 -0400
Received: from mx1.redhat.com ([209.132.183.28]:23028)
	by monty-python.gnu.org with esmtp (Exim 4.60)
	(envelope-from ) id 1MiQxL-0006T9-ME
	for qemu-devel@nongnu.org; Tue, 01 Sep 2009 06:51:19 -0400
Received: from int-mx03.intmail.prod.int.phx2.redhat.com
	(int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.16])
	by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id n81ApIY4022593
	for ; Tue, 1 Sep 2009 06:51:18 -0400
Message-ID: <4A9CFC64.7050603@redhat.com>
Date: Tue, 01 Sep 2009 12:50:12 +0200
From: Kevin Wolf
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH] qcow2: Order concurrent AIO requests on the
	same unallocated cluster
References: <1251730129-18693-1-git-send-email-kwolf@redhat.com> <4A9CF517.30701@redhat.com>
In-Reply-To: <4A9CF517.30701@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Avi Kivity
Cc: qemu-devel@nongnu.org

Avi Kivity wrote:
> On 08/31/2009 05:48 PM, Kevin Wolf wrote:
>> When two AIO requests write to the same cluster, and this cluster is
>> unallocated, currently both requests allocate a new cluster and the
>> second one merges the first one when it is completed. This means a
>> cluster allocation, a read and a cluster deallocation, which causes
>> some overhead. If we simply let the second request wait until the
>> first one is done, we improve overall performance with AIO requests
>> (specifically, qcow2/virtio combinations).
>>
>> This patch maintains a list of in-flight requests that have allocated
>> new clusters. A second request touching the same cluster is limited so
>> that it either doesn't touch the allocation of the first request (so
>> it can have a non-overlapping allocation) or it waits for the first
>> request to complete.
>
> Can't this cause an even/odd pattern where all even-numbered requests
> run first, then all the odd-numbered requests?
>
> (0 goes to disk, 1 depends on it, 2 depends on 1, which isn't
> allocating now, so it goes to disk, 3 depends on 2, ...)

I guess it can happen in theory; I'm not sure it matters in practice.
Are you worried about image fragmentation? I think we could do something
about it with a cleverer cluster allocation. However, I don't think it's
an argument against this patch. What currently happens isn't much
better: allocate n clusters, free n-1. Almost as effective at producing
fragmentation.

> Do you have performance numbers?

No really detailed numbers. Installation time for RHEL on qcow2/virtio
went down from 34 min to 19 min, though.

Kevin