Date: Tue, 1 Sep 2009 14:26:47 +0300
From: Gleb Natapov
Subject: Re: [Qemu-devel] [PATCH] qcow2: Order concurrent AIO requests on the same unallocated cluster
Message-ID: <20090901112647.GZ30093@redhat.com>
References: <1251730129-18693-1-git-send-email-kwolf@redhat.com> <4A9CF517.30701@redhat.com> <4A9CFC64.7050603@redhat.com> <4A9D047E.1040208@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4A9D047E.1040208@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Kevin Wolf, qemu-devel@nongnu.org

On Tue, Sep 01, 2009 at 02:24:46PM +0300, Avi Kivity wrote:
> On 09/01/2009 01:50 PM, Kevin Wolf wrote:
> >
> >> Can't this cause an even/odd pattern where all even-numbered requests
> >> run first, then all the odd-numbered requests?
> >>
> >> (0 goes to disk, 1 depends on it, 2 depends on 1, which isn't allocating
> >> now, so it goes to disk, 3 depends on 2, ...)
> >
> > I guess it can happen in theory, not sure if it matters in practice.
>
> We should check then.
>
> > You are worried about image fragmentation? I think we could do
> > something about it with a cleverer cluster allocation.
>
> Not only image fragmentation - the odd requests will require RMW.
>
> > However, I don't think it's an argument against this patch. What
> > currently happens isn't much better: allocate n clusters, free n-1.
> > Almost as good in producing fragmentation.
>
> The patch introduces complexity, so it makes working towards a
> non-fragmenting solution harder. I'm not saying it could be
> simplified; it's a side effect of using a state machine design.
>
> >> Do you have performance numbers?
> >
> > No really detailed numbers. Installation time for RHEL on qcow2/virtio
> > went down from 34 min to 19 min, though.
>
> That's very impressive. cache=none or cache=wt?

And how does it compare with raw?

--
			Gleb.
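
For reference, a minimal standalone sketch of the even/odd pattern discussed above. This is not Kevin's patch code; it is a toy model, under the assumption that a new request to the same unallocated cluster waits only on a request that is itself allocating, while a request that is merely queued behind another does not count, so the next request starts its own allocation. All names here are made up for illustration.

/*
 * Toy model of the dependency rule discussed in the thread.
 * Request i checks only request i-1: it queues behind i-1 if i-1 is
 * actually allocating, otherwise it allocates (goes to disk) itself.
 */
#include <stdio.h>

#define NUM_REQUESTS 8

int main(void)
{
    int depends_on[NUM_REQUESTS]; /* -1 means the request allocates itself */

    for (int i = 0; i < NUM_REQUESTS; i++) {
        int prev = i - 1;
        /* Wait on the previous request only if it is actually allocating. */
        if (prev >= 0 && depends_on[prev] == -1) {
            depends_on[i] = prev;   /* queued behind the allocating request */
        } else {
            depends_on[i] = -1;     /* nobody allocating: go to disk now */
        }
    }

    for (int i = 0; i < NUM_REQUESTS; i++) {
        if (depends_on[i] == -1) {
            printf("request %d: allocates and goes to disk\n", i);
        } else {
            printf("request %d: waits on request %d\n", i, depends_on[i]);
        }
    }
    return 0;
}

Running this prints request 0 allocating, 1 waiting on 0, 2 allocating, 3 waiting on 2, and so on: the even-numbered requests go to disk first and the odd-numbered ones wait, which is exactly the pattern Avi describes. In real code the check would be against the set of in-flight allocating requests overlapping the cluster rather than a simple index; the sketch only shows why the alternation falls out of "wait only on an allocating request".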