From: Kevin Wolf
Date: Tue, 21 Sep 2010 11:13:48 +0200
Subject: Re: [Qemu-devel] [RFC] block-queue: Delay and batch metadata writes
To: Anthony Liguori
Cc: qemu-devel@nongnu.org
Message-ID: <4C98774C.9030506@redhat.com>
In-Reply-To: <4C978313.9060402@codemonkey.ws>
References: <1284991010-10951-1-git-send-email-kwolf@redhat.com> <4C977028.3050602@codemonkey.ws> <4C9778EC.9060704@redhat.com> <4C978313.9060402@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

On 20.09.2010 17:51, Anthony Liguori wrote:
> On 09/20/2010 10:08 AM, Kevin Wolf wrote:
>>
>>> If you're comfortable with a writeback cache for metadata, then you
>>> should also be comfortable with a writeback cache for data, in which
>>> case cache=writeback is the answer.
>>>
>> Well, there is a difference: we don't pollute the host page cache with
>> guest data, and we don't get a virtual "disk cache" as big as the host
>> RAM, but only a very limited queue of metadata.
>
> Would it be a mortal sin to open the file twice and have a cache=none
> version for data and cache=writeback for metadata?

Is the behaviour of this well-defined and portable?

> The two definitely aren't consistent with each other, but I think the
> whole point here is that we don't care.

What we do care about is the ordering between data and metadata writes:
for example, when doing COW, the copy of the data must have completed
before we update the L2 table.

Also, what happens (in qcow2) when we free a data cluster and reuse it
as metadata (or the other way round)? Does this work, or is there a
chance that the old content is resurrected?

Kevin
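
To illustrate the "open the file twice" idea discussed above, here is a
minimal POSIX-level sketch, assuming a Linux host where cache=none
corresponds to O_DIRECT and cache=writeback to ordinary buffered I/O.
The image path is a placeholder and this is not QEMU code; whether
mixing O_DIRECT and page-cache I/O on the same file stays coherent is
exactly the open question raised above (open(2) documents caveats).

/* Hypothetical sketch, not QEMU code: one file descriptor per cache
 * mode on the same image file. */
#define _GNU_SOURCE             /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "image.qcow2" is a placeholder path. */
    int data_fd = open("image.qcow2", O_RDWR | O_DIRECT); /* like cache=none */
    int meta_fd = open("image.qcow2", O_RDWR);            /* like cache=writeback */

    if (data_fd < 0 || meta_fd < 0) {
        perror("open");
        return 1;
    }

    /* Guest data would go through data_fd, bypassing the host page
     * cache; qcow2 metadata would go through meta_fd and could be
     * flushed later in batches.  Nothing here guarantees coherency
     * between the two views of the file. */

    close(data_fd);
    close(meta_fd);
    return 0;
}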
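
The COW ordering constraint mentioned above can be sketched in the same
POSIX terms. The helper name, the two-descriptor split, and the
offset/size parameters are assumptions for illustration, not the actual
qcow2 implementation:

/* Hypothetical sketch of the COW ordering constraint, not qcow2 code.
 * Returns 0 on success, -1 on error. */
#include <stdint.h>
#include <unistd.h>

int cow_update(int data_fd, int meta_fd,
               const void *cluster, size_t cluster_size,
               off_t new_cluster_off,
               uint64_t l2_entry, off_t l2_entry_off)
{
    /* 1. Write the copied data cluster to its new location. */
    if (pwrite(data_fd, cluster, cluster_size, new_cluster_off)
            != (ssize_t)cluster_size) {
        return -1;
    }

    /* 2. Make sure the data write has completed before any metadata
     *    that references it can reach the disk. */
    if (fdatasync(data_fd) < 0) {
        return -1;
    }

    /* 3. Only now update the L2 table entry to point at the new
     *    cluster.  With a delayed/batched metadata queue this write
     *    may sit in the cache for a while, which is harmless: the
     *    data it points to is already on disk. */
    if (pwrite(meta_fd, &l2_entry, sizeof(l2_entry), l2_entry_off)
            != sizeof(l2_entry)) {
        return -1;
    }

    return 0;
}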