From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4A3F7E32.8090905@redhat.com>
Date: Mon, 22 Jun 2009 14:50:58 +0200
From: Kevin Wolf
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH] block-raw: Make cache=off default again
References: <1245669483-7076-1-git-send-email-kwolf@redhat.com> <20090622113524.GA13583@lst.de> <4A3F6EA3.2010303@redhat.com> <4A3F7139.20401@redhat.com> <4A3F79C0.6000804@redhat.com> <4A3F7B87.6000605@redhat.com>
In-Reply-To: <4A3F7B87.6000605@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Christoph Hellwig , qemu-devel@nongnu.org

Avi Kivity wrote:
> On 06/22/2009 03:32 PM, Kevin Wolf wrote:
>>> Were your refcount-combining patches merged? I don't see them, and
>>> performance will suck without them.
>>>
>> They were merged some weeks ago, at least if a980c98c is what you mean.
>> I don't have any patches for inclusion that aren't merged yet (except
>> for this one, obviously).
>>
>
> I meant that, yes. Missed it going in.
>
> We still have a read-modify-write when extending an image, but I guess
> we're pretty close now, so it's worthwhile to try a guest install with
> cache=off.
As long as we don't have overlapping requests, the RMW is basically a memset, so at least with IDE this shouldn't hurt too much. What happens with virtio I still need to understand. Obviously, as soon as virtio decides to fall back to 4k requests, performance becomes terrible. And if it doesn't, we can still get concurrent requests to the same cluster, resulting in a real RMW. (However, if it's only one RMW remaining, that's kind of okay - last week's patches have removed some more of them in the cluster allocation path...)

Kevin
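[For readers following along: a minimal sketch, not the actual qcow2 code, of why the RMW on a freshly allocated cluster degenerates into a memset. When the cluster lies beyond the old end of the image there is no old data to read back, so the "read" half of read-modify-write reduces to zero-filling the head and tail around the guest's write. The cluster size and helper name here are made up for illustration.]

```c
#include <string.h>

#define CLUSTER_SIZE 65536  /* hypothetical cluster size for this sketch */

/* Fill a freshly allocated cluster: place the guest data at
 * [offset, offset+len) and zero the head and tail.  Because the
 * cluster is past the old end of file, there is nothing to read
 * back -- the RMW is just these two memsets plus the copy. */
static void fill_new_cluster(unsigned char *cluster, size_t offset,
                             const unsigned char *data, size_t len)
{
    memset(cluster, 0, offset);                        /* zero the head */
    memcpy(cluster + offset, data, len);               /* the guest write */
    memset(cluster + offset + len, 0,
           CLUSTER_SIZE - offset - len);               /* zero the tail */
}
```

A real RMW, by contrast, would have to read the existing cluster contents from disk before merging the new data in, which is where concurrent writes to the same cluster start to hurt.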