Date: Sun, 12 Oct 2008 14:59:28 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Message-ID: <48F25720.9010306@codemonkey.ws>
In-Reply-To: <48F24320.9010201@redhat.com>
To: Avi Kivity
Cc: Chris Wright, Mark McLoughlin, kvm-devel, Laurent Vivier,
    qemu-devel@nongnu.org, Ryan Harper

Avi Kivity wrote:
> But would increase latency, memory bus utilization, and cpu overhead.
>
> In the cases where the page cache buys us something (host page cache
> significantly larger than guest size), that's understandable. But for
> the other cases, why bother? Especially when many systems don't have
> this today.
>
> Let me phrase this another way: is there an argument against O_DIRECT?

It slows down any user who frequently restarts virtual machines. It also
slows down total system throughput when multiple virtual machines share a
single disk.

The latter point is my primary concern, because I expect disk sharing to
become common in some form in the future (either via common QCOW base
images or via CAS).

I'd like to see a benchmark demonstrating that O_DIRECT improves overall
system throughput in any scenario today. I just don't buy that the cost
of the extra copy is significant today, since the CPU cache is already
polluted.

I think the burden of proof is on O_DIRECT, because it's quite simple to
demonstrate where it hurts performance (just measure the time it takes to
do two consecutive boots of the same image).

> In a significant fraction of deployments it will be both simpler and
> faster.

I think this is speculative. Is there any performance data to back it up?

Regards,

Anthony Liguori
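
[Illustrative sketch, not part of the original message: a minimal C program
along the lines of the comparison suggested above, assuming a Linux host. It
reads the same disk image twice through the page cache and twice with
O_DIRECT and prints the elapsed time for each pass; with a warm cache the
second buffered pass should be far faster, while both O_DIRECT passes go to
disk. The 1 MiB buffer size and 4 KiB alignment are arbitrary choices, not
values taken from the thread.]

/* cache_vs_odirect.c: time repeated sequential reads of a disk image,
 * buffered vs. O_DIRECT. Sketch only, compile with: cc -O2 cache_vs_odirect.c */
#define _GNU_SOURCE             /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)      /* 1 MiB; multiple of the block size for O_DIRECT */

static double read_whole_file(const char *path, int extra_flags)
{
    int fd = open(path, O_RDONLY | extra_flags);
    if (fd < 0) { perror("open"); exit(1); }

    void *buf;
    if (posix_memalign(&buf, 4096, BUF_SIZE) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        exit(1);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, BUF_SIZE) > 0)
        ;                       /* discard the data; only the timing matters */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <disk-image>\n", argv[0]);
        return 1;
    }

    /* Buffered: the second pass is largely served from the host page cache. */
    printf("buffered pass 1: %.2fs\n", read_whole_file(argv[1], 0));
    printf("buffered pass 2: %.2fs\n", read_whole_file(argv[1], 0));

    /* O_DIRECT: both passes bypass the page cache and hit the disk. */
    printf("O_DIRECT pass 1: %.2fs\n", read_whole_file(argv[1], O_DIRECT));
    printf("O_DIRECT pass 2: %.2fs\n", read_whole_file(argv[1], O_DIRECT));
    return 0;
}

[For a fair first pass one would also drop the host caches beforehand, e.g.
by writing 3 to /proc/sys/vm/drop_caches as root.]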