From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 17 Oct 2008 15:14:42 +0200
From: Jens Axboe
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Message-ID: <20081017131441.GJ19428@kernel.dk>
References: <48EE38B9.2050106@codemonkey.ws> <20081010081157.GA13431@volta.aurel32.net> <48EF49D8.3090002@codemonkey.ws> <200810101353.32933.paul@codesourcery.com> <48EF5EC0.1020809@codemonkey.ws> <48EF6459.6090404@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <48EF6459.6090404@redhat.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: Chris Wright, Mark McLoughlin, kvm-devel, Laurent Vivier, Ryan Harper, Paul Brook

On Fri, Oct 10 2008, Avi Kivity wrote:
> Anthony Liguori wrote:
> >>
> >> This isn't entirely true. With IDE devices you don't have command
> >> queueing, so it's easy for a large write to stall subsequent reads
> >> for a relatively long time. I'm not sure how much this affects
> >> qemu, but I've definitely seen it happening on real hardware.
> >>
> >
> > I think that suggests we should have a cache=wb option and, if people
> > report slowdowns with IDE, we can observe whether cache=wb helps. My
> > suspicion is that it won't have a practical impact, because as long
> > as the operations are asynchronous (via DMA), you're getting
> > native-like performance.
> >
> > My bigger concern is synchronous IO operations, because then a guest
> > VCPU gets far less time to run, and that may have a cascading effect
> > on performance.
>
> IDE is limited to 256 sectors per transaction, or 128KB. If a sync
> transaction takes 5 ms, then your write rate is limited to 25 MB/sec.
> It's much worse if you're allocating qcow2 data, so each transaction is
> several sync writes.

No it isn't: most IDE drives support LBA48, which raises that limit to
64K sectors, or 32MB per transaction.

-- 
Jens Axboe
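As a quick sanity check on the figures traded in this thread, the per-command limits and the synchronous-write bound can be computed directly. This is a sketch assuming 512-byte sectors and Avi's 5 ms per synchronous transaction; the names are illustrative, not from any QEMU source:

```python
SECTOR = 512  # bytes per sector (assumed; the standard size for IDE disks)

# Pre-LBA48 IDE: 8-bit sector count field, so at most 256 sectors per command.
lba28_max = 256 * SECTOR          # 131072 bytes = 128 KiB
# LBA48: 16-bit sector count field, so at most 65536 sectors per command.
lba48_max = 65536 * SECTOR        # 33554432 bytes = 32 MiB

# Avi's bound: one synchronous 128 KiB transaction every 5 ms.
rate_bytes_per_sec = lba28_max / 0.005

print(lba28_max // 1024, "KiB")            # 128 KiB per command without LBA48
print(lba48_max // (1024 * 1024), "MiB")   # 32 MiB per command with LBA48
print(rate_bytes_per_sec / 1e6, "MB/s")    # ~26.2 MB/s, i.e. the ~25 MB/sec quoted
```

With LBA48 the same 5 ms overhead is amortized over a transfer up to 256 times larger, which is why Jens's correction matters for the throughput argument.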