Date: Fri, 2 May 2008 18:18:23 +0100
From: Jamie Lokier
To: qemu-devel@nongnu.org
Subject: Re: [kvm-devel] [Qemu-devel] Re: [PATCH 1/3] Refactor AIO interface to allow other AIO implementations
Message-ID: <20080502171823.GA1240@shareable.org>
References: <20080420154943.GB14268@shareable.org>
 <480B8EDC.6060507@qumranet.com>
 <20080420233913.GA23292@shareable.org>
 <480C36A3.6010900@qumranet.com>
 <20080421121028.GD4193@shareable.org>
 <480D9D74.5070801@qumranet.com>
 <20080422142847.GC4849@shareable.org>
 <480DFE43.8060509@qumranet.com>
 <20080422153616.GC10229@shareable.org>
 <69304d110805020937l2f867cadrd7c906b8eb77b3f6@mail.gmail.com>
In-Reply-To: <69304d110805020937l2f867cadrd7c906b8eb77b3f6@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org

Antonio Vargas wrote:
> Btw, regarding QEMU: QEMU gets requests _after_ sorting by the guest's
> elevator, then submits them to the host's elevator.  If the guest and
> host elevators are both configured 'anticipatory', do the anticipatory
> delays add up?
>
> Anticipatory is non-work-conserving.  If the data is going to end up
> passing through the host's deadline scheduler, it is probably better to
> run the guest with deadline, or maybe even noop, since the guest doesn't
> really know anything about the real disk locations of the data.

That makes sense - especially for formats like qcow and snapshots, the
guest has very little knowledge of access timings.

It's a bit like a database accessing a large file: the database tries
to schedule and merge I/O requests internally before sending them to
the kernel.  It doesn't know anything about the layout of disk blocks
in the file, but it can guess that nearby accesses are more likely to
involve lower seek times than far-apart accesses.

There is still one reason for guests to do a little I/O scheduling,
and that's to merge adjacent requests into fewer ops passing through
the guest/host interface.

-- Jamie
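
P.S. To make the merging point concrete, here is a rough sketch (not
QEMU code; the struct and function names are made up) of coalescing
adjacent requests so fewer ops cross the guest/host boundary.  It
assumes the request list is already sorted by starting sector:

#include <stddef.h>
#include <stdint.h>

struct req {
    uint64_t sector;     /* first sector of the request */
    uint32_t nsectors;   /* length in sectors */
};

/* Fold requests whose sector ranges touch into single larger requests.
 * Returns the number of requests left in the array. */
size_t merge_adjacent(struct req *reqs, size_t n)
{
    size_t out = 0;

    for (size_t i = 0; i < n; i++) {
        if (out > 0 &&
            reqs[out - 1].sector + reqs[out - 1].nsectors == reqs[i].sector) {
            /* Contiguous with the previous request: grow it instead of
             * emitting a new one. */
            reqs[out - 1].nsectors += reqs[i].nsectors;
        } else {
            reqs[out++] = reqs[i];
        }
    }
    return out;
}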