From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <481B54DC.6040409@codemonkey.ws>
Date: Fri, 02 May 2008 12:52:28 -0500
From: Anthony Liguori
Subject: Re: [kvm-devel] [Qemu-devel] Re: [PATCH 1/3] Refactor AIO interface to allow other AIO implementations
In-Reply-To: <20080502171823.GA1240@shareable.org>
To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org

Jamie Lokier wrote:
> That makes sense - especially for formats like qcow and snapshots, the
> guest has very little knowledge of access timings.
>
> It's a bit like a database accessing a large file: the database tries
> to schedule and merge I/O requests internally before sending them to
> the kernel. It doesn't know anything about the layout of disk blocks
> in the file, but it can guess that nearby accesses are more likely to
> involve lower seek times than far-apart accesses.
>
> There is still one reason for guests to do a little I/O scheduling,
> and that's to merge adjacent requests into fewer ops passing through
> the guest/host interface.

FWIW, while optimizing the kernel driver for virtio-blk, I've found
that using the no-op I/O scheduler in the guest helps a fair bit. As
long as you're using a reasonably sized ring, the back-end can merge
adjacent requests itself, which also helps.

Regards,

Anthony Liguori

> -- Jamie
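The back-end merging described above can be sketched roughly as follows. This is an illustrative toy, not QEMU's or virtio-blk's actual code: requests are modeled as hypothetical (sector, length) pairs pulled off the ring, sorted, and coalesced when one ends exactly where the next begins.

```python
def merge_adjacent(requests):
    """Coalesce contiguous block requests.

    requests: list of (sector, nsectors) pairs, e.g. as drained
    from a virtio ring. Returns a shorter list in which runs of
    adjacent requests have been merged into single larger ops.
    """
    merged = []
    for sector, nsectors in sorted(requests):
        # Contiguous with the previous request: extend it in place.
        if merged and merged[-1][0] + merged[-1][1] == sector:
            merged[-1] = (merged[-1][0], merged[-1][1] + nsectors)
        else:
            merged.append((sector, nsectors))
    return merged

# Three 8-sector requests, two of them back-to-back, become two ops:
print(merge_adjacent([(0, 8), (8, 8), (32, 8)]))  # → [(0, 16), (32, 8)]
```

This is also why a no-op scheduler in the guest can suffice: sorting and coalescing still happen, just once, on the host side where the real layout is known.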