From: Paolo Bonzini
Date: Thu, 05 Dec 2013 10:40:12 +0100
Subject: Re: [Qemu-devel] [PATCH v2 0/3] Make thread pool implementation modular
To: Matthias Brugger
Cc: Kevin Wolf, Liu Ping Fan, Stefan Hajnoczi, Alex Bligh, Stefan Hajnoczi,
    Jeff Cody, Michael Tokarev, qemu-devel@nongnu.org, Markus Armbruster,
    malc, Anthony Liguori, Stefan Weil, Asias He, Luiz Capitulino,
    Andreas Färber, Eduardo Habkost

On 05/12/2013 09:40, Matthias Brugger wrote:
> CFQ the state of the art I/O scheduler

The deadline scheduler typically provides much better performance for
server usage (including hosting VMs). It doesn't support some features
such as I/O throttling via cgroups, but QEMU now has a very good
throttling mechanism implemented by Benoit Canet.

I suggest that you repeat your experiments using all six configurations
(a command-line sketch for these is included at the end of this message):

- deadline scheduler with aio=native
- deadline scheduler with aio=threads
- deadline scheduler with aio=threads + your patches
- CFQ scheduler with aio=native
- CFQ scheduler with aio=threads
- CFQ scheduler with aio=threads + your patches

> In former versions, there was some work done to merge requests in
> QEMU, but I don't think it was very useful, because you don't know
> what the layout of the image file looks like on the physical disk.
> Anyway, I think those code parts have been removed.

This is still there for writes, in bdrv_aio_multiwrite. Only
virtio-blk.c uses it, but it's there.

> The only layer where you really know how the blocks of the virtual
> disk image are distributed over the disk is the block layer of the
> host. So you have to do the block request merging there. With the
> new architecture this would come for free, as you can map every
> thread from a guest to one worker thread of QEMU.

This also assumes a relatively "dumb" guest. If the guest itself uses
a thread pool, you would have exactly the same problem, wouldn't you?

Paolo
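
For reference, a minimal sketch of how the six configurations above
could be set up. It assumes the image is guest.img, backed by /dev/sda
on the host, and that the actual benchmark workload (e.g. fio) runs
inside the guest; the device path, image name, and memory size are
placeholders, not values taken from this thread.

  # select the host I/O scheduler for the disk backing the image
  echo deadline > /sys/block/sda/queue/scheduler   # or: echo cfq > ...

  # Linux AIO backend (aio=native requires O_DIRECT, hence cache=none)
  qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=guest.img,if=virtio,cache=none,aio=native

  # thread-pool backend; run once unpatched and once with the patches
  qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=guest.img,if=virtio,cache=none,aio=threads

Combining the two scheduler settings with the three QEMU variants
(native, threads, threads + patches) yields the six data points.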