Message-ID: <52812591.3040903@redhat.com>
Date: Mon, 11 Nov 2013 19:44:33 +0100
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v2 0/3] Make thread pool implementation modular
To: Alex Bligh
Cc: Kevin Wolf, Liu Ping Fan, Anthony Liguori, Stefan Hajnoczi, Jeff Cody, Michael Tokarev, qemu-devel@nongnu.org, Markus Armbruster, malc, Stefan Hajnoczi, Stefan Weil, Matthias Brugger, Asias He, Luiz Capitulino, Andreas Färber, Eduardo Habkost
References: <1383560924-15788-1-git-send-email-matthias.bgg@gmail.com> <20131105132509.GC16457@stefanha-thinkpad.redhat.com> <20131111124329.GA1036@stefanha-thinkpad.redhat.com> <52811B60.8080202@redhat.com> <70557888-4FD2-41C9-8805-2BC2DC154F07@alex.org.uk>
In-Reply-To: <70557888-4FD2-41C9-8805-2BC2DC154F07@alex.org.uk>

On 11/11/2013 19:32, Alex Bligh wrote:
>
> On 11 Nov 2013, at 18:01, Paolo Bonzini wrote:
>
>> On 11/11/2013 18:59, Alex Bligh wrote:
>>>> Why is it necessary to push this task down into the host? I don't
>>>> understand the advantage of this approach except that maybe it works
>>>> around certain misconfigurations, I/O scheduler quirks, or plain old
>>>> bugs - all of which should be investigated and fixed at the source
>>>> instead of adding another layer of code to mask them.
>>>
>>> I can see an argument why a guest with two very differently
>>> performing disks attached might be best served by two worker
>>> threads, particularly if one such thread was in part CPU bound
>>> (inventing this use case is left as an exercise for the reader).
>>
>> In most cases you want to use aio=native anyway, and then the QEMU
>> thread pool is entirely bypassed.
>
> 'most cases' - really? I thought anything using either qcow2 or
> ceph won't support that?

qcow2 works very well with aio=native.  ceph, libiscsi, gluster, etc.
will not support aio=native indeed, but then they won't use the thread
pool either, so I wasn't thinking about them (only files and block
devices).

Paolo
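For readers following the thread: the aio=native vs. thread-pool distinction Paolo describes is selected per drive on the QEMU command line. A minimal sketch, assuming a file-backed qcow2 image (the image path and machine type here are placeholders, not from the thread; note that aio=native on files generally requires cache=none, i.e. O_DIRECT):

```shell
# Linux-native AIO path: I/O is submitted via the kernel's io_submit
# interface, bypassing QEMU's POSIX worker-thread pool entirely.
qemu-system-x86_64 \
    -drive file=/path/to/disk.qcow2,format=qcow2,if=virtio,cache=none,aio=native

# Thread-pool path (the default, aio=threads): I/O requests are handed
# to QEMU's worker threads instead.  Protocol drivers such as ceph or
# gluster use neither mechanism, as discussed above.
qemu-system-x86_64 \
    -drive file=/path/to/disk.qcow2,format=qcow2,if=virtio,aio=threads
```

This is only a configuration sketch for context, not a claim about the patch series under discussion.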