Date: Mon, 23 Mar 2009 13:29:28 -0400
From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH][RFC] Linux AIO support when using O_DIRECT
Message-ID: <20090323172928.GB29449@infradead.org>
References: <1237823124-6417-1-git-send-email-aliguori@us.ibm.com>
 <49C7B620.8030203@redhat.com> <49C7C392.3030001@codemonkey.ws>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <49C7C392.3030001@codemonkey.ws>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: Avi Kivity, kvm@vger.kernel.org, qemu-devel@nongnu.org

On Mon, Mar 23, 2009 at 12:14:58PM -0500, Anthony Liguori wrote:
> I'd like to see the O_DIRECT bounce buffering removed in favor of the
> DMA API bouncing.  Once that happens, raw_read and raw_pread can
> disappear.  block-raw-posix becomes much simpler.

See my vectored I/O patches for doing the bounce buffering at the optimal
place for the aio path.  Note that from my reading of the qcow/qcow2 code
they might send down unaligned requests, which is something the DMA API
would not help with (a rough sketch of the bounce buffering O_DIRECT forces
in that case is appended below).

For the buffered I/O path we will always have to do some sort of buffering
due to all the partition header reading, etc.  And given that that part
isn't performance critical, my preference would be to keep doing it in
bdrv_pread/write and guarantee the low-level drivers proper alignment.

> We would drop the signaling stuff and have the thread pool use an fd to
> signal.  The big problem with that right now is that it'll cause a
> performance regression for certain platforms until we have the IO thread
> in place.

Talking about signaling, does anyone remember why the Linux signalfd/
eventfd support is only in kvm but not in upstream qemu?  (A minimal
eventfd-based completion sketch is also appended below.)
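
To illustrate the alignment point: with O_DIRECT the buffer address, the
file offset and the length all have to be aligned, so an unaligned request
coming down from qcow/qcow2 has to be bounced through an aligned buffer
roughly like the sketch below.  This is only an illustration under assumed
conventions -- the 512-byte alignment and the helper name
odirect_pread_bounced are made up for the example, not taken from
block-raw-posix.

/*
 * Sketch only: bounce an unaligned read through an aligned buffer so it
 * can go down a file descriptor opened with O_DIRECT.  512-byte alignment
 * is an assumption; the real requirement depends on the device/filesystem.
 */
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 512

ssize_t odirect_pread_bounced(int fd, void *buf, size_t count, off_t offset)
{
    /* Fully aligned requests can go straight down. */
    if (((uintptr_t)buf % SECTOR_SIZE) == 0 &&
        (offset % SECTOR_SIZE) == 0 &&
        (count % SECTOR_SIZE) == 0)
        return pread(fd, buf, count, offset);

    /* Otherwise read the covering aligned range into a bounce buffer. */
    off_t aligned_off = offset & ~(off_t)(SECTOR_SIZE - 1);
    size_t shift = offset - aligned_off;
    size_t aligned_len = (shift + count + SECTOR_SIZE - 1)
                         & ~(size_t)(SECTOR_SIZE - 1);
    void *bounce;
    ssize_t ret;

    if (posix_memalign(&bounce, SECTOR_SIZE, aligned_len))
        return -ENOMEM;

    ret = pread(fd, bounce, aligned_len, aligned_off);
    if (ret >= 0) {
        /* Copy out only the bytes the caller actually asked for. */
        size_t got = ret > (ssize_t)shift ? (size_t)ret - shift : 0;
        if (got > count)
            got = count;
        memcpy(buf, (char *)bounce + shift, got);
        ret = got;
    }
    free(bounce);
    return ret;
}

Writes need the same treatment plus a read-modify-write of the partial
sectors at the edges, which is why pushing the alignment guarantee up into
bdrv_pread/write keeps the low-level drivers simple.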
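
And on the signaling question, this is roughly what "use an fd to signal"
looks like with eventfd: worker threads bump the eventfd counter when a
request completes, and the main loop picks completions up through its
normal poll()/select() loop instead of a signal handler.  Again just a
self-contained sketch with made-up names, not code from qemu or kvm:

/*
 * Minimal eventfd completion notification: one worker thread, one
 * poll()-based "main loop".  Build with -lpthread on Linux.
 */
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int completion_fd;

/* Worker side: do the blocking I/O, then kick the eventfd. */
static void *worker(void *arg)
{
    uint64_t one = 1;

    /* ... the blocking preadv/pwritev would happen here ... */
    if (write(completion_fd, &one, sizeof(one)) != sizeof(one))
        perror("eventfd write");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct pollfd pfd;
    uint64_t completed;

    completion_fd = eventfd(0, 0);
    if (completion_fd < 0) {
        perror("eventfd");
        return 1;
    }

    pthread_create(&tid, NULL, worker, NULL);

    /* Main loop side: the completion source is just another pollable fd. */
    pfd.fd = completion_fd;
    pfd.events = POLLIN;
    if (poll(&pfd, 1, -1) == 1 && (pfd.revents & POLLIN)) {
        if (read(completion_fd, &completed, sizeof(completed)) ==
            sizeof(completed))
            printf("%llu request(s) completed\n",
                   (unsigned long long)completed);
    }

    pthread_join(tid, NULL);
    close(completion_fd);
    return 0;
}

The attraction is that the completion source becomes an ordinary fd, so it
slots into whatever main loop (or a future IO thread) already watches the
other file descriptors.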