From: Gerd Hoffmann
Date: Wed, 08 Oct 2008 00:15:11 +0200
Subject: Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio
To: qemu-devel@nongnu.org
Message-ID: <48EBDF6F.8080602@redhat.com>
References: <48EB8393.8020005@redhat.com> <48EB87EE.9050003@codemonkey.ws> <48EBC932.9010808@redhat.com> <48EBD2C8.3040101@codemonkey.ws>
In-Reply-To: <48EBD2C8.3040101@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

Anthony Liguori wrote:
> Gerd Hoffmann wrote:
>> Anthony Liguori wrote:
>>>> Are there plans to
>>>> support vectored block requests with the thread pool
>>>> implementation?
>>>
>>> Yes, that's the primary reason for doing a new thread pool
>>> implementation.
>>
>> Cool.
>>
>>> Of course, we need a zero-copy DMA API before such a
>>> thing would make sense.
>>
>> Hmm, a quick check of the IDE code indeed shows a copy happening there.
>> Why is it needed?  My xen disk backend doesn't copy data.  Does that
>> mean I might have done something wrong?  Does the virtio backend copy
>> data too?
>
> It does now, because the cost of splitting up the AIO request for each
> element of the scatter/gather list was considerably higher than the cost
> of copying the data to a linear buffer.

Ok, so I guess virtio will likely stop doing that soon?  With the aio
thread pool, and even more with a vectored aio api, the need for that
should go away ...

> You can only avoid doing a copy if you do something like phys_ram_base +
> PA.

I actually do this ...

void *xenner_mfn_to_ptr(xen_pfn_t pfn)
{
    ram_addr_t offset;

    offset = cpu_get_physical_page_desc(pfn << PAGE_SHIFT);
    return (void *)phys_ram_base + offset;
}

... which should keep working even in case there are holes in the guest
PA address space, and thus guest_pa != phys_ram_base offset.

> From an architectural perspective, this is not ideal since it
> doesn't allow for things like IOMMU emulation.  What we need is a
> zero-copy API at the PCI level.

Sure, for IDE+SCSI emulation.  For paravirtual drivers it shouldn't be
an issue.

cheers,
  Gerd