Date: Thu, 26 Mar 2009 10:31:57 +0000
From: Stefano Stabellini
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
Message-ID: <49CB599D.6000701@eu.citrix.com>
In-Reply-To: <49CB5793.4030006@redhat.com>
References: <49CA3591.1010309@eu.citrix.com> <49CA4C3D.1070705@redhat.com> <49CA59AE.8060605@eu.citrix.com> <49CA5F9F.5040203@redhat.com> <49CA60BA.5060704@eu.citrix.com> <49CA6E4A.4080408@eu.citrix.com> <49CB5793.4030006@redhat.com>

Avi Kivity wrote:
> Stefano Stabellini wrote:
>> I checked the IDE driver in the kernel and it assumes that the maximum
>> number of sectors is either 256 or 64K depending on LBA support,
>> exactly as qemu does.
>>
>> So now my question is: if I want to reduce the maximum DMA request
>> size inside qemu, given that I must correctly fill the guest-provided
>> sg list, is it OK to use IDE_DMA_BUF_SECTORS in dma_buf_prepare as I
>> have done in my patch?
>>
>> I don't see any other possible solution, but if you have any other
>> suggestion you are welcome to let me know.
>>
>
> Look at the DMA API (dma-helpers.c), which already knows how to split
> large dma requests. Splitting is controlled by
> cpu_physical_memory_map() (which I'm guessing is your real limitation),
> so you might want to look at that.
>
> The advantage of this approach is that it will apply to scsi and virtio
> once they are ported to use the DMA API.
>

Unfortunately that is not really helpful: after the split done by
cpu_physical_memory_map(), the iovec is converted into a single bounce
buffer in bdrv_aio_rw_vector(), and the full length of that buffer is
then passed on to bdrv_aio_write/read for the DMA operation.

I need a way to set a maximum limit on the total number of sectors in a
DMA operation, much like blk_queue_max_phys_segments in the kernel.
This could also be useful to make sure that we don't allocate bounce
buffers bigger than a predetermined limit.
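
To make the idea concrete, here is a rough sketch of the kind of
splitting I mean. It is plain, self-contained C rather than code against
the real qemu structures: MAX_DMA_SECTORS stands in for a per-device cap
such as IDE_DMA_BUF_SECTORS, and submit_io() is a placeholder for
whatever actually issues the request to the block layer.

    /* Hypothetical sketch: break one large request into chunks that
     * never exceed a per-device sector cap, the same idea as the
     * per-queue limits in the Linux block layer.  Not real qemu code. */
    #include <stdint.h>

    #define MAX_DMA_SECTORS 256   /* assumed cap, e.g. IDE_DMA_BUF_SECTORS */

    typedef void submit_io_fn(int64_t sector, int nb_sectors, void *opaque);

    static void split_dma_request(int64_t sector, int nb_sectors,
                                  submit_io_fn *submit_io, void *opaque)
    {
        while (nb_sectors > 0) {
            int n = nb_sectors;
            if (n > MAX_DMA_SECTORS) {
                n = MAX_DMA_SECTORS;   /* never exceed the device limit */
            }
            submit_io(sector, n, opaque);
            sector += n;
            nb_sectors -= n;
        }
    }

With something like this sitting between the sg-list preparation and the
actual bdrv_aio_* call, the bounce buffer for each chunk would also be
bounded by the same limit.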