From: Avi Kivity
Date: Thu, 26 Mar 2009 12:57:36 +0200
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
Message-ID: <49CB5FA0.10101@redhat.com>
In-Reply-To: <49CB599D.6000701@eu.citrix.com>

Stefano Stabellini wrote:
> Unfortunately that is not really helpful: after the split done by
> cpu_physical_memory_map the iovector is converted into a buffer in
> bdrv_aio_rw_vector, and then the full length of the buffer is passed
> on to bdrv_aio_write/read for the dma operation.
>
> I need a way to set a maximum limit for the total number of sectors
> in the dma operation, much like blk_queue_max_phys_segments in the
> kernel.
>
> This could also be useful to make sure that we don't allocate bounce
> buffers bigger than a predetermined limit.

If cpu_physical_memory_map() returns NULL, then dma-helpers.c will stop
collecting sg entries and submit the I/O. Tuning that will control how
vectored requests are submitted.

If your problem is specifically with the bdrv_aio_rw_vector bounce
buffer, then note that this is a temporary measure until vectored aio
is in place, through preadv/pwritev and/or linux-aio IO_CMD_PREADV.
You should either convert to that when it is merged, or implement
request splitting in bdrv_aio_rw_vector.

Can you explain your problem in more detail?

--
error compiling committee.c: too many arguments to function
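
For reference, the dma-helpers.c behaviour described above boils down
to a loop like the following. This is only a simplified sketch, not the
actual QEMU source: DMAState, dma_complete_cb, and the field names are
illustrative, and error handling is elided.

/* Illustrative sketch of the dma-helpers.c pattern: map scatter-gather
 * entries until cpu_physical_memory_map() returns NULL, then submit
 * whatever has been collected as one vectored request.
 * DMAState and dma_complete_cb are made-up names. */
static void dma_map_and_submit(DMAState *s)
{
    target_phys_addr_t cur_addr, cur_len;
    void *mem;

    while (s->sg_cur_index < s->sg->nsg) {
        cur_addr = s->sg->sg[s->sg_cur_index].base + s->sg_cur_byte;
        cur_len  = s->sg->sg[s->sg_cur_index].len  - s->sg_cur_byte;

        /* Returns NULL when the guest pages cannot be mapped directly
         * and the bounce buffer is already in use. */
        mem = cpu_physical_memory_map(cur_addr, &cur_len, !s->is_write);
        if (!mem)
            break;              /* stop collecting, submit what we have */

        qemu_iovec_add(&s->iov, mem, cur_len);
        s->sg_cur_byte += cur_len;
        if (s->sg_cur_byte == s->sg->sg[s->sg_cur_index].len) {
            s->sg_cur_byte = 0;
            s->sg_cur_index++;
        }
    }

    /* Submit the (possibly partial) request; the completion callback
     * re-enters this loop to handle the remaining sg entries. */
    s->acb = bdrv_aio_readv(s->bs, s->sector_num, &s->iov,
                            s->iov.size / 512, dma_complete_cb, s);
}

The point where the loop breaks is the natural knob for bounding how
much I/O goes into a single submission.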
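
The request splitting suggested as the alternative could look roughly
like this. Again a sketch under assumptions rather than a patch:
MAX_BOUNCE_SECTORS and submit_bounced_chunk() are invented names, and
the actual bouncing of each iovec slice is left out.

/* Sketch: cap each bounced submission at MAX_BOUNCE_SECTORS so the
 * bounce buffer in bdrv_aio_rw_vector never grows past a fixed limit.
 * MAX_BOUNCE_SECTORS and submit_bounced_chunk() are hypothetical. */
#define MAX_BOUNCE_SECTORS 512            /* e.g. 256 KB per chunk */

static void split_and_submit(BlockDriverState *bs, int64_t sector_num,
                             QEMUIOVector *iov, int nb_sectors,
                             int is_write)
{
    int done = 0;

    while (done < nb_sectors) {
        int n = nb_sectors - done;
        if (n > MAX_BOUNCE_SECTORS)
            n = MAX_BOUNCE_SECTORS;

        /* Bounce only the slice of the iovec covering sectors
         * [done, done + n) and issue one bdrv_aio_read/write on it. */
        submit_bounced_chunk(bs, sector_num + done, iov,
                             (uint64_t)done * 512, n, is_write);
        done += n;
    }
}

This bounds the per-request transfer size, which is the effect the
kernel-style limit mentioned above is after.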