From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <49BD19CF.7030807@redhat.com>
Date: Sun, 15 Mar 2009 17:07:59 +0200
From: Avi Kivity
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 2/6] change vectored block I/O API to plain iovecs
References: <20090314192701.GA3497@lst.de> <20090314192828.GB3717@lst.de>
 <49BCF79D.8050103@redhat.com> <20090315144843.GC30986@lst.de>
In-Reply-To: <20090315144843.GC30986@lst.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

Christoph Hellwig wrote:
>> virtio gets its iovecs through a hacky (and incorrect, try >=4G)
>> method.  IMO virtio should be fixed to use the dma api, at which point
>> it will start to use QEMUIOVector anyway,
>>
>
> I would not call it hacky and incorrect.  And the current dma API
> certainly won't work due to the layering between the generic virtio
> layer and virtio-block.
>

It is not incorrect, as Anthony pointed out.  It's hacky in that it
touches deep qemu internals.

The dma api has two layers; while the upper layer (in dma-helpers.c) is
too high level for virtio, the lower level ought to work.

>> Internally yes, but why should bdrv_* not use QEMUIOVector?  That API
>> isn't very interested in posix.
>>
>
> Because it makes life a lot easier.  We already pass the length in
> sector units anyway.  QEMUIOVector could replace that rather than
> duplicate it as it currently does, but that would mean another
> translation between byte and sector units at the block level.  And
> then comes the issue of feeding in iovecs - there is the case of
> iovecs coming from other layers like virtio-blk

virtio-blk could simply gather its iovecs through QEMUIOVector.

> and the more important one of just creating
> one-entry static iovecs in many places.  These would mean another
> dynamic allocation and lots of API churn.
>

  QEMUStaticIOVector qiov;

  qemu_static_iovector_init(&qiov, data, len);
  some_random_function(&qiov.iov, ...);

Of course we must take care not to free non-owned iovecs.

> Keep QEMUIOVector as a nice abstraction for the memory-management
> issues in dma-helpers.c, but I think that as an API for passing data
> (which doesn't care about how the iovec array is allocated) they
> aren't very helpful.
>

They allow dropping two extra parameters, but I agree that's no huge
benefit.  I still like them.

--
error compiling committee.c: too many arguments to function
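
As a concrete illustration of the one-entry idea sketched above, here is a
minimal self-contained version; QEMUStaticIOVector and
qemu_static_iovector_init() are only the hypothetical names used in the
snippet, not an existing QEMU API, and the field layout is assumed:

  #include <stddef.h>
  #include <sys/uio.h>

  /* Hypothetical one-entry wrapper: it only points at caller-owned
   * memory, so initializing it allocates nothing and there is nothing
   * to free afterwards. */
  typedef struct {
      struct iovec iov;   /* the single static entry */
      int niov;           /* always 1 */
      size_t size;        /* total length in bytes */
  } QEMUStaticIOVector;

  static void qemu_static_iovector_init(QEMUStaticIOVector *qiov,
                                        void *data, size_t len)
  {
      qiov->iov.iov_base = data;
      qiov->iov.iov_len  = len;
      qiov->niov = 1;
      qiov->size = len;
  }

A caller would then pass &qiov.iov (an iovec array of length 1) to whatever
entry point expects a struct iovec array; since the wrapper never owns the
buffer, this is exactly the non-owned case that must not be freed.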