From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Apr 2009 19:02:09 +0200
From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH 05/10] xen: add block device backend driver.
Message-ID: <20090402170209.GA10089@lst.de>
References: <1238621982-18333-1-git-send-email-kraxel@redhat.com> <1238621982-18333-6-git-send-email-kraxel@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1238621982-18333-6-git-send-email-kraxel@redhat.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xensource.com, Gerd Hoffmann

On Wed, Apr 01, 2009 at 11:39:37PM +0200, Gerd Hoffmann wrote:
> +static void inline blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_request_t *src)
> +{
> +static void inline blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_request_t *src)
> +{

I think you'd be better off moving them to the .c file as normal static
functions and leaving the inlining decision to the compiler.

> +
> +/*
> + * FIXME: the code is designed to handle multiple outstanding
> + * requests, which isn't used right now.
Plan is to
> + * switch over to the aio block functions once they got
> + * vector support.
> + */

We already have bdrv_aio_readv/writev which currently linearize the
buffer underneath.  Hopefully Anthony will have committed the patch to
implement the real thing while I'm writing this, too :)

After those patches bdrv_aio_read/write will be gone, so this code
won't compile anymore either.

> +static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
> +{
> +    struct XenBlkDev *blkdev = ioreq->blkdev;
> +    int i, len = 0;
> +    off_t pos;
> +
> +    if (-1 == ioreq_map(ioreq))
> +        goto err;
> +
> +    ioreq->aio_inflight++;
> +    if (ioreq->presync)
> +        bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */
> +
> +    switch (ioreq->req.operation) {
> +    case BLKIF_OP_READ:
> +        pos = ioreq->start;
> +        for (i = 0; i < ioreq->vecs; i++) {
> +            ioreq->aio_inflight++;
> +            bdrv_aio_read(blkdev->bs, pos / BLOCK_SIZE,
> +                          ioreq->vec[i].iov_base,
> +                          ioreq->vec[i].iov_len / BLOCK_SIZE,
> +                          qemu_aio_complete, ioreq);
> +            len += ioreq->vec[i].iov_len;
> +            pos += ioreq->vec[i].iov_len;
> +        }

bdrv_flush doesn't actually empty the aio queues but only issues an
fsync, so we could still re-order requests around the barrier with
this implementation.  I will soon submit a real block-layer level
barrier implementation that allows flagging a bdrv_aio_read/write
request as a barrier and deals with this under the hood.