Date: Fri, 11 Sep 2009 00:41:41 +0200
From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use bdrv_aio_multiwrite
Message-ID: <20090910224141.GA25700@lst.de>
References: <1252511618-19497-1-git-send-email-kwolf@redhat.com> <1252511618-19497-3-git-send-email-kwolf@redhat.com>
In-Reply-To: <1252511618-19497-3-git-send-email-kwolf@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Kevin Wolf
Cc: qemu-devel@nongnu.org

On Wed, Sep 09, 2009 at 05:53:38PM +0200, Kevin Wolf wrote:
> +static void do_multiwrite(BlockDriverState *bs, BlockRequest *blkreq,
> +    int num_writes)
>  {
> -    BlockDriverAIOCB *acb;
> +    int i, ret;
> +    ret = bdrv_aio_multiwrite(bs, blkreq, num_writes);
> +
> +    if (ret != 0) {
> +        for (i = 0; i < num_writes; i++) {
> +            if (blkreq[i].error) {
> +                virtio_blk_req_complete(blkreq[i].opaque, VIRTIO_BLK_S_IOERR);
> +            }
> +        }
> +    }
> +}
>
> +static void virtio_blk_handle_write(BlockRequest *blkreq, int *num_writes,
> +    VirtIOBlockReq *req, BlockDriverState **old_bs)
> +{
> +    if (req->dev->bs != *old_bs || *num_writes == 32) {

I was under the impression we had one queue per device, so the first
condition should never happen.

> +        if (*old_bs != NULL) {
> +            do_multiwrite(*old_bs, blkreq, *num_writes);
> +        }
> +        *num_writes = 0;
> +        *old_bs = req->dev->bs;
>     }
> +
> +    blkreq[*num_writes].sector = req->out->sector;
> +    blkreq[*num_writes].nb_sectors = req->qiov.size / 512;
> +    blkreq[*num_writes].qiov = &req->qiov;
> +    blkreq[*num_writes].cb = virtio_blk_rw_complete;
> +    blkreq[*num_writes].opaque = req;
> +    blkreq[*num_writes].error = 0;
> +
> +    (*num_writes)++;

If you pass the completion routine to the function and map the error case
to calling the completion routine (which is the usual way to handle errors
anyway), this function could become completely generic.

I think we also need to actually store the iocb in the BlockRequest; ide
and scsi use it to cancel I/O on migration (why virtio doesn't is on my
TODO list to investigate) and in some other cases.

Another improvement to the data structure would be to have a container
structure holding the BlockRequest array and num_writes, at which point
this actually becomes a pretty clean abstraction, which we could also use
to submit multiple iocbs in the native AIO code.

Any chance to just use this batched submission unconditionally, and also
for reads? I'd hate to grow even more confusing I/O methods in the block
layer.
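To make the container-structure suggestion concrete, here is a minimal
sketch of what it might look like. This is illustrative only: the names
`Batch`, `batch_add`, and `batch_flush` are hypothetical, and the
`BlockRequest`/`BlockDriverState` definitions below are self-contained
stand-ins (with a stub in place of `bdrv_aio_multiwrite`), not QEMU's
actual types. The point it demonstrates is that once the array, the
counter, and the target device live in one object and errors are routed
through each request's own completion callback, the batching logic has
nothing virtio-specific left in it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BATCH_MAX 32

typedef struct BlockDriverState BlockDriverState;   /* opaque stand-in */

/* Stand-in for QEMU's BlockRequest, following the fields visible in
 * the quoted patch. */
typedef struct BlockRequest {
    int64_t sector;
    int nb_sectors;
    void (*cb)(void *opaque, int ret);  /* per-request completion routine */
    void *opaque;
    int error;
} BlockRequest;

/* Hypothetical container: the BlockRequest array, its counter, and the
 * device all live in one structure. */
typedef struct Batch {
    BlockDriverState *bs;
    BlockRequest reqs[BATCH_MAX];
    int num_reqs;
} Batch;

/* Stub for bdrv_aio_multiwrite(): fails any request whose sector is
 * negative, so the error path can be exercised here. */
static int stub_multiwrite(BlockDriverState *bs, BlockRequest *reqs, int n)
{
    int ret = 0;
    (void)bs;
    for (int i = 0; i < n; i++) {
        if (reqs[i].sector < 0) {
            reqs[i].error = 1;
            ret = -1;
        }
    }
    return ret;
}

/* Submit everything queued so far; on failure, complete each failed
 * request through its own callback -- the usual error convention, and
 * what makes this helper generic rather than virtio-specific. */
static void batch_flush(Batch *b)
{
    if (b->num_reqs == 0) {
        return;
    }
    if (stub_multiwrite(b->bs, b->reqs, b->num_reqs) != 0) {
        for (int i = 0; i < b->num_reqs; i++) {
            if (b->reqs[i].error) {
                b->reqs[i].cb(b->reqs[i].opaque, -1);
            }
        }
    }
    b->num_reqs = 0;
}

/* Queue one request, flushing first if the device changed or the array
 * is full -- the same two conditions as in the quoted patch. */
static void batch_add(Batch *b, BlockDriverState *bs, BlockRequest *req)
{
    if (b->bs != bs || b->num_reqs == BATCH_MAX) {
        batch_flush(b);
        b->bs = bs;
    }
    b->reqs[b->num_reqs++] = *req;
}
```

Since nothing in `Batch` is write-specific, the same container could in
principle carry reads as well, which is the direction the last paragraph
above argues for.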