From: ashish mittal
Date: Tue, 11 Oct 2016 00:56:06 -0700
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
To: Jeff Cody
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, Paolo Bonzini, Kevin Wolf,
    Markus Armbruster, "Daniel P. Berrange", famz@redhat.com,
    Ashish Mittal, Ketan.Nilangekar@veritas.com, Abhijit.Dey@veritas.com,
    Rakesh Ranjan

Checked in a test server to libqnio that allows opening, reading, and
writing a VxHS vdisk using the qemu-io binary.

Steps to run the test server:

(1) Touch a file in /tmp with the base name of the vdisk ID to be opened.
    touch /tmp/\{98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a\}

(2) Start the test server. It listens on port 9999 by default.
    /path/to/qnio_server

(3) In another terminal, start the qemu-io binary and open the vdisk.
    qemu-io> open vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D

(4) Now you can write and read data from the vdisk, which is backed by the file.
    qemu-io> writev -P 81 0 1k
    qemu-io> read -v 0 1k

The following change would be needed to my last patch to allow opening
of the vdisk:

$ git diff
diff --git a/block/vxhs.c b/block/vxhs.c
index 90a4343..d849a9b 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options, BDRVVXHSState *s,
     }

     ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
-    if (!ret) {
+    if (ret) {
         error_setg(&local_err, "Failed qnio_iio_open");
         ret = -EIO;
     }
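For context, vxhs_qnio_iio_open() evidently returns 0 on success, so the
old "if (!ret)" check treated a successful open as a failure. With the
one-line change above, the open path in vxhs_qemu_init() reads roughly as
below (a sketch of the intent, not a verbatim excerpt of the patch):

    /* 0 means the qnio open succeeded; anything non-zero is an error. */
    ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
    if (ret) {
        /* Record the failure and surface it as a generic I/O error. */
        error_setg(&local_err, "Failed qnio_iio_open");
        ret = -EIO;
    }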
Will work on the qemu-iotests test suite next.

Regards,
Ashish

On Tue, Oct 4, 2016 at 9:02 PM, Jeff Cody wrote:
> On Wed, Sep 28, 2016 at 12:13:32PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Sep 27, 2016 at 09:09:49PM -0700, Ashish Mittal wrote:
>
> [...]
>
>> > +/*
>> > + * This is called by QEMU when a flush gets triggered from within
>> > + * a guest at the block layer, either for IDE or SCSI disks.
>> > + */
>> > +static int vxhs_co_flush(BlockDriverState *bs)
>>
>> This is called from coroutine context, please add the coroutine_fn
>> function attribute to document this.
>>
>> > +{
>> > +    BDRVVXHSState *s = bs->opaque;
>> > +    int64_t size = 0;
>> > +    int ret = 0;
>> > +
>> > +    /*
>> > +     * VDISK_AIO_FLUSH ioctl is a no-op at present and will
>> > +     * always return success. This could change in the future.
>> > +     */
>> > +    ret = vxhs_qnio_iio_ioctl(s->qnio_ctx,
>> > +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
>> > +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
>>
>> This function is not allowed to block. It cannot do a synchronous
>> flush. This line is misleading because the constant is called
>> VDISK_AIO_FLUSH, but looking at the library code I see it's actually a
>> synchronous call that ends up in a loop that sleeps (!) waiting for the
>> response.
>>
>> Please do an async flush and qemu_coroutine_yield() to return
>> control to QEMU's event loop. When the flush completes you can
>> qemu_coroutine_enter() again to return from this function.
>>
>> > +
>> > +    if (ret < 0) {
>> > +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
>> > +        vxhs_close(bs);
>>
>> This looks unsafe. Won't it cause double close() calls for s->fds[]
>> when bdrv_close() is called later?
>>
>
> Calling the close on a failed flush is a good idea, in my opinion. However,
> to make it safe, bs->drv MUST be set to NULL after the call to vxhs_close().
> That will prevent double free / close calls, and also fail out new I/O.
> (This is actually what the gluster driver does, for instance).
>
> Jeff
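For reference, the change Stefan asks for, combined with Jeff's point about
bs->drv, would look roughly like the sketch below. The async entry point
vxhs_qnio_iio_ioctl_async() and its callback signature are hypothetical
(libqnio's real async API may differ), and if the completion callback fires
outside the QEMU AIO context it would have to schedule a bottom half instead
of entering the coroutine directly:

    typedef struct VXHSFlushWait {
        Coroutine *co;      /* coroutine parked in vxhs_co_flush() */
        int ret;            /* completion status of the flush */
    } VXHSFlushWait;

    /* Completion callback: record the status and resume the coroutine. */
    static void vxhs_flush_complete(void *opaque, int ret)
    {
        VXHSFlushWait *w = opaque;

        w->ret = ret;
        qemu_coroutine_enter(w->co);
    }

    static coroutine_fn int vxhs_co_flush(BlockDriverState *bs)
    {
        BDRVVXHSState *s = bs->opaque;
        VXHSFlushWait w = { .co = qemu_coroutine_self(), .ret = 0 };
        int64_t size = 0;
        int ret;

        /* Hypothetical async flush; must not block the event loop. */
        ret = vxhs_qnio_iio_ioctl_async(s->qnio_ctx,
                  s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
                  VDISK_AIO_FLUSH, &size, vxhs_flush_complete, &w);
        if (ret < 0) {
            return -EIO;
        }

        /* Return control to QEMU's event loop until the flush completes. */
        qemu_coroutine_yield();

        if (w.ret < 0) {
            trace_vxhs_co_flush(s->vdisk_guid, w.ret, errno);
            vxhs_close(bs);
            bs->drv = NULL;   /* per Jeff: prevent double close, fail new I/O */
            return -EIO;
        }
        return 0;
    }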