Date: Wed, 5 Oct 2016 00:02:59 -0400
From: Jeff Cody
Message-ID: <20161005040259.GA30495@localhost.localdomain>
References: <1475035789-685-1-git-send-email-ashish.mittal@veritas.com>
 <20160928111332.GD4196@stefanha-x1.localdomain>
In-Reply-To: <20160928111332.GD4196@stefanha-x1.localdomain>
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add
 Veritas HyperScale VxHS block device support
To: Stefan Hajnoczi
Cc: Ashish Mittal, qemu-devel@nongnu.org, pbonzini@redhat.com,
 kwolf@redhat.com, armbru@redhat.com, berrange@redhat.com, famz@redhat.com,
 ashish.mittal@veritas.com, Ketan.Nilangekar@veritas.com,
 Abhijit.Dey@veritas.com

On Wed, Sep 28, 2016 at 12:13:32PM +0100, Stefan Hajnoczi wrote:
> On Tue, Sep 27, 2016 at 09:09:49PM -0700, Ashish Mittal wrote:

[...]

> > +/*
> > + * This is called by QEMU when a flush gets triggered from within
> > + * a guest at the block layer, either for IDE or SCSI disks.
> > + */
> > +static int vxhs_co_flush(BlockDriverState *bs)
>
> This is called from coroutine context, please add the coroutine_fn
> function attribute to document this.
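
A minimal sketch of what a coroutine_fn-annotated, non-blocking flush
could look like. This is not code from the patch: the
vxhs_qnio_iio_flush_async() wrapper, the vxhs_flush_cb() callback, and
the VXHSFlushCo bookkeeping struct are all hypothetical names; only
coroutine_fn, qemu_coroutine_self(), qemu_coroutine_yield(), and
qemu_coroutine_enter() are real QEMU APIs.

```c
/* Hypothetical bookkeeping shared between the coroutine and the
 * completion callback. */
typedef struct VXHSFlushCo {
    Coroutine *co;      /* coroutine to re-enter on completion */
    int ret;            /* result reported by the flush callback */
} VXHSFlushCo;

/* Hypothetical completion callback invoked by the qnio library once
 * the asynchronous flush finishes. */
static void vxhs_flush_cb(void *opaque, int ret)
{
    VXHSFlushCo *fc = opaque;

    fc->ret = ret;
    qemu_coroutine_enter(fc->co);   /* resume vxhs_co_flush() */
}

static int coroutine_fn vxhs_co_flush(BlockDriverState *bs)
{
    BDRVVXHSState *s = bs->opaque;
    VXHSFlushCo fc = {
        .co  = qemu_coroutine_self(),
        .ret = -EINPROGRESS,
    };

    /* Submit the flush asynchronously (hypothetical qnio wrapper),
     * then yield back to QEMU's event loop instead of blocking. */
    vxhs_qnio_iio_flush_async(s, vxhs_flush_cb, &fc);
    qemu_coroutine_yield();

    /* The callback has re-entered us; fc.ret now holds the result. */
    return fc.ret;
}
```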
> > +{
> > +    BDRVVXHSState *s = bs->opaque;
> > +    int64_t size = 0;
> > +    int ret = 0;
> > +
> > +    /*
> > +     * VDISK_AIO_FLUSH ioctl is a no-op at present and will
> > +     * always return success. This could change in the future.
> > +     */
> > +    ret = vxhs_qnio_iio_ioctl(s->qnio_ctx,
> > +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> > +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
>
> This function is not allowed to block. It cannot do a synchronous
> flush. This line is misleading because the constant is called
> VDISK_AIO_FLUSH, but looking at the library code I see it's actually a
> synchronous call that ends up in a loop that sleeps (!) waiting for the
> response.
>
> Please do an async flush and qemu_coroutine_yield() to return
> control to QEMU's event loop. When the flush completes you can
> qemu_coroutine_enter() again to return from this function.
>
> > +
> > +    if (ret < 0) {
> > +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
> > +        vxhs_close(bs);
>
> This looks unsafe. Won't it cause double close() calls for s->fds[]
> when bdrv_close() is called later?
>

Calling close on a failed flush is a good idea, in my opinion. However,
to make it safe, bs->drv MUST be set to NULL after the call to
vxhs_close(). That will prevent double free / close calls, and will
also fail out new I/O. (This is what the gluster driver does, for
instance.)

Jeff