Date: Thu, 22 Aug 2013 09:48:46 +0200
From: Stefan Hajnoczi
To: Bharata B Rao
Cc: Kevin Wolf, Vijay Bellur, Stefan Hajnoczi, qemu-devel@nongnu.org,
    Paolo Bonzini, Asias He, MORITA Kazutaka
Subject: Re: [Qemu-devel] [PATCH] block: Fix race in gluster_finish_aiocb
Message-ID: <20130822074846.GC10412@stefanha-thinkpad.redhat.com>
In-Reply-To: <20130822055947.GB24870@in.ibm.com>
References: <1377050567-19122-1-git-send-email-asias@redhat.com>
 <20130821152440.GB18303@stefanha-thinkpad.redhat.com>
 <5214DF5B.50203@redhat.com>
 <20130822055947.GB24870@in.ibm.com>

On Thu, Aug 22, 2013 at 11:29:47AM +0530, Bharata B Rao wrote:
> On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
> > Il 21/08/2013 17:24, Stefan Hajnoczi ha scritto:
> > > On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
> > >> In block/gluster.c, we have
> > >>
> > >>     gluster_finish_aiocb
> > >>     {
> > >>         if (retval != sizeof(acb)) {
> > >>             qemu_mutex_lock_iothread(); /* We are in gluster thread context */
> > >>             ...
> > >>             qemu_mutex_unlock_iothread();
> > >>         }
> > >>     }
> > >>
> > >> qemu tools, e.g. qemu-img, might race here because
> > >> qemu_mutex_{lock,unlock}_iothread are a nop operation and
> > >> gluster_finish_aiocb is in the gluster thread context.
> > >>
> > >> To fix, we introduce our own mutex for qemu tools.
> > >
> > > I think we need to look more closely at the error code path:
> > >
> > >     acb->ret = ret;
> > >     retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
> > >     if (retval != sizeof(acb)) {
> > >         /*
> > >          * Gluster AIO callback thread failed to notify the waiting
> > >          * QEMU thread about IO completion.
> > >          *
> > >          * Complete this IO request and make the disk inaccessible for
> > >          * subsequent reads and writes.
> > >          */
> > >         error_report("Gluster failed to notify QEMU about IO completion");
> > >
> > >         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
> > >         acb->common.cb(acb->common.opaque, -EIO);
> > >         qemu_aio_release(acb);
> > >         close(s->fds[GLUSTER_FD_READ]);
> > >         close(s->fds[GLUSTER_FD_WRITE]);
> > >
> > > Is it safe to close the fds?  There is a race here:
> > >
> > > 1. Another thread opens a new file descriptor and gets GLUSTER_FD_READ
> > >    or GLUSTER_FD_WRITE's old fd value.
> > > 2. Another gluster thread invokes the callback and does
> > >    qemu_write_full(s->fds[GLUSTER_FD_WRITE], ...).
> > >
> > > Since the mutex doesn't protect s->fds[] this is possible.
> > >
> > > Maybe a simpler solution for request completion is:
> > >
> > > 1. Linked list of completed acbs.
> > > 2. Mutex to protect the linked list.
> > > 3. EventNotifier to signal the iothread.
> >
> > We could just use a bottom half, too.
> > Add a bottom half to acb, schedule it in gluster_finish_aiocb, and
> > delete it in the bottom half's own callback.
>
> gluster_finish_aiocb gets called from the gluster thread; is it safe to
> create and schedule a bh from such a thread?
>
> In my first implementation
> (http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01748.html),
> I was using a BH from the qemu read-side thread (the thread that would
> respond to the pipe write from the gluster callback thread). That
> implementation was based on rbd, and I later dropped the BH part since it
> looked like a roundabout way of completing the aio when we were already
> using the pipe mechanism for aio completion.

Recent patches made creating and scheduling a BH thread-safe.

I think Paolo's idea is better than mine.

Stefan
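
A minimal sketch of the bottom-half approach Paolo describes, assuming a
GlusterAIOCB roughly like the one in block/gluster.c at the time; the bh
field and the qemu_gluster_complete_aio_bh helper are illustrative names,
not the actual patch:

    /* Hypothetical sketch: complete the request from a bottom half instead
     * of calling back directly from the gluster thread. */

    typedef struct GlusterAIOCB {
        BlockDriverAIOCB common;
        int ret;
        QEMUBH *bh;                     /* assumed field for the completion BH */
    } GlusterAIOCB;

    /* Runs in the QEMU thread once the BH is dispatched, so no iothread
     * locking is needed. */
    static void qemu_gluster_complete_aio_bh(void *opaque)
    {
        GlusterAIOCB *acb = opaque;

        acb->common.cb(acb->common.opaque, acb->ret);
        qemu_bh_delete(acb->bh);        /* the BH deletes itself here */
        qemu_aio_release(acb);
    }

    /* glfs completion callback, invoked from a gluster thread. */
    static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
    {
        GlusterAIOCB *acb = arg;

        acb->ret = ret;
        acb->bh = qemu_bh_new(qemu_gluster_complete_aio_bh, acb);
        qemu_bh_schedule(acb->bh);
    }

Because the completion callback runs from the bottom half in the QEMU
thread, it no longer relies on qemu_mutex_lock_iothread() being a real
lock, which is what made the qemu-img case racy in the first place.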