From: Paolo Bonzini
To: bharata@linux.vnet.ibm.com
Cc: Kevin Wolf, Anthony Liguori, Anand Avati, Stefan Hajnoczi, Vijay Bellur, Amar Tumballi, qemu-devel@nongnu.org, Blue Swirl
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Fri, 07 Sep 2012 17:11:33 +0200
Message-ID: <504A0EA5.2060308@redhat.com>
In-Reply-To: <20120907150643.GF20421@in.ibm.com>

On 07/09/2012 17:06, Bharata B Rao wrote:
> qemu_gluster_aio_event_reader() is the node->io_read in qemu_aio_wait().
>
> qemu_aio_wait() calls node->io_read() which calls qemu_gluster_complete_aio().
> Before we return to qemu_aio_wait(), many other things happen:
>
> bdrv_close() gets called from qcow2_create2()
> This closes the gluster connection, closes the pipe, does
> qemu_set_fd_handler(read_pipe_fd, NULL, NULL, NULL, NULL), which results
> in the AioHandler node being deleted from the aio_handlers list.
>
> Now qemu_gluster_aio_event_reader (node->io_read), which was called from
> qemu_aio_wait(), finally completes and goes ahead and accesses "node",
> which has already been deleted. This causes a segfault.
>
> So I think option 1 (scheduling a BH from node->io_read) would
> be better for gluster.

This is a bug that has to be fixed anyway. There are provisions for it in
aio.c, but apparently they are broken. Can you try this:

diff --git a/aio.c b/aio.c
index 0a9eb10..99b8b72 100644
--- a/aio.c
+++ b/aio.c
@@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
         return true;
     }
 
-    walking_handlers = 1;
+    walking_handlers++;
 
     FD_ZERO(&rdfds);
     FD_ZERO(&wrfds);
@@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
         }
     }
 
-    walking_handlers = 0;
+    walking_handlers--;
 
     /* No AIO operations? Get us out of here */
     if (!busy) {
@@ -159,7 +159,7 @@ bool qemu_aio_wait(void)
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
-        walking_handlers = 1;
+        walking_handlers++;
 
         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
@@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
             }
         }
 
-        walking_handlers = 0;
+        walking_handlers--;
     }
 
     return true;

Paolo
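
The ++/-- in the patch above matters because qemu_aio_wait() can nest: the completion chain (qemu_gluster_complete_aio -> qcow2_create2 -> bdrv_close, and the synchronous I/O around it) presumably calls qemu_aio_wait() again before the outer walk has finished. With a plain 0/1 flag, the inner call resets walking_handlers to 0 on its way out, so a later qemu_aio_set_fd_handler(fd, NULL, ...) from the same callback frees the AioHandler node immediately instead of merely marking it deleted, and the outer walk then dereferences freed memory. The following self-contained toy program, not QEMU code and with every name (Handler, handler_list, walk_handlers, remove_handler) invented for the example, shows the intended pattern: removals during any walk only mark the node, and only the outermost walk actually frees it.

/* Toy illustration of the walking_handlers pattern, not QEMU code. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Handler Handler;
struct Handler {
    int fd;
    int deleted;                    /* marked for deferred removal */
    void (*io_read)(Handler *h);
    Handler *next;
};

static Handler *handler_list;
static int walking_handlers;        /* nesting counter, not a boolean */

static void remove_handler(int fd)
{
    Handler **p;

    for (p = &handler_list; *p; p = &(*p)->next) {
        if ((*p)->fd == fd) {
            if (walking_handlers) {
                /* Some walk (maybe the one that called us) still holds a
                 * pointer to this node: defer the free, only mark it. */
                (*p)->deleted = 1;
            } else {
                Handler *h = *p;
                *p = h->next;
                free(h);
            }
            return;
        }
    }
}

static void walk_handlers(void)
{
    Handler **p;

    walking_handlers++;             /* ++, not = 1: walks can nest */

    p = &handler_list;
    while (*p) {
        Handler *h = *p;

        if (!h->deleted && h->io_read) {
            h->io_read(h);          /* may call remove_handler() or recurse */
        }
        if (walking_handlers == 1 && h->deleted) {
            *p = h->next;           /* only the outermost walk reaps */
            free(h);
        } else {
            p = &h->next;
        }
    }

    walking_handlers--;             /* --, not = 0 */
}

static void self_removing_read(Handler *h)
{
    printf("fd %d: read event, removing myself\n", h->fd);
    remove_handler(h->fd);          /* safe: node is only marked deleted */
    walk_handlers();                /* nested walk, like bdrv_close() -> qemu_aio_wait() */
}

int main(void)
{
    Handler *a = calloc(1, sizeof(*a));
    a->fd = 3;
    a->io_read = self_removing_read;
    a->next = handler_list;
    handler_list = a;

    walk_handlers();                /* no use-after-free despite self-removal */
    printf("list is %s\n", handler_list ? "non-empty" : "empty");
    return 0;
}

With the counter, a nested walk leaves the reaping to the outermost one, so any node pointer held across a callback stays valid until the walk holding it has finished.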
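
Separately, a rough sketch of what Bharata's option 1 (deferring completion to a bottom half) might look like in block/gluster.c. It assumes QEMU's existing bottom-half API (qemu_bh_new, qemu_bh_schedule, qemu_bh_delete) and the patch's BDRVGlusterState; the GlusterAIOCB layout and the gluster_dequeue_completion() helper are invented here for illustration and are not part of the actual patch.

/* Sketch only, not the actual patch; assumes it sits inside block/gluster.c
 * with the usual block-layer headers.  The fd handler just drains the
 * notification pipe and schedules a bottom half; the guest-visible
 * completion, which may trigger bdrv_close() and tear down this very
 * AioHandler, runs later from the BH, outside qemu_aio_wait()'s walk. */

typedef struct GlusterAIOCB {
    BlockDriverAIOCB common;        /* hypothetical layout for this sketch */
    int ret;
    QEMUBH *bh;
} GlusterAIOCB;

static void qemu_gluster_complete_aio_bh(void *opaque)
{
    GlusterAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);
    acb->bh = NULL;

    /* Safe place for callbacks that may close the BlockDriverState. */
    acb->common.cb(acb->common.opaque, acb->ret);
    qemu_aio_release(acb);
}

static void qemu_gluster_aio_event_reader(void *opaque)
{
    BDRVGlusterState *s = opaque;
    GlusterAIOCB *acb;

    /* gluster_dequeue_completion() stands in for the read() from the
     * completion pipe; it is not a real function. */
    while ((acb = gluster_dequeue_completion(s)) != NULL) {
        /* Do NOT complete in place; defer to a bottom half instead. */
        acb->bh = qemu_bh_new(qemu_gluster_complete_aio_bh, acb);
        qemu_bh_schedule(acb->bh);
    }
}

The point is only that the handler registered with qemu_aio_set_fd_handler() never runs guest-visible completion code directly, so nothing it triggers can delete the AioHandler while qemu_aio_wait() is still walking it.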