From: Paolo Bonzini <pbonzini@redhat.com>
To: bharata@linux.vnet.ibm.com
Cc: Kevin Wolf <kwolf@redhat.com>,
	Anthony Liguori <aliguori@us.ibm.com>,
	Anand Avati <aavati@redhat.com>,
	Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
	Vijay Bellur <vbellur@redhat.com>,
	Amar Tumballi <amarts@redhat.com>,
	qemu-devel@nongnu.org, Blue Swirl <blauwirbel@gmail.com>
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Fri, 07 Sep 2012 17:11:33 +0200
Message-ID: <504A0EA5.2060308@redhat.com>
In-Reply-To: <20120907150643.GF20421@in.ibm.com>

On 07/09/2012 17:06, Bharata B Rao wrote:
> qemu_gluster_aio_event_reader() is the node->io_read callback in qemu_aio_wait().
> 
> qemu_aio_wait() calls node->io_read(), which calls qemu_gluster_complete_aio().
> Before we return to qemu_aio_wait(), many other things happen:
> 
> bdrv_close() gets called from qcow2_create2(). This closes the gluster
> connection, closes the pipe, and does
> qemu_aio_set_fd_handler(read_pipe_fd, NULL, NULL, NULL, NULL), which results
> in the AioHandler node being deleted from the aio_handlers list.
> 
> Now qemu_gluster_aio_event_reader (node->io_read), which was called from
> qemu_aio_wait(), finally completes and goes on to access "node", which
> has already been deleted. This causes a segfault.
> 
> So I think option 1 (scheduling a BH from node->io_read) would be
> better for gluster.
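
Just so we are talking about the same thing, here is a rough sketch of
option 1.  The gluster-side names (GlusterAIOCB, BDRVGlusterState, the
acb->bh field, s->fds[GLUSTER_FD_READ]) are only assumed from the patch
under discussion, so treat it as pseudocode rather than something to
apply.  The idea is that node->io_read just drains the pipe and
schedules a bottom half, and the real completion then runs from the BH,
outside the aio_handlers walk:

static void qemu_gluster_aio_bh(void *opaque)
{
    GlusterAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);           /* one-shot BH */
    qemu_gluster_complete_aio(acb);    /* may close the pipe and delete the
                                        * fd handler; safe here because we
                                        * are no longer inside the walk */
}

static void qemu_gluster_aio_event_reader(void *opaque)
{
    BDRVGlusterState *s = opaque;
    GlusterAIOCB *acb;

    /* the gluster callback thread writes one pointer per finished request */
    while (read(s->fds[GLUSTER_FD_READ], &acb, sizeof(acb)) == sizeof(acb)) {
        acb->bh = qemu_bh_new(qemu_gluster_aio_bh, acb);
        qemu_bh_schedule(acb->bh);
    }
}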

This is a bug that has to be fixed anyway.  There are provisions for
this in aio.c, but apparently they are broken.  Can you try this patch:

diff --git a/aio.c b/aio.c
index 0a9eb10..99b8b72 100644
--- a/aio.c
+++ b/aio.c
@@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
         return true;
     }

-    walking_handlers = 1;
+    walking_handlers++;

     FD_ZERO(&rdfds);
     FD_ZERO(&wrfds);
@@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
         }
     }

-    walking_handlers = 0;
+    walking_handlers--;

     /* No AIO operations?  Get us out of here */
     if (!busy) {
@@ -159,7 +159,7 @@ bool qemu_aio_wait(void)

     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
-        walking_handlers = 1;
+        walking_handlers++;

         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
@@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
             }
         }

-        walking_handlers = 0;
+        walking_handlers--;
     }

     return true;
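
The provisions I am thinking of are the deferred-deletion path that is
already in aio.c.  From memory it looks roughly like this (simplified,
not the exact code), in qemu_aio_set_fd_handler():

    if (!io_read && !io_write) {
        if (node) {
            if (walking_handlers) {
                /* somebody is walking the list: only mark the node as
                 * deleted, the walker frees it later */
                node->deleted = 1;
            } else {
                QLIST_REMOVE(node, node);
                g_free(node);
            }
        }
    }

and at the bottom of the dispatch loop in qemu_aio_wait():

    tmp = node;
    node = QLIST_NEXT(node, node);
    if (tmp->deleted) {
        QLIST_REMOVE(tmp, node);
        g_free(tmp);
    }

What presumably breaks is that a nested qemu_aio_wait() (as can happen
underneath qcow2_create2/bdrv_close) resets the walking_handlers flag
to 0 on its way out, so the later qemu_aio_set_fd_handler() call frees
the node for real while the outer walk is still using it.  Turning the
flag into a counter, as in the patch above, keeps the deletion deferred
until the outermost walk has finished.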


Paolo


Thread overview: 44+ messages
2012-08-09 13:00 [Qemu-devel] [PATCH v6 0/2] GlusterFS support in QEMU - v6 Bharata B Rao
2012-08-09 13:01 ` [Qemu-devel] [PATCH v6 1/2] qemu: Add a config option for GlusterFS as block backend Bharata B Rao
2012-08-09 13:02 ` [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU " Bharata B Rao
2012-08-13 12:50   ` Kevin Wolf
2012-08-14  4:38     ` Bharata B Rao
2012-08-14  8:29       ` Kevin Wolf
2012-08-14  9:34         ` Bharata B Rao
2012-08-14  9:58           ` Kevin Wolf
2012-09-06  8:29             ` Avi Kivity
2012-09-06 15:40               ` Bharata B Rao
2012-09-06 15:44                 ` Paolo Bonzini
2012-09-06 15:47                 ` Daniel P. Berrange
2012-09-06 16:04                   ` ronnie sahlberg
2012-09-06 16:06                   ` Avi Kivity
2012-09-07  3:24                   ` Bharata B Rao
2012-09-07  9:19                     ` Daniel P. Berrange
2012-09-07  9:36                     ` Paolo Bonzini
2012-09-07  9:57                       ` Kevin Wolf
2012-09-12  9:22                         ` Bharata B Rao
2012-09-12  9:24                           ` Paolo Bonzini
2012-09-07 10:00                   ` Kevin Wolf
2012-09-07 10:03                     ` Daniel P. Berrange
2012-09-07 10:05                       ` Paolo Bonzini
2012-08-15  5:21         ` Bharata B Rao
2012-08-15  8:00           ` Kevin Wolf
2012-08-15  9:22             ` Bharata B Rao
2012-08-15  8:51         ` Bharata B Rao
2012-09-05  7:41   ` Bharata B Rao
2012-09-05  9:57     ` Bharata B Rao
2012-09-06  7:23       ` Paolo Bonzini
2012-09-06  9:06         ` Kevin Wolf
2012-09-06  9:38           ` Paolo Bonzini
2012-09-06 10:07             ` Kevin Wolf
2012-09-06 10:18               ` Paolo Bonzini
2012-09-06 10:29                 ` Kevin Wolf
2012-09-06 11:01                   ` Paolo Bonzini
2012-09-07 15:06                   ` Bharata B Rao
2012-09-07 15:11                     ` Paolo Bonzini [this message]
2012-09-08 14:22                       ` Bharata B Rao
2012-09-05 10:01     ` Kevin Wolf
2012-09-05 10:43       ` Bharata B Rao
2012-09-06  7:35   ` Paolo Bonzini
2012-09-07  5:46     ` Bharata B Rao
2012-08-13  9:49 ` [Qemu-devel] [PATCH v6 0/2] GlusterFS support in QEMU - v6 Bharata B Rao
