From: Stefan Hajnoczi <stefanha@redhat.com>
To: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Vijay Bellur <vbellur@redhat.com>,
Stefan Hajnoczi <stefanha@gmail.com>,
qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
Asias He <asias@redhat.com>,
MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Subject: Re: [Qemu-devel] [PATCH] block: Fix race in gluster_finish_aiocb
Date: Thu, 22 Aug 2013 09:48:46 +0200
Message-ID: <20130822074846.GC10412@stefanha-thinkpad.redhat.com>
In-Reply-To: <20130822055947.GB24870@in.ibm.com>

On Thu, Aug 22, 2013 at 11:29:47AM +0530, Bharata B Rao wrote:
> On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
> > Il 21/08/2013 17:24, Stefan Hajnoczi ha scritto:
> > > On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
> > >> In block/gluster.c, we have
> > >>
> > >> gluster_finish_aiocb
> > >> {
> > >>     if (retval != sizeof(acb)) {
> > >>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
> > >>         ...
> > >>         qemu_mutex_unlock_iothread();
> > >>     }
> > >> }
> > >>
> > >> QEMU tools, e.g. qemu-img, might race here because
> > >> qemu_mutex_{lock,unlock}_iothread are no-ops there and
> > >> gluster_finish_aiocb runs in gluster thread context.
> > >>
> > >> To fix, we introduce our own mutex for qemu tools.
> > >
> > > I think we need to look more closely at the error code path:
> > >
> > >     acb->ret = ret;
> > >     retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
> > >     if (retval != sizeof(acb)) {
> > >         /*
> > >          * Gluster AIO callback thread failed to notify the waiting
> > >          * QEMU thread about IO completion.
> > >          *
> > >          * Complete this IO request and make the disk inaccessible for
> > >          * subsequent reads and writes.
> > >          */
> > >         error_report("Gluster failed to notify QEMU about IO completion");
> > >
> > >         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
> > >         acb->common.cb(acb->common.opaque, -EIO);
> > >         qemu_aio_release(acb);
> > >         close(s->fds[GLUSTER_FD_READ]);
> > >         close(s->fds[GLUSTER_FD_WRITE]);
> > >
> > > Is it safe to close the fds? There is a race here:
> > >
> > > 1. Another thread opens a new file descriptor and gets GLUSTER_FD_READ or
> > > GLUSTER_FD_WRITE's old fd value.
> > > 2. Another gluster thread invokes the callback and does
> > > qemu_write_full(s->fds[GLUSTER_FD_WRITE], ...).
> > >
> > > Since the mutex doesn't protect s->fds[] this is possible.
> > >
> > > Maybe a simpler solution for request completion is:
> > >
> > > 1. Linked list of completed acbs.
> > > 2. Mutex to protect the linked list.
> > > 3. EventNotifier to signal iothread.
> >
> > We could just use a bottom half, too. Add a bottom half to acb,
> > schedule it in gluster_finish_aiocb, delete it in the bottom half's own
> > callback.
>
> gluster_finish_aiocb gets called from a gluster thread; is it safe to create
> and schedule a BH from such a thread?
>
> In my first implementation
> (http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01748.html), I was
> using a BH from the QEMU read-side thread (the thread that responds to the
> pipe write from the gluster callback thread). That implementation was based
> on rbd, and I later dropped the BH part since it looked like a roundabout
> way of completing the AIO when we were already using the pipe mechanism for
> AIO completion.

Recent patches made creating and scheduling a BH thread-safe.
I think Paolo's idea is better than mine.

Stefan