From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>,
	Anand Avati <aavati@redhat.com>,
	Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
	Vijay Bellur <vbellur@redhat.com>,
	Amar Tumballi <amarts@redhat.com>,
	qemu-devel@nongnu.org, Blue Swirl <blauwirbel@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Wed, 5 Sep 2012 16:13:23 +0530
Message-ID: <20120905104322.GC28080@in.ibm.com>
In-Reply-To: <50472316.4060106@redhat.com>

On Wed, Sep 05, 2012 at 12:01:58PM +0200, Kevin Wolf wrote:
> On 05.09.2012 09:41, Bharata B Rao wrote:
> > On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
> >> +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
> >> +{
> >> +    int ret;
> >> +
> >> +    if (acb->canceled) {
> >> +        qemu_aio_release(acb);
> >> +        return;
> >> +    }
> >> +
> >> +    if (acb->ret == acb->size) {
> >> +        ret = 0; /* Success */
> >> +    } else if (acb->ret < 0) {
> >> +        ret = acb->ret; /* Read/Write failed */
> >> +    } else {
> >> +        ret = -EIO; /* Partial read/write - fail it */
> >> +    }
> >> +    acb->common.cb(acb->common.opaque, ret);
> > 
> > The .cb() here is bdrv_co_io_em_complete(). It does qemu_coroutine_enter(),
> > handles the return value and comes back here.
> 
> Right.
> 
> .cb is set by qemu_gluster_aio_rw/flush(), and the only way these can be
> called is through bdrv_co_io_em() and bdrv_co_flush(), which both set
> bdrv_co_io_em_complete as the callback.

Right.

> 
> > But if bdrv_read, bdrv_write or bdrv_flush was called from a coroutine
> > context (as opposed to creating a new coroutine themselves), the .cb()
> > call above doesn't return to this point.
> 
> Why?

Note that in this particular scenario (qemu-img create -f qcow2), bdrv_read
and bdrv_write are called from the coroutine that is running qcow2_create().
So bdrv_read finds itself in coroutine context and hence continues in the
same coroutine:

    if (qemu_in_coroutine()) {
        /* Fast-path if already in coroutine context */
        bdrv_rw_co_entry(&rwco);
    }
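
For reference, the other branch of that check in block.c (going from memory
here, so this is only an approximation of the upstream code, not a verbatim
quote) is what creates a fresh coroutine and then pumps the main loop until
the request finishes:

    } else {
        /* Slow path: spawn a coroutine and wait for it to complete */
        Coroutine *co = qemu_coroutine_create(bdrv_rw_co_entry);
        qemu_coroutine_enter(co, &rwco);
        while (rwco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
    }

In the qemu-img case we never take that slow path, because qcow2_create()
itself is already running in a coroutine.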

The path taken is:

bdrv_rw_co_entry -> bdrv_co_do_readv -> bdrv_co_readv_em -> bdrv_co_io_em
-> qemu_gluster_aio_readv

bdrv_co_io_em does qemu_coroutine_yield() next.

When the AIO completes, qemu_gluster_complete_aio() is run as the read end
of the pipe becomes ready, so I assume it starts out in non-coroutine
context. When it does acb->common.cb(), it enters the coroutine that had
yielded in bdrv_co_io_em.
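
To make that hand-off explicit, the relevant bit of block.c looks roughly
like this (paraphrased from memory, not a verbatim quote):

    typedef struct CoroutineIOCompletion {
        Coroutine *coroutine;
        int ret;
    } CoroutineIOCompletion;

    static void bdrv_co_io_em_complete(void *opaque, int ret)
    {
        CoroutineIOCompletion *co = opaque;

        co->ret = ret;
        /* Re-enter the coroutine that yielded in bdrv_co_io_em();
         * control comes back to this point only when that coroutine
         * yields again or terminates. */
        qemu_coroutine_enter(co->coroutine, NULL);
    }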

Now the read call returns and we ultimately end up back in bdrv_rw_co_entry,
which takes us back to bdrv_read and then to bdrv_pwrite, where all this
originated (note that qcow2_create2 called bdrv_pwrite in the first place).

So I never come back to the statement in qemu_gluster_complete_aio() that
follows acb->common.cb(acb->common.opaque, ret). The coroutine didn't end;
it continued further by issuing another bdrv_write call.

The effect of this is seen next when qcow2_create calls bdrv_close, which
does bdrv_drain_all, which calls qemu_aio_wait, and I never come out of it.
In qemu_aio_wait, node->io_flush(node->opaque) always returns a non-zero
value, because node->io_flush, which is qemu_gluster_aio_flush_cb(), always
returns non-zero. This happens because I never got a chance to decrement
s->qemu_aio_count, which was supposed to happen after
qemu_gluster_complete_aio came back from the .cb() call.
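
For completeness, the io_flush callback in question simply reports whether
any gluster AIO is still in flight (roughly as in the patch, give or take
exact naming):

    static int qemu_gluster_aio_flush_cb(void *opaque)
    {
        BDRVGlusterState *s = opaque;

        return (s->qemu_aio_count > 0);
    }

So as long as the counter never gets decremented, qemu_aio_wait() believes
there is outstanding I/O and keeps waiting forever.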

So this is what I think is happening; I hope I got it right.
Note that when I schedule a BH in qemu_gluster_complete_aio(), things
apparently work fine because I am able to continue and decrement
s->qemu_aio_count.
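
Concretely, the experiment looks roughly like this (function and field
names here are from my local hack, not from the posted v6 patch; it
assumes a QEMUBH *bh member added to GlusterAIOCB, and it omits the
cancellation handling for brevity):

    static void qemu_gluster_aio_bh(void *opaque)
    {
        GlusterAIOCB *acb = opaque;
        int ret;

        if (acb->ret == acb->size) {
            ret = 0;            /* Success */
        } else if (acb->ret < 0) {
            ret = acb->ret;     /* Read/Write failed */
        } else {
            ret = -EIO;         /* Partial read/write - fail it */
        }

        qemu_bh_delete(acb->bh);
        acb->common.cb(acb->common.opaque, ret);
        qemu_aio_release(acb);
    }

and qemu_gluster_complete_aio() merely schedules the BH and returns:

        acb->bh = qemu_bh_new(qemu_gluster_aio_bh, acb);
        qemu_bh_schedule(acb->bh);

With that, the pipe read handler returns immediately and the code that
decrements s->qemu_aio_count actually gets to run, instead of being stuck
inside acb->common.cb().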

Regards,
Bharata.

