From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org,
Eric Blake <eblake@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
Date: Tue, 22 Aug 2017 17:18:23 +0100
Message-ID: <20170822161823.GE2109@work-vm>
In-Reply-To: <20170822125113.5025-1-stefanha@redhat.com>

* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block. qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
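
To make step 4 concrete, here is a minimal model of the mechanism
(a sketch with stand-in types, not the real QIOChannel API; the actual
yield and wake calls are elided as comments):

    #include <assert.h>
    #include <stddef.h>

    typedef struct Coroutine Coroutine;

    typedef struct {
        Coroutine *write_coroutine; /* writer parked until POLLOUT */
    } IOChannelModel;

    /* Models qio_channel_yield(ioc, G_IO_OUT): register ourselves,
     * then yield. The assertion is the one that fires in step 4. */
    static void channel_yield_for_write(IOChannelModel *ioc, Coroutine *self)
    {
        assert(!ioc->write_coroutine);
        ioc->write_coroutine = self;
        /* qemu_coroutine_yield(); */
    }

    /* Models the socket fd handler: the only path that clears
     * write_coroutine before re-entering the parked coroutine. */
    static void fd_write_handler(IOChannelModel *ioc)
    {
        ioc->write_coroutine = NULL;
        /* aio_co_wake(the parked coroutine); */
    }

If the parked coroutine is woken by any other path, here
nbd_recv_coroutines_enter_all(), write_coroutine stays set, so its
next channel_yield_for_write() call trips the assertion.
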
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_co_wake() calls.
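
Pulled together from the hunks below, the per-slot lifecycle the patch
introduces looks like this (a compilable sketch with stand-in types;
the real code is in block/nbd-client.c):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Coroutine Coroutine;

    typedef struct {
        Coroutine *coroutine;   /* owner of this request slot */
        bool receiving;         /* waiting for read_reply_co? */
    } NBDClientRequest;

    /* Send path: claim a slot. The owner is not yet waiting for a
     * reply, so the reply reader must not wake it. */
    static void request_start(NBDClientRequest *req, Coroutine *self)
    {
        req->coroutine = self;
        req->receiving = false;
    }

    /* Receive path: receiving brackets exactly the window in which a
     * wake-up from nbd_read_reply_entry() is legitimate. */
    static void request_wait_reply(NBDClientRequest *req)
    {
        req->receiving = true;
        /* qemu_coroutine_yield(); woken by nbd_read_reply_entry() */
        req->receiving = false;
    }

    /* Reader path: skip slots whose owner is blocked in socket I/O
     * rather than waiting for a reply. */
    static bool request_should_wake(const NBDClientRequest *req)
    {
        return req->coroutine != NULL && req->receiving;
    }
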
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

With that patch the assert does seem to go away, leaving just the
other failure we're seeing.

Dave

> ---
> This should fix the issue that Dave is seeing, but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield(), so requests doing I/O
> may remain in flight indefinitely, and nbd-client.c doesn't join them...
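
For illustration of the kind of abstraction meant here, a channel-like
primitive might look as follows; every name in this sketch is
hypothetical, not an existing QEMU API:

    #include <stddef.h>

    typedef struct CoChan CoChan;   /* hypothetical coroutine channel */

    CoChan *co_chan_new(size_t capacity);
    int co_chan_send(CoChan *ch, void *value);
    /* Parks the calling coroutine until a value arrives (returns 0),
     * the channel is closed (-EPIPE), or the wait is cancelled
     * (-ECANCELED). The cancellation case is the point
     * qio_channel_yield() lacks today: with it, teardown could fail
     * and join in-flight requests instead of leaving them parked. */
    int co_chan_recv(CoChan *ch, void **value);
    void co_chan_close(CoChan *ch);
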
>
>  block/nbd-client.h |  7 ++++++-
>  block/nbd-client.c | 35 ++++++++++++++++++++++-------------
>  2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>
>  #define MAX_NBD_REQUESTS 16
>
> +typedef struct {
> +    Coroutine *coroutine;
> +    bool receiving; /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
>  typedef struct NBDClientSession {
>      QIOChannelSocket *sioc; /* The master data channel */
>      QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
>      Coroutine *read_reply_co;
>      int in_flight;
>
> -    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> +    NBDClientRequest requests[MAX_NBD_REQUESTS];
>      NBDReply reply;
>      bool quit;
>  } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
>      int i;
>
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i]) {
> -            aio_co_wake(s->recv_coroutine[i]);
> +        NBDClientRequest *req = &s->requests[i];
> +
> +        if (req->coroutine && req->receiving) {
> +            aio_co_wake(req->coroutine);
>          }
>      }
>  }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>           * one coroutine is called until the reply finishes.
>           */
>          i = HANDLE_TO_INDEX(s, s->reply.handle);
> -        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> +        if (i >= MAX_NBD_REQUESTS ||
> +            !s->requests[i].coroutine ||
> +            !s->requests[i].receiving) {
>              break;
>          }
>
> -        /* We're woken up by the recv_coroutine itself. Note that there
> +        /* We're woken up again by the request itself. Note that there
>           * is no race between yielding and reentering read_reply_co. This
>           * is because:
>           *
> -         * - if recv_coroutine[i] runs on the same AioContext, it is only
> +         * - if the request runs on the same AioContext, it is only
>           *   entered after we yield
>           *
> -         * - if recv_coroutine[i] runs on a different AioContext, reentering
> +         * - if the request runs on a different AioContext, reentering
>           *   read_reply_co happens through a bottom half, which can only
>           *   run after we yield.
>           */
> -        aio_co_wake(s->recv_coroutine[i]);
> +        aio_co_wake(s->requests[i].coroutine);
>          qemu_coroutine_yield();
>      }
>
> -    if (ret < 0) {
> -        s->quit = true;
> -    }
> +    s->quit = true;
>      nbd_recv_coroutines_enter_all(s);
>      s->read_reply_co = NULL;
>  }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
>      s->in_flight++;
>
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i] == NULL) {
> -            s->recv_coroutine[i] = qemu_coroutine_self();
> +        if (s->requests[i].coroutine == NULL) {
>              break;
>          }
>      }
>
>      g_assert(qemu_in_coroutine());
>      assert(i < MAX_NBD_REQUESTS);
> +
> +    s->requests[i].coroutine = qemu_coroutine_self();
> +    s->requests[i].receiving = false;
> +
>      request->handle = INDEX_TO_HANDLE(s, i);
>
>      if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                                   NBDReply *reply,
>                                   QEMUIOVector *qiov)
>  {
> +    int i = HANDLE_TO_INDEX(s, request->handle);
>      int ret;
>
>      /* Wait until we're woken up by nbd_read_reply_entry. */
> +    s->requests[i].receiving = true;
>      qemu_coroutine_yield();
> +    s->requests[i].receiving = false;
>      *reply = s->reply;
>      if (reply->handle != request->handle || !s->ioc || s->quit) {
>          reply->error = EIO;
> @@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
>                            NULL);
>          if (ret != request->len) {
>              reply->error = EIO;
> +            s->quit = true;
>          }
>      }
>
> @@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
>      NBDClientSession *s = nbd_get_client_session(bs);
>      int i = HANDLE_TO_INDEX(s, request->handle);
>
> -    s->recv_coroutine[i] = NULL;
> +    s->requests[i].coroutine = NULL;
>
>      /* Kick the read_reply_co to get the next reply. */
>      if (s->read_reply_co) {
> --
> 2.13.5
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK