From: "Denis V. Lunev" <den@openvz.org>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	qemu-block@nongnu.org, qemu-devel@nongnu.org
Cc: kwolf@redhat.com, dplotnikov@virtuozzo.com, mreitz@redhat.com
Subject: Re: [PATCH] block/aio_task: allow start/wait task from any coroutine
Date: Thu, 11 Jun 2020 15:31:46 +0300
Message-ID: <5904554c-4c12-7f59-00ec-f842aa813a12@openvz.org>
In-Reply-To: <20200611073631.10817-1-vsementsov@virtuozzo.com>

On 6/11/20 10:36 AM, Vladimir Sementsov-Ogievskiy wrote:
> Currently, the aio task pool assumes that there is a main coroutine, which
> creates tasks and waits for them. Let's remove the restriction by using
> CoQueue. The code becomes clearer and the interface more obvious.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>
> Hi! Here is my counter-proposal for
> "[PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine"
> by Denis. I'm sure that if we are going to change something here, it is
> better to make the interface work from any coroutine, without the
> restriction of only one waiter at a time.
>
> (Note that it is still not thread-safe.)
>
>
>  block/aio_task.c | 15 ++++-----------
>  1 file changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/block/aio_task.c b/block/aio_task.c
> index 88989fa248..d48b29ff83 100644
> --- a/block/aio_task.c
> +++ b/block/aio_task.c
> @@ -27,11 +27,10 @@
>  #include "block/aio_task.h"
>  
>  struct AioTaskPool {
> -    Coroutine *main_co;
>      int status;
>      int max_busy_tasks;
>      int busy_tasks;
> -    bool waiting;
> +    CoQueue waiters;
>  };
>  
>  static void coroutine_fn aio_task_co(void *opaque)
> @@ -52,21 +51,15 @@ static void coroutine_fn aio_task_co(void *opaque)
>  
>      g_free(task);
>  
> -    if (pool->waiting) {
> -        pool->waiting = false;
> -        aio_co_wake(pool->main_co);
> -    }
> +    qemu_co_queue_next(&pool->waiters);
Nope, this will wake up only a single waiter.
The code will deadlock if there are 2 waiters for the last
entry.

You need something like qemu_co_queue_restart_all() here
at least.
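
To make the failure mode concrete, here is a minimal sketch (not code from
the patch; the example_* helpers are invented for illustration, only
AioTaskPool, waiters and busy_tasks come from your diff):

/* Minimal sketch: two coroutines wait for the last task to finish. */
static void coroutine_fn example_wait_one(AioTaskPool *pool)
{
    /* Waiters A and B both park here while the last task is running. */
    qemu_co_queue_wait(&pool->waiters, NULL);
}

static void coroutine_fn example_task_done(AioTaskPool *pool)
{
    pool->busy_tasks--;

    /*
     * qemu_co_queue_next() wakes only the head of the queue (waiter A).
     * If this was the last task, nothing will enter this path again,
     * so waiter B sleeps forever.
     */
    qemu_co_queue_next(&pool->waiters);

    /*
     * Waking every parked coroutine avoids the stranded waiter:
     * qemu_co_queue_restart_all(&pool->waiters);
     */
}

With qemu_co_queue_next() the second waiter is never woken once the pool
drains; qemu_co_queue_restart_all() lets every waiter re-check its
condition instead.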

Den


Thread overview:
2020-06-10 19:00 [PATCH v2 0/2] qcow2: seriously improve savevm performance Denis V. Lunev
2020-06-10 19:00 ` [PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine Denis V. Lunev
2020-06-11  7:36   ` [PATCH] block/aio_task: allow start/wait task from any coroutine Vladimir Sementsov-Ogievskiy
2020-06-11 12:31     ` Denis V. Lunev [this message]
2020-06-11 12:52       ` Vladimir Sementsov-Ogievskiy
2020-06-10 19:00 ` [PATCH 2/2] qcow2: improve savevm performance Denis V. Lunev
2020-06-11  8:04   ` Vladimir Sementsov-Ogievskiy
2020-06-11  8:25     ` Denis V. Lunev
2020-06-11  8:44       ` Vladimir Sementsov-Ogievskiy
