From: Paolo Bonzini <pbonzini@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>, qemu-block@nongnu.org
Subject: Re: [PATCH 2/3] async: always set ctx->notified in aio_notify()
Date: Tue, 4 Aug 2020 09:12:46 +0200	[thread overview]
Message-ID: <eb304bcb-6b5b-8544-0e94-e84055d4fab8@redhat.com> (raw)
In-Reply-To: <20200804052804.1165291-3-stefanha@redhat.com>

On 04/08/20 07:28, Stefan Hajnoczi wrote:
> @@ -425,19 +425,14 @@ void aio_notify(AioContext *ctx)
>      smp_mb();
>      if (atomic_read(&ctx->notify_me)) {
>          event_notifier_set(&ctx->notifier);
> -        atomic_mb_set(&ctx->notified, true);
>      }
> +
> +    atomic_mb_set(&ctx->notified, true);
>  }

This can be an atomic_set since it's already ordered by the smp_mb()
(actually a smp_wmb() would be enough for ctx->notified, though not for
ctx->notify_me).

>  void aio_notify_accept(AioContext *ctx)
>  {
> -    if (atomic_xchg(&ctx->notified, false)
> -#ifdef WIN32
> -        || true
> -#endif
> -    ) {
> -        event_notifier_test_and_clear(&ctx->notifier);
> -    }
> +    atomic_mb_set(&ctx->notified, false);
>  }

I am not sure what this should be.

- If ctx->notified is cleared earlier, it's not a problem: there is
just a possibility for the other side to set it to true again and cause
a spurious wakeup.

- If it is cleared later, during the dispatch, there is a possibility
that we miss a set:

	CPU1				CPU2
	------------------------------- ------------------------------
	read bottom half flags
					set BH_SCHEDULED
					set ctx->notified
	clear ctx->notified (reordered)

and the next polling loop misses ctx->notified.

So the requirement is to write ctx->notified before the dispatching
phase starts.  It would be a "store acquire", but that doesn't exist; I
would replace it with atomic_set() + smp_mb(), plus a comment saying
that it pairs with the smp_mb() (which actually could be a smp_wmb()) in
aio_notify().

In theory the barrier in aio_bh_dequeue is enough, but I don't
understand memory_order_seq_cst atomics well enough to be sure, so I
prefer an explicit fence.

Feel free to include part of this description in aio_notify_accept().

Paolo




Thread overview: 10+ messages
2020-08-04  5:28 [PATCH 0/3] aio-posix: keep aio_notify_me disabled during polling Stefan Hajnoczi
2020-08-04  5:28 ` [PATCH 1/3] async: rename event_notifier_dummy_cb/poll() Stefan Hajnoczi
2020-08-04  5:28 ` [PATCH 2/3] async: always set ctx->notified in aio_notify() Stefan Hajnoczi
2020-08-04  7:12   ` Paolo Bonzini [this message]
2020-08-04 10:23     ` Stefan Hajnoczi
2020-08-04  5:28 ` [PATCH 3/3] aio-posix: keep aio_notify_me disabled during polling Stefan Hajnoczi
2020-08-04 10:29   ` Stefan Hajnoczi
2020-08-04 16:53     ` Paolo Bonzini
2020-08-05  8:59       ` Stefan Hajnoczi
2020-08-04  7:13 ` [PATCH 0/3] " Paolo Bonzini
