From: Paolo Bonzini <pbonzini@redhat.com>
To: Richard Henderson <richard.henderson@linaro.org>, qemu-devel@nongnu.org
Cc: gshan@redhat.com, eesposit@redhat.com, david@redhat.com,
stefanha@redhat.com, cohuck@redhat.com, eauger@redhat.com
Subject: Re: [PATCH v2 8/9] async: update documentation of the memory barriers
Date: Tue, 7 Mar 2023 11:49:54 +0100
Message-ID: <12ea9d46-1e95-62a1-70f7-d77a66b44bd0@redhat.com>
In-Reply-To: <10e49543-b4f8-f22c-a9ab-e6340c6a0074@linaro.org>

On 3/7/23 00:39, Richard Henderson wrote:
> On 3/6/23 14:33, Paolo Bonzini wrote:
>> @@ -107,11 +114,8 @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
>>      QSLIST_REMOVE_HEAD(head, next);
>>      /*
>> -     * The qatomic_and is paired with aio_bh_enqueue(). The implicit memory
>> -     * barrier ensures that the callback sees all writes done by the scheduling
>> -     * thread. It also ensures that the scheduling thread sees the cleared
>> -     * flag before bh->cb has run, and thus will call aio_notify again if
>> -     * necessary.
>> +     * Synchronizes with qatomic_fetch_or() in aio_bh_enqueue(), ensuring that
>> +     * the removal finishes before BH_PENDING is reset.
>>      */
>>      *flags = qatomic_fetch_and(&bh->flags,
>
>
> Per this new comment, about the remove finishing first, it would seem
> that we need smp_mb__before_rmw here, because QSLIST_REMOVE_HEAD is not
> SEQCST.

Only release-acquire is needed for consistent access to the list; seqcst
and smp_mb__before/after_rmw() are only needed (as expected) to handle the
wakeup.

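To make the pairing concrete, here is a reduced C11 sketch of the two
sides as I am describing them.  This is my own simplified model, not the
QEMU code: in_list stands in for the QSLIST manipulation, payload for
whatever bh->cb reads, and the fake_* names are invented for the sketch.

#include <stdatomic.h>
#include <stdbool.h>

#define BH_PENDING 1u

struct fake_bh {
    _Atomic unsigned flags;
    _Atomic bool in_list;    /* stands in for QSLIST membership */
    int payload;             /* stands in for what bh->cb will read */
};

/* Scheduling side, in the spirit of aio_bh_enqueue(): the acquire half of
 * the fetch_or pairs with the release half of the fetch_and below, so a
 * re-schedule cannot overtake the removal; the release store to in_list
 * publishes payload to the dequeuing thread. */
static void fake_enqueue(struct fake_bh *bh, int value)
{
    bh->payload = value;
    unsigned old = atomic_fetch_or_explicit(&bh->flags, BH_PENDING,
                                            memory_order_acq_rel);
    if (!(old & BH_PENDING)) {
        atomic_store_explicit(&bh->in_list, true, memory_order_release);
        /* the real code would aio_notify() here; *that* is where the
         * seqcst/smp_mb__after_rmw() discussion comes in, not the list. */
    }
}

/* Dequeuing side, in the spirit of aio_bh_dequeue(): the acquire load of
 * in_list makes payload visible before the callback would run; the
 * release half of the fetch_and orders the removal (the in_list store)
 * before BH_PENDING is cleared, which is what the updated comment says. */
static bool fake_dequeue(struct fake_bh *bh, int *value)
{
    if (!atomic_load_explicit(&bh->in_list, memory_order_acquire)) {
        return false;
    }
    atomic_store_explicit(&bh->in_list, false, memory_order_relaxed);
    unsigned old = atomic_fetch_and_explicit(&bh->flags, ~BH_PENDING,
                                             memory_order_acq_rel);
    if (!(old & BH_PENDING)) {
        return false;
    }
    *value = bh->payload;
    return true;
}
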
Just to be safe, I tried modeling this with cppmem
(http://svr-pes20-cppmem.cl.cam.ac.uk/cppmem/); support for
compare-and-swap is very limited, therefore the test looks nothing like
the C code(*), but it should be ok:

int main() {
  atomic_int x = 0;
  atomic_int y = 0;
  {{{
    { cas_strong_explicit(&x, 0, 1, mo_acquire, mo_acquire);
      x.load(mo_relaxed).readsvalue(1);    // fetch_or returned 0
      // r1 = y.load(mo_relaxed);
      y.store(1, mo_release); }            // bh inserted
  ||| { y.load(mo_acquire).readsvalue(1);  // bh was in list
        y.store(0, mo_relaxed);            // bh removed
        // r1 = x.load(mo_relaxed);
        x.store(0, mo_release); }          // fetch_and
  ||| { cas_strong_explicit(&x, 0, 2, mo_acquire, mo_acquire);
        x.load(mo_relaxed).readsvalue(2);  // fetch_or returned 0
        // r1 = y.load(mo_relaxed);
        y.store(2, mo_release); }          // bh inserted
  }}};
  return 0;
}

You can uncomment the debugging instructions (r1 = ...) too, but
uncommenting all three at once will blow up the model.

Use the release_acquire model, since it is faster and there are no seqcst
operations in the test above.  It will take 1-2 minutes to run the model
on a laptop, and the result shows that the only consistent (i.e. allowed
by the C standard) execution is the one where thread 0 runs entirely
before thread 1, and thread 1 runs entirely before thread 2.  You get the
opposite order if the "bh was in list" line is changed to "readsvalue(2)".

This matches the description I had sent yesterday.

Paolo

(*) A couple of points of interest.  First, there is no way to say "the
CAS has succeeded", so I am writing different values to x (this is not a
problem because the code in QEMU only checks whether the pending bit was
zero) and checking that they can be read back (which isn't a big
limitation either, because other threads could sneak in anyway right
after the checking load).  Second, there is no way to say "reads something
other than 0", so you cannot get both valid executions with one run of the
model; instead, you can change the "bh was in list" line to
"readsvalue(2)".
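
If it helps to map the cppmem syntax back onto standard atomics, the same
three-thread skeleton in plain C11 (<stdatomic.h> plus <threads.h>) would
look roughly like this.  It is only a readability aid: a single run
exercises whichever interleaving the scheduler picks, so it cannot replace
the model, and the readsvalue() constraints become ordinary if checks.

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_int x = 0;   /* the pending bit, as in the model */
static atomic_int y = 0;   /* list membership, as in the model */

static int thread0(void *arg)   /* fetch_or returned 0, bh inserted */
{
    (void)arg;
    int expected = 0;
    if (atomic_compare_exchange_strong_explicit(&x, &expected, 1,
            memory_order_acquire, memory_order_acquire)) {
        atomic_store_explicit(&y, 1, memory_order_release);
    }
    return 0;
}

static int thread1(void *arg)   /* bh removed, then fetch_and */
{
    (void)arg;
    if (atomic_load_explicit(&y, memory_order_acquire) == 1) {
        atomic_store_explicit(&y, 0, memory_order_relaxed);
        atomic_store_explicit(&x, 0, memory_order_release);
    }
    return 0;
}

static int thread2(void *arg)   /* fetch_or returned 0 again, bh re-inserted */
{
    (void)arg;
    int expected = 0;
    if (atomic_compare_exchange_strong_explicit(&x, &expected, 2,
            memory_order_acquire, memory_order_acquire)) {
        atomic_store_explicit(&y, 2, memory_order_release);
    }
    return 0;
}

int main(void)
{
    thrd_t t[3];
    thrd_create(&t[0], thread0, NULL);
    thrd_create(&t[1], thread1, NULL);
    thrd_create(&t[2], thread2, NULL);
    for (int i = 0; i < 3; i++) {
        thrd_join(t[i], NULL);
    }
    printf("x=%d y=%d\n", atomic_load(&x), atomic_load(&y));
    return 0;
}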