From: Frederic Konrad <fred.konrad@greensocs.com>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
mark.burton@greensocs.com, qemu-devel@nongnu.org, agraf@suse.de,
guillaume.delbergue@greensocs.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX
Date: Wed, 10 Jun 2015 10:03:12 +0200
Message-ID: <5577EF40.2010207@greensocs.com>
In-Reply-To: <87pp556l5d.fsf@linaro.org>
On 09/06/2015 15:55, Alex Bennée wrote:
> Alex Bennée <alex.bennee@linaro.org> writes:
>
>> fred.konrad@greensocs.com writes:
>>
>>> From: KONRAD Frederic <fred.konrad@greensocs.com>
>>>
>>> This mechanism replaces the existing load/store exclusive mechanism, which seems
>>> to be broken for multithreaded execution.
>>> It follows the intention of the existing mechanism: it stores the target address
>>> and data values during a load operation and checks that they remain unchanged
>>> before a store.
>>>
>>> In common with the older approach, this provides weaker semantics than required,
>>> in that a different processor could write the same value as a non-exclusive
>>> write and the store would still succeed; in practice, however, this seems to be
>>> irrelevant.
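(For concreteness, the cmpxchg check described above amounts to something
like the sketch below; the helper name and includes are illustrative, not
the actual patch code.)

#include <stdbool.h>
#include <stdint.h>

/* Succeed only if the location still holds the value saved by the
 * matching LDREX; a concurrent write invalidates the exclusivity.
 * Note the weaker semantics mentioned above: a non-exclusive write of
 * the *same* value also passes this check. */
static bool strex_check(uint64_t *host_addr, uint64_t expected,
                        uint64_t newval)
{
    /* GCC builtin: atomic compare-and-swap, returns true on success. */
    return __sync_bool_compare_and_swap(host_addr, expected, newval);
}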
> <snip>
>>>
>>> +/* Protect the cpu_exclusive_* variables. */
>>> +__thread bool cpu_have_exclusive_lock;
>>> +QemuMutex cpu_exclusive_lock;
>>> +
>>> +inline void arm_exclusive_lock(void)
>>> +{
>>> + if (!cpu_have_exclusive_lock) {
>>> + qemu_mutex_lock(&cpu_exclusive_lock);
>>> + cpu_have_exclusive_lock = true;
>>> + }
>>> +}
>>> +
>>> +inline void arm_exclusive_unlock(void)
>>> +{
>>> + if (cpu_have_exclusive_lock) {
>>> + cpu_have_exclusive_lock = false;
>>> + qemu_mutex_unlock(&cpu_exclusive_lock);
>>> + }
>>> +}
>> I don't quite follow. If these locks are meant to be protecting access to
>> variables, then how do they do that? The lock won't block if another
>> thread is currently messing with the protected values.
> Having re-read after coffee, I'm still wondering why we need the
> per-thread bool. All the lock/unlock pairs are for critical sections, so
> don't we just want to serialise on the qemu_mutex_lock()? What do the
> flags add, apart from allowing you to nest locks, which shouldn't happen?
>
>
You are probably right; this might be a leftover from the old approach.
There were branches, so we needed to allow nested locks.
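With the nesting gone, the helpers reduce to plain wrappers around the
mutex; something like this (untested sketch):

/* No per-thread flag: lock/unlock pairs are never nested. */
void arm_exclusive_lock(void)
{
    qemu_mutex_lock(&cpu_exclusive_lock);
}

void arm_exclusive_unlock(void)
{
    qemu_mutex_unlock(&cpu_exclusive_lock);
}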
Thanks,
Fred