qemu-devel.nongnu.org archive mirror
From: "Alex Bennée" <alex.bennee@linaro.org>
To: fred.konrad@greensocs.com
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
	mark.burton@greensocs.com, qemu-devel@nongnu.org, agraf@suse.de,
	guillaume.delbergue@greensocs.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX
Date: Tue, 09 Jun 2015 14:55:58 +0100	[thread overview]
Message-ID: <87pp556l5d.fsf@linaro.org>
In-Reply-To: <871thl8ctj.fsf@linaro.org>


Alex Bennée <alex.bennee@linaro.org> writes:

> fred.konrad@greensocs.com writes:
>
>> From: KONRAD Frederic <fred.konrad@greensocs.com>
>>
>> This mechanism replaces the existing load/store exclusive mechanism which seems
>> to be broken for multithread.
>> It follows the intention of the existing mechanism and stores the target address
>> and data values during a load operation and checks that they remain unchanged
>> before a store.
>>
>> In common with the older approach, this provides weaker semantics than
>> required: another processor could perform a non-exclusive write of the
>> same value without the store-exclusive failing. In practice this seems
>> to be irrelevant.
>
<snip>
>>  
>> +/* Protect the cpu_exclusive_* variables. */
>> +__thread bool cpu_have_exclusive_lock;
>> +QemuMutex cpu_exclusive_lock;
>> +
>> +inline void arm_exclusive_lock(void)
>> +{
>> +    if (!cpu_have_exclusive_lock) {
>> +        qemu_mutex_lock(&cpu_exclusive_lock);
>> +        cpu_have_exclusive_lock = true;
>> +    }
>> +}
>> +
>> +inline void arm_exclusive_unlock(void)
>> +{
>> +    if (cpu_have_exclusive_lock) {
>> +        cpu_have_exclusive_lock = false;
>> +        qemu_mutex_unlock(&cpu_exclusive_lock);
>> +    }
>> +}
>
> I don't quite follow. If these locks are meant to be protecting access to
> variables then how do they do that? The lock won't block if another
> thread is currently messing with the protected values.
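The mechanism the commit message above describes (record the value at
load-exclusive, then compare-and-swap at store-exclusive) can be modelled in
isolation. This is a hypothetical standalone sketch using C11 atomics, not the
patch's actual code; the names memory_word, ldrex and strex are illustrative.
It also shows the weakness the commit message concedes: an intervening write
of the *same* value goes undetected.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t memory_word;   /* models the guest memory location */
static uint32_t exclusive_val;         /* value recorded by the load-exclusive */

/* LDREX: record the current value for the later comparison. */
static void ldrex(void)
{
    exclusive_val = atomic_load(&memory_word);
}

/* STREX: succeed only if the location still holds the recorded value.
 * Returns 0 on success, 1 on failure, matching ARM's convention. */
static int strex(uint32_t newval)
{
    uint32_t expected = exclusive_val;
    return atomic_compare_exchange_strong(&memory_word, &expected, newval)
           ? 0 : 1;
}
```

Note the cmpxchg only compares *values*, so a non-exclusive write that stores
back the previously recorded value is indistinguishable from no write at all
(the classic ABA case).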

Having re-read after coffee, I'm still wondering why we need the
per-thread bool. All the lock/unlock pairs are for critical sections, so
don't we just want to serialise on the qemu_mutex_lock()? What do the
flags add, apart from allowing you to nest locks that shouldn't happen?
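If nesting really isn't supposed to happen, the critical sections could
serialise directly on the mutex with no per-thread flag at all. A minimal
standalone sketch of that alternative, using plain pthreads in place of
QemuMutex so it compiles outside QEMU; the worker loop exists only to
exercise the lock:

```c
#include <pthread.h>

static pthread_mutex_t cpu_exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
static long protected_counter;  /* stands in for the cpu_exclusive_* state */

/* No per-thread flag: a second lock attempt from the same thread would
 * deadlock, which is exactly how a nesting bug would surface. */
static void arm_exclusive_lock(void)
{
    pthread_mutex_lock(&cpu_exclusive_lock);
}

static void arm_exclusive_unlock(void)
{
    pthread_mutex_unlock(&cpu_exclusive_lock);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        arm_exclusive_lock();
        protected_counter++;        /* critical section */
        arm_exclusive_unlock();
    }
    return NULL;
}
```

With the flag-based version, by contrast, a nested lock call silently becomes
a no-op, hiding any mismatched lock/unlock pairing instead of exposing it.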


-- 
Alex Bennée

Thread overview: 10+ messages
2015-06-05 14:31 [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX fred.konrad
2015-06-09  9:12 ` Alex Bennée
2015-06-09  9:39   ` Mark Burton
2015-06-09 13:55   ` Alex Bennée [this message]
2015-06-09 14:00     ` Mark Burton
2015-06-09 15:35       ` Alex Bennée
2015-06-10  8:03     ` Frederic Konrad
2015-06-10  8:41       ` Frederic Konrad
2015-06-09 13:59 ` Alex Bennée
2015-06-09 14:02   ` Mark Burton
