From: David Engraf <david.engraf@sysgo.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Cc: Fam Zheng <famz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] qemu_mutex_iothread_locked not correctly synchronized
Date: Wed, 25 Nov 2015 10:08:50 +0100
Message-ID: <56557AA2.9080100@sysgo.com>
In-Reply-To: <5654883F.9050509@redhat.com>

Hi Paolo,

On 24.11.2015 at 16:54, Paolo Bonzini wrote:
> On 24/11/2015 16:43, David Engraf wrote:
>> Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the function
>> qemu_mutex_iothread_locked to avoid recursive locking of the iothread
>> lock. The iothread_locked variable uses the __thread attribute which
>> results in a per thread storage location whereas the qemu_global_mutex
>> is not thread specific. This leads to race conditions because the
>> variable is not in sync between threads.
>
> Frankly, this makes no sense.  You're modifying the
> qemu_mutex_iothread_locked function to return whether _some_ thread has
> taken the mutex, but the function tells you whether _the calling_ thread
> has taken the mutex (that's the point about recursive locking).  This
> breaks the callers of qemu_mutex_iothread_locked completely.
>
> The variable need not be in sync between the different threads; each
> thread only needs to know about itself.  The current code works because:
>
> 1) the iothread mutex is not locked before qemu_mutex_lock_iothread
>
> 2) the iothread mutex is not locked after qemu_mutex_unlock_iothread
>
> 3) qemu_cond_wait doesn't matter because the thread does not run during
> a qemu_cond_wait.
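
For reference, the per-thread flag you describe looks roughly like this
(a simplified sketch of the idea behind commit afbe70535ff1, not the
verbatim cpus.c code, which also handles contention with the TCG thread):

#include <stdbool.h>
#include "qemu/thread.h"        /* QemuMutex, qemu_mutex_lock/unlock */

static QemuMutex qemu_global_mutex;

/* __thread: every thread gets its own copy of this flag, so it can
 * only answer "does *this* thread hold the iothread mutex?". */
static __thread bool iothread_locked;

bool qemu_mutex_iothread_locked(void)
{
    return iothread_locked;
}

void qemu_mutex_lock_iothread(void)
{
    qemu_mutex_lock(&qemu_global_mutex);
    iothread_locked = true;
}

void qemu_mutex_unlock_iothread(void)
{
    iothread_locked = false;
    qemu_mutex_unlock(&qemu_global_mutex);
}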

But this is my issue on Windows:

qemu_tcg_cpu_thread_fn
-> qemu_tcg_wait_io_event
    -> qemu_cond_wait re-acquires the mutex before returning

qemu_tcg_cpu_thread_fn
-> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
    -> cpu_exec ends up calling prepare_mmio_access
       prepare_mmio_access uses qemu_mutex_iothread_locked to check
       whether the lock is already held by this thread, but it returns 0
       because qemu_cond_wait does not set iothread_locked even though
       the mutex is locked. prepare_mmio_access therefore calls
       qemu_mutex_lock_iothread, which tries to acquire the mutex a
       second time. On Windows this triggers an assertion:

Assertion failed: mutex->owner == 0, file util/qemu-thread-win32.c, line 60
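
To make the failing path concrete, here is a paraphrased sketch of the
two pieces involved (not the verbatim exec.c / qemu-thread-win32.c code;
field names and details are from memory, and the comments describe the
scenario above):

/* exec.c, roughly: decide whether the iothread lock must be taken
 * before an MMIO access (the real code also checks mr->global_locking). */
static bool prepare_mmio_access(MemoryRegion *mr)
{
    bool release_lock = false;

    if (!qemu_mutex_iothread_locked()) {
        /* Per-thread flag says "not held", so take the lock ... */
        qemu_mutex_lock_iothread();
        release_lock = true;
    }
    return release_lock;
}

/* util/qemu-thread-win32.c, paraphrased: the wrapper tracks an owner
 * and asserts that the same thread never locks twice, because the
 * underlying Win32 critical section would silently allow recursion. */
void qemu_mutex_lock(QemuMutex *mutex)
{
    EnterCriticalSection(&mutex->lock);  /* field name assumed */
    assert(mutex->owner == 0);           /* the assertion quoted above */
    mutex->owner = GetCurrentThreadId();
}

So if the mutex is in fact already held by the calling thread while
iothread_locked is still false, the second EnterCriticalSection succeeds
(critical sections are recursive), and it is the owner check that aborts.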

I'm using a self-compiled gcc 5.2.0 MinGW cross-compiler.

David

Thread overview: 12+ messages
2015-11-24 15:43 [Qemu-devel] [PATCH] qemu_mutex_iothread_locked not correctly synchronized David Engraf
2015-11-24 15:54 ` Paolo Bonzini
2015-11-25  9:08   ` David Engraf [this message]
2015-11-25 12:31     ` [Qemu-devel] [PATCH v2] " David Engraf
2015-11-25 13:26       ` Paolo Bonzini
2015-11-25 14:04         ` David Engraf
2015-11-25 14:36           ` Paolo Bonzini
2015-11-25 15:48             ` David Engraf
2015-11-25 16:16               ` Paolo Bonzini
2015-11-26  9:12                 ` David Engraf
2015-11-26 11:25                   ` Stefan Weil
2015-11-26 14:26                     ` David Engraf
