From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alex Bennée
Subject: Re: [Qemu-devel] [PATCH 5/9] memory: let address_space_rw/ld*/st* run outside the BQL
Date: Wed, 24 Jun 2015 19:50:01 +0100
Message-ID: <87twtxx7om.fsf@linaro.org>
In-Reply-To: <558AE719.5090506@redhat.com>
References: <1434646046-27150-1-git-send-email-pbonzini@redhat.com> <1434646046-27150-6-git-send-email-pbonzini@redhat.com> <87zj3pxcyl.fsf@linaro.org> <558AE719.5090506@redhat.com>
To: Paolo Bonzini
Cc: jan.kiszka@siemens.com, qemu-devel@nongnu.org

Paolo Bonzini writes:

> On 24/06/2015 18:56, Alex Bennée wrote:
>> This is where I get confused about the advantage of this over
>> same-PID recursive locking. If you use recursive locking you don't
>> need to add a bunch of state to remind you when to release the
>> lock. Then you'd just need:
>>
>> static void prepare_mmio_access(MemoryRegion *mr)
>> {
>>     if (mr->global_locking) {
>>         qemu_mutex_lock_iothread();
>>     }
>>     if (mr->flush_coalesced_mmio) {
>>         qemu_mutex_lock_iothread();
>>         qemu_flush_coalesced_mmio_buffer();
>>         qemu_mutex_unlock_iothread();
>>     }
>> }
>>
>> and a bunch of:
>>
>>     if (mr->global_locking) {
>>         qemu_mutex_unlock_iothread();
>>     }
>>
>> in the access functions. Although I suspect you could push the
>> mr->global_locking check up to the dispatch functions.
>>
>> Am I missing something here?
>
> The semantics of recursive locking with respect to condition variables
> are not clear. Either cond_wait releases all levels of the lock, in
> which case the mutex can be released when the code doesn't expect it
> to be, or cond_wait doesn't release them all, in which case you get
> deadlocks.
>
> POSIX says to do the latter:
>
>     "It is advised that an application should not use a
>     PTHREAD_MUTEX_RECURSIVE mutex with condition variables because
>     the implicit unlock performed for a pthread_cond_timedwait() or
>     pthread_cond_wait() may not actually release the mutex (if it
>     had been locked multiple times). If this happens, no other
>     thread can satisfy the condition of the predicate."
>
> So recursive locking is okay if you don't have condition variables
> attached to the lock (and if you cannot do without it), but
> qemu_global_mutex does have them.

Ahh OK, so I was missing something ;-)

> QEMU has so far tried to use the solution that Stevens outlines here:
> https://books.google.it/books?id=kCTMFpEcIOwC&pg=PA434
>
>     "... Leave the interfaces to func1 and func2 unchanged, and
>     avoid a recursive mutex by providing a private version of func2,
>     called func2_locked. To call func2_locked, hold the mutex
>     embedded in the data structure whose address we pass as the
>     argument."
>
> as a way to avoid recursive locking. This is much better because
> a) it is more efficient: taking locks can be expensive even if
> they're uncontended, especially if your VM spans multiple NUMA nodes
> on the host; b) it is always clear when a lock is taken and when it
> isn't.
>
> Note that Stevens has another example of recursive locking right
> after this one, involving callbacks, but it's ill-defined. There's
> no reason for the "timeout" function on page 437 to hold the mutex
> when it calls "func". It can unlock before and re-lock afterwards,
> like QEMU's own timerlist_run_timers function.

Unfortunately I can't read that link, but it sounds like I should get
myself a copy of the book.
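If I've understood the quoted passage, the pattern goes something like
this (my own sketch; func1, func2 and func2_locked are the names from
the quote, the struct and its fields are invented for illustration):

#include <pthread.h>

struct foo {
    pthread_mutex_t f_lock;
    int f_count;
};

/* Private version: the caller must already hold f_lock. */
static void func2_locked(struct foo *fp)
{
    fp->f_count++;              /* stand-in for the real work */
}

/* Public version: takes and releases the lock around the work. */
void func2(struct foo *fp)
{
    pthread_mutex_lock(&fp->f_lock);
    func2_locked(fp);
    pthread_mutex_unlock(&fp->f_lock);
}

/* func1 takes the lock for its own work and then calls func2_locked
   directly rather than recursing into the public func2. */
void func1(struct foo *fp)
{
    pthread_mutex_lock(&fp->f_lock);
    func2_locked(fp);
    pthread_mutex_unlock(&fp->f_lock);
}

So code that doesn't hold the lock calls func2, code that already
holds it calls func2_locked, and the lock is never taken twice.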
I take it that approach wouldn't approve of:

static __thread int iothread_lock_count;

void qemu_mutex_lock_iothread(void)
{
    if (iothread_lock_count == 0) {
        qemu_mutex_lock(&qemu_global_mutex);
    }
    iothread_lock_count++;
}

void qemu_mutex_unlock_iothread(void)
{
    iothread_lock_count--;
    if (iothread_lock_count == 0) {
        qemu_mutex_unlock(&qemu_global_mutex);
    }
    if (iothread_lock_count < 0) {
        fprintf(stderr, "%s: error, too many unlocks %d\n",
                __func__, iothread_lock_count);
    }
}

Which should achieve the same "only one lock held" semantics but still
make the calling code a little less worried about tracking the state.
I guess it depends on whether there is ever going to be a situation
where we say "lock is held, do something different"?

> Paolo

--
Alex Bennée
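P.S. Just to check I've got the condition variable problem straight:
with a recursive mutex the failure would look something like this (a
made-up example, nothing to do with the real qemu_global_mutex code):

#include <pthread.h>

static pthread_mutex_t lock;    /* set up as PTHREAD_MUTEX_RECURSIVE */
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int ready;

void init_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&lock, &attr);
}

static void wait_for_ready(void)
{
    pthread_mutex_lock(&lock);  /* second acquisition: count is now 2 */
    while (!ready) {
        /* Per the POSIX text quoted above, this is allowed to drop
           only one level of the lock, so the mutex stays held and
           whichever thread would set 'ready' can never acquire it:
           we sleep here forever. */
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

void caller(void)
{
    pthread_mutex_lock(&lock);  /* first acquisition */
    wait_for_ready();           /* deadlocks as described above */
    pthread_mutex_unlock(&lock);
}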