From mboxrd@z Thu Jan 1 00:00:00 1970
From: rabin@rab.in (Rabin Vincent)
Date: Sun, 15 Mar 2015 22:11:14 -0000
Subject: Question on mutex code
In-Reply-To: <5505FE53.1060807@gmail.com>
References: <54F64E10.7050801@gmail.com>
 <1425992639.3991.11.camel@opteya.com>
 <5504BECB.50605@gmail.com>
 <1426381401.28068.68.camel@stgolabs.net>
 <1426381746.28068.70.camel@stgolabs.net>
 <5505FE53.1060807@gmail.com>
Message-ID: <20150315221018.GA25881@debian>
To: kernelnewbies@lists.kernelnewbies.org
List-Id: kernelnewbies.lists.kernelnewbies.org

On Sun, Mar 15, 2015 at 11:49:07PM +0200, Matthias Bonne wrote:
> So both mutex_trylock() and mutex_unlock() always use the slow paths.
> The slowpath for mutex_unlock() is __mutex_unlock_slowpath(), which
> simply calls __mutex_unlock_common_slowpath(), and the latter starts
> like this:
>
>     /*
>      * As a performance measurement, release the lock before doing other
>      * wakeup related duties to follow. This allows other tasks to acquire
>      * the lock sooner, while still handling cleanups in past unlock calls.
>      * This can be done as we do not enforce strict equivalence between the
>      * mutex counter and wait_list.
>      *
>      *
>      * Some architectures leave the lock unlocked in the fastpath failure
>      * case, others need to leave it locked. In the later case we have to
>      * unlock it here - as the lock counter is currently 0 or negative.
>      */
>     if (__mutex_slowpath_needs_to_unlock())
>             atomic_set(&lock->count, 1);
>
>     spin_lock_mutex(&lock->wait_lock, flags);
> [...]
>
> So the counter is set to 1 before taking the spinlock, which I think
> might cause the race. Did I miss something?

Yes, you're missing the fact that __mutex_slowpath_needs_to_unlock() is
0 for the CONFIG_DEBUG_MUTEXES case:

#ifdef CONFIG_DEBUG_MUTEXES
# include "mutex-debug.h"
# include <asm-generic/mutex-null.h>
/*
 * Must be 0 for the debug case so we do not do the unlock outside of the
 * wait_lock region. debug_mutex_unlock() will do the actual unlock in this
 * case.
 */
# undef __mutex_slowpath_needs_to_unlock
# define __mutex_slowpath_needs_to_unlock()	0
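
To make the ordering difference concrete, here is a simplified
user-space sketch of the two unlock paths. This is my own illustration,
not the kernel code: the names model_mutex, unlock_nondebug() and
unlock_debug() are made up, and pthread_mutex_t stands in for the
kernel's wait_lock spinlock.

#include <stdatomic.h>
#include <pthread.h>

struct model_mutex {
	atomic_int count;          /* 1: unlocked, 0: locked, <0: waiters */
	pthread_mutex_t wait_lock; /* stands in for spin_lock_mutex()     */
};

/*
 * __mutex_slowpath_needs_to_unlock() == 1: the lock is released before
 * wait_lock is taken, so another task may acquire it while we are still
 * doing wakeup work. That is allowed, because the counter and the wait
 * list do not have to agree exactly.
 */
void unlock_nondebug(struct model_mutex *m)
{
	atomic_store(&m->count, 1);          /* lock is free from here on */
	pthread_mutex_lock(&m->wait_lock);
	/* ... wake one waiter from the wait list ... */
	pthread_mutex_unlock(&m->wait_lock);
}

/*
 * __mutex_slowpath_needs_to_unlock() == 0 (CONFIG_DEBUG_MUTEXES): the
 * counter only becomes 1 inside the wait_lock region, the way
 * debug_mutex_unlock() does it, so the debug checks never observe a
 * half-released mutex.
 */
void unlock_debug(struct model_mutex *m)
{
	pthread_mutex_lock(&m->wait_lock);
	/* ... debug consistency checks on owner/wait list ... */
	atomic_store(&m->count, 1);          /* unlock, under wait_lock */
	pthread_mutex_unlock(&m->wait_lock);
}

And since the debug trylock slowpath also takes wait_lock before
touching the counter, the counter update and the wait list manipulation
are fully serialized in the debug case, which is why the race you
describe cannot happen there.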