From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752014AbbCOWKb (ORCPT );
	Sun, 15 Mar 2015 18:10:31 -0400
Received: from mail-la0-f45.google.com ([209.85.215.45]:36808 "EHLO
	mail-la0-f45.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751166AbbCOWK2 (ORCPT );
	Sun, 15 Mar 2015 18:10:28 -0400
Date: Sun, 15 Mar 2015 23:10:18 +0100
From: Rabin Vincent
To: Matthias Bonne
Cc: Davidlohr Bueso, Yann Droneaud,
	kernelnewbies@kernelnewbies.org, linux-kernel@vger.kernel.org,
	Peter Zijlstra, Ingo Molnar
Subject: Re: Question on mutex code
Message-ID: <20150315221018.GA25881@debian>
References: <54F64E10.7050801@gmail.com>
 <1425992639.3991.11.camel@opteya.com>
 <5504BECB.50605@gmail.com>
 <1426381401.28068.68.camel@stgolabs.net>
 <1426381746.28068.70.camel@stgolabs.net>
 <5505FE53.1060807@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5505FE53.1060807@gmail.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Mar 15, 2015 at 11:49:07PM +0200, Matthias Bonne wrote:
> So both mutex_trylock() and mutex_unlock() always use the slow paths.
> The slowpath for mutex_unlock() is __mutex_unlock_slowpath(), which
> simply calls __mutex_unlock_common_slowpath(), and the latter starts
> like this:
>
>         /*
>          * As a performance measurement, release the lock before doing other
>          * wakeup related duties to follow. This allows other tasks to acquire
>          * the lock sooner, while still handling cleanups in past unlock calls.
>          * This can be done as we do not enforce strict equivalence between the
>          * mutex counter and wait_list.
>          *
>          * Some architectures leave the lock unlocked in the fastpath failure
>          * case, others need to leave it locked. In the later case we have to
>          * unlock it here - as the lock counter is currently 0 or negative.
>          */
>         if (__mutex_slowpath_needs_to_unlock())
>                 atomic_set(&lock->count, 1);
>
>         spin_lock_mutex(&lock->wait_lock, flags);
>         [...]
>
> So the counter is set to 1 before taking the spinlock, which I think
> might cause the race. Did I miss something?

Yes, you miss the fact that __mutex_slowpath_needs_to_unlock() is 0 for
the CONFIG_DEBUG_MUTEXES case:

#ifdef CONFIG_DEBUG_MUTEXES
# include "mutex-debug.h"
# include <asm-generic/mutex-null.h>
/*
 * Must be 0 for the debug case so we do not do the unlock outside of the
 * wait_lock region. debug_mutex_unlock() will do the actual unlock in this
 * case.
 */
# undef __mutex_slowpath_needs_to_unlock
# define __mutex_slowpath_needs_to_unlock()	0