From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <57B73A62.9020901@hpe.com>
Date: Fri, 19 Aug 2016 12:57:06 -0400
From: Waiman Long <waiman.long@hpe.com>
To: Jason Low
CC: Peter Zijlstra, Linus Torvalds, Ding Tianhong, Thomas Gleixner,
	Will Deacon, Ingo Molnar, Davidlohr Bueso, Tim Chen,
	"Paul E. McKenney"
Subject: Re: [PATCH v4] locking/mutex: Prevent lock starvation when spinning
	is disabled
References: <1471567197.4991.41.camel@j-VirtualBox>
In-Reply-To: <1471567197.4991.41.camel@j-VirtualBox>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 08/18/2016 08:39 PM, Jason Low wrote:
> Imre reported an issue where threads are getting starved when trying
> to acquire a mutex. Threads acquiring a mutex can get arbitrarily delayed
> sleeping on a mutex because other threads can continually steal the lock
> in the fastpath and/or through optimistic spinning.
>
> Waiman has developed patches that allow waiters to return to optimistic
> spinning, thus reducing the probability that starvation occurs. However,
> Imre still sees this starvation problem in the workloads when optimistic
> spinning is disabled.
>
> This patch adds an additional boolean to the mutex that gets used in
> the CONFIG_SMP && !CONFIG_MUTEX_SPIN_ON_OWNER cases.
> The flag signifies whether or not other threads need to yield to a waiter
> and gets set when a waiter spends too much time waiting for the mutex.
> The threshold is currently set to 16 wakeups, and once the wakeup
> threshold is exceeded, other threads must yield to the top waiter. The
> flag gets cleared immediately after the top waiter acquires the mutex.
>
> This prevents waiters from getting starved without sacrificing much
> performance, as lock stealing is still allowed and only temporarily
> disabled when it is detected that a waiter has been waiting for too long.
>
> Reported-by: Imre Deak
> Signed-off-by: Jason Low
> ---
>  include/linux/mutex.h  |   2 +
>  kernel/locking/mutex.c | 122 +++++++++++++++++++++++++++++++++++++++----------
>  2 files changed, 99 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/mutex.h b/include/linux/mutex.h
> index f8e91ad..988c020 100644
> --- a/include/linux/mutex.h
> +++ b/include/linux/mutex.h
> @@ -58,6 +58,8 @@ struct mutex {
>  #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
>          struct optimistic_spin_queue osq; /* Spinner MCS lock */
>          int waiter_spinning;
> +#elif defined(CONFIG_SMP)
> +        int yield_to_waiter;
>  #endif
>  #ifdef CONFIG_DEBUG_MUTEXES
>          void *magic;
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index 64a0bfa..e078c49 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -56,6 +56,8 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
>  #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
>          osq_lock_init(&lock->osq);
>          lock->waiter_spinning = false;
> +#elif defined(CONFIG_SMP)
> +        lock->yield_to_waiter = false;
>  #endif
>
>          debug_mutex_init(lock, name, key);
> @@ -72,6 +74,9 @@ EXPORT_SYMBOL(__mutex_init);
>   */
>  __visible void __sched __mutex_lock_slowpath(atomic_t *lock_count);
>
> +
> +static inline bool need_yield_to_waiter(struct mutex *lock);
> +
>  /**
>   * mutex_lock - acquire the mutex
>   * @lock: the mutex to be acquired
> @@ -100,7 +105,10 @@ void __sched mutex_lock(struct mutex *lock)
>           * The locking fastpath is the 1->0 transition from
>           * 'unlocked' into 'locked' state.
>           */
> -        __mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
> +        if (!need_yield_to_waiter(lock))
> +                __mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
> +        else
> +                __mutex_lock_slowpath(&lock->count);
>          mutex_set_owner(lock);
>  }
>
> @@ -449,6 +457,49 @@ static bool mutex_optimistic_spin(struct mutex *lock,
>  }
>  #endif
>
> +#if !defined(CONFIG_MUTEX_SPIN_ON_OWNER) && defined(CONFIG_SMP)
> +
> +#define MUTEX_WAKEUP_THRESHOLD  16
> +
> +static inline void update_yield_to_waiter(struct mutex *lock, int *wakeups)
> +{
> +        if (++(*wakeups) > MUTEX_WAKEUP_THRESHOLD && !lock->yield_to_waiter)
> +                lock->yield_to_waiter = true;
> +}
> +
> +static inline void clear_yield_to_waiter(struct mutex *lock,
> +                                         struct mutex_waiter *waiter)
> +{
> +        /* Only clear yield_to_waiter if we are the top waiter. */
> +        if (lock->wait_list.next == &waiter->list && lock->yield_to_waiter)
> +                lock->yield_to_waiter = false;
> +}
> +
> +static inline bool need_yield_to_waiter(struct mutex *lock)
> +{
> +        return unlikely(lock->yield_to_waiter);
> +}
> +
> +#else /* !yield_to_waiter */
> +
> +static inline void update_yield_to_waiter(struct mutex *lock, int *wakeups)
> +{
> +        return;
> +}
> +
> +static inline void clear_yield_to_waiter(struct mutex *lock,
> +                                         struct mutex_waiter *waiter)
> +{
> +        return;
> +}
> +
> +static inline bool need_yield_to_waiter(struct mutex *lock)
> +{
> +        return false;
> +}
> +
> +#endif /* yield_to_waiter */
> +
>  __visible __used noinline
>  void __sched __mutex_unlock_slowpath(atomic_t *lock_count);
>
> @@ -541,6 +592,12 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
>          return 0;
>  }
>
> +static inline bool __mutex_trylock_pending(struct mutex *lock)
> +{
> +        return atomic_read(&lock->count) >= 0 &&
> +               atomic_xchg_acquire(&lock->count, -1) == 1;
> +}
> +

Maybe you can make a more general __mutex_trylock() function that is used
in all three trylock attempts in the slowpath. For example:

static inline bool __mutex_trylock(struct mutex *lock, bool waiter)
{
        if (waiter) {
                return atomic_read(&lock->count) >= 0 &&
                       atomic_xchg_acquire(&lock->count, -1) == 1;
        } else {
                return !need_yield_to_waiter(lock) && !mutex_is_locked(lock) &&
                       (atomic_xchg_acquire(&lock->count, 0) == 1);
        }
}

>  /*
>   * Lock a mutex (possibly interruptible), slowpath:
>   */
> @@ -553,7 +610,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>          struct mutex_waiter waiter;
>          unsigned long flags;
>          bool acquired = false;  /* True if the lock is acquired */
> -        int ret;
> +        int ret, wakeups = 0;
>
>          if (use_ww_ctx) {
>                  struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
> @@ -576,7 +633,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>           * Once more, try to acquire the lock. Only try-lock the mutex if
>           * it is unlocked to reduce unnecessary xchg() operations.
>           */
> -        if (!mutex_is_locked(lock) &&
> +        if (!need_yield_to_waiter(lock) && !mutex_is_locked(lock) &&
>              (atomic_xchg_acquire(&lock->count, 0) == 1))
>                  goto skip_wait;
>
> @@ -587,24 +644,18 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>          list_add_tail(&waiter.list, &lock->wait_list);
>          waiter.task = task;
>
> +        /*
> +         * If this is the first waiter, mark the lock as having pending
> +         * waiters, if we happen to acquire it while doing so, yay!
> +         */
> +        if (list_is_singular(&lock->wait_list) &&
> +            __mutex_trylock_pending(lock))
> +                goto remove_waiter;
> +
>          lock_contended(&lock->dep_map, ip);
>
>          while (!acquired) {
>                  /*
> -                 * Lets try to take the lock again - this is needed even if
> -                 * we get here for the first time (shortly after failing to
> -                 * acquire the lock), to make sure that we get a wakeup once
> -                 * it's unlocked. Later on, if we sleep, this is the
> -                 * operation that gives us the lock. We xchg it to -1, so
> -                 * that when we release the lock, we properly wake up the
> -                 * other waiters. We only attempt the xchg if the count is
> -                 * non-negative in order to avoid unnecessary xchg operations:
> -                 */
> -                if (atomic_read(&lock->count) >= 0 &&
> -                    (atomic_xchg_acquire(&lock->count, -1) == 1))
> -                        break;
> -
> -                /*
>                   * got a signal? (This code gets eliminated in the
>                   * TASK_UNINTERRUPTIBLE case.)
>                   */
> @@ -631,9 +682,21 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>                  acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
>                                                   true);
>                  spin_lock_mutex(&lock->wait_lock, flags);
> +
> +                update_yield_to_waiter(lock, &wakeups);
> +
> +                /*
> +                 * Try-acquire now that we got woken at the head of the queue
> +                 * or we received a signal.
> +                 */
> +                if (__mutex_trylock_pending(lock))
> +                        break;

That is not quite right. The lock may already have been acquired in the
optimistic spinning loop. You either have to move the trylock back to the
top of the loop or add a "!acquired" check before it.

Cheers,
Longman
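P.S. Concretely, with the "!acquired" option, the tail of the wait loop
would become something like this (an untested sketch against this patch,
not a tested change):

```diff
 		spin_lock_mutex(&lock->wait_lock, flags);
 
 		update_yield_to_waiter(lock, &wakeups);
 
 		/*
 		 * Try-acquire now that we got woken at the head of the queue
 		 * or we received a signal.
 		 */
-		if (__mutex_trylock_pending(lock))
+		if (!acquired && __mutex_trylock_pending(lock))
 			break;
```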