From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S935057AbcKNRr3 (ORCPT );
	Mon, 14 Nov 2016 12:47:29 -0500
Received: from merlin.infradead.org ([205.233.59.134]:34624 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753066AbcKNRrQ (ORCPT );
	Mon, 14 Nov 2016 12:47:16 -0500
Message-Id: <20161114174446.690415221@infradead.org>
User-Agent: quilt/0.63-1
Date: Mon, 14 Nov 2016 18:39:51 +0100
From: Peter Zijlstra
To: gregkh@linuxfoundation.org, keescook@chromium.org, will.deacon@arm.com,
	elena.reshetova@intel.com, arnd@arndb.de, tglx@linutronix.de,
	mingo@kernel.org, hpa@zytor.com, dave@progbits.org
Cc: linux-kernel@vger.kernel.org, "Peter Zijlstra (Intel)"
Subject: [RFC][PATCH 5/7] kref: Implement kref_put_lock()
References: <20161114173946.501528675@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-ref-4a.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Because home-rolling your own is _awesome_, stop doing it. Provide
kref_put_lock(), just like kref_put_mutex() but for a spinlock.
Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/kref.h |   21 +++++++++++++++------
 net/sunrpc/svcauth.c |   15 ++++++++++-----
 2 files changed, 25 insertions(+), 11 deletions(-)

--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -86,12 +86,21 @@ static inline int kref_put_mutex(struct
 				 struct mutex *lock)
 {
 	WARN_ON(release == NULL);
-	if (unlikely(!atomic_add_unless(&kref->refcount, -1, 1))) {
-		mutex_lock(lock);
-		if (unlikely(!atomic_dec_and_test(&kref->refcount))) {
-			mutex_unlock(lock);
-			return 0;
-		}
+
+	if (atomic_dec_and_mutex_lock(&kref->refcount, lock)) {
+		release(kref);
+		return 1;
+	}
+	return 0;
+}
+
+static inline int kref_put_lock(struct kref *kref,
+				void (*release)(struct kref *kref),
+				spinlock_t *lock)
+{
+	WARN_ON(release == NULL);
+
+	if (atomic_dec_and_lock(&kref->refcount, lock)) {
 		release(kref);
 		return 1;
 	}
--- a/net/sunrpc/svcauth.c
+++ b/net/sunrpc/svcauth.c
@@ -127,13 +127,18 @@ static struct hlist_head auth_domain_tab
 static spinlock_t	auth_domain_lock =
 			__SPIN_LOCK_UNLOCKED(auth_domain_lock);
 
+static void auth_domain_release(struct kref *kref)
+{
+	struct auth_domain *dom = container_of(kref, struct auth_domain, ref);
+
+	hlist_del(&dom->hash);
+	dom->flavour->domain_release(dom);
+	spin_unlock(&auth_domain_lock);
+}
+
 void auth_domain_put(struct auth_domain *dom)
 {
-	if (atomic_dec_and_lock(&dom->ref.refcount, &auth_domain_lock)) {
-		hlist_del(&dom->hash);
-		dom->flavour->domain_release(dom);
-		spin_unlock(&auth_domain_lock);
-	}
+	kref_put_lock(&dom->ref, auth_domain_release, &auth_domain_lock);
 }
 EXPORT_SYMBOL_GPL(auth_domain_put);