From: Joonsoo Kim
Subject: [PATCH v3 7/9] slab: destroy a slab without holding any alien cache lock
Date: Tue, 1 Jul 2014 17:27:36 +0900
Message-Id: <1404203258-8923-8-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1404203258-8923-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1404203258-8923-1-git-send-email-iamjoonsoo.kim@lge.com>
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Vladimir Davydov, Joonsoo Kim

I haven't heard that this alien cache lock is contended, but reducing
the chance of contention is generally better.  To that end, defer
slabs_destroy() until after the alien cache lock is dropped:
__drain_alien_cache() now collects the slabs to be freed on a
caller-provided list, and each caller destroys them once it has
released the lock.

With this change, we can also simplify the complex lockdep annotation
in slab code; that is done in the following patch.

Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
 mm/slab.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 1c319ad..854dfa0 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1050,10 +1050,10 @@ static void free_alien_cache(struct alien_cache **alc_ptr)
 }
 
 static void __drain_alien_cache(struct kmem_cache *cachep,
-				struct array_cache *ac, int node)
+				struct array_cache *ac, int node,
+				struct list_head *list)
 {
 	struct kmem_cache_node *n = get_node(cachep, node);
-	LIST_HEAD(list);
 
 	if (ac->avail) {
 		spin_lock(&n->list_lock);
@@ -1065,10 +1065,9 @@ static void __drain_alien_cache(struct kmem_cache *cachep,
 		if (n->shared)
 			transfer_objects(n->shared, ac, ac->limit);
 
-		free_block(cachep, ac->entry, ac->avail, node, &list);
+		free_block(cachep, ac->entry, ac->avail, node, list);
 		ac->avail = 0;
 		spin_unlock(&n->list_lock);
-		slabs_destroy(cachep, &list);
 	}
 }
 
@@ -1086,8 +1085,11 @@ static void reap_alien(struct kmem_cache *cachep, struct kmem_cache_node *n)
 		if (alc) {
 			ac = &alc->ac;
 			if (ac->avail && spin_trylock_irq(&alc->lock)) {
-				__drain_alien_cache(cachep, ac, node);
+				LIST_HEAD(list);
+
+				__drain_alien_cache(cachep, ac, node, &list);
 				spin_unlock_irq(&alc->lock);
+				slabs_destroy(cachep, &list);
 			}
 		}
 	}
@@ -1104,10 +1106,13 @@ static void drain_alien_cache(struct kmem_cache *cachep,
 	for_each_online_node(i) {
 		alc = alien[i];
 		if (alc) {
+			LIST_HEAD(list);
+
 			ac = &alc->ac;
 			spin_lock_irqsave(&alc->lock, flags);
-			__drain_alien_cache(cachep, ac, i);
+			__drain_alien_cache(cachep, ac, i, &list);
 			spin_unlock_irqrestore(&alc->lock, flags);
+			slabs_destroy(cachep, &list);
 		}
 	}
 }
@@ -1138,10 +1143,11 @@ static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
 		spin_lock(&alien->lock);
 		if (unlikely(ac->avail == ac->limit)) {
 			STATS_INC_ACOVERFLOW(cachep);
-			__drain_alien_cache(cachep, ac, nodeid);
+			__drain_alien_cache(cachep, ac, nodeid, &list);
 		}
 		ac_put_obj(cachep, ac, objp);
 		spin_unlock(&alien->lock);
+		slabs_destroy(cachep, &list);
 	} else {
 		n = get_node(cachep, nodeid);
 		spin_lock(&n->list_lock);
-- 
1.7.9.5
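
The pattern the patch applies is more general than slab: while holding a
lock, only detach the objects to be destroyed onto a local list, and run
the slow destruction after the lock is dropped.  Below is a minimal
userspace C sketch of that idea; the names (struct cache, drain_cache(),
destroy_list()) are hypothetical, and a pthread mutex stands in for the
kernel's spinlocks and IRQ handling.

/*
 * Sketch of the lock-narrowing pattern used by this patch: the drain
 * step, run under the lock, only moves objects onto a caller-provided
 * local list; the destruction runs after the lock is released, so
 * other threads are not blocked while we free.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	struct obj *next;
	int id;
};

struct cache {
	pthread_mutex_t lock;
	struct obj *head;	/* objects currently held by the cache */
};

/* Under the lock: detach everything onto *list, but free nothing. */
static void drain_cache(struct cache *c, struct obj **list)
{
	pthread_mutex_lock(&c->lock);
	*list = c->head;	/* two pointer writes, nothing slow */
	c->head = NULL;
	pthread_mutex_unlock(&c->lock);
}

/* Outside the lock: the slow part, analogous to slabs_destroy(). */
static void destroy_list(struct obj *list)
{
	while (list) {
		struct obj *next = list->next;
		printf("destroying object %d\n", list->id);
		free(list);
		list = next;
	}
}

int main(void)
{
	struct cache c = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct obj *list = NULL;

	for (int i = 0; i < 3; i++) {
		struct obj *o = malloc(sizeof(*o));
		o->id = i;
		o->next = c.head;
		c.head = o;
	}

	drain_cache(&c, &list);	/* lock held only briefly */
	destroy_list(list);	/* free() runs with no lock held */
	return 0;
}

Build with "cc -pthread sketch.c".  The sketch mirrors the diff above:
before the patch, the equivalent of destroy_list() ran inside
__drain_alien_cache() with the alien cache lock still held by every
caller; after it, each caller invokes slabs_destroy() only once the
lock has been released.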