public inbox for linux-kernel@vger.kernel.org
From: Thomas Gleixner <tglx@linutronix.de>
To: Ravikiran G Thirumalai <kiran@scalex86.org>
Cc: Andrew Morton <akpm@osdl.org>,
	Christoph Lameter <clameter@sgi.com>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	LKML <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>,
	Arjan van de Ven <arjan@infradead.org>,
	alokk@calsoftinc.com
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free
Date: Mon, 07 Aug 2006 09:27:36 +0200	[thread overview]
Message-ID: <1154935656.5932.262.camel@localhost.localdomain> (raw)
In-Reply-To: <20060802191029.GA4958@localhost.localdomain>

On Wed, 2006-08-02 at 12:10 -0700, Ravikiran G Thirumalai wrote:
> Here's an attempt to educate lockdep about alien cache lock. tglx, can you
> confirm if this fixes the false positive?  This is just an extension of the
> l3 lock lesson :).
> 
> Note: With this approach, lockdep forgets its education for alien caches
> if all cpus of a node go down and come back up.  But taking care of 
> that scenario will make things uglier....not sure if it is worth it.
> 
> Thanks,
> Kiran

Sorry, I did not get around to testing it earlier. With this patch applied
the lockdep message is gone.

	tglx

> Place the alien array cache locks of the on-slab malloc caches in a separate 
> lockdep class.  This avoids false positives from lockdep.
> 
> Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
> Signed-off-by: Shai Fultheim <shai@scalex86.org>
> 
> Index: linux-2.6.18-rc3-x460/mm/slab.c
> ===================================================================
> --- linux-2.6.18-rc3-x460.orig/mm/slab.c	2006-07-30 21:27:28.000000000 -0700
> +++ linux-2.6.18-rc3-x460/mm/slab.c	2006-08-01 18:01:51.000000000 -0700
> @@ -682,23 +682,43 @@
>   * The locking for this is tricky in that it nests within the locks
>   * of all other slabs in a few places; to deal with this special
>   * locking we put on-slab caches into a separate lock-class.
> + *
> + * We set the lock class for alien array caches which are up during init.
> + * The lock annotation will be lost if all cpus of a node go down and 
> + * then come back up during hotplug.
>   */
> -static struct lock_class_key on_slab_key;
> +static struct lock_class_key on_slab_l3_key;
> +static struct lock_class_key on_slab_alc_key;
> +
> +static inline void init_lock_keys(void)
>  
> -static inline void init_lock_keys(struct cache_sizes *s)
>  {
>  	int q;
> +	struct cache_sizes *s = malloc_sizes;
>  
> -	for (q = 0; q < MAX_NUMNODES; q++) {
> -		if (!s->cs_cachep->nodelists[q] || OFF_SLAB(s->cs_cachep))
> -			continue;
> -		lockdep_set_class(&s->cs_cachep->nodelists[q]->list_lock,
> -				  &on_slab_key);
> +	while (s->cs_size != ULONG_MAX) {
> +		for_each_node(q) {
> +			struct array_cache **alc;
> +			int r;
> +			struct kmem_list3 *l3 = s->cs_cachep->nodelists[q];
> +			if (!l3 || OFF_SLAB(s->cs_cachep))
> +				continue;
> +			lockdep_set_class(&l3->list_lock, &on_slab_l3_key);
> +			alc = l3->alien;
> +			if (!alc)
> +				continue;
> +			for_each_node(r) {
> +				if (alc[r])
> +					lockdep_set_class(&alc[r]->lock,
> +					     &on_slab_alc_key);
> +			}
> +		}
> +		s++;
>  	}
>  }
>  
>  #else
> -static inline void init_lock_keys(struct cache_sizes *s)
> +static inline void init_lock_keys()
>  {
>  }
>  #endif
> @@ -1422,7 +1442,6 @@
>  					ARCH_KMALLOC_FLAGS|SLAB_PANIC,
>  					NULL, NULL);
>  		}
> -		init_lock_keys(sizes);
>  
>  		sizes->cs_dmacachep = kmem_cache_create(names->name_dma,
>  					sizes->cs_size,
> @@ -1495,6 +1514,10 @@
>  		mutex_unlock(&cache_chain_mutex);
>  	}
>  
> +	/* Annotate slab for lockdep -- annotate the malloc caches */
> +	init_lock_keys();
> +	
> +
>  	/* Done! */
>  	g_cpucache_up = FULL;
>  



Thread overview: 22+ messages
2006-07-27 23:56 [BUG] Lockdep recursive locking in kmem_cache_free Thomas Gleixner
2006-07-28  5:22 ` Pekka Enberg
2006-07-28  6:14   ` Thomas Gleixner
2006-07-28 15:35     ` Christoph Lameter
2006-07-28 20:11       ` Thomas Gleixner
2006-07-28 20:18         ` Christoph Lameter
2006-07-28 20:27           ` Arjan van de Ven
2006-07-28 20:27           ` Thomas Gleixner
2006-07-28 20:35             ` Thomas Gleixner
2006-07-28 20:36               ` Christoph Lameter
2006-07-28 20:47                 ` Thomas Gleixner
2006-07-28 20:48                   ` Christoph Lameter
2006-07-28 21:12                     ` Ravikiran G Thirumalai
2006-07-28 21:20                       ` Thomas Gleixner
2006-08-02 19:10                         ` Ravikiran G Thirumalai
2006-08-07  7:27                           ` Thomas Gleixner [this message]
2006-07-28 21:26                       ` Christoph Lameter
2006-07-28 21:34                         ` Alok Kataria
2006-07-29  4:26                         ` Ravikiran G Thirumalai
2006-07-28 14:53   ` Christoph Lameter
2006-07-28 17:11     ` Ravikiran G Thirumalai
2006-07-28 17:14       ` Arjan van de Ven
