From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <20120712231956.617994559@goodmis.org>
User-Agent: quilt/0.60-1
Date: Thu, 12 Jul 2012 19:18:32 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, John Kacur
Subject: [PATCH RT 5/8] slab: Prevent local lock deadlock
References: <20120712231827.084920483@goodmis.org>
Content-Disposition: inline; filename=0005-slab-Prevent-local-lock-deadlock.patch
List-ID: <linux-kernel.vger.kernel.org>

From: Thomas Gleixner

On RT we avoid the cross-CPU function calls and take the per-CPU local
locks instead. The code missed that taking the local lock on the CPU
which runs the code must use the proper local-lock functions rather than
a plain spin_lock(). Otherwise it deadlocks later when trying to acquire
the local lock with the proper function.
Reported-and-tested-by: Chris Pringle
Signed-off-by: Thomas Gleixner
Signed-off-by: Steven Rostedt
---
 mm/slab.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 5f0c5ef..3bef5e5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -739,8 +739,26 @@ slab_on_each_cpu(void (*func)(void *arg, int this_cpu), void *arg)
 {
 	unsigned int i;
 
+	get_cpu_light();
 	for_each_online_cpu(i)
 		func(arg, i);
+	put_cpu_light();
+}
+
+static void lock_slab_on(unsigned int cpu)
+{
+	if (cpu == smp_processor_id())
+		local_lock_irq(slab_lock);
+	else
+		local_spin_lock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
+}
+
+static void unlock_slab_on(unsigned int cpu)
+{
+	if (cpu == smp_processor_id())
+		local_unlock_irq(slab_lock);
+	else
+		local_spin_unlock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
 }
 #endif
 
@@ -2627,10 +2645,10 @@ static void do_drain(void *arg, int cpu)
 {
 	LIST_HEAD(tmp);
 
-	spin_lock_irq(&per_cpu(slab_lock, cpu).lock);
+	lock_slab_on(cpu);
 	__do_drain(arg, cpu);
 	list_splice_init(&per_cpu(slab_free_list, cpu), &tmp);
-	spin_unlock_irq(&per_cpu(slab_lock, cpu).lock);
+	unlock_slab_on(cpu);
 	free_delayed(&tmp);
 }
 #endif
@@ -4095,9 +4113,9 @@ static void do_ccupdate_local(void *info)
 #else
 static void do_ccupdate_local(void *info, int cpu)
 {
-	spin_lock_irq(&per_cpu(slab_lock, cpu).lock);
+	lock_slab_on(cpu);
 	__do_ccupdate_local(info, cpu);
-	spin_unlock_irq(&per_cpu(slab_lock, cpu).lock);
+	unlock_slab_on(cpu);
 }
 #endif
-- 
1.7.10.4