From mboxrd@z Thu Jan  1 00:00:00 1970
Message-Id: <20121010133842.940659869@goodmis.org>
User-Agent: quilt/0.60-1
Date: Wed, 10 Oct 2012 09:38:04 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, John Kacur
Subject: [PATCH RT 3/6] mm: page_alloc: Use local_lock_on() instead of plain spinlock
References: <20121010133801.331863565@goodmis.org>
Content-Disposition: inline; filename=0003-mm-page_alloc-Use-local_lock_on-instead-of-plain-spi.patch
List-ID: <linux-kernel.vger.kernel.org>

From: Thomas Gleixner

The plain spinlock, while sufficient, does not update the local_lock
internals. Use a proper local_lock function instead to ease debugging.
Signed-off-by: Thomas Gleixner
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt
---
 include/linux/locallock.h |   11 +++++++++++
 mm/page_alloc.c           |    4 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 0161fbb..f1804a3 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -137,6 +137,12 @@ static inline int __local_lock_irqsave(struct local_irq_lock *lv)
 		_flags = __get_cpu_var(lvar).flags;			\
 	} while (0)
 
+#define local_lock_irqsave_on(lvar, _flags, cpu)			\
+	do {								\
+		__local_lock_irqsave(&per_cpu(lvar, cpu));		\
+		_flags = per_cpu(lvar, cpu).flags;			\
+	} while (0)
+
 static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 					    unsigned long flags)
 {
@@ -156,6 +162,11 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 		put_local_var(lvar);					\
 	} while (0)
 
+#define local_unlock_irqrestore_on(lvar, flags, cpu)			\
+	do {								\
+		__local_unlock_irqrestore(&per_cpu(lvar, cpu), flags);	\
+	} while (0)
+
 #define local_spin_trylock_irq(lvar, lock)				\
 	({								\
 		int __locked;						\
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 47d939e..c645e07 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -227,9 +227,9 @@ static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
 
 #ifdef CONFIG_PREEMPT_RT_BASE
 # define cpu_lock_irqsave(cpu, flags)		\
-	spin_lock_irqsave(&per_cpu(pa_lock, cpu).lock, flags)
+	local_lock_irqsave_on(pa_lock, flags, cpu)
 # define cpu_unlock_irqrestore(cpu, flags)	\
-	spin_unlock_irqrestore(&per_cpu(pa_lock, cpu).lock, flags)
+	local_unlock_irqrestore_on(pa_lock, flags, cpu)
 #else
 # define cpu_lock_irqsave(cpu, flags)	local_irq_save(flags)
 # define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
-- 
1.7.10.4