From: Rusty Russell
To: Russell King
Cc: linux-kernel@vger.kernel.org, Mike Travis
Date: Sun, 7 Dec 2008 21:50:07 +1030
Subject: [PATCH 2/2] cpumask: Use smp_call_function_many(): arm
Message-Id: <200812072150.15846.rusty@rustcorp.com.au>

Change smp_call_function_mask() callers to smp_call_function_many().

I chose to make on_each_cpu_mask() take a cpumask pointer; for the
small cpumasks on arm this is probably a slight pessimization, but it
sets a good example for generic code which is being weaned off
on-stack cpumasks.

I can simplify this patch further if you wish.

Compile-tested only.
Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
---
 arch/arm/kernel/smp.c |   23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

--- linux-2.6.orig/arch/arm/kernel/smp.c
+++ linux-2.6/arch/arm/kernel/smp.c
@@ -526,20 +526,17 @@ int setup_profiling_timer(unsigned int m
 	return -EINVAL;
 }
 
-static int
-on_each_cpu_mask(void (*func)(void *), void *info, int wait, cpumask_t mask)
+static void
+on_each_cpu_mask(void (*func)(void *), void *info, int wait,
+		 const struct cpumask *mask)
 {
-	int ret = 0;
-
 	preempt_disable();
 
-	ret = smp_call_function_mask(mask, func, info, wait);
-	if (cpu_isset(smp_processor_id(), mask))
+	smp_call_function_many(mask, func, info, wait);
+	if (cpumask_test_cpu(smp_processor_id(), mask))
 		func(info);
 
 	preempt_enable();
-
-	return ret;
 }
 
 /**********************************************************************/
@@ -600,20 +597,17 @@ void flush_tlb_all(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	cpumask_t mask = mm->cpu_vm_mask;
-
-	on_each_cpu_mask(ipi_flush_tlb_mm, mm, 1, mask);
+	on_each_cpu_mask(ipi_flush_tlb_mm, mm, 1, &mm->cpu_vm_mask);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 {
-	cpumask_t mask = vma->vm_mm->cpu_vm_mask;
 	struct tlb_args ta;
 
 	ta.ta_vma = vma;
 	ta.ta_start = uaddr;
 
-	on_each_cpu_mask(ipi_flush_tlb_page, &ta, 1, mask);
+	on_each_cpu_mask(ipi_flush_tlb_page, &ta, 1, &vma->vm_mm->cpu_vm_mask);
 }
 
 void flush_tlb_kernel_page(unsigned long kaddr)
@@ -628,14 +622,13 @@ void flush_tlb_kernel_page(unsigned long
 void flush_tlb_range(struct vm_area_struct *vma,
                      unsigned long start, unsigned long end)
 {
-	cpumask_t mask = vma->vm_mm->cpu_vm_mask;
 	struct tlb_args ta;
 
 	ta.ta_vma = vma;
 	ta.ta_start = start;
 	ta.ta_end = end;
 
-	on_each_cpu_mask(ipi_flush_tlb_range, &ta, 1, mask);
+	on_each_cpu_mask(ipi_flush_tlb_range, &ta, 1, &vma->vm_mm->cpu_vm_mask);
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)