From: Peter Zijlstra
Subject: [RFC][PATCH 1/6] mm: Optimize fullmm TLB flushing
Date: Wed, 02 Mar 2011 18:59:29 +0100
Message-ID: <20110302180258.879537727@chello.nl>
References: <20110302175928.022902359@chello.nl>
Content-Disposition: inline; filename=mmu_gather_fullmm.patch
To: Andrea Arcangeli, Thomas Gleixner, Rik van Riel, Ingo Molnar, akpm@linux-foundation.org, Linus Torvalds
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, Benjamin Herrenschmidt, David Miller, Hugh Dickins, Mel Gorman, Nick Piggin, Peter Zijlstra, Russell King, Chris Metcalf, Martin Schwidefsky

This originated from s390, which does something similar and would allow
s390 to use the generic TLB flushing code.

The idea is to flush the mm-wide cache and TLB a priori and not bother
with multiple flushes if the batching isn't large enough.

This can safely be done since there cannot be any concurrency on this
mm; it is either after the process died (exit) or in the middle of
execve where the thread switched to the new mm.

Signed-off-by: Peter Zijlstra
---
 include/asm-generic/tlb.h |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

Index: linux-2.6/include/asm-generic/tlb.h
===================================================================
--- linux-2.6.orig/include/asm-generic/tlb.h
+++ linux-2.6/include/asm-generic/tlb.h
@@ -149,6 +149,11 @@ tlb_gather_mmu(struct mmu_gather *tlb, s
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
+
+	if (fullmm) {
+		flush_cache_mm(mm);
+		flush_tlb_mm(mm);
+	}
 }
 
 static inline void
@@ -156,13 +161,15 @@ tlb_flush_mmu(struct mmu_gather *tlb)
 {
	struct mmu_gather_batch *batch;
 
-	if (!tlb->need_flush)
-		return;
-	tlb->need_flush = 0;
-	tlb_flush(tlb);
+	if (!tlb->fullmm && tlb->need_flush) {
+		tlb->need_flush = 0;
+		tlb_flush(tlb);
+	}
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
+
 	if (tlb_fast_mode(tlb))
 		return;
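
For illustration only, here is a minimal userspace sketch of the control
flow this patch is after, with the kernel flush primitives stubbed out as
prints. The structure layout and helper names merely mirror the ones in the
patch above; none of this is the real kernel code, and the exit/munmap
scenarios in main() are assumptions made for the example.

#include <stdio.h>
#include <stdbool.h>

struct mm_struct { int dummy; };

struct mmu_gather {
	struct mm_struct *mm;
	bool fullmm;		/* tearing down the whole address space? */
	bool need_flush;	/* pages queued since the last flush? */
};

/* stand-ins for the architecture flush primitives */
static void flush_cache_mm(struct mm_struct *mm) { (void)mm; printf("flush_cache_mm\n"); }
static void flush_tlb_mm(struct mm_struct *mm)   { (void)mm; printf("flush_tlb_mm\n"); }
static void tlb_flush(struct mmu_gather *tlb)    { (void)tlb; printf("tlb_flush (per batch)\n"); }

static void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
{
	tlb->mm = mm;
	tlb->fullmm = fullmm;
	tlb->need_flush = false;

	/* fullmm: nothing else can run on this mm, so flush it once, up front */
	if (fullmm) {
		flush_cache_mm(mm);
		flush_tlb_mm(mm);
	}
}

static void tlb_flush_mmu(struct mmu_gather *tlb)
{
	/* per-batch flushes are only needed for partial (ranged) teardown */
	if (!tlb->fullmm && tlb->need_flush) {
		tlb->need_flush = false;
		tlb_flush(tlb);
	}
}

int main(void)
{
	struct mm_struct mm;
	struct mmu_gather tlb;

	/* exit/execve style teardown: one up-front flush, batches skip it */
	tlb_gather_mmu(&tlb, &mm, true);
	tlb.need_flush = true;		/* pretend a batch filled up */
	tlb_flush_mmu(&tlb);		/* no additional flush issued here */

	/* munmap style partial unmap: batches still flush as before */
	tlb_gather_mmu(&tlb, &mm, false);
	tlb.need_flush = true;
	tlb_flush_mmu(&tlb);		/* ranged tlb_flush() fires */
	return 0;
}

The point being that in the fullmm (exit/execve) case the mm is flushed
exactly once in tlb_gather_mmu(), and the flush in tlb_flush_mmu()
degenerates to a no-op instead of firing for every batch that fills up.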