From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Morton
Subject: Re: [PATCH 01/20] mm: mmu_gather rework
Date: Tue, 19 Apr 2011 13:06:06 -0700
Message-ID: <20110419130606.fb7139b2.akpm@linux-foundation.org>
References: <20110401121258.211963744@chello.nl> <20110401121725.360704327@chello.nl>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To: <20110401121725.360704327@chello.nl>
Sender: owner-linux-mm@kvack.org
To: Peter Zijlstra
Cc: Andrea Arcangeli, Avi Kivity, Thomas Gleixner, Rik van Riel, Ingo Molnar,
    Linus Torvalds, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, Benjamin Herrenschmidt, David Miller, Hugh Dickins,
    Mel Gorman, Nick Piggin, Paul McKenney, Yanmin Zhang, Martin Schwidefsky,
    Russell King, Paul Mundt, Jeff Dike, Tony Luck, Hugh Dickins
List-Id: linux-arch.vger.kernel.org

On Fri, 01 Apr 2011 14:12:59 +0200 Peter Zijlstra wrote:

> Remove the first obstacle towards a fully preemptible mmu_gather.
>
> The current scheme assumes mmu_gather is always done with preemption
> disabled and uses per-cpu storage for the page batches. Change this to
> try and allocate a page for batching and, in case of failure, use a
> small on-stack array to make some progress.
>
> Preemptible mmu_gather is desired in general and usable once
> i_mmap_lock becomes a mutex. Doing it before the mutex conversion
> saves us from having to rework the code by moving the mmu_gather
> bits inside the pte_lock.
>
> Also avoid flushing the TLB batches from under the pte lock;
> this is useful even without the i_mmap_lock conversion, as it
> significantly reduces pte lock hold times.

There doesn't seem much point in reviewing this closely, as a lot of it
gets tossed away later in the series.

> 	free_pages_and_swap_cache(tlb->pages, tlb->nr);

It seems inappropriate that this code uses free_page[s]_and_swap_cache().
It should go direct to put_page() and release_pages()?

Please review this code's implicit decision to pass "cold==0" into
release_pages().

> -static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> +static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)

I wonder if all the inlining which remains in this code is needed and
desirable.
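
[Archive note: for context, the batching scheme the changelog describes -- grab a
full page for the batch if the allocation succeeds, otherwise fall back to a small
on-stack array so some forward progress is still made -- boils down to something
like the userspace sketch below. Nothing in it is the kernel's actual code: the
struct layout, names, and sizes are made up for illustration, and the int return
from the queue helper is only there to show why a caller might want to know the
batch filled up; it does not claim to match __tlb_remove_page()'s semantics.]

	/*
	 * Illustrative userspace sketch of batching with an on-stack
	 * fallback.  All names and sizes here are invented.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define ONSTACK_BATCH	8	/* small fallback array: "some progress" */
	#define PAGE_BATCH	512	/* pretend a page holds this many pointers */

	struct gather_batch {
		void **slots;		/* heap "page" or the on-stack array */
		unsigned int max;	/* capacity of slots[] */
		unsigned int nr;	/* how many entries are queued */
		void *local[ONSTACK_BATCH];
	};

	static void gather_init(struct gather_batch *gb)
	{
		/* Try a big batch; on failure fall back to the small array. */
		gb->slots = malloc(PAGE_BATCH * sizeof(void *));
		if (gb->slots) {
			gb->max = PAGE_BATCH;
		} else {
			gb->slots = gb->local;
			gb->max = ONSTACK_BATCH;
		}
		gb->nr = 0;
	}

	static void gather_flush(struct gather_batch *gb)
	{
		/* Stand-in for freeing the queued pages and flushing the TLB. */
		for (unsigned int i = 0; i < gb->nr; i++)
			free(gb->slots[i]);
		gb->nr = 0;
	}

	/* Returns nonzero when the batch filled up and had to be flushed. */
	static int gather_queue(struct gather_batch *gb, void *page)
	{
		gb->slots[gb->nr++] = page;
		if (gb->nr == gb->max) {
			gather_flush(gb);
			return 1;
		}
		return 0;
	}

	static void gather_finish(struct gather_batch *gb)
	{
		gather_flush(gb);
		if (gb->slots != gb->local)
			free(gb->slots);
	}

	int main(void)
	{
		struct gather_batch gb;

		gather_init(&gb);
		for (int i = 0; i < 1000; i++)
			gather_queue(&gb, malloc(64));	/* pretend "pages" */
		gather_finish(&gb);
		return 0;
	}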