From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail137.messagelabs.com (mail137.messagelabs.com [216.82.249.19])
	by kanga.kvack.org (Postfix) with SMTP id 50E646B01EE
	for ; Wed, 31 Mar 2010 15:00:42 -0400 (EDT)
Date: Wed, 31 Mar 2010 13:59:44 -0500 (CDT)
From: Christoph Lameter
Subject: Re: [PATCH 00 of 41] Transparent Hugepage Support #16
In-Reply-To: <20100331164147.GN5825@random.random>
Message-ID:
References: <20100331141035.523c9285.kamezawa.hiroyu@jp.fujitsu.com>
	<20100331153339.GK5825@random.random>
	<20100331164147.GN5825@random.random>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
To: Andrea Arcangeli
Cc: KAMEZAWA Hiroyuki, linux-mm@kvack.org, Andrew Morton, Marcelo Tosatti,
	Adam Litke, Avi Kivity, Izik Eidus, Hugh Dickins, Nick Piggin,
	Rik van Riel, Mel Gorman, Dave Hansen, Benjamin Herrenschmidt,
	Ingo Molnar, Mike Travis, Chris Wright, bpicco@redhat.com,
	KOSAKI Motohiro, Balbir Singh, Arnd Bergmann, "Michael S. Tsirkin",
	Peter Zijlstra, Johannes Weiner, Daisuke Nishimura
List-ID:

On Wed, 31 Mar 2010, Andrea Arcangeli wrote:

> > Large pages would be more independent from the page table structure with
> > the approach that I outlined earlier since you would not have to do these
> > sync tricks.
>
> I was talking about memory compaction. collapse_huge_page will still
> be needed forever regardless of split_huge_page existing or not.

Right, but neither function would be as dependent on the page table
format as it is here.

> > There are applications that have benefited for years already from 1G page
> > sizes (available on IA64 f.e.). So why wait?
>
> Because the difficulty of finding free hugepages increases
> exponentially with the order of allocation. Plus increasing MAX_ORDER
> that much would slow everything down for no gain, because we would fail
> to obtain free 1G pages. The cost of compacting 1G pages is also 512
> times bigger than with regular pages. It's not feasible right now with
> current memory sizes; I just said it's probably better to move to
> PAGE_SIZE 2M instead of extending to 1G pages in a kernel whose
> PAGE_SIZE is 4k.

You would still want 4k pages for small files.

> Last but not least, it can be done, but considering I'm abruptly
> failing to merge 35 patches (and surely your comments aren't helping
> in that direction...), it'd be counter-productive to make the core

Well, by now you may have realized that I am not too enthusiastic about
the approach. But certainly 2M can be done before 1G support. I was not
suggesting that 1G support is a requirement. However, supporting 1G and
2M at the same time would force a cleaner design and maybe get rid of
the page table hackery here.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
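
For reference, the "512 times bigger" compaction cost quoted above is plain
page-size arithmetic. Below is a minimal illustrative sketch (not code from
the thread or the patch series), assuming the usual x86-64 sizes: 4 KiB base
pages, 2 MiB PMD-level hugepages and 1 GiB PUD-level hugepages.

	/*
	 * Illustrative only: the arithmetic behind the "512 times bigger"
	 * compaction cost. The page sizes are assumptions (typical x86-64),
	 * not values taken from the patch series.
	 */
	#include <stdio.h>

	int main(void)
	{
		const unsigned long base = 4UL << 10;   /* 4 KiB base page */
		const unsigned long pmd  = 2UL << 20;   /* 2 MiB hugepage  */
		const unsigned long pud  = 1UL << 30;   /* 1 GiB hugepage  */

		/* A 2M hugepage spans 512 base pages (an order-9 allocation). */
		printf("2M page = %lu base pages\n", pmd / base);

		/*
		 * A 1G hugepage spans 512 2M hugepages, i.e. 262144 base pages,
		 * so compacting one costs roughly 512x as much as a 2M page.
		 */
		printf("1G page = %lu 2M pages = %lu base pages\n",
		       pud / pmd, pud / base);

		return 0;
	}

Built with any C compiler, this prints 512, 512 and 262144, which is where
the factor in the quoted paragraph comes from.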