Date: Thu, 12 Dec 2013 15:04:32 -0600
From: Alex Thorlton
To: Andy Lutomirski
Cc: "linux-mm@kvack.org", Andrew Morton, "Kirill A. Shutemov",
	Benjamin Herrenschmidt, Rik van Riel, Wanpeng Li, Mel Gorman,
	Michel Lespinasse, Benjamin LaHaise, Oleg Nesterov,
	"Eric W. Biederman", Al Viro, David Rientjes, Zhang Yanfei,
	Peter Zijlstra, Johannes Weiner, Michal Hocko, Jiang Liu,
	Cody P Schafer, Glauber Costa, Kamezawa Hiroyuki,
	Naoya Horiguchi, "linux-kernel@vger.kernel.org"
Subject: Re: [RFC PATCH 2/3] Add tunable to control THP behavior
Message-ID: <20131212210432.GB6034@sgi.com>
References: <20131212180050.GC134240@sgi.com> <20131212204950.GA6034@sgi.com>
In-Reply-To: (unknown)
User-Agent: Mutt/1.5.21 (2010-09-15)

> Right. I like that behavior for my workload. (Although I currently
> allocate huge pages -- when I wrote that code, THP interacted so badly
> with pagecache that it was a non-starter. I think it's fixed now,
> though.)

In that case, it's probably best to stick with the current behavior and
leave the threshold at 1, unless we implement something like what I
discuss below.

> In that case, I guess I misunderstood your description. Are you saying
> that, once any node accesses this many pages in the potential THP,
> then the whole THP will be mapped?
Well, right now, this patch completely gives up on mapping a THP if two
different nodes take a page from our chunk before the threshold is hit,
so yes, you're mostly understanding it correctly.

One thing we could consider is adding an option to map the THP on the
node with the *most* references to the potential THP, instead of giving
up when multiple nodes reference it. That might be a good middle
ground, but I can see some performance issues coming into play if the
threshold is set too high, since we'd have to migrate all the pages in
the chunk over to the node that hit the threshold.

- Alex
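To make the policy we've been discussing concrete, here is a rough
userspace sketch (this is not the actual patch code; `thp_decide`,
`MAX_NODES`, and the enum names are all made up for illustration) of
the current per-chunk behavior: give up as soon as a second node takes
a page from the chunk, map the THP once a single node alone reaches the
threshold, and otherwise keep waiting:

```c
#include <assert.h>

#define MAX_NODES 4

/* Possible outcomes for a potential THP chunk (hypothetical names). */
enum thp_decision { THP_WAIT, THP_MAP_LOCAL, THP_GIVE_UP };

/*
 * refs[n] = number of pages in this chunk touched from node n.
 * Sketch of the current patch behavior as described above:
 *   - two distinct nodes touching the chunk => give up on the THP
 *   - one node alone reaching the threshold => map the THP there
 *   - otherwise => keep faulting in single pages and wait
 */
static enum thp_decision thp_decide(const int refs[MAX_NODES], int threshold)
{
	int nodes_touching = 0, max_refs = 0;

	for (int n = 0; n < MAX_NODES; n++) {
		if (refs[n] > 0)
			nodes_touching++;
		if (refs[n] > max_refs)
			max_refs = refs[n];
	}

	if (nodes_touching > 1)
		return THP_GIVE_UP;
	if (max_refs >= threshold)
		return THP_MAP_LOCAL;
	return THP_WAIT;
}
```

The "middle ground" variant would change the `nodes_touching > 1` case
to instead map (and migrate pages) to the node holding `max_refs` once
the threshold is reached, which is where the migration cost concern
comes in.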