Subject: RE: [PATCH] slub Discard slab page only when node partials > minimum setting
From: "Alex,Shi"
Date: Thu, 08 Sep 2011 10:24:16 +0800
Message-ID: <1315448656.31737.252.camel@debian>
In-Reply-To: <1315445674.29510.74.camel@sli10-conroe>
References: <1315188460.31737.5.camel@debian>
	 <1315357399.31737.49.camel@debian>
	 <4E671E5C.7010405@cs.helsinki.fi>
	 <6E3BC7F7C9A4BF4286DD4C043110F30B5D00DA333C@shsmsx502.ccr.corp.intel.com>
	 <1315442639.31737.224.camel@debian>
	 <1315445674.29510.74.camel@sli10-conroe>
To: "Li, Shaohua"
Cc: Christoph Lameter, "penberg@kernel.org", "linux-kernel@vger.kernel.org",
	"Huang, Ying", "Chen, Tim C", "linux-mm@kvack.org"

On Thu, 2011-09-08 at 09:34 +0800, Li, Shaohua wrote:
> On Thu, 2011-09-08 at 08:43 +0800, Shi, Alex wrote:
> > On Wed, 2011-09-07 at 23:05 +0800, Christoph Lameter wrote:
> > > On Wed, 7 Sep 2011, Shi, Alex wrote:
> > >
> > > > Oh, it seems deactivate_slab() was already corrected in Linus' tree,
> > > > but unfreeze_partials() was just copied from the old version of
> > > > deactivate_slab().
> > >
> > > Ok then the patch is ok.
> > >
> > > Do you also have performance measurements? I am a bit hesitant to merge
> > > the per cpu partials patchset if there are regressions in the low
> > > concurrency tests, as seems to be indicated by Intel's latest tests.
> > >
> >
> > My LKP testing system mostly focuses on server platforms. I tested your
> > per cpu partial patchset with the hackbench and netperf loopback
> > benchmarks. hackbench improves a lot.
> >
> > Maybe some IO testing has low concurrency for SLUB, or perhaps a small
> > kbuild job or a low swap pressure test would. I may try them on your
> > patchset in the coming days.
> >
> > BTW, some testing results for your PCP SLUB:
> >
> > For hackbench process testing:
> > on WSM-EP, inc ~60%; NHM-EP, inc ~25%;
> > on NHM-EX, inc ~200%; core2-EP, inc ~250%;
> > on Tigerton-EX, inc 1900%. :)
> >
> > For hackbench thread testing:
> > on WSM-EP, no clear inc; NHM-EP, no clear inc;
> > on NHM-EX, inc 10%; core2-EP, inc ~20%;
> > on Tigerton-EX, inc 100%.
> >
> > For netperf loopback testing, no clear performance change.
>
> Did you add my patch that puts the page at the tail of the partial list
> in the test? Without it the per-cpu partial list can have a more
> significant impact on reducing lock contention, so the result isn't
> precise.
>
No, the penberg tree does include your patch at its slub/partial head.
Actually, PCP won't take that path, so there is no need for your patch
there. I drafted a patch to remove some unused code in __slab_free()
related to this, and will send it out later.

But you reminded me that the comparison kernel, 3.1-rc2, has a bug. So,
compared to the 3.0 kernel, on hackbench process testing the PCP patchset
only gains 5~9% performance on our 4-CPU-socket EX machine, while it has
about a 2~4% drop on 2-socket EP machines. :)
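
For context, the policy named in the patch subject -- discard a slab page
only when the node already holds more than the minimum number of partial
slabs -- comes down to a single comparison. Below is a minimal, hypothetical
C sketch of that decision; the struct and function names are simplified
stand-ins, not the actual kmem_cache/kmem_cache_node structures in
mm/slub.c.

	#include <stdbool.h>

	/*
	 * Hypothetical, simplified stand-ins for the kernel's per-node and
	 * per-cache state -- only the fields this decision needs.
	 */
	struct node_state {
		unsigned long nr_partial;	/* slabs on this node's partial list */
	};

	struct cache_config {
		unsigned long min_partial;	/* minimum partial slabs to keep around */
	};

	/*
	 * An empty slab page is given back to the page allocator only when
	 * the node already holds more than the configured minimum of partial
	 * slabs; otherwise it stays on the partial list so future allocations
	 * can reuse it without going back to the page allocator.
	 */
	static bool should_discard_empty_slab(const struct node_state *n,
					      const struct cache_config *s)
	{
		return n->nr_partial > s->min_partial;
	}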