From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: RE: [PATCH] slub: Discard slab page only when node partials > minimum setting
From: "Alex,Shi"
To: Christoph Lameter
Cc: "penberg@kernel.org", "linux-kernel@vger.kernel.org", "Huang, Ying", "Li, Shaohua", "Chen, Tim C", "linux-mm@kvack.org"
In-Reply-To:
References: <1315188460.31737.5.camel@debian> <1315357399.31737.49.camel@debian> <4E671E5C.7010405@cs.helsinki.fi> <6E3BC7F7C9A4BF4286DD4C043110F30B5D00DA333C@shsmsx502.ccr.corp.intel.com>
Date: Thu, 08 Sep 2011 08:43:59 +0800
Message-ID: <1315442639.31737.224.camel@debian>

On Wed, 2011-09-07 at 23:05 +0800, Christoph Lameter wrote:
> On Wed, 7 Sep 2011, Shi, Alex wrote:
>
> > Oh, seems the deactivate_slab() corrected at linus' tree already, but
> > the unfreeze_partials() just copied from the old version
> > deactivate_slab().
>
> Ok then the patch is ok.
>
> Do you also have performance measurements? I am a bit hesitant to merge
> the per cpu partials patchset if there are regressions in the low
> concurrency tests as seem to be indicated by intels latest tests.
>

My LKP testing system mostly focuses on server platforms. I tested your
per-cpu partials patchset with the hackbench and netperf loopback
benchmarks; hackbench improves a lot. Maybe some IO testing would give
low concurrency for SLUB, or perhaps a kbuild with a few jobs, or
testing under low swap pressure.
I may try them on your patchset in the coming days. BTW, some testing
results for your PCP (per-cpu partials) SLUB:

for hackbench process testing:
  on WSM-EP, inc ~60%; on NHM-EP, inc ~25%;
  on NHM-EX, inc ~200%; on Core2-EP, inc ~250%;
  on Tigerton-EX, inc ~1900% :)

for hackbench thread testing:
  on WSM-EP, no clear inc; on NHM-EP, no clear inc;
  on NHM-EX, inc ~10%; on Core2-EP, inc ~20%;
  on Tigerton-EX, inc ~100%.

for netperf loopback testing, no clear performance change.
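For reference, a minimal sketch of how "inc" figures like those above can
be derived from hackbench wall-clock times (hackbench prints a total run
time, and lower time is better, so improvement = before/after - 1). The
timings here are hypothetical placeholders, not the actual measurements
behind the numbers above:

```shell
# Hypothetical hackbench "Time:" results, in seconds (NOT real data):
before=28.6   # baseline kernel
after=17.9    # kernel with the per-cpu partials patchset

# Lower time is better, so the improvement is before/after - 1.
awk -v b="$before" -v a="$after" \
    'BEGIN { printf "inc ~%.0f%%\n", (b/a - 1) * 100 }'
# prints: inc ~60%
```

The same convention reads the other way for throughput benchmarks such
as netperf, where higher is better and the ratio is after/before - 1.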