public inbox for linux-kernel@vger.kernel.org
From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>,
	"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	mingo@elte.hu, Mel Gorman <mel@skynet.ie>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario?
Date: Wed, 12 Sep 2007 23:04:33 -0700	[thread overview]
Message-ID: <20070913060432.GB6078@linux-os.sc.intel.com>
In-Reply-To: <Pine.LNX.4.64.0709111314190.25781@schroedinger.engr.sgi.com>

On Tue, Sep 11, 2007 at 01:19:30PM -0700, Christoph Lameter wrote:
> On Tue, 11 Sep 2007, Nick Piggin wrote:
> 
> > The impression I got at vm meeting was that SLUB was good to go :(
> 
> Its not? I have had Intel test this thoroughly and they assured me that it 
> is up to SLAB.

Christoph, not sure if you are referring to me here. But our tests
(at least with the database workloads) approximately 1.5 months back showed
that on ia64 SLUB was on par with SLAB, while on x86_64 SLUB was 9% down.
After changing the SLUB min order and max order, SLUB performance on x86_64
is still down approximately 3.5% compared to SLAB.
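For reference, the order tuning mentioned above is done through SLUB's boot
parameters. A sketch of a kernel command-line fragment (the values below are
illustrative only, not the ones used in the tests above):

```shell
# Kernel command line fragment: push SLUB toward larger page orders,
# so more objects fit per slab and the page allocator is hit less often.
# Illustrative values only.
slub_min_order=3 slub_max_order=4 slub_min_objects=16
```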

While I don't rule out large allocations like PAGE_SIZE, I am fairly
certain that the critical allocations in this workload are not PAGE_SIZE
based. Mostly they are in the 300-500 byte range or smaller.
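As a rough illustration of why that size range matters, here is a toy model
(plain Python, not kernel code; it ignores SLUB's real 96- and 192-byte
caches, which don't affect this range) of how power-of-two kmalloc size
classes serve such requests:

```python
# Toy model of power-of-two kmalloc size classes (not actual kernel code).
# A request is rounded up to the next power of two, so every allocation in
# the 300-500 byte range lands in the 512-byte cache: 8 objects per 4K page,
# which is why per-slab page order has a visible effect on such workloads.

def kmalloc_cache_size(size, min_cache=8, max_cache=4096):
    """Return the power-of-two cache size serving a request of `size` bytes."""
    cache = min_cache
    while cache < size:
        cache *= 2
    assert cache <= max_cache, "request too large for kmalloc caches in this model"
    return cache

PAGE_SIZE = 4096

for req in (300, 400, 500):
    cache = kmalloc_cache_size(req)
    print(f"{req}-byte request -> {cache}-byte cache, "
          f"{PAGE_SIZE // cache} objects per 4K page")
```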

Are there any changes in recent SLUB that take the pressure off the page
allocator, especially for architectures with smaller page sizes? If so, we
can redo some of the experiments. Judging by this thread, it doesn't sound
like it.

thanks,
suresh

Thread overview: 21+ messages
2007-09-05  0:46 tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario? Zhang, Yanmin
2007-09-05  3:59 ` Christoph Lameter
2007-09-05  5:22   ` Zhang, Yanmin
2007-09-05  6:58     ` Christoph Lameter
2007-09-05  9:13       ` Zhang, Yanmin
2007-09-05 10:45         ` Christoph Lameter
2007-09-06  0:52           ` Zhang, Yanmin
2007-09-05  7:07     ` Christoph Lameter
2007-09-08  8:08       ` Nick Piggin
2007-09-10  0:56         ` Zhang, Yanmin
2007-09-09 22:10           ` Nick Piggin
2007-09-10 19:07             ` Christoph Lameter
2007-09-10 15:17               ` Nick Piggin
2007-09-11 20:19                 ` Christoph Lameter
2007-09-11  4:59                   ` Nick Piggin
2007-09-13  6:04                   ` Siddha, Suresh B [this message]
2007-09-13 18:03                     ` Christoph Lameter
2007-09-14 19:15                       ` Siddha, Suresh B
2007-09-14 19:51                         ` Christoph Lameter
2007-09-19  2:17                           ` Siddha, Suresh B
2007-09-20 17:53                             ` Christoph Lameter
