From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>,
Nick Piggin <nickpiggin@yahoo.com.au>,
"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
mingo@elte.hu, Mel Gorman <mel@skynet.ie>,
Linus Torvalds <torvalds@linux-foundation.org>,
Matthew.R.wilcox@intel.com
Subject: Re: tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario?
Date: Fri, 14 Sep 2007 12:15:11 -0700 [thread overview]
Message-ID: <20070914191511.GC6078@linux-os.sc.intel.com> (raw)
In-Reply-To: <Pine.LNX.4.64.0709131055430.8859@schroedinger.engr.sgi.com>
Christoph,
On Thu, Sep 13, 2007 at 11:03:53AM -0700, Christoph Lameter wrote:
> On Wed, 12 Sep 2007, Siddha, Suresh B wrote:
>
> > Christoph, not sure if you are referring to me or not here. But our
> > tests (at least with the database workloads) approx 1.5 months or so back
> > showed that on ia64 slub was on par with slab, and on x86_64 slub was 9% down.
> > And after changing the slub min order and max order, slub perf on x86_64 is
> > down approx 3.5% or so compared to slab.
>
> No, I was referring to another talk that I had at the OLS with Corey
> Gough. I keep getting confusing information from Intel. Last I heard was
Please don't go by informal talks and discussions. Please demand the numbers,
and base decisions and conclusions on those numbers. AFAIK, we haven't
posted confusing numbers so far.
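For reference, the min/max order tuning mentioned above is normally done
through SLUB's boot parameters (documented in the kernel's slub.txt). The
exact values used in our runs are not stated in this thread, so the numbers
below are only an example:

```
# Kernel command line additions (example values, not the ones from our runs):
#   slub_min_order=2  -> each slab is at least an order-2 (4-page) allocation
#   slub_max_order=3  -> cap slab size at order-3 (8 pages)
slub_min_order=2 slub_max_order=3
```

Raising the minimum order packs more objects per slab and reduces trips to
the page allocator, at the cost of larger contiguous allocations.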
> that IA64 had a regression and x86_64 was fine (but they were not allowed
> to tell me details). Would you please straighten out your story and give
> me details?
The numbers I posted in the previous e-mail are the only story we have so far.
> AFAIK the two of us discussed some issues related to object handover
> between processors that cause cache line bouncing and I sent you a
> patchset for testing but I did not get any feedback. The patches that were
Sorry, these systems are huge and in limited supply. We are raising the priority
with the performance team to get the latest slub patches tested.
> discussed are now in mm.
>
> > While I don't rule out large sized allocations like PAGE_SIZE, I am mostly
> > certain that the critical allocations in this workload are not PAGE_SIZE
> > based. Mostly they are in the range less than 300-500 bytes or so.
> >
> > Any changes in the recent slub which takes the pressure away from the page
> > allocator especially for smaller page sized architectures? If so, we can
> > redo some of the experiments. Looking at this thread, it doesn't sound like?
>
> Its too late for 2.6.23. But we can certainly do things for .24. Could you
> please test the patches queued up in Andrew's tree? In particular the page
> allocator pass through and the per cpu structures optimizations?
We are trying to get the latest data with 2.6.23-rc4-mm1 with and without
slub. Is this good enough?
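As an aside, the interaction between the 300-500 byte allocations mentioned
above and the page allocator pass-through can be sketched as follows. This is
an illustrative model, not kernel code; the smallest cache size and the
pass-through threshold are assumptions for illustration only:

```python
# Illustrative sketch of SLUB-style kmalloc size classes (not kernel code).
# Assumptions: power-of-two caches starting at 8 bytes; requests above a
# threshold bypass the slab caches and go straight to the page allocator.
PAGE_SIZE = 4096

def kmalloc_size_class(size, passthrough_threshold=PAGE_SIZE // 2):
    """Round a request up to the next power-of-two cache size, or return
    None to signal page-allocator pass-through for large requests."""
    if size > passthrough_threshold:
        return None  # handed straight to the page allocator
    c = 8  # smallest kmalloc cache (an assumption)
    while c < size:
        c *= 2
    return c

# The 300-500 byte allocations discussed above all land in the same
# kmalloc-512 cache, well below the pass-through threshold:
assert kmalloc_size_class(300) == 512
assert kmalloc_size_class(500) == 512
```

This is why the critical allocations in the workload are unlikely to be
affected by pass-through changes: they never reach the page allocator
directly in the first place.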
>
> There is more work out of tree to optimize the fastpath that is mostly
> driven by Mathieu Desnoyers. I hope to get that into mm in the next weeks
> but I do not think that it is going to be available before .25.
>
> The work of Mathieu also has implications for the page allocator. We may
> be able to significantly speed up the fastpath there as well.
OK. At least until all the regressions are addressed and all these patches are
well tested, we shouldn't do away with slab in mainline anytime soon.
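The per-cpu fastpath Christoph describes above can be sketched with a
simplified model. This is an assumption-laden toy (Python lists standing in
for lock-free freelists, a counter standing in for the refill slowpath), not
the actual SLUB implementation:

```python
# Toy model of a per-cpu allocator fastpath (not kernel code): each CPU
# keeps its own freelist, so the common alloc/free path touches no shared
# state and causes no lock contention or cache-line bouncing.
class PerCpuCache:
    def __init__(self, ncpus):
        self.freelists = [[] for _ in range(ncpus)]  # one freelist per CPU
        self.slowpath_hits = 0

    def free(self, cpu, obj):
        # In the real allocator this is a cmpxchg onto a lockless list.
        self.freelists[cpu].append(obj)

    def alloc(self, cpu):
        if self.freelists[cpu]:
            return self.freelists[cpu].pop()  # fastpath: purely CPU-local
        self.slowpath_hits += 1  # slowpath: refill from shared/partial slabs
        return object()

cache = PerCpuCache(ncpus=2)
a = cache.alloc(0)           # first alloc on CPU 0 takes the slowpath
cache.free(0, a)             # freed back onto CPU 0's own freelist
assert cache.alloc(0) is a   # next alloc is served entirely from the fastpath
assert cache.slowpath_hits == 1
```

The object-handover problem discussed earlier in the thread is the case this
model hides: when CPU 0 allocates an object and CPU 1 frees it, the freelist
manipulation drags cache lines between the two CPUs.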
Other than us, who else are you banking on to analyze slub? Do you have any
numbers you can share that show where slub is good or bad?
thanks,
suresh
Thread overview (21+ messages):
2007-09-05 0:46 tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario? Zhang, Yanmin
2007-09-05 3:59 ` Christoph Lameter
2007-09-05 5:22 ` Zhang, Yanmin
2007-09-05 6:58 ` Christoph Lameter
2007-09-05 9:13 ` Zhang, Yanmin
2007-09-05 10:45 ` Christoph Lameter
2007-09-06 0:52 ` Zhang, Yanmin
2007-09-05 7:07 ` Christoph Lameter
2007-09-08 8:08 ` Nick Piggin
2007-09-10 0:56 ` Zhang, Yanmin
2007-09-09 22:10 ` Nick Piggin
2007-09-10 19:07 ` Christoph Lameter
2007-09-10 15:17 ` Nick Piggin
2007-09-11 20:19 ` Christoph Lameter
2007-09-11 4:59 ` Nick Piggin
2007-09-13 6:04 ` Siddha, Suresh B
2007-09-13 18:03 ` Christoph Lameter
2007-09-14 19:15 ` Siddha, Suresh B [this message]
2007-09-14 19:51 ` Christoph Lameter
2007-09-19 2:17 ` Siddha, Suresh B
2007-09-20 17:53 ` Christoph Lameter