From: Christoph Lameter <clameter@sgi.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
	Matt Mackall <mpm@selenic.com>,
	linux-mm@kvack.org
Subject: Re: [patch 0/8] slub: Fallback to order 0 and variable order slab support
Date: Tue, 4 Mar 2008 10:53:31 -0800 (PST)
Message-ID: <Pine.LNX.4.64.0803041044520.13957@schroedinger.engr.sgi.com>
In-Reply-To: <20080304122008.GB19606@csn.ul.ie>

On Tue, 4 Mar 2008, Mel Gorman wrote:

> 				Loss	to	Gain
> Kernbench Elapsed time		 -0.64%		0.32%
> Kernbench Total time		 -0.61%		0.48%
> Hackbench sockets-12 clients	 -2.95%		5.13%
> Hackbench pipes-12 clients	-16.95%		9.27%
> TBench 4 clients		 -1.98%		8.2%
> DBench 4 clients (ext2)		 -5.9%		7.99%
> 
> So, running with the high orders is not a clear-cut win to my eyes. What
> did you test to show that it was a general win justifying a high order by
> default? From looking through, tbench seems to be the only obvious one to
> gain; for the rest, it is not clear at all. I'll try to give sysbench a
> spin later to see if it is clear-cut.

Hmmm... interesting. The tests that I did a while ago were with max order 
3. The patch as it stands now has max order 4. Maybe we need to reduce the 
order?

Looks like this was mostly a gain except for hackbench, which is to be 
expected since the benchmark hands out objects from the same slab round 
robin to different CPUs. The higher the number of objects in the slab, the 
higher the chance of contention on the slab lock.
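
For reference, the pipe case should be reproducible with something like 
the following (a sketch; the flag spelling varies between hackbench 
versions):

	# 12 groups of client/server pairs exchanging messages over
	# pipes instead of sockets - what I take "pipes-12 clients"
	# in the table above to mean.
	./hackbench -pipe 12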

> However, in *all* cases, superpage allocations were less successful, and
> in some cases severely regressed (one machine went from an 81% success
> rate to 36%). Sufficient statistics are not gathered to see why this
> happened in retrospect, but my suspicion would be that high-order
> RECLAIMABLE and UNMOVABLE slub allocations routinely fall back to the less
> fragmented MOVABLE pageblocks with these patches - something that is
> normally a very rare event. This change in assumptions hurts fragmentation
> avoidance, and chances are the long-term behaviour of these patches is not
> great.

Do "superpage allocations" mean huge page allocations? Enable SLUB 
statistics and you will be able to see the number of fallbacks in 
/sys/kernel/slab/xx/order_fallback to confirm your suspicions.
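
Something like this, assuming a kernel built with CONFIG_SLUB_STATS:

	# Per-cache count of allocations that fell back to a lower
	# order because a higher order page was not available.
	grep -H . /sys/kernel/slab/*/order_fallback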

How would the allocator be able to get MOVABLE allocations? Is fallback 
to MOVABLE permitted for order-0 allocations?
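
The pageblock mix can at least be watched from userspace while the 
workload runs, assuming the kernel exposes /proc/pagetypeinfo:

	# Per-zone counts of pageblocks by migrate type; MOVABLE blocks
	# shrinking while UNMOVABLE/RECLAIMABLE grow would support the
	# fallback theory.
	cat /proc/pagetypeinfo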

> If this guess is correct, using a high-order size by default is a bad plan
> and it should only be set when it is known that the target workload benefits
> and superpage allocations are not a concern. Alternatively, set high-order by
> default only for a limited number of caches that are RECLAIMABLE (or better
> yet ones we know can be directly reclaimed with the slub-defrag patches).
> 
> As it is, this is painful from a fragmentation perspective and the
> performance win is not clear-cut.

Could we reduce the max order to 3 and see what happens then?
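
With patch 7/8 the order should also be tunable per cache at runtime, so 
individual caches could be retested without a reboot; a sketch, assuming 
the order attribute is writable (the dentry cache is just an example):

	# Force one cache back to order 0 slabs, verify, then rerun
	# the benchmark:
	echo 0 > /sys/kernel/slab/dentry/order
	cat /sys/kernel/slab/dentry/order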

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


Thread overview: 26+ messages
     [not found] <20080229044803.482012397@sgi.com>
     [not found] ` <20080229044820.044485187@sgi.com>
2008-02-29  8:13   ` [patch 7/8] slub: Make the order configurable for each slab cache Pekka Enberg
2008-02-29 19:37     ` Christoph Lameter
2008-03-01  9:47       ` Pekka Enberg
2008-03-03 17:49         ` Christoph Lameter
2008-03-03 22:56           ` Pekka Enberg
2008-03-03 23:36             ` Christoph Lameter
     [not found] ` <20080229044820.298792748@sgi.com>
2008-02-29  8:13   ` [patch 8/8] slub: Simplify any_slab_object checks Pekka Enberg
     [not found] ` <20080229044819.800974712@sgi.com>
2008-02-29  8:19   ` [patch 6/8] slub: Adjust order boundaries and minimum objects per slab Pekka Enberg
2008-02-29 19:41     ` Christoph Lameter
2008-03-01  9:58       ` Pekka J Enberg
2008-03-03 17:52         ` Christoph Lameter
2008-03-03 21:34           ` Matt Mackall
2008-03-03 22:36             ` Christoph Lameter
     [not found] ` <20080229044818.999367120@sgi.com>
2008-02-29  8:59   ` [patch 3/8] slub: Update statistics handling for variable order slabs Pekka Enberg
2008-02-29 19:43     ` Christoph Lameter
2008-03-01 10:29   ` Pekka Enberg
2008-03-04 12:20 ` [patch 0/8] slub: Fallback to order 0 and variable order slab support Mel Gorman
2008-03-04 18:53   ` Christoph Lameter [this message]
2008-03-05 18:28     ` Mel Gorman
2008-03-05 18:52       ` Christoph Lameter
2008-03-06 22:04         ` Mel Gorman
2008-03-06 22:18           ` Christoph Lameter
2008-03-07 12:17             ` Mel Gorman
2008-03-07 19:50               ` Christoph Lameter
2008-03-04 19:01   ` Matt Mackall
2008-03-05  0:04     ` Christoph Lameter
