linux-mm.kvack.org archive mirror
From: Feng Tang <feng.tang@intel.com>
To: "Lameter, Christopher" <cl@os.amperecomputing.com>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Roman Gushchin <roman.gushchin@linux.dev>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [RFC Patch 3/3] mm/slub: setup maximum per-node partial according to cpu numbers
Date: Fri, 15 Sep 2023 13:05:25 +0800
Message-ID: <ZQPmFcmaRSrbK45H@feng-clx>
In-Reply-To: <21a0ba8b-bf05-0799-7c78-2a35f8c8d52a@os.amperecomputing.com>

On Thu, Sep 14, 2023 at 07:40:22PM -0700, Lameter, Christopher wrote:
> On Thu, 14 Sep 2023, Feng Tang wrote:
> 
> > One reason I wanted to revisit MIN_PARTIAL is that it was changed from
> > 2 to 5 back in 2007 by Christoph, in commit 76be895001f2 ("SLUB:
> > Improve hackbench speed"), and systems have grown much larger since
> > then. Given that a single per-cpu partial list can already hold 5 or
> > more slabs, the limit for a node possibly serving 100+ CPUs could be
> > reconsidered.
> 
> Well, the trick that I keep using on large systems with lots of memory is
> to use huge-page-sized slab allocations. The applications on those systems
> are already using the same page size. Doing so usually removes a lot of
> overhead and speeds things up significantly.
> 
> Try booting with "slab_min_order=9"

Thanks for sharing the trick! I tried it and it works here. But it is
kind of extreme and suited to special use cases, while these patches aim
to be useful for generic workloads. Two sketches below for anyone
following along.
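
For other readers: order 9 means each slab page is 2^9 = 512 contiguous
4 KiB pages, i.e. 2 MiB on x86, the huge page size, so SLUB then works
in the same unit the applications already use. A rough sketch of how the
option is consumed (simplified from mm/slub.c of this era, not a
verbatim copy; also note the option has historically been spelled
"slub_min_order", so the exact name may depend on the kernel version):

    /* Sketch, simplified from mm/slub.c; not a verbatim copy. */
    static unsigned int slub_min_order;

    static int __init setup_slub_min_order(char *str)
    {
            /* Parse the integer following "slub_min_order=" on the
             * kernel command line. */
            get_option(&str, (int *)&slub_min_order);
            return 1;
    }
    __setup("slub_min_order=", setup_slub_min_order);

    /* With order 9 and 4 KiB base pages:
     *   slab size = PAGE_SIZE << 9 = 4096 * 512 = 2 MiB
     */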

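As for the per-node limit itself, the clamp being discussed looks
roughly like this (again simplified from mm/slub.c around this time; a
sketch, not a verbatim copy):

    #define MIN_PARTIAL 5   /* raised from 2 in 76be895001f2 */
    #define MAX_PARTIAL 10

    static void set_min_partial(struct kmem_cache *s, unsigned long min)
    {
            /* Bound s->min_partial, the number of partial slabs a node
             * keeps cached: an empty slab is only handed back to the
             * page allocator once the node already holds at least this
             * many. */
            if (min < MIN_PARTIAL)
                    min = MIN_PARTIAL;
            else if (min > MAX_PARTIAL)
                    min = MAX_PARTIAL;
            s->min_partial = min;
    }

    /* At cache creation: set_min_partial(s, ilog2(s->size) / 2),
     * i.e. the floor stays in 5..10 regardless of CPU count, which is
     * what this patch tries to scale with the number of CPUs. */
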
Thanks,
Feng



Thread overview: 11+ messages
2023-09-05 14:13 [RFC Patch 0/3] mm/slub: reduce contention for per-node list_lock for large systems Feng Tang
2023-09-05 14:13 ` [RFC Patch 1/3] mm/slub: increase the maximum slab order to 4 for big systems Feng Tang
2023-09-12  4:52   ` Hyeonggon Yoo
2023-09-12 15:52     ` Feng Tang
2023-09-05 14:13 ` [RFC Patch 2/3] mm/slub: double per-cpu partial number for large systems Feng Tang
2023-09-05 14:13 ` [RFC Patch 3/3] mm/slub: setup maximum per-node partial according to cpu numbers Feng Tang
2023-09-12  4:48   ` Hyeonggon Yoo
2023-09-14  7:05     ` Feng Tang
2023-09-15  2:40       ` Lameter, Christopher
2023-09-15  5:05         ` Feng Tang [this message]
2023-09-15 16:13           ` Lameter, Christopher
