From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Feng Tang <feng.tang@intel.com>
Subject: [RFC Patch 2/3] mm/slub: double per-cpu partial number for large systems
Date: Tue, 5 Sep 2023 22:13:47 +0800 [thread overview]
Message-ID: <20230905141348.32946-3-feng.tang@intel.com> (raw)
In-Reply-To: <20230905141348.32946-1-feng.tang@intel.com>
There are reports [1][2] of severe lock contention on slub's per-node
'list_lock' in the 'hackbench' test on server systems, and similar
contention is also seen when running the 'mmap1' case of will-it-scale
on big systems. As the trend is for one processor (socket) to have more
and more CPUs (100+, 200+), the contention could become much more severe
and turn into a scalability issue.
One way to help reduce the contention is to double the per-cpu
partial number for large systems.
Following is some performance data, which shows a big improvement in
the will-it-scale/mmap1 case but no obvious change for the 'hackbench'
test.
The patch itself only doubles (2X) the per-cpu partial number; for
better analysis, the 4X case is also profiled.
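For illustration, here is a small standalone C model of the sizing
heuristic with this change applied. The object-size thresholds and the
6/24/52/120 budgets mirror set_cpu_partial() in mm/slub.c; the 4096-byte
page size, the function name and the example values are only assumptions
of this sketch, not kernel code:

    /*
     * Userspace model (illustrative only) of the per-cpu partial object
     * budget, including the doubling proposed by this patch.  The real
     * logic lives in set_cpu_partial() in mm/slub.c.
     */
    #include <stdio.h>

    #define MODEL_PAGE_SIZE 4096    /* assumes 4KB pages for the sketch */

    static unsigned int cpu_partial_objects(unsigned int obj_size,
                                            unsigned int nr_cpus)
    {
            unsigned int nr_objects;

            /* size buckets mirror set_cpu_partial() */
            if (obj_size >= MODEL_PAGE_SIZE)
                    nr_objects = 6;
            else if (obj_size >= 1024)
                    nr_objects = 24;
            else if (obj_size >= 256)
                    nr_objects = 52;
            else
                    nr_objects = 120;

            /* this patch: double the budget on systems with many CPUs */
            if (nr_cpus >= 32)
                    nr_objects *= 2;

            return nr_objects;
    }

    int main(void)
    {
            /* 64-byte objects on a 224-CPU server: 120 -> 240 objects */
            printf("%u\n", cpu_partial_objects(64, 224));
            /* same cache on a 16-CPU box: stays at 120 */
            printf("%u\n", cpu_partial_objects(64, 16));
            return 0;
    }

A larger per-cpu budget only delays the point where a CPU has to fall
back to the per-node partial list, which is where 'list_lock' is taken.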
will-it-scale/mmap1
-------------------
Run the will-it-scale benchmark's 'mmap1' test case on a 2-socket
Sapphire Rapids server (112 cores / 224 threads) with 256 GB DRAM, in 3
configurations with parallel test threads at 25%, 50% and 100% of the
number of CPUs. The data is (base is the vanilla v6.5 kernel):
                       base             base + 2X patch          base + 4X patch
wis-mmap1-25         223670      +12.7%       251999      +34.9%       301749      per_process_ops
wis-mmap1-50         186020      +28.0%       238067      +55.6%       289521      per_process_ops
wis-mmap1-100         89200      +40.7%       125478      +62.4%       144858      per_process_ops
Taking the perf-profile comparison of the 50% test case, the lock
contention is greatly reduced:
43.80 -11.5 32.27 -27.9 15.91 pp.self.native_queued_spin_lock_slowpath
hackbench
---------
Run the same hackbench test case mentioned in [1], using the same HW/SW as the will-it-scale test:
                       base             base + 2X patch          base + 4X patch
hackbench            759951       +0.2%       761506       +0.5%       763972      hackbench.throughput
[1]. https://lore.kernel.org/all/202307172140.3b34825a-oliver.sang@intel.com/
[2]. https://lore.kernel.org/lkml/ZORaUsd+So+tnyMV@chenyu5-mobl2/
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/slub.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..51ca6dbaad09 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4361,6 +4361,13 @@ static void set_cpu_partial(struct kmem_cache *s)
 	else
 		nr_objects = 120;
 
+	/*
+	 * Give larger systems more per-cpu partial slabs to reduce/postpone
+	 * contention on the per-node partial list.
+	 */
+	if (nr_cpu_ids >= 32)
+		nr_objects *= 2;
+
 	slub_set_cpu_partial(s, nr_objects);
 #endif
 }
--
2.27.0
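As a side note on what the "per-cpu partial number" means here: slub
stores the nr_objects budget but actually caps the number of per-cpu
partial slabs, assuming slabs are on average half full (see
slub_set_cpu_partial() in mm/slub.c). A minimal userspace sketch of that
conversion follows; the 64 objects-per-slab figure is only an example
value assumed for this sketch:

    /*
     * Minimal model of how the nr_objects budget becomes a per-cpu
     * partial slab cap, mirroring the "assume slabs are half full"
     * conversion done by slub_set_cpu_partial().
     */
    #include <stdio.h>

    static unsigned int cpu_partial_slabs(unsigned int nr_objects,
                                          unsigned int objs_per_slab)
    {
            /* DIV_ROUND_UP(nr_objects * 2, objs_per_slab) */
            return (nr_objects * 2 + objs_per_slab - 1) / objs_per_slab;
    }

    int main(void)
    {
            printf("%u\n", cpu_partial_slabs(120, 64));  /* 4 slabs (base) */
            printf("%u\n", cpu_partial_slabs(240, 64));  /* 8 slabs (2X)   */
            return 0;
    }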
Thread overview: 11+ messages
2023-09-05 14:13 [RFC Patch 0/3] mm/slub: reduce contention for per-node list_lock for large systems Feng Tang
2023-09-05 14:13 ` [RFC Patch 1/3] mm/slub: increase the maximum slab order to 4 for big systems Feng Tang
2023-09-12 4:52 ` Hyeonggon Yoo
2023-09-12 15:52 ` Feng Tang
2023-09-05 14:13 ` Feng Tang [this message]
2023-09-05 14:13 ` [RFC Patch 3/3] mm/slub: setup maxim per-node partial according to cpu numbers Feng Tang
2023-09-12 4:48 ` Hyeonggon Yoo
2023-09-14 7:05 ` Feng Tang
2023-09-15 2:40 ` Lameter, Christopher
2023-09-15 5:05 ` Feng Tang
2023-09-15 16:13 ` Lameter, Christopher