From: Feng Tang <feng.tang@intel.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: "Sang, Oliver" <oliver.sang@intel.com>,
Jay Patel <jaypatel@linux.ibm.com>,
"oe-lkp@lists.linux.dev" <oe-lkp@lists.linux.dev>,
lkp <lkp@intel.com>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
"Huang, Ying" <ying.huang@intel.com>,
"Yin, Fengwei" <fengwei.yin@intel.com>,
"cl@linux.com" <cl@linux.com>,
"penberg@kernel.org" <penberg@kernel.org>,
"rientjes@google.com" <rientjes@google.com>,
"iamjoonsoo.kim@lge.com" <iamjoonsoo.kim@lge.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"aneesh.kumar@linux.ibm.com" <aneesh.kumar@linux.ibm.com>,
"tsahu@linux.ibm.com" <tsahu@linux.ibm.com>,
"piyushs@linux.ibm.com" <piyushs@linux.ibm.com>
Subject: Re: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
Date: Thu, 20 Jul 2023 21:49:06 +0800
Message-ID: <ZLk7UpWWLf5agKDW@feng-clx>
In-Reply-To: <CAB=+i9QmF2C7QsZBEW0HMT-PGcEf3MeCukVaq0_O1HkGy7n93w@mail.gmail.com>

Hi Hyeonggon,

On Thu, Jul 20, 2023 at 08:59:56PM +0800, Hyeonggon Yoo wrote:
> On Thu, Jul 20, 2023 at 12:01 PM Oliver Sang <oliver.sang@intel.com> wrote:
> >
> > hi, Hyeonggon Yoo,
> >
> > On Tue, Jul 18, 2023 at 03:43:16PM +0900, Hyeonggon Yoo wrote:
> > > On Mon, Jul 17, 2023 at 10:41 PM kernel test robot
> > > <oliver.sang@intel.com> wrote:
> > > >
> > > >
> > > >
> > > > Hello,
> > > >
> > > > kernel test robot noticed a -12.5% regression of hackbench.throughput on:
> > > >
> > > >
> > > > commit: a0fd217e6d6fbd23e91f8796787b621e7d576088 ("[PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage")
> > > > url: https://github.com/intel-lab-lkp/linux/commits/Jay-Patel/mm-slub-Optimize-slub-memory-usage/20230628-180050
> > > > base: git://git.kernel.org/cgit/linux/kernel/git/vbabka/slab.git for-next
> > > > patch link: https://lore.kernel.org/all/20230628095740.589893-1-jaypatel@linux.ibm.com/
> > > > patch subject: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
> > > >
> > > > testcase: hackbench
> > > > test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > > > parameters:
> > > >
> > > > nr_threads: 100%
> > > > iterations: 4
> > > > mode: process
> > > > ipc: socket
> > > > cpufreq_governor: performance
> > > >
> > > >
> > > >
> > > >
> > > > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > > > the same patch/commit), kindly add the following tags
> > > > | Reported-by: kernel test robot <oliver.sang@intel.com>
> > > > | Closes: https://lore.kernel.org/oe-lkp/202307172140.3b34825a-oliver.sang@intel.com
> > > >
> > > >
> > > > Details are as below:
> > > > -------------------------------------------------------------------------------------------------->
> > > >
> > > >
> > > > To reproduce:
> > > >
> > > > git clone https://github.com/intel/lkp-tests.git
> > > > cd lkp-tests
> > > > sudo bin/lkp install job.yaml # job file is attached in this email
> > > > bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
> > > > sudo bin/lkp run generated-yaml-file
> > > >
> > > > # if you come across any failure that blocks the test,
> > > > # please remove ~/.lkp and /lkp dir to run from a clean state.
> > > >
> > > > =========================================================================================
> > > > compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
> > > > gcc-12/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp2/hackbench
> > > >
> > > > commit:
> > > > 7bc162d5cc ("Merge branches 'slab/for-6.5/prandom', 'slab/for-6.5/slab_no_merge' and 'slab/for-6.5/slab-deprecate' into slab/for-next")
> > > > a0fd217e6d ("mm/slub: Optimize slub memory usage")
> > > >
> > > > 7bc162d5cc4de5c3 a0fd217e6d6fbd23e91f8796787
> > > > ---------------- ---------------------------
> > > > %stddev %change %stddev
> > > > \ | \
> > > > 222503 ± 86% +108.7% 464342 ± 58% numa-meminfo.node1.Active
> > > > 222459 ± 86% +108.7% 464294 ± 58% numa-meminfo.node1.Active(anon)
> > > > 55573 ± 85% +108.0% 115619 ± 58% numa-vmstat.node1.nr_active_anon
> > > > 55573 ± 85% +108.0% 115618 ± 58% numa-vmstat.node1.nr_zone_active_anon
> > >
> > > I'm quite baffled while reading this.
> > > How did changing the slab order calculation double the number of active anon pages?
> > > I doubt the two experiments were performed with the same settings.
> >
> > let me introduce our test process.
> >
> > we make sure the tests on the commit and its parent run in exactly the same
> > environment except for the kernel difference, and we also make sure the configs
> > used to build the commit and its parent are identical.
> >
> > we run tests for one commit at least 6 times to make sure the data is stable.
> >
> > For this case, for example, we rebuilt the kernels for the commit and its
> > parent; the config is attached FYI.
>
> Hello Oliver,
>
> Thank you for confirming the testing environment is totally fine,
> and I'm sorry; I didn't mean to imply that your tests were bad.
>
> It was more like "oh, the data totally doesn't make sense to me"
> and I blamed the tests rather than my poor understanding of the data ;)
>
> Anyway,
> as the data shows a repeatable regression,
> let's think more about the possible scenario:
>
> I can't stop thinking that the patch must've affected the system's
> reclamation behavior in some way.
> (I think more active anon pages with a similar total number of anon
> pages implies the kernel scanned more pages.)
>
> It might be because kswapd was more frequently woken up (possible if
> skbs were allocated with GFP_ATOMIC)
> But the data provided is not enough to support this argument.
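
(As a heavily hedged aside: if I recall include/linux/gfp_types.h correctly,
GFP_ATOMIC does carry the kswapd-reclaim flag, roughly

    #define GFP_ATOMIC	(__GFP_HIGH | __GFP_KSWAPD_RECLAIM)

so atomic skb allocations that hit a low watermark can indeed wake kswapd.
That would fit your scenario, but as you say, the data here is not enough
to confirm it.)
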
>
> > 2.43 ± 7% +4.5 6.90 ± 11% perf-profile.children.cycles-pp.get_partial_node
> > 3.23 ± 5% +4.5 7.77 ± 9% perf-profile.children.cycles-pp.___slab_alloc
> > 7.51 ± 2% +4.6 12.11 ± 5% perf-profile.children.cycles-pp.kmalloc_reserve
> > 6.94 ± 2% +4.7 11.62 ± 6% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
> > 6.46 ± 2% +4.8 11.22 ± 6% perf-profile.children.cycles-pp.__kmem_cache_alloc_node
> > 8.48 ± 4% +7.9 16.42 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > 6.12 ± 6% +8.6 14.74 ± 9% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>
> And these increased cycles in the SLUB slowpath imply that the actual
> number of objects available in the per-cpu partial list has decreased,
> possibly because of inaccuracy in the heuristic?
> (because of the assumption that slabs cached per cpu are half-filled,
> and that the slabs' order is s->oo)

From the patch:

 static unsigned int slub_max_order =
-	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
+	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : 2;

Could this be related? The patch reduces the slab page order for some slab
caches, so each per-cpu slab will have fewer objects, which makes the
contention on the per-node spinlock 'list_lock' more severe when slab
allocation is under pressure from many concurrent threads.

I don't have direct data to back this up yet, but I can try some experiments.
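
As a rough back-of-the-envelope sketch (not the exact mm/slub.c code; the
helper name below is only for illustration), the object count per slab
scales directly with the order, so capping slub_max_order at 2 instead of
PAGE_ALLOC_COSTLY_ORDER (3) roughly halves it for caches that previously
got order-3 slabs:

    /* roughly what SLUB's order_objects() computes */
    static unsigned int objs_per_slab(unsigned int order, unsigned int size)
    {
            return ((unsigned int)PAGE_SIZE << order) / size;
    }

    /*
     * e.g. for a 1KB object on 4KB pages:
     *   order 3 (PAGE_ALLOC_COSTLY_ORDER): 32KB slab -> 32 objects
     *   order 2 (the new cap)            : 16KB slab -> 16 objects
     */

If each CPU exhausts its current slab roughly twice as often, then
___slab_alloc() falls back to get_partial_node() and takes the per-node
list_lock roughly twice as often too, which would line up with the extra
cycles in native_queued_spin_lock_slowpath above.
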
Thanks,
Feng
> Any thoughts, Vlastimil or Jay?
>
> >
> > then retest on this test machine:
> > 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
Thread overview: 25+ messages
2023-06-28 9:57 [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage Jay Patel
2023-07-03 0:13 ` David Rientjes
2023-07-03 8:39 ` Jay Patel
2023-07-09 14:42 ` Hyeonggon Yoo
2023-07-12 13:06 ` Vlastimil Babka
2023-07-20 10:30 ` Jay Patel
2023-07-17 13:41 ` kernel test robot
2023-07-18 6:43 ` Hyeonggon Yoo
2023-07-20 3:00 ` Oliver Sang
2023-07-20 12:59 ` Hyeonggon Yoo
2023-07-20 13:46 ` Hyeonggon Yoo
2023-07-20 14:15 ` Hyeonggon Yoo
2023-07-24 2:39 ` Oliver Sang
2023-07-31 9:49 ` Hyeonggon Yoo
2023-07-20 13:49 ` Feng Tang [this message]
2023-07-20 15:05 ` Hyeonggon Yoo
2023-07-21 14:50 ` Binder Makin
2023-07-21 15:39 ` Hyeonggon Yoo
2023-07-21 18:31 ` Binder Makin
2023-07-24 14:35 ` Feng Tang
2023-07-25 3:13 ` Hyeonggon Yoo
2023-07-25 9:12 ` Feng Tang
2023-08-29 8:30 ` Feng Tang
2023-07-26 10:06 ` Vlastimil Babka
2023-08-10 10:38 ` Jay Patel