From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Binder Makin <merimus@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-block@vger.kernel.org,
bpf@vger.kernel.org, linux-xfs@vger.kernel.org,
David Rientjes <rientjes@google.com>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Roman Gushchin <roman.gushchin@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] SLOB+SLAB allocators removal and future SLUB improvements
Date: Wed, 22 Mar 2023 22:02:28 +0900
Message-ID: <ZBr8Gf53CbJc0b5E@hyeyoo>
In-Reply-To: <CAANmLtzajny8ZK_QKVYOxLc8L9gyWG6Uu7YyL-CR-qfwphVTzg@mail.gmail.com>
On Wed, Mar 22, 2023 at 08:15:28AM -0400, Binder Makin wrote:
> Was looking at SLAB removal and started by running A/B tests of SLAB vs
> SLUB. Please note these are only preliminary results.
>
> These were run using 6.1.13 configured for SLAB/SLUB.
> Machines were standard datacenter servers.
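
(For context: "configured for SLAB/SLUB" presumably means two otherwise
identical 6.1.13 builds that differ only in the allocator selected in
Kconfig, along these lines:)

    # SLAB build
    CONFIG_SLAB=y
    # CONFIG_SLUB is not set

    # SLUB build
    CONFIG_SLUB=y
    # CONFIG_SLAB is not set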
>
> Hackbench shows completion time, so smaller is better.
> On all others larger is better.
> https://docs.google.com/spreadsheets/d/e/2PACX-1vQ47Mekl8BOp3ekCefwL6wL8SQiv6Qvp5avkU2ssQSh41gntjivE-aKM4PkwzkC4N_s_MxUdcsokhhz/pubhtml
>
> Some notes:
> SUnreclaim and SReclaimable show unreclaimable and reclaimable memory.
> Substantially higher with SLUB, but I believe that is to be expected.
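
(Assuming these refer to the SReclaimable/SUnreclaim fields of
/proc/meminfo, which split the Slab total into its reclaimable and
unreclaimable parts, they can be sampled with e.g.:)

    $ grep -E 'SReclaimable|SUnreclaim' /proc/meminfo
    SReclaimable:     ... kB
    SUnreclaim:       ... kB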
>
> Various results show a 5-10% degradation with SLUB. That feels
> concerning to me, but I'm not sure what others' tolerance would be.
Hello Binder,

Thank you for sharing data on the workloads where SLUB performs worse
than SLAB. This information is critical both for improving SLUB and for
deprecating SLAB.

By the way, the spreadsheet appears to be set to private at the moment.
Could you make it public so it can be accessed?

I am really interested in running similar experiments on my machines to
obtain comparable data that can be used to improve SLUB.

Thanks,
Hyeonggon
> redis results on AMD show some pretty bad degradations: 10-20% range.
> netpipe on Intel also has issues: 10-17%.
>
> On Tue, Mar 14, 2023 at 4:05 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> > As you're probably aware, my plan is to get rid of SLOB and SLAB, leaving
> > only SLUB going forward. The removal of SLOB seems to be going well, there
> > were no objections to the deprecation and I've posted v1 of the removal
> > itself [1] so it could be in -next soon.
> >
> > The immediate benefit of that is that we can allow kfree() (and
> > kfree_rcu()) to free objects from kmem_cache_alloc() - something that
> > IIRC at least xfs people wanted in the past, and SLOB was incompatible
> > with that.
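
(A minimal sketch of the pattern this would guarantee to be safe once
SLOB is gone; struct foo and foo_cache are hypothetical names:)

    /* a hypothetical object type backed by its own kmem_cache */
    struct foo { int x; };
    static struct kmem_cache *foo_cache;

    foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0, 0, NULL);
    struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

    /* with SLOB gone, plain kfree() works on cache-allocated objects,
     * with no need to track down foo_cache for kmem_cache_free() */
    kfree(f);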
> >
> > For SLAB removal I haven't yet heard any objections (but I also haven't
> > deprecated it yet), but if there are users whose particular workloads do
> > better with SLAB than SLUB, we can discuss why those would regress and
> > what can be done about that in SLUB.
> >
> > Once we have just one slab allocator in the kernel, we can take a closer
> > look at what users are missing from it that forces them to create their
> > own allocators (e.g. BPF), and consider adding that as a generic
> > implementation in SLUB.
> >
> > Thanks,
> > Vlastimil
> >
> > [1] https://lore.kernel.org/all/20230310103210.22372-1-vbabka@suse.cz/