public inbox for linux-mm@kvack.org
From: "teawater" <hui.zhu@linux.dev>
To: "Harry Yoo (Oracle)" <harry@kernel.org>,
	"Shakeel Butt" <shakeel.butt@linux.dev>
Cc: "Johannes Weiner" <hannes@cmpxchg.org>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, "Hui Zhu" <zhuhui@kylinos.cn>,
	"Vlastimil Babka" <vbabka@kernel.org>,
	"Hao Li" <hao.li@linux.dev>
Subject: Re: [PATCH mm-stable v3] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook
Date: Wed, 01 Apr 2026 12:26:15 +0000
Message-ID: <a897fa9eb0ba60fa5a5b4be106d9d376f2f1e2ca@linux.dev>
In-Reply-To: <acv5QCe0qMUUW2xP@hyeyoo>

> 
> On Tue, Mar 31, 2026 at 08:32:30AM -0700, Shakeel Butt wrote:
> 
> > 
> > On Tue, Mar 31, 2026 at 05:17:07PM +0800, Hui Zhu wrote:
> > > From: Hui Zhu <zhuhui@kylinos.cn>
> > >
> > > When kmem_cache_alloc_bulk() allocates multiple objects, the post-alloc
> > > hook __memcg_slab_post_alloc_hook() previously charged memcg one object
> > > at a time, even though consecutive objects may reside on slabs backed by
> > > the same pgdat node.
> > >
> > > Batch the memcg charging by scanning ahead from the current position to
> > > find a contiguous run of objects whose slabs share the same pgdat, then
> > > issue a single __obj_cgroup_charge() / __consume_obj_stock() call for
> > > the entire run. The per-object obj_ext assignment loop is preserved as-is
> > > since it cannot be further collapsed.
> > >
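
To make the scan-ahead idea concrete outside of the kernel, here is a
userspace toy model of the loop (simplified, not the patch code itself;
charge() stands in for the __obj_cgroup_charge() / __consume_obj_stock()
path, and the per-object node ids are invented):

#include <stdio.h>

/* Stand-in for "pgdat backing the slab of object i". */
static int obj_node(const int *nodes, size_t i)
{
	return nodes[i];
}

/* Stand-in for the real charge path; charges a whole run at once. */
static void charge(int node, size_t count)
{
	printf("charge node %d: %zu objects\n", node, count);
}

int main(void)
{
	int nodes[] = { 0, 0, 0, 1, 1, 0 };	/* per-object node ids */
	size_t size = sizeof(nodes) / sizeof(nodes[0]);
	size_t i = 0;

	while (i < size) {
		size_t j = i + 1;

		/* Scan ahead while consecutive objects share a node. */
		while (j < size && obj_node(nodes, j) == obj_node(nodes, i))
			j++;

		charge(obj_node(nodes, i), j - i);	/* one call per run */
		i = j;
	}
	return 0;
}
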
> > > This implements the TODO comment left in commit bc730030f956 ("memcg:
> > > combine slab obj stock charging and accounting").
> > >
> > > The existing error-recovery contract is unchanged: if size == 1 then
> > > memcg_alloc_abort_single() will free the sole object, and for larger
> > > bulk allocations kmem_cache_free_bulk() will uncharge any objects that
> > > were already charged before the failure.
> > >
> > > Benchmark using kmem_cache_alloc_bulk() with SLAB_ACCOUNT
> > > (iters=100000):
> > >
> > > bulk=32 before: 215 ns/object  after: 174 ns/object  (-19%)
> > > bulk=1  before: 344 ns/object  after: 335 ns/object  (~0%)
> > >
> > > No measurable regression for bulk=1, as expected.
> > >
> > > Signed-off-by: Hui Zhu <zhuhui@kylinos.cn>
> > >
> > Do we have an actual user of kmem_cache_alloc_bulk(GFP_ACCOUNT) in the kernel?
> > 

Hi Harry and Shakeel,

> Apparently we have a SLAB_ACCOUNT user in io_uring.c.
> (perhaps it's the only user?)

It looks like __io_alloc_req_refill() is the only caller of
kmem_cache_alloc_bulk() on a SLAB_ACCOUNT cache.

I am working on a benchmark for it.
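
The rough shape I have in mind is a small test module along the lines of
the sketch below (untested and illustrative only; the cache name, object
size, and iteration count are placeholders):

/* Sketch of a bulk-alloc benchmark module (untested, illustrative). */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/ktime.h>

#define BULK	32
#define ITERS	100000

static int __init bulk_bench_init(void)
{
	struct kmem_cache *s;
	void *objs[BULK];
	u64 t0, t1;
	int i, n;

	s = kmem_cache_create("bulk_bench", 64, 0, SLAB_ACCOUNT, NULL);
	if (!s)
		return -ENOMEM;

	t0 = ktime_get_ns();
	for (i = 0; i < ITERS; i++) {
		n = kmem_cache_alloc_bulk(s, GFP_KERNEL, BULK, objs);
		if (n)
			kmem_cache_free_bulk(s, n, objs);
	}
	t1 = ktime_get_ns();

	pr_info("bulk=%d: %llu ns/object\n", BULK,
		(t1 - t0) / ((u64)ITERS * BULK));

	kmem_cache_destroy(s);
	/* Fail the load on purpose so the module does not stay resident. */
	return -EAGAIN;
}

module_init(bulk_bench_init);
MODULE_LICENSE("GPL");
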

Best,
Hui

> 
> > 
> > If yes, can you please benchmark that usage? Otherwise, can we please
> > wait for an actual user before adding more complexity? Or you could
> > look for other potential kmem_cache_alloc_bulk(GFP_ACCOUNT) users and
> > add the optimization along with the user.
> > 
> Good point. I was also wondering what use cases, beyond the
> microbenchmark, would benefit from this.
> 
> > 
> > Have you looked at the bulk free side? I think we already have RCU bulk
> > freeing as a user. Did you find any opportunities to optimize
> > __memcg_slab_free_hook() for bulk free?
> > 
> Probably a bit out of scope, but one thing to note on the slab side:
> kfree_bulk() (called by kfree_rcu() batching) doesn't specify a slab
> cache, and it builds a detached freelist that contains objects from the
> same slab.
>
> On the other hand, kmem_cache_free_bulk() with a non-NULL slab cache
> simply calls free_to_pcs_bulk(), which passes objects one by one to
> __memcg_slab_free_hook(), since the objects may not come from the same
> slab.
> 
> Now that we have sheaves enabled for (almost) all slab caches, it might
> be worth revisiting - e.g. sorting objects by slab cache and passing
> them to free_to_pcs_bulk() instead of building a detached freelist,
> and letting __memcg_slab_free_hook() handle objects from the same cache
> but from different slabs.
> 
> -- 
> Cheers,
> Harry / Hyeonggon
>



Thread overview: 5+ messages
2026-03-31  9:17 [PATCH mm-stable v3] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook Hui Zhu
2026-03-31 11:48 ` Harry Yoo (Oracle)
2026-03-31 15:32 ` Shakeel Butt
2026-03-31 16:41   ` Harry Yoo (Oracle)
2026-04-01 12:26     ` teawater [this message]
