Linux cgroups development
* [PATCH 0/8] per-memcg-per-node kmem accounting
@ 2026-05-11 20:20 Alexandre Ghiti
  2026-05-11 20:20 ` [PATCH 1/8] mm: memcontrol: propagate NMI slab stats to memcg vmstats Alexandre Ghiti
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Alexandre Ghiti @ 2026-05-11 20:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Dennis Zhou, Tejun Heo, Christoph Lameter,
	Vlastimil Babka, Yosry Ahmed, Nhat Pham, Sergey Senozhatsky,
	Chengming Zhou, Suren Baghdasaryan, Qi Zheng, David Hildenbrand,
	Lorenzo Stoakes, Minchan Kim, Mike Rapoport, Axel Rasmussen,
	Barry Song, Kairui Song, Wei Xu, Yuanchu Xie, Liam R . Howlett,
	Joshua Hahn, linux-mm, linux-kernel, cgroups, Alexandre Ghiti

This series continues the work initiated by Joshua [1]. Kernel memory
needs to be accounted on a per-node basis so that the association
between a memcg and the physical memory it uses can be known.
  
This series takes advantage of the recent introduction of per-node  
obj_cgroup [2] and makes those obj_cgroup tied to their numa node.  
  
The bulk of the series is percpu per-node accounting: percpu
"precharges" the memcg before the actual location of the pages it
uses is known, so charging and accounting had to be split. All other
kmem users (slab, zswap, __memcg_kmem_charge_page) are straightforward
conversions (zswap support is limited in this series because Joshua
is working on it in parallel [3]).
 
Thanks Joshua for your early feedback!
  
[1] https://lore.kernel.org/linux-mm/20260404033844.1892595-1-joshua.hahnjy@gmail.com/  
[2] https://lore.kernel.org/linux-mm/56c04b1c5d54f75ccdc12896df6c1ca35403ecc3.1772711148.git.zhengqi.arch@bytedance.com/  
[3] https://lore.kernel.org/linux-mm/20260311195153.4013476-1-joshua.hahnjy@gmail.com/

Alexandre Ghiti (8):
  mm: memcontrol: propagate NMI slab stats to memcg vmstats
  mm: percpu: charge obj_exts allocation with __GFP_ACCOUNT
  mm: percpu: Split memcg charging and kmem accounting
  mm: memcontrol: track MEMCG_KMEM per NUMA node
  mm: memcontrol: per-node kmem accounting for page charges
  mm: slab: per-node kmem accounting for slab
  mm: percpu: per-node kmem accounting using local credit
  mm: zswap: per-node kmem accounting for zswap/zsmalloc

 include/linux/memcontrol.h |  27 +++++--
 include/linux/mmzone.h     |   1 +
 include/linux/zsmalloc.h   |   2 +
 mm/memcontrol.c            | 150 ++++++++++++++++++++++++++++---------
 mm/percpu-internal.h       |  16 +---
 mm/percpu.c                |  90 ++++++++++++++++++++--
 mm/vmstat.c                |   1 +
 mm/zsmalloc.c              |  11 +++
 mm/zswap.c                 |   9 ++-
 9 files changed, 242 insertions(+), 65 deletions(-)

-- 
2.54.0



Thread overview:
2026-05-11 20:20 [PATCH 0/8] per-memcg-per-node kmem accounting Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 1/8] mm: memcontrol: propagate NMI slab stats to memcg vmstats Alexandre Ghiti
2026-05-11 22:49   ` Shakeel Butt
2026-05-11 20:20 ` [PATCH 2/8] mm: percpu: charge obj_exts allocation with __GFP_ACCOUNT Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 3/8] mm: percpu: Split memcg charging and kmem accounting Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 4/8] mm: memcontrol: track MEMCG_KMEM per NUMA node Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 5/8] mm: memcontrol: per-node kmem accounting for page charges Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 6/8] mm: slab: per-node kmem accounting for slab Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 7/8] mm: percpu: per-node kmem accounting using local credit Alexandre Ghiti
2026-05-11 20:20 ` [PATCH 8/8] mm: zswap: per-node kmem accounting for zswap/zsmalloc Alexandre Ghiti
