public inbox for linux-mm@kvack.org
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: lsf-pc@lists.linux-foundation.org,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Tejun Heo" <tj@kernel.org>, "Michal Hocko" <mhocko@suse.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Hui Zhu" <hui.zhu@linux.dev>,
	"JP Kobryn" <inwardvessel@gmail.com>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Geliang Tang" <geliang@kernel.org>,
	"Sweet Tea Dorminy" <sweettea-kernel@dorminy.me>,
	"Emil Tsalapatis" <emil@etsalapatis.com>,
	"David Rientjes" <rientjes@google.com>,
	"Martin KaFai Lau" <martin.lau@linux.dev>,
	"Meta kernel team" <kernel-team@meta.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org, bpf@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] Reimagining Memory Cgroup (memcg_ext)
Date: Wed, 11 Mar 2026 15:47:31 -0700	[thread overview]
Message-ID: <abHkgYHEq5U7G7rF@linux.dev> (raw)
In-Reply-To: <abFsDg5m3lp2vVOX@cmpxchg.org>

On Wed, Mar 11, 2026 at 09:20:14AM -0400, Johannes Weiner wrote:
> On Sat, Mar 07, 2026 at 10:24:24AM -0800, Shakeel Butt wrote:

[...]

> > 
> > - Workload owners rarely know their actual memory requirements, leading to
> >   overprovisioned limits, lower utilization, and higher infrastructure costs.
> 
> Is this actually a challenge?
> 
> It appears to me proactive reclaim is fairly widespread at this point,
> giving workload owners, job schedulers, and capacity planners
> real-world, long-term profiles of memory usage.
> 
> Workload owners can use this to adjust their limits accordingly, of
> course, but even that is less relevant if schedulers and planners go
> off of the measured information. The limits become failsafes, no
> longer the declarative source of truth for memory size.

Yes, for sophisticated users this is a solved problem, particularly for
workloads with consistent memory usage behavior. Workloads with
inconsistent or sporadic usage behavior, though, are still a challenge.

> 
> > 
> > Per-Memcg Background Reclaim
> > 
> > In the new memcg world, with the goal of (mostly) eliminating direct synchronous
> > reclaim for limit enforcement, provide per-memcg background reclaimers which can
> > scale across CPUs with the allocation rate.
> 
> Meta has been carrying this patch for half a decade:
> 
> https://lore.kernel.org/linux-mm/20200219181219.54356-1-hannes@cmpxchg.org/
> 
> It sounds like others have carried similar patches.

Yeah, ByteDance has something similar too.

> 
> The relevance of this, too, has somewhat faded with proactive
> reclaim. But I think it would still be worthwhile to have. The primary
> objection was a lack of attribution of the consumed CPU cycles.
> 
> > Lock-Aware Throttling
> > 
> > The ability to avoid throttling an allocating task that is holding locks, to
> > prevent priority inversion. In Meta's fleet, we have observed lock holders stuck
> > in memcg reclaim, blocking all waiters regardless of their priority or
> > criticality.
> > 
> > Thread-Level Throttling Control
> > 
> > Workloads should be able to indicate at the thread level which threads can be
> > synchronously throttled and which cannot. For example, while experimenting with
> > sched_ext, we drastically improved the performance of AI training workloads by
> > prioritizing threads interacting with the GPU. Similarly, applications can
> > identify the threads or thread pools on their performance-critical paths and
> > the memcg enforcement mechanism should not throttle them.
> 
> I'm struggling to envision this.
> 
> CPU and GPU are renewable resources where a bias in access time and
> scheduling delays over time is naturally compensated.
> 
> With memory access past the limit, though, such a bias adds up over
> time. How do you prevent high priority threads from runaway memory
> consumption that ends up OOMing the host?

Oh, don't consider this feature in isolation. In practice there will
definitely be background reclaimers running as well. The scenario I am
envisioning for this feature is something like: at some usage threshold
we start the background reclaimers; at the next threshold we start
synchronously throttling the threads the workload has allowed; and at a
final threshold we may decide to just kill the workload.
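For illustration, the escalation could look something like the sketch
below (userspace-flavored Python; the threshold values are made up for
the example, not proposed defaults):

```python
def enforcement_action(usage, limit,
                       reclaim_at=0.80, throttle_at=0.90, kill_at=1.00):
    """Map a memcg's usage/limit ratio to an escalating action.

    The three thresholds are illustrative tunables, not kernel
    defaults; a real mechanism would let the workload configure them.
    """
    ratio = usage / limit
    if ratio >= kill_at:
        return "kill"                # last resort: kill the workload
    if ratio >= throttle_at:
        return "throttle"            # throttle only threads the workload allows
    if ratio >= reclaim_at:
        return "background-reclaim"  # wake the per-memcg reclaimers
    return "none"
```

The point is that synchronous throttling of allowed threads is only the
middle rung; the background reclaimers below it and the kill above it
bound how far a non-throttled thread can run away.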

> 
> > Combined Memory and Swap Limits
> > 
> > Some users (Google actually) need the ability to enforce limits based on
> > combined memory and swap usage, similar to cgroup v1's memsw limit, providing a
> > ceiling on total memory commitment rather than treating memory and swap
> > independently.
> > 
> > Dynamic Protection Limits
> > 
> > Rather than static protection limits, the kernel should support defining
> > protection based on the actual working set of the workload, leveraging signals
> > such as working set estimation, PSI, refault rates, or a combination thereof to
> > automatically adapt to the workload's current memory needs.
> 
> This should be possible with today's interfaces of memory.reclaim,
> memory.pressure and memory.low, right?

Yes, the node controller or the workload itself can dynamically adjust
the protection limit based on PSI, refaults, or other metrics.
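As an illustration of the control loop that today's interfaces already
allow, a userspace agent can parse memory.pressure and nudge memory.low
up or down. The file format below is the real cgroup v2 PSI format, but
the target pressure and step size are made-up tunables:

```python
def parse_psi_some_avg10(pressure_text):
    """Extract the 'some avg10' value from a cgroup v2 memory.pressure
    file, e.g. 'some avg10=1.23 avg60=0.80 avg300=0.40 total=12345'."""
    for line in pressure_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "some":
            for field in fields[1:]:
                key, _, val = field.partition("=")
                if key == "avg10":
                    return float(val)
    raise ValueError("no 'some avg10' field found")

def adjust_memory_low(current_low, avg10, target=1.0, step=64 << 20):
    """Raise protection when short-term pressure exceeds the target,
    release it otherwise. target/step are illustrative, not
    recommendations."""
    if avg10 > target:
        return current_low + step        # thrashing: protect more
    return max(0, current_low - step)    # calm: hand memory back
```

The new value would then be written to the cgroup's memory.low file;
the same loop could instead drive memory.reclaim for proactive reclaim.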

> 
> > Shared Memory Semantics
> > 
> > With more flexibility in limit enforcement, the kernel should be able to
> > account for memory shared between workloads (cgroups) during enforcement.
> > Today, enforcement only looks at each workload's memory usage independently.
> > Sensible shared memory semantics would allow the enforcer to consider
> > cross-cgroup sharing when making reclaim and throttling decisions.
> 
> My understanding is that this hasn't been a problem of implementation,
> but one of identifying reasonable, predictable semantics - how exactly
> the liability of shared resources are allocated to participating groups.
> 

This particular feature is hand-wavy at the moment, particularly due to
the lack of a mechanism that tells us how much memory is really shared.

The high-level idea is that when we know there is memory or a filesystem
shared between different workloads, the throttling decision can consider
each workload's usage excluding the shared portion, i.e. mainly its
exclusive memory usage. Whether this will help or be useful, I need to
brainstorm more.
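As a toy sketch of what "exclusive usage" would mean, assuming some
future accounting mechanism could report per-workload shared bytes (no
such kernel interface exists today):

```python
def exclusive_usage(total_usage, shared_usage):
    """Charge a workload only for memory it does not share with others.

    shared_usage would come from a hypothetical shared-memory
    accounting mechanism that does not exist yet.
    """
    if shared_usage > total_usage:
        raise ValueError("shared usage cannot exceed total usage")
    return total_usage - shared_usage

def over_limit(total_usage, shared_usage, limit):
    """Make the throttling decision on exclusive rather than total
    usage, so a workload is not penalized for pages it shares."""
    return exclusive_usage(total_usage, shared_usage) > limit
```

Whether basing enforcement on the exclusive number is the right
semantic, rather than splitting the shared charge across sharers, is
exactly the open question above.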

> > Memory Tiering
> > 
> > With a flexible limit enforcement mechanism, the kernel can balance memory
> > usage of different workloads across memory tiers based on their performance
> > requirements. Tier accounting and hotness tracking are orthogonal, but the
> > decisions of when and how to balance memory between tiers should be handled by
> > the enforcer.
> > 
> > Collaborative Load Shedding
> > 
> > Many workloads communicate with an external entity for load balancing and rely
> > on their own usage metrics like RSS or memory pressure to signal whether they
> > can accept more or less work. This is guesswork. Instead of the
> > workload guessing, the limit enforcer -- which is actually managing the
> > workload's memory usage -- should be able to communicate available headroom or
> > request the workload to shed load or reduce memory usage. This collaborative
> > load shedding mechanism would allow workloads to make informed decisions rather
> > than reacting to coarse signals.
> > 
> > Cross-Subsystem Collaboration
> > 
> > Finally, the limit enforcement mechanism should collaborate with the CPU
> > scheduler and other subsystems that can release memory. For example, dirty
> > memory is not reclaimable and the memory subsystem wakes up flushers to trigger
> > writeback. However, flushers need CPU to run -- asking the CPU scheduler to
> > prioritize them ensures the kernel does not lack reclaimable memory under
> > stressful conditions. Similarly, some subsystems free memory through workqueues
> > or RCU callbacks. While this may seem orthogonal to limit enforcement, we can
> > definitely take advantage by having visibility into these situations.
> 
> It sounds like the lock holder problem would also fit into this
> category: Identifying critical lock holders and allowing them
> temporary access past the memory and CPU limits.
> 
> But as per above, I'm not sure if blank check exemptions are workable
> for memory. It makes sense for allocations in the reclaim path for
> example, because it doesn't leave us wondering who will pay for the
> excess through a deficit. It's less obvious for a path that is
> involved with further expansion of the cgroup's footprint.

No need for a blank check. Same as with the thread throttling above: in
practice, the lock holder that is not getting throttled will run in the
presence of background reclaimers, and the workload may get killed if it
goes overboard too much.

Thanks for taking a look and poking holes.



Thread overview: 16+ messages
2026-03-07 18:24 [LSF/MM/BPF TOPIC] Reimagining Memory Cgroup (memcg_ext) Shakeel Butt
2026-03-09 21:33 ` Roman Gushchin
2026-03-09 23:09   ` Shakeel Butt
2026-03-11  4:57 ` Jiayuan Chen
2026-03-11 17:00   ` Shakeel Butt
2026-03-11  7:19 ` Muchun Song
2026-03-11 20:39   ` Shakeel Butt
2026-03-12  2:46     ` Muchun Song
2026-03-13  6:17       ` teawater
2026-03-11  7:29 ` Greg Thelen
2026-03-11 21:35   ` Shakeel Butt
2026-03-11 13:20 ` Johannes Weiner
2026-03-11 22:47   ` Shakeel Butt [this message]
2026-03-12  3:06 ` hui.zhu
2026-03-12  3:36 ` hui.zhu
2026-03-25 18:47 ` Donet Tom