From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 20 Nov 2025 09:29:52 +0000
From: hui.zhu@linux.dev
Message-ID: <895f996653b3385e72763d5b35ccd993b07c6125@linux.dev>
Subject: Re: [RFC PATCH 0/3] Memory Controller eBPF support
To: "Roman Gushchin"
Cc: "Andrew Morton", "Johannes Weiner", "Michal Hocko", "Shakeel Butt",
 "Muchun Song", "Alexei Starovoitov", "Daniel Borkmann",
 "Andrii Nakryiko", "Martin KaFai Lau", "Eduard Zingerman", "Song Liu",
 "Yonghong Song", "John Fastabend", "KP Singh", "Stanislav Fomichev",
 "Hao Luo", "Jiri Olsa", "Shuah Khan", "Peter Zijlstra", "Miguel Ojeda",
 "Nathan Chancellor", "Kees Cook", "Tejun Heo", "Jeff Xu",
 mkoutny@suse.com, "Jan Hendrik Farr", "Christian Brauner",
 "Randy Dunlap", "Brian Gerst", "Masahiro Yamada",
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 cgroups@vger.kernel.org, bpf@vger.kernel.org,
 linux-kselftest@vger.kernel.org, "Hui Zhu"
In-Reply-To: <87ldk1mmk3.fsf@linux.dev>
References: <87ldk1mmk3.fsf@linux.dev>
X-Mailing-List: cgroups@vger.kernel.org

On Nov 20, 2025, at 11:04, "Roman Gushchin" wrote:
>
> Hui Zhu writes:
>
> >
> > From: Hui Zhu
> >
> > This series proposes adding eBPF support to the Linux memory
> > controller, enabling dynamic and extensible memory management
> > policies at runtime.
> >
> > Background
> >
> > The memory controller (memcg) currently provides fixed memory
> > accounting and reclamation policies through static kernel code.
> > This limits flexibility for specialized workloads and use cases
> > that require custom memory management strategies.
> >
> > By enabling eBPF programs to hook into key memory control
> > operations, administrators can implement custom policies without
> > recompiling the kernel, while maintaining the safety guarantees
> > provided by the BPF verifier.
> >
> > Use Cases
> >
> > 1. Custom memory reclamation strategies for specialized workloads
> > 2. Dynamic memory pressure monitoring and telemetry
> > 3. Memory accounting adjustments based on runtime conditions
> > 4. Integration with container orchestration systems for
> >    intelligent resource management
> > 5. Research and experimentation with novel memory management
> >    algorithms
> >
> > Design Overview
> >
> > This series introduces:
> >
> > 1. A new BPF struct ops type (`memcg_ops`) that allows eBPF
> >    programs to implement custom behavior for memory charging
> >    operations.
> >
> > 2. A hook point in the `try_charge_memcg()` fast path that
> >    invokes registered eBPF programs to determine if custom
> >    memory management should be applied.
> >
> > 3. The eBPF handler can inspect memory cgroup context and
> >    optionally modify certain parameters (e.g., `nr_pages` for
> >    reclamation size).
> >
> > 4. A reference counting mechanism using `percpu_ref` to safely
> >    manage the lifecycle of registered eBPF struct ops instances.
>
> Can you please describe how these hooks will be used in practice?
> What's the problem you can solve with it and can't without?
>
> I generally agree with an idea to use BPF for various memcg-related
> policies, but I'm not sure how specific callbacks can be used in
> practice.
Hi Roman,

Here are some ideas for how eBPF support in memcg could be used:

Priority-based reclaim and limits in multi-tenant environments:

On a single machine with multiple tenants / namespaces / containers,
under memory pressure it's hard to decide "who should be squeezed
first" with static policies baked into the kernel. Assign a BPF
profile to each tenant's memcg; under high global pressure, BPF can
decide:
- which memcgs' memory.high should be raised (delaying reclaim),
- which memcgs should be scanned and reclaimed more aggressively.

Online profiling / diagnosing memory hotspots:

A cgroup's memory keeps growing, but without patching the kernel it's
difficult to obtain fine-grained information. Attach BPF to the memcg
charge/uncharge path: record large allocations (greater than N KB)
with call stacks and the owning file/module, and send them to user
space via a BPF ring buffer. From the sampled data, generate:
- "top N memory allocation stacks in this container over the last 10
  minutes",
- reports of which objects / call paths are growing fastest.
This makes it possible to pinpoint the root cause of host memory
anomalies without changing application code, which is very useful in
operations scenarios.

SLO-driven auto throttling / scale-in/out signals:

Use eBPF to observe memory usage slope, frequent reclaim, or near-OOM
behavior within a memcg. When it decides OOM is imminent, instead of
just killing tasks or raising limits, it can emit a signal to a
control-plane component, for example sending an event to a user-space
agent to trigger automatic scaling, QPS adjustment, or throttling.

Preventing a cgroup from launching a large-scale fork+malloc attack:

BPF checks per-uid or per-cgroup allocation behavior over the last
few seconds during memcg charge.
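The last use case could be sketched as a struct_ops program along the
following lines. This is purely illustrative: the `memcg_ops` type is
taken from the series' description, but the callback name, its
arguments, the return convention, and the use of the current task's
cgroup id as the rate-limit key are my assumptions, not the actual
interface from the patches:

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch of a memcg charge-rate throttle.  The
 * struct_ops type, callback signature and return semantics are
 * assumed for illustration; see the actual patches for the real
 * interface. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Per-cgroup charge counters for a crude sliding-window rate check. */
struct rate {
	__u64 window_start_ns;
	__u64 pages_in_window;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);		/* cgroup id */
	__type(value, struct rate);
} charge_rate SEC(".maps");

#define WINDOW_NS	(1000ULL * 1000 * 1000)	/* 1 second window */
#define MAX_PAGES	(256 * 1024)		/* ~1 GiB/s at 4K pages */

SEC("struct_ops/memcg_charge")
int BPF_PROG(memcg_charge, struct mem_cgroup *memcg, unsigned int nr_pages)
{
	/* Approximation: key on the charging task's cgroup, which may
	 * differ from @memcg in some charge paths. */
	__u64 id = bpf_get_current_cgroup_id();
	__u64 now = bpf_ktime_get_ns();
	struct rate *r, zero = { .window_start_ns = now };

	r = bpf_map_lookup_elem(&charge_rate, &id);
	if (!r) {
		bpf_map_update_elem(&charge_rate, &id, &zero, BPF_NOEXIST);
		return 0;
	}
	if (now - r->window_start_ns > WINDOW_NS) {
		r->window_start_ns = now;
		r->pages_in_window = 0;
	}
	r->pages_in_window += nr_pages;

	/* Nonzero is assumed here to mean "apply custom handling",
	 * e.g. force reclaim / throttle this memcg. */
	return r->pages_in_window > MAX_PAGES ? 1 : 0;
}

SEC(".struct_ops.link")
struct memcg_ops throttle_ops = {
	.charge = (void *)memcg_charge,
};
```

A user-space loader would attach this via the usual libbpf struct_ops
link path; the policy itself (window size, budget) could then be tuned
per tenant without rebuilding the kernel.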
I also maintain a software project,
https://github.com/teawater/mem-agent, for specialized memory
management and related functions. However, I found that implementing
certain memory QoS categories for memcg solely from user space is
rather inefficient, as it requires frequent access to values inside
memcg. This is why I want memcg to support eBPF: it lets me place
custom memory management logic directly into the kernel, greatly
improving efficiency.

Best,
Hui

>
> Thanks!
>