public inbox for cgroups@vger.kernel.org
From: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
To: linux-mm@vger.kernel.org, lsf-pc@lists.linux-foundation.org,
	cgroups@vger.kernel.org
Cc: roman.gushchin@linux.dev, shakeel.butt@linux.dev, sweettea@google.com
Subject: [LSF/MM/BPF TOPIC] Cgroup v1: timeline and path to removal.
Date: Wed, 25 Feb 2026 09:52:41 -0800
Message-ID: <7a5619a6d27119fcf566e43563a72396@dorminy.me>

Hi all,

I'd like to propose a discussion session at LSF/MM/BPF 2026 on
establishing a concrete roadmap and timeline for the removal of cgroup
v1 from the Linux kernel, acknowledging that this likely also needs a
discussion at LPC. This would be best suited as an MM topic, I think.

Background
----------

The ecosystem momentum toward cgroup v2 has accelerated significantly
over the past two years:

- systemd v256 (June 2024) disabled cgroup v1 by default, and systemd
   v258 (September 2025) removed cgroup v1 support entirely.
- Kubernetes 1.31 moved cgroup v1 support into maintenance mode;
   Kubernetes 1.35 is the last release to support v1 at all.
- The container ecosystem (runc, containerd, Docker/Moby) has begun
   formal deprecation, with maintenance commitments extending to
   approximately May 2029.
- All major cloud Kubernetes offerings (GKE, EKS, AKS) now default to
   cgroup v2 on current node images.
- To my knowledge, all major LTS distributions have now released their
   last version shipping a systemd capable of using cgroup v1, e.g.
   Debian Trixie.
- Work has proceeded in-kernel to isolate cgroup v1 code, starting with
   the memory controller [1].

At LPC 2024 the Containers and Checkpoint/Restore microconference held
an initial discussion on deprecating cgroup v1 [2], including a survey
of distro and application EOL dates [3], but the kernel community has
not yet committed to a concrete deprecation or removal timeline.

Proposed timeline
-----------------

I'd like to put a concrete proposal on the table for discussion,
oriented around the assumption that 7.4, 7.10, and 7.16 are LTS
kernels.

   2026:           Complete separation of all controller v1 code.
                   Introduce CONFIG_CGROUP_V1 (default=y), required for
                   any v1 code to be compiled in.

   Kernel 7.4      Print deprecation warnings when cgroup v1
   (late 2026):    hierarchies are mounted.

   Kernel 7.10     Switch CONFIG_CGROUP_V1 default to n. Require a
   (late 2027):    kernel command-line argument to mount v1 even when
                   CONFIG_CGROUP_V1=y.

   Kernel 7.17     Remove cgroup v1 code from the kernel entirely.
   (early 2029):
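
If it helps ground the 2026 step, here is a rough sketch of what the
new Kconfig option could look like (illustrative only -- the
CONFIG_CGROUP_V1 name and the default=y come from the proposal above;
the prompt text, dependency, and help text are my assumptions):

```
config CGROUP_V1
	bool "Legacy cgroup v1 support"
	depends on CGROUPS
	default y
	help
	  Build support for mounting the legacy cgroup v1
	  hierarchies. Cgroup v1 is deprecated; new deployments
	  should use the unified cgroup v2 hierarchy.

	  If unsure, say Y for now; the default is planned to
	  change to N in a future release.
```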

As I understand it, enterprise distros shipping LTS kernels (RHEL
8 = 4.18, Ubuntu 20.04 = 5.4, Oracle Linux 8 with UEK, etc.) will be
largely unaffected, so EOL dates are not a constraint on this timeline.

Proposed discussion topics
--------------------------

1. Kernel-side code isolation (step 1): The
    memory controller v1 code has been separated [1]. Can we isolate
    the remaining controllers (cpu, blkio, cpuset, pids, etc.), and
    can we finish doing so within 2026?

2. Remaining user-space blockers (step 2): Are
    there applications or use cases that still fundamentally require
    cgroup v1 and cannot migrate? A known pain point is mem+swap
    (memsw) accounting. There was a proposal last year to add memsw
    for cgroup v2 from Tencent [5], and it is an ongoing pain point
    for Google (where I currently work) since 2019 [4] -- how are
    companies dealing with this and any other pain points?

3. Hyperscaler readiness (step 3): The major
    hyperscalers are at varying stages of v2 migration. Meta has been
    on v2 since ~2018. AWS, Microsoft/Azure, and GKE now default to
    v2. However, Google continues to work toward completing its
    internal cgroup v2 migration (last presented in 2018 [4]).
    Oracle Cloud only switched its default image to Oracle Linux 9
    (cgroup v2) in May 2025. Alibaba's latest Cloud Linux 4 defaults
    to v2, but Alibaba Cloud Linux 3 (still widely deployed) defaults
    to v1 and was still receiving v1-specific backports in 2024.
    Tencent and ByteDance have large fleets whose v1/v2 status is not
    publicly documented to my knowledge. I hope to solicit any remaining
    migration timelines from these companies (and discuss Google's
    planned timeline), and determine whether any hard v1 dependencies
    remain that would block the proposed removal date.
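
On the memsw pain point in (2): under v1, a single knob caps memory
and swap together, while v2 only exposes independent memory and swap
limits, which is not equivalent when the goal is to bound a workload's
total footprint. A rough sketch of the two interfaces (the cgroupfs
paths and file names are the standard ones; the "job" cgroup and the
8G value are arbitrary examples):

```
# cgroup v1: memory.memsw.limit_in_bytes caps memory + swap combined
echo 8G > /sys/fs/cgroup/memory/job/memory.limit_in_bytes
echo 8G > /sys/fs/cgroup/memory/job/memory.memsw.limit_in_bytes  # mem+swap <= 8G

# cgroup v2: memory and swap are capped independently; there is no
# single "memory + swap" limit
echo 8G > /sys/fs/cgroup/job/memory.max
echo 0 > /sys/fs/cgroup/job/memory.swap.max  # nearest equivalent: forbid swap
```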

Looking forward to the discussion.

[1] https://lore.kernel.org/all/20240625005906.106920-1-roman.gushchin@linux.dev/
[2] https://lpc.events/event/18/contributions/1807/
[3] https://lpc.events/event/18/contributions/1807/attachments/1613/3344/Deprecating-cgrp-v1-Kamalesh.pdf
[4] https://lpc.events/event/2/contributions/204/attachments/143/378/LPC2018-cgroup-v2.pdf
[5] https://lore.kernel.org/all/20250319064148.774406-1-jingxiangzeng.cas@gmail.com/

Thanks,
Sweet Tea
