From: Kui-Feng Lee <sinquersw@gmail.com>
To: Yafang Shao <laoar.shao@gmail.com>,
	ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
	andrii@kernel.org, martin.lau@linux.dev, song@kernel.org,
	yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, tj@kernel.org,
	lizefan.x@bytedance.com, hannes@cmpxchg.org,
	yosryahmed@google.com, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller
Date: Mon, 25 Sep 2023 11:22:17 -0700
Message-ID: <9e83bda8-ea1b-75b9-c55f-61cf11b4cd83@gmail.com>
In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com>
On 9/22/23 04:28, Yafang Shao wrote:
> Currently, BPF is primarily confined to cgroup2, with the exception of
> cgroup_iter, which supports cgroup1 fds. Unfortunately, this limitation
> prevents us from harnessing the full potential of BPF within cgroup1
> environments.
>
> In our endeavor to seamlessly integrate BPF within our Kubernetes
> environment, which relies on cgroup1, we have been exploring the
> possibility of transitioning to cgroup2. While this transition is
> forward-looking, it poses challenges due to the necessity for numerous
> applications to adapt.
>
> While we acknowledge that cgroup2 represents the future, we also recognize
> that such transitions demand time and effort. As a result, we are
> considering an alternative approach. Instead of migrating to cgroup2, we
> are contemplating modifications to the BPF kernel code to ensure
> compatibility with cgroup1. These adjustments appear to be relatively
> minor, making this option more feasible.
Do you mean giving up on moving to cgroup2, or is this just a tentative
solution?
Thread overview: 21+ messages
2023-09-22 11:28 [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support for cgroup controller Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in bpf_task_under_cgroup() Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1 Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys() Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment() Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id() Yafang Shao
2023-09-22 11:28 ` [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller Yafang Shao
2023-09-22 16:52 ` [RFC PATCH bpf-next 0/8] bpf, cgroup: Add bpf support " Tejun Heo
2023-09-24  6:32   ` Yafang Shao
2023-09-25 18:43     ` Tejun Heo
2023-09-26  3:01       ` Yafang Shao
2023-09-26 18:25         ` Tejun Heo
2023-09-27  2:27           ` Yafang Shao
2023-09-25 18:22 ` Kui-Feng Lee [this message]
2023-09-26  3:08   ` Yafang Shao