From: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
To: Yafang Shao <laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
ast-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
daniel-FeC+5ew28dpmcu3hnIyYJQ@public.gmane.org,
andrii-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kafai-b10kYP2dOMg@public.gmane.org,
songliubraving-b10kYP2dOMg@public.gmane.org,
yhs-b10kYP2dOMg@public.gmane.org,
john.fastabend-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org,
kpsingh-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
sdf-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
haoluo-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
jolsa-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org,
shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
muchun.song-fxUVXftIFDnyG1zEObXtfA@public.gmane.org,
akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
bpf-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org
Subject: Re: [PATCH bpf-next 0/5] bpf, mm: introduce cgroup.memory=nobpf
Date: Wed, 8 Feb 2023 14:29:43 -0500 [thread overview]
Message-ID: <Y+P4J5+fykUp67b5@cmpxchg.org> (raw)
In-Reply-To: <20230205065805.19598-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
On Sun, Feb 05, 2023 at 06:58:00AM +0000, Yafang Shao wrote:
> The bpf memory accounting has some known problems in container
> environments:
>
> - The container memory usage is not consistent if there are pinned bpf
>   programs
>   After the container restarts, the leftover bpf programs won't be
>   accounted to the new generation, so the memory usage of the container
>   is not consistent. This issue can be resolved by introducing a
>   selectable memcg, but we don't have an agreement on that solution yet.
>   See also the discussions at https://lwn.net/Articles/905150/ .
>
> - The leftover non-preallocated bpf map can't be limited
>   The leftover bpf map will be reparented, and thus it will be limited
>   by the parent, rather than by the container itself. Furthermore, if
>   the parent is destroyed, it will be limited by its parent's parent,
>   and so on. This can also be resolved by introducing a selectable
>   memcg.
>
> - The memory dynamically allocated in a bpf prog is charged to the root
>   memcg only
>   Nowadays a bpf prog can dynamically allocate memory, for example via
>   bpf_obj_new(), but it only allocates from the global bpf_mem_alloc
>   pool, so it is charged to the root memcg only. That needs to be
>   addressed by a new proposal.
>
> So let's give the user an option to disable bpf memory accounting.
>
> The idea of "cgroup.memory=nobpf" originally came from Tejun[1].
I'm not the most familiar with bpf internals, but the memcg bits and
adding the boot time flag look good to me:
Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Thread overview: 11+ messages
2023-02-05 6:58 [PATCH bpf-next 0/5] bpf, mm: introduce cgroup.memory=nobpf Yafang Shao
2023-02-05 6:58 ` [PATCH bpf-next 2/5] bpf: use bpf_map_kvcalloc in bpf_local_storage Yafang Shao
2023-02-08 19:25 ` Johannes Weiner
[not found] ` <Y+P3HSLNR94wILP1-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
2023-02-09 11:27 ` Yafang Shao
2023-02-05 6:58 ` [PATCH bpf-next 3/5] bpf: introduce bpf_memcg_flags() Yafang Shao
[not found] ` <20230205065805.19598-1-laoar.shao-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2023-02-05 6:58 ` [PATCH bpf-next 1/5] mm: memcontrol: add new kernel parameter cgroup.memory=nobpf Yafang Shao
2023-02-05 6:58 ` [PATCH bpf-next 4/5] bpf: allow to disable bpf map memory accounting Yafang Shao
2023-02-05 6:58 ` [PATCH bpf-next 5/5] bpf: allow to disable bpf prog " Yafang Shao
2023-02-08 19:29 ` Johannes Weiner [this message]
2023-02-08 20:54 ` [PATCH bpf-next 0/5] bpf, mm: introduce cgroup.memory=nobpf Roman Gushchin
[not found] ` <Y+QL8s1VEHlolXM3-+xijCwNIfdoLQcUKs7qKB+WAnPUfkyWGUBSOeVevoDU@public.gmane.org>
2023-02-09 11:28 ` Yafang Shao