public inbox for linux-kernel@vger.kernel.org
From: Michal Hocko <mhocko@suse.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Chuyi Zhou <zhouchuyi@bytedance.com>,
	hannes@cmpxchg.org, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org, bpf@vger.kernel.org,
	linux-kernel@vger.kernel.org, wuyun.abel@bytedance.com,
	robin.lu@bytedance.com
Subject: Re: [RFC PATCH 0/5] mm: Select victim memcg using BPF_OOM_POLICY
Date: Mon, 31 Jul 2023 15:12:20 +0200	[thread overview]
Message-ID: <ZMezNBQYHBOKve80@dhcp22.suse.cz> (raw)
In-Reply-To: <ZMQME4f9Okp8Rl7N@P9FQF9L96D>

On Fri 28-07-23 11:42:27, Roman Gushchin wrote:
> On Fri, Jul 28, 2023 at 10:06:38AM +0200, Michal Hocko wrote:
> > On Thu 27-07-23 21:30:01, Roman Gushchin wrote:
> > > On Thu, Jul 27, 2023 at 10:15:16AM +0200, Michal Hocko wrote:
> > > > On Thu 27-07-23 15:36:27, Chuyi Zhou wrote:
> > > > > This patchset tries to add a new bpf prog type and use it to select
> > > > > a victim memcg when a global OOM is invoked. The main motivation is
> > > > > the need for customizable OOM victim selection so that we can
> > > > > protect more important apps from the OOM killer.
> > > > 
> > > > This is rather too modest to give an idea of how the whole thing is
> > > > supposed to work. I have looked through the patches very quickly but
> > > > there is no overall design described anywhere either.
> > > > 
> > > > Please could you give us a high level design description and reasoning
> > > > why certain decisions have been made? E.g. why is this limited to the
> > > > global oom situation, why is the BPF program forced to operate on memcgs
> > > > as entities etc...
> > > > Also it would be very helpful to call out limitations of the BPF
> > > > program, if there are any.
> > > 
> > > One thing I realized recently: we don't have to make the victim selection
> > > during the OOM itself; we [almost always] can do it in advance.
> > > 
> > > The kernel OOM killer must guarantee forward progress under heavy memory
> > > pressure, and that creates a lot of limitations on what can and what
> > > can't be done in these circumstances.
> > > 
> > > But in practice most policies, except maybe those which aim to catch very
> > > fast memory spikes, rely on things which are fairly static: the logical
> > > importance of some workloads relative to others, "age", memory footprint,
> > > etc.
> > > 
> > > So I wonder if the right path is to create a kernel interface which allows
> > > defining an OOM victim (maybe several victims, also depending on whether
> > > it's a global or a memcg oom) and updating it periodically from userspace.
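
The "select a victim in advance" idea above can be sketched in a few lines of
shell. Everything here is illustrative: the layout mimics a cgroup v2 tree
(one memory.current file per child group), but the throwaway fixture
directory, the group names, and the largest-footprint policy are assumptions
for the sketch, not anything proposed in the thread.

```shell
# Pick the child cgroup under $1 with the largest memory.current.
# In a real oomd-style daemon this would run on a timer against
# /sys/fs/cgroup and then raise oom_score_adj for the victim's tasks.
pick_victim() {
    root=$1
    for d in "$root"/*/; do
        [ -f "$d/memory.current" ] || continue
        printf '%s %s\n' "$(cat "$d/memory.current")" "$d"
    done | sort -rn | head -n1 | awk '{print $2}'
}

# Throwaway fixture standing in for the real cgroup hierarchy.
tmp=$(mktemp -d)
mkdir -p "$tmp/web" "$tmp/batch"
echo 104857600 > "$tmp/web/memory.current"    # 100 MiB
echo 524288000 > "$tmp/batch/memory.current"  # 500 MiB
victim=$(pick_victim "$tmp")
echo "victim: $victim"                        # the batch cgroup path
rm -rf "$tmp"
```

The point of the sketch is that none of this needs to run in the OOM path
itself; the selection can happen ahead of time in userspace.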
> > 
> > We already have that interface. Just echo OOM_SCORE_ADJ_MAX into the
> > oom_score_adj of any tasks that are to be killed preferentially...
> > Not a great interface but still something available.
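
For reference, a minimal sketch of the existing knob referred to above,
assuming a Linux system with procfs mounted: raising one's own oom_score_adj
to OOM_SCORE_ADJ_MAX (1000) requires no privileges (lowering it does).

```shell
# Mark the current shell as the preferred OOM victim. The value is
# inherited across fork, so the cat below reports the same setting.
echo 1000 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```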
> > 
> > > In fact, the second part is already implemented by tools like oomd, systemd-oomd etc.
> > > Someone might say that the first part is also implemented by the oom_score
> > > interface, but I don't think it's an example of a convenient interface.
> > > It's also not a memcg-level interface.
> > 
> > What do you mean by not memcg-level interface? What kind of interface
> > would you propose instead?
> 
> Something like memory.oom.priority, which is 0 by default, but if set to 1,
> the memory cgroup is considered a good oom victim. I don't know if we need
> priorities or whether a binary knob is fine.
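
To be clear, memory.oom.priority is only a proposal in this thread; no such
interface file exists upstream. A hedged sketch of how the suggested binary
knob might be used, with the cgroup path purely hypothetical:

```shell
# Hypothetical only: memory.oom.priority is a proposed interface, not a
# real cgroup v2 file, so this degrades gracefully when it is absent.
CG=/sys/fs/cgroup/batch
if [ -w "$CG/memory.oom.priority" ]; then
    echo 1 > "$CG/memory.oom.priority"   # mark this memcg as a good victim
    status=set
else
    status=unavailable
fi
echo "memory.oom.priority: $status"
```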

Priorities as a general API have been discussed on several occasions
(e.g. http://lkml.kernel.org/r/ZFkEqhAs7FELUO3a@dhcp22.suse.cz). Their
usefulness is rather limited, the hierarchical semantics are not trivial, etc.
-- 
Michal Hocko
SUSE Labs


Thread overview: 25+ messages
2023-07-27  7:36 [RFC PATCH 0/5] mm: Select victim memcg using BPF_OOM_POLICY Chuyi Zhou
2023-07-27  7:36 ` [RFC PATCH 1/5] bpf: Introduce BPF_PROG_TYPE_OOM_POLICY Chuyi Zhou
2023-07-27  7:36 ` [RFC PATCH 2/5] mm: Select victim memcg using bpf prog Chuyi Zhou
2023-07-27  7:36 ` [RFC PATCH 3/5] libbpf, bpftool: Support BPF_PROG_TYPE_OOM_POLICY Chuyi Zhou
2023-07-27 12:26   ` Quentin Monnet
2023-07-28  3:01     ` Chuyi Zhou
2023-07-27  7:36 ` [RFC PATCH 4/5] bpf: Add a new bpf helper to get cgroup ino Chuyi Zhou
2023-07-27  7:36 ` [RFC PATCH 5/5] bpf: Sample BPF program to set oom policy Chuyi Zhou
2023-07-27  8:15 ` [RFC PATCH 0/5] mm: Select victim memcg using BPF_OOM_POLICY Michal Hocko
2023-07-27 12:12   ` Chuyi Zhou
2023-07-27 17:23     ` Michal Hocko
2023-07-31  6:00       ` Chuyi Zhou
2023-07-31 13:23         ` Michal Hocko
2023-07-31 16:26           ` Chuyi Zhou
2023-08-01  8:18             ` Michal Hocko
2023-08-02  3:04               ` Chuyi Zhou
2023-07-28  4:30   ` Roman Gushchin
2023-07-28  8:06     ` Michal Hocko
2023-07-28 18:42       ` Roman Gushchin
2023-07-31 13:12         ` Michal Hocko [this message]
2023-08-01  6:53     ` Abel Wu
2023-07-27 11:43 ` Alan Maguire
2023-07-27 15:57   ` Alexei Starovoitov
2023-07-27 17:17     ` Michal Hocko
2023-07-28  2:34   ` [External] " Chuyi Zhou
