linux-mm.kvack.org archive mirror
From: Balbir Singh <bsingharora@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>, Roman Gushchin <guro@fb.com>,
	Tejun Heo <tj@kernel.org>, Li Zefan <lizefan@huawei.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	kernel-team@fb.com,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [RFC PATCH] mm, oom: cgroup-aware OOM-killer
Date: Fri, 19 May 2017 05:43:59 +1000	[thread overview]
Message-ID: <1495136639.21894.3.camel@gmail.com> (raw)
In-Reply-To: <20170518192240.GA29914@cmpxchg.org>

On Thu, 2017-05-18 at 15:22 -0400, Johannes Weiner wrote:
> On Fri, May 19, 2017 at 04:37:27AM +1000, Balbir Singh wrote:
> > On Fri, May 19, 2017 at 3:30 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > > On Thu 18-05-17 17:28:04, Roman Gushchin wrote:
> > > > Traditionally, the OOM killer operates at the process level.
> > > > Under OOM conditions, it finds the process with the highest oom score
> > > > and kills it.
> > > > 
> > > > This behavior doesn't suit systems with many running
> > > > containers well. There are two main issues:
> > > > 
> > > > 1) There is no fairness between containers. A small container with
> > > > a few large processes will be chosen over a large one with a huge
> > > > number of small processes.
> > > > 
> > > > 2) Containers often do not expect that some random process inside
> > > > will be killed. So, in general, a much safer behavior is
> > > > to kill the whole cgroup. Traditionally, this was implemented
> > > > in userspace, but doing it in the kernel has some advantages,
> > > > especially in the case of a system-wide OOM.
> > > > 
> > > > To address these issues, a cgroup-aware OOM killer is introduced.
> > > > Under OOM conditions, it looks for the memcg with the highest oom
> > > > score and kills all processes inside.
> > > > 
> > > > The memcg oom score is calculated as the combined size of the active
> > > > and inactive anon LRU lists, the unevictable LRU list, and swap usage.
> > > > 
> > > > For a cgroup-wide OOM, only cgroups belonging to the subtree of
> > > > the OOMing cgroup are considered.
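
As a rough user-space illustration of the scoring rule quoted above: a
minimal sketch that sums the corresponding cgroup-v1 memory.stat keys
(active_anon, inactive_anon, unevictable, swap). This is only an
approximation for illustration; the patch itself does the accounting
inside the kernel, and the key names here are assumptions based on the
v1 interface.

/*
 * Sketch only: approximate a memcg "oom score" from user space by
 * summing the anon LRU, unevictable and swap counters in memory.stat.
 */
#include <stdio.h>
#include <string.h>

static unsigned long long memcg_oom_score(const char *memcg_path)
{
	char file[4096], key[64];
	unsigned long long val, score = 0;
	FILE *f;

	snprintf(file, sizeof(file), "%s/memory.stat", memcg_path);
	f = fopen(file, "r");
	if (!f)
		return 0;

	/* Sum anon LRU sizes, the unevictable list and swap usage (bytes) */
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "active_anon") ||
		    !strcmp(key, "inactive_anon") ||
		    !strcmp(key, "unevictable") ||
		    !strcmp(key, "swap"))
			score += val;
	}
	fclose(f);
	return score;
}

int main(int argc, char **argv)
{
	if (argc < 2)
		return 1;
	printf("%llu\n", memcg_oom_score(argv[1]));
	return 0;
}
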
> > > 
> > > While this might make sense for some workloads/setups, it is not a
> > > generally acceptable policy IMHO. We discussed that different OOM
> > > policies might be interesting a few years back at LSFMM, but there was
> > > no real consensus on how to do that. One possibility was to allow
> > > bpf-like mechanisms. Could you explore that path?
> > 
> > I agree; I think it needs more thought. I wonder if the real issue is
> > something else. For example:
> > 
> > 1. Did we overcommit a particular container too much?
> > 2. Do we need something like https://lwn.net/Articles/604212/ to solve
> > the problem?
> 
> The occasional OOM kill is an unavoidable reality on our systems (and
> I bet on most deployments). If we tried not to overcommit, we'd waste
> a *lot* of memory.
> 
> The problem is when OOM happens, we really want the biggest *job* to
> get killed. Before cgroups, we assumed jobs were processes. But with
> cgroups, the user is able to define a group of processes as a job, and
> then an individual process is no longer a first-class memory consumer.
> 
> Without a patch like this, the OOM killer will compare the sizes of
> the random subparticles that the jobs in the system are composed of
> and kill the single biggest particle, leaving behind the incoherent
> remains of one of the jobs. That doesn't make a whole lot of sense.

I agree, but see my response on oom_notifiers that I sent to Roman in
parallel.

> 
> If you want to determine the most expensive car in a parking lot, you
> can't go off and compare the price of one car's muffler with the door
> handle of another, then point to a windshield and yell "This is it!"
> 
> You need to compare the cars as a whole with each other.
> 
> > 3. We have oom notifiers now; could those be used (assuming you are
> > interested in non-memcg-related OOMs affecting a container)?
> 
> Right now, we watch for OOM notifications and then have userspace kill
> the rest of a job. That works - somewhat. What remains is the problem
> that I described above, that comparing individual process sizes is not
> meaningful when the terminal memory consumer is a cgroup.
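
For reference, the user-space side of that notification approach looks
roughly like the sketch below, using the documented cgroup-v1 eventfd
OOM notification interface (memory.oom_control registered through
cgroup.event_control). The cgroup path is an assumption, and the
"kill the rest of the job" step is left out.

/*
 * Sketch only: arm an eventfd-based OOM notification on one memcg and
 * wake up when the group hits its limit and cannot reclaim.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	const char *memcg = "/sys/fs/cgroup/memory/job0";	/* assumed path */
	char path[4096], buf[64];
	uint64_t events;
	int efd, ofd, cfd;

	efd = eventfd(0, 0);

	snprintf(path, sizeof(path), "%s/memory.oom_control", memcg);
	ofd = open(path, O_RDONLY);

	snprintf(path, sizeof(path), "%s/cgroup.event_control", memcg);
	cfd = open(path, O_WRONLY);

	if (efd < 0 || ofd < 0 || cfd < 0)
		return 1;

	/* "<eventfd> <fd of memory.oom_control>" arms the OOM notification */
	snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
	if (write(cfd, buf, strlen(buf)) < 0)
		return 1;

	/* Blocks until the memcg OOMs */
	if (read(efd, &events, sizeof(events)) > 0)
		printf("OOM in %s: userspace would kill the whole job here\n",
		       memcg);
	return 0;
}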

Could the cgroup limit be used as the comparison point? Or the stats inside
the memory cgroup?
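
Roughly what I have in mind, as a sketch only: rank memcgs by how close
their usage is to their configured limit, read from the v1 usage/limit
files. This is just an illustration of one possible comparison point,
not anything from the patch.

/* Sketch only: report a memcg's usage as a fraction of its limit. */
#include <stdio.h>

static unsigned long long read_ull(const char *memcg, const char *name)
{
	char path[4096];
	unsigned long long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", memcg, name);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(int argc, char **argv)
{
	unsigned long long usage, limit;

	if (argc < 2)
		return 1;

	/* cgroup-v1 file names; v2 would use memory.current and memory.max */
	usage = read_ull(argv[1], "memory.usage_in_bytes");
	limit = read_ull(argv[1], "memory.limit_in_bytes");

	if (limit)
		printf("%s: %.1f%% of limit in use\n", argv[1],
		       100.0 * usage / limit);
	return 0;
}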

> 
> > 4. How do we determine limits for these containers from a fairness
> > perspective?
> 
> How do you mean?

How do we set them up so that the larger job gets a bigger share of the
limits than the smaller ones?

Balbir Singh.


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

Thread overview: 21+ messages
2017-05-18 16:28 [RFC PATCH] mm, oom: cgroup-aware OOM-killer Roman Gushchin
2017-05-18 17:30 ` Michal Hocko
2017-05-18 18:11   ` Johannes Weiner
2017-05-19  8:02     ` Michal Hocko
2017-05-18 18:37   ` Balbir Singh
2017-05-18 19:20     ` Roman Gushchin
2017-05-18 19:41       ` Balbir Singh
2017-05-18 19:22     ` Johannes Weiner
2017-05-18 19:43       ` Balbir Singh [this message]
2017-05-18 20:15         ` Johannes Weiner
2017-05-20 18:37 ` Vladimir Davydov
2017-05-22 17:01   ` Roman Gushchin
2017-05-23  7:07     ` Michal Hocko
2017-05-23 13:25       ` Johannes Weiner
2017-05-25 15:38         ` Michal Hocko
2017-05-25 17:08           ` Johannes Weiner
2017-05-31 16:25             ` Michal Hocko
2017-05-31 18:01               ` Johannes Weiner
2017-06-02  8:43                 ` Michal Hocko
2017-06-02 15:18                   ` Roman Gushchin
2017-06-05  8:27                     ` Michal Hocko
