From: Roman Gushchin <guro@fb.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: linux-mm@kvack.org, Vladimir Davydov <vdavydov.dev@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	David Rientjes <rientjes@google.com>, Tejun Heo <tj@kernel.org>,
	kernel-team@fb.com, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v6 2/4] mm, oom: cgroup-aware OOM killer
Date: Thu, 24 Aug 2017 15:58:01 +0100	[thread overview]
Message-ID: <20170824145801.GA23457@castle.DHCP.thefacebook.com> (raw)
In-Reply-To: <20170824141336.GP5943@dhcp22.suse.cz>

On Thu, Aug 24, 2017 at 04:13:37PM +0200, Michal Hocko wrote:
> On Thu 24-08-17 14:58:42, Roman Gushchin wrote:
> > On Thu, Aug 24, 2017 at 02:58:11PM +0200, Michal Hocko wrote:
> > > On Thu 24-08-17 13:28:46, Roman Gushchin wrote:
> > > > Hi Michal!
> > > > 
> > > There is nothing like a "better victim". We are pretty much in a
> > > catastrophic situation when we try to survive by killing a userspace
> > > task.
> > 
> > Not necessarily; it can be a cgroup OOM.
> 
> memcg OOM is no different. The catastrophe is scoped to the specific
> hierarchy, but tasks in that hierarchy still fail to make further
> progress.
> 
> > > We try to kill the largest because that assumes that we return the
> > > most memory from it. Now I do understand that you want to treat the
> > > memcg as a single killable entity, but I find it really questionable
> > > to do a per-memcg metric and then not treat it like that and kill
> > > only a single task. Just imagine a single memcg with zillions of
> > > tasks, each very small, which you select as the largest while killing
> > > a single small task doesn't get us out of the OOM.
> > 
> > I don't think it's different from a non-containerized state: if you
> > have a zillion small tasks in the system, you'll meet the same issues.
> 
> Yes, this is possible, but usually you are comparing apples to apples,
> so you will kill the largest offender and then go on. To be honest, I
> really do hate how we try to kill a child rather than the selected
> victim, for the same reason.

I do hate it too.
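
Just to make the problem concrete (numbers are purely illustrative):

/*
 * Illustrative sketch only, not kernel code: selecting the memcg with
 * the largest aggregate footprint and then killing a single task from
 * it can free almost nothing.
 */
long freed_by_single_kill(long memcg_total_pages, long ntasks)
{
	/* with roughly equal tasks, one kill frees total/ntasks pages */
	return memcg_total_pages / ntasks;
}

/*
 * E.g. 10M pages spread over 100000 tasks: one kill frees ~100 pages,
 * while the memcg as a whole still looks like the biggest victim.
 */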

> 
> > > > > I guess I have asked already and we haven't reached any consensus. I
> > > > > do not like how you treat memcgs and tasks differently. Why can't the
> > > > > memcg score be the sum of its tasks' scores?
> > > > 
> > > > It sounds like a more expensive way to get almost the same result
> > > > with less accuracy. Why is it better?
> > > 
> > > because then you are comparing apples to apples?
> > 
> > Well, I can say that I compare some number of pages against some other
> > number of pages. And the relation between a page and a memcg is more
> > obvious than the relation between a page and a process.
> 
> But you are comparing different accounting systems.
>  
> > Both ways are not ideal, and the sum of the processes is not ideal
> > either. Especially if you take oom_score_adj into account. Will you
> > respect it?
> 
> Yes, and I do not see any reason why we shouldn't.

It makes things even more complicated.
Right now a task's oom_score can be in the (~ -total_memory, ~ +2*total_memory)
range, and if you start summing it, the oom_score_adj component can be
multiplied by the number of tasks... Weird.
It will also differ between a system-wide and a memcg-wide OOM.
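
Roughly, the per-task badness arithmetic looks like this (a simplified
sketch of the idea, not the actual oom_badness() code):

/* simplified per-task badness, in pages */
long task_score(long rss, long swap, long oom_score_adj, long totalpages)
{
	/* oom_score_adj is in [-1000, 1000], scaled to total memory */
	return rss + swap + oom_score_adj * totalpages / 1000;
}

Summing this over N tasks also sums the adj component: a memcg with
N tasks at oom_score_adj = 1000 gains an extra N * totalpages, so the
memcg score can leave the per-task range entirely.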

> 
> > I actually started with such an approach, but then found it weird.
> > 
> > > Besides that you have
> > > to check each task for over-killing anyway. So I do not see any
> > > performance merits here.
> > 
> > It's an implementation detail, and we can hopefully get rid of it at some point.
> 
> Well, we might do some estimations and ignore oom scopes, but that
> sounds really complicated and error prone. Unless we have anything like
> that, I would start from tasks and build up what is necessary to make a
> decision at the higher level.

Seriously speaking, do you have an example where summing per-process
oom_scores would work better?

Especially if we're talking about customizing the oom_score calculation,
it makes no sense to me. How would you sum process timestamps?

>  
> > > > > How do you want to compare memcg score with tasks score?
> > > > 
> > > > I have to do it for tasks in the root cgroup, but it shouldn't be a
> > > > common case.
> > > 
> > > How come? I can easily imagine a setup where only some memcgs really
> > > do need kill-all semantics, while all others can live perfectly fine
> > > with a single task killed.
> > 
> > I mean, taking the unified cgroup hierarchy into account, there should
> > not be a lot of tasks in the root cgroup, if any.
> 
> Is that really the case? I would assume that the memory controller would
> be enabled only in those subtrees which really use the functionality, and
> the rest will be sitting in the root memcg. It might be the case if you
> are running only containers, but I am not really sure this is true in
> general.

Agree.
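
For the root cgroup, the selection then becomes a mixed comparison,
roughly like this (a simplified sketch with hypothetical helper names,
not the actual patch code):

/*
 * Hypothetical helpers for illustration: leaf memcgs are scored as a
 * whole, tasks in the root memcg are scored individually, and the two
 * kinds of scores are compared directly.
 */
for_each_leaf_memcg(memcg)
	best = max(best, memcg_total_charged_pages(memcg));

for_each_task_in_root_memcg(task)
	best = max(best, task_badness_pages(task));

This only stays meaningful as long as both sides are counted in
comparable units (pages).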



Thread overview: 26+ messages
2017-08-23 16:51 [v6 1/4] mm, oom: refactor the oom_kill_process() function Roman Gushchin
2017-08-23 16:51 ` [v6 0/4] cgroup-aware OOM killer Roman Gushchin
2017-08-23 16:51 ` [v6 2/4] mm, oom: " Roman Gushchin
2017-08-23 23:19   ` David Rientjes
2017-08-25 10:57     ` Roman Gushchin
2017-08-24 11:47   ` Michal Hocko
2017-08-24 12:28     ` Roman Gushchin
2017-08-24 12:58       ` Michal Hocko
     [not found]         ` <20170824125811.GK5943-2MMpYkNvuYDjFM9bn6wA6Q@public.gmane.org>
2017-08-24 13:58           ` Roman Gushchin
2017-08-24 14:13             ` Michal Hocko
2017-08-24 14:58               ` Roman Gushchin [this message]
2017-08-25  8:14                 ` Michal Hocko
2017-08-25 10:39                   ` Roman Gushchin
2017-08-25 10:58                     ` Michal Hocko
     [not found]                   ` <20170825081402.GG25498-2MMpYkNvuYDjFM9bn6wA6Q@public.gmane.org>
2017-08-30 11:22                     ` Roman Gushchin
2017-08-30 20:56                       ` David Rientjes
2017-08-31 13:34                         ` Roman Gushchin
2017-08-31 20:01                           ` David Rientjes
2017-08-23 16:52 ` [v6 3/4] mm, oom: introduce oom_priority for memory cgroups Roman Gushchin
2017-08-24 12:10   ` Michal Hocko
2017-08-24 12:51     ` Roman Gushchin
     [not found]       ` <20170824125113.GB15916-B3w7+ongkCiLfgCeKHXN1g2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
2017-08-24 13:48         ` Michal Hocko
2017-08-24 14:11           ` Roman Gushchin
2017-08-28 20:54             ` David Rientjes
2017-08-23 16:52 ` [v6 4/4] mm, oom, docs: describe the cgroup-aware OOM killer Roman Gushchin
2017-08-24 11:15 ` [v6 1/4] mm, oom: refactor the oom_kill_process() function Michal Hocko
