From: Michal Hocko <mhocko@suse.com>
To: Vasily Averin <vvs@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Roman Gushchin <guro@fb.com>, Uladzislau Rezki <urezki@gmail.com>,
Vlastimil Babka <vbabka@suse.cz>,
Shakeel Butt <shakeelb@google.com>,
Mel Gorman <mgorman@techsingularity.net>,
Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 3/3] memcg: handle memcg oom failures
Date: Thu, 21 Oct 2021 13:49:29 +0200 [thread overview]
Message-ID: <YXFPSvGFV539OcEk@dhcp22.suse.cz> (raw)
In-Reply-To: <d3b32c72-6375-f755-7599-ab804719e1f6@virtuozzo.com>
On Wed 20-10-21 18:46:56, Vasily Averin wrote:
> On 20.10.2021 16:02, Michal Hocko wrote:
> > On Wed 20-10-21 15:14:27, Vasily Averin wrote:
> >> mem_cgroup_oom() can fail if current task was marked unkillable
> >> and oom killer cannot find any victim.
> >>
> >> Currently we force the memcg charge for such allocations,
> >> however this allows a memcg-limited userspace task to overuse
> >> its assigned limits and potentially trigger a global memory shortage.
> >
> > You should really go into more details whether that is a practical
> > problem to handle. OOM_FAILED means that the memcg oom killer couldn't
> > find any oom victim so it cannot help with a forward progress. There are
> > not that many situations when that can happen. Naming them would be
> > really useful.
>
> I pointed this out above:
> "if current task was marked unkillable and oom killer cannot find any victim."
> This may happen when the current task cannot be oom-killed because it is marked
> unkillable, i.e. it has p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN,
> and the other processes in the memcg are either dying, are kernel threads, or are
> marked unkillable in the same way; or when the memcg contains only this process.
>
> If we always approve such allocations, this can be misused.
> A process can mmap a lot of memory,
> then touch it, generating page faults and overcharged memory allocations.
> Finally it can consume all node memory and trigger a global memory shortage on the host.
Yes, this is true, but a) OOM_SCORE_ADJ_MIN tasks are excluded from the
OOM handling, so they have to be careful with their memory consumption, and
b) is this a theoretical or a practical concern?
This is mostly what I wanted to make sure you describe in the changelog.
> >> Let's fail the memory charge in such cases.
> >>
> >> This failure should be somehow recognised in #PF context,
> >
> > explain why
>
> When the #PF handler cannot allocate memory (for the reason described above),
> handle_mm_fault() returns VM_FAULT_OOM, and its caller executes pagefault_out_of_memory().
> If the latter cannot recognize the real reason for the failure, it assumes a global
> memory shortage and executes the global out_of_memory(), which can kill a random process
> or even panic the node if sysctl vm.panic_on_oom is set to 1.
>
> Currently pagefault_out_of_memory() knows about a possible async memcg OOM and handles it correctly.
> However, it is not aware that the memcg can reject some other allocations, so it does not
> recognize the fault as memcg-related and allows the global OOM to run.
Again something to be added to the changelog.
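The decision chain Vasily describes can be reduced to a toy model. The names, enum values, and parameters below are invented for illustration; the real logic lives in mm/oom_kill.c and is not this simple.

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes when VM_FAULT_OOM reaches pagefault_out_of_memory(). */
enum pf_oom_action { PF_HANDLED_MEMCG, PF_FALSE_GLOBAL_OOM, PF_REAL_GLOBAL_OOM };

/*
 * Toy model: only an OOM that was *recorded* during charging (the async
 * memcg OOM) is recognised as memcg-related. An unrecorded memcg charge
 * failure is indistinguishable from a global shortage.
 */
static enum pf_oom_action pf_oom_model(bool memcg_oom_recorded,
				       bool charge_failed_in_memcg)
{
	if (memcg_oom_recorded)
		return PF_HANDLED_MEMCG;

	/*
	 * Everything else is treated as a global memory shortage and the
	 * global out_of_memory() runs (or the node panics when
	 * vm.panic_on_oom=1), even when the allocation actually failed
	 * against a memcg limit: the false global OOM.
	 */
	return charge_failed_in_memcg ? PF_FALSE_GLOBAL_OOM
				      : PF_REAL_GLOBAL_OOM;
}
```

The middle case is exactly the bug the series addresses: a memcg-local failure escalated to a host-wide OOM kill.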
> >> so let's use current->memcg_in_oom == (struct mem_cgroup *)OOM_FAILED
> >>
> >> ToDo: what is the best way to notify pagefault_out_of_memory() about
> >> a mem_cgroup_out_of_memory() failure?
> >
> > why don't you simply remove out_of_memory from pagefault_out_of_memory
> > and leave it only with the blocking memcg OOM handling? Wouldn't that be a
> > more generic solution? Your first patch already goes that way partially.
>
> I clearly understand that the global out_of_memory() should not be triggered by memcg restrictions.
> I clearly understand that a dying task will release some memory soon, so we need not run the global OOM if the current task is dying.
>
> However, I'm not sure that I can remove out_of_memory() entirely. At least I do not have good arguments for doing so.
I do understand that handling a very specific case sounds easier, but it
would be better to have a robust fix even if that requires some more
head scratching. So far we have collected several reasons why it is
bad to trigger the oom killer from the #PF path. There is no single
argument to keep it, so removing it sounds like a viable path to pursue.
Maybe there are some very well hidden reasons, but those should be
documented, and this is a great opportunity to do so.
Moreover, if it turns out that there is a regression, this can be
easily reverted and a different, maybe memcg-specific, solution can be
implemented.
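For reference, the removal suggested here would leave the #PF OOM path looking roughly like the sketch below. The two model_* helpers are stubs standing in for the real kernel functions (mem_cgroup_oom_synchronize() and fatal_signal_pending()), and the string return values are purely illustrative.

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the real kernel helpers (assumptions). */
static bool model_memcg_oom_synchronize(void) { return false; }
static bool model_fatal_signal_pending(void) { return false; }

/*
 * Sketch of a pagefault_out_of_memory() with the global out_of_memory()
 * call removed: handle a recorded memcg OOM, bail out for dying tasks,
 * and otherwise simply let the page fault be retried.
 */
static const char *pf_oom_without_global(void)
{
	if (model_memcg_oom_synchronize())
		return "memcg oom handled";
	if (model_fatal_signal_pending())
		return "dying task: memory will be released soon";
	/* No global OOM from #PF: VM_FAULT_OOM just retries the fault. */
	return "retry the page fault";
}
```

Nothing in this reduced flow can kill an unrelated process or panic the node, which is the robustness argument made above.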
--
Michal Hocko
SUSE Labs