From: Michal Hocko <mhocko@suse.com>
To: Charan Teja Kalla <quic_charante@quicinc.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net,
david@redhat.com, vbabka@suse.cz, hannes@cmpxchg.org,
quic_pkondeti@quicinc.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH V3 3/3] mm: page_alloc: drain pcp lists before oom kill
Date: Wed, 15 Nov 2023 15:09:45 +0100 [thread overview]
Message-ID: <ZVTRKTi2QCoMiv50@tiehlicka> (raw)
In-Reply-To: <342a8854-eef5-f68a-15e5-275de70e3f01@quicinc.com>
On Tue 14-11-23 22:06:45, Charan Teja Kalla wrote:
> Thanks Michal!!
>
> On 11/14/2023 4:18 PM, Michal Hocko wrote:
> >> At least in my particular stress test case it just delayed the OOM as I
> >> can see that at the time of OOM kill, there are no free pcp pages. My
> >> understanding of the OOM is that it should be the last resort and only
> >> after enough reclaim retries. CMIW here.
> > Yes it is a last resort but it is a heuristic as well. So the real
> > question is whether this makes any practical difference outside of
> > artificial workloads. I do not see anything particularly worrying about
> > draining the pcp cache, but it should be noted that this won't be 100%
> > either, as racing freeing of memory will end up on pcp lists first.
>
> Okay, I don't have any practical scenario where this helped me in
> avoiding the OOM. I will come back if I ever encounter this issue in a
> practical scenario.
>
> Also, any comments on [PATCH V2 2/3] mm: page_alloc: correct high
> atomic reserve calculations would help me.
I do not have a strong opinion on that one to be honest. I am not even
sure that reserving a full page block (4MB) on small systems as
presented is really a good use of memory.
--
Michal Hocko
SUSE Labs
Thread overview: 23+ messages
2023-11-05 12:50 [PATCH V2 0/3] mm: page_alloc: fixes for early oom kills Charan Teja Kalla
2023-11-05 12:50 ` [PATCH V2 1/3] mm: page_alloc: unreserve highatomic page blocks before oom Charan Teja Kalla
2023-11-09 10:29 ` Michal Hocko
2023-11-05 12:50 ` [PATCH V2 2/3] mm: page_alloc: correct high atomic reserve calculations Charan Teja Kalla
2023-11-16 9:59 ` Mel Gorman
2023-11-16 12:52 ` Michal Hocko
2023-11-17 16:19 ` Mel Gorman
2023-11-05 12:50 ` [PATCH V3 3/3] mm: page_alloc: drain pcp lists before oom kill Charan Teja Kalla
2023-11-05 12:55 ` Charan Teja Kalla
2023-11-09 10:33 ` Michal Hocko
2023-11-10 16:36 ` Charan Teja Kalla
2023-11-14 10:48 ` Michal Hocko
2023-11-14 16:36 ` Charan Teja Kalla
2023-11-15 14:09 ` Michal Hocko [this message]
2023-11-16 6:00 ` Charan Teja Kalla
2023-11-16 12:55 ` Michal Hocko
2023-11-17 5:43 ` Charan Teja Kalla
2024-01-25 16:36 ` Zach O'Keefe
2024-01-26 10:47 ` Charan Teja Kalla
2024-01-26 10:57 ` Michal Hocko
2024-01-26 22:51 ` Zach O'Keefe
2024-01-29 15:04 ` Michal Hocko
2024-02-06 23:15 ` Zach O'Keefe