From: Charan Teja Kalla <quic_charante@quicinc.com>
To: <akpm@linux-foundation.org>, <mgorman@techsingularity.net>,
<mhocko@suse.com>, <david@redhat.com>, <vbabka@suse.cz>,
<hannes@cmpxchg.org>, <quic_pkondeti@quicinc.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
Charan Teja Kalla <quic_charante@quicinc.com>
Subject: [PATCH V3 0/2] mm: page_alloc: fixes for high atomic reserve calculations
Date: Fri, 24 Nov 2023 16:35:51 +0530
Message-ID: <cover.1700821416.git.quic_charante@quicinc.com>
The state of the system in which the issue was exposed is shown in the oom kill logs:
[ 295.998653] Normal free:7728kB boost:0kB min:804kB low:1004kB
high:1204kB reserved_highatomic:8192KB active_anon:4kB inactive_anon:0kB
active_file:24kB inactive_file:24kB unevictable:1220kB writepending:0kB
present:70732kB managed:49224kB mlocked:0kB bounce:0kB free_pcp:688kB
local_pcp:492kB free_cma:0kB
[ 295.998656] lowmem_reserve[]: 0 32
[ 295.998659] Normal: 508*4kB (UMEH) 241*8kB (UMEH) 143*16kB (UMEH)
33*32kB (UH) 7*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB
0*4096kB = 7752kB
From the above, it can be seen that ~8MB, i.e. ~16% of the zone's managed
memory, is reserved for high atomic reserves against the expectation of 1%
reserves; this is fixed in the 1st patch.
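For illustration, below is a minimal user-space sketch of how such an
over-reservation can happen on a small zone. The clamp formula (1% of the
zone's managed memory plus one pageblock) and the 4MB pageblock size are
assumptions made only for this example; the numbers mirror the Normal zone
in the log above.

#include <stdio.h>

int main(void)
{
	/* Values taken from the oom log above; 4MB pageblock is assumed. */
	unsigned long managed_kb = 49224;
	unsigned long pageblock_kb = 4096;

	/* Assumed clamp: 1% of the zone plus one extra pageblock. */
	unsigned long max_managed = managed_kb / 100 + pageblock_kb;
	unsigned long reserved = 0;

	/* Reservations happen one whole pageblock at a time. */
	while (reserved < max_managed)
		reserved += pageblock_kb;

	printf("1%% of the zone   : %lukB\n", managed_kb / 100);	/* 492kB  */
	printf("reserve clamp     : %lukB\n", max_managed);		/* 4588kB */
	printf("actually reserved : %lukB\n", reserved);		/* 8192kB */
	return 0;
}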
The 2nd patch avoids reserving high atomic page blocks altogether when 1% of
the zone's memory size is below a pageblock size, as sketched below.
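A rough sketch of that idea, with an illustrative helper name rather than the
exact kernel code, could look like this:

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical helper: only reserve when 1% of the zone's managed memory
 * can hold at least one full pageblock.
 */
static bool worth_reserving_highatomic(unsigned long managed_kb,
				       unsigned long pageblock_kb)
{
	return managed_kb / 100 >= pageblock_kb;
}

int main(void)
{
	/*
	 * The zone from the log: 1% of 49224kB is 492kB, which is below a
	 * 4MB pageblock (assumed), so no high atomic reservation is made.
	 */
	printf("reserve? %s\n",
	       worth_reserving_highatomic(49224, 4096) ? "yes" : "no");
	return 0;
}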
Changes in V3:
o Separated out the patch that unreserves the high atomic pageblocks from
  should_reclaim_retry().
o Don't reserve high atomic page blocks for smaller zone sizes.
Changes in V2:
o Unreserving the high atomic page blocks is done from should_reclaim_retry()
o Reserve a minimum of one pageblock and a maximum of 1% of the zone's
  memory for high atomic reserves.
o Drain the pcp lists before falling back to OOM.
o https://lore.kernel.org/linux-mm/cover.1699104759.git.quic_charante@quicinc.com/
Changes in V1:
o Unreserving the high atomic page blocks is attempted from the oom kill
  path rather than from should_reclaim_retry().
o Discussed why much more than 1% of managed memory ends up reserved for
  high atomic reserves.
o https://lore.kernel.org/linux-mm/1698669590-3193-1-git-send-email-quic_charante@quicinc.com/
Charan Teja Kalla (2):
mm: page_alloc: correct high atomic reserve calculations
mm: page_alloc: enforce minimum zone size to do high atomic reserves
mm/page_alloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--
2.7.4