From: Waiman Long <longman@redhat.com>
To: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Rafael Aquini <aquini@redhat.com>,
Waiman Long <longman@redhat.com>
Subject: [PATCH 0/2] mm/page_owner: Extend page_owner to show memcg
Date: Fri, 28 Jan 2022 14:56:40 -0500
Message-ID: <20220128195642.416743-1-longman@redhat.com>

While debugging a constant increase in percpu memory consumption on a
system that spawned a large number of containers, it was found that many
offlined mem_cgroup structures remained in place without being freed.
Further investigation indicated that those mem_cgroup structures were
pinned by pages still charged to them.
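
For context, a charged page records its memcg in page->memcg_data and
holds a reference on it, so the mem_cgroup (and the percpu data hanging
off it) cannot be freed until the page is uncharged. A minimal sketch of
that condition using existing helpers (page_pins_offline_memcg() is a
hypothetical name, not code from this series):

  /* Sketch only -- not part of this series. */
  static bool page_pins_offline_memcg(struct page *page)
  {
          struct mem_cgroup *memcg = page_memcg_check(page);

          /* An offline memcg stays pinned while charged pages remain. */
          return memcg && !mem_cgroup_online(memcg);
  }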
In order to find out what those pages are, the existing page_owner
debugging tool is extended to show the memory cgroup each page is
charged to and whether that memcg is offline. With the enhanced
page_owner tool, the following is a typical page that pinned a
mem_cgroup structure in my test case:
Page allocated via order 0, mask 0x1100cca(GFP_HIGHUSER_MOVABLE), pid 62760, ts 119274296592 ns, free_ts 118989764823 ns
PFN 1273412 type Movable Block 2487 type Movable Flags 0x17ffffc00c001c(uptodate|dirty|lru|reclaim|swapbacked|node=0|zone=2|lastcpupid=0x1fffff)
prep_new_page+0x8e/0xb0
get_page_from_freelist+0xc4d/0xe50
__alloc_pages+0x172/0x320
alloc_pages_vma+0x84/0x230
shmem_alloc_page+0x3f/0x90
shmem_alloc_and_acct_page+0x76/0x1c0
shmem_getpage_gfp+0x48d/0x890
shmem_write_begin+0x36/0xc0
generic_perform_write+0xed/0x1d0
__generic_file_write_iter+0xdc/0x1b0
generic_file_write_iter+0x5d/0xb0
new_sync_write+0x11f/0x1b0
vfs_write+0x1ba/0x2a0
ksys_write+0x59/0xd0
do_syscall_64+0x37/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
Charged to offlined memcg libpod-conmon-e59cc83faf807bacc61223fec6a80c1540ebe8f83c802870c6af4708d58f77ea
So the page was not freed because it was part of a shmem segment. That
is useful information that can help users diagnose similar problems.
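
For reference, the "Charged to ..." line in the dump above can be
produced along the following lines. This is only a rough sketch of the
approach taken in patch 2/2, not the exact code, and print_memcg_line()
is an illustrative name:

  /* Rough sketch; buffer management and error handling elided. */
  static ssize_t print_memcg_line(char *kbuf, size_t size, struct page *page)
  {
          struct mem_cgroup *memcg;
          char name[80];
          ssize_t ret = 0;

          rcu_read_lock();
          memcg = page_memcg_check(page);
          if (memcg) {
                  /* Print the leaf cgroup name and its online state. */
                  cgroup_name(memcg->css.cgroup, name, sizeof(name));
                  ret = scnprintf(kbuf, size, "Charged to %smemcg %s\n",
                                  mem_cgroup_online(memcg) ? "" : "offlined ",
                                  name);
          }
          rcu_read_unlock();
          return ret;
  }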
Waiman Long (2):
  mm/page_owner: Introduce SNPRINTF() macro that includes length error
    check
  mm/page_owner: Dump memcg information

 mm/page_owner.c | 76 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 50 insertions(+), 26 deletions(-)
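
For completeness, patch 1/2 exists because print_page_owner() builds its
output with a series of snprintf() calls, each of which needs its own
truncation check. The SNPRINTF() macro folds that check into the call; a
plausible shape for it (the actual macro in the patch may differ) is:

  /*
   * Like snprintf(), but return -1 when the output was truncated so
   * that callers can detect a full buffer with a single test.
   */
  #define SNPRINTF(buf, size, fmt, ...)                                 \
  ({                                                                    \
          int _ret = snprintf(buf, size, fmt, ##__VA_ARGS__);           \
                                                                        \
          _ret >= (int)(size) ? -1 : _ret;                              \
  })
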
--
2.27.0