From: Max Kellermann <max@blarg.de>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Kernel 5.1.15 stuck in compaction
Date: Mon, 8 Jul 2019 12:35:44 +0200
Message-ID: <20190708103543.GA10364@swift.blarg.de>

Hi,
one of our web servers has repeatedly become stuck in the memory
compaction code; two PHP processes have been spinning at 100% CPU
inside memory compaction after a page fault:
  100.00%     0.00%  php-cgi7.0  [kernel.vmlinux]  [k] page_fault
          |
          ---page_fault
             __do_page_fault
             handle_mm_fault
             __handle_mm_fault
             do_huge_pmd_anonymous_page
             __alloc_pages_nodemask
             __alloc_pages_slowpath
             __alloc_pages_direct_compact
             try_to_compact_pages
             compact_zone_order
             compact_zone
             |
             |--61.30%--isolate_migratepages_block
             |          |
             |          |--20.44%--node_page_state
             |          |
             |          |--5.88%--compact_unlock_should_abort.isra.33
             |          |
             |           --3.28%--_cond_resched
             |                      |
             |                       --2.19%--rcu_all_qs
             |
              --3.37%--pageblock_skip_persistent
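
For what it's worth, the entry point in the profile is a transparent
hugepage fault (do_huge_pmd_anonymous_page), so the same path can be
exercised from user space simply by faulting in a large anonymous
mapping with THP enabled.  The sketch below only illustrates that code
path, it is not a reproducer of the hang; the 4 GiB size and the
MADV_HUGEPAGE hint are arbitrary choices of mine:

#include <stddef.h>
#include <sys/mman.h>

/*
 * Fault in a large anonymous mapping with THP enabled; each fault goes
 * through do_huge_pmd_anonymous_page() and, on a fragmented machine,
 * may enter direct compaction as in the profile above.  This only
 * exercises the code path, it does not necessarily reproduce the hang.
 */
int main(void)
{
	const size_t size = 4UL << 30;	/* 4 GiB */
	char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	madvise(p, size, MADV_HUGEPAGE);	/* ask for huge pages */

	for (size_t off = 0; off < size; off += 2UL << 20)
		p[off] = 1;	/* touch one byte per 2 MiB huge page */

	return 0;
}
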
ftrace:
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: _cond_resched <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: rcu_all_qs <-_cond_resched
<...>-962300 [033] .... 236536.493919: compact_unlock_should_abort.isra.33 <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: pageblock_skip_persistent <-compact_zone
<...>-962300 [033] .... 236536.493919: isolate_migratepages_block <-compact_zone
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493919: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493920: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493920: node_page_state <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493920: _cond_resched <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493920: rcu_all_qs <-_cond_resched
<...>-962300 [033] .... 236536.493920: compact_unlock_should_abort.isra.33 <-isolate_migratepages_block
<...>-962300 [033] .... 236536.493920: pageblock_skip_persistent <-compact_zone
<...>-962300 [033] .... 236536.493920: isolate_migratepages_block <-compact_zone
<...>-962300 [033] .... 236536.493920: node_page_state <-isolate_migratepages_block
Nothing useful in /proc/PID/{stack,wchan,syscall}.
slabinfo's kmalloc-16 and kmalloc-32 caches are going through the roof
(~15 GB each), and it was this memory-leak lookalike, which keeps
triggering the OOM killer, that drew our attention to this server in
the first place.
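
In case the raw numbers are useful: they come straight from
/proc/slabinfo (slabtop shows the same data).  Purely as a convenience,
and needing root on recent kernels, something like this prints just
those two caches:

#include <stdio.h>
#include <string.h>

/* Print the kmalloc-16 and kmalloc-32 lines from /proc/slabinfo. */
int main(void)
{
	char line[512];
	FILE *f = fopen("/proc/slabinfo", "r");
	if (f == NULL)
		return 1;

	while (fgets(line, sizeof(line), f) != NULL)
		if (strncmp(line, "kmalloc-16 ", 11) == 0 ||
		    strncmp(line, "kmalloc-32 ", 11) == 0)
			fputs(line, stdout);

	fclose(f);
	return 0;
}
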
Right now, the server is still stuck, and I can attempt to collect
more information on request.
Max