linux-mm.kvack.org archive mirror
From: Yueyang Pan <pyyjason@gmail.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Suren Baghdasaryan <surenb@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
	Usama Arif <usamaarif642@gmail.com>,
	linux-mm@kvack.org, kernel-team@meta.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 0/2] mm/show_mem: Bug fix for print mem alloc info
Date: Thu, 28 Aug 2025 01:29:08 -0700	[thread overview]
Message-ID: <aLATVGnVx4Z+aHAh@devbig569.cln6.facebook.com> (raw)
In-Reply-To: <aK9htWRehfJDLFJD@fedora>

On Wed, Aug 27, 2025 at 12:51:17PM -0700, Vishal Moola (Oracle) wrote:
> On Wed, Aug 27, 2025 at 11:34:21AM -0700, Yueyang Pan wrote:
> > This patch set fixes two issues we saw in production rollout. 
> > 
> > The first issue is that we saw all zero output of memory allocation 
> > profiling information from show_mem() if CONFIG_MEM_ALLOC_PROFILING 
> > is set and sysctl.vm.mem_profiling=0. In this case, the behaviour 
> > should be the same as when CONFIG_MEM_ALLOC_PROFILING is unset, 
> 
> Did you mean to say when sysctl.vm.mem_profiling=never?
> 
> My understanding is that setting the sysctl=0 Pauses memory allocation
> profiling, while 1 Resumes it. When the sysctl=never should be the same
> as when the config is unset, but I suspect we might still want the info
> when set to 0.

Thanks for your feedback, Vishal. Here I mean both =0 and =never.
In both cases __show_mem() currently prints all zeros, which is both
redundant and makes the two cases hard to tell apart. IMO, when
__show_mem() prints something, the output should at least be useful.

> 
> > where show_mem prints nothing about the information. This will make 
> > further parsing easier, as we don't have to differentiate what an 
> > all-zero line actually means (does it mean 0 bytes are allocated, 
> > or simply that memory allocation profiling is disabled?).
> > 
> > The second issue is that multiple entities can call show_mem() 
> > concurrently, which messes up the allocation info in dmesg. We saw output like this:  
> > ```
> >     327 MiB    83635 mm/compaction.c:1880 func:compaction_alloc
> >    48.4 GiB 12684937 mm/memory.c:1061 func:folio_prealloc
> >    7.48 GiB    10899 mm/huge_memory.c:1159 func:vma_alloc_anon_folio_pmd
> >     298 MiB    95216 kernel/fork.c:318 func:alloc_thread_stack_node
> >     250 MiB    63901 mm/zsmalloc.c:987 func:alloc_zspage
> >     1.42 GiB   372527 mm/memory.c:1063 func:folio_prealloc
> >     1.17 GiB    95693 mm/slub.c:2424 func:alloc_slab_page
> >      651 MiB   166732 mm/readahead.c:270 func:page_cache_ra_unbounded
> >      419 MiB   107261 net/core/page_pool.c:572 func:__page_pool_alloc_pages_slow
> >      404 MiB   103425 arch/x86/mm/pgtable.c:25 func:pte_alloc_one
> > ```
> > The above example happened because one kthread invoked show_mem() 
> > from __alloc_pages_slowpath() while the kernel itself called 
> > oom_kill_process()
> 
> I'm not familiar with show_mem(). Could you spell out what's wrong with
> the output above?

So in the normal case, the output should be sorted by size. Here 
two prints happened at the same time, so their lines interleaved 
with each other, making further parsing harder (you would need to 
re-sort and dedup).

> 
> > Yueyang Pan (2):
> >   mm/show_mem: No print when not mem_alloc_profiling_enabled()
> >   mm/show_mem: Add trylock while printing alloc info
> > 
> >  mm/show_mem.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > -- 
> > 2.47.3
> > 

Thanks,
Pan



Thread overview: 19+ messages
2025-08-27 18:34 [PATCH v1 0/2] mm/show_mem: Bug fix for print mem alloc info Yueyang Pan
2025-08-27 18:34 ` [PATCH v1 1/2] mm/show_mem: No print when not mem_alloc_profiling_enabled() Yueyang Pan
2025-08-27 18:34 ` [PATCH v1 2/2] mm/show_mem: Add trylock while printing alloc info Yueyang Pan
2025-08-27 22:06   ` Andrew Morton
2025-08-27 22:28     ` Shakeel Butt
2025-08-28  8:36       ` Yueyang Pan
2025-08-28  8:34     ` Yueyang Pan
2025-08-28  8:41       ` Vlastimil Babka
2025-08-28  8:47         ` Yueyang Pan
2025-08-28  8:53           ` Vlastimil Babka
2025-08-28  9:51             ` Yueyang Pan
2025-08-28  9:54               ` Vlastimil Babka
2025-08-28 22:10                 ` Yueyang Pan
2025-08-28 16:35         ` Shakeel Butt
2025-08-28 17:21           ` Vlastimil Babka
2025-08-27 19:51 ` [PATCH v1 0/2] mm/show_mem: Bug fix for print mem " Vishal Moola (Oracle)
2025-08-28  8:29   ` Yueyang Pan [this message]
2025-08-28 17:05     ` Vishal Moola (Oracle)
2025-08-28 22:07       ` Yueyang Pan
