From: Yueyang Pan <pyyjason@gmail.com>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
	Usama Arif <usamaarif642@gmail.com>,
	linux-mm@kvack.org, kernel-team@meta.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 2/2] mm/show_mem: Add trylock while printing alloc info
Date: Thu, 28 Aug 2025 01:36:17 -0700
Message-ID: <aLAVAZMKwYueL+5I@devbig569.cln6.facebook.com>
In-Reply-To: <aouiudhvbuwegvkdqwtkp7bk6gvrsxkqe7us5uj6ylg3pivjin@nl4ri6fbh2zd>

On Wed, Aug 27, 2025 at 03:28:41PM -0700, Shakeel Butt wrote:
> On Wed, Aug 27, 2025 at 03:06:19PM -0700, Andrew Morton wrote:
> > On Wed, 27 Aug 2025 11:34:23 -0700 Yueyang Pan <pyyjason@gmail.com> wrote:
> > 
> > > In production, show_mem() can be called concurrently from two
> > > different entities, for example one from oom_kill_process() and
> > > another from __alloc_pages_slowpath() in a different kthread. This
> > > patch adds a mutex and takes it with trylock before printing the
> > > kernel alloc info in show_mem(). This way two alloc info dumps
> > > won't interleave with each other, which makes parsing easier.
> > > 
> > 
> > Fair enough, I guess.
> > 
> > > --- a/mm/show_mem.c
> > > +++ b/mm/show_mem.c
> > > @@ -23,6 +23,8 @@ EXPORT_SYMBOL(_totalram_pages);
> > >  unsigned long totalreserve_pages __read_mostly;
> > >  unsigned long totalcma_pages __read_mostly;
> > >  
> > > +static DEFINE_MUTEX(mem_alloc_profiling_mutex);
> > 
> > It would be a bit neater to make this local to __show_mem() - it
> > doesn't need file scope.
> 
> +1, something static to __show_mem().

Thanks for your feedback, Shakeel. See my reply to Andrew for this.
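
For reference, this is roughly what I understand the suggestion to be
(a minimal, untested sketch keeping the trylock pattern of the patch):

	void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
	{
		...
	#ifdef CONFIG_MEM_ALLOC_PROFILING
		/* Function-local: nothing outside __show_mem() takes this lock. */
		static DEFINE_MUTEX(mem_alloc_profiling_mutex);

		if (mem_alloc_profiling_enabled() &&
		    mutex_trylock(&mem_alloc_profiling_mutex)) {
			/* ... print the top allocation tags ... */
			mutex_unlock(&mem_alloc_profiling_mutex);
		}
	#endif
	}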

> 
> > 
> > Also, mutex_unlock() isn't to be used from interrupt context, so
> > that's a problem.
> > 
> > Something like atomic cmpxchg or test_and_set_bit could be used and
> > wouldn't involve mutex_unlock()'s wakeup logic, which isn't needed
> > here.
> 
> +1

Again, see my reply to Andrew.
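
For reference, the wait-free variant suggested above could look
something like this (sketch only, the flag name is made up; it should
be safe from any context since no sleeping or wakeup is involved):

	/* Bit 0 of this word acts as the "someone is printing" flag. */
	static unsigned long show_mem_alloc_info_busy;

	if (mem_alloc_profiling_enabled() &&
	    !test_and_set_bit(0, &show_mem_alloc_info_busy)) {
		/* ... print the top allocation tags ... */
		clear_bit(0, &show_mem_alloc_info_busy);
	}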

> 
> > 
> > >  static inline void show_node(struct zone *zone)
> > >  {
> > >  	if (IS_ENABLED(CONFIG_NUMA))
> > > @@ -419,7 +421,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
> > >  	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
> > >  #endif
> > >  #ifdef CONFIG_MEM_ALLOC_PROFILING
> > > -	if (mem_alloc_profiling_enabled()) {
> > > +	if (mem_alloc_profiling_enabled() && mutex_trylock(&mem_alloc_profiling_mutex)) {
> > >  		struct codetag_bytes tags[10];
> > >  		size_t i, nr;
> > >  
> > > @@ -445,6 +447,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
> > >  						  ct->lineno, ct->function);
> > >  			}
> > >  		}
> > > +		mutex_unlock(&mem_alloc_profiling_mutex);
> > >  	}
> > 
> > If we're going to suppress the usual output then how about we let
> > people know this happened, rather than silently dropping it?
> > 
> > pr_notice("memory allocation output suppressed due to show_mem() contention\n")
> > 
> > or something like that?
> 
> Personally I think this is not needed, as this patch suppresses only
> the memory allocation profiling output, which is global, will be the
> same for all consumers, and does not depend on the calling context.
> All consumers will get the memory allocation profiling data
> eventually.

On this point, I sort of agree with you. Let's wait for others' opinions.
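
For completeness, if we do end up wanting such a notice, the trylock
site would just grow an else branch, along these lines (sketch):

	if (mutex_trylock(&mem_alloc_profiling_mutex)) {
		/* ... print the allocation info ... */
		mutex_unlock(&mem_alloc_profiling_mutex);
	} else {
		pr_notice("allocation info suppressed due to show_mem() contention\n");
	}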

Thanks
Pan

Thread overview: 19+ messages
2025-08-27 18:34 [PATCH v1 0/2] mm/show_mem: Bug fix for print mem alloc info Yueyang Pan
2025-08-27 18:34 ` [PATCH v1 1/2] mm/show_mem: No print when not mem_alloc_profiling_enabled() Yueyang Pan
2025-08-27 18:34 ` [PATCH v1 2/2] mm/show_mem: Add trylock while printing alloc info Yueyang Pan
2025-08-27 22:06   ` Andrew Morton
2025-08-27 22:28     ` Shakeel Butt
2025-08-28  8:36       ` Yueyang Pan [this message]
2025-08-28  8:34     ` Yueyang Pan
2025-08-28  8:41       ` Vlastimil Babka
2025-08-28  8:47         ` Yueyang Pan
2025-08-28  8:53           ` Vlastimil Babka
2025-08-28  9:51             ` Yueyang Pan
2025-08-28  9:54               ` Vlastimil Babka
2025-08-28 22:10                 ` Yueyang Pan
2025-08-28 16:35         ` Shakeel Butt
2025-08-28 17:21           ` Vlastimil Babka
2025-08-27 19:51 ` [PATCH v1 0/2] mm/show_mem: Bug fix for print mem " Vishal Moola (Oracle)
2025-08-28  8:29   ` Yueyang Pan
2025-08-28 17:05     ` Vishal Moola (Oracle)
2025-08-28 22:07       ` Yueyang Pan
