linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: jpoimboe@kernel.org, kent.overstreet@linux.dev,
	peterz@infradead.org, nphamcs@gmail.com,
	cerasuolodomenico@gmail.com, surenb@google.com,
	lizhijian@fujitsu.com, willy@infradead.org,
	shakeel.butt@linux.dev, vbabka@suse.cz, ziy@nvidia.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2] vmstat: Keep count of the maximum page reached by the kernel stack
Date: Mon, 18 Mar 2024 13:40:55 -0700	[thread overview]
Message-ID: <20240318134055.c2f6b29bb6eb73ec93bf7079@linux-foundation.org> (raw)
In-Reply-To: <20240314145457.1106299-1-pasha.tatashin@soleen.com>

On Thu, 14 Mar 2024 14:54:57 +0000 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:

> CONFIG_DEBUG_STACK_USAGE provides a mechanism to determine the minimum
> amount of memory left in a stack. Every time a new low-memory record is
> reached, a message is printed to the console.
> 
> However, this doesn't reveal how many pages within each stack were
> actually used. Introduce a mechanism that keeps count of the number of
> times each of the stack's pages was reached:
> 
> 	$ grep kstack /proc/vmstat
> 	kstack_page_1 19974
> 	kstack_page_2 94
> 	kstack_page_3 0
> 	kstack_page_4 0
> 
> In the above example, out of 20,068 threads that exited on this
> machine, only 94 reached the second page of their stack, and none
> touched pages three or four.
> 
> In fleet environments with millions of machines, this data can help
> optimize kernel stack sizes.
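The accounting hunk itself is not quoted in this reply; as a rough sketch of how the new events might be bumped at task exit, assuming the existing stack_not_used() helper under CONFIG_DEBUG_STACK_USAGE (the helper name kstack_count_pages() is hypothetical, not from the patch):

#include <linux/sched/task_stack.h>	/* stack_not_used() */
#include <linux/vmstat.h>		/* count_vm_event() */

#ifdef CONFIG_DEBUG_STACK_USAGE
/* Hypothetical helper, called from the task-exit path. */
static void kstack_count_pages(struct task_struct *tsk)
{
	/* Bytes of the stack the task ever dirtied, per stack_not_used(). */
	unsigned long used = THREAD_SIZE - stack_not_used(tsk);
	/* 1-based index of the deepest stack page that was touched. */
	int page = DIV_ROUND_UP(used, PAGE_SIZE);

	switch (page) {
	case 1:
		count_vm_event(KSTACK_PAGE_1);
		break;
	case 2:
		count_vm_event(KSTACK_PAGE_2);
		break;
#if THREAD_SIZE >= (4 * PAGE_SIZE)
	case 3:
		count_vm_event(KSTACK_PAGE_3);
		break;
	case 4:
		count_vm_event(KSTACK_PAGE_4);
		break;
#endif
	default:
#if THREAD_SIZE > (4 * PAGE_SIZE)
		count_vm_event(KSTACK_PAGE_REST);
#endif
		break;
	}
}
#endif /* CONFIG_DEBUG_STACK_USAGE */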

We really should have somewhere to document vmstat things.

> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -153,10 +153,39 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  		VMA_LOCK_ABORT,
>  		VMA_LOCK_RETRY,
>  		VMA_LOCK_MISS,
> +#endif
> +#ifdef CONFIG_DEBUG_STACK_USAGE
> +		KSTACK_PAGE_1,
> +		KSTACK_PAGE_2,
> +#if THREAD_SIZE >= (4 * PAGE_SIZE)
> +		KSTACK_PAGE_3,
> +		KSTACK_PAGE_4,
> +#endif
> +#if THREAD_SIZE > (4 * PAGE_SIZE)
> +		KSTACK_PAGE_REST,
> +#endif
>  #endif
>  		NR_VM_EVENT_ITEMS
>  };

This seems a rather cumbersome way to produce a kind of histogram.  I
wonder if there should be a separate pseudo file for this.
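Purely as an illustration of that idea (nothing like this exists in the patch; the file name, location and counter array are hypothetical), such a histogram could be exposed as a small debugfs seq_file rather than more vmstat counters:

#include <linux/atomic.h>
#include <linux/debugfs.h>
#include <linux/init.h>
#include <linux/seq_file.h>

/* Hypothetical per-page counters, filled in by the exit-path accounting. */
static atomic_long_t kstack_page_hist[THREAD_SIZE / PAGE_SIZE];

static int kstack_histogram_show(struct seq_file *m, void *v)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(kstack_page_hist); i++)
		seq_printf(m, "page_%d %ld\n", i + 1,
			   atomic_long_read(&kstack_page_hist[i]));
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(kstack_histogram);

static int __init kstack_histogram_init(void)
{
	debugfs_create_file("kstack_histogram", 0444, NULL, NULL,
			    &kstack_histogram_fops);
	return 0;
}
late_initcall(kstack_histogram_init);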

And there may be a call for extending this.  For example I can foresee
people wanting to know "hey, which process did that", in which case
we'll want to record additional info.  




Thread overview: 3+ messages
2024-03-14 14:54 [PATCH v2] vmstat: Keep count of the maximum page reached by the kernel stack Pasha Tatashin
2024-03-18 20:40 ` Andrew Morton [this message]
2024-03-19 14:23   ` Pasha Tatashin
