From: Wenchao Hao <haowenchao22@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Muhammad Usama Anjum <usama.anjum@collabora.com>,
	Andrii Nakryiko <andrii@kernel.org>, Peter Xu <peterx@redhat.com>,
	Barry Song <21cnbao@gmail.com>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp
Date: Mon, 16 Dec 2024 23:58:42 +0800
Message-ID: <4169e59e-9015-4323-aae7-09bc8e513bbd@gmail.com>
In-Reply-To: <500d1007-56f5-43cb-be9d-4a39fccc6e53@arm.com>

On 2024/12/5 1:05, Ryan Roberts wrote:
> On 04/12/2024 14:40, Wenchao Hao wrote:
>> On 2024/12/3 22:42, Ryan Roberts wrote:
>>> On 03/12/2024 14:17, David Hildenbrand wrote:
>>>> On 03.12.24 14:49, Wenchao Hao wrote:
>>>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages for
>>>>> each VMA, but it does not include large pages smaller than PMD size.
>>>>>
>>>>> This patch adds the statistics of anonymous huge pages allocated by
>>>>> mTHP which is smaller than PMD size to AnonHugePages field in smaps.
>>>>>
>>>>> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
>>>>> ---
>>>>>   fs/proc/task_mmu.c | 6 ++++++
>>>>>   1 file changed, 6 insertions(+)
>>>>>
>>>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>>>> index 38a5a3e9cba2..b655011627d8 100644
>>>>> --- a/fs/proc/task_mmu.c
>>>>> +++ b/fs/proc/task_mmu.c
>>>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss,
>>>>> struct page *page,
>>>>>           if (!folio_test_swapbacked(folio) && !dirty &&
>>>>>               !folio_test_dirty(folio))
>>>>>               mss->lazyfree += size;
>>>>> +
>>>>> +        /*
>>>>> +         * Count large pages smaller than PMD size to anonymous_thp
>>>>> +         */
>>>>> +        if (!compound && PageHead(page) && folio_order(folio))
>>>>> +            mss->anonymous_thp += folio_size(folio);
>>>>>       }
>>>>>         if (folio_test_ksm(folio))
>>>>
>>>>
>>>> I think we decided to leave this (and /proc/meminfo) be one of the last
>>>> interfaces where this is only concerned with PMD-sized ones:
>>>>
>>>> Documentation/admin-guide/mm/transhuge.rst:
>>>>
>>>> The number of PMD-sized anonymous transparent huge pages currently used by the
>>>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>>>> To identify what applications are using PMD-sized anonymous transparent huge
>>>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>>>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>>>> PMD-sized THP for historical reasons and should have been called
>>>> AnonHugePmdMapped).
>>>>
>>>
>>> Agreed. If you need per-process metrics for mTHP, we have a python script at
>>> tools/mm/thpmaps which does a fairly good job of parsing pagemap. --help gives
>>> you all the options.
>>>
>>
>> I tried this tool, and it is very powerful and practical IMO.
>> However, there are two disadvantages:
>>
>> - This tool is heavily dependent on Python and Python libraries.
>>   After installing several libraries with the pip command, I was able to
>>   get it running.
> 
> I think numpy is the only package it uses which is not in the standard library?
> What other libraries did you need to install?
> 

Yes, I just tested it on a standard Fedora install, and that is indeed the case.
Previously, I needed to install additional packages because I had removed some
unused software from the old environment.

Recently, I revisited and started using your tool again. It's very useful,
meeting and even exceeding my needs. I am now testing inside a Fedora guest
running under QEMU, so it is easy to run there.

>>   In practice, the environment we need to analyze may be a mobile or
>>   embedded environment, where it is very difficult to deploy these
>>   libraries.
> 
> Yes, I agree that's a problem, especially for Android. The script has proven
> useful to me for debugging in a traditional Linux distro environment though.
> 
>> - It seems that this tool only counts file-backed large pages? During
> 
> No; the tool counts file-backed and anon memory. But it reports it in separate
> counters. See `thpmaps --help` for full details.
> 
>>   the actual test, I mapped a region of anonymous pages and mapped it
>>   as large pages, but the tool did not display those large pages.
>>   Below is my test file(mTHP related sysfs interface is set to "always"
>>   to make sure using large pages):
> 
> Which mTHP sizes did you enable? Depending on your value of SIZE and which mTHP
> sizes are enabled, you may not have a correctly aligned region in p. So mTHP
> would not be allocated. Best to over-allocate then explicitly align p to the
> mTHP size, then fault it in.
> 

I enabled the 64K/128K/256K mTHP sizes and have been studying, debugging, and
changing parts of the khugepaged code to try merging base pages into mTHP large
pages. So I wanted to use smaps to observe the large page sizes in a process.

>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <unistd.h>
>> #include <sys/mman.h>
>>
>> int main()
>> {
>>         int i;
>>         char *c;
>>         unsigned long *p;
>>
>>         p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 
> What is SIZE here?
> 
>>         if (p == MAP_FAILED) {
>>                 perror("fail to get memory");
>>                 exit(-1);
>>         }
>>
>>         c = (unsigned char *)p;
>>
>>         for (i = 0; i < SIZE / 8; i += 8)
>>                 *(p + i) = 0xffff + i;
> 
> Err... what's your intent here? I think you're writing to 1 in every 8 longs?
> Probably just write to the first byte of every page.
> 

The data pattern is fixed for the purpose of analyzing zram compression, so I
filled in deterministic data here.

> Thanks,
> Ryan
> 
>>
>>         while (1)
>>                 sleep(10);
>>
>>         return 0;
>> }
>>
>> Thanks,
>> wenchao
>>
> 


Thread overview: 15+ messages
2024-12-03 13:49 [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp Wenchao Hao
2024-12-03 14:17 ` David Hildenbrand
2024-12-03 14:42   ` Ryan Roberts
2024-12-04 14:40     ` Wenchao Hao
2024-12-04 17:05       ` Ryan Roberts
2024-12-16 15:58         ` Wenchao Hao [this message]
2024-12-20  6:48           ` Dev Jain
2024-12-04 14:30   ` Wenchao Hao
2024-12-04 14:37     ` David Hildenbrand
2024-12-04 14:47       ` Wenchao Hao
2024-12-04 17:07     ` Ryan Roberts
2024-12-06 11:16   ` Lance Yang
2024-12-08  6:06     ` Barry Song
2024-12-09 10:07       ` Ryan Roberts
2024-12-16 16:03       ` Wenchao Hao
