* Re: [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-06 8:19 [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
@ 2026-03-06 10:39 ` Lorenzo Stoakes (Oracle)
2026-03-09 16:14 ` Jonathan Corbet
2026-03-12 2:51 ` Wei Yang
2 siblings, 0 replies; 4+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-06 10:39 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Zi Yan,
Lance Yang, Vlastimil Babka, Andrew Morton, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Dev Jain, Barry Song,
Jonathan Corbet, Shuah Khan, Usama Arif, Andi Kleen
On Fri, Mar 06, 2026 at 09:19:16AM +0100, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> correspond to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> Ever since we added THP support many, many years ago, these entries
> have kept reporting the smallest (fallback) granularity in a VMA.
>
> For this reason, they default to PAGE_SIZE for all VMAs except for
> VMAs where we have the guarantee that the system and the MMU will
> always use larger page sizes. hugetlb, for example, exposes a custom
> vm_ops->pagesize callback to handle that. Similarly, dax/device
> exposes a custom vm_ops->pagesize callback and provides similar
> guarantees.
>
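(Aside for archive readers: a rough sketch of how such a driver pins
these fields, illustrative only, not the actual hugetlb or dax code.
The hook is vm_ops->pagesize; KernelPageSize picks it up via
vma_kernel_pagesize(), and MMUPageSize follows it on most
architectures:

static unsigned long example_vma_pagesize(struct vm_area_struct *vma)
{
	/* This VMA is guaranteed to be backed at PMD granularity. */
	return PMD_SIZE;
}

static const struct vm_operations_struct example_vm_ops = {
	.pagesize	= example_vma_pagesize,
	/* ... fault handler etc. ... */
};

Without such a callback, both fields fall back to PAGE_SIZE.)
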
> Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
> and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
> regarding PMD entries.
>
> While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
> and "ShmemPmdMapped" entries really mean, and make it clear that there
> are no other entries for other THP/folio sizes or mappings.
>
> Also drop the duplicate "KernelPageSize" and "MMUPageSize" entries in
> the example.
>
> Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@linux.intel.com/
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Shuah Khan <skhan@linuxfoundation.org>
> Cc: Usama Arif <usamaarif642@gmail.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
Reads great now, thanks very much! So:
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> ---
>
> v1 -> v2:
> * Some rewording and clarifications
> * Drop duplicate entries in the example
>
> ---
> Documentation/filesystems/proc.rst | 40 +++++++++++++++++++++---------
> 1 file changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
> index b0c0d1b45b99..e2d22a424dcd 100644
> --- a/Documentation/filesystems/proc.rst
> +++ b/Documentation/filesystems/proc.rst
> @@ -464,26 +464,37 @@ Memory Area, or VMA) there is a series of lines such as the following::
> KSM: 0 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> + FilePmdMapped: 0 kB
> ShmemPmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> Swap: 0 kB
> SwapPss: 0 kB
> - KernelPageSize: 4 kB
> - MMUPageSize: 4 kB
> Locked: 0 kB
> THPeligible: 0
> VmFlags: rd ex mr mw me dw
>
> The first of these lines shows the same information as is displayed for
> the mapping in /proc/PID/maps. Following lines show the size of the
> -mapping (size); the size of each page allocated when backing a VMA
> -(KernelPageSize), which is usually the same as the size in the page table
> -entries; the page size used by the MMU when backing a VMA (in most cases,
> -the same as KernelPageSize); the amount of the mapping that is currently
> -resident in RAM (RSS); the process's proportional share of this mapping
> -(PSS); and the number of clean and dirty shared and private pages in the
> -mapping.
> +mapping (size); the smallest possible page size allocated when backing a
> +VMA (KernelPageSize), which is the granularity at which VMA modifications
> +can be performed; the smallest possible page size that could be used by the
> +MMU (MMUPageSize) when backing a VMA; the amount of the mapping that is
> +currently resident in RAM (RSS); the process's proportional share of this
> +mapping (PSS); and the number of clean and dirty shared and private pages
> +in the mapping.
> +
> +"KernelPageSize" always corresponds to "MMUPageSize", except when a larger
> +kernel page size is emulated on a system with a smaller page size used by the
> +MMU, which is the case for some PPC64 setups with hugetlb. Furthermore,
> +"KernelPageSize" and "MMUPageSize" always correspond to the smallest
> +possible granularity (fallback) that can be encountered in a VMA throughout
> +its lifetime. These values are not affected by Transparent Huge Pages
> +being in effect, or any usage of larger MMU page sizes (either through
> +architectural huge-page mappings or other explicit/implicit coalescing of
> +virtual ranges performed by the MMU). "AnonHugePages", "ShmemPmdMapped" and
> +"FilePmdMapped" provide insight into the usage of PMD-level architectural
> +huge-page mappings.
>
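(To make the distinction concrete, with illustrative values: a 2 MiB
hugetlb mapping reports

	KernelPageSize:     2048 kB
	MMUPageSize:        2048 kB

while an anonymous VMA fully backed by a PMD-mapped THP on a 4k
kernel still reports

	AnonHugePages:      2048 kB
	KernelPageSize:        4 kB
	MMUPageSize:           4 kB

because the PMD mapping is not guaranteed to persist for the VMA's
lifetime.)
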
> The "proportional set size" (PSS) of a process is the count of pages it has
> in memory, where each page is divided by the number of processes sharing it.
> @@ -528,10 +539,15 @@ pressure if the memory is clean. Please note that the printed value might
> be lower than the real value due to optimizations used in the current
> implementation. If this is not desirable please file a bug report.
>
> -"AnonHugePages" shows the amount of memory backed by transparent hugepage.
> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
> +memory backed by Transparent Huge Pages that are currently mapped by
> +architectural huge-page mappings at the PMD level. "AnonHugePages"
> +corresponds to memory that does not belong to a file, "ShmemPmdMapped" to
> +shared memory (shmem/tmpfs) and "FilePmdMapped" to file-backed memory
> +(excluding shmem/tmpfs).
>
> -"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
> -huge pages.
> +There are no dedicated entries for Transparent Huge Pages (or similar concepts)
> +that are not mapped by architectural huge-page mappings at the PMD level.
>
> "Shared_Hugetlb" and "Private_Hugetlb" show the amounts of memory backed by
> hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
> --
> 2.43.0
>
* Re: [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-06 8:19 [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
2026-03-06 10:39 ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 16:14 ` Jonathan Corbet
2026-03-12 2:51 ` Wei Yang
2 siblings, 0 replies; 4+ messages in thread
From: Jonathan Corbet @ 2026-03-09 16:14 UTC (permalink / raw)
To: David Hildenbrand (Arm), linux-kernel
Cc: linux-fsdevel, linux-doc, linux-mm, David Hildenbrand (Arm),
Zi Yan, Lance Yang, Vlastimil Babka, Andrew Morton,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Dev Jain, Barry Song, Shuah Khan, Usama Arif, Andi Kleen
"David Hildenbrand (Arm)" <david@kernel.org> writes:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> correspond to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> [...]
>
Applied, thanks.
jon
* Re: [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-06 8:19 [PATCH v2] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
2026-03-06 10:39 ` Lorenzo Stoakes (Oracle)
2026-03-09 16:14 ` Jonathan Corbet
@ 2026-03-12 2:51 ` Wei Yang
2 siblings, 0 replies; 4+ messages in thread
From: Wei Yang @ 2026-03-12 2:51 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Zi Yan,
Lance Yang, Vlastimil Babka, Andrew Morton, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Dev Jain, Barry Song,
Jonathan Corbet, Shuah Khan, Usama Arif, Andi Kleen
On Fri, Mar 06, 2026 at 09:19:16AM +0100, David Hildenbrand (Arm) wrote:
>There was recently some confusion around THPs and the interaction with
>KernelPageSize / MMUPageSize. Historically, these entries always
>correspond to the smallest size we could encounter, not any current
>usage of transparent huge pages or larger sizes used by the MMU.
>
>[...]
>
>-"AnonHugePages" shows the amount of memory backed by transparent hugepage.
This confused me at times, and I eventually found it only shows
PMD-mapped THP. The name is misleading, but it seems hard to rename,
since user-space applications, like the selftests, rely on it.
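
A rough userspace sketch that shows this, assuming THP is enabled and
a PMD size of 2 MiB (error handling omitted):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t pmd = 2UL << 20;
	/* Over-allocate so we can pick a PMD-aligned start. */
	char *raw = mmap(NULL, 2 * pmd, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *p = (char *)(((uintptr_t)raw + pmd - 1) & ~(pmd - 1));

	madvise(p, pmd, MADV_HUGEPAGE);	/* a hint, not a guarantee */
	memset(p, 1, pmd);		/* fault the range in */

	/*
	 * While this waits, AnonHugePages for the VMA in
	 * /proc/<pid>/smaps reads 2048 kB only if a PMD mapping was
	 * actually installed; with 4k (or mTHP) mappings it stays
	 * 0 kB even if the memory is THP-backed.
	 */
	getchar();
	return 0;
}
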
>+"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
>+memory backed by Transparent Huge Pages that are currently mapped by
>+architectural huge-page mappings at the PMD level. "AnonHugePages"
>+corresponds to memory that does not belong to a file, "ShmemPmdMapped" to
>+shared memory (shmem/tmpfs) and "FilePmdMapped" to file-backed memory
>+(excluding shmem/tmpfs).
>
>[...]
--
Wei Yang
Help you, Help me