From: Will Deacon <will@kernel.org>
To: Yang Shi <yang@os.amperecomputing.com>
Cc: catalin.marinas@arm.com, ryan.roberts@arm.com, cl@gentwo.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [v5 PATCH] arm64: mm: show direct mapping use in /proc/meminfo
Date: Mon, 26 Jan 2026 14:18:21 +0000 [thread overview]
Message-ID: <aXd3ralBsv9hPtfe@willie-the-truck> (raw)
In-Reply-To: <5f1bfe55-454c-40d5-ac45-1aed651b3747@os.amperecomputing.com>
On Tue, Jan 13, 2026 at 04:36:06PM -0800, Yang Shi wrote:
> On 1/13/26 6:36 AM, Will Deacon wrote:
> > On Tue, Jan 06, 2026 at 04:29:44PM -0800, Yang Shi wrote:
> > > +#if defined(CONFIG_ARM64_4K_PAGES)
> > > + size[PTE] = "4k";
> > > + size[CONT_PTE] = "64k";
> > > + size[PMD] = "2M";
> > > + size[CONT_PMD] = "32M";
> > > + size[PUD] = "1G";
> > > +#elif defined(CONFIG_ARM64_16K_PAGES)
> > > + size[PTE] = "16k";
> > > + size[CONT_PTE] = "2M";
> > > + size[PMD] = "32M";
> > > + size[CONT_PMD] = "1G";
> > > +#elif defined(CONFIG_ARM64_64K_PAGES)
> > > + size[PTE] = "64k";
> > > + size[CONT_PTE] = "2M";
> > > + size[PMD] = "512M";
> > > + size[CONT_PMD] = "16G";
> > > +#endif
> > > +
> > > + seq_printf(m, "DirectMap%s: %8lu kB\n",
> > > + size[PTE], dm_meminfo[PTE] >> 10);
> > > + seq_printf(m, "DirectMap%s: %8lu kB\n",
> > > + size[CONT_PTE],
> > > + dm_meminfo[CONT_PTE] >> 10);
> > > + seq_printf(m, "DirectMap%s: %8lu kB\n",
> > > + size[PMD], dm_meminfo[PMD] >> 10);
> > > + seq_printf(m, "DirectMap%s: %8lu kB\n",
> > > + size[CONT_PMD],
> > > + dm_meminfo[CONT_PMD] >> 10);
> > > + if (pud_sect_supported())
> > > + seq_printf(m, "DirectMap%s: %8lu kB\n",
> > > + size[PUD], dm_meminfo[PUD] >> 10);
> > This seems a bit brittle to me. If somebody adds support for level-1
> > block mappings for !4k pages in future, they will forget to update this
> > and we'll end up leaking kernel stack data via /proc/meminfo afaict.
>
> I can initialize size[PUD] to "NON_SUPPORT" by default. If that case
> happens, /proc/meminfo will just show "DirectMapNON_SUPPORT", so we will
> notice something is missing, but no kernel stack data will be leaked.
Or just add the PUD sizes for all the page sizes...
> > > @@ -266,6 +351,17 @@ static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
> > > (flags & NO_BLOCK_MAPPINGS) == 0) {
> > > pmd_set_huge(pmdp, phys, prot);
> > > + /*
> > > + * It is possible to have mappings allow cont mapping
> > > + * but disallow block mapping. For example,
> > > + * map_entry_trampoline().
> > > + * So we have to increase CONT_PMD and PMD size here
> > > + * to avoid double counting.
> > > + */
> > > + if (pgprot_val(prot) & PTE_CONT)
> > > + dm_meminfo_add(addr, (next - addr), CONT_PMD);
> > > + else
> > > + dm_meminfo_add(addr, (next - addr), PMD);
> > I don't understand the comment you're adding here. If somebody passes
> > NO_BLOCK_MAPPINGS then that also prevents contiguous entries except at
> > level 3.
>
> The comment may be misleading. I meant if we have the accounting code for
> CONT_PMD in alloc_init_cont_pmd(), for example,
I think I'd just drop the comment. The code is clear enough once you
actually read what's going on.
> @@ -433,6 +433,11 @@ static int alloc_init_cont_pmd(pud_t *pudp, unsigned
> long addr,
> if (ret)
> goto out;
>
> + if (pgprot_val(prot) & PTE_CONT)
> + dm_meminfo_add(addr, (next - addr), CONT_PMD);
>
> pmdp += pmd_index(next) - pmd_index(addr);
> phys += next - addr;
> } while (addr = next, addr != end);
>
> If the described case happens, we would actually miscount CONT_PMD. So I
> need to check whether the mapping is contiguous in init_pmd() instead. If
> the comment is confusing, I can just remove it.
>
> > It also doesn't look you handle the error case properly when the mapping
> > fails.
>
> I don't quite get which failure you mean. pmd_set_huge() doesn't fail. Or
> did you mean a hotplug failure? If so, hot unplug, which is called in the
> error handling path, will decrease the counters.
Sorry, I got confused here and thought that we could end up with a
partially-formed contiguous region but that's not the case. So you can
ignore this comment :)
Will