From: George Guo <dongtai.guo@linux.dev>
To: pratyush@kernel.org
Cc: akpm@linux-foundation.org, dongtai.guo@linux.dev,
graf@amazon.com, guodongtai@kylinos.cn, jasonmiu@google.com,
kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, liukexin@kylinos.cn,
pasha.tatashin@soleen.com, ran.xiaokai@zte.com.cn,
rppt@kernel.org
Subject: Re: [PATCH 1/1] kho: fix KHO_TREE_MAX_DEPTH for non-4KB page sizes
Date: Wed, 13 May 2026 23:07:18 +0800 [thread overview]
Message-ID: <20260513150718.182224-1-dongtai.guo@linux.dev> (raw)
In-Reply-To: <2vxzcxz2dybi.fsf@kernel.org>
Sorry for the late reply.
On Mon, May 11 2026, Pratyush Yadav wrote:
> As of now, we only support KHO on x86 and arm64. Are you working on
> supporting it for LoongArch? What is your use case?
Yes, we are adding KHO support for LoongArch (16KB page size). The
LoongArch patches are being prepared separately. This fix is a
prerequisite.
> Maybe I don't understand the math so well, but I can't see the problem.
> ...
> level = 4, s = 50, idx = 1
> What am I missing?
The issue is that all three operations start the traversal at
KHO_TREE_MAX_DEPTH - 1, not KHO_TREE_MAX_DEPTH:
kho_radix_add_page() line 180: for (i = KHO_TREE_MAX_DEPTH - 1; i > 0; i--)
kho_radix_del_page() line 255: for (i = KHO_TREE_MAX_DEPTH - 1; i > 0; i--)
kho_radix_walk_tree() line 356: __kho_radix_walk_tree(..., KHO_TREE_MAX_DEPTH - 1, ...)
So with depth=4 the effective top level is 3, not 4, and the key bits
above the level-3 range (including the order bit at position 50) are
never decoded. The walk then follows the wrong slot, and phys_to_page()
on the resulting bogus address faults in kho_preserved_memory_reserve().
This is confirmed by a kernel panic on a 7.1-rc3 LoongArch kernel
(16KB pages) without the fix; the full log is below. With depth=5 the
top level is 4 (shift = 50) and the order bit is decoded correctly:
(key >> 50) % 2048 = 1 /* order bit correctly captured */
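To make the arithmetic concrete, here is a small standalone sketch of
the per-level index math for 16KB pages. The constants are assumptions
inferred from the numbers quoted above (each table holding
PAGE_SIZE / 8 = 2048 slots, i.e. 11 bits per level, and a level-0
bitmap covering PAGE_SIZE * 8 = 2^17 bits, which reproduces shift = 50
at level 4), not a copy of the kernel source:

```python
PAGE_SHIFT = 14              # 16KB pages on LoongArch
SLOT_BITS = PAGE_SHIFT - 3   # log2(PAGE_SIZE / sizeof(long)) = 11
BITMAP_BITS = PAGE_SHIFT + 3 # log2(PAGE_SIZE * 8) = 17

def level_shift(level):
    """Bit position where a given tree level starts indexing the key."""
    return BITMAP_BITS + SLOT_BITS * (level - 1)

def index_at(key, level):
    """Slot index used at `level` when walking the tree for `key`."""
    return (key >> level_shift(level)) % (1 << SLOT_BITS)

key = 1 << 50  # example key with the order bit at position 50

# With KHO_TREE_MAX_DEPTH = 4 the walk starts at level 3, which only
# covers key bits [39, 50) -- the bit at 50 is silently dropped:
print(level_shift(3))        # 39
print(index_at(key, 3))      # 0: the order bit is lost

# With KHO_TREE_MAX_DEPTH = 5 the walk starts at level 4, which covers
# bits [50, 61), so the order bit lands in the index as expected:
print(level_shift(4))        # 50
print(index_at(key, 4))      # 1
```

With the smaller depth the high bits never reach an index computation,
which is why the traversal ends up at a bogus entry rather than failing
loudly.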
Signed-off-by: George Guo <guodongtai@kylinos.cn>
Panic log on 7.1-rc3 LoongArch (16KB pages) without the fix:
[ 0.000000] CPU 0 Unable to handle kernel paging request at virtual address 00003d3ffe000028, era == 90000000c162f10c, ra == 90000000c162f0f8
[ 0.000000] Oops[#1]:
...
[ 0.000000] Call Trace:
[ 0.000000] [<90000000c162f10c>] kho_preserved_memory_reserve+0xc4/0xe8
[ 0.000000] [<90000000c0129f88>] __kho_radix_walk_tree+0xf0/0x138
[ 0.000000] [<90000000c0129f10>] __kho_radix_walk_tree+0x78/0x138
[ 0.000000] [<90000000c012b730>] kho_radix_walk_tree+0x88/0xe8
[ 0.000000] [<90000000c162f874>] kho_memory_init+0x220/0x4e4
[ 0.000000] [<90000000c1639b38>] mm_core_init+0x168/0x1a0
[ 0.000000] [<90000000c1620d50>] start_kernel+0x5c4/0x778
[ 0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
Thread overview: 5+ messages
2026-05-09 2:44 [PATCH 1/1] kho: fix KHO_TREE_MAX_DEPTH for non-4KB page sizes George Guo
2026-05-10 15:26 ` Mike Rapoport
2026-05-11 10:40 ` Pratyush Yadav
2026-05-13 7:50 ` Mike Rapoport
2026-05-13 15:07 ` George Guo [this message]