From: Zenghui Yu <zenghui.yu@linux.dev>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: jgg@ziepe.ca, leon@kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
david@kernel.org, ljs@kernel.org, liam@infradead.org,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com
Subject: "alloc_tag was not set" when running mm/ksft_hmm.sh
Date: Wed, 6 May 2026 23:42:40 +0800
Message-ID: <be9a8bae-9223-4966-bba9-2cbe39c8f4de@linux.dev>
Hi all,

Running mm/ksft_hmm.sh triggers the following splat:

------------[ cut here ]------------
alloc_tag was not set
WARNING: ./include/linux/alloc_tag.h:164 at ___free_pages+0x2a0/0x2d0,
CPU#5: hmm-tests/2020
Modules linked in: test_hmm rfkill drm backlight fuse
CPU: 5 UID: 0 PID: 2020 Comm: hmm-tests Kdump: loaded Not tainted
7.1.0-rc2-00099-gadc1e5c6203c-dirty #285 PREEMPT
Hardware name: QEMU QEMU Virtual Machine, BIOS
edk2-stable202408-prebuilt.qemu.org 08/13/2024
pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
pc : ___free_pages+0x2a0/0x2d0
lr : ___free_pages+0x2a0/0x2d0
sp : ffff80008345b530
x29: ffff80008345b530 x28: ffff80008345b700 x27: ffffffffbfff8040
x26: ffff0000c41cb360 x25: ffff0000c0c64008 x24: ffff800081aae400
x23: 05ffff0000000200 x22: 0000000000000000 x21: 0000000000000000
x20: fffffdffc5f20040 x19: 0000000000000000 x18: fffffffffffe7c78
x17: 0000000000000000 x16: 0000000000000000 x15: fffffffffffe7c98
x14: 00000000000001d1 x13: ffff8000818f3d58 x12: 0000000000000573
x11: fffffffffffe7c98 x10: ffff80008194bd58 x9 : 3ffffffffffff000
x8 : ffff8000818f3d58 x7 : ffff80008194bd58 x6 : 0000000000000000
x5 : ffff0001fedb1088 x4 : 0000000000000001 x3 : 0000000000000000
x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000c7f58000
Call trace:
___free_pages+0x2a0/0x2d0 (P)
__free_pages+0x14/0x20
dmirror_devmem_free+0x13c/0x158 [test_hmm]
free_zone_device_folio+0x144/0x1e4
__folio_put+0x124/0x130
free_folio_and_swap_cache+0xa8/0xcc
__folio_split+0x664/0x7fc
split_folio_to_list+0x50/0x5c
migrate_vma_split_folio+0x13c/0x25c
migrate_vma_collect_pmd+0xed4/0xf68
walk_pgd_range+0x598/0x9a0
__walk_page_range+0x90/0x1a0
walk_page_range_mm_unsafe+0x194/0x20c
walk_page_range+0x20/0x2c
migrate_vma_setup+0x18c/0x224
dmirror_devmem_fault+0x188/0x2b8 [test_hmm]
do_swap_page+0x1458/0x185c
__handle_mm_fault+0x85c/0x1ba0
handle_mm_fault+0xb0/0x290
do_page_fault+0x1f8/0x6f8
do_translation_fault+0x60/0x6c
do_mem_abort+0x44/0x94
el0_da+0x30/0xdc
el0t_64_sync_handler+0xd0/0xe4
el0t_64_sync+0x198/0x19c
---[ end trace 0000000000000000 ]---
Unloading test_hmm afterwards additionally reports:

lib/test_hmm.c:705 module test_hmm func:dmirror_devmem_alloc_page has
16744448 allocated at module unload
This was seen on a kernel built with arm64's virt.config plus:

+CONFIG_ZONE_DEVICE=y
+CONFIG_DEVICE_PRIVATE=y
+CONFIG_TEST_HMM=m
+CONFIG_MEM_ALLOC_PROFILING=y
+CONFIG_MEM_ALLOC_PROFILING_DEBUG=y

Thanks,
Zenghui