From: Marc Zyngier <maz@kernel.org>
To: Jinqian Yang <yangjinqian1@huawei.com>
Cc: <linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Zenghui Yu <yuzenghui@huawei.com>,
	jiangkunkun <jiangkunkun@huawei.com>,
	Zhou Wang <wangzhou1@hisilicon.com>,
	liuyonglong <liuyonglong@huawei.com>
Subject: Re: [Question] QEMU VM fails to restart repeatedly with VFIO passthrough on GICv4.1
Date: Mon, 13 Oct 2025 08:15:25 +0100
Message-ID: <868qhfxofm.wl-maz@kernel.org>
In-Reply-To: <5269ecde-be8e-4920-a76f-882da1475d5d@huawei.com>

On Mon, 13 Oct 2025 03:56:20 +0100,
Jinqian Yang <yangjinqian1@huawei.com> wrote:
> 
> Hi, all
> 
> On a GICv4.1 environment running kernel 6.16, when launching VMs with
> QEMU and passing through VF devices, after repeatedly booting and
> killing the VMs hundreds of times, the host reports call traces and the
> VMs become unresponsive. The call traces show VFIO call stacks.
> 
> [14201.974880] BUG: Bad page map in process qemu-system-aar
> pte:fefefefefefefefe pmd:8000820b1ba0403
> [14201.974895] addr:0000fffdd7400000 vm_flags:80240644bb
> anon_vma:0000000000000000 mapping:ffff08208e9b7758 index:401eed6a
> [14201.974905] file:[vfio-device] fault:vfio_pci_mmap_page_fault
> [vfio_pci_core] mmap:vfio_device_fops_mmap [vfio] mmap_prepare: 0x0
> read_folio:0x0
> [14201.974923] CPU: 2 UID: 0 PID: 50408 Comm: qemu-system-aar Kdump:
> loaded Tainted: G           O        6.16.0-rc4+ #1 PREEMPT
> [14201.974926] Tainted: [O]=OOT_MODULE
> [14201.974927] Hardware name: To be filled by O.E.M. To be filled by
> O.E.M./To be filled by O.E.M., BIOS HixxxxEVB V3.4.7 09/04/2025
> [14201.974928] Call trace:
> [14201.974929]  show_stack+0x20/0x38 (C)
> [14201.974934]  dump_stack_lvl+0x80/0xf8
> [14201.974938]  dump_stack+0x18/0x28
> [14201.974940]  print_bad_pte+0x138/0x1d8
> [14201.974943]  vm_normal_page+0xa4/0xd0
> [14201.974945]  unmap_page_range+0x648/0x1110
> [14201.974947]  unmap_single_vma.constprop.0+0x90/0x118
> [14201.974948]  zap_page_range_single_batched+0xbc/0x180
> [14201.974950]  zap_page_range_single+0x60/0xa0
> [14201.974952]  unmap_mapping_range+0x114/0x140
> [14201.974953]  vfio_pci_zap_and_down_write_memory_lock+0x3c/0x58
> [vfio_pci_core]
> [14201.974957]  vfio_basic_config_write+0x214/0x2d8 [vfio_pci_core]
> [14201.974959]  vfio_pci_config_rw+0x1d8/0x1290 [vfio_pci_core]
> [14201.974962]  vfio_pci_rw+0x118/0x200 [vfio_pci_core]
> [14201.974965]  vfio_pci_core_write+0x28/0x40 [vfio_pci_core]
> [14201.974968]  vfio_device_fops_write+0x3c/0x58 [vfio]
> [14201.974971]  vfs_write+0xd8/0x400
> [14201.974973]  __arm64_sys_pwrite64+0xac/0xe0
> [14201.974974]  invoke_syscall+0x50/0x120
> [14201.974976]  el0_svc_common.constprop.0+0xc8/0xf0
> [14201.974978]  do_el0_svc+0x24/0x38
> [14201.974979]  el0_svc+0x38/0x130
> [14201.974982]  el0t_64_sync_handler+0xc8/0xd0
> [14201.974984]  el0t_64_sync+0x1ac/0x1b0
> [14201.975025] Disabling lock debugging due to kernel taint
> 
> This value (0xfefefefefefefefe) is very special - it's a "poison" value.
> QEMU or the VFIO driver may have attempted to access or manipulate a
> page that has already been freed.
> 
> Thanks in advance for any insights!

I have no insight whatsoever, but there is very little in this report
to go on. So here are the questions you should ask yourself:

- How specific is this to GICv4.1?

- Does it stop triggering if you disable direct injection? (see the
  note after this list)

- What makes you think this value is explicitly a poison value rather
  than some other data?

- Who writes this "poison" data?

- Does it reproduce on 6.17 rather than a dodgy 6.16-rc4?

- What operation was QEMU performing on the device when this happens?

- Using what devices passed to the guest?

- What do the usual debug options (KASAN, lockdep) report?

- What is so specific about this HW?

- What is this out-of-tree module?

- Have you tried without it?
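
(On the direct-injection question above: assuming the standard KVM/arm64
module parameter applies to this kernel, booting the host with

    kvm-arm.vgic_v4_enable=0

on the kernel command line should be enough to rerun the same boot/kill
loop with GICv4 direct injection disabled. Whether the bad page map
still shows up in that configuration is a much more useful data point.)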

These are the questions I'd ask myself before even posting something,
because each and every one of them is relevant. There are probably
more, but once you have answered these questions, you should be able to
figure out what the gaps are in your understanding of the problem.
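
On the "poison" point specifically: the only hard fact in the log is that
the PTE contains the same byte (0xfe) repeated eight times. A repeated-byte
fill pattern is suggestive, but it says nothing about who wrote it, which
is the real question. As a minimal sketch of the sanity check I mean
(plain user-space C, nothing kernel-specific, and the helper name is made
up for illustration):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Return true if all eight bytes of @val are identical. */
  static bool is_repeated_byte(uint64_t val)
  {
          return val == (val & 0xff) * 0x0101010101010101ULL;
  }

  int main(void)
  {
          uint64_t pte = 0xfefefefefefefefeULL;

          if (is_repeated_byte(pte))
                  printf("0x%016llx looks like a fill pattern (byte 0x%02llx)\n",
                         (unsigned long long)pte,
                         (unsigned long long)(pte & 0xff));
          else
                  printf("0x%016llx is not a simple fill pattern\n",
                         (unsigned long long)pte);

          return 0;
  }

Passing that check only makes the value look like a fill pattern; it does
not make it one of the kernel's documented poison values. Cross-checking
the byte against the patterns in include/linux/poison.h, and against
whatever QEMU or the out-of-tree module writes into freed memory, would
be the obvious next step.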

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

