public inbox for kvm@vger.kernel.org
From: Wang Jianchao <jianchao.wan9@gmail.com>
To: kvm@vger.kernel.org
Subject: [HELP] Host seems to use address from Guest after vm-exit of external interrupt
Date: Tue, 7 Apr 2026 17:06:25 +0800	[thread overview]
Message-ID: <e3343610-d0d9-4572-baf1-9417f31aa8ff@gmail.com> (raw)

Dear kernel developers,

I am writing to ask for your guidance regarding puzzling behavior we have
observed on a KVM/QEMU host.

Environment:
 - Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz (family: 0x6, model: 0x55, stepping: 0x7)
 - microcode: sig=0x50657, pf=0x80, revision=0x5003102
 - Host kernel: 5.14.0-162.18.1.el9_1 + some patches from upstream
 - QEMU 9.1.2
 - Guest kernel: 6.12.13
 - EPT and VPID enabled

An external interrupt arrives (vector 0xfb, an IPI in this case) and the guest
VM-exits. Looking into the kernel stack of the kvm thread via
task_struct.thread.sp, there are many odd addresses on the stack of the form
__per_cpu_end+-xxxxx; these appear to be guest kernel addresses.
handle_external_interrupt_irqoff() reads a guest kernel address from the IDT
entry at host_idt_base and jumps to it, and initially it runs fine; the pt_regs
at ffffa2970359bcb8 looks sane. The code then takes a page fault at
irqentry_enter+0x11, which accesses a per-cpu variable via GS. Toward the lower
stack addresses, the page fault handler, itself at a guest kernel address,
faults again and again on per-cpu accesses via GS until the stack is exhausted.

Could anyone kindly point us in the right direction? Any advice on further debugging
would be deeply appreciated.

ffffa2970359bc00:  0000000000000000 ffffa2970359bcb8 
ffffa2970359bc10:  0000000000000000 0000000000000000 
ffffa2970359bc20:  0000000000000000 0000000000000000 
ffffa2970359bc30:  0000000000000000 0000000000000000 
ffffa2970359bc40:  __per_cpu_end+-2120405404 0000000000000000 
                   ffffffff81a00e64
                   [native_irq_return_iret]
ffffa2970359bc50:  0000000000000000 ffffa2970359bcb8 
ffffa2970359bc60:  ffffffffffffffff __per_cpu_end+-2121729727 
                                    ffffffff818bd941
                                    [irqentry_enter + 0x11]
ffffa2970359bc70:  0000000000000010 0000000000010046 
ffffa2970359bc80:  ffffa2970359bc98 0000000000000018 
ffffa2970359bc90:  ffff9112110c0600 __per_cpu_end+-2121733493 
                                    ffffffff818bca8b
                                    [sysvec_call_function_single + 0xb]
ffffa2970359bca0:  0000000000000000 0000000000000000 
ffffa2970359bcb0:  __per_cpu_end+-2120405754 ffff913a8ede4600 
                   ffffffff81a00d06
                   [asm_sysvec_call_function_single+0x22]
ffffa2970359bcc0:  ffff9114a6d78048 0000000000000000 
ffffa2970359bcd0:  0000000000000000 ffffa2970359bd60 
ffffa2970359bce0:  ffff9114a6d78000 0000000000000000 
ffffa2970359bcf0:  0000000000000000 0000000000000000 
ffffa2970359bd00:  0000000000000000 0000000000000c01 
ffffa2970359bd10:  013f7de9845dc650 ffffffff00000000 
ffffa2970359bd20:  00000000800000fb __per_cpu_end+-2120405776 
                                    ffffffff81a00cf0
                                    [asm_sysvec_call_function_single]
ffffa2970359bd30:  ffffffffffffffff vmx_do_interrupt_nmi_irqoff+16 
ffffa2970359bd40:  0000000000000010 0000000000000086 
ffffa2970359bd50:  ffffa2970359bd60 0000000000000018 
ffffa2970359bd60:  ffffa2970359be00 vmx_handle_exit_irqoff+312 
ffffa2970359bd70:  ffff9114a6d78000 vcpu_enter_guest+2556 
ffffa2970359bd80:  0000000000000019 ffff911229542f80 
ffffa2970359bd90:  ffff915000000040 ffff9114a6d78038 
ffffa2970359bda0:  ffffffff00000004 f1f09fe6388c6600 
ffffa2970359bdb0:  ffff911229542380 f1f09fe6388c6600 
ffffa2970359bdc0:  0000000000000002 f1f09fe6388c6600 
ffffa2970359bdd0:  ffff9114a6d78038 ffff9114a6d78000 
ffffa2970359bde0:  0000000000000001 ffff9114a6d78038 
ffffa2970359bdf0:  ffff9114a6d78048 ffff913a8ede4600


Best Regards
Jianchao
