linux-perf-users.vger.kernel.org archive mirror
* kvm guests crash when running "perf kvm top"
@ 2025-04-08 16:54 Seth Forshee
  2025-04-09 17:05 ` Sean Christopherson
  2025-04-24  8:53 ` Mi, Dapeng
  0 siblings, 2 replies; 6+ messages in thread
From: Seth Forshee @ 2025-04-08 16:54 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Sean Christopherson, Paolo Bonzini
  Cc: x86, linux-perf-users, kvm, linux-kernel

A colleague of mine reported kvm guest hangs when running "perf kvm top"
with a 6.1 kernel. Initially it looked like the problem might be fixed
in newer kernels, but it turned out that newer perf changes apparently
just avoid triggering the issue. I was able to reproduce the guest
crashes with 6.15-rc1 in both the host and the guest when using an older
version of perf. A bisect of perf landed on 7b100989b4f6 "perf evlist:
Remove __evlist__add_default", but that commit doesn't look like it
fixes any issue of this kind.

This box has an Ice Lake CPU, and we can reproduce on other Ice Lakes,
but we could not reproduce on another box with Broadwell. On Broadwell,
guests would crash with older kernels on the host, but that was fixed by
971079464001 "KVM: x86/pmu: fix masking logic for
MSR_CORE_PERF_GLOBAL_CTRL". That commit does not fix the issues we see
on Ice Lake.

When the guests crash we aren't getting any output on the serial
console, but I got this from a memory dump:

BUG: unable to handle page fault for address: fffffe76ffbaf00000
BUG: unable to handle page fault for address: fffffe76ffbaf00000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
BUG: unable to handle page fault for address: fffffe76ffbaf00000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
Oops: Oops: 0002 [#1] SMP NOPTI
BUG: unable to handle page fault for address: fffffe76ffbaf00000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
Oops: Oops: 0002 [#2] SMP NOPTI
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.15.0-rc1 #3 VOLUNTARY
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)/Incus, BIOS unknown 02/02/2022
BUG: unable to handle page fault for address: fffffe76ffbaf00000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
Oops: Oops: 0002 [#3] SMP NOPTI
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.15.0-rc1 #3 VOLUNTARY
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)/Incus, BIOS unknown 02/02/2022

We got something different though from an ubuntu VM running their 6.8
kernel:

BUG: kernel NULL pointer dereference, address: 000000000000002828
BUG: kernel NULL pointer dereference, address: 000000000000002828
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 10336a067 P4D 0 
Oops: 0000 [#1] PREEMPT SMP NOPTI
BUG: kernel NULL pointer dereference, address: 000000000000002828
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 10336a067 P4D 0 
Oops: 0000 [#2] PREEMPT SMP NOPTI
BUG: kernel NULL pointer dereference, address: 000000000000002828
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 10336a067 P4D 0 
Oops: 0000 [#3] PREEMPT SMP NOPTI
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
BUG: kernel NULL pointer dereference, address: 000000000000002828
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 10336a067 P4D 0 
Oops: 0000 [#4] PREEMPT SMP NOPTI
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
BUG: kernel NULL pointer dereference, address: 000000000000002828
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 10336a067 P4D 0 
Oops: 0000 [#5] PREEMPT SMP NOPTI
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
RIP: 0010:__sprint_symbol.isra.0+0x6/0x120
Code: ff e8 0e 9d 00 01 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 55 <48> 89 e5 41 57 49 89 f7 41 56 4c 63 f2 4c 8d 45 b8 48 8d 55 c0 41
RSP: 0018:ff25e52d000e6ff8 EFLAGS: 00000046
BUG: #DF stack guard page was hit at 0000000040b441e1 (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)

CPU information from one of the boxes where we see this:

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 106
model name	: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
stepping	: 6
microcode	: 0xd0003f5
cpu MHz		: 800.000
cache size	: 36864 KB
physical id	: 0
siblings	: 44
core id		: 0
cpu cores	: 22
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 27
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
vmx flags	: vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb ept_5level flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml ept_violation_ve ept_mode_based_exec tsc_scaling
bugs		: spectre_v1 spectre_v2 spec_store_bypass swapgs mmio_stale_data eibrs_pbrsb gds bhi spectre_v2_user
bogomips	: 4000.00
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 57 bits virtual
power management:

Let me know if I can provide any additional information or testing.

Thanks,
Seth


* Re: kvm guests crash when running "perf kvm top"
  2025-04-08 16:54 kvm guests crash when running "perf kvm top" Seth Forshee
@ 2025-04-09 17:05 ` Sean Christopherson
  2025-04-09 20:24   ` Seth Forshee
  2025-04-24  8:53 ` Mi, Dapeng
  1 sibling, 1 reply; 6+ messages in thread
From: Sean Christopherson @ 2025-04-09 17:05 UTC (permalink / raw)
  To: Seth Forshee
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Paolo Bonzini, x86, linux-perf-users, kvm, linux-kernel

On Tue, Apr 08, 2025, Seth Forshee wrote:
> A colleague of mine reported kvm guest hangs when running "perf kvm top"
> with a 6.1 kernel. Initially it looked like the problem might be fixed
> in newer kernels, but it turned out that newer perf changes apparently
> just avoid triggering the issue. I was able to reproduce the guest
> crashes with 6.15-rc1 in both the host and the guest when using an older
> version of perf. A bisect of perf landed on 7b100989b4f6 "perf evlist:
> Remove __evlist__add_default", but that commit doesn't look like it
> fixes any issue of this kind.
> 
> This box has an Ice Lake CPU, and we can reproduce on other Ice Lakes,
> but we could not reproduce on another box with Broadwell. On Broadwell,
> guests would crash with older kernels on the host, but that was fixed by
> 971079464001 "KVM: x86/pmu: fix masking logic for
> MSR_CORE_PERF_GLOBAL_CTRL". That commit does not fix the issues we see
> on Ice Lake.
> 
> When the guests crash we aren't getting any output on the serial
> console, but I got this from a memory dump:

...

> Oops: 0000 [#1] PREEMPT SMP NOPTI
> BUG: kernel NULL pointer dereference, address: 000000000000002828

FWIW, this is probably slightly corrupted.  When I run with EPT disabled, to force
KVM to intercept #PFs, the reported CR2 is 0x28.  Which is consistent with the
guest having DS_AREA=0.  I.e. the CPU is attempting to store into the DS/PEBS
buffer.
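
For reference, 0x28 should be the offset of the PEBS index field in the
64-bit DS save area, i.e. with DS_AREA=0 the CPU is presumably faulting on
the PEBS bookkeeping fields at NULL+0x28.  A sketch of the layout, purely for
illustration (this should mirror struct debug_store in
arch/x86/events/perf_event.h, offsets per the SDM):

  struct debug_store {                     /* offset */
          u64     bts_buffer_base;         /* 0x00 */
          u64     bts_index;               /* 0x08 */
          u64     bts_absolute_maximum;    /* 0x10 */
          u64     bts_interrupt_threshold; /* 0x18 */
          u64     pebs_buffer_base;        /* 0x20 */
          u64     pebs_index;              /* 0x28  <= the faulting offset */
          u64     pebs_absolute_maximum;   /* 0x30 */
          u64     pebs_interrupt_threshold;/* 0x38 */
          u64     pebs_event_reset[];      /* 0x40+ */
  };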

As suspected, the issue is PEBS.  After adding a tracepoint to capture the MSRs
that KVM loads as part of the perf transition, it's easy to see that PEBS_ENABLE
gets loaded with a non-zero value immediately before death, doom, and destruction.

  CPU 0: kvm_entry: vcpu 0, rip 0xffffffff81000aa0 intr_info 0x80000b0e error_code 0x00000000
  CPU 0: kvm_perf_msr: MSR 38f: host 1000f000000fe guest 1000f000000ff
  CPU 0: kvm_perf_msr: MSR 600: host fffffe57186af000 guest 0
  CPU 0: kvm_perf_msr: MSR 3f2: host 0 guest 0
  CPU 0: kvm_perf_msr: MSR 3f1: host 0 guest 1
  CPU 0: kvm_exit: vcpu 0 reason EXCEPTION_NMI rip 0xffffffff81000aa0 info1 0x0000000000000028 intr_info 0x80000b0e error_code 0x00000000
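
Decoding the MSR indices in the trace (names per
arch/x86/include/asm/msr-index.h):

  0x38f  MSR_CORE_PERF_GLOBAL_CTRL
  0x600  MSR_IA32_DS_AREA
  0x3f2  MSR_PEBS_DATA_CFG
  0x3f1  MSR_IA32_PEBS_ENABLE

I.e. the vCPU is entered with counter 0 enabled in GLOBAL_CTRL, PEBS enabled
for that counter in PEBS_ENABLE, and DS_AREA=0, which lines up with the
faults above.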

The underlying issue is that KVM's current PMU virtualization uses perf_events
to proxy guest events, i.e. piggybacks intel_ctrl_guest_mask, which is also used
by host userspace to communicate exclude_host/exclude_guest.  And so perf's
intel_guest_get_msrs() allows using PEBS for guest events, but only if perf isn't
using PEBS for host events.

I didn't actually verify what events "perf kvm top" generates, but I'm assuming
it's creating a precise, a.k.a. PEBS, event that measures _only_ the guest, i.e.
excludes the host.  That causes a false positive of sorts in intel_guest_get_msrs(),
and ultimately results in KVM running the guest with a PEBS event enabled, even
though the guest isn't using the (virtual) PMU.
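
For illustration, the tool presumably ends up opening something like the
below (a minimal, unverified sketch of a guest-only precise event; the exact
event and sampling config perf uses may well differ):

  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Hypothetical guest-only, precise (PEBS) cycles event. */
  struct perf_event_attr attr = {
          .size          = sizeof(struct perf_event_attr),
          .type          = PERF_TYPE_HARDWARE,
          .config        = PERF_COUNT_HW_CPU_CYCLES,
          .precise_ip    = 2,     /* precise sample IP => PEBS on Intel */
          .exclude_host  = 1,     /* count _only_ while the guest is running */
          .exclude_guest = 0,
          .freq          = 1,
          .sample_freq   = 4000,
  };

  /* fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0); */

Presumably any event with precise_ip > 0 and exclude_host = 1 hits the same
path: perf treats it as a guest-only PEBS event and so leaves PEBS_ENABLE
non-zero for the guest, even though the guest itself never programmed PEBS.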

Pre-ICX CPUs don't isolate PEBS events across the guest/host boundary, and so
perf/KVM hard disable PEBS on VM-Enter.  And a simple (well, simple for perf)
precise event doesn't cause problems, because perf/KVM will disable PEBS events
that are counting the host.  I.e. if a PEBS event counts host *and* guest, it's
"fine".

Long story short, masking PEBS_ENABLE with the guest's value (in addition to
what perf allows) fixes the issue on my end.  Assuming testing goes well, I'll
post this as a proper patch.

--
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index cdb19e3ba3aa..1d01fb43a337 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4336,7 +4336,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
        arr[pebs_enable] = (struct perf_guest_switch_msr){
                .msr = MSR_IA32_PEBS_ENABLE,
                .host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask,
-               .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask,
+               .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask & kvm_pmu->pebs_enable,
        };
 
        if (arr[pebs_enable].host) {
--


> Let me know if I can provide any additional information or testing.

Uber nit: in the future, explicitly state whether a command is being run in the
guest or host.  I had a brain fart and it took me an embarrassingly long time to
grok that running "perf kvm top" in the guest would be nonsensical.


* Re: kvm guests crash when running "perf kvm top"
  2025-04-09 17:05 ` Sean Christopherson
@ 2025-04-09 20:24   ` Seth Forshee
  0 siblings, 0 replies; 6+ messages in thread
From: Seth Forshee @ 2025-04-09 20:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Paolo Bonzini, x86, linux-perf-users, kvm, linux-kernel

On Wed, Apr 09, 2025 at 10:05:00AM -0700, Sean Christopherson wrote:
> On Tue, Apr 08, 2025, Seth Forshee wrote:
> > A colleague of mine reported kvm guest hangs when running "perf kvm top"
> > with a 6.1 kernel. Initially it looked like the problem might be fixed
> > in newer kernels, but it turned out that newer perf changes apparently
> > just avoid triggering the issue. I was able to reproduce the guest
> > crashes with 6.15-rc1 in both the host and the guest when using an older
> > version of perf. A bisect of perf landed on 7b100989b4f6 "perf evlist:
> > Remove __evlist__add_default", but that commit doesn't look like it
> > fixes any issue of this kind.
> > 
> > This box has an Ice Lake CPU, and we can reproduce on other Ice Lakes,
> > but we could not reproduce on another box with Broadwell. On Broadwell,
> > guests would crash with older kernels on the host, but that was fixed by
> > 971079464001 "KVM: x86/pmu: fix masking logic for
> > MSR_CORE_PERF_GLOBAL_CTRL". That commit does not fix the issues we see
> > on Ice Lake.
> > 
> > When the guests crash we aren't getting any output on the serial
> > console, but I got this from a memory dump:
> 
> ...
> 
> > Oops: 0000 [#1] PREEMPT SMP NOPTI
> > BUG: kernel NULL pointer dereference, address: 000000000000002828
> 
> FWIW, this is probably slightly corrupted.  When I run with EPT disabled, to force
> KVM to intercept #PFs, the reported CR2 is 0x28.  Which is consistent with the
> guest having DS_AREA=0.  I.e. the CPU is attempting to store into the DS/PEBS
> buffer.
> 
> As suspected, the issue is PEBS.  After adding a tracepoint to capture the MSRs
> that KVM loads as part of the perf transition, it's easy to see that PEBS_ENABLE
> gets loaded with a non-zero value immediately before death, doom, and destruction.
> 
>   CPU 0: kvm_entry: vcpu 0, rip 0xffffffff81000aa0 intr_info 0x80000b0e error_code 0x00000000
>   CPU 0: kvm_perf_msr: MSR 38f: host 1000f000000fe guest 1000f000000ff
>   CPU 0: kvm_perf_msr: MSR 600: host fffffe57186af000 guest 0
>   CPU 0: kvm_perf_msr: MSR 3f2: host 0 guest 0
>   CPU 0: kvm_perf_msr: MSR 3f1: host 0 guest 1
>   CPU 0: kvm_exit: vcpu 0 reason EXCEPTION_NMI rip 0xffffffff81000aa0 info1 0x0000000000000028 intr_info 0x80000b0e error_code 0x00000000
> 
> The underlying issue is that KVM's current PMU virtualization uses perf_events
> to proxy guest events, i.e. piggybacks intel_ctrl_guest_mask, which is also used
> by host userspace to communicate exclude_host/exclude_guest.  And so perf's
> intel_guest_get_msrs() allows using PEBS for guest events, but only if perf isn't
> using PEBS for host events.
> 
> I didn't actually verify what events "perf kvm top" generates, but I'm assuming
> it's creating a precise, a.k.a. PEBS, event that measures _only_ the guest, i.e.
> excludes the host.  That causes a false positive of sorts in intel_guest_get_msrs(),
> and ultimately results in KVM running the guest with a PEBS event enabled, even
> though the guest isn't using the (virtual) PMU.
> 
> Pre-ICX CPUs don't isolate PEBS events across the guest/host boundary, and so
> perf/KVM hard disable PEBS on VM-Enter.  And a simple (well, simple for perf)
> precise event doesn't cause problems, because perf/KVM will disable PEBS events
> that are counting the host.  I.e. if a PEBS event counts host *and* guest, it's
> "fine".
> 
> Long story short, masking PEBS_ENABLE with the guest's value (in addition to
> what perf allows) fixes the issue on my end.  Assuming testing goes well, I'll
> post this as a proper patch.
> 
> --
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index cdb19e3ba3aa..1d01fb43a337 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4336,7 +4336,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
>         arr[pebs_enable] = (struct perf_guest_switch_msr){
>                 .msr = MSR_IA32_PEBS_ENABLE,
>                 .host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask,
> -               .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask,
> +               .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask & kvm_pmu->pebs_enable,
>         };
>  
>         if (arr[pebs_enable].host) {

This fixes the issue for me, thanks!

> --
> 
> 
> > Let me know if I can provide any additional information or testing.
> 
> Uber nit: in the future, explicitly state whether a command is being run in the
> guest or host.  I had a brain fart and it took me an embarrassingly long time to
> grok that running "perf kvm top" in the guest would be nonsensical.

Apologies, I tried to make sure I differentiated between host and guest
in my description since I know it gets confusing, but I missed that one.
I'll triple-check for that in the future.


* Re: kvm guests crash when running "perf kvm top"
  2025-04-08 16:54 kvm guests crash when running "perf kvm top" Seth Forshee
  2025-04-09 17:05 ` Sean Christopherson
@ 2025-04-24  8:53 ` Mi, Dapeng
  2025-04-24 13:13   ` Sean Christopherson
  1 sibling, 1 reply; 6+ messages in thread
From: Mi, Dapeng @ 2025-04-24  8:53 UTC (permalink / raw)
  To: Seth Forshee, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Sean Christopherson, Paolo Bonzini
  Cc: x86, linux-perf-users, kvm, linux-kernel

Is the command "perf kvm top" executed on the host or in the guest when you
see the guest crash? Is it easy to reproduce? Could you please provide
detailed steps to reproduce the issue with the 6.15-rc1 kernel?


On 4/9/2025 12:54 AM, Seth Forshee wrote:
> [... original report quoted in full, trimmed here ...]


* Re: kvm guests crash when running "perf kvm top"
  2025-04-24  8:53 ` Mi, Dapeng
@ 2025-04-24 13:13   ` Sean Christopherson
  2025-04-25  0:17     ` Mi, Dapeng
  0 siblings, 1 reply; 6+ messages in thread
From: Sean Christopherson @ 2025-04-24 13:13 UTC (permalink / raw)
  To: Dapeng Mi
  Cc: Seth Forshee, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Paolo Bonzini, x86,
	linux-perf-users, kvm, linux-kernel

On Thu, Apr 24, 2025, Dapeng Mi wrote:
> Is the command "perf kvm top" executed on the host or in the guest when you
> see the guest crash? Is it easy to reproduce? Could you please provide
> detailed steps to reproduce the issue with the 6.15-rc1 kernel?

Host.  I already have a fix, I'll get it posted today.

https://lore.kernel.org/all/Z_aovIbwdKIIBMuq@google.com


* Re: kvm guests crash when running "perf kvm top"
  2025-04-24 13:13   ` Sean Christopherson
@ 2025-04-25  0:17     ` Mi, Dapeng
  0 siblings, 0 replies; 6+ messages in thread
From: Mi, Dapeng @ 2025-04-25  0:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Seth Forshee, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Paolo Bonzini, x86,
	linux-perf-users, kvm, linux-kernel


On 4/24/2025 9:13 PM, Sean Christopherson wrote:
> On Thu, Apr 24, 2025, Dapeng Mi wrote:
>> Is the command "perf kvm top" executed on the host or in the guest when you
>> see the guest crash? Is it easy to reproduce? Could you please provide
>> detailed steps to reproduce the issue with the 6.15-rc1 kernel?
> Host.  I already have a fix, I'll get it posted today.
>
> https://lore.kernel.org/all/Z_aovIbwdKIIBMuq@google.com

Thumbs up! Glad to see it has been root caused. Thanks.


