public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* soft lockup  in kvm_flush_remote_tlbs
@ 2007-10-24 23:00 david ahern
       [not found] ` <471FCEA6.6000903-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: david ahern @ 2007-10-24 23:00 UTC (permalink / raw)
  To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

I am trying, unsuccessfully so far, to get a vm running with 4 cpus. It is failing with a soft lockup:

BUG: soft lockup detected on CPU#3!
 [<c044a05f>] softlockup_tick+0x98/0xa6
 [<c042ccd4>] update_process_times+0x39/0x5c
 [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
 [<c04049bf>] apic_timer_interrupt+0x1f/0x24
 [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
 [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
 [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
 [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
 [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
 [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
 [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
 [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
 [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
 [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
 [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
 [<c0408f60>] save_i387+0x23f/0x273
 [<c04db730>] __next_cpu+0x12/0x21
 [<c041c97f>] find_busiest_group+0x177/0x462
 [<c04031cd>] setup_sigcontext+0x10d/0x190
 [<c0453bed>] get_page_from_freelist+0x96/0x310
 [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
 [<c0415a5c>] flush_tlb_others+0x83/0xb3
 [<c0415d63>] flush_tlb_page+0x74/0x77
 [<c0454cf1>] set_page_dirty_balance+0x8/0x35
 [<c0459c1b>] do_wp_page+0x3a5/0x3bd
 [<c042e97e>] dequeue_signal+0x2d/0x9c
 [<c045af6b>] __handle_mm_fault+0x81b/0x87b
 [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
 [<c0479cac>] do_ioctl+0x1c/0x5d
 [<c0479f37>] vfs_ioctl+0x24a/0x25c
 [<c0479f91>] sys_ioctl+0x48/0x5f
 [<c0403eff>] syscall_call+0x7/0xb


I am working with kvm-48, but also tried the 20071020 snapshot. The stuck code is kvm_flush_remote_tlbs():

	while (atomic_read(&completed) != needed) {
		cpu_relax();
		barrier();
	}

which I take to mean one of the CPUs is not ack'ing the TLB flush request. 

Is this a known bug, and are there any options to correct it? It works fine with 2 vcpus, but for a comparison with Xen I'd like to get the vm working with 4.


Host stats:
OS: RHEL5
Processors: two Core 2 Duos (4 cores)
KVM:  kvm-48 and kvm-20071020-1 snapshot rpms
QEMU command:

qemu-kvm -boot c -localtime -hda /opt/kvm/images/cucm.img -m 1536 -smp 4 -serial file:/tmp/serial.log -net nic,macaddr=00:1a:4b:34:74:52,model=rtl8139 -net tap,ifname=tap0,script=/bin/true -vnc :2 -monitor stdio


thanks,

david

-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found] ` <471FCEA6.6000903-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-10-24 23:15   ` Laurent Vivier
       [not found]     ` <471FD204.6040400-6ktuUTfB/bM@public.gmane.org>
  2007-10-25  6:47   ` Avi Kivity
  1 sibling, 1 reply; 10+ messages in thread
From: Laurent Vivier @ 2007-10-24 23:15 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

david ahern a écrit :
> I am trying, unsuccessfully so far, to get a vm running with 4 cpus. It is failing with a soft lockup:
> 
> BUG: soft lockup detected on CPU#3!
>  [<c044a05f>] softlockup_tick+0x98/0xa6
>  [<c042ccd4>] update_process_times+0x39/0x5c
>  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>  [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>  [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>  [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>  [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
>  [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
>  [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
>  [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
>  [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
>  [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>  [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>  [<c0408f60>] save_i387+0x23f/0x273
>  [<c04db730>] __next_cpu+0x12/0x21
>  [<c041c97f>] find_busiest_group+0x177/0x462
>  [<c04031cd>] setup_sigcontext+0x10d/0x190
>  [<c0453bed>] get_page_from_freelist+0x96/0x310
>  [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
>  [<c0415a5c>] flush_tlb_others+0x83/0xb3
>  [<c0415d63>] flush_tlb_page+0x74/0x77
>  [<c0454cf1>] set_page_dirty_balance+0x8/0x35
>  [<c0459c1b>] do_wp_page+0x3a5/0x3bd
>  [<c042e97e>] dequeue_signal+0x2d/0x9c
>  [<c045af6b>] __handle_mm_fault+0x81b/0x87b
>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>  [<c0479cac>] do_ioctl+0x1c/0x5d
>  [<c0479f37>] vfs_ioctl+0x24a/0x25c
>  [<c0479f91>] sys_ioctl+0x48/0x5f
>  [<c0403eff>] syscall_call+0x7/0xb
> 
> 
> I am working with kvm-48, but also tried the 20071020 snapshot. The stuck code is kvm_flush_remote_tlbs():
> 
> 	while (atomic_read(&completed) != needed) {
> 		cpu_relax();
> 		barrier();
> 	}
> 

This part has been removed by commit 49d3bd7e2b990e717aa66e229410b8f5096c4956, 
perhaps you could try it?

commit 49d3bd7e2b990e717aa66e229410b8f5096c4956
Author: Laurent Vivier <Laurent.Vivier-6ktuUTfB/bM@public.gmane.org>
Date:   Mon Oct 22 16:33:07 2007 +0200

     KVM: Use new smp_call_function_mask() in kvm_flush_remote_tlbs()

     In kvm_flush_remote_tlbs(), replace a loop using smp_call_function_single()
     by a single call to smp_call_function_mask() (which is new for x86_64).

     Signed-off-by: Laurent Vivier <Laurent.Vivier-6ktuUTfB/bM@public.gmane.org>
     Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>

> which I take to mean one of the CPUs is not ack'ing the TLB flush request. 

Yes, it seems...

> Is this a known bug, and are there any options to correct it? It works fine with 2 vcpus, but for a comparison with Xen I'd like to get the vm working with 4.
> 
> 
> Host stats:
> OS: RHEL5
> Processors: two Core 2 Duos (4 cores)
> KVM:  kvm-48 and kvm-20071020-1 snapshot rpms
> QEMU command:
> 
> qemu-kvm -boot c -localtime -hda /opt/kvm/images/cucm.img -m 1536 -smp 4 -serial file:/tmp/serial.log -net nic,macaddr=00:1a:4b:34:74:52,model=rtl8139 -net tap,ifname=tap0,script=/bin/true -vnc :2 -monitor stdio
> 
> 
> thanks,
> 
> david
> 
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel
> 


-- 
---------------- Laurent.Vivier-6ktuUTfB/bM@public.gmane.org  -----------------
"Given enough eyeballs, all bugs are shallow" E. S. Raymond



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]     ` <471FD204.6040400-6ktuUTfB/bM@public.gmane.org>
@ 2007-10-25  4:26       ` david ahern
  0 siblings, 0 replies; 10+ messages in thread
From: david ahern @ 2007-10-25  4:26 UTC (permalink / raw)
  To: Laurent Vivier; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

I saw that in the latest git tree, but the RHEL5 kernel does not have that function. I guess I'll have to evaluate my options -- upgrading kernels, or backporting code.

thanks,

david


Laurent Vivier wrote:
> david ahern a écrit :
>> I am trying, unsuccessfully so far, to get a vm running with 4 cpus.
>> It is failing with a soft lockup:
>>
>> BUG: soft lockup detected on CPU#3!
>>  [<c044a05f>] softlockup_tick+0x98/0xa6
>>  [<c042ccd4>] update_process_times+0x39/0x5c
>>  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>>  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>>  [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>>  [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>>  [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>>  [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
>>  [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
>>  [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
>>  [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
>>  [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
>>  [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>>  [<c0408f60>] save_i387+0x23f/0x273
>>  [<c04db730>] __next_cpu+0x12/0x21
>>  [<c041c97f>] find_busiest_group+0x177/0x462
>>  [<c04031cd>] setup_sigcontext+0x10d/0x190
>>  [<c0453bed>] get_page_from_freelist+0x96/0x310
>>  [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
>>  [<c0415a5c>] flush_tlb_others+0x83/0xb3
>>  [<c0415d63>] flush_tlb_page+0x74/0x77
>>  [<c0454cf1>] set_page_dirty_balance+0x8/0x35
>>  [<c0459c1b>] do_wp_page+0x3a5/0x3bd
>>  [<c042e97e>] dequeue_signal+0x2d/0x9c
>>  [<c045af6b>] __handle_mm_fault+0x81b/0x87b
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<c0479cac>] do_ioctl+0x1c/0x5d
>>  [<c0479f37>] vfs_ioctl+0x24a/0x25c
>>  [<c0479f91>] sys_ioctl+0x48/0x5f
>>  [<c0403eff>] syscall_call+0x7/0xb
>>
>>
>> I am working with kvm-48, but also tried the 20071020 snapshot. The
>> stuck code is kvm_flush_remote_tlbs():
>>
>>     while (atomic_read(&completed) != needed) {
>>         cpu_relax();
>>         barrier();
>>     }
>>
> 
> This part has been removed by commit
> 49d3bd7e2b990e717aa66e229410b8f5096c4956, perhaps you could try it ?
> 
> commit 49d3bd7e2b990e717aa66e229410b8f5096c4956
> Author: Laurent Vivier <Laurent.Vivier-6ktuUTfB/bM@public.gmane.org>
> Date:   Mon Oct 22 16:33:07 2007 +0200
> 
>     KVM: Use new smp_call_function_mask() in kvm_flush_remote_tlbs()
> 
>     In kvm_flush_remote_tlbs(), replace a loop using
> smp_call_function_single()
>     by a single call to smp_call_function_mask() (which is new for x86_64).
> 
>     Signed-off-by: Laurent Vivier <Laurent.Vivier-6ktuUTfB/bM@public.gmane.org>
>     Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
> 
>> which I take to mean one of the CPUs is not ack'ing the TLB flush
>> request. 
> 
> Yes, it seems...
> 
>> Is this a known bug, and are there any options to correct it? It works
>> fine with 2 vcpus, but for a comparison with Xen I'd like to get the vm
>> working with 4.
>>
>>
>> Host stats:
>> OS: RHEL5
>> Processors: two Core 2 Duos (4 cores)
>> KVM:  kvm-48 and kvm-20071020-1 snapshot rpms
>> QEMU command:
>>
>> qemu-kvm -boot c -localtime -hda /opt/kvm/images/cucm.img -m 1536 -smp
>> 4 -serial file:/tmp/serial.log -net
>> nic,macaddr=00:1a:4b:34:74:52,model=rtl8139 -net
>> tap,ifname=tap0,script=/bin/true -vnc :2 -monitor stdio
>>
>>
>> thanks,
>>
>> david
>>
>> _______________________________________________
>> kvm-devel mailing list
>> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
>> https://lists.sourceforge.net/lists/listinfo/kvm-devel
>>
> 
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found] ` <471FCEA6.6000903-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  2007-10-24 23:15   ` Laurent Vivier
@ 2007-10-25  6:47   ` Avi Kivity
       [not found]     ` <47203BEB.9070509-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2007-10-25  6:47 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

david ahern wrote:
> I am trying, unsuccessfully so far, to get a vm running with 4 cpus. It is failing with a soft lockup:
>
> BUG: soft lockup detected on CPU#3!
>  [<c044a05f>] softlockup_tick+0x98/0xa6
>  [<c042ccd4>] update_process_times+0x39/0x5c
>  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>  [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>  [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>  [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>  [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
>  [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
>  [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
>  [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
>  [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
>  [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>  [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>  [<c0408f60>] save_i387+0x23f/0x273
>  [<c04db730>] __next_cpu+0x12/0x21
>  [<c041c97f>] find_busiest_group+0x177/0x462
>  [<c04031cd>] setup_sigcontext+0x10d/0x190
>  [<c0453bed>] get_page_from_freelist+0x96/0x310
>  [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
>  [<c0415a5c>] flush_tlb_others+0x83/0xb3
>  [<c0415d63>] flush_tlb_page+0x74/0x77
>  [<c0454cf1>] set_page_dirty_balance+0x8/0x35
>  [<c0459c1b>] do_wp_page+0x3a5/0x3bd
>  [<c042e97e>] dequeue_signal+0x2d/0x9c
>  [<c045af6b>] __handle_mm_fault+0x81b/0x87b
>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>  [<c0479cac>] do_ioctl+0x1c/0x5d
>  [<c0479f37>] vfs_ioctl+0x24a/0x25c
>  [<c0479f91>] sys_ioctl+0x48/0x5f
>  [<c0403eff>] syscall_call+0x7/0xb
>
>
> I am working with kvm-48, but also tried the 20071020 snapshot. The stuck code is kvm_flush_remote_tlbs():
>
> 	while (atomic_read(&completed) != needed) {
> 		cpu_relax();
> 		barrier();
> 	}
>
> which I take to mean one of the CPUs is not ack'ing the TLB flush request. 
>
>   

I don't think it's a cpu not responding.  I've stared at the code for a 
while (we had this before) and the actual IPI/ack is fine.

What's probably happening is that corruption of the mmu data structures 
is causing kvm_flush_remote_tlbs() to be called repeatedly.  Since it's 
a very slow function, the lockup detector blames it for any lockup it 
sees even though it is innocent.

[we had exactly this issue before and it was indeed fixed after an rmap 
corruption was corrected]


> Is this a known bug, and are there any options to correct it? It works fine with 2 vcpus, but for a comparison with Xen I'd like to get the vm working with 4.
>
>
>   

- please send (privately, it's big) an 'objdump -Sr' of mmu.o
- what guest are you running?  if it's publicly available, I can try to 
replicate it
- at what stage does the failure occur?  if it's early on, we can try 
running with AUDIT or DEBUG
- otherwise, I'll send debugging patches to try and see what's going on

-- 
Any sufficiently difficult bug is indistinguishable from a feature.



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]     ` <47203BEB.9070509-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-25 13:30       ` david ahern
       [not found]         ` <47209A77.3090503-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: david ahern @ 2007-10-25 13:30 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

As a quick test I added a printk to the loop, right after the while():

     while (atomic_read(&completed) != needed) {
         printk("kvm_flush_remote_tlbs: completed = %d, needed = %d\n", atomic_read(&completed), needed);
         cpu_relax();
         barrier();
     }


This is the output right before a lockup:

Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 1, needed = 2
Oct 24 16:03:57 bldr-ccm20 last message repeated 105738 times
Oct 24 16:03:57 bldr-ccm20 kernel: BUG: soft lockup detected on CPU#0!
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c044a0b7>] softlockup_tick+0x98/0xa6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042cc98>] update_process_times+0x39/0x5c
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0424130>] vprintk+0x288/0x2bc
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04d8067>] cfq_slice_async_store+0x5/0x38
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0406406>] do_IRQ+0xa5/0xae
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c040492e>] common_interrupt+0x1a/0x20
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042417c>] printk+0x18/0x8e
Oct 24 16:03:57 bldr-ccm20 kernel:  [<f89a9812>] kvm_flush_remote_tlbs+0xe0/0xf2 [kvm]
...


I'd like to get a solution for RHEL5, so I am attempting to backport smp_call_function_mask(). I'm open to other suggestions if you think it is corruption or the problem is somewhere else.

thanks,

david


Avi Kivity wrote:
> david ahern wrote:
>> I am trying, unsuccessfully so far, to get a vm running with 4 cpus.
>> It is failing with a soft lockup:
>>
>> BUG: soft lockup detected on CPU#3!
>>  [<c044a05f>] softlockup_tick+0x98/0xa6
>>  [<c042ccd4>] update_process_times+0x39/0x5c
>>  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>>  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>>  [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>>  [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>>  [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>>  [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
>>  [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
>>  [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
>>  [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
>>  [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
>>  [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>>  [<c0408f60>] save_i387+0x23f/0x273
>>  [<c04db730>] __next_cpu+0x12/0x21
>>  [<c041c97f>] find_busiest_group+0x177/0x462
>>  [<c04031cd>] setup_sigcontext+0x10d/0x190
>>  [<c0453bed>] get_page_from_freelist+0x96/0x310
>>  [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
>>  [<c0415a5c>] flush_tlb_others+0x83/0xb3
>>  [<c0415d63>] flush_tlb_page+0x74/0x77
>>  [<c0454cf1>] set_page_dirty_balance+0x8/0x35
>>  [<c0459c1b>] do_wp_page+0x3a5/0x3bd
>>  [<c042e97e>] dequeue_signal+0x2d/0x9c
>>  [<c045af6b>] __handle_mm_fault+0x81b/0x87b
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<c0479cac>] do_ioctl+0x1c/0x5d
>>  [<c0479f37>] vfs_ioctl+0x24a/0x25c
>>  [<c0479f91>] sys_ioctl+0x48/0x5f
>>  [<c0403eff>] syscall_call+0x7/0xb
>>
>>
>> I am working with kvm-48, but also tried the 20071020 snapshot. The
>> stuck code is kvm_flush_remote_tlbs():
>>
>>     while (atomic_read(&completed) != needed) {
>>         cpu_relax();
>>         barrier();
>>     }
>>
>> which I take to mean one of the CPUs is not ack'ing the TLB flush
>> request.
>>   
> 
> I don't think it's a cpu not responding.  I've stared at the code for a
> while (we had this before) and the actual IPI/ack is fine.
> 
> What's probably happening is that corruption of the mmu data structures
> is causing kvm_flush_remote_tlbs() to be called repeatedly.  Since it's
> a very slow function, the lockup detector blames it for any lockup it
> sees even though it is innocent.
> 
> [we had exactly this issue before and it was indeed fixed after an rmap
> corruption was corrected]
> 
> 
>> Is this a known bug, and are there any options to correct it? It works
>> fine with 2 vcpus, but for a comparison with Xen I'd like to get the vm
>> working with 4.
>>
>>
>>   
> 
> - please send (privately, it's big) an 'objdump -Sr' of mmu.o
> - what guest are you running?  if it's publicly available, I can try to
> replicate it
> - at what stage does the failure occur?  if it's early on, we can try
> running with AUDIT or DEBUG
> - otherwise, I'll send debugging patches to try and see what's going on
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]         ` <47209A77.3090503-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-10-25 13:46           ` Avi Kivity
       [not found]             ` <47209E23.8080808-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2007-10-25 13:46 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

david ahern wrote:
> As a quick test I added a printk to the loop, right after the while():
>
>      while (atomic_read(&completed) != needed) {
>          printk("kvm_flush_remote_tlbs: completed = %d, needed = %d\n", atomic_read(&completed), needed);
>          cpu_relax();
>          barrier();
>      }
>
>
> This is the output right before a lockup:
>
> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 1, needed = 2
> Oct 24 16:03:57 bldr-ccm20 last message repeated 105738 times
> Oct 24 16:03:57 bldr-ccm20 kernel: BUG: soft lockup detected on CPU#0!
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c044a0b7>] softlockup_tick+0x98/0xa6
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042cc98>] update_process_times+0x39/0x5c
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0424130>] vprintk+0x288/0x2bc
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04d8067>] cfq_slice_async_store+0x5/0x38
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0406406>] do_IRQ+0xa5/0xae
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c040492e>] common_interrupt+0x1a/0x20
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042417c>] printk+0x18/0x8e
> Oct 24 16:03:57 bldr-ccm20 kernel:  [<f89a9812>] kvm_flush_remote_tlbs+0xe0/0xf2 [kvm]
> ...
>
>
> I'd like to get a solution for RHEL5, so I am attempting to backport smp_call_function_mask(). I'm open to other suggestions if you think it is corruption or the problem is somewhere else.
>   

No, it looks like the problem is indeed in kvm_flush_remote_tlbs(), and 
not a corruption elsewhere.

Things to check:

- whether cpus_weight(mask) == needed
- whether wrapping the whole thing in preempt_disable()/preempt_enable() 
helps

hey! I see a bug!

>                         continue;
>                 cpu = vcpu->cpu;
>                 if (cpu != -1 && cpu != raw_smp_processor_id())
>                         if (!cpu_isset(cpu, cpus)) {
>                                 cpu_set(cpu, cpus);
>                                 ++needed;
>                         }
>         }
>

vcpu->cpu can change during execution of this snippet, due to a vcpu 
being migrated concurrently with it.  Since the 
compiler is free to reload 'cpu' from 'vcpu->cpu', the code can operate 
on corrupted data.

A 'barrier();' after 'cpu = vcpu->cpu;' should fix it, if this is indeed 
the bug.


-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]             ` <47209E23.8080808-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-25 14:07               ` david ahern
  2007-10-25 18:34               ` david ahern
  1 sibling, 0 replies; 10+ messages in thread
From: david ahern @ 2007-10-25 14:07 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

I'll give your suggestions a try. I need to move to a server that I can forcibly reboot remotely (to recover), so it will be a while.

david


Avi Kivity wrote:
> david ahern wrote:
>> As a quick test I added a printk to the loop, right after the while():
>>
>>      while (atomic_read(&completed) != needed) {
>>          printk("kvm_flush_remote_tlbs: completed = %d, needed = %d\n", atomic_read(&completed), needed);
>>          cpu_relax();
>>          barrier();
>>      }
>>
>>
>> This is the output right before a lockup:
>>
>> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
>> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
>> Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 1, needed = 2
>> Oct 24 16:03:57 bldr-ccm20 last message repeated 105738 times
>> Oct 24 16:03:57 bldr-ccm20 kernel: BUG: soft lockup detected on CPU#0!
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c044a0b7>] softlockup_tick+0x98/0xa6
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042cc98>] update_process_times+0x39/0x5c
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0424130>] vprintk+0x288/0x2bc
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04d8067>] cfq_slice_async_store+0x5/0x38
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0406406>] do_IRQ+0xa5/0xae
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c040492e>] common_interrupt+0x1a/0x20
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042417c>] printk+0x18/0x8e
>> Oct 24 16:03:57 bldr-ccm20 kernel:  [<f89a9812>] kvm_flush_remote_tlbs+0xe0/0xf2 [kvm]
>> ...
>>
>>
>> I'd like to get a solution for RHEL5, so I am attempting to backport smp_call_function_mask(). I'm open to other suggestions if you think it is corruption or the problem is somewhere else.
>>   
> 
> No, it looks like the problem is indeed in kvm_flush_remote_tlbs(), and 
> not a corruption elsewhere.
> 
> Things to check:
> 
> - whether cpus_weight(mask) == needed
> - whether wrapping the whole thing in preempt_disable()/preempt_enable() 
> helps
> 
> hey! I see a bug!
> 
>>                         continue;
>>                 cpu = vcpu->cpu;
>>                 if (cpu != -1 && cpu != raw_smp_processor_id())
>>                         if (!cpu_isset(cpu, cpus)) {
>>                                 cpu_set(cpu, cpus);
>>                                 ++needed;
>>                         }
>>         }
>>
> 
> vcpu->cpu can change during execution of this snippet, due to a vcpu 
> being migrated concurrently with it.  Since the 
> compiler is free to reload 'cpu' from 'vcpu->cpu', the code can operate 
> on corrupted data.
> 
> A 'barrier();' after 'cpu = vcpu->cpu;' should fix it, if this is indeed 
> the bug.
> 
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]             ` <47209E23.8080808-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2007-10-25 14:07               ` david ahern
@ 2007-10-25 18:34               ` david ahern
       [not found]                 ` <4720E19B.20802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: david ahern @ 2007-10-25 18:34 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

The issue appears to be with the RHEL5 kernel (host OS is rhel5). I tried your suggestions below -- no effect; still hit the softlockup.

I then moved the host to the 2.6.23.1 kernel but with the kvm-48 code base. Surprisingly, I had no issues starting my guest with '-smp 4'.

david


Avi Kivity wrote:  
> 
> No, it looks like the problem is indeed in kvm_flush_remote_tlbs(), and 
> not a corruption elsewhere.
> 
> Things to check:
> 
> - whether cpus_weight(mask) == needed
> - whether wrapping the whole thing in preempt_disable()/preempt_enable() 
> helps
>
> hey! I see a bug!
> 
>>                         continue;
>>                 cpu = vcpu->cpu;
>>                 if (cpu != -1 && cpu != raw_smp_processor_id())
>>                         if (!cpu_isset(cpu, cpus)) {
>>                                 cpu_set(cpu, cpus);
>>                                 ++needed;
>>                         }
>>         }
>>
> 
> vcpu->cpu can change during execution of this snippet, due to a vcpu 
> being migrated concurrently with it.  Since the 
> compiler is free to reload 'cpu' from 'vcpu->cpu', the code can operate 
> on corrupted data.
> 
> A 'barrier();' after 'cpu = vcpu->cpu;' should fix it, if this is indeed 
> the bug.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]                 ` <4720E19B.20802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-10-25 18:35                   ` Avi Kivity
       [not found]                     ` <4720E1EB.6020700-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2007-10-25 18:35 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

david ahern wrote:
> The issue appears to be with the RHEL5 kernel (host OS is rhel5). I tried your suggestions below -- no effect; still hit the softlockup.
>
> I then moved the host to the 2.6.23.1 kernel but with the kvm-48 code base. Surprisingly, I had no issues starting my guest with '-smp 4'.
>
>   

Well, I regularly start up 4-way Linux guests on 2.6.bleeding.edge.  I'd 
like to clear that problem for users on older kernels, though.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: soft lockup  in kvm_flush_remote_tlbs
       [not found]                     ` <4720E1EB.6020700-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-25 23:05                       ` david ahern
  0 siblings, 0 replies; 10+ messages in thread
From: david ahern @ 2007-10-25 23:05 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

It appears to be a problem with the kernel proper (i.e., not a Red Hat patch). I hit the soft lockup problem with kvm-48 and the 2.6.18.4 kernel, which is the base for RHEL5. That suggests a change delivered between 2.6.18.4 (November 2006) and 2.6.23.1 fixed it.

david


Avi Kivity wrote:
> david ahern wrote:
>> The issue appears to be with the RHEL5 kernel (host OS is rhel5). I tried your suggestions below -- no effect; still hit the softlockup.
>>
>> I then moved the host to the 2.6.23.1 kernel but with the kvm-48 code base. Surprisingly, I had no issues starting my guest with '-smp 4'.
>>
>>   
> 
> Well, I regularly start up 4-way Linux guests on 2.6.bleeding.edge.  I'd 
> like to clear that problem for users on older kernels, though.
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2007-10-25 23:05 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-10-24 23:00 soft lockup in kvm_flush_remote_tlbs david ahern
     [not found] ` <471FCEA6.6000903-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-10-24 23:15   ` Laurent Vivier
     [not found]     ` <471FD204.6040400-6ktuUTfB/bM@public.gmane.org>
2007-10-25  4:26       ` david ahern
2007-10-25  6:47   ` Avi Kivity
     [not found]     ` <47203BEB.9070509-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-25 13:30       ` david ahern
     [not found]         ` <47209A77.3090503-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-10-25 13:46           ` Avi Kivity
     [not found]             ` <47209E23.8080808-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-25 14:07               ` david ahern
2007-10-25 18:34               ` david ahern
     [not found]                 ` <4720E19B.20802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-10-25 18:35                   ` Avi Kivity
     [not found]                     ` <4720E1EB.6020700-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-25 23:05                       ` david ahern

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox