From mboxrd@z Thu Jan  1 00:00:00 1970
From: david ahern
Subject: Re: soft lockup in kvm_flush_remote_tlbs
Date: Thu, 25 Oct 2007 07:30:31 -0600
Message-ID: <47209A77.3090503@cisco.com>
References: <471FCEA6.6000903@cisco.com> <47203BEB.9070509@qumranet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
To: Avi Kivity
In-Reply-To: <47203BEB.9070509-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Errors-To: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

As a quick test I added a printk to the loop, right after the while():

    while (atomic_read(&completed) != needed) {
        printk("kvm_flush_remote_tlbs: completed = %d, needed = %d\n",
               atomic_read(&completed), needed);
        cpu_relax();
        barrier();
    }

This is the output right before a lockup:

Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 1, needed = 2
Oct 24 16:03:57 bldr-ccm20 last message repeated 105738 times
Oct 24 16:03:57 bldr-ccm20 kernel: BUG: soft lockup detected on CPU#0!
Oct 24 16:03:57 bldr-ccm20 kernel: [] softlockup_tick+0x98/0xa6
Oct 24 16:03:57 bldr-ccm20 kernel: [] update_process_times+0x39/0x5c
Oct 24 16:03:57 bldr-ccm20 kernel: [] smp_apic_timer_interrupt+0x5c/0x64
Oct 24 16:03:57 bldr-ccm20 kernel: [] apic_timer_interrupt+0x1f/0x24
Oct 24 16:03:57 bldr-ccm20 kernel: [] vprintk+0x288/0x2bc
Oct 24 16:03:57 bldr-ccm20 kernel: [] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel: [] cfq_slice_async_store+0x5/0x38
Oct 24 16:03:57 bldr-ccm20 kernel: [] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel: [] do_IRQ+0xa5/0xae
Oct 24 16:03:57 bldr-ccm20 kernel: [] common_interrupt+0x1a/0x20
Oct 24 16:03:57 bldr-ccm20 kernel: [] printk+0x18/0x8e
Oct 24 16:03:57 bldr-ccm20 kernel: [] kvm_flush_remote_tlbs+0xe0/0xf2 [kvm]
...

I'd like to get a solution for RHEL5, so I am attempting to backport
smp_call_function_mask(). I'm open to other suggestions if you think it
is corruption or the problem is somewhere else.

thanks,
david

Avi Kivity wrote:
> david ahern wrote:
>> I am trying, unsuccessfully so far, to get a vm running with 4 cpus.
>> It is failing with a soft lockup:
>>
>> BUG: soft lockup detected on CPU#3!
>> [] softlockup_tick+0x98/0xa6
>> [] update_process_times+0x39/0x5c
>> [] smp_apic_timer_interrupt+0x5c/0x64
>> [] apic_timer_interrupt+0x1f/0x24
>> [] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>> [] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>> [] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>> [] x86_emulate_insn+0x20d8/0x3348 [kvm]
>> [] x86_decode_insn+0x624/0x872 [kvm]
>> [] emulate_instruction+0x12b/0x258 [kvm]
>> [] handle_exception+0x163/0x23f [kvm_intel]
>> [] kvm_handle_exit+0x70/0x8a [kvm_intel]
>> [] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>> [] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>> [] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>> [] save_i387+0x23f/0x273
>> [] __next_cpu+0x12/0x21
>> [] find_busiest_group+0x177/0x462
>> [] setup_sigcontext+0x10d/0x190
>> [] get_page_from_freelist+0x96/0x310
>> [] get_page_from_freelist+0x2a6/0x310
>> [] flush_tlb_others+0x83/0xb3
>> [] flush_tlb_page+0x74/0x77
>> [] set_page_dirty_balance+0x8/0x35
>> [] do_wp_page+0x3a5/0x3bd
>> [] dequeue_signal+0x2d/0x9c
>> [] __handle_mm_fault+0x81b/0x87b
>> [] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>> [] do_ioctl+0x1c/0x5d
>> [] vfs_ioctl+0x24a/0x25c
>> [] sys_ioctl+0x48/0x5f
>> [] syscall_call+0x7/0xb
>>
>>
>> I am working with kvm-48, but also tried the 20071020 snapshot. The
>> stuck code is kvm_flush_remote_tlbs():
>>
>>     while (atomic_read(&completed) != needed) {
>>         cpu_relax();
>>         barrier();
>>     }
>>
>> which I take to mean one of the CPUs is not ack'ing the TLB flush
>> request.
>>
>
> I don't think it's a cpu not responding. I've stared at the code for a
> while (we had this before) and the actual IPI/ack is fine.
>
> What's probably happening is that corruption of the mmu data structures
> is causing kvm_flush_remote_tlbs() to be called repeatedly. Since it's
> a very slow function, the lockup detector blames it for any lockup it
> sees even though it is innocent.
>
> [we had exactly this issue before and it was indeed fixed after an rmap
> corruption was corrected]
>
>> Is this a known bug, and are there any options to correct it? It works
>> fine with 2 vcpus, but for a comparison with xen I'd like to get the vm
>> working with 4.
>>
>
> - please send (privately, it's big) an 'objdump -Sr' of mmu.o
> - what guest are you running? if it's publicly available, I can try to
>   replicate it
> - at what stage does the failure occur? if it's early on, we can try
>   running with AUDIT or DEBUG
> - otherwise, I'll send debugging patches to try and see what's going on