From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:50923) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1TtASg-000393-S1 for qemu-devel@nongnu.org; Thu, 10 Jan 2013 00:13:57 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1TtASf-0005wy-K2 for qemu-devel@nongnu.org; Thu, 10 Jan 2013 00:13:54 -0500
Received: from g4t0017.houston.hp.com ([15.201.24.20]:11720) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1TtASf-0005wu-Db for qemu-devel@nongnu.org; Thu, 10 Jan 2013 00:13:53 -0500
Message-ID: <50EE4E0F.6050005@hp.com>
Date: Wed, 09 Jan 2013 21:13:51 -0800
From: Chegu Vinod
MIME-Version: 1.0
References: <50EDD221.7090608@hp.com> <50EE4525.1070601@redhat.com>
In-Reply-To: <50EE4525.1070601@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] vhost-net thread getting stuck ?
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Jason Wang
Cc: qemu-devel qemu-devel , KVM

On 1/9/2013 8:35 PM, Jason Wang wrote:
> On 01/10/2013 04:25 AM, Chegu Vinod wrote:
>> Hello,
>>
>> I'm running into an issue with the latest bits. [ Please see below: the
>> vhost thread seems to be getting stuck while trying to memcpy...
>> perhaps a bad address? ] Wondering if this is a known issue or
>> some recent regression?
> Hi:
>
> Looks like the issue has been fixed by the following commits; does your
> tree contain these?
>
> 499744209b2cbca66c42119226e5470da3bb7040 and
> 76fe45812a3b134c39170ca32dfd4b7217d33145.
>
> They have been merged into Linus's 3.8-rc tree.

I was using the kvm.git kernel (as of this morning)... it looks like the
fixes aren't there yet. Will try Linus's 3.8-rc tree. (A quick git check
for whether a tree contains these commits is sketched below, after the
quoted log.)

Thanks!
Vinod

>
> Thanks
>> I'm using the latest qemu (from qemu.git) and the latest kvm.git
>> kernel on the host. Started the guest using the following command line:
>>
>> /usr/local/bin/qemu-system-x86_64 \
>> -enable-kvm \
>> -cpu host \
>> -smp sockets=8,cores=10,threads=1 \
>> -numa node,nodeid=0,cpus=0-9,mem=64g \
>> -numa node,nodeid=1,cpus=10-19,mem=64g \
>> -numa node,nodeid=2,cpus=20-29,mem=64g \
>> -numa node,nodeid=3,cpus=30-39,mem=64g \
>> -numa node,nodeid=4,cpus=40-49,mem=64g \
>> -numa node,nodeid=5,cpus=50-59,mem=64g \
>> -numa node,nodeid=6,cpus=60-69,mem=64g \
>> -numa node,nodeid=7,cpus=70-79,mem=64g \
>> -m 524288 \
>> -mem-path /dev/hugepages \
>> -name vm2 \
>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm2.monitor,server,nowait \
>> -drive file=/dev/libvirt_lvm2/vm2,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
>> -monitor stdio \
>> -net nic,model=virtio,macaddr=52:54:00:71:01:02,netdev=nic-0 \
>> -netdev tap,id=nic-0,ifname=tap0,script=no,downscript=no,vhost=on \
>> -vnc :4
>>
>>
>> I was just doing a basic kernel build in the guest when it hung, with
>> the following showing up in the host's dmesg.
>>
>> Thanks
>> Vinod
>>
>> BUG: soft lockup - CPU#46 stuck for 23s! [vhost-135220:135231]
>> Modules linked in: kvm_intel kvm fuse ip6table_filter ip6_tables
>> ebtable_nat ebtables nf_conntrack_ipv4 nf_defrag_ipv4 xt_state
>> nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle iptable_filter
>> ip_tables bridge stp llc autofs4 sunrpc pcc_cpufreq ipv6 vhost_net
>> macvtap macvlan tun uinput iTCO_wdt iTCO_vendor_support coretemp
>> crc32c_intel ghash_clmulni_intel microcode pcspkr mlx4_core be2net
>> lpc_ich mfd_core hpilo hpwdt i7core_edac edac_core sg netxen_nic ext4
>> mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif aesni_intel ablk_helper
>> cryptd lrw aes_x86_64 xts gf128mul pata_acpi ata_generic ata_piix hpsa
>> lpfc scsi_transport_fc scsi_tgt radeon ttm drm_kms_helper drm
>> i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: kvm]
>> CPU 46
>> Pid: 135231, comm: vhost-135220 Not tainted 3.7.0+ #1 HP ProLiant DL980 G7
>> RIP: 0010:[] [] skb_flow_dissect+0x1b0/0x440
>> RSP: 0018:ffff881ffd131bc8 EFLAGS: 00000246
>> RAX: ffff8a1f7dc70c00 RBX: 00000000ffffffff RCX: 0000000000007fa0
>> RDX: 0000000000000000 RSI: ffff881ffd131c68 RDI: ffff8a1ff1bd6c80
>> RBP: ffff881ffd131c58 R08: ffff881ffd131bf8 R09: ffff8a1ff1bd6c80
>> R10: 0000000000000010 R11: 0000000000000004 R12: ffff8a1ff1bd6c80
>> R13: 000000000000000b R14: ffffffff8147330b R15: ffff881ffd131b58
>> FS: 0000000000000000(0000) GS:ffff8a1fff980000(0000) knlGS:0000000000000000
>> CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
>> CR2: 0000003d5c810dc0 CR3: 0000009f77c04000 CR4: 00000000000027e0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Process vhost-135220 (pid: 135231, threadinfo ffff881ffd130000, task ffff881ffcb754c0)
>> Stack:
>> ffff881ffd131c18 ffffffff81477b90 00000000000000e2 00002b289bcc58ce
>> ffff881ffd131ce4 00000000000000a2 0000000000000000 00000000000000a2
>> 00000000000000a2 00000000000000a2 ffff881ffd131c88 00000000937e754e
>> Call Trace:
>> [] ? memcpy_fromiovecend+0x90/0xd0
>> [] __skb_get_rxhash+0x1a/0xe0
>> [] tun_get_user+0x468/0x660 [tun]
>> [] ? __sdt_alloc+0x80/0x1a0
>> [] tun_sendmsg+0x5d/0x80 [tun]
>> [] handle_tx+0x34a/0x680 [vhost_net]
>> [] handle_tx_kick+0x15/0x20 [vhost_net]
>> [] vhost_worker+0x10c/0x1c0 [vhost_net]
>> [] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
>> [] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
>> [] kthread+0xce/0xe0
>> [] ? kthread_freezable_should_stop+0x70/0x70
>> [] ret_from_fork+0x7c/0xb0
>> [] ? kthread_freezable_should_stop+0x70/0x70
>> Code: b6 50 06 48 89 ce 48 c1 ee 20 31 f1 41 89 0e 48 8b 48 20 48 33
>> 48 18 48 89 c8 48 c1 e8 20 31 c1 41 89 4e 04 e9 35 ff ff ff 66 90 <0f>
>> b6 50 09 e9 1a ff ff ff 0f 1f 80 00 00 00 00 41 8b 44 24 68
>> [root@hydra11 kvm_rik]#
>> Message from syslogd@hydra11 at Jan 9 13:06:58 ...
>> kernel:BUG: soft lockup - CPU#46 stuck for 22s! [vhost-135220:135231]
>>
>>
> .
>
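
A quick way to check whether a tree already contains the two fix commits
Jason pointed at (a sketch only, assuming a reasonably recent git in the
kernel checkout; the path below is a placeholder for wherever kvm.git is
cloned):

    cd /path/to/kvm.git   # hypothetical location of the kernel checkout
    for c in 499744209b2cbca66c42119226e5470da3bb7040 \
             76fe45812a3b134c39170ca32dfd4b7217d33145; do
        # merge-base --is-ancestor exits 0 only when the commit is
        # reachable from HEAD, i.e. the fix is already part of the
        # checked-out history; an unknown object also reports "missing".
        if git merge-base --is-ancestor "$c" HEAD 2>/dev/null; then
            echo "$c: present"
        else
            echo "$c: missing"
        fi
    done

On an older git that lacks --is-ancestor, "git branch --contains <commit>"
answers the same question.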