qemu-devel.nongnu.org archive mirror
* [Qemu-devel] vhost-net thread getting stuck ?
@ 2013-01-09 20:25 Chegu Vinod
  2013-01-10  4:35 ` Jason Wang
  0 siblings, 1 reply; 3+ messages in thread
From: Chegu Vinod @ 2013-01-09 20:25 UTC (permalink / raw)
  To: KVM, qemu-devel


Hello,

I'm running into an issue with the latest bits. [ Please see below: the
vhost thread seems to be getting stuck while trying to memcpy, perhaps
a bad address? ] Wondering if this is a known issue or a recent
regression.

I'm using the latest qemu (from qemu.git) and the latest kvm.git kernel
on the host. I started the guest with the following command line:

/usr/local/bin/qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-smp sockets=8,cores=10,threads=1 \
-numa node,nodeid=0,cpus=0-9,mem=64g \
-numa node,nodeid=1,cpus=10-19,mem=64g \
-numa node,nodeid=2,cpus=20-29,mem=64g \
-numa node,nodeid=3,cpus=30-39,mem=64g \
-numa node,nodeid=4,cpus=40-49,mem=64g \
-numa node,nodeid=5,cpus=50-59,mem=64g \
-numa node,nodeid=6,cpus=60-69,mem=64g \
-numa node,nodeid=7,cpus=70-79,mem=64g \
-m 524288 \
-mem-path /dev/hugepages \
-name vm2 \
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm2.monitor,server,nowait \
-drive file=/dev/libvirt_lvm2/vm2,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-monitor stdio \
-net nic,model=virtio,macaddr=52:54:00:71:01:02,netdev=nic-0 \
-netdev tap,id=nic-0,ifname=tap0,script=no,downscript=no,vhost=on \
-vnc :4
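As a quick sanity check on the sizing above (a minimal illustrative snippet,
not part of the guest setup): eight NUMA nodes of 64 GiB each should add up
to the -m value, which QEMU interprets here in MiB.

```python
# Sanity-check the NUMA sizing from the command line above:
# 8 nodes with mem=64g each should equal -m 524288 (MiB).
nodes = 8
mem_per_node_gib = 64
total_mib = 524288  # value passed to -m (MiB)

assert nodes * mem_per_node_gib * 1024 == total_mib
print(nodes * mem_per_node_gib * 1024)  # 524288
```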


I was just doing a basic kernel build in the guest when it hung, with
the following in the host's dmesg.

Thanks
Vinod

BUG: soft lockup - CPU#46 stuck for 23s! [vhost-135220:135231]
Modules linked in: kvm_intel kvm fuse ip6table_filter ip6_tables 
ebtable_nat ebtables nf_conntrack_ipv4 nf_defrag_ipv4 xt_state 
nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle iptable_filter 
ip_tables bridge stp llc autofs4 sunrpc pcc_cpufreq ipv6 vhost_net 
macvtap macvlan tun uinput iTCO_wdt iTCO_vendor_support coretemp 
crc32c_intel ghash_clmulni_intel microcode pcspkr mlx4_core be2net 
lpc_ich mfd_core hpilo hpwdt i7core_edac edac_core sg netxen_nic ext4 
mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif aesni_intel ablk_helper 
cryptd lrw aes_x86_64 xts gf128mul pata_acpi ata_generic ata_piix hpsa 
lpfc scsi_transport_fc scsi_tgt radeon ttm drm_kms_helper drm 
i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last 
unloaded: kvm]
CPU 46
Pid: 135231, comm: vhost-135220 Not tainted 3.7.0+ #1 HP ProLiant DL980 G7
RIP: 0010:[<ffffffff8147bab0>]  [<ffffffff8147bab0>] 
skb_flow_dissect+0x1b0/0x440
RSP: 0018:ffff881ffd131bc8  EFLAGS: 00000246
RAX: ffff8a1f7dc70c00 RBX: 00000000ffffffff RCX: 0000000000007fa0
RDX: 0000000000000000 RSI: ffff881ffd131c68 RDI: ffff8a1ff1bd6c80
RBP: ffff881ffd131c58 R08: ffff881ffd131bf8 R09: ffff8a1ff1bd6c80
R10: 0000000000000010 R11: 0000000000000004 R12: ffff8a1ff1bd6c80
R13: 000000000000000b R14: ffffffff8147330b R15: ffff881ffd131b58
FS:  0000000000000000(0000) GS:ffff8a1fff980000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000003d5c810dc0 CR3: 0000009f77c04000 CR4: 00000000000027e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process vhost-135220 (pid: 135231, threadinfo ffff881ffd130000, task 
ffff881ffcb754c0)
Stack:
  ffff881ffd131c18 ffffffff81477b90 00000000000000e2 00002b289bcc58ce
  ffff881ffd131ce4 00000000000000a2 0000000000000000 00000000000000a2
  00000000000000a2 00000000000000a2 ffff881ffd131c88 00000000937e754e
Call Trace:
  [<ffffffff81477b90>] ? memcpy_fromiovecend+0x90/0xd0
  [<ffffffff8147f3ca>] __skb_get_rxhash+0x1a/0xe0
  [<ffffffffa03c90f8>] tun_get_user+0x468/0x660 [tun]
  [<ffffffff81090010>] ? __sdt_alloc+0x80/0x1a0
  [<ffffffffa03c934d>] tun_sendmsg+0x5d/0x80 [tun]
  [<ffffffffa0468e8a>] handle_tx+0x34a/0x680 [vhost_net]
  [<ffffffffa04691f5>] handle_tx_kick+0x15/0x20 [vhost_net]
  [<ffffffffa0466dfc>] vhost_worker+0x10c/0x1c0 [vhost_net]
  [<ffffffffa0466cf0>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
  [<ffffffffa0466cf0>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
  [<ffffffff8107ecfe>] kthread+0xce/0xe0
  [<ffffffff8107ec30>] ? kthread_freezable_should_stop+0x70/0x70
  [<ffffffff815537ac>] ret_from_fork+0x7c/0xb0
  [<ffffffff8107ec30>] ? kthread_freezable_should_stop+0x70/0x70
Code: b6 50 06 48 89 ce 48 c1 ee 20 31 f1 41 89 0e 48 8b 48 20 48 33 48 
18 48 89 c8 48 c1 e8 20 31 c1 41 89 4e 04 e9 35 ff ff ff 66 90 <0f> b6 
50 09 e9 1a ff ff ff 0f 1f 80 00 00 00 00 41 8b 44 24 68
[root@hydra11 kvm_rik]#
Message from syslogd@hydra11 at Jan  9 13:06:58 ...
  kernel:BUG: soft lockup - CPU#46 stuck for 22s! [vhost-135220:135231]

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [Qemu-devel] vhost-net thread getting stuck ?
  2013-01-09 20:25 [Qemu-devel] vhost-net thread getting stuck ? Chegu Vinod
@ 2013-01-10  4:35 ` Jason Wang
  2013-01-10  5:13   ` Chegu Vinod
  0 siblings, 1 reply; 3+ messages in thread
From: Jason Wang @ 2013-01-10  4:35 UTC (permalink / raw)
  To: Chegu Vinod; +Cc: qemu-devel, KVM

On 01/10/2013 04:25 AM, Chegu Vinod wrote:
>
> Hello,
>
> I'm running into an issue with the latest bits. [ Please see below: the
> vhost thread seems to be getting stuck while trying to memcpy, perhaps
> a bad address? ] Wondering if this is a known issue or a recent
> regression.

Hi:

It looks like this issue has been fixed by the following commits; does
your tree contain them?

499744209b2cbca66c42119226e5470da3bb7040 and
76fe45812a3b134c39170ca32dfd4b7217d33145.

They have been merged into Linus's 3.8-rc tree.
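To check whether a tree already contains given commits, `git merge-base
--is-ancestor` (or `git branch --contains`) works. A self-contained sketch
using a throwaway repo for illustration; against a real kernel checkout you
would run the last two commands with the actual commit ids above:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "fix"
fix=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "later work"

# Exit status 0 means $fix is reachable from HEAD, i.e. the tree has the fix.
git merge-base --is-ancestor "$fix" HEAD && echo "tree contains the fix"

# Alternatively, list local branches that contain the commit:
git branch --contains "$fix"
```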

Thanks

* Re: [Qemu-devel] vhost-net thread getting stuck ?
  2013-01-10  4:35 ` Jason Wang
@ 2013-01-10  5:13   ` Chegu Vinod
  0 siblings, 0 replies; 3+ messages in thread
From: Chegu Vinod @ 2013-01-10  5:13 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel, KVM

On 1/9/2013 8:35 PM, Jason Wang wrote:
> On 01/10/2013 04:25 AM, Chegu Vinod wrote:
>> Hello,
>>
>> I'm running into an issue with the latest bits. [ Please see below: the
>> vhost thread seems to be getting stuck while trying to memcpy, perhaps
>> a bad address? ] Wondering if this is a known issue or a recent
>> regression.
> Hi:
>
> It looks like this issue has been fixed by the following commits; does
> your tree contain them?
>
> 499744209b2cbca66c42119226e5470da3bb7040 and
> 76fe45812a3b134c39170ca32dfd4b7217d33145.
>
> They have been merged into Linus's 3.8-rc tree.

I was using the kvm.git kernel (as of this morning); it looks like the
fixes aren't there yet.

I'll try Linus's 3.8-rc tree.

Thanks!
Vinod

