* repeatable hang with loop mount and heavy IO in guest
@ 2010-01-21 17:26 Antoine Martin
2010-01-21 20:08 ` RW
2010-01-22 7:57 ` Michael Tokarev
0 siblings, 2 replies; 13+ messages in thread
From: Antoine Martin @ 2010-01-21 17:26 UTC (permalink / raw)
To: kvm
I've tried various guests, including the most recent Fedora 12 kernels and
custom 2.6.32.x builds.
All of them hang around the same point (~1GB written) when I do heavy IO
writes inside the guest.
I have waited 30 minutes to see if the guest would recover, but it just
sits there, not writing back any data, not doing anything - but
certainly not allowing any new IO writes. The host has some load on it,
but nothing heavy enough to completely hang a guest for that long.
mount -o loop some_image.fs ./somewhere bs=512
dd if=/dev/zero of=/somewhere/zero
then after ~1GB: sync
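The steps above, put together as one guarded script (a sketch: the image path and mount point are the placeholders from the mail, the stray "bs=512" on the mount line is assumed to belong to dd - mount has no such option - and the loop mount needs root):

```shell
# Reproduction sketch; some_image.fs and ./somewhere are placeholders.
IMG=some_image.fs
MNT=./somewhere

repro() {
    mount -o loop "$IMG" "$MNT" &&
        dd if=/dev/zero of="$MNT/zero" bs=512
    sync    # on affected guests this is roughly where everything wedges
}

# Only attempt the real thing as root with the image present.
if [ "$(id -u)" -eq 0 ] && [ -f "$IMG" ]; then
    repro
else
    echo "skipping: need root and $IMG"
fi
```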
Host is running: 2.6.31.4
QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
Guests are booted with "elevator=noop" as the filesystems are stored as
files, accessed as virtio disks.
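For reference, the elevator chosen at boot can also be inspected and switched at runtime through sysfs ("vda" is an example device name, not taken from the mail):

```shell
# Show the active scheduler (marked with [brackets]) and switch it.
DEV=vda
SCHED=/sys/block/$DEV/queue/scheduler

if [ -r "$SCHED" ]; then
    cat "$SCHED"
    if [ -w "$SCHED" ]; then
        echo noop > "$SCHED"    # switching requires root
    fi
else
    echo "no block device $DEV on this machine"
fi
```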
The "hung" backtraces always look similar to these:
[ 361.460136] INFO: task loop0:2097 blocked for more than 120 seconds.
[ 361.460139] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 361.460142] loop0 D ffff88000b92c848 0 2097 2
0x00000080
[ 361.460148] ffff88000b92c5d0 0000000000000046 ffff880008c1f810
ffff880009829fd8
[ 361.460153] ffff880009829fd8 ffff880009829fd8 ffff88000a21ee80
ffff88000b92c5d0
[ 361.460157] ffff880009829610 ffffffff8181b768 ffff880001af33b0
0000000000000002
[ 361.460161] Call Trace:
[ 361.460216] [<ffffffff8105bf12>] ? sync_page+0x0/0x43
[ 361.460253] [<ffffffff8151383e>] ? io_schedule+0x2c/0x43
[ 361.460257] [<ffffffff8105bf50>] ? sync_page+0x3e/0x43
[ 361.460261] [<ffffffff81513a2a>] ? __wait_on_bit+0x41/0x71
[ 361.460264] [<ffffffff8105c092>] ? wait_on_page_bit+0x6a/0x70
[ 361.460283] [<ffffffff810385a7>] ? wake_bit_function+0x0/0x23
[ 361.460287] [<ffffffff81064975>] ? shrink_page_list+0x3e5/0x61e
[ 361.460291] [<ffffffff81513992>] ? schedule_timeout+0xa3/0xbe
[ 361.460305] [<ffffffff81038579>] ? autoremove_wake_function+0x0/0x2e
[ 361.460308] [<ffffffff8106538f>] ? shrink_zone+0x7e1/0xaf6
[ 361.460310] [<ffffffff81061725>] ? determine_dirtyable_memory+0xd/0x17
[ 361.460314] [<ffffffff810637da>] ? isolate_pages_global+0xa3/0x216
[ 361.460316] [<ffffffff81062712>] ? mark_page_accessed+0x2a/0x39
[ 361.460335] [<ffffffff810a61db>] ? __find_get_block+0x13b/0x15c
[ 361.460337] [<ffffffff81065ed4>] ? try_to_free_pages+0x1ab/0x2c9
[ 361.460340] [<ffffffff81063737>] ? isolate_pages_global+0x0/0x216
[ 361.460343] [<ffffffff81060baf>] ? __alloc_pages_nodemask+0x394/0x564
[ 361.460350] [<ffffffff8108250c>] ? __slab_alloc+0x137/0x44f
[ 361.460371] [<ffffffff812cc4c1>] ? radix_tree_preload+0x1f/0x6a
[ 361.460374] [<ffffffff81082a08>] ? kmem_cache_alloc+0x5d/0x88
[ 361.460376] [<ffffffff812cc4c1>] ? radix_tree_preload+0x1f/0x6a
[ 361.460379] [<ffffffff8105c0b5>] ? add_to_page_cache_locked+0x1d/0xf1
[ 361.460381] [<ffffffff8105c1b0>] ? add_to_page_cache_lru+0x27/0x57
[ 361.460384] [<ffffffff8105c25a>] ? grab_cache_page_write_begin+0x7a/0xa0
[ 361.460399] [<ffffffff81104620>] ? ext3_write_begin+0x7e/0x201
[ 361.460417] [<ffffffff8134648f>] ? do_lo_send_aops+0xa1/0x174
[ 361.460420] [<ffffffff81081948>] ? virt_to_head_page+0x9/0x2a
[ 361.460422] [<ffffffff8134686b>] ? loop_thread+0x309/0x48a
[ 361.460425] [<ffffffff813463ee>] ? do_lo_send_aops+0x0/0x174
[ 361.460427] [<ffffffff81038579>] ? autoremove_wake_function+0x0/0x2e
[ 361.460430] [<ffffffff81346562>] ? loop_thread+0x0/0x48a
[ 361.460432] [<ffffffff8103819b>] ? kthread+0x78/0x80
[ 361.460441] [<ffffffff810238df>] ? finish_task_switch+0x2b/0x78
[ 361.460454] [<ffffffff81002f6a>] ? child_rip+0xa/0x20
[ 361.460460] [<ffffffff81012ac3>] ? native_pax_close_kernel+0x0/0x32
[ 361.460463] [<ffffffff81038123>] ? kthread+0x0/0x80
[ 361.460469] [<ffffffff81002f60>] ? child_rip+0x0/0x20
[ 361.460471] INFO: task kjournald:2098 blocked for more than 120 seconds.
[ 361.460473] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 361.460474] kjournald D ffff88000b92e558 0 2098 2
0x00000080
[ 361.460477] ffff88000b92e2e0 0000000000000046 ffff88000aad9840
ffff88000983ffd8
[ 361.460480] ffff88000983ffd8 ffff88000983ffd8 ffffffff81808e00
ffff88000b92e2e0
[ 361.460483] ffff88000983fcf0 ffffffff8181b768 ffff880001af3c40
0000000000000002
[ 361.460486] Call Trace:
[ 361.460488] [<ffffffff810a6b16>] ? sync_buffer+0x0/0x3c
[ 361.460491] [<ffffffff8151383e>] ? io_schedule+0x2c/0x43
[ 361.460494] [<ffffffff810a6b4e>] ? sync_buffer+0x38/0x3c
[ 361.460496] [<ffffffff81513a2a>] ? __wait_on_bit+0x41/0x71
[ 361.460499] [<ffffffff810a6b16>] ? sync_buffer+0x0/0x3c
[ 361.460501] [<ffffffff81513ac4>] ? out_of_line_wait_on_bit+0x6a/0x76
[ 361.460504] [<ffffffff810385a7>] ? wake_bit_function+0x0/0x23
[ 361.460514] [<ffffffff8113edad>] ?
journal_commit_transaction+0x769/0xbb8
[ 361.460517] [<ffffffff810238df>] ? finish_task_switch+0x2b/0x78
[ 361.460519] [<ffffffff815137d9>] ? thread_return+0x40/0x79
[ 361.460522] [<ffffffff8114162d>] ? kjournald+0xc7/0x1cb
[ 361.460525] [<ffffffff81038579>] ? autoremove_wake_function+0x0/0x2e
[ 361.460527] [<ffffffff81141566>] ? kjournald+0x0/0x1cb
[ 361.460530] [<ffffffff8103819b>] ? kthread+0x78/0x80
[ 361.460532] [<ffffffff810238df>] ? finish_task_switch+0x2b/0x78
[ 361.460534] [<ffffffff81002f6a>] ? child_rip+0xa/0x20
[ 361.460537] [<ffffffff81012ac3>] ? native_pax_close_kernel+0x0/0x32
[ 361.460540] [<ffffffff81038123>] ? kthread+0x0/0x80
[ 361.460542] [<ffffffff81002f60>] ? child_rip+0x0/0x20
[ 361.460544] INFO: task dd:2132 blocked for more than 120 seconds.
[ 361.460546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 361.460547] dd D ffff88000a21f0f8 0 2132 2090
0x00000080
[ 361.460550] ffff88000a21ee80 0000000000000082 ffff88000a21ee80
ffff88000b3affd8
[ 361.460553] ffff88000b3affd8 ffff88000b3affd8 ffffffff81808e00
ffff880001af3510
[ 361.460556] ffff88000b78eaf0 ffff88000b3daa00 ffff880008de6c40
ffff88000ab44a80
[ 361.460558] Call Trace:
[ 361.460561] [<ffffffff8113dda5>] ? do_get_write_access+0x1f5/0x3b6
[ 361.460564] [<ffffffff81061956>] ? get_dirty_limits+0x1dc/0x210
[ 361.460566] [<ffffffff810385a7>] ? wake_bit_function+0x0/0x23
[ 361.460569] [<ffffffff810a6218>] ? __getblk+0x1c/0x26c
[ 361.460576] [<ffffffff8155d1d0>] ? __func__.28446+0x0/0x20
[ 361.460578] [<ffffffff8113df88>] ? journal_get_write_access+0x22/0x34
[ 361.460582] [<ffffffff8110dd9b>] ?
__ext3_journal_get_write_access+0x1e/0x47
[ 361.460584] [<ffffffff81101c4d>] ? ext3_reserve_inode_write+0x3e/0x75
[ 361.460587] [<ffffffff81101c9a>] ? ext3_mark_inode_dirty+0x16/0x31
[ 361.460589] [<ffffffff81101deb>] ? ext3_dirty_inode+0x62/0x7a
[ 361.460592] [<ffffffff810a10d9>] ? __mark_inode_dirty+0x25/0x134
[ 361.460595] [<ffffffff81098b80>] ? file_update_time+0xd4/0xfb
[ 361.460598] [<ffffffff8105ced8>] ? __generic_file_aio_write+0x16c/0x290
[ 361.460600] [<ffffffff8105d055>] ? generic_file_aio_write+0x59/0x9f
[ 361.460603] [<ffffffff81087ab5>] ? do_sync_write+0xcd/0x112
[ 361.460606] [<ffffffff810132d4>] ? pvclock_clocksource_read+0x3a/0x70
[ 361.460609] [<ffffffff81038579>] ? autoremove_wake_function+0x0/0x2e
[ 361.460612] [<ffffffff81000d1a>] ? __switch_to+0x177/0x255
[ 361.460621] [<ffffffff8127891e>] ? selinux_file_permission+0x4d/0xa3
[ 361.460624] [<ffffffff810883d8>] ? vfs_write+0xfc/0x138
[ 361.460627] [<ffffffff810884d0>] ? sys_write+0x45/0x6e
[ 361.460629] [<ffffffff810020ff>] ? system_call_fastpath+0x16/0x1b
[ 361.460632] INFO: task sync:2164 blocked for more than 120 seconds.
[ 361.460633] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 361.460639] sync D ffff88000ba11f88 0 2164 2136
0x00000080
[ 361.460642] ffff88000ba11d10 0000000000000086 0000000100000246
ffff88000b1e9fd8
[ 361.460645] ffff88000b1e9fd8 ffff88000b1e9fd8 ffffffff81808e00
ffff88000b3daa00
[ 361.460648] 00000000000001cc ffff88000b1e9e68 ffff88000b1e9e80
ffff88000b3daa78
[ 361.460651] Call Trace:
[ 361.460653] [<ffffffff8114122b>] ? log_wait_commit+0x9e/0xe0
[ 361.460656] [<ffffffff81038579>] ? autoremove_wake_function+0x0/0x2e
[ 361.460659] [<ffffffff81108fe7>] ? ext3_sync_fs+0x42/0x4b
[ 361.460669] [<ffffffff810c9711>] ? sync_quota_sb+0x45/0xf6
[ 361.460672] [<ffffffff810a4cd2>] ? __sync_filesystem+0x43/0x70
[ 361.460675] [<ffffffff810a4d86>] ? sync_filesystems+0x87/0xbd
[ 361.460677] [<ffffffff810a4e01>] ? sys_sync+0x1c/0x2e
[ 361.460679] [<ffffffff810020ff>] ? system_call_fastpath+0x16/0x1b
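One detail worth noting in the dd trace: it passes through get_dirty_limits, which suggests the guest's dirty-page writeback thresholds are in play around the ~1GB mark. They can be inspected in the guest with something like:

```shell
# Dirty-writeback tunables that cap how much dirty data may accumulate
# before writers are throttled.
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# How much data is currently dirty or under writeback.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```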
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest
2010-01-21 17:26 repeatable hang with loop mount and heavy IO in guest Antoine Martin
@ 2010-01-21 20:08 ` RW
2010-01-21 21:08 ` Thomas Beinicke
2010-01-22 7:57 ` Michael Tokarev
1 sibling, 1 reply; 13+ messages in thread
From: RW @ 2010-01-21 20:08 UTC (permalink / raw)
To: Antoine Martin; +Cc: kvm
Some months ago I also thought elevator=noop would be a good idea.
But it isn't. It works well as long as you only issue short IO requests.
Try using deadline in both host and guest.
Robert
On 01/21/10 18:26, Antoine Martin wrote:
> I've tried various guests, including the most recent Fedora 12 kernels and
> custom 2.6.32.x builds.
> All of them hang around the same point (~1GB written) when I do heavy IO
> writes inside the guest.
> I have waited 30 minutes to see if the guest would recover, but it just
> sits there, not writing back any data, not doing anything - but
> certainly not allowing any new IO writes. The host has some load on it,
> but nothing heavy enough to completely hang a guest for that long.
>
> mount -o loop some_image.fs ./somewhere bs=512
> dd if=/dev/zero of=/somewhere/zero
> then after ~1GB: sync
>
> Host is running: 2.6.31.4
> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
>
> Guests are booted with "elevator=noop" as the filesystems are stored as
> files, accessed as virtio disks.
>
>
> The "hung" backtraces always look similar to these:
> [hung-task backtraces snipped - identical to the original message above]
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest
2010-01-21 20:08 ` RW
@ 2010-01-21 21:08 ` Thomas Beinicke
2010-01-21 21:36 ` RW
0 siblings, 1 reply; 13+ messages in thread
From: Thomas Beinicke @ 2010-01-21 21:08 UTC (permalink / raw)
To: KVM mailing list
On Thursday 21 January 2010 21:08:38 RW wrote:
> Some months ago I also thought elevator=noop would be a good idea.
> But it isn't. It works well as long as you only issue short IO requests.
> Try using deadline in both host and guest.
>
> Robert
@Robert: I've been using noop on all of my KVMs and haven't had any problems
so far - never a crash either.
Do you have any performance data or comparisons between the noop and deadline
I/O schedulers?
Cheers,
Thomas
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest
2010-01-21 21:08 ` Thomas Beinicke
@ 2010-01-21 21:36 ` RW
0 siblings, 0 replies; 13+ messages in thread
From: RW @ 2010-01-21 21:36 UTC (permalink / raw)
To: Thomas Beinicke; +Cc: KVM mailing list
No, sorry, I don't have any performance data for noop. I haven't
had a crash either, BUT I have experienced severe I/O degradation
with noop. Once I wrote a big chunk of data (e.g. a simple
rsync -av /usr /opt) with noop, it worked for a while, and then
after a few seconds I saw heavy writes which made the
VM virtually unusable. As far as I remember it was kjournald
causing the writes.
I wrote a mail to the list some months ago with some benchmarks:
http://article.gmane.org/gmane.comp.emulators.kvm.devel/41112/match=benchmark
There are some I/O benchmarks in there. You can't get the graphs
at the moment since tauceti.net is offline until Monday. I didn't
test noop in those benchmarks because of the problems
mentioned above, but they compare deadline and cfq a little
on an HP DL 380 G6 server.
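For a quick-and-dirty comparison on a single device, something along these lines works (a sketch only - device name, sizes and paths are examples, not the benchmark from that mail):

```shell
# Time a synchronous 64MB write under each scheduler in turn; switching
# the scheduler needs root, otherwise only the timings are produced.
DEV=sda
OUT=/tmp/sched-test
for sched in noop deadline cfq; do
    f=/sys/block/$DEV/queue/scheduler
    if [ -w "$f" ]; then
        echo "$sched" > "$f"
    fi
    echo "== $sched =="
    dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
done
rm -f "$OUT"
```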
Robert
On 01/21/10 22:08, Thomas Beinicke wrote:
> On Thursday 21 January 2010 21:08:38 RW wrote:
>> Some months ago I also thought elevator=noop would be a good idea.
>> But it isn't. It works well as long as you only issue short IO requests.
>> Try using deadline in both host and guest.
>>
>> Robert
>
> @Robert: I've been using noop on all of my KVMs and haven't had any problems
> so far - never a crash either.
> Do you have any performance data or comparisons between the noop and deadline
> I/O schedulers?
>
> Cheers,
>
> Thomas
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest
2010-01-21 17:26 repeatable hang with loop mount and heavy IO in guest Antoine Martin
2010-01-21 20:08 ` RW
@ 2010-01-22 7:57 ` Michael Tokarev
2010-01-22 18:28 ` repeatable hang with loop mount and heavy IO in guest [SOLVED] Antoine Martin
1 sibling, 1 reply; 13+ messages in thread
From: Michael Tokarev @ 2010-01-22 7:57 UTC (permalink / raw)
To: Antoine Martin; +Cc: kvm
Antoine Martin wrote:
> I've tried various guests, including most recent Fedora12 kernels,
> custom 2.6.32.x
> All of them hang around the same point (~1GB written) when I do heavy IO
> write inside the guest.
[]
> Host is running: 2.6.31.4
> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
Please update to the latest version and repeat. kvm-88 is ancient, and
_lots_ of stuff has been fixed and changed since then; I doubt anyone
here will try to dig into kvm-88 problems.
The current kvm is qemu-kvm-0.12.2, released yesterday.
/mjt
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest [SOLVED]
2010-01-22 7:57 ` Michael Tokarev
@ 2010-01-22 18:28 ` Antoine Martin
2010-01-22 19:15 ` repeatable hang with loop mount and heavy IO in guest [NOT SOLVED] Antoine Martin
0 siblings, 1 reply; 13+ messages in thread
From: Antoine Martin @ 2010-01-22 18:28 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
On 01/22/2010 02:57 PM, Michael Tokarev wrote:
> Antoine Martin wrote:
>
>> I've tried various guests, including most recent Fedora12 kernels,
>> custom 2.6.32.x
>> All of them hang around the same point (~1GB written) when I do heavy IO
>> write inside the guest.
>>
> []
>
>> Host is running: 2.6.31.4
>> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
>>
> Please update to the latest version and repeat. kvm-88 is ancient, and
> _lots_ of stuff has been fixed and changed since then; I doubt anyone
> here will try to dig into kvm-88 problems.
>
> The current kvm is qemu-kvm-0.12.2, released yesterday.
>
Sorry about that, I didn't realize 88 was so far behind.
Upgrading to qemu-kvm-0.12.2 did solve my IO problems.
Found these build issues if anyone is interested:
"--enable-io-thread" gave me:
LINK x86_64-softmmu/qemu-system-x86_64
kvm-all.o: In function `qemu_mutex_lock_iothread':
/usr/src/KVM/qemu-kvm-0.12.2/qemu-kvm.c:2526: multiple definition of
`qemu_mutex_lock_iothread'
vl.o:/usr/src/KVM/qemu-kvm-0.12.2/vl.c:3772: first defined here
kvm-all.o: In function `qemu_mutex_unlock_iothread':
/usr/src/KVM/qemu-kvm-0.12.2/qemu-kvm.c:2520: multiple definition of
`qemu_mutex_unlock_iothread'
vl.o:/usr/src/KVM/qemu-kvm-0.12.2/vl.c:3783: first defined here
collect2: ld returned 1 exit status
And "--enable-cap-kvm-pit" is listed if you look at "--help", but is
rejected if you actually try to use it!?
# ./configure --enable-cap-kvm-pit | grep cap-kvm-pit
ERROR: unknown option --enable-cap-kvm-pit
--disable-cap-kvm-pit disable KVM pit support
--enable-cap-kvm-pit enable KVM pit support
Antoine
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest [NOT SOLVED]
2010-01-22 18:28 ` repeatable hang with loop mount and heavy IO in guest [SOLVED] Antoine Martin
@ 2010-01-22 19:15 ` Antoine Martin
2010-01-24 11:23 ` Antoine Martin
2010-02-03 19:28 ` Antoine Martin
0 siblings, 2 replies; 13+ messages in thread
From: Antoine Martin @ 2010-01-22 19:15 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
On 01/23/2010 01:28 AM, Antoine Martin wrote:
> On 01/22/2010 02:57 PM, Michael Tokarev wrote:
>> Antoine Martin wrote:
>>> I've tried various guests, including most recent Fedora12 kernels,
>>> custom 2.6.32.x
>>> All of them hang around the same point (~1GB written) when I do
>>> heavy IO
>>> write inside the guest.
>> []
>>> Host is running: 2.6.31.4
>>> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
>> Please update to the latest version and repeat. kvm-88 is ancient, and
>> _lots_ of stuff has been fixed and changed since then; I doubt anyone
>> here will try to dig into kvm-88 problems.
>>
>> The current kvm is qemu-kvm-0.12.2, released yesterday.
> Sorry about that, I didn't realize 88 was so far behind.
> Upgrading to qemu-kvm-0.12.2 did solve my IO problems.
Only for a while. The same problem just re-occurred, only this time it went
a little further.
It is now just sitting there, with a load average of 3.0 (+/- 5%).
Here is a good trace of the symptom during writeback: you can see it
writing the data at around 50MB/s, going from idle to sys, but
after a while it just stops writing and sits mostly in wait state:
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
1 0 99 0 0 0| 0 0 | 198B 614B| 0 0 | 36 17
1 0 99 0 0 0| 0 0 | 198B 710B| 0 0 | 31 17
1 1 98 0 0 0| 0 128k| 240B 720B| 0 0 | 39 26
1 1 98 0 0 0| 0 0 | 132B 564B| 0 0 | 31 14
1 0 99 0 0 0| 0 0 | 132B 468B| 0 0 | 31 14
1 1 98 0 0 0| 0 0 | 66B 354B| 0 0 | 30 13
0 4 11 85 0 0| 852k 0 | 444B 1194B| 0 0 | 215 477
2 2 0 96 0 0| 500k 0 | 132B 756B| 0 0 | 169 458
3 57 0 39 1 0| 228k 10M| 132B 692B| 0 0 | 476 5387
6 94 0 0 0 0| 28k 23M| 132B 884B| 0 0 | 373 2142
6 89 0 2 2 0| 40k 38M| 66B 692B| 0 8192B| 502 5651
4 47 0 48 0 0| 140k 34M| 132B 836B| 0 0 | 605 1664
3 64 0 30 2 0| 60k 50M| 132B 370B| 0 60k| 750 631
4 59 0 35 2 0| 48k 45M| 132B 836B| 0 28k| 708 1293
7 81 0 10 2 0| 68k 67M| 132B 788B| 0 124k| 928 1634
5 74 0 20 1 0| 48k 48M| 132B 756B| 0 316k| 830 5715
5 70 0 24 1 0| 168k 48M| 132B 676B| 0 100k| 734 5325
4 70 0 24 1 0| 72k 49M| 132B 948B| 0 88k| 776 3784
5 57 0 37 1 0| 36k 37M| 132B 996B| 0 480k| 602 369
2 21 0 77 0 0| 36k 23M| 132B 724B| 0 72k| 318 1033
4 51 0 43 2 0| 112k 43M| 132B 756B| 0 112k| 681 909
5 55 0 40 0 0| 88k 48M| 140B 926B| 16k 12k| 698 557
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
3 45 0 51 1 0|2248k 29M| 198B 1028B| 28k 44k| 681 5468
1 21 0 78 0 0| 92k 17M|1275B 2049B| 92k 52k| 328 1883
3 30 0 66 1 0| 288k 28M| 498B 2116B| 0 40k| 455 679
1 1 0 98 0 0|4096B 0 | 394B 1340B|4096B 0 | 41 19
1 1 0 98 0 0| 148k 52k| 881B 1592B|4096B 44k| 75 61
1 2 0 97 0 0|1408k 0 | 351B 1727B| 0 0 | 110 109
2 1 0 97 0 0|8192B 0 |1422B 1940B| 0 0 | 53 34
1 0 0 99 0 0|4096B 12k| 328B 1018B| 0 0 | 41 24
1 4 0 95 0 0| 340k 0 |3075B 2152B|4096B 0 | 153 191
4 7 0 89 0 0|1004k 44k|1526B 1906B| 0 0 | 254 244
0 1 0 99 0 0| 76k 0 | 708B 1708B| 0 0 | 67 57
1 1 0 98 0 0| 0 0 | 174B 702B| 0 0 | 32 14
1 1 0 98 0 0| 0 0 | 132B 354B| 0 0 | 32 11
1 0 0 99 0 0| 0 0 | 132B 468B| 0 0 | 32 16
1 0 0 99 0 0| 0 0 | 132B 468B| 0 0 | 32 14
1 1 0 98 0 0| 0 52k| 132B 678B| 0 0 | 41 27
1 0 0 99 0 0| 0 0 | 198B 678B| 0 0 | 35 17
1 1 0 98 0 0| 0 0 | 198B 468B| 0 0 | 34 14
1 0 0 99 0 0| 0 0 | 66B 354B| 0 0 | 28 11
1 0 0 99 0 0| 0 0 | 66B 354B| 0 0 | 28 9
1 1 0 98 0 0| 0 0 | 132B 468B| 0 0 | 34 16
1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
From that point onwards, nothing will happen.
The host has disk IO to spare... So what is it waiting for??
QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c)
2003-2008 Fabrice Bellard
Guests: various, all recent kernels.
Host: 2.6.31.4
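For anyone reproducing this: the columns above match dstat's default layout (an assumption about the tool used), so a comparable capture would be:

```shell
# One-second samples of cpu/disk/net/paging/system activity, 5 rows.
DSTAT_ARGS="--cpu --disk --net --page --sys 1 5"
if command -v dstat >/dev/null 2>&1; then
    dstat $DSTAT_ARGS
else
    echo "dstat not installed (vmstat 1 is a rough fallback)"
fi
```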
Please advise.
Thanks
Antoine
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest [NOT SOLVED]
2010-01-22 19:15 ` repeatable hang with loop mount and heavy IO in guest [NOT SOLVED] Antoine Martin
@ 2010-01-24 11:23 ` Antoine Martin
2010-02-03 19:28 ` Antoine Martin
1 sibling, 0 replies; 13+ messages in thread
From: Antoine Martin @ 2010-01-24 11:23 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
On 01/23/2010 02:15 AM, Antoine Martin wrote:
> On 01/23/2010 01:28 AM, Antoine Martin wrote:
>> On 01/22/2010 02:57 PM, Michael Tokarev wrote:
>>> Antoine Martin wrote:
>>>> I've tried various guests, including most recent Fedora12 kernels,
>>>> custom 2.6.32.x
>>>> All of them hang around the same point (~1GB written) when I do
>>>> heavy IO
>>>> write inside the guest.
>>> []
>>>> Host is running: 2.6.31.4
>>>> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
>>> Please update to the latest version and repeat. kvm-88 is ancient, and
>>> _lots_ of stuff has been fixed and changed since then; I doubt anyone
>>> here will try to dig into kvm-88 problems.
>>>
>>> Current kvm is qemu-kvm-0.12.2, released yesterday.
>> Sorry about that, I didn't realize 88 was so far behind.
>> Upgrading to qemu-kvm-0.12.2 did solve my IO problems.
> Only for a while. The same problem just re-occurred, only this time it
> went a little further.
> It is now just sitting there, with a load average of 3.0 (+/- 5%).
>
> Here is a good trace of the symptom during writeback: you can see it
> writing the data at around 50MB/s, going from idle to sys, but
> after a while it just stops writing and sits mostly in wait state:
[snip]
> From that point onwards, nothing will happen.
> The host has disk IO to spare... So what is it waiting for??
Note: if I fill the disk in the guest with zeroes without going via a
loop-mounted filesystem, then everything works just fine. Something about
using the loopback makes it fall over.
Here is the simplest way to make this happen:
time dd if=/dev/zero of=./test bs=1048576 count=2048
2147483648 bytes (2.1 GB) copied, 65.1344 s, 33.0 MB/s
mkfs.ext3 ./test; mkdir tmp
mount -o loop ./test ./tmp
time dd if=/dev/zero of=./tmp/test-loop bs=1048576 count=2048
^ this one will never return, and you can't just kill "dd" - it's stuck.
The whole guest has to be killed at this point.
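The same recipe as one guarded script (a sketch; RUN_REPRO is a made-up safety switch so the 2GB writes don't fire by accident, and the loop mount needs root inside the guest):

```shell
IMG=./test
MNT=./tmp

if [ "${RUN_REPRO:-0}" = "1" ] && [ "$(id -u)" -eq 0 ]; then
    dd if=/dev/zero of="$IMG" bs=1048576 count=2048
    mkfs.ext3 -F "$IMG"
    mkdir -p "$MNT"
    mount -o loop "$IMG" "$MNT"
    # On affected kernels this write never returns:
    dd if=/dev/zero of="$MNT/test-loop" bs=1048576 count=2048
    umount "$MNT"
else
    echo "set RUN_REPRO=1 and run as root inside the guest"
fi
```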
>
> QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c)
> 2003-2008 Fabrice Bellard
> Guests: various, all recent kernels.
> Host: 2.6.31.4
Before anyone suggests this, I have tried with/without elevator=noop,
with/without virtio disks.
No effect, still hangs.
Antoine
> Please advise.
>
> Thanks
> Antoine
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: repeatable hang with loop mount and heavy IO in guest [NOT SOLVED]
2010-01-22 19:15 ` repeatable hang with loop mount and heavy IO in guest [NOT SOLVED] Antoine Martin
2010-01-24 11:23 ` Antoine Martin
@ 2010-02-03 19:28 ` Antoine Martin
2010-02-26 17:38 ` repeatable hang with loop mount and heavy IO in guest Antoine Martin
1 sibling, 1 reply; 13+ messages in thread
From: Antoine Martin @ 2010-02-03 19:28 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
On 01/23/2010 02:15 AM, Antoine Martin wrote:
> On 01/23/2010 01:28 AM, Antoine Martin wrote:
>> On 01/22/2010 02:57 PM, Michael Tokarev wrote:
>>> Antoine Martin wrote:
>>>> I've tried various guests, including most recent Fedora12 kernels,
>>>> custom 2.6.32.x
>>>> All of them hang around the same point (~1GB written) when I do
>>>> heavy IO
>>>> write inside the guest.
>>> []
>>>> Host is running: 2.6.31.4
>>>> QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
>>> Please update to the latest version and repeat. kvm-88 is ancient, and
>>> _lots_ of stuff has been fixed and changed since then; I doubt anyone
>>> here will try to dig into kvm-88 problems.
>>>
>>> Current kvm is qemu-kvm-0.12.2, released yesterday.
>> Sorry about that, I didn't realize 88 was so far behind.
>> Upgrading to qemu-kvm-0.12.2 did solve my IO problems.
> Only for a while. The same problem just re-occurred, only this time it
> went a little further.
> It is now just sitting there, with a load average steady at ~3.0
>
> Here is a good trace of the symptom during writeback: you can see it
> write the data at around 50MB/s (CPU going from idle to sys), but after
> a while it just stops writing and sits mostly in wait state:
> ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
> usr sys idl wai hiq siq| read writ| recv send| in out | int csw
> 1 0 99 0 0 0| 0 0 | 198B 614B| 0 0 | 36 17
> 1 0 99 0 0 0| 0 0 | 198B 710B| 0 0 | 31 17
> 1 1 98 0 0 0| 0 128k| 240B 720B| 0 0 | 39 26
> 1 1 98 0 0 0| 0 0 | 132B 564B| 0 0 | 31 14
> 1 0 99 0 0 0| 0 0 | 132B 468B| 0 0 | 31 14
> 1 1 98 0 0 0| 0 0 | 66B 354B| 0 0 | 30 13
> 0 4 11 85 0 0| 852k 0 | 444B 1194B| 0 0 | 215 477
> 2 2 0 96 0 0| 500k 0 | 132B 756B| 0 0 | 169 458
> 3 57 0 39 1 0| 228k 10M| 132B 692B| 0 0 | 476 5387
> 6 94 0 0 0 0| 28k 23M| 132B 884B| 0 0 | 373 2142
> 6 89 0 2 2 0| 40k 38M| 66B 692B| 0 8192B| 502 5651
> 4 47 0 48 0 0| 140k 34M| 132B 836B| 0 0 | 605 1664
> 3 64 0 30 2 0| 60k 50M| 132B 370B| 0 60k| 750 631
> 4 59 0 35 2 0| 48k 45M| 132B 836B| 0 28k| 708 1293
> 7 81 0 10 2 0| 68k 67M| 132B 788B| 0 124k| 928 1634
> 5 74 0 20 1 0| 48k 48M| 132B 756B| 0 316k| 830 5715
> 5 70 0 24 1 0| 168k 48M| 132B 676B| 0 100k| 734 5325
> 4 70 0 24 1 0| 72k 49M| 132B 948B| 0 88k| 776 3784
> 5 57 0 37 1 0| 36k 37M| 132B 996B| 0 480k| 602 369
> 2 21 0 77 0 0| 36k 23M| 132B 724B| 0 72k| 318 1033
> 4 51 0 43 2 0| 112k 43M| 132B 756B| 0 112k| 681 909
> 5 55 0 40 0 0| 88k 48M| 140B 926B| 16k 12k| 698 557
> ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
> usr sys idl wai hiq siq| read writ| recv send| in out | int csw
> 3 45 0 51 1 0|2248k 29M| 198B 1028B| 28k 44k| 681 5468
> 1 21 0 78 0 0| 92k 17M|1275B 2049B| 92k 52k| 328 1883
> 3 30 0 66 1 0| 288k 28M| 498B 2116B| 0 40k| 455 679
> 1 1 0 98 0 0|4096B 0 | 394B 1340B|4096B 0 | 41 19
> 1 1 0 98 0 0| 148k 52k| 881B 1592B|4096B 44k| 75 61
> 1 2 0 97 0 0|1408k 0 | 351B 1727B| 0 0 | 110 109
> 2 1 0 97 0 0|8192B 0 |1422B 1940B| 0 0 | 53 34
> 1 0 0 99 0 0|4096B 12k| 328B 1018B| 0 0 | 41 24
> 1 4 0 95 0 0| 340k 0 |3075B 2152B|4096B 0 | 153 191
> 4 7 0 89 0 0|1004k 44k|1526B 1906B| 0 0 | 254 244
> 0 1 0 99 0 0| 76k 0 | 708B 1708B| 0 0 | 67 57
> 1 1 0 98 0 0| 0 0 | 174B 702B| 0 0 | 32 14
> 1 1 0 98 0 0| 0 0 | 132B 354B| 0 0 | 32 11
> 1 0 0 99 0 0| 0 0 | 132B 468B| 0 0 | 32 16
> 1 0 0 99 0 0| 0 0 | 132B 468B| 0 0 | 32 14
> 1 1 0 98 0 0| 0 52k| 132B 678B| 0 0 | 41 27
> 1 0 0 99 0 0| 0 0 | 198B 678B| 0 0 | 35 17
> 1 1 0 98 0 0| 0 0 | 198B 468B| 0 0 | 34 14
> 1 0 0 99 0 0| 0 0 | 66B 354B| 0 0 | 28 11
> 1 0 0 99 0 0| 0 0 | 66B 354B| 0 0 | 28 9
> 1 1 0 98 0 0| 0 0 | 132B 468B| 0 0 | 34 16
> 1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
> 1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
> From that point onwards, nothing will happen.
> The host has disk IO to spare... So what is it waiting for??
Moved to an AMD64 host. No effect.
Disabled swap before running the test. No effect.
Moved the guest to a fully up-to-date FC12 server
(2.6.31.6-145.fc12.x86_64), no effect.
I am still seeing traces like these in dmesg (of varying length, but
always ending in sync_page):
[ 2401.350143] INFO: task perl:29512 blocked for more than 120 seconds.
[ 2401.350150] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 2401.350156] perl D ffffffff81543490 0 29512 29510
0x00000000
[ 2401.350167] ffff88000fb4a2e0 0000000000000082 ffff88000f97c058
ffff88000f97c018
[ 2401.350177] ffffffff81808e00 ffff88000a027fd8 ffff88000a027fd8
ffff88000a027fd8
[ 2401.350185] ffff88000a027588 ffff88000fb4a2e0 ffff880001af4cf0
ffffffff8105a7da
[ 2401.350193] Call Trace:
[ 2401.350210] [<ffffffff8105a7da>] ? sync_page+0x0/0x45
[ 2401.350220] [<ffffffff8150a4e0>] ? io_schedule+0x1f/0x32
[ 2401.350228] [<ffffffff8105a818>] ? sync_page+0x3e/0x45
[ 2401.350235] [<ffffffff8150a6f4>] ? __wait_on_bit+0x3e/0x71
[ 2401.350245] [<ffffffff812b50ce>] ? submit_bio+0xa5/0xc1
[ 2401.350252] [<ffffffff8105a960>] ? wait_on_page_bit+0x69/0x6f
[ 2401.350263] [<ffffffff8103732d>] ? wake_bit_function+0x0/0x33
[ 2401.350270] [<ffffffff81062a03>] ? pageout+0x193/0x1dd
[ 2401.350277] [<ffffffff81063179>] ? shrink_page_list+0x23a/0x43b
[ 2401.350284] [<ffffffff8150a648>] ? schedule_timeout+0x9e/0xb8
[ 2401.350293] [<ffffffff8102e58e>] ? process_timeout+0x0/0xd
[ 2401.350300] [<ffffffff81509f07>] ? io_schedule_timeout+0x1f/0x32
[ 2401.350310] [<ffffffff8106855a>] ? congestion_wait+0x7b/0x89
[ 2401.350318] [<ffffffff81037303>] ? autoremove_wake_function+0x0/0x2a
[ 2401.350326] [<ffffffff8106383c>] ? shrink_inactive_list+0x4c2/0x6f6
[ 2401.350336] [<ffffffff8112ca0f>] ? ext4_ext_find_extent+0x47/0x267
[ 2401.350362] [<ffffffff81063d00>] ? shrink_zone+0x290/0x354
[ 2401.350369] [<ffffffff81063ee9>] ? shrink_slab+0x125/0x137
[ 2401.350377] [<ffffffff810645a1>] ? try_to_free_pages+0x1a0/0x2b2
[ 2401.350384] [<ffffffff810621c7>] ? isolate_pages_global+0x0/0x23b
[ 2401.350393] [<ffffffff8105f1e2>] ? __alloc_pages_nodemask+0x399/0x566
[ 2401.350403] [<ffffffff810807fa>] ? __slab_alloc+0x121/0x448
[ 2401.350410] [<ffffffff81125628>] ? ext4_alloc_inode+0x19/0xde
[ 2401.350418] [<ffffffff81080c4e>] ? kmem_cache_alloc+0x46/0x88
[ 2401.350425] [<ffffffff81125628>] ? ext4_alloc_inode+0x19/0xde
[ 2401.350433] [<ffffffff81097086>] ? alloc_inode+0x17/0x77
[ 2401.350441] [<ffffffff81097a83>] ? iget_locked+0x44/0x10f
[ 2401.350449] [<ffffffff8111af18>] ? ext4_iget+0x24/0x6bc
[ 2401.350455] [<ffffffff8112377a>] ? ext4_lookup+0x84/0xe3
[ 2401.350464] [<ffffffff8108d386>] ? do_lookup+0xc6/0x15c
[ 2401.350472] [<ffffffff8108dd0a>] ? __link_path_walk+0x4cb/0x605
[ 2401.350481] [<ffffffff8108dfaf>] ? path_walk+0x44/0x8a
[ 2401.350488] [<ffffffff8108e2e7>] ? path_init+0x94/0x113
[ 2401.350496] [<ffffffff8108e3b1>] ? do_path_lookup+0x20/0x84
[ 2401.350502] [<ffffffff81090843>] ? user_path_at+0x46/0x78
[ 2401.350509] [<ffffffff810921e2>] ? filldir+0x0/0x1b0
[ 2401.350516] [<ffffffff81029257>] ? current_fs_time+0x1e/0x24
[ 2401.350524] [<ffffffff81088b83>] ? cp_new_stat+0x148/0x15e
[ 2401.350531] [<ffffffff81088dbc>] ? vfs_fstatat+0x2e/0x5b
[ 2401.350538] [<ffffffff81088e44>] ? sys_newlstat+0x11/0x2d
[ 2401.350546] [<ffffffff8100203f>] ? system_call_fastpath+0x16/0x1b
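When tasks are wedged in D state like the trace above, a dump of *all* blocked tasks can be requested through the kernel's SysRq facility, which is often more informative than waiting for the 120-second hung-task warnings. These are root-only /proc writes (a sketch; the timeout value is illustrative):

```shell
# Allow SysRq commands, then dump all uninterruptible (D-state) tasks
# to the kernel log; read the result back with dmesg.
sysctl kernel.sysrq=1
echo w > /proc/sysrq-trigger
dmesg | tail -n 100

# The hung-task warning interval itself is tunable (0 disables it),
# as the message in the trace above notes:
echo 240 > /proc/sys/kernel/hung_task_timeout_secs
```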
Has anyone tried reproducing the example I posted? (loop mount a disk
image and fill it with "dd if=/dev/zero" in the guest)
Can anyone suggest a way forward? (as I have already updated kvm and the
kernels)
Thanks
Antoine
> QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c)
> 2003-2008 Fabrice Bellard
> Guests: various, all recent kernels.
> Host: 2.6.31.4
>
> Please advise.
>
> Thanks
> Antoine
>
* Re: repeatable hang with loop mount and heavy IO in guest
2010-02-03 19:28 ` Antoine Martin
@ 2010-02-26 17:38 ` Antoine Martin
2010-05-21 9:38 ` repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..) Antoine Martin
0 siblings, 1 reply; 13+ messages in thread
From: Antoine Martin @ 2010-02-26 17:38 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
>> 1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
>> 1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
>> From that point onwards, nothing will happen.
>> The host has disk IO to spare... So what is it waiting for??
> Moved to an AMD64 host. No effect.
> Disabled swap before running the test. No effect.
> Moved the guest to a fully up-to-date FC12 server
> (2.6.31.6-145.fc12.x86_64), no effect.
I have narrowed it down to the filesystem backing the loop-mounted disk
image inside the guest: although it was not completely full (and had
plenty of free inodes), freeing some space on it prevents the system
from misbehaving.
FYI: the disk image was clean and was fscked before each test. kvm had
been updated to 0.12.3.
The weird thing is that the same filesystem works fine (no system hang)
when used directly from the host; it only misbehaves under kvm...
So I am not dismissing the possibility that kvm is at least partly to
blame, or that it is exposing a filesystem bug (a race?) not normally
encountered.
(I have backed up the full 32GB virtual disk in case someone suggests
further investigation)
Cheers
Antoine
* Re: repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..)
2010-02-26 17:38 ` repeatable hang with loop mount and heavy IO in guest Antoine Martin
@ 2010-05-21 9:38 ` Antoine Martin
2010-05-22 18:10 ` Jim Paris
0 siblings, 1 reply; 13+ messages in thread
From: Antoine Martin @ 2010-05-21 9:38 UTC (permalink / raw)
To: Michael Tokarev; +Cc: kvm
On 02/27/2010 12:38 AM, Antoine Martin wrote:
>>> 1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
>>> 1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
>>> From that point onwards, nothing will happen.
>>> The host has disk IO to spare... So what is it waiting for??
>> Moved to an AMD64 host. No effect.
>> Disabled swap before running the test. No effect.
>> Moved the guest to a fully up-to-date FC12 server
>> (2.6.31.6-145.fc12.x86_64), no effect.
> I have narrowed it down to the guest's filesystem used for backing the
> disk image which is loop mounted: although it was not completely full
> (and had enough inodes), freeing some space on it prevents the system
> from misbehaving.
>
> FYI: the disk image was clean and was fscked before each test. kvm had
> been updated to 0.12.3
> The weird thing is that the same filesystem works fine (no system
> hang) if used directly from the host, it is only misbehaving via kvm...
>
> So I am not dismissing the possibility that kvm may be at least partly
> to blame, or that it is exposing a filesystem bug (race?) not normally
> encountered.
> (I have backed up the full 32GB virtual disk in case someone suggests
> further investigation)
Well, well. I've just hit the exact same bug on another *host* (not a
guest), running stock Fedora 12.
So this isn't a kvm bug after all: definitely a loop+ext(4?) bug.
It looks like you need a pretty big loop-mounted partition to trigger it
(bigger than available RAM?)
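The "bigger than available RAM" observation would fit dirty-page accumulation: with a loop device, pages are dirtied twice (once in the loop-mounted filesystem, once in the backing file), and how much dirty data the kernel lets pile up before forcing writeback is governed by the vm.dirty_* sysctls. A hedged mitigation sketch; the values are illustrative, not a recommendation from this thread:

```shell
# Bound the dirty page cache so writeback starts earlier and writers
# are throttled sooner (percent of RAM; defaults are typically higher).
sysctl vm.dirty_background_ratio=5   # background writeback threshold
sysctl vm.dirty_ratio=10             # hard limit, writers block above this
```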
This is what triggered it on a quad AMD system with 8GB of RAM, on a
software RAID-1 partition:
mount -o loop 2GB.dd source
dd if=/dev/zero of=8GB.dd bs=1048576 count=8192
mkfs.ext4 -F 8GB.dd
mount -o loop 8GB.dd dest
rsync -rplogtD source/* dest/
umount source
umount dest
^ this is where it hangs; I then tried to issue a 'sync' from another
terminal, which also hung.
It took more than 10 minutes to settle, during which time one CPU was
stuck in wait state.
dstat reported almost no IO at the time (<1MB/s).
I assume dstat reports page writeback like any other disk IO?
That RAID partition does ~60MB/s, so writing back 8GB should not take 10
minutes (and that is even assuming it would have to write back the whole
8GB at umount time, which should not be the case).
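As a sanity check on that estimate, the expected writeback time is simple arithmetic (a sketch; the 8 GiB size and ~60 MB/s rate are the figures quoted above):

```python
def writeback_eta_seconds(size_mib: float, rate_mib_per_s: float) -> float:
    """Time to flush `size_mib` MiB of dirty data at a sustained write rate."""
    return size_mib / rate_mib_per_s

# 8 GiB at the ~60 MB/s this RAID-1 array sustains:
eta = writeback_eta_seconds(8 * 1024, 60)
print(f"{eta:.0f} s (~{eta / 60:.1f} min)")  # about 137 s, well under 10 min
```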
Cheers
Antoine
Here's the hung trace:
INFO: task umount:526 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
umount D 0000000000000002 0 526 32488 0x00000000
ffff880140f9fc88 0000000000000086 ffff880008e3c228 ffffffff810d5fd9
ffff880140f9fc28 ffff880140f9fcd8 ffff880140f9ffd8 ffff880140f9ffd8
ffff88021b5e03d8 000000000000f980 0000000000015740 ffff88021b5e03d8
Call Trace:
[<ffffffff810d5fd9>] ? sync_page+0x0/0x4a
[<ffffffff81046fbd>] ? __enqueue_entity+0x7b/0x7d
[<ffffffff8113a047>] ? bdi_sched_wait+0x0/0x12
[<ffffffff8113a055>] bdi_sched_wait+0xe/0x12
[<ffffffff814549f0>] __wait_on_bit+0x48/0x7b
[<ffffffff8102649f>] ? native_smp_send_reschedule+0x5c/0x5e
[<ffffffff81454a91>] out_of_line_wait_on_bit+0x6e/0x79
[<ffffffff8113a047>] ? bdi_sched_wait+0x0/0x12
[<ffffffff810748dc>] ? wake_bit_function+0x0/0x33
[<ffffffff8113ad0b>] wait_on_bit.clone.1+0x1e/0x20
[<ffffffff8113ad71>] bdi_sync_writeback+0x64/0x6b
[<ffffffff8113ad9a>] sync_inodes_sb+0x22/0xec
[<ffffffff8113e547>] __sync_filesystem+0x4e/0x77
[<ffffffff8113e71d>] sync_filesystem+0x4b/0x4f
[<ffffffff8111d6d9>] generic_shutdown_super+0x27/0xc9
[<ffffffff8111d7a2>] kill_block_super+0x27/0x3f
[<ffffffff8111ded7>] deactivate_super+0x56/0x6b
[<ffffffff81134262>] mntput_no_expire+0xb4/0xec
[<ffffffff8113482a>] sys_umount+0x2d5/0x304
[<ffffffff81458133>] ? do_page_fault+0x270/0x2a0
[<ffffffff81011d32>] system_call_fastpath+0x16/0x1b
INFO: task sync:741 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
sync D 0000000000000000 0 741 552 0x00000000
ffff88003c31fed8 0000000000000086 0000000000000000 ffff8801b7472ec0
ffff88003c31fe38 0000000000000246 ffff88003c31ffd8 ffff88003c31ffd8
ffff8801b7473298 000000000000f980 0000000000015740 ffff8801b7473298
Call Trace:
[<ffffffff81455a2e>] __down_read+0x92/0xaa
[<ffffffff814550e1>] down_read+0x31/0x35
[<ffffffff8113e5f6>] sync_filesystems+0x86/0xf6
[<ffffffff8113e6b6>] sys_sync+0x17/0x33
[<ffffffff81011d32>] system_call_fastpath+0x16/0x1b
* Re: repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..)
2010-05-21 9:38 ` repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..) Antoine Martin
@ 2010-05-22 18:10 ` Jim Paris
2010-05-22 19:33 ` Antoine Martin
0 siblings, 1 reply; 13+ messages in thread
From: Jim Paris @ 2010-05-22 18:10 UTC (permalink / raw)
To: Antoine Martin; +Cc: Michael Tokarev, kvm
Antoine Martin wrote:
> On 02/27/2010 12:38 AM, Antoine Martin wrote:
> >>> 1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
> >>> 1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
> >>> From that point onwards, nothing will happen.
> >>>The host has disk IO to spare... So what is it waiting for??
> >>Moved to an AMD64 host. No effect.
> >>Disabled swap before running the test. No effect.
> >>Moved the guest to a fully up-to-date FC12 server
> >>(2.6.31.6-145.fc12.x86_64), no effect.
> >I have narrowed it down to the guest's filesystem used for backing
> >the disk image which is loop mounted: although it was not
> >completely full (and had enough inodes), freeing some space on it
> >prevents the system from misbehaving.
> >
> >FYI: the disk image was clean and was fscked before each test. kvm
> >had been updated to 0.12.3
> >The weird thing is that the same filesystem works fine (no system
> >hang) if used directly from the host, it is only misbehaving via
> >kvm...
> >
> >So I am not dismissing the possibility that kvm may be at least
> >partly to blame, or that it is exposing a filesystem bug (race?)
> >not normally encountered.
> >(I have backed up the full 32GB virtual disk in case someone
> >suggests further investigation)
> Well, well. I've just hit the exact same bug on another *host* (not
> a guest), running stock Fedora 12.
> So this isn't a kvm bug after all. Definitely a loop+ext(4?) bug.
> Looks like you need a pretty big loop mounted partition to trigger
> it. (bigger than available ram?)
>
> This is what triggered it on a quad amd system with 8Gb of ram,
> software raid-1 partition:
> mount -o loop 2GB.dd source
> dd if=/dev/zero of=8GB.dd bs=1048576 count=8192
> mkfs.ext4 -f 8GB.dd
> mount -o loop 8GB.dd dest
> rsync -rplogtD source/* dest/
> umount source
> umount dest
> ^ this is where it hangs, I then tried to issue a 'sync' from
> another terminal, which also hung.
> It took more than 10 minutes to settle itself, during that time one
> CPU was stuck in wait state.
This sounds like:
https://bugzilla.kernel.org/show_bug.cgi?id=15906
https://bugzilla.redhat.com/show_bug.cgi?id=588930
-jim
* Re: repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..)
2010-05-22 18:10 ` Jim Paris
@ 2010-05-22 19:33 ` Antoine Martin
0 siblings, 0 replies; 13+ messages in thread
From: Antoine Martin @ 2010-05-22 19:33 UTC (permalink / raw)
To: Jim Paris; +Cc: Michael Tokarev, kvm
On 05/23/2010 01:10 AM, Jim Paris wrote:
> Antoine Martin wrote:
>
>> On 02/27/2010 12:38 AM, Antoine Martin wrote:
>>
>>>>> 1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 30 11
>>>>> 1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 29 11
>>>>>
>>>>> From that point onwards, nothing will happen.
>>>>
>>>>> The host has disk IO to spare... So what is it waiting for??
>>>>>
>>>> Moved to an AMD64 host. No effect.
>>>> Disabled swap before running the test. No effect.
>>>> Moved the guest to a fully up-to-date FC12 server
>>>> (2.6.31.6-145.fc12.x86_64), no effect.
>>>>
>>> I have narrowed it down to the guest's filesystem used for backing
>>> the disk image which is loop mounted: although it was not
>>> completely full (and had enough inodes), freeing some space on it
>>> prevents the system from misbehaving.
>>>
>>> FYI: the disk image was clean and was fscked before each test. kvm
>>> had been updated to 0.12.3
>>> The weird thing is that the same filesystem works fine (no system
>>> hang) if used directly from the host, it is only misbehaving via
>>> kvm...
>>>
>>> So I am not dismissing the possibility that kvm may be at least
>>> partly to blame, or that it is exposing a filesystem bug (race?)
>>> not normally encountered.
>>> (I have backed up the full 32GB virtual disk in case someone
>>> suggests further investigation)
>>>
>> Well, well. I've just hit the exact same bug on another *host* (not
>> a guest), running stock Fedora 12.
>> So this isn't a kvm bug after all. Definitely a loop+ext(4?) bug.
>> Looks like you need a pretty big loop mounted partition to trigger
>> it. (bigger than available ram?)
>>
>> This is what triggered it on a quad amd system with 8Gb of ram,
>> software raid-1 partition:
>> mount -o loop 2GB.dd source
>> dd if=/dev/zero of=8GB.dd bs=1048576 count=8192
>> mkfs.ext4 -f 8GB.dd
>> mount -o loop 8GB.dd dest
>> rsync -rplogtD source/* dest/
>> umount source
>> umount dest
>> ^ this is where it hangs, I then tried to issue a 'sync' from
>> another terminal, which also hung.
>> It took more than 10 minutes to settle itself, during that time one
>> CPU was stuck in wait state.
>>
> This sounds like:
> https://bugzilla.kernel.org/show_bug.cgi?id=15906
> https://bugzilla.redhat.com/show_bug.cgi?id=588930
>
Indeed it does.
Let's hope this makes it to -stable fast.
Antoine
end of thread, other threads:[~2010-05-22 19:33 UTC | newest]
Thread overview: 13+ messages
2010-01-21 17:26 repeatable hang with loop mount and heavy IO in guest Antoine Martin
2010-01-21 20:08 ` RW
2010-01-21 21:08 ` Thomas Beinicke
2010-01-21 21:36 ` RW
2010-01-22 7:57 ` Michael Tokarev
2010-01-22 18:28 ` repeatable hang with loop mount and heavy IO in guest [SOLVED] Antoine Martin
2010-01-22 19:15 ` repeatable hang with loop mount and heavy IO in guest [NOT SOLVED] Antoine Martin
2010-01-24 11:23 ` Antoine Martin
2010-02-03 19:28 ` Antoine Martin
2010-02-26 17:38 ` repeatable hang with loop mount and heavy IO in guest Antoine Martin
2010-05-21 9:38 ` repeatable hang with loop mount and heavy IO in guest (now in host - not KVM then..) Antoine Martin
2010-05-22 18:10 ` Jim Paris
2010-05-22 19:33 ` Antoine Martin