* *terrible* speed of savevm/loadvm/delvm
From: Michael Tokarev @ 2008-11-12 17:15 UTC (permalink / raw)
To: KVM list
Somewhere between kvm-75 and kvm-78, the commands named in
the subject were slowed down to insane levels. By "insane"
I mean it takes about 10 minutes(!) to save/load the state
of a VM with 128MB RAM and a 1GB HDD. It used to take
several seconds for much larger VMs...
Here's a typical sequence of system calls
during savevm:
select(12, [11], [], NULL, NULL) = 1 (in [11])
read(11,
"\f\0\0\0\0\0\0\0\374\377\377\3771s\0\0\350\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
128) = 128
close(21) = 0
_llseek(12, 1669340278, [1669340278], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340280, [1669340280], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340282, [1669340282], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340284, [1669340284], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340286, [1669340286], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340288, [1669340288], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340290, [1669340290], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340292, [1669340292], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1674259520, [1674259520], SEEK_SET) = 0
write(12,
"\200\0\0\0c\343\260\0\200\0\0\0c\343\300\0\200\0\0\0c\343\320\0\200\0\0\0c\343\340\0\200"...,
64) = 64
dup(12) = 21
futex(0xf7d621a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xf7d621a0, {FUTEX_OP_SET, 0,
FUTEX_OP_CMP_GT, 1}) = 1
futex(0xf7d62120, FUTEX_WAKE_PRIVATE, 1) = 1
select(12, [11], [], NULL, NULL) = 1 (in [11])
read(11,
"\f\0\0\0\0\0\0\0\374\377\377\3771s\0\0\350\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
128) = 128
close(21) = 0
_llseek(12, 1669340294, [1669340294], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340296, [1669340296], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340298, [1669340298], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340300, [1669340300], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340302, [1669340302], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340304, [1669340304], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340306, [1669340306], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340308, [1669340308], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1674259584, [1674259584], SEEK_SET) = 0
write(12,
"\200\0\0\0c\3440\0\200\0\0\0c\344@\0\200\0\0\0c\344P\0\200\0\0\0c\344`\0\200"...,
64) = 64
dup(12) = 21
futex(0xf7d621a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xf7d621a0, {FUTEX_OP_SET, 0,
FUTEX_OP_CMP_GT, 1}) = 1
select(12, [11], [], NULL, NULL) = 1 (in [11])
read(11,
"\f\0\0\0\0\0\0\0\374\377\377\3771s\0\0\350\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
128) = 128
close(21) = 0
_llseek(12, 1669340310, [1669340310], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340312, [1669340312], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340314, [1669340314], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340316, [1669340316], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340318, [1669340318], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340320, [1669340320], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340322, [1669340322], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340324, [1669340324], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1674259648, [1674259648], SEEK_SET) = 0
write(12,
"\200\0\0\0c\344\260\0\200\0\0\0c\344\300\0\200\0\0\0c\344\320\0\200\0\0\0c\344\340\0\200"...,
64) = 64
dup(12) = 21
futex(0xf7d621a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xf7d621a0, {FUTEX_OP_SET, 0,
FUTEX_OP_CMP_GT, 1}) = 1
futex(0xf7d62120, FUTEX_WAKE_PRIVATE, 1) = 1
select(12, [11], [], NULL, NULL) = 1 (in [11])
read(11,
"\f\0\0\0\0\0\0\0\374\377\377\3771s\0\0\350\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
128) = 128
close(21) = 0
_llseek(12, 1669340326, [1669340326], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340328, [1669340328], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340330, [1669340330], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340332, [1669340332], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340334, [1669340334], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340336, [1669340336], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340338, [1669340338], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1669340340, [1669340340], SEEK_SET) = 0
write(12, "\0\1"..., 2) = 2
_llseek(12, 1674259712, [1674259712], SEEK_SET) = 0
write(12,
"\200\0\0\0c\3450\0\200\0\0\0c\345@\0\200\0\0\0c\345P\0\200\0\0\0c\345`\0\200"...,
64) = 64
dup(12) = 21
futex(0xf7d621a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xf7d621a0, {FUTEX_OP_SET, 0,
FUTEX_OP_CMP_GT, 1}) = 1
futex(0xf7d62120, FUTEX_WAKE_PRIVATE, 1) = 1
select(12, [11], [], NULL, NULL^C <unfinished ...>
As you can see, it writes 2 bytes, llseeks to the position
it is already at, writes the next 2 bytes, and so on. This
takes a HUGE amount of time, and could in most cases be done
in a single write without any seeks.
Is it just me or are savevm/loadvm/delvm really THAT
broken?
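The pattern above can be sketched outside qemu (a minimal illustration, not qemu code; the file names are made up):

```python
import os

# Minimal illustration (not qemu code): the seek + 2-byte-write pattern
# from the trace vs. one coalesced write.  Both produce identical files;
# the difference is the number of syscalls (and, under writethrough
# caching, the number of forced disk flushes).
data = bytes(range(16))

fd = os.open("seeky.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
for off in range(0, len(data), 2):
    os.lseek(fd, off, os.SEEK_SET)   # seeks to the offset it is already at
    os.write(fd, data[off:off + 2])  # 2 bytes per write syscall
os.close(fd)

fd = os.open("batched.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, data)                   # the same bytes in one syscall
os.close(fd)

same = open("seeky.bin", "rb").read() == open("batched.bin", "rb").read()
print(same)  # prints True
```

The data written is identical either way; only the syscall count differs.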
And since migration to disk has been removed too,
there's no way currently to save the VM state...
Thanks!
/mjt
* Re: *terrible* speed of savevm/loadvm/delvm
From: Avi Kivity @ 2008-11-13 12:33 UTC (permalink / raw)
To: Michael Tokarev; +Cc: KVM list
Michael Tokarev wrote:
> Somewhere between kvm-75 and kvm-78, the commands named in
> the subject were slowed down to insane levels. By "insane"
> I mean it takes about 10 minutes(!) to save/load the state
> of a VM with 128MB RAM and a 1GB HDD. It used to take
> several seconds for much larger VMs...
>
> Here's a typical sequence of system calls
> during savevm:
[snip strace]
> As you can see, it writes 2 bytes, llseeks to the position
> it is already at, writes the next 2 bytes, and so on. This
> takes a HUGE amount of time, and could in most cases be done
> in a single write without any seeks.
>
> Is it just me or are savevm/loadvm/delvm really THAT
> broken?
It's qcow2 that is broken, with the new default cache=writethrough.
Does cache=writeback speed things up?
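For reference, the cache mode can be set explicitly per -drive; a hypothetical invocation (image name and memory size are illustrative):

```shell
# Illustrative only: select the cache mode explicitly instead of
# relying on the new writethrough default.
qemu-system-x86_64 -m 128 -drive file=vm.qcow2,cache=writeback
```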
This is probably block reference counts being updated. There were some
batching patches posted, but they have not been applied yet, and are
probably insufficient for savevm.
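The batching idea can be sketched like this (a hypothetical illustration of write coalescing, not the actual posted patches):

```python
import os

# Hypothetical sketch of write coalescing (not the posted patches):
# buffer updates and flush contiguous runs as a single write.
class CoalescingWriter:
    def __init__(self, fd):
        self.fd = fd
        self.start = None  # file offset of the buffered run
        self.buf = b""

    def write_at(self, offset, payload):
        if self.start is not None and offset == self.start + len(self.buf):
            self.buf += payload       # extends the current run: append
        else:
            self.flush()              # non-contiguous: flush the old run
            self.start, self.buf = offset, payload

    def flush(self):
        if self.buf:
            os.lseek(self.fd, self.start, os.SEEK_SET)
            os.write(self.fd, self.buf)  # one syscall for the whole run
        self.start, self.buf = None, b""

# Eight 2-byte "refcount" updates at consecutive offsets become a
# single 16-byte write on flush.
fd = os.open("refcounts.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
w = CoalescingWriter(fd)
for off in range(0, 16, 2):
    w.write_at(off, b"\x00\x01")
w.flush()
os.close(fd)
```

This turns the trace's eight llseek+write pairs per block into one seek and one write.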
> And since migration to disk has been removed too,
> there's no way currently to save the VM state...
That's being restored.
--
error compiling committee.c: too many arguments to function
* Re: *terrible* speed of savevm/loadvm/delvm
From: Michael Tokarev @ 2008-11-20 19:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: KVM list
Avi Kivity wrote Thu, 13 Nov 2008 14:33:16 +0200:
> Michael Tokarev wrote:
>> Somewhere between kvm-75 and kvm-78, the commands named in
>> the subject were slowed down to insane levels. By "insane"
>> I mean it takes about 10 minutes(!) to save/load the state
>> of a VM with 128MB RAM and a 1GB HDD. It used to take
>> several seconds for much larger VMs...
>>
>> Here's a typical sequence of system calls
>> during savevm:
> [snip strace]
>
>> As you can see, it writes 2 bytes, llseeks to the position
>> it is already at, writes the next 2 bytes, and so on. This
>> takes a HUGE amount of time, and could in most cases be done
>> in a single write without any seeks.
>>
>> Is it just me or are savevm/loadvm/delvm really THAT
>> broken?
>
> It's qcow2 that is broken, with the new default cache=writethrough.
> Does cache=writeback speed things up?
Please excuse the long delay in replying... I tried other
solutions meanwhile (migrate to exec:), but did not succeed
there either.

Well, with writeback mode the speed is definitely better,
but it is still very slow - not THAT terrible, yet it still
takes several minutes to save the state of a 512MB VM with a
single 4GB qcow2 file.
> This is probably block reference counts being updated. There were some
> batching patches posted, but they have not been applied yet, and are
> probably insufficient for savevm.
Oh well.... :(
So there really is no way to "freeze" a VM currently:
savevm/loadvm is too slow to be useful, and migrate does not
work with local files (yet)...
>> And since migration to disk has been removed too,
>> there's no way currently to save the VM state...
>
> That's being restored.
Thanks! I tried that (the current patches); it does not quite
work here yet :)
/mjt
* Re: *terrible* speed of savevm/loadvm/delvm
From: Avi Kivity @ 2008-11-23 13:51 UTC (permalink / raw)
To: Michael Tokarev; +Cc: KVM list
Michael Tokarev wrote:
>>> As you can see, it writes 2 bytes, llseeks to the position
>>> it is already at, writes the next 2 bytes, and so on. This
>>> takes a HUGE amount of time, and could in most cases be done
>>> in a single write without any seeks.
>>>
>>> Is it just me or are savevm/loadvm/delvm really THAT
>>> broken?
>>>
>> It's qcow2 that is broken, with the new default cache=writethrough.
>> Does cache=writeback speed things up?
>>
>
> Please excuse the long delay in replying... I tried other
> solutions meanwhile (migrate to exec:), but did not succeed
> there either.
>
> Well, with writeback mode the speed is definitely better,
> but it is still very slow - not THAT terrible, yet it still
> takes several minutes to save the state of a 512MB VM with a
> single 4GB qcow2 file.
>
Any idea what is happening? Is it disk bound, or cpu bound?
It shouldn't be that slow with writeback.
--
error compiling committee.c: too many arguments to function
* Re: *terrible* speed of savevm/loadvm/delvm
From: Michael Tokarev @ 2008-11-23 20:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: KVM list
Avi Kivity wrote:
> Michael Tokarev wrote:
>>>> As you can see, it writes 2 bytes, llseeks to the position
>>>> it is already at, writes the next 2 bytes, and so on. This
>>>> takes a HUGE amount of time, and could in most cases be done
>>>> in a single write without any seeks.
[]
> Any idea what is happening? Is it disk bound, or cpu bound?
Yes, see above: llseek(curpos)+write(2bytes) - try doing
that for a more-or-less large file and you'll see ;)

And it seems this slowness has been here for a long time,
really. It's just that the change in the default cache mode
made the problem very obvious.
/mjt