* [Qemu-devel] slower live-migration with XBZRLE
@ 2012-10-11 16:26 Vasilis Liaskovitis
  2012-10-12  7:14 ` Alexandre DERUMIER
From: Vasilis Liaskovitis @ 2012-10-11 16:26 UTC
  To: qemu-devel; +Cc: owasserman

Hi,

I am testing XBZRLE compression with qemu-1.2 for live migration of large VMs
and/or memory-intensive workloads. I have a 4GB guest that runs the memory r/w
load generator from the original patchset; see docs/xbzrle.txt or
http://lists.gnu.org/archive/html/qemu-devel/2012-07/msg01207.html
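
For reference, the load generator is essentially a tight loop that keeps
re-dirtying guest pages; a minimal sketch along the lines of the one in
docs/xbzrle.txt:

  #include <stdlib.h>

  int main(void)
  {
      /* 4096 pages x 4096 bytes = 16MB of memory to keep dirty */
      char *buf = calloc(4096, 4096);
      if (!buf)
          return 1;
      while (1) {
          int i;
          /* write one byte every 1KB (4 writes per 4KB page), so
             every page in the buffer is re-dirtied on each pass */
          for (i = 0; i < 4096 * 4; i++) {
              buf[i * 4096 / 4] = i;
          }
      }
      return 0;
  }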

I have set xbzrle to "on" on both source and target, with the default cache
size on the source (I also tried a 1G cache size, both during the test and with
a fresh migration). The migration starts, but the RAM transfer rate is very
slow and the total migration time is very long. Cache misses and overflows seem
small, as far as I can tell.
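
For reference, the setup is roughly the following on the monitor (qemu-1.2 HMP
commands; host and port are placeholders):

  on both source and target:
    (qemu) migrate_set_capability xbzrle on
  on the source:
    (qemu) migrate_set_cache_size 1g
    (qemu) migrate -d tcp:<dest-host>:<port>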

Here's example output from the source "info migrate" with xbzrle=on when it's done:

(qemu) info migrate
capabilities: xbzrle: on 
Migration status: completed
total time: 6530177 milliseconds
transferred ram: 4887726 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 3126234 pages
normal: 43587 pages
normal bytes: 174348 kbytes
cache size: 268435456 bytes
xbzrle transferred: 4710325 kbytes
xbzrle pages: 266649315 pages
xbzrle cache miss: 43440
xbzrle overflow : 147

The same guest+workload migrates much faster with xbzrle=off. I would have
expected the opposite behaviour, i.e. with xbzrle=off this guest+workload
combination would migrate very slowly or never finish.

Here's example output from the source "info migrate" with xbzrle=off when it's
done:

(qemu) info migrate
capabilities: xbzrle: off 
Migration status: completed
total time: 10791 milliseconds
transferred ram: 220735 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 1007476 pages
normal: 54938 pages
normal bytes: 219752 kbytes

Have I missed setting some other migration parameter? I tried using
migrate_set_speed to raise the bandwidth limit to 1000000000 bytes/sec, but it
didn't make any difference.
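
Specifically, on the source monitor:

  (qemu) migrate_set_speed 1000000000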

Are there any default parameters that would make xbzrle inefficient for this
type of workload? Has anyone measured a point of diminishing returns where,
e.g., encoding/decoding CPU overhead makes the feature ineffective?

This was a live migration performed on the same host, but I have seen the same
behaviour between two hosts. The test host was idle apart from the VMs.

sample command line:
-enable-kvm -M pc -smp 2,maxcpus=64 -cpu host -m 4096 \
-drive file=/home/debian.img,if=none,id=drive-virtio-disk0,format=raw \
-device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-vga std -netdev type=tap,id=guest0,vhost=on \
-device virtio-net-pci,netdev=guest0

thanks,

- Vasilis


* Re: [Qemu-devel] slower live-migration with XBZRLE
  2012-10-11 16:26 [Qemu-devel] slower live-migration with XBZRLE Vasilis Liaskovitis
@ 2012-10-12  7:14 ` Alexandre DERUMIER
From: Alexandre DERUMIER @ 2012-10-12  7:14 UTC
  To: Vasilis Liaskovitis; +Cc: owasserman, qemu-devel

Hi, I have observed the same behaviour with VMs doing a lot of memory
transfers, or playing video in the guest.
https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg00138.html


You can try tuning the xbzrle cache size; maybe it will improve speed.
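
For example, on the source monitor (qemu-1.2 HMP; the 2g value here is only an
illustration, sized to hold more of the guest's working set):

  (qemu) migrate_set_cache_size 2g

You can then re-run the migration and check whether "xbzrle cache miss" in
"info migrate" goes down.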


