From: Alexandre DERUMIER
Date: Fri, 12 Oct 2012 09:14:31 +0200 (CEST)
Subject: Re: [Qemu-devel] slower live-migration with XBZRLE
To: Vasilis Liaskovitis
Cc: owasserman@redhat.com, qemu-devel@nongnu.org

Hi,

I have observed the same behaviour with VMs that transfer a lot of memory, or that play video in the guest.

https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg00138.html

You can try tuning the xbzrle cache size; maybe it will improve the speed.

----- Original message -----
From: "Vasilis Liaskovitis"
To: qemu-devel@nongnu.org
Cc: owasserman@redhat.com
Sent: Thursday, 11 October 2012 18:26:41
Subject: [Qemu-devel] slower live-migration with XBZRLE

Hi,

I am testing XBZRLE compression with qemu-1.2 for live migration of large VMs and/or memory-intensive workloads.
I have a 4GB guest that runs the memory r/w load generator from the original patchset; see docs/xbzrle.txt or http://lists.gnu.org/archive/html/qemu-devel/2012-07/msg01207.html

I have set xbzrle to "on" on both source and target, and the default cache size on the source (I also tried using a 1g cache size, both during the test and with a new migration). The migration starts, but the RAM transfer rate is very slow and the total migration time is very large. Cache misses and overflows seem small, as far as I can tell. Here's example output from the source's "info migrate" with xbzrle=on when it's done:

(qemu) info migrate
capabilities: xbzrle: on
Migration status: completed
total time: 6530177 milliseconds
transferred ram: 4887726 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 3126234 pages
normal: 43587 pages
normal bytes: 174348 kbytes
cache size: 268435456 bytes
xbzrle transferred: 4710325 kbytes
xbzrle pages: 266649315 pages
xbzrle cache miss: 43440
xbzrle overflow : 147

The same guest+workload migrates much faster with xbzrle=off. I would have expected the opposite behaviour, i.e. with xbzrle=off this guest+workload combination would migrate very slowly or never finish. Here's example output from the source's "info migrate" with xbzrle=off when it's done:

(qemu) info migrate
capabilities: xbzrle: off
Migration status: completed
total time: 10791 milliseconds
transferred ram: 220735 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 1007476 pages
normal: 54938 pages
normal bytes: 219752 kbytes

Have I missed setting some other migration parameter? I tried using migrate_set_speed to change the bandwidth limit to 1000000000 bytes/sec, but it didn't make any difference.

Are there any default parameters that would make xbzrle inefficient for this type of workload? Has anyone measured a point of diminishing returns where e.g. encoding/decoding cpu-overhead makes the feature ineffective?
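For what it's worth, here is a back-of-the-envelope throughput comparison worked out from the two "info migrate" outputs above (simple arithmetic on the reported counters, not output of any QEMU tool):

```python
# Effective RAM transfer rate from the two runs above.
# All values copied verbatim from the "info migrate" logs; units as QEMU reports them.

xbzrle_on_kbytes = 4887726      # transferred ram, xbzrle=on
xbzrle_on_ms = 6530177          # total time, xbzrle=on

xbzrle_off_kbytes = 220735      # transferred ram, xbzrle=off
xbzrle_off_ms = 10791           # total time, xbzrle=off

rate_on = xbzrle_on_kbytes / (xbzrle_on_ms / 1000.0)    # ~748 kB/s
rate_off = xbzrle_off_kbytes / (xbzrle_off_ms / 1000.0) # ~20455 kB/s

print(round(rate_on), round(rate_off))
```

So the xbzrle=on run not only ran at roughly 27x lower throughput, it also transferred about 22x more data in total (4887726 vs 220735 kbytes), which is the opposite of what a delta-compression scheme should do.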
This was a live-migration performed on the same host, but I have seen the same behaviour between 2 hosts. The test host was idle apart from the VMs.

Sample command line:

-enable-kvm -M pc -smp 2,maxcpus=64 -cpu host -m 4096
-drive file=/home/debian.img,if=none,id=drive-virtio-disk0,format=raw
-device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-vga std
-netdev type=tap,id=guest0,vhost=on
-device virtio-net-pci,netdev=guest0

thanks,

- Vasilis
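PS, on the cache-size tuning suggested in the reply above: XBZRLE can only produce small deltas for pages it still has a previous copy of, so the cache needs to cover the guest's dirty working set. A rough coverage sketch, assuming 4 KiB pages and using the numbers from the xbzrle=on log above:

```python
# Rough XBZRLE cache coverage estimate.
# Assumptions: 4 KiB guest pages; figures taken from the "info migrate" log above.
PAGE_SIZE = 4096

guest_ram_bytes = 4211008 * 1024   # "total ram: 4211008 kbytes"
cache_bytes = 268435456            # "cache size: 268435456 bytes" (256 MiB)

cache_pages = cache_bytes // PAGE_SIZE          # pages the cache can remember
coverage = cache_bytes / guest_ram_bytes        # fraction of guest RAM cached

print(cache_pages, round(coverage * 100, 1))    # 65536 pages, ~6.2% of RAM
```

With that cache only about 6% of guest RAM is covered, so a load generator dirtying more than that would thrash the cache; the cache can be resized at runtime with the HMP command migrate_set_cache_size (e.g. "migrate_set_cache_size 1g") before starting the migration. That said, the low "xbzrle cache miss: 43440" count in the log suggests cache sizing may not be the whole story here.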