* [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: lirans @ 2009-09-07  9:24 UTC (permalink / raw)
  To: qemu-devel

This series adds support for live migration without shared storage, i.e.
the storage is copied while migrating. It was tested with KVM. Two ways of
replicating the storage during migration are supported:
 1. A complete copy of the storage to the destination.
 2. Assuming the storage is COW-based, copy only the allocated data;
    migration time will then be linear in the amount of allocated data
    (it is the user's responsibility to verify that the same backing file
    resides on both the source and the destination). A sketch of this
    idea follows at the end of this message.

Live migration will work as follows:
(qemu) migrate -d tcp:0:4444 # for ordinary live migration
(qemu) migrate -d blk tcp:0:4444 # for live migration with complete storage copy
(qemu) migrate -d blk inc tcp:0:4444 # for live migration with incremental storage copy (storage must be COW-based)

The patches are against git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git 
kvm-87
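
For illustration, here is a minimal sketch of how the incremental mode could
decide what to transfer, assuming it asks the block layer's
bdrv_is_allocated() for allocated runs of sectors; this is not the patch
itself, and send_sectors() is a hypothetical placeholder for the migration
transport:

#include "block.h"   /* QEMU block layer: bdrv_getlength(), bdrv_is_allocated() */

/* Hypothetical placeholder for the migration transport. */
static void send_sectors(BlockDriverState *bs, int64_t sector, int nb_sectors);

/* Sketch: walk the image and hand only allocated runs of sectors to the
 * transport, so migration time scales with allocated data rather than with
 * the virtual disk size. */
static void send_allocated_sectors(BlockDriverState *bs)
{
    int64_t total = bdrv_getlength(bs) >> 9;   /* image length in 512-byte sectors */
    int64_t sector = 0;

    while (sector < total) {
        int nr, chunk;

        /* Query at most 64k sectors (32 MB) at a time. */
        chunk = (total - sector > 65536) ? 65536 : (int)(total - sector);

        if (bdrv_is_allocated(bs, sector, chunk, &nr)) {
            send_sectors(bs, sector, nr);      /* allocated run: copy it */
        }
        sector += nr;                          /* either way, skip past the run */
    }
}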


* Re: [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: Pierre Riteau @ 2009-09-07 16:40 UTC (permalink / raw)
  To: lirans; +Cc: qemu-devel

On 7 sept. 2009, at 11:24, lirans@il.ibm.com wrote:

> This series adds support for live migration without shared storage, i.e.
> the storage is copied while migrating. It was tested with KVM. Two ways
> of replicating the storage during migration are supported:
> 1. A complete copy of the storage to the destination.
> 2. Assuming the storage is COW-based, copy only the allocated data;
>    migration time will then be linear in the amount of allocated data
>    (it is the user's responsibility to verify that the same backing file
>    resides on both the source and the destination).
>
> Live migration will work as follows:
> (qemu) migrate -d tcp:0:4444 # for ordinary live migration
> (qemu) migrate -d blk tcp:0:4444 # for live migration with complete storage copy
> (qemu) migrate -d blk inc tcp:0:4444 # for live migration with incremental storage copy (storage must be COW-based)
>
> The patches are against git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git
> kvm-87


I'm trying blk migration (not incremental) between two machines connected
over Gigabit Ethernet.
The transfer is quite slow (about 2 MB/s over the wire).
While the load on the sending end is low (vmstat says ~2000 blocks in/sec,
and top says ~1% in I/O wait), on the receiving end I see almost 40% CPU
in I/O wait, kjournald takes 20% of the CPU, and vmstat reports ~14000
blocks out/sec.

Hosts are running Debian Lenny (2.6.26, 32-bit), kvm-87 + your patches.
The guest is also running Debian Lenny and is idle I/O-wise. I tried with
the guest both idle and at full CPU utilization; it doesn't change anything.

-- 
Pierre Riteau -- http://perso.univ-rennes1.fr/pierre.riteau/


* Re: [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: Lennart Sorensen @ 2009-09-08 13:26 UTC (permalink / raw)
  To: Pierre Riteau; +Cc: qemu-devel, lirans

On Mon, Sep 07, 2009 at 06:40:51PM +0200, Pierre Riteau wrote:
> I'm trying blk migration (not incremental) between two machines connected
> over Gigabit Ethernet.
> The transfer is quite slow (about 2 MB/s over the wire).
> While the load on the sending end is low (vmstat says ~2000 blocks in/sec,
> and top says ~1% in I/O wait), on the receiving end I see almost 40% CPU
> in I/O wait, kjournald takes 20% of the CPU, and vmstat reports ~14000
> blocks out/sec.
>
> Hosts are running Debian Lenny (2.6.26, 32-bit), kvm-87 + your patches.
> The guest is also running Debian Lenny and is idle I/O-wise. I tried with
> the guest both idle and at full CPU utilization; it doesn't change anything.

If you happen to be using ext3, then that might explain it.  ext3 has
long had issues with performance while slowly writing a large file
(this affects mythtv a lot for example).  2.6.30 should improve things a
lot, although ext3 will never be great.  ext4 is better, and hopefully
btrfs will be great (when done).  xfs is also supposed to handle such
tasks very well.

-- 
Len Sorensen


* Re: [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: Liran Schour @ 2009-09-08 13:44 UTC (permalink / raw)
  To: Pierre Riteau; +Cc: qemu-devel


Pierre Riteau wrote on 07/09/2009 19:40:51:

> On 7 sept. 2009, at 11:24, lirans@il.ibm.com wrote:
>
> > This series adds support for live migration without shared storage, i.e.
> > the storage is copied while migrating. It was tested with KVM. Two ways
> > of replicating the storage during migration are supported:
> > 1. A complete copy of the storage to the destination.
> > 2. Assuming the storage is COW-based, copy only the allocated data;
> >    migration time will then be linear in the amount of allocated data
> >    (it is the user's responsibility to verify that the same backing file
> >    resides on both the source and the destination).
> >
> > Live migration will work as follows:
> > (qemu) migrate -d tcp:0:4444 # for ordinary live migration
> > (qemu) migrate -d blk tcp:0:4444 # for live migration with complete storage copy
> > (qemu) migrate -d blk inc tcp:0:4444 # for live migration with incremental storage copy (storage must be COW-based)
> >
> > The patches are against git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git
> > kvm-87
>
> I'm trying blk migration (not incremental) between two machines connected
> over Gigabit Ethernet.
> The transfer is quite slow (about 2 MB/s over the wire).
> While the load on the sending end is low (vmstat says ~2000 blocks in/sec,
> and top says ~1% in I/O wait), on the receiving end I see almost 40% CPU
> in I/O wait, kjournald takes 20% of the CPU, and vmstat reports ~14000
> blocks out/sec.
>
> Hosts are running Debian Lenny (2.6.26, 32-bit), kvm-87 + your patches.
> The guest is also running Debian Lenny and is idle I/O-wise. I tried with
> the guest both idle and at full CPU utilization; it doesn't change anything.
>
> --
> Pierre Riteau -- http://perso.univ-rennes1.fr/pierre.riteau/

True, I see that there is a performance problem. I need to change the
destination side to write to disk asynchronously, and the source side to
submit asynchronous reads from disk at least at the rate of the network
bandwidth. I will resend a new patch series.
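
For illustration only, the idea of keeping asynchronous reads in flight at
the rate of the network could look like the following standalone C sketch.
It uses POSIX AIO rather than QEMU's own asynchronous block layer (which
the patch would presumably use instead), and send_chunk() is a hypothetical
stand-in for the migration socket; error handling is minimal to keep the
sketch short:

#include <aio.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

#define IN_FLIGHT 8              /* requests kept in flight to cover latency */
#define CHUNK     (1 << 20)      /* 1 MB per read request */

/* Read the source image with several asynchronous requests outstanding and
 * push completed chunks to send_chunk(), so the disk stays ahead of the
 * network instead of doing one synchronous read per block. */
static void copy_with_aio(int fd, off_t length,
                          void (*send_chunk)(const void *buf, size_t len))
{
    struct aiocb cbs[IN_FLIGHT];
    char *bufs[IN_FLIGHT];
    off_t submitted = 0, done = 0;
    int slot, nslots;

    memset(cbs, 0, sizeof(cbs));

    /* Prime the queue with up to IN_FLIGHT outstanding reads. */
    for (slot = 0; slot < IN_FLIGHT && submitted < length; slot++) {
        bufs[slot] = malloc(CHUNK);
        cbs[slot].aio_fildes = fd;
        cbs[slot].aio_buf    = bufs[slot];
        cbs[slot].aio_nbytes = CHUNK;
        cbs[slot].aio_offset = submitted;
        aio_read(&cbs[slot]);
        submitted += CHUNK;
    }
    nslots = slot;

    /* Complete requests in submission order and refill each slot at once,
     * so the disk queue never drains while the network is busy sending. */
    for (slot = 0; nslots > 0 && done < length; slot = (slot + 1) % nslots) {
        const struct aiocb *wait_list[1] = { &cbs[slot] };
        ssize_t n;

        while (aio_error(&cbs[slot]) == EINPROGRESS) {
            aio_suspend(wait_list, 1, NULL);
        }
        n = aio_return(&cbs[slot]);
        if (n <= 0) {
            break;                           /* read error or unexpected EOF */
        }
        send_chunk(bufs[slot], (size_t)n);   /* ship the completed chunk */
        done += n;

        if (submitted < length) {            /* more data: reuse this slot */
            cbs[slot].aio_offset = submitted;
            aio_read(&cbs[slot]);
            submitted += CHUNK;
        }
    }

    for (slot = 0; slot < nslots; slot++) {
        free(bufs[slot]);
    }
}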

Thanks,
- Liran


* Re: [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: Liran Schour @ 2009-09-08 13:56 UTC (permalink / raw)
  To: Lennart Sorensen; +Cc: qemu-devel, Pierre Riteau


lsorense@csclub.uwaterloo.ca (Lennart Sorensen) wrote on 08/09/2009 16:26:31:

> On Mon, Sep 07, 2009 at 06:40:51PM +0200, Pierre Riteau wrote:
> > I'm trying blk migration (not incremental) between two machines connected
> > over Gigabit Ethernet.
> > The transfer is quite slow (about 2 MB/s over the wire).
> > While the load on the sending end is low (vmstat says ~2000 blocks in/sec,
> > and top says ~1% in I/O wait), on the receiving end I see almost 40% CPU
> > in I/O wait, kjournald takes 20% of the CPU, and vmstat reports ~14000
> > blocks out/sec.
> >
> > Hosts are running Debian Lenny (2.6.26, 32-bit), kvm-87 + your patches.
> > The guest is also running Debian Lenny and is idle I/O-wise. I tried with
> > the guest both idle and at full CPU utilization; it doesn't change anything.
>
> If you happen to be using ext3, then that might explain it.  ext3 has
> long had issues with performance while slowly writing a large file
> (this affects mythtv a lot for example).  2.6.30 should improve things a
> lot, although ext3 will never be great.  ext4 is better, and hopefully
> btrfs will be great (when done).  xfs is also supposed to handle such
> tasks very well.
>
> --
> Len Sorensen

I found performance issues in the patch itself. I will first submit a new
patch, and maybe then we will be able to talk about filesystem performance.

Thanks,
- Liran


* Re: [Qemu-devel] [PATCH 0/3] Live migration without shared storage
From: Pierre Riteau @ 2009-09-08 14:05 UTC (permalink / raw)
  To: Lennart Sorensen; +Cc: qemu-devel, lirans

On 8 sept. 2009, at 15:26, Lennart Sorensen wrote:

> On Mon, Sep 07, 2009 at 06:40:51PM +0200, Pierre Riteau wrote:
>> I'm trying blk migration (not incremental) between two machines connected
>> over Gigabit Ethernet.
>> The transfer is quite slow (about 2 MB/s over the wire).
>> While the load on the sending end is low (vmstat says ~2000 blocks in/sec,
>> and top says ~1% in I/O wait), on the receiving end I see almost 40% CPU
>> in I/O wait, kjournald takes 20% of the CPU, and vmstat reports ~14000
>> blocks out/sec.
>>
>> Hosts are running Debian Lenny (2.6.26, 32-bit), kvm-87 + your patches.
>> The guest is also running Debian Lenny and is idle I/O-wise. I tried with
>> the guest both idle and at full CPU utilization; it doesn't change anything.
>
> If you happen to be using ext3, then that might explain it.  ext3 has
> long had issues with performance while slowly writing a large file
> (this affects mythtv a lot for example).  2.6.30 should improve things a
> lot, although ext3 will never be great.  ext4 is better, and hopefully
> btrfs will be great (when done).  xfs is also supposed to handle such
> tasks very well.
>
> --
> Len Sorensen


Yes, I'm using ext3. When I tested last week, I created the disk image on
the destination machine using qemu-img create -f raw, which creates a
sparse file.
I experimented again today, but this time I allocated all the blocks in
the disk image prior to the migration.
This resulted in increased throughput (between 6 and 8 MB/s on the wire),
reduced CPU usage on the destination node (now qemu is the top CPU user,
not kjournald), and I/O interrupt numbers that are similar on the source
and destination nodes (about 7000/s).
It could still be better, but I see Liran is working on it :)
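
For reference, the preallocation step can be done with posix_fallocate();
this is a minimal standalone sketch, not part of the patches, and the
program name and usage line are illustrative only. On filesystems without
native fallocate support, glibc falls back to writing the blocks out, which
still yields a fully allocated file:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Reserve all blocks of a raw image up front so the incoming block copy
 * overwrites preallocated space instead of growing a sparse file, which is
 * what caused the heavy kjournald activity described above.
 * Usage (illustrative): ./preallocate disk.img 10737418240 */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <image> <size-in-bytes>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    off_t size = (off_t)strtoll(argv[2], NULL, 10);
    int err = posix_fallocate(fd, 0, size);   /* returns an errno value, not -1 */
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}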

-- 
Pierre Riteau -- http://perso.univ-rennes1.fr/pierre.riteau/


Thread overview: 6 messages
2009-09-07  9:24 [Qemu-devel] [PATCH 0/3] Live migration without shared storage lirans
2009-09-07 16:40 ` Pierre Riteau
2009-09-08 13:26   ` Lennart Sorensen
2009-09-08 13:56     ` Liran Schour
2009-09-08 14:05     ` Pierre Riteau
2009-09-08 13:44   ` Liran Schour
