From: Anthony Liguori
Date: Mon, 08 Aug 2011 10:29:03 -0500
Subject: Re: [Qemu-devel] [RFC] postcopy livemigration proposal
To: dlaor@redhat.com
Cc: kvm@vger.kernel.org, Orit Wasserman, t.hirofuchi@aist.go.jp, satoshi.itoh@aist.go.jp, qemu-devel@nongnu.org, Isaku Yamahata, Avi Kivity
Message-ID: <4E4000BF.20708@codemonkey.ws>
In-Reply-To: <4E3FFC9E.8050300@redhat.com>

On 08/08/2011 10:11 AM, Dor Laor wrote:
> On 08/08/2011 03:32 PM, Anthony Liguori wrote:
>> On 08/08/2011 04:20 AM, Dor Laor wrote:
>>>
>>> That's terrific (nice video also)!
>>> Orit and myself had the exact same idea too (now we can't patent it..).
>>>
>>> Advantages:
>>> - No down time due to memory copying.
>>
>> But non-deterministic down time due to network latency while trying to
>> satisfy a page fault.
>
> True but it is possible to limit it with some dedicated network or
> bandwidth reservation.

Yup.
Any technique that uses RDMA (which is basically what this is) requires dedicated network resources.

>>> - Efficient, reduces needed traffic: no need to re-send pages.
>>
>> It's not quite that simple. Post-copy needs to introduce a protocol
>> capable of requesting pages.
>
> Just another subsection.. (kidding), still it shouldn't be too
> complicated: just an offset+pagesize request and a page_content/error
> reply.

What I meant by this is that there is potentially a lot of round-trip overhead. Pre-copy migration works well over reasonably high-latency network connections because the downtime is capped by the maximum latency of a single transfer from one point to another. But with something like this, the total downtime is 2*max_latency*nb_pagefaults, which is potentially quite high. So it may be desirable to reduce nb_pagefaults by prefaulting pages in, etc. Suffice it to say, this ends up getting complicated and may end up burning extra network traffic too.

>> This is really just a limitation of our implementation. In theory,
>> pre-copy allows you to exert fine-grained resource control over the
>> guest, which you can use to encourage convergence.
>
> But a very large guest with a large working set that changes more
> frequently than the network bandwidth can absorb might always need a
> huge down time with the current system.

In theory, you can do things like reduce the guest's priority to limit the amount of work it can do and thereby encourage convergence.

>> One thing I think we need to do is put together a live migration
>> roadmap. We've got a lot of invasive efforts underway with live
>> migration and I fear that without some planning and serialization, some
>> of this useful work will get lost.
>
> Some of them are parallel. I think all the readers here agree that
> post-copy migration should be an option while we maintain the current
> one.

I actually think they need to be done mostly in sequence, while cleaning up some of the current infrastructure.
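Going back to the offset+pagesize exchange discussed above: the page-request protocol could be sketched as a pair of wire messages like the following. This is a minimal illustration only; the struct names, magic value, and layout are all hypothetical and not taken from any QEMU code.

```c
#include <stdint.h>

/* Hypothetical post-copy page-request protocol: the destination asks the
 * source for a page by guest-physical offset, and the source answers with
 * either the page contents or an error code. */

#define PAGE_REQ_MAGIC 0x50435052u  /* "PCPR", a made-up magic */

struct page_request {
    uint32_t magic;      /* PAGE_REQ_MAGIC */
    uint64_t offset;     /* guest-physical offset of the faulting page */
    uint32_t page_size;  /* e.g. 4096 */
};

struct page_reply {
    uint32_t magic;      /* PAGE_REQ_MAGIC, echoed back */
    uint64_t offset;     /* request offset, for matching */
    int32_t  error;      /* 0 on success, else an errno-style code */
    /* followed by page_size bytes of page data when error == 0 */
};

/* Basic sanity check a receiver might apply before servicing a request. */
static int page_request_valid(const struct page_request *req)
{
    return req->magic == PAGE_REQ_MAGIC &&
           req->page_size != 0 &&
           (req->offset % req->page_size) == 0;
}
```

Even with a trivial format like this, the interesting problems are elsewhere: request pipelining, matching replies to outstanding faults, and deciding when to prefault instead of waiting for a fault.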
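The 2*max_latency*nb_pagefaults estimate above is easy to make concrete. The numbers below are illustrative assumptions, not measurements:

```c
/* Back-of-the-envelope bound on post-copy downtime: in the worst case each
 * remote page fault costs a full round trip (2 * one-way latency), and the
 * faults are fully serialized. */
static double postcopy_downtime_ms(double one_way_latency_ms, long nb_pagefaults)
{
    return 2.0 * one_way_latency_ms * (double)nb_pagefaults;
}

/* Example (assumed figures): 0.5 ms one-way latency and 10,000 remote
 * faults gives postcopy_downtime_ms(0.5, 10000) == 10000.0 ms, i.e. 10
 * seconds of cumulative stall time spread across the catch-up phase. */
```

This is why reducing nb_pagefaults (by prefaulting, batching, or pushing likely-hot pages proactively) matters so much more here than shaving the per-request cost.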
I don't think we should make any major changes (beyond maybe the separate thread) until we eliminate QEMUFile. There's so much overhead involved in using QEMUFile today that it's hard to talk about performance data when we've got a major bottleneck sitting in the middle.

Regards,

Anthony Liguori