From: Jitendra Kolhe <jitendra.kolhe@hpe.com>
Date: Thu, 10 Mar 2016 12:31:32 +0530
Subject: Re: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
To: amit.shah@redhat.com
Cc: ehabkost@redhat.com, kvm@vger.kernel.org, quintela@redhat.com, qemu-devel@nongnu.org, liang.z.li@intel.com, dgilbert@redhat.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com, mohan_parthasarathy@hpe.com, simhan@hpe.com, pbonzini@redhat.com, akpm@linux-foundation.org, virtualization@lists.linux-foundation.org, rth@twiddle.net

On 3/8/2016 4:44 PM, Amit Shah wrote:
> On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
>>>>
>>>> * Liang Li (liang.z.li@intel.com) wrote:
>>>>> The current QEMU live migration implementation marks all the
>>>>> guest's RAM pages as dirtied in the ram bulk stage; all these pages
>>>>> will be processed, and that takes quite a lot of CPU cycles.
>>>>>
>>>>> From the guest's point of view, it doesn't care about the content in
>>>>> free pages. We can make use of this fact and skip processing the free
>>>>> pages in the ram bulk stage; this can save a lot of CPU cycles and
>>>>> reduce the network traffic significantly, while obviously speeding up
>>>>> the live migration process.
>>>>>
>>>>> This patch set is the QEMU side implementation.
>>>>>
>>>>> The virtio-balloon is extended so that QEMU can get the free pages
>>>>> information from the guest through virtio.
>>>>>
>>>>> After getting the free pages information (a bitmap), QEMU can use it
>>>>> to filter out the guest's free pages in the ram bulk stage. This makes
>>>>> the live migration process much more efficient.
>>>>
>>>> Hi,
>>>>   An interesting solution; I know a few different people have been looking at
>>>> how to speed up ballooned VM migration.
>>>>
>>>
>>> Ooh, different solutions for the same purpose, and both based on the balloon.
>>
>> We were also trying to address a similar problem, without actually needing to modify
>> the guest driver. Please find patch details under the mail with subject:
>> migration: skip sending ram pages released by virtio-balloon driver
>
> The scope of this patch series seems to be wider: don't send free
> pages to a dest at all, vs. don't send pages that are ballooned out.
>
> Amit

Hi, thanks for your response. The scope of this patch series doesn't seem to take
care of ballooned-out pages. To balloon out a guest RAM page, the guest balloon
driver does an alloc_page() and then returns the guest pfn to QEMU, so
ballooned-out pages will not be seen as free RAM pages by the guest.
Thus we will still end up scanning ballooned-out pages (looking for zero pages)
during migration.
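To make the distinction concrete, below is a minimal, self-contained C sketch of
the inflate path described above. guest_alloc_page() and send_pfn_to_host() are
hypothetical placeholders standing in for the guest kernel's alloc_page() and the
virtio-balloon inflate virtqueue; this only illustrates the mechanism, it is not
the actual driver code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12

/* Hypothetical stand-in for the guest kernel's alloc_page(). */
static void *guest_alloc_page(void)
{
    /* The page is now *allocated* inside the guest, so a free-page
     * bitmap built on the guest side will not include it. */
    return malloc(1 << PAGE_SHIFT);
}

/* Hypothetical stand-in for queueing a pfn on the inflate virtqueue. */
static void send_pfn_to_host(uint64_t pfn)
{
    /* The real driver hands the pfn to QEMU, which can then discard
     * the backing host memory for that guest page. */
    printf("ballooned out pfn 0x%" PRIx64 "\n", pfn);
}

static void balloon_inflate_one(void)
{
    void *page = guest_alloc_page();
    if (!page)
        return;
    /* Fake pfn derived from the virtual address, for illustration only. */
    send_pfn_to_host((uint64_t)(uintptr_t)page >> PAGE_SHIFT);
    /* The page is deliberately not freed back to the guest allocator,
     * which is exactly why it never shows up as a free page. */
}

int main(void)
{
    balloon_inflate_one();
    return 0;
}

So from the guest allocator's point of view a ballooned-out page is in use, and a
guest-built free-page bitmap cannot cover it; skipping such pages needs host-side
tracking of the ballooned-out pfns, which is what our series does.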
It would be ideal if we could have both solutions.

Thanks,
- Jitendra