From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 24 Mar 2016 17:49:33 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: "rkagan@virtuozzo.com", "linux-kernel@vger.kernel.org", "ehabkost@redhat.com",
 "kvm@vger.kernel.org", "quintela@redhat.com", "simhan@hpe.com",
 "Li, Liang Z", "qemu-devel@nongnu.org", "jitendra.kolhe@hpe.com",
 "mohan_parthasarathy@hpe.com", "amit.shah@redhat.com", "pbonzini@redhat.com",
 Wei Yang, "rth@twiddle.net"
Message-ID: <20160324174933.GA11662@work-vm>
In-Reply-To: <20160324181031-mutt-send-email-mst@redhat.com>
References: <20160324012424.GB14956@linux-gk3p> <20160324090004.GA2230@work-vm>
 <20160324102354.GB2230@work-vm> <20160324165530-mutt-send-email-mst@redhat.com>
 <20160324175503-mutt-send-email-mst@redhat.com>
 <20160324181031-mutt-send-email-mst@redhat.com>
Subject: Re: [Qemu-devel] [RFC Design Doc] Speed up live migration by skipping free pages

* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Thu, Mar 24, 2016 at 04:05:16PM +0000, Li, Liang Z wrote:
> >
> > > -----Original Message-----
> > > From: Michael S. Tsirkin [mailto:mst@redhat.com]
> > > Sent: Thursday, March 24, 2016 11:57 PM
> > > To: Li, Liang Z
> > > Cc: Dr. David Alan Gilbert; Wei Yang; qemu-devel@nongnu.org;
> > > kvm@vger.kernel.org; linux-kernel@vger.kernel.org; pbonzini@redhat.com;
> > > rth@twiddle.net; ehabkost@redhat.com; amit.shah@redhat.com;
> > > quintela@redhat.com; mohan_parthasarathy@hpe.com;
> > > jitendra.kolhe@hpe.com; simhan@hpe.com; rkagan@virtuozzo.com;
> > > riel@redhat.com
> > > Subject: Re: [RFC Design Doc] Speed up live migration by skipping free pages
> > >
> > > On Thu, Mar 24, 2016 at 03:53:25PM +0000, Li, Liang Z wrote:
> > > > > > > > Not very complex, we can implement it like this:
> > > > > > > >
> > > > > > > > 1. Set all the bits in migration_bitmap_rcu->bmap to 1.
> > > > > > > > 2. Clear all the bits in
> > > > > > > >    ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION].
> > > > > > > > 3. Send the get_free_page_bitmap request.
> > > > > > > > 4. Start to send pages to the destination and check whether
> > > > > > > >    the free_page_bitmap is ready:
> > > > > > > >        if (is_ready) {
> > > > > > > >            filter out the free pages from migration_bitmap_rcu->bmap;
> > > > > > > >            migration_bitmap_sync();
> > > > > > > >        }
> > > > > > > >    Continue until live migration completes.
> > > > > > > >
> > > > > > > > Is that right?
> > > > > > >
> > > > > > > The order I'm trying to understand is something like:
> > > > > > >
> > > > > > >   a) Send the get_free_page_bitmap request
> > > > > > >   b) Start sending pages
> > > > > > >   c) Reach the end of memory
> > > > > > >      [ is_ready is false - the guest hasn't made the free map yet ]
> > > > > > >   d) Normal migration_bitmap_sync() at the end of the first pass
> > > > > > >   e) Carry on sending dirty pages
> > > > > > >   f) is_ready is true
> > > > > > >     f.1) filter out free pages?
> > > > > > >     f.2) migration_bitmap_sync()
> > > > > > >
> > > > > > > It's f.1 I'm worried about. If the guest started generating the
> > > > > > > free bitmap before (d), then a page marked as 'free' in f.1
> > > > > > > might have become dirty before (d), and so (f.2) doesn't set the
> > > > > > > dirty bit again; therefore we can't filter out pages in f.1.
> > > > > > >
> > > > > >
> > > > > > As you described, the order is incorrect.
> > > > > >
> > > > > > Liang
> > > > > >
> > > > >
> > > > > So to make it safe, what is required is to make sure no free list is
> > > > > outstanding before calling migration_bitmap_sync.
> > > > >
> > > > > If one is outstanding, filter out pages before calling
> > > > > migration_bitmap_sync.
> > > > >
> > > > > Of course, if we just do it like we normally do with migration, then
> > > > > by the time we call migration_bitmap_sync the dirty bitmap is
> > > > > completely empty, so there won't be anything to filter out.
> > > > >
> > > > > One way to address this is to call migration_bitmap_sync in the IO
> > > > > handler, while the VCPU is stopped, then make sure to filter out
> > > > > pages before the next migration_bitmap_sync.
> > > > >
> > > > > Another is to start filtering out pages in the IO handler, but make
> > > > > sure to flush the queue before calling migration_bitmap_sync.
> > > > >
> > > >
> > > > It's really complex; maybe we should switch to a simple start: just
> > > > skip the free pages in the RAM bulk stage and make it asynchronous?
> > > >
> > > > Liang
> > >
> > > You mean like your patches do? No, blocking bulk migration until the
> > > guest responds is basically a non-starter.
> > >
> >
> > No, don't wait anymore. Like below (copied from the previous thread)
> > --------------------------------------------------------------
> > 1. Set all the bits in migration_bitmap_rcu->bmap to 1.
> > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION].
> > 3. Send the get_free_page_bitmap request.
> > 4. Start to send pages to the destination and check whether the
> >    free_page_bitmap is ready:
> >        if (is_ready) {
> >            filter out the free pages from migration_bitmap_rcu->bmap;
> >            migration_bitmap_sync();
> >        }
> >    Continue until live migration completes.
> > ---------------------------------------------------------------
> > Can this work?
> >
> > Liang
>
> Not if you get the ready bit asynchronously like you wrote here,
> since is_ready can get set while you are calling migration_bitmap_sync.
>
> As I said previously,
> to make this work you need to filter out synchronously while the VCPU is
> stopped and while free pages from the list are not being used.
>
> Alternatively, prevent getting the free page list from the guest
> and filtering it out from racing with migration_bitmap_sync.
>
> For example, flush the VQ after migration_bitmap_sync.
> So:
>
> lock
> migration_bitmap_sync();
> while (elem = virtqueue_pop) {
>     virtqueue_push(elem);
>     g_free(elem);
> }
> unlock
>
> while in handle_output:
>
> lock
> while (elem = virtqueue_pop) {
>     list = get_free_list(elem);
>     filter_out_free(list);
>     virtqueue_push(elem);
>     g_free(elem);
> }
> unlock
>
> The lock prevents migration_bitmap_sync from racing
> against handle_output.

I think the easier way is just to ignore the guest's free list response
if it comes back after the first pass.

Dave

>
> This way you can actually use ioeventfd
> for this VQ, so the VCPU won't be blocked.
>
> I do not think this is so complex, and
> this way you can add requests for the guest
> free bitmap at an arbitrary interval,
> either in the host or in the guest.
>
> For example, add a value that says how often
> the guest should update the bitmap; set it to 0
> to disable updates after migration is done.
>
> Or, make the guest resubmit a new one when we consume
> the old one, and run handle_output through
> a periodic timer on the host.
>
> > > --
> > > MST

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK